From d20cf39e4ad300a91d6241e6c965f805b46ad5ee Mon Sep 17 00:00:00 2001
From: Ireneusz Bachanowicz
Date: Mon, 14 Jul 2025 17:12:50 +0200
Subject: [PATCH] first commit

---
 HUB_dummy-data_test_clean.json | 3458 ++++++++++++++++++++++++++++++++
 HUB_nohtml.txt                 | 3458 ++++++++++++++++++++++++++++++++
 anonymize_pii.py               |  154 ++
 bs.py                          |   24 +
 custom payload JIRA.json       |   16 +
 custom_output.json             |   16 +
 output.txt                     | 3458 ++++++++++++++++++++++++++++++++
 strip.py                       |   62 +
 8 files changed, 10646 insertions(+)
 create mode 100644 HUB_dummy-data_test_clean.json
 create mode 100644 HUB_nohtml.txt
 create mode 100644 anonymize_pii.py
 create mode 100644 bs.py
 create mode 100644 custom payload JIRA.json
 create mode 100644 custom_output.json
 create mode 100644 output.txt
 create mode 100644 strip.py

diff --git a/HUB_dummy-data_test_clean.json b/HUB_dummy-data_test_clean.json
new file mode 100644
index 0000000..202bb19
--- /dev/null
+++ b/HUB_dummy-data_test_clean.json
@@ -0,0 +1,3458 @@
+[
+ {
+ "title": "HUB Overview",
+ "pageID": "164470108",
+ "pageLink": "/display/GMDM/HUB+Overview",
+ "content": "

MDM Integration Services provide services for clients using MDM systems (Reltio or Nucleus 360) in the following fields:


MDM Integration Services consist of:

The MDM HUB ecosystem is presented in the picture below.

\"\" 

" + }, + { + "title": "Modules", + "pageID": "164470022", + "pageLink": "/display/GMDM/Modules", + "content": "" + }, + { + "title": "Direct Channel", + "pageID": "164469882", + "pageLink": "/display/GMDM/Direct+Channel", + "content": "

Description

The Direct Channel exposes a unified REST API to update and search profiles in MDM systems. The diagram below shows the logical architecture of the Direct Channel module. 

Logical architecture

\"\"

Components


ComponentSubcomponentDescription

API Gateway


Kong API Gateway components playing the role of proxy

Authentication engineKong module providing client authentication services
Manager/Orchestrator
Java microservice orchestrating API calls

Data Quality Enginedata quality service validating data sent to Reltio 

Authorization Engine

authorizes client access to MDM resources

MDM routing engineroutes calls to MDM systems

Transaction Logger

registers API calls in the EFK service for tracing purposes. 

Reltio Adapterhandles communication with Reltio MDM system

Nucleus Adapter

handles communication with the Nucleus MDM system

HUB Store


MongoDB database playing the role of the persistence store for MDM HUB logic
API Router
routes requests to regional MDM Hub services

Flows

FlowDescription
Create/Update HCP/HCO/MCOCreate or Update HCP/HCO/MCO entity
Search EntitySearch entity
Get EntityRead entity
Read LOVRead LOV
Validate HCPValidate HCP
" + }, + { + "title": "Streaming channel", + "pageID": "164469812", + "pageLink": "/display/GMDM/Streaming+channel", + "content": "

Description

The Streaming channel distributes MDM profile updates to consumers in near real-time through Kafka topics. Reltio events generated on profile changes are sent to MDM HUB via an AWS SQS queue.

MDM HUB enriches events with profile data and dedupes them. During the process, the callback service processes the data (for example: calculating ranks and HCO names, cleaning unused topics) and updates the profile in Reltio with the calculated values.

Publisher distributes events to target client topics based on the configured routing rules.

The MDM Datamart built in Snowflake provides SQL access to up-to-date MDM data in both the object and the relational model.

Logical architecture


\"\"

Components


ComponentDescription

Reltio subscriber

Consumes events from Reltio

Callback service

Triggers callback actions on incoming events, for example calculating rankings

Direct Channel

Orchestrates Reltio updates triggered by callbacks

HUB Store

Keeps MDM data history

Reconciliation service

Reconciles missing events

Publisher

Evaluates routing rules and publishes data to downstream consumers

Snowflake Data Mart

Exposes MDM data in the relational model

Kafka Connect

Sends data to Snowflake from Kafka

Entity enricher

Enriches events with full data retrieved from Reltio

Flows

FlowDescription
Reltio events streamingDistribute Reltio MDM data changes to downstream consumers in the streaming mode
Nucleus events streamingDistribute Nucleus MDM data changes to downstream consumers in the streaming mode
Snowflake: Events publish flowDistribute Reltio MDM data changes to Snowflake DM
" + }, + { + "title": "Java Batch Channel", + "pageID": "164469814", + "pageLink": "/display/GMDM/Java+Batch+Channel", + "content": "

Description

Java Batch Channel is the set of services responsible for loading file extracts delivered by external sources into Reltio. The heart of the module is the file loader service, aka inc-batch-channel, which maps the flat model to the Reltio model and orchestrates the load through the asynchronous interface managed by the Manager. Batch flows are managed by the Apache Airflow scheduler.

Logical architecture

\"\"

Components

Flows


" + }, + { + "title": "ETL Batch Channel", + "pageID": "164469835", + "pageLink": "/display/GMDM/ETL+Batch+Channel", + "content": "

Description

The ETL Batch Channel exposes a REST API for ETL components like Informatica and manages the loading process asynchronously.

With its own cache based on the Hub Store, it supports full loads by providing delta detection logic.

Logical architecture

\"\"

Components

Flows


" + }, + { + "title": "Environments", + "pageID": "164470172", + "pageLink": "/display/GMDM/Environments", + "content": "

Reltio Export IPs

EnvironmentIPsReltio Team comment

EMEA NON-PROD

EMEA PROD

- ●●●●●●●●●●●●
- ●●●●●●●●●●●●
- ●●●●●●●●●●●●

are available across all EMEA environments

APAC NON-PROD

APAC PROD

- ●●●●●●●●●●●
- ●●●●●●●●●●●●●●
- ●●●●●●●●●●●●●

are available across all APAC environments

GBLUS NON-PROD

GBLUS PROD

- ●●●●●●●●●●●●●
- ●●●●●●●●●●●
- ●●●●●●●●●●●●● 
For the dev/test and 361 tenants, the IPs can be used by any of the environments.

AMER NON-PROD

AMER PROD

The AMER tenants use the same access points as the US

" + }, + { + "title": "AMER", + "pageID": "196878948", + "pageLink": "/display/GMDM/AMER", + "content": "

Contacts

TypeContactCommentSupported MDMHUB environments
DLDL-ADL-ATP-GLOBAL_MDM_RELTIO@COMPANY.comSupports Reltio instancesGBLUS - Reltio only
" + }, + { + "title": "AMER Non PROD Cluster", + "pageID": "196878950", + "pageLink": "/display/GMDM/AMER+Non+PROD+Cluster", + "content": "

Physical Architecture


\"\"

Kubernetes cluster


nameIPConsole addressresource typeAWS regionFilesystemComponentsType
atp-mdmhub-nprod-amer

10.9.64.0/18

10.9.0.0/18

https://pdcs-som1d.COMPANY.com
EKS over EC2us-east-1

~60GB per node,

6TBx2 replicated Portworx volumes

Kong, Kafka, Mongo, Prometheus, MDMHUB microservices

outbound and inbound

Non PROD - backend 

NamespaceComponentPod nameDescriptionLogs
kongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kong
amer-backendKafka

mdm-kafka-kafka-0

mdm-kafka-kafka-1

mdm-kafka-kafka-2

Kafkalogs
amer-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace amer-backend
amer-backendZookeeper

mdm-kafka-zookeeper-0

mdm-kafka-zookeeper-1

mdm-kafka-zookeeper-2

Zookeeperlogs
amer-backendMongomongo-0Mongologs
amer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backend
amer-backendFluentDfluentd-*EFK - fluentd

kubectl logs {{pod name}} --namespace amer-backend

amer-backendElasticsearch

elasticsearch-es-default-0

elasticsearch-es-default-1

EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backend
amer-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace amer-backend
monitoringCadvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoring
amer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backend
amer-backendMongo exportermongo-exporter-*mongo metrics exporter---
amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backend
amer-backendConsul

consul-consul-server-0

consul-consul-server-1

consul-consul-server-2

Consulkubectl logs {{pod name}} --namespace amer-backend
amer-backendSnowflake connector

amer-dev-mdm-connect-cluster-connect-*

amer-qa-mdm-connect-cluster-connect-*

amer-stage-mdm-connect-cluster-connect-*

Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backend
monitoringKafka Connect Exporter

monitoring-jdbc-snowflake-exporter-amer-dev-*

monitoring-jdbc-snowflake-exporter-amer-stage-*

monitoring-jdbc-snowflake-exporter-amer-stage-*

Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoring
amer-backendAkhqakhq-*Kafka UIlogs


Certificates 

Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036

Resource

Certificate LocationValid fromValid to Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/namespaces/kong/config_files/certsThu, 13 Jan 2022 14:13:53 GMTTue, 10 Jan 2023 14:13:53 GMThttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/
Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/namespaces/amer-backend/secrets.yaml.encryptedJan 18 11:07:55 2022 GMTJan 18 11:07:55 2024 GMTkafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094


Setup and check connections:

  1. Snowflake - managing service accounts - EMEA Snowflake Access


" + }, + { + "title": "AMER DEV Services", + "pageID": "196878953", + "pageLink": "/display/GMDM/AMER+DEV+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-dev
Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-dev
Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubnprodamrasp100762

HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-dev/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/

DB Name

COMM_AMER_MDM_DMART_DEV_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE

Grafana dashboards

Resource NameEndpoint
HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_dev&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_dev&var-topic=All&var-node=1
Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_dev&var-component=manager
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_dev&var-interval=$__auto_interval_interval

Kibana dashboards

Resource NameEndpoint
Kibana

https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-dev/swagger-ui/index.html?configUrl=/api-gw-spec-amer-dev/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-dev/swagger-ui/index.html?configUrl=/api-batch-spec-amer-dev/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

amer-dev

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - if remote debugging is enabled, you can use this port to debug the app in the environment,

9000 - Prometheus exporter,

8888 - spring boot actuator,

8080 - serves swagger API definition - if available


amer-dev

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs
amer-devApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogs

amer-dev

Subscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogs
amer-devEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
amer-devCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

amer-dev

Publishermdmhub-event-publisher-*Events publisherlogs
amer-devReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs

Clients


MDM Systems

Reltio

DEV - wn60kG248ziQSMW

Resource NameEndpoint
SQS queue name

https://sqs.us-east-1.amazonaws.com/930358522410/dev_wJmSQ8GWI8Q6Fl1

Reltio

https://dev.reltio.com/ui/wJmSQ8GWI8Q6Fl1

https://dev.reltio.com/reltio/api/wJmSQ8GWI8Q6Fl1

Reltio Gateway User

svc-pfe-mdmhub
RDMhttps://rdm.reltio.com/lookups/dyzB7cAPhATUslE


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com


Migration

The amer dev environment is the first that was migrated from the old infrastructure (EC2-based) to the new Kubernetes-based one. The following table presents the old endpoints and their substitutes in the new environment. Everyone who wants to connect to amer dev has to use the new endpoints.

DescriptionOld endpointNew endpoint
Manager API

https://amraelp00010074.COMPANY.com:8443/dev-ext

https://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/dev-ext

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-dev
Batch Service API

https://amraelp00010074.COMPANY.com:8443/dev-batch-ext

https://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/dev-batch-ext

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-amer-dev
Consul API

https://amraelp00010074.COMPANY.com:8443/v1

https://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/v1

https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1

Kafkaamraelp00010074.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094


" + }, + { + "title": "AMER QA Services", + "pageID": "228921283", + "pageLink": "/display/GMDM/AMER+QA+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-qa
Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-qa
Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubnprodamrasp100762

HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-qa/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/

DB Name

COMM_AMER_MDM_DMART_QA_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_AMER_MDM_DMART_QA_DEVOPS_ROLE


Resource NameEndpoint
HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_qa&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_qa&var-topic=All&var-node=1
Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_qa&var-component=mdm-manager
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_interval


Resource NameEndpoint
Kibana

https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-qa/swagger-ui/index.html?configUrl=/api-gw-spec-amer-qa/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-qa/swagger-ui/index.html?configUrl=/api-batch-spec-amer-qa/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

amer-qa

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - if remote debugging is enabled, you can use this port to debug the app in the environment,

9000 - Prometheus exporter,

8888 - spring boot actuator,

8080 - serves swagger API definition - if available


amer-qa

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs
amer-qaApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogs

amer-qa

Subscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogs
amer-qaEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
amer-qaCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

amer-qa

Publishermdmhub-event-publisher-*Events publisherlogs
amer-qaReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs

Clients


MDM Systems

Reltio

DEV - wn60kG248ziQSMW

Resource NameEndpoint
SQS queue name

https://sqs.us-east-1.amazonaws.com/930358522410/test_805QOf1Xnm96SPj

Reltio

https://test.reltio.com/ui/805QOf1Xnm96SPj

https://test.reltio.com/reltio/api/805QOf1Xnm96SPj

Reltio Gateway User

svc-pfe-mdmhub
RDMhttps://rdm.reltio.com/lookups/805QOf1Xnm96SPj


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-qa:27017

Kafka

kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com
" + }, + { + "title": "AMER STAGE Services", + "pageID": "228921315", + "pageLink": "/display/GMDM/AMER+STAGE+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-stage
Ping Federate

https://stgfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-stage
Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubnprodamrasp100762

HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-stage/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/

DB Name

COMM_AMER_MDM_DMART_STG_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_AMER_MDM_DMART_STG_DEVOPS_ROLE


Resource NameEndpoint
HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_stage&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_stage&var-topic=All&var-node=1
Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_stage&var-component=mdm-manager
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_interval


Resource NameEndpoint
Kibana

https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-stage/swagger-ui/index.html?configUrl=/api-gw-spec-amer-stage/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-stage/swagger-ui/index.html?configUrl=/api-batch-spec-amer-stage/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

amer-stage

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - if remote debugging is enabled, you can use this port to debug the app in the environment,

9000 - Prometheus exporter,

8888 - spring boot actuator,

8080 - serves swagger API definition - if available


amer-stage

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs
amer-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogs

amer-stage

Subscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogs
amer-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
amer-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

amer-stage

Publishermdmhub-event-publisher-*Events publisherlogs
amer-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs

Clients


MDM Systems

Reltio

DEV - wn60kG248ziQSMW

Resource NameEndpoint
SQS queue name

https://sqs.us-east-1.amazonaws.com/930358522410/test_K7I3W3xjg98Dy30

Reltio

https://test.reltio.com/ui/K7I3W3xjg98Dy30

https://test.reltio.com/reltio/api/K7I3W3xjg98Dy30

Reltio Gateway User

svc-pfe-mdmhub
RDMhttps://rdm.reltio.com/lookups/K7I3W3xjg98Dy30


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-stage:27017

Kafka

kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com
" + }, + { + "title": "GBLUS-DEV Services", + "pageID": "234701562", + "pageLink": "/display/GMDM/GBLUS-DEV+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-dev
Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-dev
Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubnprodamrasp100762

HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-dev/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com
DB Name

COMM_GBL_MDM_DMART_DEV

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_DEV_MDM_DMART_DEVOPS_ROLE

Grafana dashboards

Resource NameEndpoint
HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_dev&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_dev&var-topic=All&var-node=1&var-instance=amraelp00007335.COMPANY.com:9102
Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_dev&var-component=&var-instance=All&var-node=
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_dev&var-interval=$__auto_interval_interval


Resource NameEndpoint
Kibana

https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-dev/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-dev/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-dev/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-dev/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

gblus-stage

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - if remote debugging is enabled, you can use this port to debug the app in the environment,

9000 - Prometheus exporter,

8888 - spring boot actuator,

8080 - serves swagger API definition - if available


gblus-stage

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs
gblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogs

gblus-stage

Subscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogs
gblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
gblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

gblus-stage

Publishermdmhub-event-publisher-*Events publisherlogs
gblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs

Clients

MDM Systems

Reltio

DEV(gblus_dev) sw8BkTZqjzGr7hn

Resource NameEndpoint
SQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/dev_sw8BkTZqjzGr7hn
Reltio

https://dev.reltio.com/ui/sw8BkTZqjzGr7hn

https://dev.reltio.com/reltio/api/sw8BkTZqjzGr7hn

Reltio Gateway User

svc-pfe-mdmhub
RDMhttps://rdm.reltio.com/%s/wq2MxMmfTUCYk9k


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com


Migration

The following table presents the old endpoints and their substitutes in the new environment. Everyone who wants to connect to gblus dev has to use the new endpoints.

DescriptionOld endpointNew endpoint
Manager API

https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-ext

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-dev
Batch Service API

https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-batch-ext

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-dev
Consul API

https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1

https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1

Kafkaamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094


" + }, + { + "title": "GBLUS-QA Services", + "pageID": "234701566", + "pageLink": "/display/GMDM/GBLUS-QA+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qa
Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-qa
Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubnprodamrasp100762

HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-qa/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/

DB Name

COMM_GBL_MDM_DMART_QA

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_QA_MDM_DMART_DEVOPS_ROLE


Resource NameEndpoint
HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_qa&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_qa&var-topic=All&var-instance=All&var-node=
Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_qa&var-component=mdm-manager
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_interval


Resource NameEndpoint
Kibana

https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-qa/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-qa/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-qa/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-qa/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

gblus-stage

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - if remote debugging is enabled, you can use this port to debug the app in the environment,

9000 - Prometheus exporter,

8888 - spring boot actuator,

8080 - serves swagger API definition - if available


gblus-stage

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs
gblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogs

gblus-stage

Subscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogs
gblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
gblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

gblus-stage

Publishermdmhub-event-publisher-*Events publisherlogs
gblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs

Clients


MDM Systems

ReltioQA(gblus_qa) rEAXRHas2ovllvT

SQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_rEAXRHas2ovllvT
Reltio

https://test.reltio.com/ui/rEAXRHas2ovllvT

https://test.reltio.com/reltio/api/rEAXRHas2ovllvT

Reltio Gateway User

svc-pfe-mdmhub
RDMhttps://rdm.reltio.com/%s/u78Dh9B87sk6I2v


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017/reltio_amer-qa

Kafka

kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com
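The Kafka listener above requires SASL_SSL; a minimal client.properties sketch for reference (the SASL mechanism, credentials, and truststore path are assumptions — confirm the actual mechanism and credentials with the HUB team):

```properties
bootstrap.servers=kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
security.protocol=SASL_SSL
# Mechanism is an assumption (e.g. SCRAM-SHA-512 or PLAIN); environment-specific.
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<client-user>" \
  password="<client-password>";
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=<truststore-password>
```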

Migration

The following table lists the old endpoints and their replacements in the new environment. Anyone connecting to gblus qa must use the new endpoints.

DescriptionOld endpointNew endpoint
Manager API

https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-ext

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qa
Batch Service API

https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-ext

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qa
Consul API

https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1

https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1

Kafkaamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
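The mapping in the migration table above can be expressed as a small rewrite helper; a hypothetical shell sketch (the function name and the sample path are illustrative, the endpoint pairs are taken verbatim from the table):

```shell
# Hypothetical helper: map a legacy gblus-qa endpoint onto its new-environment
# equivalent, per the migration table. Unknown endpoints pass through unchanged.
rewrite_endpoint() {
  printf '%s\n' "$1" | sed \
    -e 's#https://gbl-mdm-hub-us-nprod\.COMPANY\.com:8443/qa-ext#https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qa#' \
    -e 's#https://gbl-mdm-hub-us-nprod\.COMPANY\.com:8443/qa-batch-ext#https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qa#' \
    -e 's#https://gbl-mdm-hub-us-nprod\.COMPANY\.com:8443/v1#https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1#' \
    -e 's#amraelp00007335\.COMPANY\.com:9094#kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094#'
}

rewrite_endpoint "https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-ext/load"
```

This is only a convenience for bulk-updating client configuration files; the authoritative mapping is the table itself.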
" + }, + { + "title": "GBLUS-STAGE Services", + "pageID": "243863074", + "pageLink": "/display/GMDM/GBLUS-STAGE+Services", + "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-stage
Ping Federate

https://stgfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-stage
Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubnprodamrasp100762

HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-stage/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/

DB Name

COMM_GBL_MDM_DMART_STG

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_STG_MDM_DMART_DEVOPS_ROLE
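The DataMart parameters above map directly onto a snowsql invocation; a sketch, assuming snowsql is installed and the service-account name is a placeholder:

```
# Connect to the STAGE DataMart with the DevOps role (user is a placeholder).
snowsql -a amerdev01.us-east-1.privatelink \
        -u <service-account> \
        -d COMM_GBL_MDM_DMART_STG \
        -w COMM_MDM_DMART_WH \
        -r COMM_STG_MDM_DMART_DEVOPS_ROLE
```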


Resource NameEndpoint
HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_stage&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_stage&var-topic=All&var-node=1
Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_stage&var-component=mdm-manager
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_interval


Resource NameEndpoint
Kibana

https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-stage/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-stage/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-stage/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-stage/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

gblus-stage

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - remote debugging, when enabled,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - Swagger API definition, if available


gblus-stage

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs
gblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogs

gblus-stage

Subscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogs
gblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
gblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

gblus-stage

Publishermdmhub-event-publisher-*Events publisherlogs
gblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs

Clients

MDM Systems

Reltio

STAGE(gblus_stage) 48ElTIteZz05XwT

SQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_48ElTIteZz05XwT
Reltio

https://test.reltio.com/ui/48ElTIteZz05XwT

https://test.reltio.com/reltio/api/48ElTIteZz05XwT

Reltio Gateway User

svc-pfe-mdmhub
RDMhttps://rdm.reltio.com/%s/5YqAPYqQnUtQJqp


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017/reltio_amer-stage

Kafka

kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com
" + }, + { + "title": "AMER PROD Cluster", + "pageID": "234698165", + "pageLink": "/display/GMDM/AMER+PROD+Cluster", + "content": "

Physical Architecture


\"\"

Kubernetes cluster


nameIPConsole addressresource typeAWS regionFilesystemComponentsType
atp-mdmhub-prod-amer

10.9.64.0/18

10.9.0.0/18

https://pdcs-drm1p.COMPANY.com
EKS over EC2us-east-1

~60GB per node,

6TBx3 replicated Portworx volumes

Kong, Kafka, Mongo, Prometheus, MDMHUB microservices

outbound and inbound

PROD - backend 

NamespaceComponentPod nameDescriptionLogs
kongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kong
amer-backendKafka

mdm-kafka-kafka-0

mdm-kafka-kafka-1

mdm-kafka-kafka-2

Kafkalogs
amer-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace amer-backend
amer-backendZookeeper

mdm-kafka-zookeeper-0

mdm-kafka-zookeeper-1

mdm-kafka-zookeeper-2

Zookeeperlogs
amer-backendMongomongo-0Mongologs
amer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backend
amer-backendFluentDfluentd-*EFK - fluentd

kubectl logs {{pod name}} --namespace amer-backend

amer-backendElasticsearch

elasticsearch-es-default-0

elasticsearch-es-default-1

EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backend
amer-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace amer-backend
monitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoring
amer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backend
amer-backendMongo exportermongo-exporter-*mongo metrics exporter---
amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backend
amer-backendConsul

consul-consul-server-0

consul-consul-server-1

consul-consul-server-2

Consulkubectl logs {{pod name}} --namespace amer-backend
amer-backendSnowflake connector

amer-prod-mdm-connect-cluster-connect-*

amer-qa-mdm-connect-cluster-connect-*

amer-stage-mdm-connect-cluster-connect-*

Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backend
monitoringKafka Connect Exporter

monitoring-jdbc-snowflake-exporter-amer-prod-*

monitoring-jdbc-snowflake-exporter-amer-stage-*

monitoring-jdbc-snowflake-exporter-amer-qa-*

Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoring
amer-backendAKHQakhq-*Kafka UIlogs


Certificates 

Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036

Resource

Certificate LocationValid fromValid to Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/namespaces/kong/config_files/certsThu, 13 Jan 2022 14:13:53 GMTTue, 10 Jan 2023 14:13:53 GMThttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/
Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/namespaces/amer-backend/secrets.yaml.encryptedJan 18 11:07:55 2022 GMTJan 18 11:07:55 2024 GMTkafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
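The validity dates above go stale; the live certificate window of a served endpoint can be checked with a standard openssl probe from any host with network access (hostnames are the endpoints from the table; the Kafka listener still presents a TLS certificate under SASL_SSL):

```
# TLS certificate dates for the API gateway
echo | openssl s_client -connect api-amer-prod-gbl-mdm-hub.COMPANY.com:443 2>/dev/null \
  | openssl x509 -noout -dates

# TLS certificate dates for the Kafka listener
echo | openssl s_client -connect kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 2>/dev/null \
  | openssl x509 -noout -dates
```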


Setup and check connections:

  1. Snowflake - managing service accounts - via http://btondemand.COMPANY.com/ - Get Support → Submit ticket → 
    GBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMI


" + }, + { + "title": "AMER PROD Services", + "pageID": "234698356", + "pageLink": "/display/GMDM/AMER+PROD+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEVhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-prod
Ping Federate

https://prodfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEV

https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-amer-prod

Kafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubprodamrasp101478

HUB UIhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ui-amer-prod/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://amerprod01.us-east-1.privatelink.snowflakecomputing.com/

DB Name

COMM_AMER_MDM_DMART_PROD_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLE


Resource NameEndpoint
HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_prod&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_prod&var-topic=All&var-node=1
Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_prod
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_prod&var-component=manager
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_prod&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_prod&var-interval=$__auto_interval_interval
Resource NameEndpoint
Kibana

https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-prod/swagger-ui/index.html?configUrl=/api-gw-spec-amer-prod/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-prod/swagger-ui/index.html?configUrl=/api-batch-spec-amer-prod/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UIhttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UIhttps://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UIhttps://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/

Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

amer-prod

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - remote debugging, when enabled,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - Swagger API definition, if available


amer-prod

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs
amer-prodApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogs

amer-prod

Subscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogs
amer-prodEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
amer-prodCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

amer-prod

Publishermdmhub-event-publisher-*Events publisherlogs
amer-prodReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs

Clients


MDM Systems

Reltio

PROD - Ys7joaPjhr9DwBJ

Resource NameEndpoint
SQS queue name

https://sqs.us-east-1.amazonaws.com/930358522410/361_Ys7joaPjhr9DwBJ

Reltio

https://361.reltio.com/ui/Ys7joaPjhr9DwBJ

https://361.reltio.com/reltio/api/Ys7joaPjhr9DwBJ

Reltio Gateway User

svc-pfe-mdmhub-prod
RDMhttps://rdm.reltio.com/lookups/LEo5zuzyWyG1xg4


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/
Elasticsearchhttps://elastic-amer-prod-gbl-mdm-hub.COMPANY.com/
" + }, + { + "title": "GBL US PROD Services", + "pageID": "250133277", + "pageLink": "/display/GMDM/GBL+US+PROD+Services", + "content": "

HUB Endpoints

API & Kafka & S3

Gateway API OAuth2 External - DEVhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-prod
Ping Federate

https://prodfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEV

https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-prod

Kafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubprodamrasp101478

Snowflake MDM DataMart

DB Url

https://amerprod01.us-east-1.privatelink.snowflakecomputing.com

DB Name

COMM_GBL_MDM_DMART_PROD

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_PROD_MDM_DMART_DEVOPS_ROLE



HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_prod&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_prod&var-topic=All&var-node=1&var-instance=amraelp00007848.COMPANY.com:9102
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_prod&var-component=manager&var-node=1&var-instance=amraelp00007848.COMPANY.com:9104
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_prod&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_prod&var-interval=$__auto_interval_interval


Kibana

https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)

Documentation


Manager API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-prod/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-prod/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-prod/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-prod/v3/api-docs/swagger-config


Airflow


Airflow UIhttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.com


Consul


Consul UIhttps://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/


AKHQ - Kafka


AKHQ Kafka UIhttps://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/


Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

gblus-prod

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - remote debugging, when enabled,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - Swagger API definition, if available


gblus-prod

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs

gblus-prod

Subscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogs
gblus-prodEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
gblus-prodCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

gblus-prod

Publishermdmhub-event-publisher-*Events publisherlogs
gblus-prodReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs
gblus-prodOnekey DCR
mdmhub-mdm-onekey-dcr-service-*Onekey DCR servicelogs

Clients


MDM Systems

Reltio

PROD - 9kL30u7lFoDHp6X

SQS queue name

https://sqs.us-east-1.amazonaws.com/930358522410/361_9kL30u7lFoDHp6X

Reltio

https://361.reltio.com/ui/9kL30u7lFoDHp6X

https://361.reltio.com/reltio/api/9kL30u7lFoDHp6X

Reltio Gateway User

svc-pfe-mdmhub-prod
RDMhttps://rdm.reltio.com/%s/DABr7gxyKKkrxD3


Internal Resources


Mongo

mongodb://mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/
Elasticsearchhttps://elastic-amer-prod-gbl-mdm-hub.COMPANY.com/
" + }, + { + "title": "AMER SANDBOX Cluster", + "pageID": "310950353", + "pageLink": "/display/GMDM/AMER+SANDBOX+Cluster", + "content": "

Physical Architecture


<schema>

Kubernetes cluster


name

IP

Console address

resource type

AWS region

Filesystem

Components

Type

atp-mdmhub-sbx-amer

●●●●●●●●●●●●

●●●●●●●●●●●

https://pdcs-som1d.COMPANY.comEKS over EC2us-east-1

~60GB per node

Kong, Kafka, Mongo, Prometheus, MDMHUB microservices

outbound and inbound

SANDBOX - backend 

Namespace

Component

Pod name

Description

Logs

kongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kong
amer-backendKafka

mdm-kafka-kafka-0

mdm-kafka-kafka-1

mdm-kafka-kafka-2

Kafkalogs
amer-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace amer-backend
amer-backendZookeeper

mdm-kafka-zookeeper-0

mdm-kafka-zookeeper-1

mdm-kafka-zookeeper-2

Zookeeperlogs
amer-backendMongomongo-0Mongologs
amer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backend
amer-backendFluentDfluentd-*EFK - fluentd

kubectl logs {{pod name}} --namespace amer-backend

amer-backendElasticsearch

elasticsearch-es-default-0

elasticsearch-es-default-1

elasticsearch-es-default-2

EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backend
monitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoring
amer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backend
amer-backendMongo exportermongo-exporter-*mongo metrics exporter---
amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backend
amer-backendConsul

consul-consul-server-0

consul-consul-server-1

consul-consul-server-2

Consulkubectl logs {{pod name}} --namespace amer-backend
amer-backendSnowflake connector

amer-devsbx-mdm-connect-cluster-connect-*

Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backend
amer-backendAKHQakhq-*Kafka UIlogs


Certificates 

Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036

Resource

Certificate Location

Valid from

Valid to 

Issued To

Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/kong/config_files/certs

2023-02-22 15:16:04

2025-02-21 15:16:04

https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/
Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/amer-backend/secrets.yaml.encrypted--kafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094



" + }, + { + "title": "AMER DEVSBX Services", + "pageID": "310950591", + "pageLink": "/display/GMDM/AMER+DEVSBX+Services", + "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name

Endpoint

Gateway API OAuth2 External - DEVhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-devsbx
Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEVhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-gw-amer-devsbx
Kafkakafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://gblmdmhubnprodamrasp100762

HUB UIhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/ui-amer-devsbx/#/dashboard

Grafana dashboards

Resource Name

Endpoint

HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_devsbx&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_devsbx&var-topic=All&var-node=11
Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_sandbox
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_devsbx&var-component=manager
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_sandbox&var-service=All&var-node=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_devsbx&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name

Endpoint

Kibana

https://kibana-amer-sandbox-gbl-mdm-hub.COMPANY.com (DEVSBX prefixed dashboards)

Documentation

Resource Name

Endpoint

Manager API documentationhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-devsbx/swagger-ui/index.html?configUrl=/api-gw-spec-amer-devsbx/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-devsbx/swagger-ui/index.html?configUrl=/api-batch-spec-amer-devsbx/v3/api-docs/swagger-config

Airflow

Resource Name

Endpoint

Airflow UIhttps://airflow-amer-sandbox-gbl-mdm-hub.COMPANY.com

Consul

Resource Name

Endpoint

Consul UIhttps://consul-amer-sandbox-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name

Endpoint

AKHQ Kafka UIhttps://akhq-amer-sandbox-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace)ComponentPods (* marks the variable part of the name)DescriptionLogsPod ports

amer-devsbx

Managermdmhub-mdm-manager-*Gateway APIlogs


8081 - application API,

8000 - remote debugging, when enabled,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - Swagger API definition, if available


amer-devsbx

Batch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogs
amer-devsbxApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogs
amer-devsbxEnrichermdmhub-entity-enricher-*Reltio events enricherlogs
amer-devsbxCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogs

amer-devsbx

Publishermdmhub-event-publisher-*Events publisherlogs
amer-devsbxReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogs

Internal Resources


Resource Name

Endpoint

Mongo

mongodb://mongo-amer-sandbox-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-amer-sandbox-gbl-mdm-hub.COMPANY.com
Elasticsearchhttps://elastic-amer-sandbox-gbl-mdm-hub.COMPANY.com


" + }, + { + "title": "APAC", + "pageID": "228933517", + "pageLink": "/display/GMDM/APAC", + "content": "" + }, + { + "title": "APAC Non PROD Cluster", + "pageID": "228933519", + "pageLink": "/display/GMDM/APAC+Non+PROD+Cluster", + "content": "

Physical Architecture


\"\"

Kubernetes cluster


nameIPConsole addressresource typeAWS regionFilesystemComponentsType
atp-mdmhub-nprod-apac

●●●●●●●●●●●●●●●

●●●●●●●●●●●●●●●

https://pdcs-apa1p.COMPANY.com
EKS over EC2ap-southeast-1

~60GB per node,

6TBx2 replicated Portworx volumes

Kong, Kafka, Mongo, Prometheus, MDMHUB microservices

inbound/outbound

Components & Logs

DEV - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

apac-dev

Managermdmhub-mdm-manager-*Managerlogs

8081 - application API,

8000 - remote debugging, when enabled,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - Swagger API definition, if available


apac-dev

Batch Servicemdmhub-batch-service-*Batch Servicelogs
apac-devAPI routermdmhub-mdm-api-router-*API Routerlogs

apac-dev

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
apac-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
apac-devCallback Servicemdmhub-callback-service-*Callback Servicelogs

apac-dev

Event Publishermdmhub-event-publisher-*Event Publisherlogs
apac-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs
apac-dev
Callback delay service
mdmhub-callback-delay-service-*Callback delay service
logs

QA - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

apac-qa

Managermdmhub-mdm-manager-*Managerlogs

8081 - application API,

8000 - remote debugging, when enabled,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - Swagger API definition, if available

apac-qa

Batch Servicemdmhub-batch-service-*Batch Servicelogs
apac-qaAPI routermdmhub-mdm-api-router-*API Routerlogs

apac-qa

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
apac-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
apac-qaCallback Servicemdmhub-callback-service-*Callback Servicelogs

apac-qa

Event Publishermdmhub-event-publisher-*Event Publisherlogs
apac-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs
apac-qa
Callback delay service
mdmhub-callback-delay-service-*Callback delay service
logs

STAGE - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

apac-stage

Managermdmhub-mdm-manager-*Managerlogs

8081 - application API,

8000 - remote debugging, when enabled,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - Swagger API definition, if available

apac-stage

Batch Servicemdmhub-batch-service-*Batch Servicelogs
apac-stageAPI routermdmhub-mdm-api-router-*API Routerlogs

apac-stage

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
apac-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
apac-stageCallback Servicemdmhub-callback-service-*Callback Servicelogs

apac-stage

Event Publishermdmhub-event-publisher-*Event Publisherlogs
apac-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs
apac-stage
Callback delay service
mdmhub-callback-delay-service-*Callback delay service
logs

Non PROD - backend 

NamespaceComponentPodDescriptionLogs
kongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kong
apac-backendKafka

mdm-kafka-kafka-0

mdm-kafka-kafka-1

mdm-kafka-kafka-2

Kafkalogs
apac-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace apac-backend
apac-backendZookeeper

mdm-kafka-zookeeper-0

mdm-kafka-zookeeper-1

mdm-kafka-zookeeper-2

Zookeeperlogs
apac-backendMongomongo-0Mongologs
apac-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace apac-backend
apac-backendFluentDfluentd-*EFK - fluentd

kubectl logs {{pod name}} --namespace apac-backend

apac-backendElasticsearch

elasticsearch-es-default-0

elasticsearch-es-default-1

EFK - elasticsearchkubectl logs {{pod name}} --namespace apac-backend
apac-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace apac-backend
monitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoring
apac-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace apac-backend
apac-backendMongo exportermongo-exporter-*mongo metrics exporter---
apac-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace apac-backend
apac-backendConsul

consul-consul-server-0

consul-consul-server-1

consul-consul-server-2

Consulkubectl logs {{pod name}} --namespace apac-backend
apac-backendSnowflake connector

apac-dev-mdm-connect-cluster-connect-*

apac-qa-mdm-connect-cluster-connect-*

apac-stage-mdm-connect-cluster-connect-*

Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace apac-backend
monitoringKafka Connect Exporter

monitoring-jdbc-snowflake-exporter-apac-dev-*

monitoring-jdbc-snowflake-exporter-apac-stage-*

monitoring-jdbc-snowflake-exporter-apac-qa-*

Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoring
apac-backendAKHQakhq-*Kafka UIlogs


Certificates 

Resource

Certificate LocationValid fromValid to Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/nprod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-apac-nprod-gbl-mdm-hub.COMPANY.com
Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/nprod/namespaces/apac-backend/secrets.yaml.encrypted2022/03/072024/03/06https://kafka-api-nprod-gbl-mdm-hub.COMPANY.com:9094
" + }, + { + "title": "APAC DEV Services", + "pageID": "228933556", + "pageLink": "/display/GMDM/APAC+DEV+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEV

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-dev

Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEV

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-dev

Kafka

kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094

MDM HUB S3 s3://globalmdmnprodaspasp202202171347
HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-dev/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

DB Name

COMM_APAC_MDM_DMART_DEV_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_APAC_MDM_DMART_DEV_DEVOPS_ROLE

Resource NameEndpoint
HUB Performance

https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_dev&var-node=All&var-type=entities

Kafka Topics Overview

https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_dev&var-topic=All&var-node=1

JMX Overview

https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_dev&var-component=manager

Kong

https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=All

MongoDB

https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_dev&var-interval=$__auto_interval_interval

Kube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=Prometheus

Pod Monitoring

https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=All
PVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprod


Resource NameEndpoint
Kibana

https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentation

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-dev/swagger-ui/index.html?configUrl=/api-gw-spec-apac-dev/v3/api-docs/swagger-config

Batch Service API documentation

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-dev/swagger-ui/index.html?configUrl=/api-batch-spec-apac-dev/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UI

https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UI

https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UI

https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com

Clients

MDM Systems

Reltio DEV - 2NBAwv1z2AvlkgS

Resource NameEndpoint
SQS queue name

https://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_2NBAwv1z2AvlkgS

Reltio

https://mpe-02.reltio.com/ui/2NBAwv1z2AvlkgS

https://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgS

Reltio Gateway User

svc-pfe-mdmhub
RDMhttps://rdm.reltio.com/lookups/GltqYa2x8xzSnB8


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibana

https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com

Elasticsearch

https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com
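The Kafka endpoint above uses the SASL_SSL listener on port 9094. A minimal client.properties sketch follows; the SCRAM mechanism and the placeholder credentials are assumptions, so confirm the real mechanism and secrets (e.g. in Consul / the encrypted secrets repo) before connecting.

```shell
# Minimal client.properties sketch for the SASL_SSL listener on port 9094.
# The SCRAM-SHA-512 mechanism and the <user>/<password> placeholders are
# assumptions - verify against the actual broker configuration.
cat > client.properties <<'EOF'
bootstrap.servers=kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<user>" password="<password>";
EOF
# Show the security settings that were written:
grep 'security.protocol' client.properties
```

With such a file in place, a connectivity smoke test could be, for example, `kafka-console-consumer.sh --bootstrap-server kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 --consumer.config client.properties --topic <topic>`.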

" + }, + { + "title": "APAC QA Services", + "pageID": "234693067", + "pageLink": "/display/GMDM/APAC+QA+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - QA

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-qa

Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - QA

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-qa

Kafka

kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094

MDM HUB S3 

s3://globalmdmnprodaspasp202202171347

HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-qa/#/dashboard
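The key-authenticated gateway above can be exercised with a plain curl call. This is a sketch only: "apikey" is Kong's default key-auth header name and is an assumption here, and the resource path is a placeholder - see the Swagger specs under Documentation for real paths.

```shell
# Sketch: call the key-authenticated QA gateway.
# "apikey" is Kong's default key-auth header (assumption); <your-key> and
# <resource-path> are placeholders.
base="https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-qa"
echo "curl -s -H 'apikey: <your-key>' ${base}/<resource-path>"
```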

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

DB Name

COMM_APAC_MDM_DMART_QA_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_APAC_MDM_DMART_QA_DEVOPS_ROLE

Resource NameEndpoint
HUB Performance

https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_qa&var-node=All&var-type=entities

Kafka Topics Overview

https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_qa&var-topic=All&var-node=1

JMX Overview

https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_qa&var-component=manager

Kong

https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=All

MongoDB

https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_qa&var-interval=$__auto_interval_interval

Kube State

https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=Prometheus

Pod Monitoring

https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=All

PVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprod


Resource NameEndpoint
Kibana

https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentation

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-qa/swagger-ui/index.html?configUrl=/api-gw-spec-apac-qa/v3/api-docs/swagger-config

Batch Service API documentation

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-qa/swagger-ui/index.html?configUrl=/api-batch-spec-apac-qa/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UI

https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UI

https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UI

https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com

Clients

MDM Systems

Reltio QA - xs4oRCXpCKewNDK

Resource NameEndpoint
SQS queue name

https://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_xs4oRCXpCKewNDK

Reltio

https://mpe-02.reltio.com/ui/xs4oRCXpCKewNDK

https://mpe-02.reltio.com/reltio/api/xs4oRCXpCKewNDK

Reltio Gateway User

svc-pfe-mdmhub
RDM

https://rdm.reltio.com/lookups/jemrjLkPUhOsPMa


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibana

https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com

Elasticsearch

https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com

" + }, + { + "title": "APAC STAGE Services", + "pageID": "234693073", + "pageLink": "/display/GMDM/APAC+STAGE+Services", + "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - STAGE

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-stage

Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - STAGE

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-stage

Kafka

kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094

MDM HUB S3 s3://globalmdmnprodaspasp202202171347
HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-stage/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

DB Name

COMM_APAC_MDM_DMART_STG_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_APAC_MDM_DMART_STG_DEVOPS_ROLE


Resource NameEndpoint
HUB Performance

https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_stage&var-node=All&var-type=entities

Kafka Topics Overview

https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_stage&var-topic=All&var-node=1

JMX Overview

https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_stage&var-component=manager

Kong

https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=All

MongoDB

https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_stage&var-interval=$__auto_interval_interval

Kube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=Prometheus

Pod Monitoring

https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=All
PVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprod


Resource NameEndpoint
Kibana

https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentation

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-stage/swagger-ui/index.html?configUrl=/api-gw-spec-apac-stage/v3/api-docs/swagger-config

Batch Service API documentation

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-stage/swagger-ui/index.html?configUrl=/api-batch-spec-apac-stage/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UI

https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UI

https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UI

https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com

Clients

MDM Systems

Reltio STAGE - Y4StMNK3b0AGDf6

Resource NameEndpoint
SQS queue name

https://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_Y4StMNK3b0AGDf6

Reltio

https://mpe-02.reltio.com/ui/Y4StMNK3b0AGDf6

https://mpe-02.reltio.com/reltio/api/Y4StMNK3b0AGDf6

Reltio Gateway User

svc-pfe-mdmhub
RDM

https://rdm.reltio.com/lookups/NYa4AETF73napDa


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibana

https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com

Elasticsearch

https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com

" + }, + { + "title": "APAC PROD Cluster", + "pageID": "234712170", + "pageLink": "/display/GMDM/APAC+PROD+Cluster", + "content": "

Physical Architecture


\"\"

Kubernetes cluster


nameIPConsole addressresource typeAWS regionFilesystemComponentsType
atp-mdmhub-prod-apac

●●●●●●●●●●●●●●●

●●●●●●●●●●●●●●●

https://pdcs-apa1p.COMPANY.com
EKS over EC2ap-southeast-1

~60GB per node,

6TBx2 replicated Portworx volumes

Kong, Kafka, Mongo, Prometheus, MDMHUB microservices

inbound/outbound

Components & Logs

PROD - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

apac-prod

Managermdmhub-mdm-manager-*Managerlogs


8081 - application API,

8000 - remote debugging port; when enabled, you can attach a debugger to the application in this environment,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - serves the Swagger API definition, if available



apac-prod

Batch Servicemdmhub-batch-service-*Batch Servicelogs
apac-prodAPI routermdmhub-mdm-api-router-*API Routerlogs

apac-prod

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
apac-prodEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
apac-prodCallback Servicemdmhub-callback-service-*Callback Servicelogs

apac-prod

Event Publishermdmhub-event-publisher-*Event Publisherlogs
apac-prodReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs
apac-prod
Callback delay service
mdmhub-callback-delay-service-*Callback delay service
logs

PROD - backend 

NamespaceComponentPodDescriptionLogs
kongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kong
apac-backendKafka

mdm-kafka-kafka-0

mdm-kafka-kafka-1

mdm-kafka-kafka-2

Kafkalogs
apac-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace apac-backend
apac-backendZookeeper

mdm-kafka-zookeeper-0

mdm-kafka-zookeeper-1

mdm-kafka-zookeeper-2

Zookeeperlogs
apac-backendMongomongo-0Mongologs
apac-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace apac-backend
apac-backendFluentDfluentd-*EFK - fluentd

kubectl logs {{pod name}} --namespace apac-backend

apac-backendElasticsearch

elasticsearch-es-default-0

elasticsearch-es-default-1

EFK - elasticsearchkubectl logs {{pod name}} --namespace apac-backend
apac-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace apac-backend
monitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoring
apac-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace apac-backend
apac-backendMongo exportermongo-exporter-*mongo metrics exporter---
apac-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace apac-backend
apac-backendConsul

consul-consul-server-0

consul-consul-server-1

consul-consul-server-2

Consulkubectl logs {{pod name}} --namespace apac-backend
apac-backendSnowflake connector

apac-prod-mdm-connect-cluster-connect-*

Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace apac-backend
monitoringKafka Connect Exporter

monitoring-jdbc-snowflake-exporter-apac-prod-*

Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoring
apac-backendAKHQakhq-*Kafka UIlogs
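The "kubectl logs {{pod name}} --namespace ..." entries above all follow the same pattern. A small sketch, using an example pod from the table (the -* suffixes are generated by Kubernetes, so list the live pod names first):

```shell
# Sketch: read logs for a backend pod from the table above.
# "mdm-kafka-kafka-0" is an example pod name; substitute a live one from
# `kubectl get pods --namespace apac-backend`.
ns="apac-backend"
pod="mdm-kafka-kafka-0"
cmd="kubectl logs ${pod} --namespace ${ns} --tail=200"
echo "${cmd}"
```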


Certificates 

Resource

Certificate LocationValid fromValid to Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/prod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-apac-prod-gbl-mdm-hub.COMPANY.com
Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/prod/namespaces/apac-backend/secrets.yaml.encrypted2022/03/072024/03/06https://kafka-api-prod-gbl-mdm-hub.COMPANY.com:9094
" + }, + { + "title": "APAC PROD Services", + "pageID": "234712172", + "pageLink": "/display/GMDM/APAC+PROD+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - PROD

https://api-apac-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-prod

Ping Federate

https://prodfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - PROD

https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-gw-apac-prod

Kafka

kafka-apac-prod-gbl-mdm-hub.COMPANY.com:9094

MDM HUB S3 s3://globalmdmprodaspasp202202171415
HUB UIhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/ui-apac-prod/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

emeaprod01.eu-west-1.privatelink.snowflakecomputing.com

DB Name

COMM_APAC_MDM_DMART_PROD_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLE

Resource NameEndpoint
HUB Performance

https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_prod&var-node=All&var-type=entities

Kafka Topics Overview

https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_prod&var-topic=All&var-node=1

JMX Overview

https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_prod&var-component=mdm_manager

Kong

https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_prod&var-service=All&var-node=All

MongoDB

https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_prod&var-interval=$__auto_interval_interval

Kube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-prod-apac&var-node=All&var-namespace=All&var-datasource=Prometheus

Pod Monitoring

https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_prod&var-namespace=All
PVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_prod


Resource NameEndpoint
Kibana

https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com (PROD prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentation

https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-prod/swagger-ui/index.html?configUrl=/api-gw-spec-apac-prod/v3/api-docs/swagger-config

Batch Service API documentation

https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-prod/swagger-ui/index.html?configUrl=/api-batch-spec-apac-prod/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UI

https://airflow-apac-prod-gbl-mdm-hub.COMPANY.com

Consul

Resource NameEndpoint
Consul UI

https://consul-apac-prod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UI

https://akhq-apac-prod-gbl-mdm-hub.COMPANY.com

Clients

MDM Systems

Reltio PROD - sew6PfkTtSZhLdW

Resource NameEndpoint
SQS queue name

https://sqs.ap-southeast-1.amazonaws.com/930358522410/ap-360_sew6PfkTtSZhLdW

Reltio

https://ap-360.reltio.com/ui/sew6PfkTtSZhLdW

https://ap-360.reltio.com/reltio/api/sew6PfkTtSZhLdW

Reltio Gateway User

svc-pfe-mdmhub-prod
RDMhttps://rdm.reltio.com/lookups/ARTA9lOg3dbvDqk


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-apac-prod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-apac-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibana

https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com

Elasticsearch

https://elastic-apac-prod-gbl-mdm-hub.COMPANY.com

" + }, + { + "title": "EMEA", + "pageID": "181022903", + "pageLink": "/display/GMDM/EMEA", + "content": "" + }, + { + "title": "EMEA External proxy", + "pageID": "308256760", + "pageLink": "/display/GMDM/EMEA+External+proxy", + "content": "

This page describes the Kong external proxy servers, deployed in a DLP (Double Lollipop) AWS account, that clients outside the COMPANY network use to access the MDM Hub.

Kong proxy instances

EnvironmentConsole addressInstanceSSH accessresource typeAWS regionAWS Account IDComponents
Non PRODhttp://awsprodv2.COMPANY.com/
and use the role:

i-08d4b21c314a98700 (EUW1Z2DL115)

ssh ec2-user@euw1z2dl115.COMPANY.com
EC2eu-west-1432817204314

Kong

PROD

i-091aa7f1fe1ede714 (EUW1Z2DL113)

ssh ec2-user@euw1z2dl113.COMPANY.com
i-05c4532bf7b8d7511 (EUW1Z2DL114)
ssh ec2-user@euw1z2dl114.COMPANY.com
 

External Hub Endpoints

EnvironmentServiceEndpointInbound security group configuration
Non PRODAPI

https://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/

MDMHub-kafka-and-api-proxy-external-nprod-sg

Kafka

kafka-b1-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095

kafka-b2-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095

kafka-b3-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095

PRODAPI

https://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/

MDMHub-kafka-and-api-proxy-external-prod-sg - due to the limit of 60 rules per SG, add new ones to:

MDMHub-kafka-and-api-proxy-external-prod-sg-2

Kafka

kafka-b1-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095

kafka-b2-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095

kafka-b3-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095

Clients

EnvironmentClients
Non PROD

Find all details in the Security Group

MDMHub-kafka-and-api-proxy-external-nprod-sg

PROD

Find all details in the Security Group

MDMHub-kafka-and-api-proxy-external-prod-sg

Ansible configuration

ResourceAddress
Install Kong proxyhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_kong.yml
Install cadvisorhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_cadvisor.yml
Non PROD inventoryhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/proxy_nprod
PROD inventoryhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/proxy_prod


Useful SOPs

How to access AWS Console

How to restart the EC2 instance

How to login to hosts with SSH

No downtime Kong restart/upgrade

" + }, + { + "title": "EMEA Non PROD Cluster", + "pageID": "181022904", + "pageLink": "/display/GMDM/EMEA+Non+PROD+Cluster", + "content": "

Physical Architecture


\"\"

Kubernetes cluster


nameIPConsole addressresource typeAWS regionFilesystemComponentsType
atp-mdmhub-nprod-emea

10.90.96.0/23

10.90.98.0/23

https://pdcs-ema1p.COMPANY.com/
EKS over EC2eu-west-1

~100GB per node,

7.3Ti x2 replicated Portworx volumes

Kong, Kafka, Mongo, Prometheus, MDMHUB microservices

inbound/outbound

Components & Logs

DEV - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

emea-dev

Managermdmhub-mdm-manager-*Managerlogs


8081 - application API,

8000 - remote debugging port; when enabled, you can attach a debugger to the application in this environment,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - serves the Swagger API definition, if available


emea-dev

Batch Servicemdmhub-batch-service-*Batch Servicelogs
emea-devAPI routermdmhub-mdm-api-router-*API Routerlogs

emea-dev

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
emea-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
emea-devCallback Servicemdmhub-callback-service-*Callback Servicelogs

emea-dev

Event Publishermdmhub-event-publisher-*Event Publisherlogs
emea-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs

QA - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

emea-qa

Managermdmhub-mdm-manager-*Managerlogs


8081 - application API,

8000 - remote debugging port; when enabled, you can attach a debugger to the application in this environment,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - serves the Swagger API definition, if available


emea-qa

Batch Servicemdmhub-batch-service-*Batch Servicelogs
emea-qaAPI routermdmhub-mdm-api-router-*API Routerlogs

emea-qa

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
emea-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
emea-qaCallback Servicemdmhub-callback-service-*Callback Servicelogs

emea-qa

Event Publishermdmhub-event-publisher-*Event Publisherlogs
emea-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs

STAGE - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

emea-stage

Managermdmhub-mdm-manager-*Managerlogs


8081 - application API,

8000 - remote debugging port; when enabled, you can attach a debugger to the application in this environment,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - serves the Swagger API definition, if available


emea-stage

Batch Servicemdmhub-batch-service-*Batch Servicelogs
emea-stageAPI routermdmhub-mdm-api-router-*API Routerlogs

emea-stage

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
emea-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
emea-stageCallback Servicemdmhub-callback-service-*Callback Servicelogs

emea-stage

Event Publishermdmhub-event-publisher-*Event Publisherlogs
emea-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs

GBL DEV - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

gbl-dev

Managermdmhub-mdm-manager-*Managerlogs


8081 - application API,

8000 - remote debugging port; when enabled, you can attach a debugger to the application in this environment,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - serves the Swagger API definition, if available


gbl-dev

Batch Servicemdmhub-batch-service-*Batch Servicelogs

gbl-dev

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
gbl-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
gbl-devCallback Servicemdmhub-callback-service-*Callback Servicelogs

gbl-dev

Event Publishermdmhub-event-publisher-*Event Publisherlogs
gbl-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs
gbl-devDCR Servicemdmhub-mdm-dcr-service-*DCR Servicelogs
gbl-devMAP Channel mdmhub-mdm-map-channel-*MAP Channellogs
gbl-devPforceRX Channelmdm-pforcerx-channel-*PforceRX Channellogs

GBL QA - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

gbl-qa

Managermdmhub-mdm-manager-*Managerlogs


8081 - application API,

8000 - remote debugging port; when enabled, you can attach a debugger to the application in this environment,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - serves the Swagger API definition, if available


gbl-qa

Batch Servicemdmhub-batch-service-*Batch Servicelogs

gbl-qa

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
gbl-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
gbl-qaCallback Servicemdmhub-callback-service-*Callback Servicelogs

gbl-qa

Event Publishermdmhub-event-publisher-*Event Publisherlogs
gbl-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs
gbl-qaDCR Servicemdmhub-mdm-dcr-service-*DCR Servicelogs
gbl-qaMAP Channel mdmhub-mdm-map-channel-*MAP Channellogs
gbl-qaPforceRX Channelmdm-pforcerx-channel-*PforceRX Channellogs

GBL STAGE - microservices

ENV (namespace)ComponentPodDescriptionLogsPod ports

gbl-stage

Managermdmhub-mdm-manager-*Managerlogs


8081 - application API,

8000 - remote debugging port; when enabled, you can attach a debugger to the application in this environment,

9000 - Prometheus exporter,

8888 - Spring Boot actuator,

8080 - serves the Swagger API definition, if available


gbl-stage

Batch Servicemdmhub-batch-service-*Batch Servicelogs

gbl-stage

Reltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogs
gbl-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogs
gbl-stageCallback Servicemdmhub-callback-service-*Callback Servicelogs

gbl-stage

Event Publishermdmhub-event-publisher-*Event Publisherlogs
gbl-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogs
gbl-stageDCR Servicemdmhub-mdm-dcr-service-*DCR Servicelogs
gbl-stageMAP Channel mdmhub-mdm-map-channel-*MAP Channellogs
gbl-stagePforceRX Channelmdm-pforcerx-channel-*PforceRX Channellogs

Non PROD - backend 

NamespaceComponentPodDescriptionLogs
kongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kong
emea-backendKafka

mdm-kafka-kafka-0

mdm-kafka-kafka-1

mdm-kafka-kafka-2

Kafkalogs
emea-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace emea-backend
emea-backendZookeeper

mdm-kafka-zookeeper-0

mdm-kafka-zookeeper-1

mdm-kafka-zookeeper-2

Zookeeperlogs
emea-backendMongomongo-0Mongologs
emea-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace emea-backend
emea-backendFluentDfluentd-*EFK - fluentd

kubectl logs {{pod name}} --namespace emea-backend

emea-backendElasticsearch

elasticsearch-es-default-0

elasticsearch-es-default-1

EFK - elasticsearchkubectl logs {{pod name}} --namespace emea-backend
emea-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace emea-backend
monitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoring
emea-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace emea-backend
emea-backendMongo exportermongo-exporter-*mongo metrics exporter---
emea-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace emea-backend
emea-backendConsul

consul-consul-server-0

consul-consul-server-1

consul-consul-server-2

Consulkubectl logs {{pod name}} --namespace emea-backend
emea-backendSnowflake connector

emea-dev-mdm-connect-cluster-connect-*

emea-qa-mdm-connect-cluster-connect-*

emea-stage-mdm-connect-cluster-connect-*

Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace emea-backend
monitoringKafka Connect Exporter

monitoring-jdbc-snowflake-exporter-emea-dev-*

monitoring-jdbc-snowflake-exporter-emea-stage-*

monitoring-jdbc-snowflake-exporter-emea-qa-*

Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoring
emea-backendAKHQakhq-*Kafka UIlogs


Certificates 

Resource

Certificate LocationValid fromValid to Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-emea-nprod-gbl-mdm-hub.COMPANY.com
Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/namespaces/emea-backend2022/03/072024/03/06kafka-emea-nprod-gbl-mdm-hub.COMPANY.com
" + }, + { + "title": "EMEA DEV Services", + "pageID": "181022906", + "pageLink": "/display/GMDM/EMEA+DEV+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - DEV

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-dev

Ping Federate

https://devfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-dev
Kafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://pfe-atp-eu-w1-nprod-mdmhub/emea/dev

HUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-dev/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

DB Name

COMM_EMEA_MDM_DMART_DEV_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_EMEA_MDM_DMART_DEVOPS_DEV_ROLE


Resource NameEndpoint
HUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_dev&var-node=All&var-type=entities
Kafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_dev&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=
JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_dev&var-component=mdm_manager&var-instance=All&var-node=
Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval
Kube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=Prometheus
Pod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=All
PVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/xLgt8oTik/portworx-cluster-monitoring?orgId=1&var-cluster=atp-mdmhub-nprod-emea&var-node=All
Resource NameEndpoint
Kibana

https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/ (DEV prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-dev/swagger-ui/index.html?configUrl=/api-gw-spec-emea-dev/v3/api-docs/swagger-config
Batch Service API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-dev/swagger-ui/index.html?configUrl=/api-batch-spec-emea-dev/v3/api-docs/swagger-config
DCR Service 2 API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-dev/swagger-ui/index.html?configUrl=/api-dcr-spec-emea-dev/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/

Consul

Resource NameEndpoint
Consul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

Clients


MDM Systems

Reltio

DEV - wn60kG248ziQSMW

Resource NameEndpoint
SQS queue namehttps://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_wn60kG248ziQSMW
Reltio

https://mpe-01.reltio.com/ui/wn60kG248ziQSMW

https://mpe-01.reltio.com/reltio/api/wn60kG248ziQSMW

Reltio Gateway User

svc-pfe-mdmhub
RDMhttps://rdm.reltio.com/lookups/rQHwiWkdYGZRTNq


Internal Resources


Resource NameEndpoint
Mongo

mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka

kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL

Kibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/
Elasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/




" + }, + { + "title": "EMEA QA Services", + "pageID": "192383454", + "pageLink": "/display/GMDM/EMEA+QA+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - QA

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-qa

Ping Federate
https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - QAhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-qa
Kafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://pfe-atp-eu-w1-nprod-mdmhub/emea/qa

HUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-qa/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

DB Name

COMM_EMEA_MDM_DMART_QA_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_EMEA_MDM_DMART_QA_DEVOPS_ROLE

Grafana dashboards

Resource Name / Endpoint
HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_qa&var-node=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_qa&var-topic=All&var-node=1&var-instance=euw1z2dl112.COMPANY.com:9102
Host Statistics
https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_nprod&var-job=node-exporter&var-node=10.90.129.220&var-port=9100
Pod monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&var-env=emea_nprod&var-namespace=All
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_qa&var-component=batch_service&var-instance=All&var-node=
Kong
https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name / Endpoint
Kibana

https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (QA prefixed dashboards)

Documentation

Resource Name / Endpoint
Manager API documentation

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-qa/swagger-ui/index.html

Batch Service API documentation
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-qa/swagger-ui/index.html

Airflow

Resource Name / Endpoint
Airflow UI
https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/login/?next=https%3A%2F%2Fairflow-emea-nprod-gbl-mdm-hub.COMPANY.com%2Fhome

Consul

Resource Name / Endpoint
Consul UI
https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/

AKHQ - Kafka

Resource Name / Endpoint
AKHQ Kafka UI
https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/login

Clients


MDM Systems

Reltio

QA - vke5zyYwTifyeJS

Resource Name / Endpoint
SQS queue name
https://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_vke5zyYwTifyeJS
Reltio

https://mpe-01.reltio.com/ui/vke5zyYwTifyeJS

https://mpe-01.reltio.com/reltio/api/vke5zyYwTifyeJS

Reltio Gateway User

svc-pfe-mdmhub
RDM
https://rdm.reltio.com/lookups/jIqfd8krU6ua5kR


Internal Resources


Resource Name / Endpoint
Mongo
mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka

http://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSL

Kibana
https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home
Elasticsearch
https://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/
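The Kafka listener above is advertised as SASL SSL, so a client needs both TLS and SASL settings. A minimal sketch of a kafka-python client configuration, assuming SCRAM credentials; the actual mechanism and credentials are not documented on this page:

```python
# Sketch: client settings for the SASL_SSL Kafka listener listed above.
# The SASL mechanism and the credentials are assumptions; confirm both
# with the HUB team before use.

def kafka_client_config(username: str, password: str) -> dict:
    """Return keyword arguments accepted by kafka-python's KafkaConsumer."""
    return {
        "bootstrap_servers": "kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094",
        "security_protocol": "SASL_SSL",
        "sasl_mechanism": "SCRAM-SHA-512",  # assumed; not stated on this page
        "sasl_plain_username": username,
        "sasl_plain_password": password,
    }

config = kafka_client_config("example-user", "example-secret")  # placeholders
```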






" + }, + { + "title": "EMEA STAGE Services", + "pageID": "192383457", + "pageLink": "/display/GMDM/EMEA+STAGE+Services", + "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name / Endpoint
Gateway API OAuth2 External - DEV

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-stage

Ping Federate

https://stgfederate.COMPANY.com/as/introspect.oauth2

Gateway API KEY auth - DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-stage
Kafka
kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://pfe-atp-eu-w1-nprod-mdmhub/emea/stage

HUB UI
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-stage/#/dashboard

Snowflake MDM DataMart

Resource NameEndpoint
DB Url

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

DB Name

COMM_EMEA_MDM_DMART_STG_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_EMEA_MDM_DMART_STG_DEVOPS_ROLE


Monitoring

Resource Name / Endpoint
HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_stage&var-component=mdm_manager&var-component_publisher=event_publisher&var-component_subscriber=reltio_subscriber&var-instance=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_stage&var-kube_env=amer_nprod&var-topic=All&var-instance=All&var-node=
Host Statistics
https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_nprod&var-job=node-exporter&var-node=10.90.129.220&var-port=9100
Pod monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&var-env=emea_nprod&var-namespace=All
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_stage&var-component=batch_service&var-instance=All&var-node=
Kong
https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval


Logs

Resource Name / Endpoint
Kibana

https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (STAGE prefixed dashboards)

Documentation

Resource Name / Endpoint
Manager API documentation
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-stage/swagger-ui/index.html?configUrl=/api-gw-spec-emea-stage/v3/api-docs/swagger-config
Batch Service API documentation
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-stage/swagger-ui/index.html?configUrl=/api-batch-spec-emea-stage/v3/api-docs/swagger-config

Airflow

Resource Name / Endpoint
Airflow UI
https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/login/?next=https%3A%2F%2Fairflow-emea-nprod-gbl-mdm-hub.COMPANY.com%2Fhome

Consul

Resource Name / Endpoint
Consul UI
https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/

AKHQ - Kafka

Resource Name / Endpoint
AKHQ Kafka UI
https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/login

Clients


MDM Systems

Reltio

STAGE - Dzueqzlld107BVW

Resource Name / Endpoint
SQS queue name
https://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_Dzueqzlld107BVW
Reltio

https://mpe-01.reltio.com/ui/Dzueqzlld107BVW

https://mpe-01.reltio.com/reltio/api/Dzueqzlld107BVW

Reltio Gateway User

svc-pfe-mdmhub
RDM
https://rdm.reltio.com/lookups/TBxXCy2Z6LZ8nbn


Internal Resources


Resource Name / Endpoint
Mongo

mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka
http://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSL
Kibana
https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home
Elasticsearch
https://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/




" + }, + { + "title": "GBL DEV Services", + "pageID": "250130206", + "pageLink": "/display/GMDM/GBL+DEV+Services", + "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name / Endpoint
Gateway API OAuth2 External - DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-dev
Ping Federate
https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-dev
Kafka
kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 
s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)
HUB UI
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-dev/#/dashboard

Snowflake MDM DataMart

Resource Name / Endpoint

DB Url
https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name
COMM_EU_MDM_DMART_DEV_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_DEV_MDM_DMART_DEVOPS_ROLE

Monitoring

Resource Name / Endpoint
HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_dev&var-node=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_dev&var-topic=All&var-node=1&var-instance=10.192.70.189:9102
Pod Monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s
Kube State
https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=Prometheus
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_dev&var-component=batch_service&var-instance=All&var-node=
Kong
https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval

Logs

Resource Name / Endpoint
Kibana
https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (DEV prefixed dashboards)

Documentation

Resource Name / Endpoint
Manager API documentation
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-dev/swagger-ui/index.html

Airflow

Resource Name / Endpoint
Airflow UI
https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/

Consul

Resource Name / Endpoint
Consul UI
https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/

AKHQ - Kafka

Resource Name / Endpoint
AKHQ Kafka UI
https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

Clients

MDM Systems

Reltio GBL DEV FLy4mo0XAh0YEbN

Resource Name / Endpoint
SQS queue name
https://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_FLy4mo0XAh0YEbN
Reltio
https://eu-dev.reltio.com/ui/FLy4mo0XAh0YEbN
https://eu-dev.reltio.com/reltio/api/FLy4mo0XAh0YEbN

Reltio Gateway User

Integration_Gateway_User
RDM
https://rdm.reltio.com/%s/WUBsSEwz3SU3idO/


Internal Resources

Resource Name / Endpoint
Mongo

mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka
http://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSL
Kibana
https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/
Elasticsearch

https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com
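The Mongo endpoint above is a standard connection URI, so its host and port can be split out with the standard library, for example when building a health check. A quick sketch:

```python
from urllib.parse import urlsplit

# Split the Mongo connection URI listed above into its host and port parts.
uri = "mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017"
parts = urlsplit(uri)

host, port = parts.hostname, parts.port  # note: hostname is lowercased
```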

" + }, + { + "title": "GBL QA Services", + "pageID": "250130235", + "pageLink": "/display/GMDM/GBL+QA+Services", + "content": "

HUB Endpoints

API & Kafka & S3 & UI

Gateway API OAuth2 External - DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-qa
Ping Federate
https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-qa
Kafka
kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 
s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)
HUB UI
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-qa/#/dashboard

Snowflake MDM DataMart

DB Url
https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name
COMM_EU_MDM_DMART_QA_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_QA_MDM_DMART_DEVOPS_ROLE

Monitoring

HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_qa&var-node=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_qa&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=
Pod Monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=All
Kube State
https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=Prometheus
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_qa&var-component=batch_service&var-instance=All&var-node=
Kong
https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=gbl_dev&var-service=All&var-node=All
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval

Logs

Kibana
https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (QA prefixed dashboards)

Documentation

Manager API documentation
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-qa/swagger-ui/index.html

Airflow

Resource Name / Endpoint
Airflow UI
https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/

Consul

Resource Name / Endpoint
Consul UI
https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/

AKHQ - Kafka

Resource Name / Endpoint
AKHQ Kafka UI
https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

Clients

MDM Systems

Reltio GBL MAPP AwFwKWinxbarC0Z

SQS queue name
https://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_AwFwKWinxbarC0Z
Reltio
https://mpe-01.reltio.com/ui/AwFwKWinxbarC0Z/
https://mpe-01.reltio.com/reltio/api/AwFwKWinxbarC0Z/

Reltio Gateway User

Integration_Gateway_User
RDM
https://rdm.reltio.com/%s/WUBsSEwz3SU3idO/

Internal Resources

Mongo

mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka
kafka-emea-nprod-gbl-mdm-hub.COMPANY.com:9094 - SASL SSL
Kibana
https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/
Elasticsearch

https://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com

" + }, + { + "title": "GBL STAGE Services", + "pageID": "250130297", + "pageLink": "/display/GMDM/GBL+STAGE+Services", + "content": "

HUB Endpoints

API & Kafka & S3

Gateway API OAuth2 External - DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-stage
Ping Federate
https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-stage
Kafka
kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 
s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)
HUB UI
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-stage/#/dashboard

Snowflake MDM DataMart

DB Url
https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name
COMM_EU_MDM_DMART_STG_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_STG_MDM_DMART_DEVOPS_ROLE

Monitoring

HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_stage&var-node=All&var-type=entities


Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_stage&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=
Pod Monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=All
Kube State
https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=Prometheus
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_stage&var-component=batch_service&var-instance=All&var-node=
Kong
https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=gbl_stage&var-instance=&var-node_instance=&var-interval=$__auto_interval_interval

Logs

Kibana
https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (STAGE prefixed dashboards)

Documentation

Manager API documentation
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-stage/swagger-ui/index.html

Airflow

Resource Name / Endpoint
Airflow UI
https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/

Consul

Resource Name / Endpoint
Consul UI
https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/

AKHQ - Kafka

Resource Name / Endpoint
AKHQ Kafka UI
https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

Clients

MDM Systems

Reltio GBL STAGE FW4YTaNQTJEcN2g

SQS queue name
https://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_FW4YTaNQTJEcN2g
Reltio
https://eu-dev.reltio.com/ui/FW4YTaNQTJEcN2g/
https://eu-dev.reltio.com/reltio/api/FW4YTaNQTJEcN2g/

Reltio Gateway User

Integration_Gateway_User
RDM
https://rdm.reltio.com/%s/WUBsSEwz3SU3idO/

Internal Resources

Mongo

mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017

Kafka
http://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSL
Kibana
https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/
Elasticsearch

https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com

" + }, + { + "title": "EMEA PROD Cluster", + "pageID": "196881569", + "pageLink": "/display/GMDM/EMEA+PROD+Cluster", + "content": "

Physical Architecture


\"\"

Kubernetes cluster


Name / IP / Console address / Resource type / AWS region / Filesystem / Components / Type
atp-mdmhub-nprod-emea

10.90.96.0/23

10.90.98.0/23

https://pdcs-ema1p.COMPANY.com/
EKS over EC2eu-west-1

~100 GB per node,

7.3Ti x2 replicated Portworx volumes

Kong, Kafka, Mongo, Prometheus, MDMHUB microservices

inbound/outbound

Components & Logs

PROD - microservices

ENV (namespace) / Component / Pod / Description / Logs / Pod ports

emea-prod

Manager / mdmhub-mdm-manager-* / Manager / logs


8081 - application API,

8000 - if remote debugging is enabled you are able to use this to debug app in environment,

9000 - Prometheus exporter,

8888 - spring boot actuator,

8080 - serves swagger API definition - if available


emea-prod

Batch Service / mdmhub-batch-service-* / Batch Service / logs
emea-prod / API router / mdmhub-mdm-api-router-* / API Router / logs

emea-prod

Reltio Subscriber / mdmhub-reltio-subscriber-* / Reltio Subscriber / logs
emea-prod / Entity Enricher / mdmhub-entity-enricher-* / Entity Enricher / logs
emea-prod / Callback Service / mdmhub-callback-service-* / Callback Service / logs

emea-prod

Event Publisher / mdmhub-event-publisher-* / Event Publisher / logs
emea-prod / Reconciliation Service / mdmhub-mdm-reconciliation-service-* / Reconciliation Service / logs

PROD - backend 

Namespace / Component / Pod / Description / Logs
kong / Kong / mdmhub-kong-kong-* / API manager / kubectl logs {{pod name}} --namespace kong
emea-backend / Kafka

mdm-kafka-kafka-0

mdm-kafka-kafka-1

mdm-kafka-kafka-2

Kafka / logs
emea-backend / Kafka Exporter / mdm-kafka-kafka-exporter-* / Kafka Monitoring - Prometheus / kubectl logs {{pod name}} --namespace emea-backend
emea-backend / Zookeeper

mdm-kafka-zookeeper-0

mdm-kafka-zookeeper-1

mdm-kafka-zookeeper-2

Zookeeper / logs
emea-backend / Mongo / mongo-0
mongo-1
mongo-2
Mongo / logs
emea-backend / Kibana / kibana-kb-* / EFK - kibana / kubectl logs {{pod name}} --namespace emea-backend
emea-backend / FluentD / fluentd-* / EFK - fluentd

kubectl logs {{pod name}} --namespace emea-backend

emea-backend / Elasticsearch

elasticsearch-es-default-0

elasticsearch-es-default-1

elasticsearch-es-default-2

EFK - elasticsearch / kubectl logs {{pod name}} --namespace emea-backend
emea-backend / SQS Exporter / TODO / SQS Reltio exporter / kubectl logs {{pod name}} --namespace emea-backend
monitoring / cAdvisor / monitoring-cadvisor-* / Docker Monitoring - Prometheus / kubectl logs {{pod name}} --namespace monitoring
emea-backend / Mongo Connector / monstache-* / EFK - mongo → elasticsearch exporter / kubectl logs {{pod name}} --namespace emea-backend
emea-backend / Mongo exporter / mongo-exporter-* / mongo metrics exporter / ---
emea-backend / Git2Consul / git2consul-* / GIT to Consul loader / kubectl logs {{pod name}} --namespace emea-backend
emea-backend / Consul

consul-consul-server-0

consul-consul-server-1

consul-consul-server-2

Consul / kubectl logs {{pod name}} --namespace emea-backend
emea-backend / Snowflake connector

emea-prod-mdm-connect-cluster-connect-*

Snowflake Kafka Connector / kubectl logs {{pod name}} --namespace emea-backend
monitoring / Kafka Connect Exporter

monitoring-jdbc-snowflake-exporter-emea-prod-*

Kafka Connect metric exporter / kubectl logs {{pod name}} --namespace monitoring
emea-backend / AKHQ / akhq-* / Kafka UI / logs


Certificates 

Resource / Certificate Location / Valid from / Valid to / Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus / http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/namespaces/kong/config_files/certs / 2022/03/04 / 2024/03/03 / https://api-emea-prod-gbl-mdm-hub.COMPANY.com/
Kafka / http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/namespaces/emea-backend / 2022/03/07 / 2024/03/06 / https://kafka-emea-prod-gbl-mdm-hub.COMPANY.com/
" + }, + { + "title": "EMEA PROD Services", + "pageID": "196881867", + "pageLink": "/display/GMDM/EMEA+PROD+Services", + "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource NameEndpoint
Gateway API OAuth2 External - PROD
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prod
Ping Federate
https://prodfederate.COMPANY.com/as/token.oauth2
Gateway API KEY auth - PROD

https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-emea-prod

Kafka
kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://pfe-atp-eu-w1-prod-mdmhub/emea/prod

HUB UI
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ui-emea-prod/#/dashboard
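External callers obtain a token from Ping Federate before calling the OAuth2 gateway above. A hedged sketch of the token request body, assuming the standard OAuth2 client-credentials grant; the grant type and client registration details are not documented here:

```python
# Sketch: form payload for a token request against the Ping Federate
# endpoint listed above. The grant type is assumed and the client id/secret
# are placeholders; the page does not document client registration.

TOKEN_URL = "https://prodfederate.COMPANY.com/as/token.oauth2"

def token_request_body(client_id: str, client_secret: str) -> dict:
    """Form fields for POSTing to TOKEN_URL (application/x-www-form-urlencoded)."""
    return {
        "grant_type": "client_credentials",  # assumed grant type
        "client_id": client_id,
        "client_secret": client_secret,
    }

body = token_request_body("example-client", "example-secret")
```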

Snowflake MDM DataMart

Resource Name

Endpoint

DB Url
https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/
DB Name

COMM_EMEA_MDM_DMART_PROD_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLE

Monitoring

Resource NameEndpoint
HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_prod&var-node=All&var-type=entities
HUB Batch Performance
https://mdm-monitoring.COMPANY.com/grafana/d/gz0X6rkMk/hub-batch-performance?orgId=1&refresh=10s&var-env=emea_prod&var-node=All&var-name=All
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_prod&var-topic=All&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9102
Host Statistics
https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_prod&var-job=node_exporter&var-node=euw1z2pl113.COMPANY.com&var-port=9100
Docker monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=1
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_prod&var-component=manager&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9104
Kong
https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_prod&var-service=All&var-node=All
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_prod&var-instance=euw1z2pl115.COMPANY.com:9120&var-node_instance=euw1z2pl115.COMPANY.com&var-interval=$__auto_interval_interval

Logs

Resource NameEndpoint
Kibana
https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)

Documentation

Resource NameEndpoint
Manager API documentation
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-config
Batch Service API documentation
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-prod/swagger-ui/index.html?configUrl=/api-batch-spec-emea-prod/v3/api-docs/swagger-config

Airflow

Resource NameEndpoint
Airflow UI
https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/home

Consul

Resource NameEndpoint
Consul UI
https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/services

AKHQ - Kafka

Resource NameEndpoint
AKHQ Kafka UI
https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/login

Clients

MDM Systems

Reltio

PROD_EMEA Xy67R0nDA10RUV6

Resource NameEndpoint
SQS queue name
https://sqs.eu-west-1.amazonaws.com/930358522410/eu-360_Xy67R0nDA10RUV6
Reltio

https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6 - API

https://eu-360.reltio.com/ui/Xy67R0nDA10RUV6/# - UI

Reltio Gateway User

svc-pfe-mdmhub-prod
RDM

https://rdm.reltio.com/%s/uJG2vepGEXEHmrI/


Internal Resources


Resource NameEndpoint
Mongo

https://mongo-emea-prod-gbl-mdm-hub.COMPANY.com:27017

Kafka

http://kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b2-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b3-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/

Kibana
https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/
Elasticsearch
https://elastic-emea-prod-gbl-mdm-hub.COMPANY.com/
" + }, + { + "title": "GBL PROD Services", + "pageID": "284792395", + "pageLink": "/display/GMDM/GBL+PROD+Services", + "content": "

HUB Endpoints

API & Kafka & S3 & UI

Gateway API OAuth2 External - PROD
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gbl-prod
Ping Federate
https://prodfederate.COMPANY.com/as/token.oauth2
Gateway API KEY auth - PROD

https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gbl-prod

Kafka
kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 

s3://pfe-baiaes-eu-w1-project/mdm

HUB UI
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ui-gbl-prod/#/dashboard

Snowflake MDM DataMart

DB Url
https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/
DB Name

COMM_EU_MDM_DMART_PROD_DB

Default warehouse name

COMM_MDM_DMART_WH

DevOps role name

COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLE

Monitoring


HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_prod&var-component=mdm_manager&var-component_publisher=event_publisher&var-component_subscriber=reltio_subscriber&var-instance=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_prod&var-kube_env=emea_prod&var-topic=All&var-instance=All&var-node=
Host Statistics
https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=&var-instance=10.90.130.122
Pods monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=&var-instance=10.90.130.122
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_prod&var-component=manager&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9104
Kong
https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_prod&var-service=All&var-node=All
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_prod&var-instance=10.90.142.48:9216&var-node_instance=euw1z2pl115.COMPANY.com&var-interval=$__auto_interval_interval


Logs


Kibana
https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)


Documentation


Manager API documentation
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-config


Airflow


Airflow UI
https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/home


Consul


Consul UI
https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/services


AKHQ - Kafka


AKHQ Kafka UI
https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/login


Clients

MDM Systems

Reltio

PROD_EMEA - FW2ZTF8K3JpdfFl

SQS queue name
https://sqs.eu-west-1.amazonaws.com/930358522410/euprod-01_FW2ZTF8K3JpdfFl
Reltio

https://eu-360.reltio.com/reltio/api/FW2ZTF8K3JpdfFl - API

https://eu-360.reltio.com/ui/FW2ZTF8K3JpdfFl/ - UI

Reltio Gateway User

pfe_mdm_api
RDM
https://rdm.reltio.com/%s/ImsRdmCOMPANY/


Internal Resources


Mongo

https://mongo-emea-prod-gbl-mdm-hub.COMPANY.com:27017

Kafka

http://kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b2-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b3-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/

Kibana
https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/
Elasticsearch
https://elastic-emea-prod-gbl-mdm-hub.COMPANY.com/
" + }, + { + "title": "US Trade (FLEX)", + "pageID": "164470168", + "pageLink": "/pages/viewpage.action?pageId=164470168", + "content": "" + }, + { + "title": "US Non PROD Cluster", + "pageID": "164470067", + "pageLink": "/display/GMDM/US+Non+PROD+Cluster", + "content": "

Physical Architecture

\"\"


Hosts

ID / IP / Hostname / Docker User / Resource Type / Specification / AWS Region / Filesystem

DEV

●●●●●●●●●●●●●

amraelp00005781.COMPANY.com

mdmihnpr

EC2

r4.2xlarge

us-east

750 GB - /app

15 GB - /var/lib/docker

Components & Logs

ENV / Host / Component / Docker name / Description / Logs / Open Ports
DEV / DEV / Manager / devmdmsrv_mdm-manager_1 / Gateway API / /app/mdmgw/dev-mdm-srv/manager/log / 8849, 9104
DEV / DEV / Batch Channel / devmdmsrv_batch-channel_1 / Batch file processor, S3 poller / /app/mdmgw/dev-mdm-srv/batch_channel/log / 9121
DEV / DEV / Publisher / devmdmhubsrv_event-publisher_1 / Event publisher / /app/mdmhub/dev-mdm-srv/event_publisher/log / 9106
DEV / DEV / Subscriber / devmdmhubsrv_reltio-subscriber_1 / SQS Reltio event subscriber / /app/mdmhub/dev-mdm-srv/reltio_subscriber/log / 9105
DEV / DEV / Console / devmdmsrv_console_1 / Hawtio console / - / 9999
ENV / Host / Component / Docker name / Description / Logs / Open Ports
TEST / DEV / Manager / testmdmsrv_mdm-manager_1 / Gateway API / /app/mdmgw/test-mdm-srv/manager/log / 8850, 9108
TEST / DEV / Batch Channel / testmdmsrv_batch-channel_1 / Batch file processor, S3 poller / /app/mdmgw/test-mdm-srv/batch_channel/log / 9111
TEST / DEV / Publisher / testmdmhubsrv_event-publisher_1 / Event publisher / /app/mdmhub/test-mdm-srv/event_publisher/log / 9110
TEST / DEV / Subscriber / testmdmhubsrv_reltio-subscriber_1 / SQS Reltio event subscriber / /app/mdmhub/test-mdm-srv/reltio_subscriber/log / 9109

Back-End 

Host / Component / Docker name / Description / Logs / Open Ports
DEV / FluentD / fluentd / EFK - FluentD / /app/efk/fluentd/log / 24225
DEV / Kibana / kibana / EFK - Kibana / docker logs kibana / 5601
DEV / Elasticsearch / elasticsearch / EFK - Elasticsearch / /app/efk/elasticsearch/logs / 9200
DEV / Prometheus / prometheus / Prometheus Federation slave server / docker logs prometheus / 9119
DEV / Mongo / mongo_mongo_1 / Mongo / docker logs mongo_mongo_1 / 27017
DEV / Mongo Exporter / mongo_exporter / Mongo → Prometheus exporter / /app/mongo_exporter/logs / 9120
DEV / Monstache Connector / monstache-connector / Mongo → Elasticsearch exporter / - / 8095
DEV / Kafka / kafka_kafka_1 / Kafka / docker logs kafka_kafka_1 / 9093, 9094, 9101
DEV / Kafka Exporter / kafka_kafka_exporter_1 / Kafka → Prometheus exporter / docker logs kafka_kafka_exporter_1 / 9102
DEV / SQS Exporter / sqs-exporter-dev / SQS → Prometheus exporter / docker logs sqs-exporter-dev / 9122
DEV / Cadvisor / cadvisor / Docker → Prometheus exporter / docker logs cadvisor / 9103
DEV / Kong / kong_kong_1 / API Manager / /app/mdmgw/kong/kong_logs / 8000, 8443, 32774
DEV / Kong - DB / kong_kong-database_1 / Kong Cassandra database / docker logs kong_kong-database_1 / 9042
DEV / Zookeeper / kafka_zookeeper_1 / Zookeeper / docker logs kafka_zookeeper_1 / 2181
DEV / Node Exporter / (non-docker) node_exporter / Prometheus node exporter / systemctl status node_exporter / 9100

Certificates

Resource / Certificate Location / Valid from / Valid to / Issued To
Kibana
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/efk/kibana/mdm-log-management-us-nonprod.COMPANY.com.cer
22.02.2019 / 07.05.2022 / mdm-log-management-us-nonprod.COMPANY.com
Kong - API
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/certs/mdm-ihub-us-nonprod.COMPANY.com.pem
18.07.2018 / 17.07.2021

CN = mdm-ihub-us-nonprod.COMPANY.com

O = COMPANY

Kafka - Server Truststore
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/ssl/server.truststore.jks
10.07.2020 / 01.09.2026

O = Default Company Ltd

ST = Some-State

C = AU

Kafka - Server KeyStore
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/ssl/server.keystore.jks
10.07.2020 / 06.07.2022

CN = KafkaFlex

OU = Unknown

O = Unknown

L = Unknown

ST = Unknown

C = Unknown

Elasticsearch
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/efk/esnode1/mdm-esnode1-us-nonprod.COMPANY.com.cer

22.02.2019 / 21.02.2022

mdm-esnode1-us-nonprod.COMPANY.com

Unix groups

Resource Name / Type / Description / Support
user / Computer Role

Login: mdmihnpr
Name: SRVGBL-Pf6687993
Uid: 27634358
Gid: 20796763 <mdmihub>


user / Unix Role Group

Role: ADMIN_ROLE


ports / Security group / SG Name: PFE-SG-IHUB-APP-DEV-001

http://btondemand.COMPANY.com

Submit ticket to GBL-BTI-IOD AWS FULL SUPPORT

Internal Clients

Name / Gateway User Name / Authentication / Ping Federate User / Roles / Countries / Sources / Topic
FLEX US user
flex_nprod
External OAuth2
Flex-MDM_client
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "SCAN_ENTITIES"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "SAP"
dev-out-full-flex-all
test-out-full-flex-all
test2-out-full-flex-all
test3-out-full-flex-all
Internal HUB user
mdm_test_user
External OAuth2
Flex-MDM_client
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "DELETE_CROSSWALK"
- "GET_RELATION"
- "SCAN_ENTITIES"
- "SCAN_RELATIONS"
- "LOOKUPS"
- "ENTITY_ATTRIBUTES_UPDATE"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"
- "SAP"
- "HIN"
- "DEA"

Integration Batch Update user
integration_batch_user
Key Auth
N/A
- "GET_ENTITIES"
- "ENTITY_ATTRIBUTES_UPDATE"
- "GENERATE_ID"
- "CREATE_HCO"
- "UPDATE_HCO"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"
dev-internal-integration-tests
FLEX Batch Channel user

flex_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
dev-internal-hco-create-flex
flex_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
test-internal-hco-create-flex
flex_batch_test2
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
test2-internal-hco-create-flex
flex_batch_test3
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
test3-internal-hco-create-flex
SAP Batch Channel user

sap_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
dev-internal-hco-create-sap
sap_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
test-internal-hco-create-sap
sap_batch_test2
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
test2-internal-hco-create-sap
sap_batch_test3
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
test3-internal-hco-create-sap
HIN Batch Channel user

hin_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
dev-internal-hco-create-hin
hin_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
test-internal-hco-create-hin
hin_batch_test2
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
test2-internal-hco-create-hin
hin_batch_test3
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
test3-internal-hco-create-hin
DEA Batch Channel user

dea_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
dev-internal-hco-create-dea
dea_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
test-internal-hco-create-dea
dea_batch_test2
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
test2-internal-hco-create-dea
dea_batch_test3
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
test3-internal-hco-create-dea
340B Batch Channel user
340b_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "340B"
dev-internal-hco-create-340b
340b_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "340B"
test-internal-hco-create-340b
" + }, + { + "title": "US DEV Services", + "pageID": "164469990", + "pageLink": "/display/GMDM/US+DEV+Services", + "content": "

HUB Endpoints

API & Kafka & S3

Resource NameEndpoint
Gateway API OAuth2 External - DEV
https://mdm-ihub-us-nonprod.COMPANY.com:8443/dev-ext
Ping Federate
https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV
https://mdm-ihub-us-nonprod.COMPANY.com:8443/dev
Kafka
amraelp00005781.COMPANY.com:9094
MDM HUB S3 
s3://mdmnprodamrasp22124/

Monitoring

Resource NameEndpoint
HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=us_dev&var-node=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=us_dev&var-topic=All&var-node=1&var-instance=amraelp00005781.COMPANY.com:9102
Host Statistics
https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=us_dev&var-node=amraelp00005781.COMPANY.com&var-port=9100
Docker monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=us_dev&var-node=1
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=us_dev&var-component=batch_channel&var-node=1&var-instance=amraelp00005781.COMPANY.com:9121
Kong
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=us_dev&var-instance=amraelp00005781.COMPANY.com:9120&var-node_instance=amraelp00005781.COMPANY.com&var-interval=$__auto_interval_interval

Logs

Resource NameEndpoint
Kibana
https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana (DEV prefixed dashboards)

MDM Systems

Reltio US DEV keHVup25rN7ij3Y

Resource NameEndpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/dev_keHVup25rN7ij3Y
Reltio
https://dev.reltio.com/ui/keHVup25rN7ij3Y
https://dev.reltio.com/reltio/api/keHVup25rN7ij3Y

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/aPYW1rxK6I1Op4y/

Internal Resources

Resource NameEndpoint
Mongo
mongodb://amraelp00005781.COMPANY.com:27107
Kafka
amraelp00005781.COMPANY.com:9094
Zookeeper
amraelp00005781.COMPANY.com:2181
Kibana
https://amraelp00005781.COMPANY.com:5601/app/kibana
Elasticsearch
https://amraelp00005781.COMPANY.com:9200
Hawtio
http://amraelp00005781.COMPANY.com:9999/hawtio/#/login
" + }, + { + "title": "US TEST (QA) Services", + "pageID": "164469988", + "pageLink": "/display/GMDM/US+TEST+%28QA%29+Services", + "content": "

HUB Endpoints

API & Kafka & S3

Resource NameEndpoint
Gateway API OAuth2 External - TEST
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test-ext
Gateway API OAuth2 External - TEST2
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test2-ext
Gateway API OAuth2 External - TEST3
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test3-ext
Gateway API KEY auth - TEST
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test
Gateway API KEY auth - TEST2
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test2
Gateway API KEY auth - TEST3
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test3
Ping Federate
https://devfederate.COMPANY.com/as/introspect.oauth2
Kafka
amraelp00005781.COMPANY.com:9094
MDM HUB S3 
s3://mdmnprodamrasp22124/

Logs

Resource NameEndpoint
Kibana
https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana (TEST prefixed dashboards)

MDM Systems

Reltio US TEST cnL0Gq086PrguOd

Resource NameEndpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/test_cnL0Gq086PrguOd 
Reltio
https://test.reltio.com/ui/cnL0Gq086PrguOd
https://test.reltio.com/reltio/api/cnL0Gq086PrguOd

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/FENBHNkytefh9dB/  

Reltio US TEST2 JKabsuFZzNb4K6k

Resource NameEndpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/test_JKabsuFZzNb4K6k
Reltio
https://test.reltio.com/ui/JKabsuFZzNb4K6k
https://test.reltio.com/reltio/api/JKabsuFZzNb4K6k

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/dhUp0Lm9NebmqB9/  

Reltio US TEST3 Yy7KqOqppDVzJpk

Resource NameEndpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/test_Yy7KqOqppDVzJpk
Reltio
https://test.reltio.com/ui/Yy7KqOqppDVzJpk
https://test.reltio.com/reltio/api/Yy7KqOqppDVzJpk

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/Q4rz1LUZ9WnpVoJ/  

Internal Resources

Resource NameEndpoint
Mongo
mongodb://amraelp00005781.COMPANY.com:27107
Kafka
amraelp00005781.COMPANY.com:9094
Zookeeper
amraelp00005781.COMPANY.com:2181
Kibana
https://amraelp00005781.COMPANY.com:5601/app/kibana
Elasticsearch
https://amraelp00005781.COMPANY.com:9200
Hawtio
http://amraelp00005781.COMPANY.com:9999/hawtio/#/login
" + }, + { + "title": "US PROD Cluster", + "pageID": "164470064", + "pageLink": "/display/GMDM/US+PROD+Cluster", + "content": "

Physical Architecture

\"\"


Hosts

IDIPHostnameDocker UserResource TypeSpecificationAWS RegionFilesystem
PROD1
●●●●●●●●●●●●●●
amraelp00006207.COMPANY.com
mdmihpr 
EC2r4.xlarge us-east-1e

500 GB - /app

15 GB - /var/lib/docker

PROD2
●●●●●●●●●●●●●●
amraelp00006208.COMPANY.com
mdmihpr
EC2r4.xlarge us-east-1e

500 GB - /app

15 GB - /var/lib/docker

PROD3
●●●●●●●●●●●●
amraelp00006209.COMPANY.com
mdmihpr
EC2r4.xlarge us-east-1e

500 GB - /app

15 GB - /var/lib/docker

Components & Logs

HostComponentDocker nameDescriptionLogsOpen Ports
PROD1, PROD2, PROD3Managermdmgw_mdm-manager_1Gateway API/app/mdmgw/manager/log9104, 8851
PROD1Batch Channelmdmgw_batch-channel_1Batch file processor, S3 poller/app/mdmgw/batch_channel/log9107
PROD1, PROD2, PROD3Publishermdmhub_event-publisher_1Event publisher/app/mdmhub/event_publisher/log9106
PROD1, PROD2, PROD3Subscribermdmhub_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/reltio_subscriber/log9105

Back-End

HostComponentDocker nameDescriptionLogsOpen Ports
PROD1, PROD2, PROD3ElasticsearchelasticsearchEFK - Elasticsearch/app/efk/elasticsearch/logs9200
PROD1, PROD2, PROD3FluentDfluentdEFK - FluentD/app/efk/fluentd/log
PROD3KibanakibanaEFK - Kibanadocker logs kibana5601
PROD3PrometheusprometheusPrometheus Federation slave serverdocker logs prometheus9109
PROD1, PROD2, PROD3Mongomongo_mongo_1Mongodocker logs mongo_mongo_127017
PROD3Monstache Connectormonstache-connectorMongo → Elasticsearch exporter

PROD1, PROD2, PROD3

Kafkakafka_kafka_1Kafkadocker logs kafka_kafka_19101, 9093, 9094
PROD1, PROD2, PROD3Kafka Exporterkafka_kafka_exporter_1Kafka → Prometheus exporterdocker logs kafka_kafka_exporter_19102
PROD1, PROD2, PROD3CadvisorcadvisorDocker → Prometheus exporterdocker logs cadvisor9103
PROD3SQS Exportersqs-exporterSQS → Prometheus exporterdocker logs sqs-exporter9108
PROD1, PROD2, PROD3Kongkong_kong_1API Manager/app/mdmgw/kong/kong_logs8000, 8443, 32777
PROD1, PROD2, PROD3Kong - DBkong_kong-database_1Kong Cassandra databasedocker logs kong_kong-database_17000, 9042
PROD1, PROD2, PROD3Zookeeperkafka_zookeeper_1Zookeeperdocker logs kafka_zookeeper_12181, 2888, 3888
PROD1, PROD2, PROD3Node Exporter(non-docker) node_exporterPrometheus node exportersystemctl status node_exporter9100

Certificates

ResourceCertificate LocationValid fromValid toIssued To
Kibana
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/efk/kibana/mdm-log-management-us-trade-prod.COMPANY.com.cer
22.02.2019 - 21.02.2022
mdm-log-management-us-trade-prod.COMPANY.com
Kong - API
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.pem
04.01.2022 - 04.01.2024

CN = mdm-ihub-us-trade-prod.COMPANY.com

O = COMPANY

Kafka - Client Truststore
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/client.truststore.jks
01.09.2016 - 01.09.2026
COMPANY Root CA G2
Kafka - Server Truststore
PROD1 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server1.keystore.jks
PROD2 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server2.keystore.jks
PROD3 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server3.keystore.jks
04.01.2022 - 04.01.2024

CN = mdm-ihub-us-trade-prod.COMPANY.com

O = COMPANY

Elasticsearch

esnode1 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode1

esnode2 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode2

esnode3 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode3

22.02.2019 - 21.02.2022

mdm-esnode1-us-trade-prod.COMPANY.com

mdm-esnode2-us-trade-prod.COMPANY.com

mdm-esnode3-us-trade-prod.COMPANY.com

Unix groups

Resource NameTypeDescriptionSupport
ELBLoad Balancer

Reference LB Name: PFE-CLB-JIRA-HARMONY-PROD-001
CLB name: PFE-CLB-MDM-HUB-TRADE-PROD-001
DNS name: internal-PFE-CLB-MDM-HUB-TRADE-PROD-001-1966081961.us-east-1.elb.amazonaws.com


userComputer Role

Computer Role: UNIX-UNIVERSAL-AWSCBSDEV-MDMIHPR-COMPUTERS-U 

Login: mdmihpr
Name: SRVGBL-mdmihpr
UID: 25084803
GID: 20796763 <mdmihub>


userUnix Role Group

Unix-mdmihubProd-U

Role: ADMIN_ROLE


portsSecurity groupSG Name: PFE-SG-IHUB-APP-PROD-001

http://btondemand.COMPANY.com

Submit ticket to GBL-BTI-IOD AWS FULL SUPPORT

S3S3 Bucket

mdmprodamrasp42095 (us-east-1)

Username: SRVC-MDMIHPR
Console login: https://bti-aws-prod-hosting.signin.aws.amazon.com/console


Internal Clients

NameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopic
Internal MDM Hub user
publishing_hub
Key Auth
N/A
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "DELETE_CROSSWALK"
- "GET_RELATION"
- "SCAN_ENTITIES"
- "SCAN_RELATIONS"
- "LOOKUPS"
- "ENTITY_ATTRIBUTES_UPDATE"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"
prod-internal-reltio-events
Internal MDM Test user
mdm_test_user
External OAuth2
MDM_client
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "DELETE_CROSSWALK"
- "GET_RELATION"
- "SCAN_ENTITIES"
- "SCAN_RELATIONS"
- "LOOKUPS"
- "ENTITY_ATTRIBUTES_UPDATE"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"
- "SAP"
- "HIN"
- "DEA"

Integration Batch Update user
integration_batch_user
Key Auth
N/A
- "GET_ENTITIES"
- "ENTITY_ATTRIBUTES_UPDATE"
- "GENERATE_ID"
- "CREATE_HCO"
- "UPDATE_HCO"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"

FLEX US user
flex_prod
External OAuth2
Flex-MDM_client
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "SCAN_ENTITIES"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
prod-out-full-flex-all
FLEX Batch Channel user
flex_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
prod-internal-hco-create-flex
SAP Batch Channel user
sap_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
prod-internal-hco-create-sap
HIN Batch Channel user
hin_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
prod-internal-hco-create-hin
DEA Batch Channel user
dea_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
prod-internal-hco-create-dea
340B Batch Channel user
340b_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "340B"
prod-internal-hco-create-340b
" + }, + { + "title": "US PROD Services", + "pageID": "164469976", + "pageLink": "/display/GMDM/US+PROD+Services", + "content": "

HUB Endpoints

API & Kafka & S3

Resource NameEndpoint
Gateway API OAuth2 External - PROD
https://mdm-ihub-us-trade-prod.COMPANY.com/gw-api-oauth-ext
Gateway API OAuth2 - PROD
https://mdm-ihub-us-trade-prod.COMPANY.com/gw-api-oauth
Gateway API KEY auth - PROD
https://mdm-ihub-us-trade-prod.COMPANY.com/gw-api
Ping Federate
https://prodfederate.COMPANY.com/as/introspect.oauth2
Kafka
amraelp00006207.COMPANY.com:9094
amraelp00006208.COMPANY.com:9094
amraelp00006209.COMPANY.com:9094
MDM HUB S3 
s3://mdmprodamrasp42095/
- FLEX: PROD/inbound/FLEX
- SAP: PROD/inbound/SAP
- HIN: PROD/inbound/HIN
- DEA: PROD/inbound/DEA
- 340B: PROD/inbound/340B

Monitoring

Resource NameEndpoint
HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=us_prod&var-node=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=us_prod&var-topic=All&var-node=1&var-instance=amraelp00006207.COMPANY.com:9102
Host Statistics
https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=us_prod&var-node=amraelp00006207.COMPANY.com&var-port=9100
Docker monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=us_prod&var-node=1
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=us_prod&var-component=batch_channel&var-node=1&var-instance=amraelp00006207.COMPANY.com:9107
Kong
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=us_prod&var-instance=amraelp00006209.COMPANY.com:9110&var-node_instance=amraelp00006209.COMPANY.com&var-interval=$__auto_interval_interval

Logs

Resource NameEndpoint
Kibana
https://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibana

MDM Systems

Reltio US PROD VUUWV21sflYijwa

Resource NameEndpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/361_VUUWV21sflYijwa
Reltio
https://361.reltio.com/ui/VUUWV21sflYijwa/
https://361.reltio.com/reltio/api/VUUWV21sflYijwa 

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/f6dQoR9tfCpFCtm/

Internal Resources

Resource NameEndpoint
Mongo
mongodb://amraelp00006207.COMPANY.com:27017,amraelp00006208.COMPANY.com:27017,amraelp00006209.COMPANY.com:28018
Kafka
amraelp00006207.COMPANY.com:9094
amraelp00006208.COMPANY.com:9094
amraelp00006209.COMPANY.com:9094
Zookeeper
amraelp00006207.COMPANY.com:2181
amraelp00006208.COMPANY.com:2181
amraelp00006209.COMPANY.com:2181
Kibana
https://amraelp00006209.COMPANY.com:5601/app/kibana
Elasticsearch
https://amraelp00006207.COMPANY.com:9200
https://amraelp00006208.COMPANY.com:9200
https://amraelp00006209.COMPANY.com:9200
Hawtio
http://amraelp00006207.COMPANY.com:9999/hawtio/#/login
http://amraelp00006208.COMPANY.com:9999/hawtio/#/login
http://amraelp00006209.COMPANY.com:9999/hawtio/#/login
" + }, + { + "title": "Components", + "pageID": "164469881", + "pageLink": "/display/GMDM/Components", + "content": "" + }, + { + "title": "Apache Airflow", + "pageID": "164469951", + "pageLink": "/display/GMDM/Apache+Airflow", + "content": "

Description

Airflow is a platform created by Apache, designed to schedule workflows called DAGs.

Airflow docs:

https://airflow.apache.org/docs/apache-airflow/stable/index.html

We run Airflow on Kubernetes, deployed with Helm using the official Airflow Helm chart: https://airflow.apache.org/docs/helm-chart/stable/index.html

In this architecture Airflow consists of 3 main components:

Interfaces

Flows

Flows are configured in the mdm-hub-cluster-env repository, in ansible/inventory/${environment}/group_vars/gw-airflow-services/${dag_name}.yaml files.

The flows in use are described in the dags list.
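One of Airflow's interfaces is its stable REST API, which can trigger a configured flow on demand. A minimal sketch, assuming Airflow 2.x; the base URL and DAG id below are placeholders, not values from this environment, and the request is only built, not sent:

```python
import json
from urllib.request import Request

# Placeholder Airflow host; real endpoints are environment-specific.
AIRFLOW_BASE = "http://airflow.example.internal/api/v1"

def build_trigger_request(dag_id: str, conf: dict) -> Request:
    """Build (but do not send) a POST that triggers one DAG run
    via Airflow's stable REST API: POST /dags/{dag_id}/dagRuns."""
    body = json.dumps({"conf": conf}).encode("utf-8")
    return Request(
        url=f"{AIRFLOW_BASE}/dags/{dag_id}/dagRuns",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_trigger_request("example_flow", {"run_date": "2024-01-01"})
```

Sending the request additionally requires authentication configured on the Airflow webserver (e.g. basic auth), which is omitted here.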


" + }, + { + "title": "API Gateway", + "pageID": "164469910", + "pageLink": "/display/GMDM/API+Gateway", + "content": "

Description

Kong (API Gateway) is the component used as the gateway for all API requests in the MDM HUB. This component exposes only one URL to external clients, which means that all internal Docker containers are secured and cannot be accessed directly. This makes it possible to track all network traffic in one place. Kong is the router that redirects requests to specific services using configured routes. Kong contains multiple additional plugins; these plugins are connected to specific services and add additional security (Key-Auth, OAuth 2.0, OAuth2-External) or user management. Only Kong-authorized users are allowed to execute specific operations in the HUB.

Flows


Interface NameTypeEndpoint patternDescription
Admin APIREST APIGET http://localhost:8001/Internal and secured PORT available only in the docker container, used by Kong to manage existing services, routes, plugins, consumers and certificates
External APIREST APIGET https://localhost:8443/External and secured PORT exposed to the ELB and accessed by clients. 

Dependent components


ComponentInterfaceFlowDescription
Cassandra - kong_kong-database_1TCP internal docker communicationN/Akong configuration database
HUB MicroservicesREST internal docker communicationN/AThe route to all HUB microservices, required to expose API to external clients 

Configuration

Kong configuration is divided into 5 sections:

Config ParameterDefault valueDescription
- snowflake_api_user:
    create_or_update: False
    vars:
      username: snowflake_api_user
      plugins:
        - name: key-auth
          parameters:
            key: "{{ secret_kong_consumers.snowflake_api_user.key_auth.key }}"
N/A

Configuration for a user with key-auth authentication - used only for technical service users.

All External OAuth2 users are configured in the 4. Routes section
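A key-auth consumer authenticates by sending its key in the `apikey` header, which is Kong's default header name for the key-auth plugin. A minimal sketch; the gateway host, path, and key value are placeholders, and the request is only built, not sent:

```python
from urllib.request import Request

# Placeholder gateway URL; real routes are listed in the Services pages.
GATEWAY = "https://mdm-ihub-us-nonprod.COMPANY.com:8443"

def build_keyauth_request(path: str, api_key: str) -> Request:
    """Build (but do not send) a GET for a key-auth protected route.
    Kong's key-auth plugin reads the key from the "apikey" header by default."""
    return Request(
        url=f"{GATEWAY}{path}",
        headers={"apikey": api_key},
        method="GET",
    )

req = build_keyauth_request("/dev/entities", "placeholder-key")
```

With `hide_credentials: true` (as configured above), Kong strips this header before forwarding the request to the upstream service.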

Config ParameterDefault valueDescription
- gbl_mdm_hub_us_nprod:
    create_or_update: False
    vars:
      cert: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/gbl-mdm-hub-us-nprod.COMPANY.com.pem') }}"
      key: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/gbl-mdm-hub-us-nprod.key') }}"
      snis:
        - "gbl-mdm-hub-us-nprod.COMPANY.com"
        - "amraelp00007335.COMPANY.com"
        - "10.12.209.27"

N/A
Configuration of the SSL certificate in Kong.
Config ParameterDefault valueDescription
kong_services:
  - create_or_update: False
    vars:
      name: "{{ kong_env }}-manager-service"
      url: "http://{{ kong_env }}mdmsrv_mdm-manager_1:8081"
      connect_timeout: 120000
      write_timeout: 120000
      read_timeout: 120000
N/A

Kong Service - the main part of the configuration; it connects Kong internally with a Docker container.

Kong allows configuring multiple services with multiple routes and plugins.

Config ParameterDefault valueDescription
- create_or_update: False
  vars:
    name: "{{ kong_env }}-manager-ext-int-api-oauth-route"
    service: "{{ kong_env }}-manager-service"
    paths: [ "/{{ kong_env }}-ext" ]
    methods: [ "GET", "POST", "PATCH", "DELETE" ]
N/A

Exposes the route to the service. Clients using the ELB have to add this path to the API invocation to access the specified service. The "-ext" suffix defines an API that uses the External OAuth 2.0 plugin connected to PingFederate. The route also configures the methods that the user is allowed to invoke.

Config ParameterDefault valueDescription
- create_or_update: False
  vars:
    name: key-auth
    route: "{{ kong_env }}-manager-int-api-route"
    config:
      hide_credentials: true
N/A
The "key-auth" plugin type is used for internal or technical users that authenticate using a security key
- create_or_update: False
  vars:
    name: mdm-external-oauth
    route: "{{ kong_env }}-manager-ext-int-api-oauth-route"
    config:
      introspection_url: "https://devfederate.COMPANY.com/as/introspect.oauth2"
      authorization_value: "{{ devfederate.secret_oauth2_authorization_value }}"
      hide_credentials: true
      users_map:
        - "e2a6de9c38be44f4a3c1b53f50218cf7:engage"
N/A

The "mdm-external-oauth" plugin is a customized plugin used for all external clients that authenticate with tokens generated in PingFederate.

The configuration contains introspection_url - the Ping API for token verification.

The most important part of this configuration is the users_map.

The key is the PingFederate user, the value is the HUB user configured in the services.
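The users_map lookup can be illustrated with a small sketch; the entry below is the example value from the configuration above, and the helper name is hypothetical:

```python
# users_map entries are "<PingFederate client id>:<HUB user>" strings,
# as in the "mdm-external-oauth" plugin configuration above.
def parse_users_map(entries: list[str]) -> dict[str, str]:
    """Split each "key:value" entry into a PingFederate-id -> HUB-user map."""
    return dict(entry.split(":", 1) for entry in entries)

users_map = parse_users_map(["e2a6de9c38be44f4a3c1b53f50218cf7:engage"])
hub_user = users_map.get("e2a6de9c38be44f4a3c1b53f50218cf7")  # -> "engage"
```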

" + }, + { + "title": "API Router", + "pageID": "196877505", + "pageLink": "/display/GMDM/API+Router", + "content": "

Description

The API router component is responsible for routing requests to regional MDM Hub services. The application exposes a REST API that can call MDM Hub services from different regions simultaneously. The component provides a centralized authorization and authentication service and a transaction-log feature. The API router uses the http4k library, a lightweight HTTP toolkit written in Kotlin that enables serving and consuming HTTP services in a functional and consistent way.



Request flow

\"\"

Component

Description

Authentication service

authenticates the user by the x-consumer-username header

Request enricher

detects request sources, countries and role

Authorization service

authorizes user permissions to role, countries and sources

Service caller

calls MDM Hub services, retrying up to 3 times in case of an exception. Requests are routed to the appropriate MDM services based on the countries parameter; if a request contains countries from multiple regions, different regional services are called, and if it contains no countries, the default user or application country is set

Service response transformer and filter

transforms and/or filters service responses (e.g. data anonymization) depending on the defined request and/or response filter parameters (e.g. header, HTTP method, path)

Response composer

composes the responses from the services; if multiple services responded, the responses are concatenated
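The routing and composition steps can be sketched as follows: countries are grouped by the regional zone that serves them, each zone is called once, and the per-zone responses are concatenated. The zone names and the country-to-region mapping below are illustrative only; the real mapping comes from the router's zones configuration:

```python
from collections import defaultdict

# Illustrative country -> regional zone mapping (not the real configuration).
COUNTRY_ZONE = {"US": "amer", "CA": "amer", "DE": "emea", "FR": "emea"}
DEFAULT_COUNTRY = "US"

def route_countries(countries: list[str]) -> dict[str, list[str]]:
    """Group requested countries by the regional zone that serves them.
    An empty list falls back to the default application country."""
    if not countries:
        countries = [DEFAULT_COUNTRY]
    zones = defaultdict(list)
    for c in countries:
        zones[COUNTRY_ZONE[c]].append(c)
    return dict(zones)

def compose(responses: list[list[dict]]) -> list[dict]:
    """Concatenate per-zone response bodies into one response."""
    return [item for r in responses for item in r]

plan = route_countries(["US", "DE"])  # two regions -> two zone calls
```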


Request enrichment



Parameter
Methodsourcescountriesrole

create hco

request body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_HCO
update hcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_HCO
batch create hcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_HCO
batch update hcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_HCO
create hcprequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_HCP
update hcprequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_HCP
batch create hcprequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_HCP
batch update hcprequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_HCP
create mcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_MCO
update mcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_MCO
batch create mcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_MCO
batch update mcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_MCO
create entityrequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_ENTITY
update entityrequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_ENTITY
get entities by urissources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITIES
get entity by urisources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITIES
delete entity by crosswalktype query param, required at least onerequest param Country attribute, 0 or more allowedDELETE_CROSSWALK
get entity matchessources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITY_MATCHES
create relationrequest body crosswalk attributes, required at least onerequest param Country attribute, 0 or more allowedCREATE_RELATION
batch create relationrequest body crosswalk attributes, required at least onerequest param Country attribute, 0 or more allowedCREATE_RELATION
get relation by urisources not allowedrequest param Country attribute, 0 or more allowedGET_RELATION
delete relation by crosswalktype query param, required at least onerequest param Country attribute, 0 or more allowedDELETE_CROSSWALK
get lookupssources not allowedrequest param Country attribute, 0 or more allowedLOOKUPS


Configuration

Config parameterDescription
defaultCountrydefault application instance country
usersusers configuration listed below
zoneszones configuration listed below
responseTransformresponse transformation definitions explained below

User configuration


Config parameterDescription
nameuser name
descriptionuser description
rolesallowed user roles
countriesallowed user countries
sourcesallowed user sources
defaultCountryuser default country


Zone configuration

Config parameterDescription
urlmdm service url
userNamemdm service user name
logMessagesflag indicates that mdm service messages should be logged
timeoutMsmdm service request timeout

Response transformation configuration

Config parameterDescription
filtersrequest and response filter configuration
mapresponse body JSLT transformation definitions

Filters configuration

Config parameterDescription
requestrequest filter configuration
responseresponse filter configuration

Request filter configuration

Config parameterDescription
methodHTTP method
pathAPI REST call path
headerslist of HTTP headers with name and value parameters

Response filter configuration

Config parameterDescription
bodyresponse body JSLT transformation definition

Example configuration of response transformation

API router configuration
responseTransform:
  - filters:
      request:
        method: GET
        path: /entities.*
        headers:
          - name: X-Consumer-Username
            value: mdm_test_user
      response:
        body:
          jstl.content: |
            contains(true, [for (.crosswalks) .type == "configuration/sources/HUB_CALLBACK"])
    map:
      - jstl.content: |
          .crosswalks
      - jstl.content: |
          .

" + }, + { + "title": "Batch Service", + "pageID": "164469936", + "pageLink": "/display/GMDM/Batch+Service", + "content": "

Description

The batch-service component is responsible for managing batch loads to MDM systems. It exposes the REST API that clients use to create a new batch instance and upload data. The component is responsible for managing batch instances and stages, processing the data, and gathering acknowledgement responses from the Manager component. Batch service stores data in two collections: batchInstance, which stores all batch instances and the statistics gathered during a load, and batchEntityProcessStatus, which stores metadata about all objects loaded through all batches. These two collections are required to manage and process the data, run the checksum deduplication process, mark entities as processed after an ACK from Reltio, and soft-delete entities in the case of full-file loads.

The component performs operations asynchronously, using Kafka topics as the stages for each part of the load.

Flows

Exposed interfaces

Batch Controller - manage batch instances

Interface NameTypeEndpoint patternDescription
Create a new instance for the specific batchREST APIPOST /batchController/{batchName}/instancesCreates a new instance of the specific batch. Returns a Batch object with a generated ID that has to be used in all the requests below. Based on the ID, the client is able to check the status or load data using this instance. It is not possible to start a new batch instance while the previous one has not completed.
Get batch instance detailsREST APIGET /batchController/{batchName}/instances/{batchInstanceId}Returns current details about the specific batch instance. Returns an object with all stages, statuses, and statistics.
Initialize the stage or complete the stage and save statistics in the cache. REST API

POST /batchController/{batchName}/instances/{batchInstanceId}/stages/{stageName}

Creates or updates the specific stage in the batch. Using this operation clients are able to do two things.

1. Initialize and start the stage before loading the data. In that case, the request body should be empty.

2. Update and complete the stage after loading the data. In that case, the body should contain the stage name and statistics.

Clients have permission to update only "Loading" stages. The next stages are managed by the internal batch-service processes.

Initialize multiple stages or complete the stages and save statistics in the cache. REST APIPOST /batchController/{batchName}/instances/{batchInstanceId}/stagesThis operation is similar to the single-stage management operation. It allows managing multiple stages in one request.
Remove the specific batch instance from the cache.REST APIDELETE /batchController/{batchName}/instances/{batchInstanceId}Additional service operation used to delete batch instances from the cache. The permission for this operation is not exposed to external clients; this operation is used only by the HUB support team.
Clear cache (clear objects from the batchEntityProcessStatus collection, which stores object metadata and is used in the deduplication logic)REST API

GET /batchController/{batchName}/_clearCache

headers:
  objectType: ENTITY/RELATION
  entityType: e.g. configuration/entityTypes/HCP

Additional service operation used to clear the cache for a specific batch. The user can provide additional parameters to the API to specify what type of objects should be removed from the cache. The operation is used by clients after executing smoke tests on PROD and during testing on DEV environments. It allows clearing the cache after a load to avoid data deduplication during subsequent loads.
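A typical batch load is a fixed sequence of REST calls against these endpoints: create an instance, initialize the "Loading" stage, push bulk data, complete the stage, and poll the instance status. A sketch of the URL construction only; the base URL, batch name, and instance id are placeholders, and no requests are sent:

```python
# Placeholders; real gateway URLs are listed in the Services pages.
BASE = "https://mdm-ihub-us-nonprod.COMPANY.com:8443/dev"
BATCH = "example_batch"

def batch_urls(instance_id: str, stage: str) -> dict[str, str]:
    """Build the endpoint URLs for one batch load, following the
    Batch Controller / Bulk Service endpoint patterns on this page."""
    root = f"{BASE}/batchController/{BATCH}/instances"
    return {
        # 1. create a new batch instance (POST, empty body)
        "create_instance": root,
        # 2. initialize the "Loading" stage (POST, empty body)
        "init_stage": f"{root}/{instance_id}/stages/{stage}",
        # 3. load a bulk of entities into the instance (POST)
        "load_entities": f"{BASE}/bulkService/{BATCH}/instances/{instance_id}/stages/{stage}/entities",
        # 4. complete the stage (POST, body with stage name and statistics)
        "complete_stage": f"{root}/{instance_id}/stages/{stage}",
        # 5. poll the instance status (GET)
        "status": f"{root}/{instance_id}",
    }

urls = batch_urls("123", "Loading")
```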

Bulk Service - load data using previously created batch instances

Interface NameTypeEndpoint patternDescription
Load multiple entities using create operationREST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entitiesThe operation should be used once the user created a new batch instance and initialized the "Loading" stage. At that moment client is able to load entities to the MDM system. The operation accepts the bulk of entities and loads the data to Kafka topic. Using POST operation the standard creates operation is used.
Load multiple entities using the partial override operationREST APIPATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entitiesThis operation is similar to the above. The PATCH operation forces the use of the partialOverride operation. 
Load multiple relations using create operationREST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/relationsThis operation is similar to the above; with POST, the standard create operation is used. Using the /relations suffix in the URI, clients are able to create relation objects in MDM.
Load multiple Tags using PATCH operation - append operationREST APIPATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tagsThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load tags to the MDM system. The operation accepts a bulk of tags and loads the data to a Kafka topic. With PATCH, the standard append operation is used, so all tags in the input array are added to the specified profile in MDM.
Load multiple Tags using delete operation - removal operationREST APIDELETE /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tagsThis operation is similar to the above. The DELETE operation removes selected TAGS from the MDM system.
Load multiple merge requests using POST operation; this will result in a merge between two entities.REST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_mergeThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load merge requests to the MDM system; this results in a merge operation between the two entities specified in the request. The operation accepts a bulk of merge requests and loads the data to a Kafka topic. 
Load multiple unmerge requests using POST operation; this will result in an unmerge between two entities.REST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_unmergeThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load unmerge requests to the MDM system; this results in an unmerge operation between the two entities specified in the request. The operation accepts a bulk of unmerge requests and loads the data to a Kafka topic. 
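All bulk-load endpoints above share one URI pattern; a minimal sketch of building it (the base URL and instance id are placeholder assumptions, the path segments follow the documented pattern):

```python
def bulk_endpoint(base_url: str, batch: str, instance_id: str,
                  stage: str, resource: str) -> str:
    """Build /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/{resource}."""
    return (f"{base_url}/bulkService/{batch}/instances/{instance_id}"
            f"/stages/{stage}/{resource}")

# One call per documented resource suffix:
entities_url = bulk_endpoint("https://hub.example.com", "ONEKEY", "42",
                             "HCOLoading", "entities")
merge_url = bulk_endpoint("https://hub.example.com", "ONEKEY", "42",
                          "HCOLoading", "entities/_merge")
```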

Dependent components


ComponentInterfaceFlowDescription
ManagerAsyncMDMManagementServiceRouteEntitiesCreateProcesses bulk objects with entities and creates the HCP/HCO/MCO in MDM. Returns an asynchronous ACK response
EntitiesUpdateProcesses entities and updates the HCP/HCO/MCO in MDM using the partialOverride property. Returns an asynchronous ACK response
RelationsCreateProcesses bulk objects with relations and creates the relation objects in MDM. Returns an asynchronous ACK response
Hub StoreMongo connectionN/AStores cache data in a Mongo collection

Configuration

Batch Workflows configuration, main config for all Batches and Stages

Config ParameterDescription
batchWorkflows:
- batchName: "ONEKEY"
batchDescription: "ONEKEY - HCO and HCP entities and relations loading"
stages:
- stageName: "HCOLoading"

The main part of the batches configuration. Each batch has to contain:

batchName - the name of the specific batch, used in the API request.

batchDescription - additional description for the specific batch.

stages - the list of dependent stages arranged in the execution sequence.

This configuration presents the workflow for the specific batch. The administrator can set up these stages in whatever order the batch and client requirements demand. 

The main assumptions:

  1. The "Loading" stage is always the first one.
  2. The "Sending" Stage is dependent on the "Loading" stage
  3. The "Processing" Stage is dependent on the "Sending" stage.

There is the possibility to add 2 additional optional stages:

  1. "EntitiesUnseenDeletion" - used only once the full file is loaded and the soft-delete process is required
  2. "HCODeletesProcessing" - process soft-deleted objects to check if all ACKs were received. 

Available jobs:

  1. SendingJob
  2. ProcessingJob
  3. DeletingJob
  4. DeletingRelationJob

It is possible to set up different stage names, but the assumption is to reuse the existing names to keep consistency.

Stages can depend on each other in two ways:

  1. softDependentStages - allows the next stage to start immediately after the stage it depends on has started. Used in the Sending stages to immediately send data to the Manager.
  2. dependentStages - hard-dependent stages; this blocks the stage from starting until the previous one has ended.  
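The two dependency kinds can be sketched in a few lines, assuming a simple stage status model with STARTED/ENDED states (the state names are assumptions for illustration):

```python
# Minimal sketch of soft vs hard stage dependencies.

def can_start(stage: dict, statuses: dict) -> bool:
    """A stage may start when every hard dependency has ENDED and every
    soft dependency has at least STARTED."""
    hard_ok = all(statuses.get(s) == "ENDED"
                  for s in stage.get("dependentStages", []))
    soft_ok = all(statuses.get(s) in ("STARTED", "ENDED")
                  for s in stage.get("softDependentStages", []))
    return hard_ok and soft_ok

sending = {"stageName": "HCOSending", "softDependentStages": ["HCOLoading"]}
processing = {"stageName": "HCOProcessing", "dependentStages": ["HCOSending"]}
```

With this model, HCOSending may start as soon as HCOLoading has started, while HCOProcessing must wait for HCOSending to end.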
- stageName: "HCOSending"
softDependentStages: ["HCOLoading"]
processingJobName: "SendingJob"
Example configuration of a Sending stage dependent on the Loading stage. In this stage, data is taken from the stage Kafka topics and published to the Manager component for further processing
- stageName: "HCOProcessing"
dependentStages: ["HCOSending"]
processingJobName: "ProcessingJob"
Example configuration of the Processing stage. This stage starts once the Sending JOB is completed. It uses the batchEntityProcessStatus collection to check if all ACK responses were received from MDM. 
- stageName: "RelationLoading"
- stageName: "RelationSending"
dependentStages: [ "HCOProcessing"]
softDependentStages: ["RelationLoading"]
processingJobName: "SendingJob"
- stageName: "RelationProcessing"
dependentStages: [ "RelationSending" ]
processingJobName: "ProcessingJob"
The full example configuration for the Relation loading, sending, and processing stages.
- stageName: "EntitiesUnseenDeletion"
dependentStages: ["RelationProcessing"]
processingJobName: "DeletingJob"
- stageName: "HCODeletesProcessing"
dependentStages: ["EntitiesUnseenDeletion"]
processingJobName: "ProcessingJob"
Configuration for entities. An example configuration used for full-file loads. It is triggered at the end of the workflow and determines which data should be removed. 
- stageName: "RelationsUnseenDeletion"
dependentStages: ["HCODeletesProcessing"]
processingJobName: "DeletingRelationJob"
- stageName: "RelationDeletesProcessing"
dependentStages: ["RelationsUnseenDeletion"]
processingJobName: "ProcessingJob"
Configuration for relations. An example configuration used for full-file loads. It is triggered at the end of the workflow and determines which data should be removed. 

Loading stage configuration for Entities and Relations BULK load through API request

Config ParameterDescription
bulkConfiguration:
destinations:
    "ONEKEY":
HCPLoading:
bulkLimit: 25
destination:
topic: "{{ env_local_name }}-internal-batch-onekey-hcp"

The configuration contains the following:

destinations - list of batches and the Kafka topics to which data should be loaded from the REST API.

"ONEKEY" - batch name

HCPLoading - specific configuration for loading stage

bulkLimit - limit of entities/relations in one API call

destination.topic - target topic name
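Since bulkLimit caps the number of entities/relations per API call (25 for HCPLoading above), a client has to split larger payloads into multiple requests. A minimal client-side sketch:

```python
# Client-side sketch: splitting a payload so that each request stays
# within the configured bulkLimit.

def chunk(entities: list, bulk_limit: int):
    """Yield successive slices no larger than bulk_limit."""
    for i in range(0, len(entities), bulk_limit):
        yield entities[i:i + bulk_limit]

request_sizes = [len(b) for b in chunk(list(range(60)), 25)]
# 60 entities with bulkLimit 25 -> three requests of 25, 25 and 10
```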

Sending stage configuration for Sending Entities and Relations to MDM Async API (Reltio)

Config ParameterDefault valueDescription
sendingJob:
numberOfRetriesOnError:
3Number of retries once an exception occurs while publishing Kafka events 
  pauseBetweenRetriesSecs: 
30Number of seconds to wait between retries
  idleTimeWhenProcessingEndsSec: 
60Number of seconds to wait for new events before completing the Sending job
  threadPoolSize:
2Number of threads used by the Kafka producer
    "ONEKEY":
HCPSending:
source:
topic: "{{ env_local_name }}-internal-batch-onekey-hcp"
bulkSending: false
bulkPacketSize: 10
reltioRequestTopic: "{{ env_local_name }}-internal-async-all-onekey"
reltioReponseTopic: "{{ env_local_name }}-internal-async-all-onekey-ack"

The specific configuration for Sending Stage

"ONEKEY" - batch name

HCPSending - specific configuration for sending stage

source.topic- source topic name from which data is consumed

bulkSending - by default false (bundling is implemented and managed in the Manager client; currently there is no need to bundle the events on the client side)

bulkPacketSize - optional; once bulkSending is true, batch-service bundles this many requests together. 

reltioRequestTopic - topic for processing requests in the Manager

reltioReponseTopic - topic for processing ACKs in batch-service

Processing stage config for checking processing entities status in MDM Async API (Reltio) - check ACK collector

Config ParameterDefault valueDescription
processingJob.pauseBetweenQueriesSecs:
60Interval at which the cache is checked to verify that all ACKs were received.

Entities/Relations UnseenDeletion Job config for Reltio Request Topic and Max Deletes Limit for entities soft Delete.

Config ParameterDefault valueDescription
deletingJob:
"Symphony":
"EntitiesUnseenDeletion":

The specific configuration for Deleting Stage

"Symphony" - batch name

EntitiesUnseenDeletion - specific configuration for the soft-delete stage

maxDeletesLimit100The limit is a safety switch in case a corrupted file (empty or partial) is received.
It prevents deleting all profiles in Reltio in such cases.
queryBatchSize10The number of entities/relations downloaded from Cache in one call
reltioRequestTopic: "{{ env_local_name }}-internal-async-all-symphony"
target topic - processing requests in manager
reltioResponseTopic: "{{ env_local_name }}-internal-async-all-symphony-ack"
ack topics - processing ACK in batch-service
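The maxDeletesLimit safety switch described above can be sketched as a simple guard; the function name and exception type are assumptions for illustration:

```python
# Sketch of the maxDeletesLimit safety switch: abort the soft-delete stage
# when the number of unseen profiles exceeds the limit, which is the
# expected symptom of an empty or partial input file.

def guard_deletes(unseen_count: int, max_deletes_limit: int = 100) -> int:
    if unseen_count > max_deletes_limit:
        raise RuntimeError(
            f"Refusing to soft-delete {unseen_count} profiles "
            f"(limit {max_deletes_limit}): input file may be corrupted")
    return unseen_count
```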

Users

Config ParameterDescription
- name: "mdmetl_nprod"
description: "MDMETL Informatica IICS User - BATCH loader"
defaultClient: "ReltioAll"
roles:
- "CREATE_HCP"
- "CREATE_HCO"
- "CREATE_MCO"
- "CREATE_BATCH"
- "GET_BATCH"
- "MANAGE_STAGE"
- "CLEAR_CACHE_BATCH"
countries:
- US
sources:
- "SHS"
...
batches:
"Symphony":
- "HCPLoading"

The example ETL user configuration. The configuration is divided into the following sections:


  1. roles - available roles to create specific objects and manage batch instances
  2. countries - list of countries the user is allowed to load
  3. sources - list of sources the user is allowed to load
  4. batches - list of batch names with corresponding stages. In general, external users are able to create/edit Loading stages only.
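An illustrative sketch of how these user sections could gate a load request; the field names mirror the configuration above, while the check itself is an assumption about how batch-service applies it:

```python
# Hypothetical authorization check combining roles, countries, sources
# and batch/stage permissions from the user configuration.

def is_allowed(user: dict, role: str, country: str,
               source: str, batch: str, stage: str) -> bool:
    return (role in user["roles"]
            and country in user["countries"]
            and source in user["sources"]
            and stage in user["batches"].get(batch, []))

etl_user = {
    "roles": ["CREATE_HCP", "CREATE_BATCH", "MANAGE_STAGE"],
    "countries": ["US"],
    "sources": ["SHS"],
    "batches": {"Symphony": ["HCPLoading"]},
}
```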

Connections

Config ParameterDescription
mongo.url: "mongodb://mdm_batch_service:{{ mongo.users.mdm_batch_service.password }}@{{ mongo.springURL }}/{{ mongo.dbName }}"
Full Mongo DB URL
mongo.dbName: "{{ mongo.dbName }}"Mongo database name
kafka.servers: "{{ kafka.servers }}"Kafka Hostname 
kafka.groupId: "batch_service_{{ env_local_name }}"Batch Service component group name
kafka.saslMechanism: "{{ kafka.saslMechanism }}"SASL configuration
kafka.securityProtocol: "{{ kafka.securityProtocol }}"Security Protocol
kafka.sslTruststoreLocation: /opt/mdm-gw-batch-service/config/kafka_truststore.jksSSL truststore file location
kafka.sslTruststorePassword: "{{ kafka.sslTruststorePassword }}"SSL truststore file password
kafka.username: batch_serviceKafka username
kafka.password: "{{ hub_broker_users.batch_service }}"Kafka dedicated user password
kafka.sslEndpointAlgorithm:SSL endpoint identification algorithm

Advanced Kafka configuration (do not edit if not required)

Config Parameter
spring:
kafka:
properties:
sasl:
mechanism: ${kafka.saslMechanism}
security:
protocol: ${kafka.securityProtocol}
ssl.endpoint.identification.algorithm:

consumer:
properties:
max.poll.interval.ms: 600000
bootstrap-servers:
- ${kafka.servers}
groupId: ${kafka.groupId}
auto-offset-reset: earliest
max-poll-records: 50
fetch-max-wait: 1s
fetch-min-size: 512000
enable-auto-commit: false
ssl:
trustStoreLocation: file:${kafka.sslTruststoreLocation}
trustStorePassword: ${kafka.sslTruststorePassword}

producer:
bootstrap-servers:
- ${kafka.servers}
groupId: ${kafka.groupId}
auto-offset-reset: earliest
ssl:
trustStoreLocation: file:${kafka.sslTruststoreLocation}
trustStorePassword: ${kafka.sslTruststorePassword}

streams:
bootstrap-servers:
- ${kafka.servers}
applicationId: ${kafka.groupId}_ack # the Kafka Streams application ID has to be different from the Kafka consumer group ID
clientId: batch_service_ID
stateDir: /tmp
# num-stream-threads: 1 - default 1
ssl:
trustStoreLocation: file:${kafka.sslTruststoreLocation}
trustStorePassword: ${kafka.sslTruststorePassword}

Additional config (do not edit if not required)

Config Parameter
server.port: 8083

management.endpoint.shutdown.enabled=false:
management.endpoints.web.exposure.include: prometheus, health, info
spring.main.allow-bean-definition-overriding: true
camel.springboot.main-run-controller: True
camel:
component:
metrics:
metric-registry=prometheusMeterRegistry:

server:
use-forward-headers: true
forward-headers-strategy: FRAMEWORK
springdoc:
swagger-ui:
disable-swagger-default-url: True

restService:
#service port - do not change if it runs in a docker container
port: 8082
schedulerTreadCount: 5


" + }, + { + "title": "Callback Delay Service", + "pageID": "322536130", + "pageLink": "/display/GMDM/Callback+Delay+Service", + "content": "

Description

The application consists of two streams - precallback and postcallback. When the precallback stream detects the need to change the ranking for a given relationship, it generates an event for the postcallback stream. The postcallback stream collects events in a time window for a given key and processes only the last one. This avoids updating the rankings multiple times when relations are loaded in batch.
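The windowing behaviour above can be shown in plain Python (the real implementation uses Kafka Streams windowed aggregation; the key/payload tuples here are an assumption for illustration):

```python
# Plain-Python analogue of the postcallback window: events for the same
# key are collected and only the last one is processed.
from collections import OrderedDict

def collapse_window(events):
    """Keep only the last event per key, most recently seen keys last."""
    latest = OrderedDict()
    for key, payload in events:
        latest[key] = payload      # later events overwrite earlier ones
        latest.move_to_end(key)
    return list(latest.items())

window = [("rel-1", "rank=2"), ("rel-2", "rank=5"), ("rel-1", "rank=3")]
collapsed = collapse_window(window)
```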

Responsible for the following transformations:

Applies transformations to the Kafka input stream, producing the Kafka output stream.


Flows

Exposed interfaces

PreCallbackDelay Stream -(rankings)

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA${env}-internal-reltio-full-delay-eventsEvents processed by the precallback service
output  - callbacksKAFKA${env}-internal-reltio-proc-events

Result events processed by the precallback delay service

output - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processing

Dependent components


ComponentInterfaceFlowDescription

Manager

AsyncMDMManagementServiceRouteRelationshipAttributesUpdateUpdate relationship attributes in asynchronous mode
Hub StoreMongo connectionN/AGet mongodb stored relation data when Kafka cache is empty.


Configuration

Main Configuration


Default valueDescription
kafka.groupId${env}-precallback-delay-serviceThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"
kafkaOther.num.stream.threads10Number of threads used in the Kafka Stream
kafkaOther.default.deserialization.exception.handler

com.COMPANY.mdm.common.streams.

StructuredLogAndContinueExceptionHandler

Deserialization exception handler
kafkaOther.max.poll.interval.ms3600000Maximum number of milliseconds to wait before the next poll of events
kafkaOther.max.request.size2097152Maximum event message size in bytes


CallbackWithDelay Stream -(rankings)

Config Parameter

Default value

Description

preCallbackDelay.eventInputTopic${env}-internal-reltio-full-delay-eventsinput topic
preCallbackDelay.eventDelayTopic${env}-internal-reltio-full-callback-delay-eventsdelay stream input topic; when the precallback stream detects the need to modify ranks for a given relationship group, it produces an event to this topic. Events for a given key are aggregated in a time window
preCallbackDelay.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for events
preCallbackDelay.internalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput topic for callbacks
preCallbackDelay.relationDataStore.storeName${env}-relation-data-storeRelation data cache store name
preCallbackDelay.rankCallback.featureActivationtrueParameter used to enable/disable the Rank feature
preCallbackDelay.rankCallback.callbackSourceHUB_CALLBACKCrosswalk used to update Reltio with Rank attributes
preCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.namewith-delay-raw-relation-checksum-dedupe-storestore name that holds rawRelation MD5 checksums - used in rank callback deduplication
preCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.retentionPeriod1hstore retention period
preCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.windowSize10mstore window size
preCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.nameattribute-changes-checksum-dedupe-storestore name that holds attribute-changes MD5 checksums - used in rank callback deduplication
preCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.retentionPeriod1hstore retention period
preCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.windowSize10mstore window size
preCallbackDelay.rankCallback.activeCallbacksOtherHCOtoHCOAffiliationsDelayCallbackList of Ranker to be activated
preCallbackDelay.rankTransform.featureActivationtrueParameter that defines whether the Rank feature should be activated.
preCallbackDelay.rankTransform.activationFilter.activeRankSorterOtherHCOtoHCOAffiliationsDelayRankSorterRank sorter names
preCallbackDelay.rankTransform.rankSortOrder.affiliationN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

 OtherHCOtoHCOAffiliations RankSorter

deduplication

Post callback stream deduplication config

deduplication.pingInterval1m

Post callback stream ping interval

deduplication.duration1h

Post callback stream window duration

deduplication.gracePeriod0s

Post callback stream deduplication grace period

deduplication.byteLimit122869944

Post callback stream deduplication byte limit

deduplication.suppressNamecallback-rank-delay-suppress

Post callback stream deduplication suppress name

deduplication.namecallback-rank-delay-suppress

Post callback stream deduplication name

deduplication.storeNamecallback-rank-delay-suppress-deduplication-store

Post callback stream deduplication store name

Rank sort order config:

The component allows you to set different sorting (ranking) configurations depending on the country of the relationship. Relations for selected countries are sorted based on the rankExecutionOrder configuration - in the order of the items on the list. The following sorters are available:

Sample rankSortOrder configuration:

rankSortOrder:
affiliation:
config:
- countries:
- AU
- NZ
rankExecutionOrder:
- type: ACTIVE
- type: ATTRIBUTE
attributeName: RelationType/RelationshipDescription
lookupCode: true
order:
REL.HIE: 1
REL.MAI: 2
REL.FPA: 3
REL.BNG: 4
REL.BUY: 5
REL.PHN: 6
REL.GPR: 7
REL.MBR: 8
REL.REM: 9
REL.GPSS: 10
REL.WPC: 11
REL.WPIC: 12
REL.DOU: 13
- type: SOURCE
order:
Reltio: 1
ONEKEY: 2
JPDWH: 3
SAP: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
GRV: 9
GCP: 10
SSE: 11
PCMS: 12
PTRS: 13
- type: LUD
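An illustrative evaluation of the rankExecutionOrder above: relations are compared by each criterion in turn - active flag, RelationshipDescription lookup order, source order, then last update (LUD). The relation field names, the LUD direction, and the truncated order maps are assumptions for illustration:

```python
# Sketch of a multi-criterion sort key mirroring rankExecutionOrder.

REL_TYPE_ORDER = {"REL.HIE": 1, "REL.MAI": 2, "REL.FPA": 3}   # truncated
SOURCE_ORDER = {"Reltio": 1, "ONEKEY": 2, "JPDWH": 3}         # truncated

def rank_key(rel: dict):
    return (
        0 if rel["active"] else 1,                    # type: ACTIVE
        REL_TYPE_ORDER.get(rel["relationType"], 99),  # type: ATTRIBUTE
        SOURCE_ORDER.get(rel["source"], 99),          # type: SOURCE
        -rel["lastUpdated"],                          # type: LUD (newest first)
    )

rels = [
    {"active": True, "relationType": "REL.MAI", "source": "ONEKEY", "lastUpdated": 10},
    {"active": True, "relationType": "REL.HIE", "source": "JPDWH", "lastUpdated": 5},
]
ranked = sorted(rels, key=rank_key)
```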

" + }, + { + "title": "Callback Service", + "pageID": "164469913", + "pageLink": "/display/GMDM/Callback+Service", + "content": "

Description

Responsible for the following transformations:

Applies transformations to the Kafka input stream, producing the Kafka output stream.


Flows

Exposed interfaces

PreCallback Stream -(rankings)

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-reltio-full-events
Events enriched by the EntityEnricher component. Full JSON data
output  - callbacksKAFKA
${env}-internal-reltio-proc-events

Events already processed by the precallback services (they contain updated ranks; the Reltio callback has also been processed)

output - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processing

HCO Names

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-callback-hconame-in
events being sent by the event publisher component. Event types being considered:  HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED
callback outputKAFKA
${env}-internal-hconames-rel-create

Relation Create requests sent to Manager component for asynchronous processing

Dangling Affiliations

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-callback-orphanClean-in
events being sent by the event publisher component. Event types being considered:  'HCP_REMOVED', 'HCO_REMOVED', 'MCO_REMOVED', 'HCP_INACTIVATED', 'HCO_INACTIVATED', 'MCO_INACTIVATED'
callback outputKAFKA
${env}-internal-async-all-orphanClean

Relation Update (soft-delete) requests sent to Manager component for asynchronous processing

Crosswalk Cleaner

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-callback-cleaner-in
events being sent by the event publisher component. Event types being considered: 'HCO_CHANGED', 'HCP_CHANGED', 'MCO_CHANGED', 'RELATIONSHIP_CHANGED'
callback outputKAFKA
${env}-internal-async-all-cleaner-callbacks

Delete Crosswalk or Soft-Delete requests sent to Manager component for asynchronous processing


NotMatch callback (clean potential match queue)

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-callback-potentialMatchCleaner-in
events being sent by the event publisher component. Event types being considered:  'RELATIONSHIP_CHANGED', 'RELATIONSHIP_CREATED'
callback outputKAFKA
${env}-internal-async-all-notmatch-callbacks

NotMatch requests sent to Manager component for asynchronous processing



Dependent components


ComponentInterfaceFlowDescription

Manager

MDMIntegrationService


GetEntitiesByUrisRetrieve multiple entities by providing the list of entities URIS
AsyncMDMManagementServiceRouteRelationshipUpdateUpdate relationship object in asynchronous mode
EntitiesUpdateUpdate entity object in asynchronous mode - set soft-delete
CrosswalkDeleteRemove Crosswalk from entity/relation in asynchronous mode
NotMatchSet Not a Match between two  entities
Hub StoreMongo connectionN/AStore cache data in mongo collection


Configuration

Main Configuration


Default valueDescription
kafka.groupId
${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"
kafkaOther.num.stream.threads
10Number of threads used in the Kafka Stream
kafkaOther.default.deserialization.exception.handler
com.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handler
kafkaOther.max.poll.interval.ms
3600000Maximum number of milliseconds to wait before the next poll of events
kafkaOther.max.request.size
2097152Maximum event message size in bytes
gateway.apiKey
${gateway.apiKey}API key used in the communication to Manager
gateway.logMessages
falseParameter used to turn on/off logging the payload
gateway.url
${gateway.url}Manager URL
gateway.userName
${gateway.userName}Manager user name

HCO Names

Config Parameter

Default value

Description

callback.hconames.eventInputTopic
${env}-internal-callback-hconame-ininput topic
callback.hconames.HCPCalculateStageTopic
${env}-internal-callback-hconame-hcp4calcinternal topic
callback.hconames.intAsyncHCONames
${env}-internal-hconames-rel-createoutput topic
callback.hconames.deduplicationWindowDuration
10The size of the window in milliseconds
callback.hconames.deduplicationWindowGracePeriod
10sThe grace period to admit out-of-order events to a window.
callback.hconames.dedupStoreName
hco-name-dedupe-storededuplication topic name
callback.hconames.acceptedEntityEventTypes
HCO_CREATED, HCO_CHANGEDaccepted events types for entity objects
callback.hconames.acceptedRelationEventTypes
RELATIONSHIP_CREATED, RELATIONSHIP_CHANGEDaccepted events types for relationship objects
callback.hconames.acceptedCountries

AI,AN,AG,AR,AW,BS,BB,BZ,

BM,BO,BR,CL,CO,CR,CW,

DO,EC,GT,GY,HN,JM,

KY,LC,MX,NI,PA,PY,

PE,PN,SV,SX,TT,UY,VG

list of countries accepted in further processing 
callback.hconames.impactedHcpTraverseRelationTypes

configuration/relationTypes/Activity, 

configuration/relationTypes/Managed, 

configuration/relationTypes/RLE.MAI

accepted relationship types to traverse for impacted HCP objects
callback.hconames.mainHCOTraverseRelationTypes

configuration/relationTypes/Activity, 

configuration/relationTypes/Managed, 

configuration/relationTypes/RLE.MAI

accepted relationship types to traverse for impacted main HCO objects
callback.hconames.mainHCOTypeCodes.default
HOSPthe Type code name for the Main HCO object
callback.hconames.mainHCOStructurTypeCodes

e.g.: 

AD:
- "WFR.TSR.JUR"
- "WFR.TSR.GRN"
- "WFR.TSR.ETA"

Contains a map where the:

KEY is the country 

Values are the TypeCodes for the corresponding country, 

callback.hconames.deduplicationeither callback.hconames.deduplication or callback.hconames.windowSessionDeduplication must be set
callback.hconames.deduplication.duration
duration size of time window
callback.hconames.deduplication.gracePeriod
grace period related to time window
callback.hconames.deduplication.byteLimit
byte limit of 
Suppressed.BufferConfig
callback.hconames.deduplication.suppressName

name of

Suppressed.BufferConfig

callback.hconames.deduplication.name
name of the Grouping step in deduplication
callback.hconames.deduplication.storageNamewhen switching from callback.hconames.deduplication to callback.hconames.windowSessionDeduplication storageName must be differentname of Materialized Session Store
callback.hconames.deduplication.pingInterval
interval in which ping messages are being generated
callback.hconames.windowSessionDeduplicationeither callback.hconames.deduplication or callback.hconames.windowSessionDeduplication must be set
callback.hconames.windowSessionDeduplication.duration
duration size of session window
callback.hconames.windowSessionDeduplication.byteLimit
byte limit of 
Suppressed.BufferConfig
callback.hconames.windowSessionDeduplication.suppressName

name of

Suppressed.BufferConfig

callback.hconames.windowSessionDeduplication.name
name of the Grouping step in deduplication
callback.hconames.windowSessionDeduplication.storageNamewhen switching from callback.hconames.deduplication to callback.hconames.windowSessionDeduplication storageName must be differentname of Materialized Session Store
callback.hconames.windowSessionDeduplication.pingInterval
interval in which ping messages are being generated

Pfe HCO Names

Config Parameter

Default value

Description

callback.pfeHconames.eventInputTopic
${env}-internal-callback-hconame-ininput topic
callback.pfeHconames.HCPCalculateStageTopic
${env}-internal-callback-hconame-hcp4calcinternal topic
callback.pfeHconames.intAsyncHCONames
${env}-internal-hconames-rel-createoutput topic
callback.pfeHconames.timeWindoweither callback.pfeHconames.timeWindow or callback.pfeHconames.sessionWindow must be set
callback.pfeHconames.timeWindow.duration
duration size of time window
callback.pfeHconames.timeWindow.gracePeriod
grace period related to time window
callback.pfeHconames.timeWindow.byteLimit
byte limit of 
Suppressed.BufferConfig
callback.pfeHconames.timeWindow.suppressName

name of

Suppressed.BufferConfig

callback.pfeHconames.timeWindow.name
name of the Grouping step in deduplication
callback.pfeHconames.timeWindow.storageNamewhen switching from callback.pfeHconames.timeWindow to callback.pfeHconames.sessionWindow storageName must be differentname of Materialized Session Store
callback.pfeHconames.timeWindow.pingInterval
interval in which ping messages are being generated
callback.pfeHconames.sessionWindoweither callback.pfeHconames.timeWindow or callback.pfeHconames.sessionWindow must be set
callback.pfeHconames.sessionWindow.duration
duration size of session window
callback.pfeHconames.sessionWindow.byteLimit
byte limit of 
Suppressed.BufferConfig
callback.pfeHconames.sessionWindow.suppressName

name of

Suppressed.BufferConfig

callback.pfeHconames.sessionWindow.name
name of the Grouping step in deduplication
callback.pfeHconames.sessionWindow.storageNamewhen switching from callback.pfeHconames.timeWindow to callback.pfeHconames.sessionWindow storageName must be differentname of Materialized Session Store
callback.pfeHconames.sessionWindow.pingInterval
interval in which ping messages are being generated

Dangling Affiliations

Config Parameter

Default value

Description

callback.danglingAffiliations.eventInputTopic
${env}-internal-callback-orphanClean-ininput topic
callback.danglingAffiliations.acceptedEntityEventTypes
HCP_REMOVED, HCO_REMOVED, MCO_REMOVED, HCP_INACTIVATED, HCO_INACTIVATED, MCO_INACTIVATEDaccepted entity events
callback.danglingAffiliations.eventOutputTopic
${env}-internal-async-all-orphanCleanoutput topic
callback.danglingAffiliations.relationUpdateHeaders.HubAsyncOperation
rel-updatekafka record header
callback.danglingAffiliations.exceptCrosswalkTypes
configuration/sources/Reltiocrosswalk types to exclude

Crosswalk Cleaner

Config Parameter

Default value

Description

callback.crosswalkCleaner.eventInputTopic
${env}-internal-callback-cleaner-ininput topic
callback.crosswalkCleaner.acceptedEntityEventTypes
MCO_CHANGED, HCP_CHANGED, HCO_CHANGEDaccepted entity events
callback.crosswalkCleaner.acceptedRelationEventTypes
RELATIONSHIP_CHANGEDaccepted relation events
callback.crosswalkCleaner.hardDeleteCrosswalkTypes.always
configuration/sources/HUB_CallbackHub callback crosswalk name
callback.crosswalkCleaner.hardDeleteCrosswalkTypes.except
configuration/sources/ReltioCleanserReltio cleanser crosswalk name
callback.crosswalkCleaner.hardDeleteCrosswalkRelationTypes.always
configuration/sources/HUB_CallbackHub callback crosswalk name
callback.crosswalkCleaner.hardDeleteCrosswalkRelationTypes.except
configuration/sources/ReltioCleanserReltio cleanser crosswalk name
callback.crosswalkCleaner.softDeleteCrosswalkTypes.always
configuration/sources/HUB_USAGETAGCrosswalks list to soft-delete
callback.crosswalkCleaner.softDeleteCrosswalkTypes.whenOneKeyNotExists
configuration/sources/IQVIA_PRDP, configuration/sources/IQVIA_RAWDEACrosswalk list to soft-delete when ONEKEY crosswalk does not exists
callback.crosswalkCleaner.softDeleteCrosswalkTypes.except
configuration/sources/HUB_CALLBACK, configuration/sources/ReltioCleanserCrosswalk to exclude
callback.crosswalkCleaner.hardDeleteHeaders.HubAsyncOperation
crosswalk-deletekafka record header
callback.crosswalkCleaner.hardDeleteRelationHeaders.HubAsyncOperation
crosswalk-relation-deletekafka record header
callback.crosswalkCleaner.softDeleteHeaders.hcp.HubAsyncOperation
hcp-updatekafka record header
callback.crosswalkCleaner.softDeleteHeaders.hco.HubAsyncOperation
hco-updatekafka record header
callback.crosswalkCleaner.oneKey
configuration/sources/ONEKEYONEKEY crosswalk name
callback.crosswalkCleaner.eventOutputTopic
${env}-internal-async-all-cleaner-callbacksoutput topic
callback.crosswalkCleaner.softDeleteOneKeyReferbackCrosswalkTypes.referbackLookupCodes

HCPIT.RBI, HCOIT.RBI

OneKey referback crosswalk lookup codes
callback.crosswalkCleaner.softDeleteOneKeyReferbackCrosswalkTypes.oneKeyLookupCodes
HCPIT.OK, HCOIT.OKOneKey crosswalk lookup codes


NotMatch callback (clean potential match queue)

Config Parameter | Default value | Description

callback.potentialMatchLinkCleaner.eventInputTopic
${env}-internal-callback-potentialMatchCleaner-in | input topic
callback.potentialMatchLinkCleaner.acceptedRelationEventTypes
RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED | accepted relation events
callback.potentialMatchLinkCleaner.acceptedRelationObjectTypes
"configuration/relationTypes/FlextoHCOSAffiliations", "configuration/relationTypes/FlextoDDDAffiliations", "configuration/relationTypes/SAPtoHCOSAffiliations" | accepted relationship types
callback.potentialMatchLinkCleaner.matchTypesInCache
"AUTO_LINK", "POTENTIAL_LINK" | PotentialMatch cache object types
callback.potentialMatchLinkCleaner.notMatchHeaders.hco.HubAsyncOperation
entities-not-match-set | Kafka record header
callback.potentialMatchLinkCleaner.eventOutputTopic
${env}-internal-async-all-notmatch-callbacks | output topic


PreCallback Stream (rankings)

Config Parameter | Default value | Description

preCallback.eventInputTopic | ${env}-internal-reltio-full-events | input topic
preCallback.eventOutputTopic | ${env}-internal-reltio-proc-events | output topic for events
preCallback.internalAsyncBulkCallbacksTopic | ${env}-internal-async-all-bulk-callbacks | output topic for callbacks
preCallback.mdmIntegrationService.baseURL | N/A | Manager URL defined per environment
preCallback.mdmIntegrationService.apiKey | N/A | Manager secret API key defined per environment
preCallback.mdmIntegrationService.logMessages | false | Parameter used to turn on/off logging of the payload
preCallback.skipEventTypes | ENTITY_MATCHES_CHANGED, ENTITY_AUTO_LINK_FOUND, ENTITY_POTENTIAL_LINK_FOUND, DCR_CREATED, DCR_CHANGED, DCR_REMOVED | Events skipped during processing
preCallback.oldEventsDeletion.maintainDuration | 10m | Cache duration time (for callback MD5 checksums)
preCallback.oldEventsDeletion.interval | 5m | Cache deletion interval
preCallback.rankCallback.featureActivation | true | Parameter used to enable/disable the Rank feature
preCallback.rankCallback.callbackSource | HUB_Callback | Crosswalk used to update Reltio with Rank attributes
preCallback.rankCallback.activationFilter.countries | AG, AI, AN, AR, AW, BB, BL, BM, BO, BR, BS, BZ, CL, CO, CR, CW, DE, DO, EC, ES, FR, GF, GP, GT, GY, HK, HN, ID, IN, IT, JM, JP, KY, LC, MC, MF, MQ, MX, MY, NL, NC, NI, PA, PE, PF, PH, PK, PM, PN, PY, RE, RU, SA, SG, SV, SX, TF, TH, TR, TT, TW, UY, VE, VG, VN, WF, YT, XX, EMPTY | List of countries for which the process activates the Rank (differs between GBL and GBLUS)
preCallback.rankCallback.rawEntityChecksumDedupeStoreName | raw-entity-checksum-dedupe-store | Name of the store holding rawEntity MD5 checksums, used in rank callback deduplication
preCallback.rankCallback.attributeChangesChecksumDedupeStoreName | attribute-changes-checksum-dedupe-store | Name of the store holding attribute-change MD5 checksums, used in rank callback deduplication
preCallback.rankCallback.forwardMainEventsDuringPartialUpdate | false | Defines whether partial events are forwarded. By default false, so only fully calculated events are sent further
preCallback.rankCallback.ignoreAndRemoveDuplicates | false | The Ranking may contain duplicates in a group; set to false because Reltio now removes duplicated Identifiers
preCallback.rankCallback.activeCleanerCallbacks | SpecialityCleanerCallback, IdentifierCleanerCallback, EmailCleanerCallback, PhoneCleanerCallback | List of cleaner callbacks to be activated
preCallback.rankCallback.activeCallbacks | SpecialityCallback, AddressCallback, AffiliationCallback, IdentifierCallback, EmailCallback, PhoneCallback | List of Rankers to be activated
preCallback.rankTransform.featureActivation | true | Parameter defines whether the Rank feature should be activated
preCallback.rankTransform.activationFilter.activeRankSorter | SpecialtyRankSorter, AffiliationRankSorter, AddressRankSorter, IdentifierRankSorter, EmailRankSorter, PhoneRankSorter
preCallback.rankTransform.rankSortOrder.affiliation | N/A | The source order defined for the specific Ranking. Details about the algorithm in: Affiliation RankSorter
preCallback.rankTransform.rankSortOrder.phone | N/A | The source order defined for the specific Ranking. Details about the algorithm in: Phone RankSorter
preCallback.rankTransform.rankSortOrder.email | N/A | The source order defined for the specific Ranking. Details about the algorithm in: Email RankSorter
preCallback.rankTransform.rankSortOrder.specialities | N/A | The source order defined for the specific Ranking. Details about the algorithm in: Specialty RankSorter
preCallback.rankTransform.rankSortOrder.identifier | N/A | The source order defined for the specific Ranking. Details about the algorithm in: Identifier RankSorter
preCallback.rankTransform.rankSortOrder.addressSource.Reltio | N/A | The source order defined for the specific Ranking. Details about the algorithm in: Address RankSorter
preCallback.rankTransform.rankSortOrder.addressesSource.Reltio | N/A | The source order defined for the specific Ranking. Details about the algorithm in: Addresses RankSorter

" + }, + { + "title": "China Selective Router", + "pageID": "284812312", + "pageLink": "/display/GMDM/China+Selective+Router", + "content": "

Description

The china-selective-router component is responsible for enriching events and transforming them from the COMPANY model to the Iqvia model. The component operates asynchronously using Kafka topics. A COMPANY object is consumed from the input topic and, based on the configuration, it is enriched: the HCO entity is connected with the mainHco, and as a last step the event model is transformed to the Iqvia model. After all operations the event is sent to the output topic.

Flows

Exposed interfaces


Interface Name | Type | Endpoint pattern | Description

Event transformer topology | KAFKA | topic: {env}-{topic_postfix} | Transforms the event from the COMPANY model to the Iqvia model and sends it to the output topic

Dependent components


Component | Interface | Flow | Description

Data model | HCPModelConverter | N/A | Converter to transform an Entity to the COMPANY model or to the Iqvia model

Configuration


Config Parameter

Description

eventTransformer:
  - country: "CN"
    eventInputTopic: "${env}-internal-full-hcp-merge-cn"
    eventOutputTopic: "${env}-out-full-hcp-merge-cn"
    enricher: com.COMPANY.mdm.event_transformer.enricher.ChinaRefEntityProcessor
    hcoConnector:
      processor: com.COMPANY.mdm.event_transformer.enricher.ChinaHcoConnectorProcessor
    transformer: com.COMPANY.mdm.event_transformer.transformer.COMPANYToIqviaEventTransformer
    refEntity:
      - type: HCO
        attribute: ContactAffiliations
        relationLookupAttribute: RelationType.RelationshipDescription
        relationLookupCode: CON
      - type: MainHCO
        attribute: ContactAffiliations
        relationLookupAttribute: RelationType.RelationshipDescription
        relationLookupCode: REL.MAI

The main part of the china-selective-router configuration; contains the list of event transformation configurations.

country - specifies the country; the value of this parameter must be present in the event's country section, otherwise the event is skipped

eventInputTopic - input topic

eventOutputTopic - output topic

enricher - specifies the class that enriches the event; based on the refEntity configuration, this class is responsible for collecting the related hco and mainHco entities

hcoConnector.processor - specifies the class that connects the hco with the main hco; in this class a call is made to Reltio for all connections by hco URI. Based on the received data, an additional attribute 'OtherHcoToHco' is created that contains the mainHco entity collected by the enricher

hcoConnector.enabled - enables or disables the hcoConnector

hcoConnector.hcoAttrName - specifies the additional attribute name under which the connected mainHco is placed

hcoConnector.outRelations - specifies the list of out relations used as a filter when calling Reltio for hco connections

refEntity - contains the list of attributes carrying information about the HCO or MainHCO entity (refEntity URI)

refEntity.type - type of entity: HCO or MainHCO

refEntity.attribute - base attribute to search for the entity

refEntity.relationLookupAttribute - attribute searched for the lookupCode that decides which entity we are looking for

refEntity.relationLookupCode - code specifying the entity type
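As an illustration of how the refEntity rules can be applied, the sketch below (illustrative Python with hypothetical names, not the actual enricher classes) classifies a ContactAffiliations entry as HCO or MainHCO by matching the relation lookup code from the configuration above:

```python
# Illustrative sketch: refEntity rules mirroring the configuration above.
# Names are hypothetical; the real component is a Java microservice.
REF_ENTITY_RULES = [
    {"type": "HCO", "attribute": "ContactAffiliations",
     "lookupAttribute": "RelationType.RelationshipDescription", "lookupCode": "CON"},
    {"type": "MainHCO", "attribute": "ContactAffiliations",
     "lookupAttribute": "RelationType.RelationshipDescription", "lookupCode": "REL.MAI"},
]

def classify_affiliation(attribute, lookup_code):
    """Return the refEntity type whose attribute and lookup code match, or None."""
    for rule in REF_ENTITY_RULES:
        if rule["attribute"] == attribute and rule["lookupCode"] == lookup_code:
            return rule["type"]
    return None
```

An entry carrying lookup code CON would be collected as a related HCO, while REL.MAI marks the mainHco to be connected by the hcoConnector.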


" + }, + { + "title": "Component Template", + "pageID": "164469941", + "pageLink": "/display/GMDM/Component+Template", + "content": "

Description

<short description of the component>

Flows

<List of realized flow with links to Flow section>

Exposed interfaces


Interface Name | Type | Endpoint pattern | Description

REST API|KAFKA

Dependent components


Component | Interface | Flow | Description
<component name with link> | <Interface name> | <flow name with link> | for what

Configuration


Config Parameter | Default value | Description



" + }, + { + "title": "DCR Service", + "pageID": "209949312", + "pageLink": "/display/GMDM/DCR+Service", + "content": "" + }, + { + "title": "DCR Service 2", + "pageID": "218444525", + "pageLink": "/display/GMDM/DCR+Service+2", + "content": "

Description

Responsible for DCR processing. The client (PforceRx) sends DCRs through the REST API; the DCRs are routed to the target system (OneKey / Veeva OpenData / Reltio). The client retrieves the status of a DCR using the status API. The service also contains Kafka Streams functionality to process DCR updates asynchronously and update the DCRRegistry cache.

Services are accessible with REST API.

Applies transformations to the Kafka input stream producing the Kafka output stream.


Flows


Exposed interfaces

REST API

Interface Name | Type | Endpoint pattern | Description

Create DCRs | REST API | POST /dcr | Create DCRs
GET DCRs status | REST API | GET /dcr/status | Get DCR statuses

OneKey Stream

Interface Name | Type | Endpoint pattern | Description

callback input | KAFKA | {env}-internal-onekey-dcr-change-events-in | Events generated by the OneKey component after a OneKey DataSteward action. The flow responsible for event generation is OneKey: generate DCR Change Events (traceVR)
output - callbacks | Mongo | mongo | DCR Registry updated

Veeva OpenData Stream

Interface Name | Type | Endpoint pattern | Description

callback input | KAFKA | {env}-internal-veeva-dcr-change-events-in | Events generated by the Veeva component after a Veeva DataSteward action. The flow responsible for event generation is Veeva: generate DCR Change Events (traceVR)
output - callbacks | Mongo | mongo | DCR Registry updated

Reltio Stream

Interface Name | Type | Endpoint pattern | Description

callback input | KAFKA | {env}-internal-reltio-dcr-change-events-in | Events generated by Reltio after a DataSteward action. Published by the event-publisher component with selector: "(exchange.in.headers.reconciliationTarget==null) && exchange.in.headers.eventType in ['full'] && exchange.in.headers.eventSubtype in ['DCR_CREATED', 'DCR_CHANGED', 'DCR_REMOVED']"
output - callbacks | Mongo | mongo | DCR Registry updated

Dependent components


Component | Interface | Flow | Description

API Router | API routing | Create DCR | routes the requests to the DCR-Service component
Manager | MDMIntegrationService | GetEntitiesByUris | retrieve multiple entities by providing the list of entity URIs
Manager | MDMIntegrationService | GetEntityById | get entity by id
Manager | MDMIntegrationService | GetEntityByCrosswalk | get entity by crosswalk
Manager | MDMIntegrationService | CreateDCR | create change requests in Reltio
OK DCR Service | OneKeyIntegrationService | CreateDCR | create a VR in OneKey
Veeva DCR Service | ThirdPartyIntegrationService | CreateDCR | create a VR in Veeva (at the moment only Veeva implements this interface; in the future OneKey will be exposed via it as well)
Hub Store | Mongo connection | N/A | store cache data in a Mongo collection
Transaction Logger | TransactionService | Transactions | saves each DCR status change in transactions

Configuration

Config Parameter

Default value

Description


kafka.groupId
${env}_dcr2
The application ID. Each stream processing application must have a unique ID, and the same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"








kafkaOther.num.stream.threads
10 | Number of threads used by the Kafka Streams application
kafkaOther.default.deserialization.exception.handler
com.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandler | Deserialization exception handler
kafkaOther.ssl.engine.factory.class
com.COMPANY.mdm.common.security.CustomTrustStoreSslEngineFactory | SSL config
kafkaOther.partitioner.class
com.COMPANY.mdm.common.ping.PingPartitioner | Ping partitioner required in Kafka Streams applications with the PING service
kafkaOther.max.poll.interval.ms
3600000 | Maximum number of milliseconds to wait before the next poll of events
kafkaOther.max.poll.records
10 | Number of records downloaded in one poll from Kafka
kafkaOther.max.request.size
2097152 | Maximum event message size
dataStewardResponseConfig:
  reltioResponseStreamConfig:
    enable: true
    eventInputTopic:
      - ${env}-internal-reltio-dcr-change-events-in
    sendTo3PartyDecisionTable:
      - target: Veeva
        decisionProperties:
          sourceName: "VEEVA_CROSSWALK"
      - target: Veeva
        decisionProperties:
          countries: ["ID","PK","MY","TH"]
      - target: OneKey
    sendTo3PartyTopics:
      Veeva:
        - ${env}-internal-sendtothirdparty-ds-requests-in
      OneKey:
        - ${env}-internal-onekeyvr-ds-requests-in
  VeevaResponseStreamConfig:
    enable: true
    eventInputTopic:
      - ${env}-internal-veeva-dcr-change-events-in
  onekeyResponseStreamConfig:
    enable: true
    eventInputTopic:
      - ${env}-internal-onekey-dcr-change-events-in
    maxRetryCounter: 20
    deduplication:
      duration: 2m
      gracePeriod: 0s
      byteLimit: 2147483648
      suppressName: dcr2-onekey-response-stream-suppress
      name: dcr2-onekey-response-stream-with-delay
      storeName: dcr2-onekey-response-window-deduplication-store
      pingInterval: 1m

- ${env}-internal-reltio-dcr-change-events-in

- ${env}-internal-onekey-dcr-change-events-in

- ${env}-internal-veeva-dcr-change-events-in

- ${env}-internal-sendtothirdparty-ds-requests-in

- ${env}-internal-onekeyvr-ds-requests-in

Configuration related to event processing from Reltio, OneKey, or Veeva.

Deduplication applies to OneKey and allows configuring the aggregation window for events (daily processing) - 24h.

maxRetryCounter should be set to a high number - 1000000.


targetDecisionTable:
  - target: Reltio
    decisionProperties:
      userName: "mdm_dcr2_test_reltio_user"
  - target: OneKey
    decisionProperties:
      userName: "mdm_dcr2_test_onekey_user"
  - target: Veeva
    decisionProperties:
      sourceName: "VEEVA_CROSSWALK"
  - target: Veeva
    decisionProperties:
      countries: ["ID","PK","MY","TH"]
  - target: Reltio
    decisionProperties:
      country: GB

A list of the following combinations of attributes:




  1. Each attribute in the configuration is optional.
  2. The decision table performs validation based on the input request and the main object; the main object is the HCP, and if the HCP is empty the decision table checks the HCO.
  3. The result of the decision table is the TargetType: routing to the Reltio MDM system, the OneKey service, or the Veeva service.


userName | the user name that executes the request
sourceName | the source name of the main object
country | the country defined in the request
operationType | the operation type for the main object: { insert, update, delete }
affectedAttributes | the list of attributes that the user is changing
affectedObjects | { HCP, HCO, HCP_HCO }

RESULT →  TargetType {Reltio, OneKey, Veeva}
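A minimal sketch of how such a decision table can be evaluated (illustrative Python, not the actual service code; the rule set is taken from the example configuration above, and the first matching rule is assumed to win):

```python
# Illustrative decision table: every property in a rule is optional; a request
# matches a rule when each property present in the rule matches the request.
RULES = [
    {"target": "Reltio", "props": {"userName": "mdm_dcr2_test_reltio_user"}},
    {"target": "OneKey", "props": {"userName": "mdm_dcr2_test_onekey_user"}},
    {"target": "Veeva",  "props": {"sourceName": "VEEVA_CROSSWALK"}},
    {"target": "Veeva",  "props": {"countries": ["ID", "PK", "MY", "TH"]}},
    {"target": "Reltio", "props": {"country": "GB"}},
]

def resolve_target(request):
    """request: dict with keys like userName, sourceName, country.
    Returns the TargetType of the first matching rule, or None."""
    for rule in RULES:
        matched = True
        for key, expected in rule["props"].items():
            if key == "countries":  # list-valued property: membership test
                matched = request.get("country") in expected
            else:
                matched = request.get(key) == expected
            if not matched:
                break
        if matched:
            return rule["target"]
    return None
```

For example, a request from user mdm_dcr2_test_onekey_user would resolve to OneKey, while a request with country TH would resolve to Veeva.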

PreCloseConfig:
  acceptCountries:
    - "IN"
    - "SA"
  rejectCountries:
    - "PL"
    - "GB"

DCRs with countries that belong to the acceptCountries attribute are automatically accepted (PRE_APPROVED), or rejected (PRE_REJECTED) when they belong to rejectCountries

acceptCountries | List of values, example: [ IN, GB, PL, ... ]
rejectCountries | List of values, example: [ IN, GB, PL ]
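The pre-close behaviour can be sketched as a small classifier (illustrative Python under the example country lists above; the real logic lives in the DCR service):

```python
# Illustrative pre-close decision using the example PreCloseConfig values.
ACCEPT_COUNTRIES = {"IN", "SA"}
REJECT_COUNTRIES = {"PL", "GB"}

def pre_close_status(country):
    """Return the automatic pre-close status for a DCR country, or None
    when the DCR should continue through the normal review flow."""
    if country in ACCEPT_COUNTRIES:
        return "PRE_APPROVED"
    if country in REJECT_COUNTRIES:
        return "PRE_REJECTED"
    return None
```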

transactionLogger:
  simpleDCRLog:
    enable: true
  kafkaEfk:
    enable: true

Transaction Service | The configuration that enables/disables the transaction logger

oneKeyClient:
  url: http://devmdmsrv_onekey-dcr-service_1:8092
  userName: dcr_service_2_user

OneKey Integration Service

The configuration that allows connecting to the OneKey DCR service

VeevaClient:
  url: http://localhost:8093
  username: user
  apiKey: ""

Veeva Integration Service

The configuration that allows connecting to the Veeva DCR service


manager:
  url: https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/${env}/gw
  userName: dcr_service_2_user
  logMessages: true
  timeoutMs: 120000

MDM Integration Service | The configuration that allows connecting to the Reltio service

Indexes

DCR Service 2 Indexes

" + }, + { + "title": "DCR service connect guide", + "pageID": "415221200", + "pageLink": "/display/GMDM/DCR+service+connect+guide", + "content": "

Introduction

This guide provides comprehensive instructions on integrating new client applications with the DCR (Data Change Request) service in the MDM HUB system. It is intended for technical engineers, client architects, solution designers, and MDM/Mulesoft teams.

Table of Contents

Overview

The DCR service processes Data Change Requests (DCRs) sent by clients through a REST API. These DCRs are routed to target systems such as OneKey, Veeva Opendata, or Reltio. The service also includes Kafka-streams functionality to process DCR updates asynchronously and update the DCRRegistry cache.

Access to the DCR API should be confirmed in advance with the P.O. MDM HUB → A.J. Varganin

Getting Started

Prerequisites

Setup Instructions

  1. Create MDM HUB User: Follow the SOP to add a direct API user to the HUB.  Complete the steps outlined in → Add Direct API User to HUB
  2. Obtain Access Token: Use PingFederate to acquire an access token

API Overview

Endpoints

Methods

Authentication and Authorization

  1. The first step is to acquire an access token. If you are connecting to the MDM HUB API for the first time, you should first create an MDM HUB user.
  2. Once you have the PingFederate username and password, you can acquire the access token.

Obtaining Access Token

  1. Request Token:
    \n
    curl --location --request POST 'https://devfederate.COMPANY.com/as/token.oauth2?grant_type=client_credentials' \\      // Use devfederate for DEV & UAT, stgfederate for STAGE, prodfederate for PROD\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--header 'Authorization: Basic Base64-encoded(username:password)'
    \n
    \n
  2. Response:
    \n
    {\n  "access_token": "12341SPRtjWQzaq6kgK7hXkMVcTzX",                                                                   \n  "token_type": "Bearer",\n  "expires_in": 1799                                                                                                 // The token expires after the time - "expires_in" field. Once the token expires, it must be refreshed.\n}
    \n

Below you can see, how Postman should be configured to obtain access_token

\"\"

Using Access Token

Include the access token in the Authorization header for all API requests.
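The two headers involved can be built as follows (a minimal Python sketch of the standard HTTP Basic and Bearer schemes used by the curl examples above; function names are illustrative):

```python
import base64

def basic_auth_headers(username, password):
    """Headers for the PingFederate token request: Basic auth is the
    Base64 encoding of 'username:password', as in the curl example."""
    token = base64.b64encode(f"{username}:{password}".encode("ascii")).decode("ascii")
    return {
        "Content-Type": "application/x-www-form-urlencoded",
        "Authorization": f"Basic {token}",
    }

def bearer_headers(access_token):
    """Authorization header carried on every subsequent DCR API call."""
    return {"Authorization": f"Bearer {access_token}"}
```

Remember that the token expires after the number of seconds in the "expires_in" field and must then be refreshed.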

Network Configuration

Required Settings

Creating DCRs

This method is used to create new DCR objects in the MDM HUB system. Below is an example request to create a new HCP object in the MDM system.

More examples and the entire data model can be found at:

Example Request

Create new HCP
\n
curl --location '{api_url}/dcr' \\                                                                                     // e.g., https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev\n--header 'Content-Type: application/json' \\\n--header 'Authorization: Bearer ${access_token_value}' \\                                                              // e.g., 0001WvxKA16VWwlufC2dslSILdbE\n--data-raw '[\n    {\n        "country": "${dcr_country}",                                                                                  // e.g., CA\n        "createdBy": "${created_by}",                                                                                 // e.g., Test user\n        "extDCRComment": "${external_system_comment}",                                                                // e.g., This is test DCR to create new HCP\n        "extDCRRequestId": "${external_system_request_id}",                                                           // e.g., CA-VR-00255752\n        "dcrType": "${dcr_type}",                                                                                     // e.g., PforceRxDCR\n        "entities": [\n            {\n                "@type": "hcp",\n                "action": "insert",\n                "updateCrosswalk": {\n                    "type": "${source_system_name}",                                                                  // e.g., PFORCERX \n                    "value": "${source_system_value}"                                                                 // e.g., HCP-CA-VR-00255752 \n                },\n                "values": {\n                    "birthDate": "07-08-2017",\n                    "birthYear": "2017",\n                    "firstName": "Maurice",\n                    "lastName": "Brekke",\n                    "title": "HCPTIT.1118",\n                    "middleName": "Karen",\n                    "subTypeCode": "HCPST.A",\n                    "addresses": [\n                        {\n        
                    "action": "insert",\n                            "values": {\n                                "sourceAddressId": {\n                                    "source": "${source_system_name}",                                                // e.g., PFORCERX\n                                    "id": "${address_source_system_value}"                                            // e.g., ADR-CA-VR-00255752 \n                                },\n                                "addressLine1": "08316 McCullough Terrace",\n                                "addressLine2": "Waynetown",\n                                "addressLine3": "Designer Books gold parsing",\n                                "addressType": "AT.OFF",\n                                "buildingName": "Handmade Cotton Shirt",\n                                "city": "Singapore",\n                                "country": "SG",\n                                "zip": "ZIP 5"\n                            }\n                        }\n                    ]              \n                }\n            }\n        ]\n    }\n]'
\n

Request placeholders:

parameter name | description | example
api_url | API router URL | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev
access_token_value | Access token value | 0001WvxKA16VWwlufC2dslSILdbE
dcr_country | Main entity country | CA
created_by | Created by user | Test user
external_system_comment | Comment that will be propagated to the next processing steps | This is test DCR
external_system_request_id | ID for tracking DCR processing | CA-VR-00255752
dcr_type | Provided by the MDM HUB team when a user with DCR permission is created | PforceRxDCR
source_system_name | Source system name. The user used to invoke the request has to have access to this source | PFORCERX
source_system_value | ID of this object in the source system | HCO-CA-VR-00255752
address_source_system_value | ID of the address in the source system | ADR-CA-VR-00255752

Handling Responses

Success Response

Create DCR success response
\n
[\n    {\n        "requestStatus": "${request_status}",                                                                         // e.g., REQUEST_ACCEPTED\n        "extDCRRequestId": "${external_system_request_id},                                                            // e.g., CA-VR-00255752\n        "dcrRequestId": "${mdm_hub_dcr_request_id}",                                                                  // e.g., 4a480255a4e942e18c6816fa0c89a0d2\n        "targetSystem": "${target_system_name}",                                                                      // e.g., Reltio\n        "country": "${dcr_request_country}",                                                                          // e.g., CA\n        "dcrStatus": {\n            "status": "CREATED",\n            "updateDate": "2024-05-07T11:22:10.806Z",\n            "dcrid": "${reltio_dcr_status_entity_uri}"                                                                // e.g., entities/0HjtwJO\n        }\n    }\n]
\n

Response placeholders:

parameter | description | example
external_system_request_id | DCR request id in the source system | CA-VR-00255752
mdm_hub_dcr_request_id | DCR request id in the MDM HUB system | 4a480255a4e942e18c6816fa0c89a0d2
target_system_name | DCR target system name, one of: OneKey, Reltio, Veeva | Reltio
dcr_request_country | DCR request country | CA
request_status | DCR request status, one of: REQUEST_ACCEPTED, REQUEST_FAILED, REQUEST_REJECTED | REQUEST_ACCEPTED
reltio_dcr_status_entity_uri | URI of the DCR status entity in the Reltio system | entities/0HjtwJO

Rejected Response

\n
[\n    {\n        "requestStatus": "REQUEST_REJECTED",\n        "errorMessage": "DuplicateRequestException -> Request [97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c] has already been processed",\n        "errorCode": "DUPLICATE_REQUEST",\n        "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c"\n    }\n]
\n

Failed Response

\n
[\n    {\n        "requestStatus": "REQUEST_FAILED",\n        "errorMessage": "Target lookup code not found for attribute: HCPTitle, country: SG, source value: HCPTIT.111218.",\n        "errorCode": "VALIDATION_ERROR",\n        "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e712121121c"\n    }\n]
\n
In case of incorrect user configuration in the system, the API will return errors as follows. In these cases, please contact the MDM HUB team.

Getting DCR status

Processing of a DCR takes some time. DCR status can be tracked via the get DCR status API calls. DCR processing ends when it reaches a final status: ACCEPTED or REJECTED. When the DCR reaches the ACCEPTED status, the following fields appear in its status: "objectUri" and "COMPANYCustomerId". These can be used to find the created/modified entities in the MDM system. Full documentation can be found at → Get DCR status.
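A client typically polls the status endpoint until a final status is reached. The sketch below (illustrative Python; the `fetch_status` callable stands in for a real GET /dcr/_status call and is an assumption, not part of the API) shows the pattern:

```python
import time

# Final DCR statuses, per the documentation above.
FINAL_STATUSES = {"ACCEPTED", "REJECTED"}

def poll_dcr_status(fetch_status, max_attempts=10, delay_s=1.0):
    """Call fetch_status() (which returns the dcrStatus.status string)
    until a final status is reached or attempts run out; return the
    final status, or None on timeout."""
    for attempt in range(max_attempts):
        status = fetch_status()
        if status in FINAL_STATUSES:
            return status
        if attempt < max_attempts - 1:
            time.sleep(delay_s)
    return None
```

Once ACCEPTED is returned, the "objectUri" and "COMPANYCustomerId" fields from the status payload identify the created or modified entity.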

Example Request

Below is an example query for the selected external_system_request_id

\n
curl --location '{api_url}/dcr/_status/${external_system_request_id}' \\                                               // e.g., CA-VR-00255752 \n--header 'Authorization: Bearer ${access_token_value}'                                                                // e.g., 0001WvxKA16VWwlufC2dslSILdbE 
\n

Handling Responses

Success Response

\n
{\n    "requestStatus": "REQUEST_ACCEPTED",\n    "extDCRRequestId": "8600ca9a-c317-45d0-97f6-152f01d70158",\n    "dcrRequestId": "a2848f2a573344248f78bff8dc54871a",\n    "targetSystem": "Reltio",\n    "country": "AU",\n    "dcrStatus": {\n        "status": "ACCEPTED",\n        "objectUri": "entities/0Hhskyx",                                                                               // \n        "COMPANYCustomerId": "03-102837896",                                                                            // usually HCP. HCO only when creating or updating HCO without references to HCP in DCR request\n        "updateDate": "2024-05-07T11:47:08.958Z",\n        "changeRequestUri": "changeRequests/0N38Jq0",\n        "dcrid": "entities/0EUulla"\n    }\n}
\n

Rejected Response

\n
{\n    "requestStatus": "REQUEST_REJECTED",\n    "errorMessage": "Received DCR_CHANGED event, updatedBy: svc-pfe-mdmhub, on 1714378259964. Updating DCR status to: REJECTED",\n    "extDCRRequestId": "b9239835-937e-434d-948c-6a282a736c4f",\n    "dcrRequestId": "0b4125648b6c4d9cb785856841f7d65d",\n    "targetSystem": "Veeva",\n    "country": "HK",\n    "dcrStatus": {\n        "status": "REJECTED",\n        "updateDate": "2024-04-29T08:11:06.555Z",\n        "comment": "This DCR was REJECTED by the VEEVA Data Steward with the following comment: [A-20022] Veeva Data Steward: Your request has been rejected..",\n        "changeRequestUri": "changeRequests/0IojkYP",\n        "dcrid": "entities/0qmBUXU"\n    }\n}
\n

Getting multiple DCR statuses

Multiple statuses can be selected at once using the DCR status filtering API

Example Request

Filter DCR status
\n
curl --location '{api_url}/dcr/_status?updateFrom=2021-10-17T20%3A31%3A31.424Z&updateTo=2023-10-17T20%3A31%3A31.424Z&limit=5&offset=3' \\\n--header 'Authorization: Bearer ${access_token_value}'                                                                // e.g., 0001WvxKA16VWwlufC2dslSILdbE 
\n

Example Response

Success Response

\n
[\n    {\n        "requestStatus": "REQUEST_ACCEPTED",\n        "extDCRRequestId": "8d3eb4f7-7a08-4813-9a90-73caa7537eba",\n        "dcrRequestId": "360d152d58d7457ab6a0610b718b6b8b",\n        "targetSystem": "OneKey",\n        "country": "AU",\n        "dcrStatus": {\n            "status": "ACCEPTED",\n            "objectUri": "entities/05jHpR1",\n            "COMPANYCustomerId": "03-102429068",\n            "updateDate": "2023-10-13T05:43:02.007Z",\n            "comment": "ONEKEY response comment: ONEKEY accepted response - HCP EID assigned\\nONEKEY HCP ID: WUSM03999911",\n            "changeRequestUri": "8b32b8544ede4c72b7adfa861b1dc53f",\n            "dcrid": "entities/04TxaQB"\n        }\n    },\n    {\n        "requestStatus": "REQUEST_ACCEPTED",\n        "extDCRRequestId": "b66be6bd-655a-47f8-b78b-684e80166096",\n        "dcrRequestId": "becafcb2cd004c1d89ecfc670de1de70",\n        "targetSystem": "Reltio",\n        "country": "AU",\n        "dcrStatus": {\n            "status": "ACCEPTED",\n            "objectUri": "entities/06SVUCq",\n            "COMPANYCustomerId": "03-102429064",\n            "updateDate": "2023-10-13T05:35:08.597Z",\n            "comment": "26498057 [svc-pfe-mdmhub][1697175298895] -",\n            "changeRequestUri": "changeRequests/06sXnXH",\n            "dcrid": "entities/08LAHeQ"\n        }\n    }\n]
\n


Get entity

This method is used to prepare a DCR request for modifying entities and to validate the created/modified entities in the DCR process. Use the "objectUri" field, available after the DCR is accepted, to query the MDM system.

Example Request

Get entity request
\n
curl --location '{api_url}/${objectUri}' \\                                                                             // e.g., https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev, entities/05jHpR1\n --header 'Authorization: Bearer ${access_token_value}'                                                                // e.g., 0001WvxKA16VWwlufC2dslSILdbE   
\n

Example Response

Success Response

Get entity response
\n
{\n    "type": "configuration/entityTypes/HCP",\n    "uri": "entities/06SVUCq",\n    "createdBy": "svc-pfe-mdmhub",\n    "createdTime": 1697175293866,\n    "updatedBy": "Re-cleansing of null in tenant 2NBAwv1z2AvlkgS background task. (started by test.test@COMPANY.com)",\n    "updatedTime": 1713375695895,\n    "attributes": {\n        "COMPANYGlobalCustomerID": [\n            {\n                "uri": "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",\n                "value": "03-102429064",\n                "ov": true\n            }\n        ],\n        "TypeCode": [\n            {\n                "uri": "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n                "type": "configuration/entityTypes/HCP/attributes/TypeCode",\n                "value": "HCPT.NPRS",\n                "ov": true\n            }\n        ],\n        "Addresses": [\n            {\n                "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n                "value": {\n                    "AddressType": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressType",\n                            "value": "TYS.P",\n                            "ov": true\n                        }\n                    ],\n                    "COMPANYAddressID": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/COMPANYAddressID",\n                            "value": "7001330683",\n                            "ov": true\n                        }\n                    ],\n                    "AddressLine1": [\n                 
       {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1",\n                            "value": "addressLine1",\n                            "ov": true\n                        }\n                    ],\n                    "AddressLine2": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2",\n                            "value": "addressLine2",\n                            "ov": true\n                        }\n                    ],\n                    "AddressLine3": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine3",\n                            "value": "addressLine3",\n                            "ov": true\n                        }\n                    ],\n                    "City": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/City",\n                            "value": "city",\n                            "ov": true\n                        }\n                    ],\n                    "Country": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Country",\n                            "value": "GB",\n                            "ov": true\n    
                    }\n                    ],\n                    "Zip5": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5",\n                            "value": "zip5",\n                            "ov": true\n                        }\n                    ],\n                    "Source": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n                            "value": {\n                                "SourceName": [\n                                    {\n                                        "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n                                        "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceName",\n                                        "value": "PforceRx",\n                                        "ov": true\n                                    }\n                                ],\n                                "SourceAddressID": [\n                                    {\n                                        "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n                                        "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceAddressID",\n                                        "value": "string",\n                                        "ov": true\n                                    }\n                                ]\n                            },\n                            "ov": true,\n                            "label": "PforceRx"\n                        }\n                    ],\n                    "VerificationStatus": [\n                     
   {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus",\n                            "value": "Unverified",\n                            "ov": true\n                        }\n                    ],\n                    "VerificationStatusDetails": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatusDetails",\n                            "value": "Address Verification Status is unverified - unable to verify. the output fields will contain the input data.\\nPost-Processed Verification Match Level is 0 - none.\\nPre-Processed Verification Match Level is 0 - none.\\nParsing Status isidentified and parsed - All input data has been able to be identified and placed into components.\\nLexicon Identification Match Level is 0 - none.\\nContext Identification Match Level is 5 - delivery point (postbox or subbuilding).\\nPostcode Status is PostalCodePrimary identified by context - postalcodeprimary identified by context.\\nThe accuracy matchscore, which gives the similarity between the input data and closest reference data match is 100%.",\n                            "ov": true\n                        }\n                    ],\n                    "AVC": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AVC",\n                            "value": "U00-I05-P1-100",\n                            "ov": true\n                        }\n                    ],\n                    "AddressRank": [\n                      
  {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressRank",\n                            "value": "1",\n                            "ov": true\n                        }\n                    ]\n                },\n                "ov": true,\n                "label": "TYS.P - addressLine1, addressLine2, city, zip5, GB"\n            }\n        ]\n    },\n    "crosswalks": [\n        {\n            "type": "configuration/sources/ReltioCleanser",\n            "value": "06SVUCq",\n            "uri": "entities/06SVUCq/crosswalks/dZrp03j",\n            "reltioLoadDate": 1697175300805,\n            "createDate": 1697175303886,\n            "updateDate": 1697175303886,\n            "attributes": [\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W"\n            ]\n        },\n        {\n            "type": "configuration/sources/Reltio",\n            "value": "06SVUCq",\n            "uri": "entities/06SVUCq/crosswalks/dZqkNxf",\n            "reltioLoadDate": 1697175300805,\n            "createDate": 1697175300805,\n            "updateDate": 1697175300805,\n            "attributes": [\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n      
          "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB"\n            ],\n            "singleAttributeUpdateDates": {\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB": "2023-10-13T05:35:00.805Z"\n    
        }\n        },\n        {\n            "type": "configuration/sources/HUB_CALLBACK",\n            "value": "06SVUCq",\n            "uri": "entities/06SVUCq/crosswalks/LoT0kPG",\n            "reltioLoadDate": 1697175429294,\n            "createDate": 1697175296673,\n            "updateDate": 1697175296673,\n            "attributes": [\n                "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n                "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv"\n            ],\n            "singleAttributeUpdateDates": {\n                "entities/06SVUCq/attributes/TypeCode/LoT0XcU": "2023-10-13T05:34:56.673Z",\n                "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2": "2023-10-13T05:37:09.294Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj": "2023-10-13T05:35:08.420Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv": "2023-10-13T05:35:08.420Z"\n            }\n        }\n    ]\n}
\n

Rejected Response

Entity not found response
\n
{\n    "code": "404",\n    "message": "Entity not found"\n}
\n

Troubleshooting Guide

All documentation with a detailed description of flows can be found at → PforceRx DCR flows

Common Issues and Solutions

Duplicate Request:


Validation Error:


Network Errors:


Authentication Errors:


Service Unavailable Errors:


Missing Configuration for User


Permission Denied to create DCR:


Validation Error:

" + }, + { + "title": "Entity Enricher", + "pageID": "164469912", + "pageLink": "/display/GMDM/Entity+Enricher", + "content": "

Description

Accepts simple events on the input. Performs the following calls to Reltio:

Produces events enriched with the targetEntity / targetRelation field retrieved from Reltio.
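The enrichment step can be sketched as follows. This is a minimal illustration, not the component's actual code: the event field names (`targetUri`, `targetEntity`) and the `fetch_entity` callback standing in for the Reltio call via the gateway are assumptions based on the description above.

```python
def enrich_event(event, fetch_entity):
    """Return a copy of a simple event, enriched with the full target entity."""
    enriched = dict(event)
    uri = event.get("targetUri")
    if uri is not None:
        # In the real component this is a Reltio API call made through the gateway.
        enriched["targetEntity"] = fetch_entity(uri)
    return enriched

# Example with a stubbed Reltio lookup:
stub = lambda uri: {"uri": uri, "type": "configuration/entityTypes/HCP"}
event = {"eventType": "HCP_CREATED", "targetUri": "entities/06SVUCq"}
print(enrich_event(event, stub)["targetEntity"]["uri"])
```

The enriched event is then published to the output topic while the original simple event stays unchanged.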

Exposed interfaces


Interface Name

Type

Endpoint pattern

Description

entity enricher input | KAFKA
${env}-internal-reltio-events
events being sent by the event publisher component. Event types being considered: HCP_*, HCO_*, ENTITY_MATCHES_CHANGED
entity enricher output | KAFKA
${env}-internal-reltio-full-events

Dependent components


Component | Interface | Flow | Description

Manager




MDMIntegrationService


getEntitiesByUris
getRelation
getChangeRequest
findEntityCountry

Configuration


Config Parameter | Default value | Description
bundle.enable | true | enable / disable function
bundle.inputTopics | ${env}-internal-reltio-events | input topic
bundle.threadPoolSize | 10 | thread pool size
bundle.pollDuration | 10s | poll interval
bundle.outputTopic | ${env}-internal-reltio-full-events | output topic
kafka.groupId | ${env}-entity-enricher | The application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"
bundle.kafkaOther.session.timeout.ms | 30000
bundle.kafkaOther.max.poll.records | 10
bundle.kafkaOther.max.poll.interval.ms | 300000
bundle.kafkaOther.auto.offset.reset | earliest
bundle.kafkaOther.enable.auto.commit | false
bundle.kafkaOther.max.request.size | 2097152
bundle.gateway.apiKey | ${gateway.apiKey}
bundle.gateway.logMessages | false
bundle.gateway.url | ${gateway.url}
bundle.gateway.userName | ${gateway.userName}



" + }, + { + "title": "HUB APP", + "pageID": "302700538", + "pageLink": "/display/GMDM/HUB+APP", + "content": "

Description


HUB UI is a front-end application that presents basic information about the MDM HUB cluster. This component allows you to manage Kafka and Airflow DAGs or view the quality service configuration.

The app allows users to log in with their COMPANY accounts.

Technology: Angular

Code link: mdm-hub-app

Flows

Access:

Dependent components


Component | Interface | Description
MDM Manager | REST API

Used to fetch quality service configuration and for testing entities

MDM Admin | REST API

Used to manage Kafka, Airflow DAGs and the reconciliation service


Configuration

Component is configured via environment variables


Environment variable | Default value | Description
BACKEND_URI | N/A | MDM Manager URI
ADMIN_URI | N/A | MDM Admin URI
INGRESS_PREFIX | N/A | Application context path
" + }, + { + "title": "Hub Store", + "pageID": "164469908", + "pageLink": "/display/GMDM/Hub+Store", + "content": "

Hub Store is a Mongo cache that stores: EntityHistory, EntityMatchesHistory, EntityRelation.


Configuration

Config Parameter | Default value | Description

mongo:
  host: ***:27017,***:27017,***:27017
  dbName: reltio_${env}
  user: ***
  url: mongodb://${mongo.user}:${mongo.password}@${mongo.host}/${mongo.dbName}

Mongo DB connection configuration
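A minimal sketch of how the ${...} placeholders in the connection URL resolve. The concrete user, password, hosts, and database name are illustrative, not real values; note that special characters in the password must be percent-encoded before being embedded in a MongoDB connection string.

```python
from string import Template
from urllib.parse import quote_plus

# Illustrative configuration values (assumptions, not real credentials).
cfg = {
    "user": "hub_user",
    "password": "s3cret/&",  # contains characters that need percent-encoding
    "host": "m1:27017,m2:27017,m3:27017",
    "dbName": "reltio_dev",
}

url = Template("mongodb://${user}:${password}@${host}/${dbName}").substitute(
    user=cfg["user"],
    password=quote_plus(cfg["password"]),
    host=cfg["host"],
    dbName=cfg["dbName"],
)
print(url)
```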

" + }, + { + "title": "Inc batch channel", + "pageID": "302686382", + "pageLink": "/display/GMDM/Inc+batch+channel", + "content": "

Description

Responsible for ETL loads of data to Reltio. It takes plain data files (e.g. txt, csv) and, based on defined mappings, converts them into JSON objects, which are then sent to Reltio.
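The file-to-JSON mapping step can be sketched like this. The column mapping and the Reltio-style attribute shape are assumptions for illustration; the real component reads the mapping from the columns definition file.

```python
import csv
import io
import json

# Hypothetical columns definition: CSV column -> Reltio attribute name.
COLUMNS = {"first_name": "FirstName", "last_name": "LastName", "country": "Country"}

def rows_to_entities(text):
    """Convert CSV text into Reltio-style entity JSON objects, one per row."""
    reader = csv.DictReader(io.StringIO(text))
    for row in reader:
        yield {
            "attributes": {
                COLUMNS[col]: [{"value": val}]
                for col, val in row.items()
                if col in COLUMNS
            }
        }

data = "first_name,last_name,country\nJohn,Doe,GB\n"
for entity in rows_to_entities(data):
    print(json.dumps(entity))
```

Each produced object would then be placed on the request topic for mdm-manager to forward to Reltio.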

Flows

Dependent components

Component | Interface name | Description
Manager | Kafka

Events constructed by inc-batch-channel are transferred to the Kafka topic, from where they are read by mdm-manager and sent to Reltio. When the event is processed by Reltio, the manager sends an ACK message on the appropriate topic:

Example input topic: gbl-prod-internal-async-all-sap

Example ACK topic: gbl-prod-internal-async-all-sap-ack

Batch Service | Batch Controller | Used to store ETL load state and statistics. All information is stored in MongoDB


MongoDb collections


Configuration

Connections

mongoConnectionProps.dbUrl | Full Mongo DB URL
mongoConnectionProps.mongo.dbName | Mongo database name
kafka.servers | Kafka hostname
kafka.groupId | Batch Service component group name
kafka.saslMechanism | SASL configuration
kafka.securityProtocol | Security protocol
kafka.sslTruststoreLocation | SSL truststore file location
kafka.sslTruststorePassword | SSL truststore file password
kafka.username | Kafka username
kafka.password | Kafka dedicated user password
kafka.sslEndpointAlgorithm | SSL algorithm

Batches configuration:

batches.${batch_name} | Batch configuration
batches.${batch_name}.inputFolder | Directory with input files
batches.${batch_name}.outputFolder | Directory with output files
batches.${batch_name}.columnsDefinitionFile | File defining mapping
batches.${batch_name}.requestTopic | Manager topic with events that are going to be sent to Reltio
batches.${batch_name}.ackTopic | Ack topic
batches.${batch_name}.parserType | Parser type. Defines separator and encoding format
batches.${batch_name}.preProcessing | Defines preprocessing of input files
batches.${batch_name}.stages.${stage_name}.stageOrder | Stage priority
batches.${batch_name}.stages.${stage_name}.processorType | Processor type:
  • SIMPLE - change is applied only in mongo
  • ENTITY_SENDER - change is sent to Reltio
batches.${batch_name}.stages.${stage_name}.outputFileName | Output file name
batches.${batch_name}.stages.${stage_name}.disabled | If stage is disabled
batches.${batch_name}.stages.${stage_name}.definitions | Defines which definition is used to map the input file
batches.${batch_name}.stages.${stage_name}.deltaDetectionEnabled | If previous and current state of objects are compared
batches.${batch_name}.stages.${stage_name}.initDeletedLoadEnabled
batches.${batch_name}.stages.${stage_name}.fullAttributesMerge
batches.${batch_name}.stages.${stage_name}.postDeleteProcessorEnabled
batches.${batch_name}.stages.${stage_name}.senderHeaders | Defines http headers


" + }, + { + "title": "Kafka Connect", + "pageID": "164469804", + "pageLink": "/display/GMDM/Kafka+Connect", + "content": "

Description

Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka® and other data systems.  It makes it simple to quickly define connectors that move large data sets in and out of Kafka.

Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency.

Flows

Snowflake: Base tables refresh

Snowflake: Events publish flow

Snowflake: History Inactive

Snowflake: LOV data publish flow

Snowflake: MT data publish flow

Configuration

Kafka Connect - properties description

param | value
group.id | <env>-kafka-connect-snowflake
topic.creation.enable | false
offset.storage.topic | <env>-internal-kafka-connect-snowflake-offset
config.storage.topic | <env>-internal-kafka-connect-snowflake-config
status.storage.topic | <env>-internal-kafka-connect-snowflake-status
key.converter | org.apache.kafka.connect.storage.StringConverter
value.converter | org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable | true
value.converter.schemas.enable | true
config.storage.replication.factor | 3
offset.storage.replication.factor | 3
status.storage.replication.factor | 3
rest.advertised.host.name | localhost
rest.port | 8083
security.protocol | SASL_PLAINTEXT
sasl.mechanism | SCRAM-SHA-512
consumer.group.id | <env>-kafka-connect-snowflake-consumer
consumer.security.protocol | SASL_PLAINTEXT
consumer.sasl.mechanism | SCRAM-SHA-512

connectors - SnowflakeSinkConnector - properties description

param | value
snowflake.topic2table.map | <env>-out-full-snowflake-all:HUB_KAFKA_DATA
topics | <env>-out-full-snowflake-all
buffer.flush.time | 300
snowflake.url.name | <sf_instance_name>
snowflake.database.name | <db_name>
snowflake.schema.name | LANDING
buffer.count.records | 1000
snowflake.user.name | <user_name>
value.converter | com.snowflake.kafka.connector.records.SnowflakeJsonConverter
key.converter | org.apache.kafka.connect.storage.StringConverter
buffer.size.bytes | 60000000
snowflake.private.key.passphrase | <secret>
snowflake.private.key | <secret>


There is one exception, connected with the FLEX environment, where the S3SinkConnector is used instead - properties description

param | value
s3.region | <region>
s3.part.retries | 10
s3.bucket.name | <s3_bucket>
s3.compression.type | none
topics.dir | <s3_topic_dit>
topics | <env>-out-full-gblus-flex-all
flush.size | 1000000
timezone | UTC
locale | <locale>
format.class | io.confluent.connect.s3.format.json.JsonFormat
schema.generator.class | io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
schema.compatibility | NONE
aws.access.key.id | <secret>
aws.secret.access.key | <secret>
value.converter | org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable | false
key.converter | org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable | false
partition.duration.ms | 86400000
partitioner.class | io.confluent.connect.storage.partitioner.TimeBasedPartitioner
storage.class | io.confluent.connect.s3.storage.S3Storage
rotate.schedule.interval.ms | 86400000
rotate.interval.ms | -1
path.format | YYYY-MM-dd
timestamp.extractor | Wallclock
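The S3 connector properties combine to produce daily partitions: path.format YYYY-MM-dd, partition.duration.ms of 86400000 (one day), and the Wallclock timestamp extractor. A minimal sketch of the resulting object-key layout, using strftime as the equivalent of the YYYY-MM-dd pattern (the topics.dir and topic names here are illustrative):

```python
from datetime import datetime, timezone

def s3_partition_path(topics_dir, topic, now):
    """Daily partition path as produced by a TimeBasedPartitioner with
    path.format YYYY-MM-dd and a wallclock timestamp extractor."""
    return f"{topics_dir}/{topic}/{now.strftime('%Y-%m-%d')}"

now = datetime(2023, 10, 13, 5, 35, tzinfo=timezone.utc)
print(s3_partition_path("hub-data", "dev-out-full-gblus-flex-all", now))
```

With rotate.schedule.interval.ms also set to one day, each topic ends up with one dated folder of JSON files per day.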


" + }, + { + "title": "Manager", + "pageID": "164469894", + "pageLink": "/display/GMDM/Manager", + "content": "

Description

Manager is the main component taking part in client interactions with MDM systems.

It orchestrates API calls with the following services:

Manager services are accessible via REST API. Some services are exposed as asynchronous operations through Kafka for performance reasons.


Technology: Java, Spring, Apache Camel

Code link: mdm-manager

Flows

Exposed interfaces


Interface Name | Type | Endpoint pattern | Description
Get entity | REST API

GET /entities/{entityId}

Get detailed entity information

Get multiple entities | REST API | GET /entities/_byUris | Return multiple entities with provided uris
Get entity country | REST API | GET /entities/{entityId}/_country | Return country for an entity with the provided uri
Merge & Unmerge | REST API

POST /entities/{entityId}/_merge

POST /entities/{entityId}/_unmerge



Merge entity A with entity B using Reltio uris as IDs.

Unmerge entity B from entity A using Reltio uris as IDs.


Merge & Unmerge Complex | REST API

POST/entities/_merge

POST/entities/_unmerge

Merge entity A with entity B using request body (JSON) with ids.

Unmerge entity B from entity A using request body (JSON) with ids.


Create/Update entity | REST API & KAFKA

POST /hcp

PATCH /hcp

POST /hco

PATCH /hco

Create/partially update entity
Create/Update multiple entities | REST API

POST /batch/hcp

PATCH /batch/hcp

POST /batch/hco

PATCH /batch/hco

Batch create HCO/HCP entities
Get entity by crosswalk | REST API | GET /entities/crosswalk | Get entity by crosswalk
Delete entity by crosswalk | REST API | DELETE /entities/crosswalk | Delete entity by crosswalk
Create/Update relation | REST API

POST /relations/


PATCH /relations/

Create/update relation
Get relation | REST API | GET /relations/{relationId} | Get relation by Reltio URI
Get relation by crosswalk | REST API | GET /relations/crosswalk | Get relation by crosswalk
Delete relation by crosswalk | REST API | DELETE /relations/crosswalk | Delete relation by crosswalk
Batch create relation | REST API | POST /batch/relation | Batch create relation
Create/replace/update mco profile | REST API

POST /mco

PATCH /mco

Create, replace or partially update mco profile
Create/replace/update batch mco profile | REST API

POST /batch/mco

PATCH /batch/mco

Create, replace or partially update mco profiles
Update Usage Flags | REST API | POST /updateUsageFlags

Create, update, or remove UsageType UsageFlags of the Addresses' Address field of HCP and HCO entities

Search for change requests | REST API | GET /changeRequests/_byEntityCrosswalk | Search for change requests by entity crosswalk
Get change request by uri | REST API | GET /changeRequests/{uri} | Get change request by uri
Create change request | REST API

POST /changeRequest

Create change request - internal
Get change request | REST API | GET /changeRequest | Get change request - internal

Dependent components


Component | Interface | Description
Reltio Adapter | Internal Java interface

Used to communicate with Reltio

Nucleus Adapter | Internal Java interface

Used to communicate with Nucleus

Authorization Engine | Internal Java interface | Provides user authorization

MDM Routing Engine | Internal Java interface | Provides routing

Configuration

The configuration is a composition of the dependent components' configurations and the parameters specified below.


Config Parameter | Default value | Description
mongo.url | Mongo url
mongo.dbName | Mongo database name
mongoConnectionProps.dbUrl | Mongo database url
mongoConnectionProps.dbName | Mongo database name
mongoConnectionProps.user | Mongo username
mongoConnectionProps.password | Mongo user password
mongoConnectionProps.entityCollectionName | Entity collection name
mongoConnectionProps.lovCollectionName | Lov collection name
" + }, + { + "title": "Authorization Engine", + "pageID": "164469870", + "pageLink": "/display/GMDM/Authorization+Engine", + "content": "

Description

Authorization Engine is responsible for authorizing users executing API operations. All API operations are secured and can be executed only by users that have specific roles. The engine checks whether a user has a role that is allowed to access the API operation.


Flows

The Authorization Engine is engaged in all flows exposed by Manager component.


Exposed interfaces

Interface Name | Type | Java class:method | Description
Authorization Service | Java | AuthorizationService:process | Checks user permission to run a specific operation. If the user has been granted a role allowed to run the operation, the method permits the call; otherwise an authorization exception is thrown
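The check performed by AuthorizationService:process can be sketched as a simple role-set lookup. The user name, role set, and the AuthorizationError type here are illustrative; role names follow the convention used in the Dependent components table below.

```python
class AuthorizationError(Exception):
    """Raised when a user lacks the role required for an operation."""

# Hypothetical user configuration (see users[].roles in the Configuration table).
USERS = {"pforcerx": {"roles": {"CREATE_HCP", "UPDATE_HCP", "GET_ENTITIES"}}}

def authorize(user, required_role):
    """Allow the call only if the user has been granted the required role."""
    granted = USERS.get(user, {}).get("roles", set())
    if required_role not in granted:
        raise AuthorizationError(f"{user} lacks role {required_role}")

authorize("pforcerx", "GET_ENTITIES")        # permitted, returns None
try:
    authorize("pforcerx", "MERGE_ENTITIES")  # role not granted
except AuthorizationError as e:
    print(e)
```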

Dependent components

All of the operations below are exposed by the Manager component and their details are described there. The Description column of the table below lists the role names that must be assigned to a user permitted to use the described operations.

Component | Interface | Description
Manager

GET /entities/*

GET_ENTITIES

GET /relations/* | GET_RELATION
GET /changeRequests/* | GET_CHANGE_REQUESTS

DELETE /entities/crosswalk

DELETE /relations/crosswalk

DELETE_CROSSWALK

POST /hcp

POST /batch/hcp

CREATE_HCP

PATCH /hcp

PATCH /batch/hcp

UPDATE_HCP

POST /hco

POST /batch/hco

CREATE_HCO

PATCH /hco

PATCH /batch/hco

UPDATE_HCO

POST /mco

POST /batch/mco

CREATE_MCO

PATCH /mco

PATCH /batch/mco

UPDATE_MCO
POST /relations | CREATE_RELATION
PATCH /relations | UPDATE_RELATION
POST /changeRequest | CREATE_CHANGE_REQUEST
POST /updateUsageFlags | USAGE_FLAG_UPDATE
POST /entities/{entityId}/_merge | MERGE_ENTITIES
POST /entities/{entityId}/_unmerge | UNMERGE_ENTITIES
GET /lookup | LOOKUPS

Configuration

Configuration parameter | Description
users[].name | User name
users[].description | Description of user
users[].defaultClient | Default MDM client used when the user doesn't specify a country
users[].roles | List of roles assigned to the user
users[].countries | List of countries whose data can be managed by the user
users[].sources | List of sources (crosswalk types) that the user can use when managing data
" + }, + { + "title": "MDM Routing Engine", + "pageID": "164469900", + "pageLink": "/display/GMDM/MDM+Routing+Engine", + "content": "

Description

MDM Routing Engine is responsible for deciding which MDM system has to be used to process client requests. The decision is made based on a decision table that maps an MDM system to a country.

In the case of multiple MDM systems for the same market, the decision table contains a user dimension that allows the MDM system to be selected by user name.

Flows

The MDM Routing Engine is engaged in all flows supported by Manager component.


Exposed interfaces

Interface Name | Type | Java class:method | Description
MDM Client Factory

Java | MDMClientFactory:getDefaultMDMClient | Get default MDM client
Java | MDMClientFactory:getDefaultMDMClient(username) | Get default MDM client specified for the user
Java | MDMClientFactory:getMDMClient(country) | Get MDM client that supports the specified country
Java | MDMClientFactory:getMDMClient(country, user) | Get MDM client that supports the specified country and user

Dependent components

Component | Interface | Description
Reltio Adapter | Java | Provides integration with Reltio MDM
Nucleus Adapter | Java | Provides integration with Nucleus MDM


Configuration

Configuration parameter | Description

users[].name | name of user
users[].defaultClient | default MDM client for user
clientsDecisionTable.{selector name}.countries[] | List of countries
clientsDecisionTable.{selector name}.clients[]

Map where the key is a username and the value is the MDM client name that will be used to process data coming from the defined countries.

The special key "default" defines the default MDM client, used when there is no specific client for the username.

mdmFactoryConfig.{mdm client name}.type | Type of MDM client. Only two values are supported: "reltio" or "nucleus".
mdmFactoryConfig.{mdm client name}.config | MDM client configuration. It is based on adapter type: Reltio or Nucleus
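The decision-table lookup described above can be sketched as follows. The selector name, country list, and client names are purely illustrative; the real table lives in the clientsDecisionTable configuration.

```python
# Hypothetical decision table: selector -> countries + per-user client map,
# with a "default" key for users without a specific mapping.
DECISION_TABLE = {
    "emea": {
        "countries": ["GB", "DE"],
        "clients": {"default": "reltio_emea", "legacy_user": "nucleus_eu"},
    },
}

def get_mdm_client(country, user=None):
    """Pick the MDM client for a country, honoring the user dimension."""
    for selector in DECISION_TABLE.values():
        if country in selector["countries"]:
            clients = selector["clients"]
            return clients.get(user, clients["default"])
    raise LookupError(f"no MDM client configured for country {country}")

print(get_mdm_client("GB"))                 # falls back to the default client
print(get_mdm_client("GB", "legacy_user"))  # user-specific client wins
```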
" + }, + { + "title": "Nucleus Adapter", + "pageID": "164469896", + "pageLink": "/display/GMDM/Nucleus+Adapter", + "content": "

Description

Nucleus-adapter is a component of MDM Hub that is used to communicate with Nucleus. It provides 4 types of operations:

Nucleus 360 is an older COMPANY MDM platform compared to Reltio. It's used to store and manage data about healthcare professionals (HCP) and healthcare organizations (HCO).

It uses batch processing, so the results of an operation are applied to the golden record after a certain period of time.

Nucleus accepts requests with an XML formatted body and sends responses in the same format.
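Since clients speak JSON (the Reltio model) while Nucleus speaks XML, the adapter translates between the two. A minimal sketch of the JSON-to-XML direction; the element names here are hypothetical, not the real Nucleus schema:

```python
import xml.etree.ElementTree as ET

def to_xml(entity):
    """Map a Reltio-style JSON entity to a flat XML document (illustrative schema)."""
    root = ET.Element("Profile")
    for name, values in entity["attributes"].items():
        for v in values:
            ET.SubElement(root, name).text = str(v["value"])
    return ET.tostring(root, encoding="unicode")

payload = {"attributes": {"FirstName": [{"value": "John"}], "Country": [{"value": "GB"}]}}
print(to_xml(payload))
```

The reverse direction (parsing Nucleus XML responses back into the Reltio JSON model) works analogously.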

Flows

Exposed interfaces


Interface Name | Type | Java class:method | Description
get entity | Java
NucleusMDMClient:getEntity

Provides a mechanism to obtain information about the specified entity. An entity can be obtained by entity id, e.g. xyzf325

Two Nucleus methods are used to obtain detailed information about the entity.

The first is the Look up method, which returns basic information about the entity (XML format) by its id.

Next, we pass that information to the second Nucleus method, Get Profile Details, which responds with all available information (XML format).

Finally, we gather all received information about the entity, convert it to the Reltio model (JSON format) and transfer it to the client.

get entities | Java
NucleusMDMClient:getEntities

Provides a mechanism to obtain basic information about a group of entities. The entity group is determined based on the defined filters (e.g. first name, last name, professional type code).

For this purpose only the Nucleus Look up method is used. This way we receive only basic information about entities, but it is performance-optimized and does not create unnecessary load on the server.

create/update entity | Java | NucleusMDMClient:createEntity

Using the Nucleus Add Update web service method, nucleus-adapter provides a mechanism to create or update data present in the database according to the business rules (createEntity method).

Nucleus-adapter accepts a JSON formatted request body, maps it to XML format, and then sends it to Nucleus.

get relations | Java | NucleusMDMClient:getRelation

To get relations, nucleus-adapter uses the Nucleus affiliation interface.

Nucleus produces an XML formatted response and nucleus-adapter transforms it into the Reltio model (JSON format).

Dependent components


Component | Interface | Description
Nucleus

https://{{ nucleus host }}/CustomerManage_COMPANY_EU_Prod/manage.svc?singleWsdl

Nucleus endpoint for creating/updating hcp and hco
https://{{ nucleus host }}/Nuc360ProfileDetails5.0/Api/DetailSearch | Nucleus endpoint for getting details about an entity
https://{{ nucleus host }}/Nuc360QuickSearch5.0/Lookup | Nucleus endpoint for getting basic information about an entity
https://{{ nucleus host }}/Nuc360DbSearch5.0/api/affiliation | Nucleus endpoint for getting relations information

Configuration


Config Parameter | Default value | Description
nucleusConfig.baseURL | null | Base URL of Nucleus MDM
nucleusConfig.username | null | Nucleus username
nucleusConfig.password | null | Nucleus password
nucleusConfig.additionalOptions.customerManageUrl | null | Nucleus endpoint for creating/updating entities
nucleusConfig.additionalOptions.profileDetailsUrl | null | Nucleus endpoint for getting detailed information about an entity
nucleusConfig.additionalOptions.quickSearchUrl | null | Nucleus endpoint for getting basic information about an entity
nucleusConfig.additionalOptions.affiliationUrl | null | Nucleus endpoint for getting information about entity relations
nucleusConfig.additionalOptions.defaultIdType | null | Default IdType for entity search (used if another is not provided)
" + }, + { + "title": "Quality Engine and Rules", + "pageID": "164469944", + "pageLink": "/display/GMDM/Quality+Engine+and+Rules", + "content": "

Description

Quality engine is used to verify data quality in entity attributes. It is used for MCO, HCO, HCP entities.

Quality engine is responsible for preprocessing Entity when a specific precondition is met. This engine is started in the following cases:

It has two components: quality-engine and quality-engine-integration


Flows

Validation by quality rules is done before sending entities to Reltio. Quality rules should be enabled in configuration.

Data quality checking is started in com.COMPANY.mdm.manager.service.QualityService. The whole rule flow for an entity has one context (com.COMPANY.entityprocessingengine.pipeline.RuleContext)


Rule

A rule has the following configuration

Preconditions

Structure:

Example:

preconditions:

    - type: source

      values: 

         - CENTRIS

Possible types:

Checks

Structure:

Example:

check:

   type: match

   attribute: FirstName

   values:

       - '[^0-9@#$%^&*~!"<>?/|\\_]+'

Possible types:
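The match check from the example can be sketched like this: an attribute passes when each of its values fully matches the configured pattern. The entity shape and function name are illustrative; the regex and the FirstName attribute are taken from the example above.

```python
import re

# Pattern from the example check: one or more characters that are not
# digits or the listed special characters.
PATTERN = re.compile(r'[^0-9@#$%^&*~!"<>?/|\\_]+')

def match_check(entity, attribute, pattern=PATTERN):
    """Return True when every value of the attribute fully matches the pattern."""
    values = [a["value"] for a in entity["attributes"].get(attribute, [])]
    return all(bool(pattern.fullmatch(v)) for v in values)

hcp = {"attributes": {"FirstName": [{"value": "John"}]}}
bad = {"attributes": {"FirstName": [{"value": "J0hn#"}]}}
print(match_check(hcp, "FirstName"))  # True
print(match_check(bad, "FirstName"))  # False
```

When a check fails, the configured action (e.g. an `add` action writing a DQDescription code) is applied to the entity.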


Actions

Structure:

Example:

action:

   type: add

   attributes:

      - DataQuality[].DQDescription

   value: "{source}_005_02"

Possible types:


Dependent components

Component | Interface | Flow | Description
manager | QualityService | Validation | Runs quality engine validation

Configuration

Config ParameterDefault valueDescription
validationOntrueIt turns on or off validation - it needs to specified in application.yml
partialOverrideValidationOntrueIt turns on or off validation for updates

hcpQualityRulesConfigs

list of files with quality rules for hcpIt contains a list of files with quality rules for hcp

hcoQualityRulesConfigs

list of files with quality rules for hcoIt contains a list of files with quality rules for hco

hcpAffiliatedHCOsQualityRulesConfigs

list of files with quality rules for affiliated HCOsIt contains a list of files with quality rules for affiliated HCOs
mcoQualityRulesConfigslist of files with quality rules for mcoIt contains a list of files with quality rules for mco
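The configuration parameters above might be assembled in application.yml as in the following sketch; the rule file paths are hypothetical.

```yaml
# Sketch of the quality-engine section of application.yml.
# Parameter names follow the table above; file paths are examples only.
validationOn: true
partialOverrideValidationOn: true
hcpQualityRulesConfigs:
  - rules/hcp-rules.yml
hcoQualityRulesConfigs:
  - rules/hco-rules.yml
mcoQualityRulesConfigs:
  - rules/mco-rules.yml
```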
" + }, + { + "title": "Reltio Adapter", + "pageID": "164469898", + "pageLink": "/display/GMDM/Reltio+Adapter", + "content": "

Description

Reltio-adapter is a component of the MDM Hub (part of mdm-manager) that is used to communicate with Reltio. 

Flows

Exposed interfaces

Interface NameTypeEndpoint patternDescription
Get entityJava
ReltioMDMClient:getEntity

Get detailed entity information by entity URI

Get entitiesJava
ReltioMDMClient:getEntities

Get basic information about a group of entities based on applied filters

Create/Update entityJava
ReltioMDMClient:createEntity
Create/partially update entity (HCO, HCP, MCO)
Create/Update multiple entitiesJava
ReltioMDMClient:createEntities
Batch create HCO/HCP/MCO entities
Delete entityJava
ReltioMDMClient:deleteEntity
Deletes entity by its URI
Find entityJava
ReltioMDMClient:findEntity

Finds entity. The search mechanism is flexible and chooses the proper method:

  • If a URI is supplied in entityPattern, the getEntity method is used.
  • If no URI is specified but crosswalks are found, the getEntityByCrosswalk method is used.
  • Otherwise, the findMatches method is used.
Merge entitiesJava
ReltioMDMClient:mergeEntities

Merge two entities based on Reltio merging rules.

Also accepts explicit winner as explicitWinnerEntityUri.

Unmerge entitiesJava
ReltioMDMClient:unmergeEntities
Unmerge entities

Unmerge Entity Tree

Java
ReltioMDMClient:treeUnmergeEntities

Unmerge entities recursively (details in the Reltio tree-unmerge documentation)

Scan entitiesJava
ReltioMDMClient:scanEntities
Iterate entities of a specific type in a particular tenant.
Delete crosswalkJava
ReltioMDMClient:deleteCrosswalk
Deletes crosswalk from an object
Find matchesJava
ReltioMDMClient:findMatches

Returns potential matches based on rules in entity type configuration

Get entity connectionsJava
ReltioMDMClient:getMultipleEntityConnections
Get connected entities
Get entity by a crosswalkJava
ReltioMDMClient:getEntityByCrosswalk
Get entity by the crosswalk
Delete relation by a crosswalkJava
ReltioMDMClient:deleteRelation
Delete relation by relation URI
Get relationJava
ReltioMDMClient:getRelation
Get relation by relation URI
Create/Update relationJava
ReltioMDMClient:createRelation
Create/update relation
Scan relationsJava
ReltioMDMClient:scanRelations

Iterate relations of a specific type in a particular tenant.

Get relation by a crosswalkJava
ReltioMDMClient:getRelationByCrosswalk
Get relation by the crosswalk
Batch create relationJava
ReltioMDMClient:createRelations
Batch create relation
Search for change requestsJava
ReltioMDMClient:search
Search for change requests by entity crosswalk
Get change request by URIJava
ReltioMDMClient:getChangeRequest
Get change request by URI
Create change requestJava
ReltioMDMClient:createChangeRequest
Create change request - internal
Delete change requestJava
ReltioMDMClient:deleteChangeRequest
Delete change request
Apply change requestJava
ReltioMDMClient:applyChangeRequest
Apply data change request
Reject change requestJava
ReltioMDMClient:rejectChangeRequest
Reject data change request
Add/update external info
ReltioMDMClient:createOrUpdateExternalInfo
Add external info to specified DCR

Dependencies


ComponentInterfaceDescription
Reltio







GET {TenantURL}/entities/{Entity ID}

Get detailed information about the entity

https://docs.reltio.com/entitiesapi/getentity.html

GET {TenantURL}/entities

Get basic (or chosen) information about entities based on applied filters

https://docs.reltio.com/mulesoftconnector/getentities_2.html

GET {TenantURL}/entities/_byCrosswalk/{crosswalkValue}?type={sourceType}

Get entity by crosswalk

https://docs.reltio.com/entitiesapi/getentitybycrosswalk_2.html

DELETE {TenantURL}/{entity object URI}

Delete entity

https://docs.reltio.com/entitiesapi/deleteentity.html

POST {TenantURL}/entities

Create/update a single entity or a batch of entities

https://docs.reltio.com/entitiesapi/createentities.html

POST {TenantURL}/entities/_dbscan
https://docs.reltio.com/searchapi/iterateentitiesbytype.html?hl=_dbscan
POST {TenantURL}/entities/{winner}/_sameAs?uri=entities/{loser}

Merge entities based on loser and winner IDs

https://docs.reltio.com/mergeapis/mergingtwoentities.html

POST {TenantURL}/<origin id>/_unmerge?contributorURI=<spawn URI>

Unmerge entities

https://docs.reltio.com/mergeapis/unmergeentitybycontriburi.html

POST {TenantURL}/<origin id>/_treeUnmerge?contributorURI=<spawn URI>

Tree unmerge entities

https://docs.reltio.com/mergeapis/unmergeentitybycontriburi.html

GET {TenantURL}/relations/

Get relation by relation URI

https://docs.reltio.com/relationsapi/getrelationship.html

POST {TenantURL}/relations

Create relation

https://docs.reltio.com/relationsapi/createrelationships.html

POST {TenantURL}/relations/_dbscan
https://docs.reltio.com/relationsapi/iteraterelationshipbytype.html?hl=relations%2F_dbscan
GET {TenantURL}/changeRequests

Get change request

https://docs.reltio.com/dcrapi/searchdcr.html

GET {TenantURL}/changeRequests/{id}

Returns a data change request by ID.

https://docs.reltio.com/dcrapi/getdatachangereq.html

POST {TenantURL}/changeRequests 

Create data change request

https://docs.reltio.com/dcrapi/createnewdatachangerequest.html

DELETE {TenantURL}/changeRequests/{id} 

Delete data change request

https://docs.reltio.com/dcrapi/deletedatachangereq.html

POST {TenantURL}/changeRequests/_byUris/_apply

This API applies (commits) all changes inside a data change request to real entities and relationships.

https://docs.reltio.com/dcrapi/applydcr.html

POST {TenantURL}/changeRequests/_byUris/_reject

Reject data change request

https://docs.reltio.com/dcrapi/rejectdcr.html

POST {TenantURL}/entities/_matches

Returns potential matches based on rules in entity type configuration.
https://docs.reltio.com/matchesapi/serachpotentialmatchesforjsonentity.html
POST {TenantURL}/_connections

Get connected entities
https://docs.reltio.com/relationsapi/requestdifferententityconnections.html?hl=_connections
DELETE /{crosswalk URI}

Delete crosswalk

https://docs.reltio.com/mergeapis/dataapicrosswalks.html?hl=delete,crosswalkdataapicrosswalks__deletecrosswalk#dataapicrosswalks__deletecrosswalk


POST {TenantURL}/changeRequests/0000OVV/_externalInfo

Add/update external info to DCR

https://docs.reltio.com/dcrapi/addexternalinfotochangereq.html?hl=_externalinfo


Configuration


Config ParameterDefault valueDescription
mdmConfig.authURLnullReltio authentication URL
mdmConfig.baseURLnullReltio base URL
mdmConfig.rdmUrlnullReltio  RDM URL

mdmConfig.username

nullReltio username
mdmConfig.passwordnullReltio password
mdmConfig.apiKeynullReltio apiKey
mdmConfig.apiSecretnullReltio apiSecret
translateCache.milisecondsToExpire


translateCache.objectsLimit
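The parameters above could be grouped in application.yml as in the following sketch. All URLs and the translateCache values are hypothetical placeholders, not real tenant settings.

```yaml
# Sketch of the Reltio adapter configuration, assuming the parameters
# in the table above live under these two sections.
mdmConfig:
  authURL: https://auth.example.reltio.com/oauth/token      # hypothetical
  baseURL: https://example.reltio.com/reltio/api/<tenantId> # hypothetical
  rdmUrl: https://rdm.example.reltio.com/lookups/<tenantId> # hypothetical
  username: ${RELTIO_USER}
  password: ${RELTIO_PASSWORD}
  apiKey: ${RELTIO_API_KEY}
  apiSecret: ${RELTIO_API_SECRET}
translateCache:
  milisecondsToExpire: 3600000   # parameter spelling as in the table
  objectsLimit: 10000            # hypothetical value
```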

" + }, + { + "title": "Map Channel", + "pageID": "302697819", + "pageLink": "/display/GMDM/Map+Channel", + "content": "

Description

Map Channel integrates data from the GCP and GRV systems. External systems use an SQS queue or REST API to load data. The data is then copied to an internal queue, which allows processing to be redone at a later time. The identifier and market contained in the data are used to retrieve complete data via REST requests. The data is then sent to the Manager component for storage in the MDM system. The application provides features for filtering events by country, status or permissions. This component uses different mappers to process data for the COMPANY or IQVIA data model.


Technology: Java, Spring, Apache Camel

Code link: map-channel

Flows

Exposed interfaces


Interface nameTypeEndpoint patternDescription
create contactREST API

POST /gcp

create HCP profile based on GCP contact data

update contactREST APIPUT /gcp/{gcpId}update HCP profile based on GCP contact data
create userREST APIPOST /grvcreate HCP profile based on GRV user data
update userREST APIPUT /grv/{grvId}update HCP profile based on GRV user data


Dependent components


ComponentInterfaceDescription
ManagerREST API

create HCP, create HCO, update HCP, update HCO

Configuration

The configuration is a composition of dependent components' configurations and the parameters specified below.


Kafka processing config


Config paramDefault valueDescription
kafkaProducerProp
kafka producer properties
kafkaConsumerProp
kafka consumer properties
processing.endpoints
kafka internal topics configuration
processing.endpoints.[endpoint-type].topic
kafka endpoint-type topic name
processing.endpoints.[endpoint-type].activeOnStartup
should endpoint start on application startup
processing.endpoints.[endpoint-type].consumerCount
kafka endpoint consumer count
processing.endpoints.[endpoint-type].breakOnFirstError
should kafka rebalance on error
processing.endpoints.[endpoint-type].autoCommitEnable
should kafka auto commit be enabled
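The endpoint parameters above could be combined as in the following sketch; the "gcp" endpoint type, topic name and counts are hypothetical examples.

```yaml
# Sketch of the Kafka processing configuration for one endpoint type.
processing:
  endpoints:
    gcp:                                   # hypothetical endpoint type
      topic: dev-internal-map-channel-gcp  # hypothetical topic name
      activeOnStartup: true
      consumerCount: 5
      breakOnFirstError: false
      autoCommitEnable: true
```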

DEG config

Config paramDefault valueDescription
DEG.url
DEG gateway URL
DEG.oAuth2Service
DEG authorization service URL
DEG.protocol
DEG protocol
DEG.port
DEG port
DEG.prefix
DEG API prefix

Transaction log config

Config paramDefault valueDescription
transactionLogger.kafkaEfk.enable
should kafka efk transaction logger enable
transactionLogger.kafkaEfk.kafkaProducer.topic
kafka efk topic name
transactionLogger.kafkaEfk.logContentOnlyOnFailed
Log request body only on failed transactions
transactionLogger.simpleLog.enable
should simple console transaction logger enable


Filter config


Config paramDefault valueDescription
activeCountries.GRV
list of allowed GRV countries
activeCountries.GCP
list of allowed GCP countries
deactivatedStatuses.[Source].[Country]
list of ValidationStatus attribute values for which HCP will be deleted for given country and source
deactivateGCPContactWhenInactive
list of countries for which the GCP contact will be deleted when it is inactive
deactivatedWhenNoPermissions
list of countries for which the GCP contact will be deleted when contact permissions are missing
deleteOption.[Source].none
HCP will be sent to MDM when deleted date is present
deleteOption.[Source].hard
call delete crosswalk action when deleted date is present
deleteOption.[Source].soft
call update HCP when delete date is present
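A filter configuration following the parameters above might look like the following sketch. Country codes and status values are hypothetical, and the exact shape of the deleteOption entry is an assumption inferred from the parameter names.

```yaml
# Sketch of the Map Channel filter configuration (illustrative values).
activeCountries:
  GRV:
    - DE
    - FR
  GCP:
    - DE
deactivatedStatuses:
  GCP:
    DE:
      - Invalid        # hypothetical ValidationStatus value
deleteOption:
  GCP: soft            # shape assumed: update HCP when a delete date is present
```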

Mapper config

Config paramDefault valueDescription
gcpMapper
name of GCP mapper implementation
grvMapper
name of GRV mapper implementation

Mappings

IQVIA mapping

\"\"

COMPANY mapping

\"\"

" + }, + { + "title": "MDM Admin", + "pageID": "284817212", + "pageLink": "/display/GMDM/MDM+Admin", + "content": "

Description

MDM Admin exposes an API of tools automating repetitive and/or difficult Operating Procedures and Tasks. It also aggregates APIs of various Hub components that should not be exposed publicly, while providing an authorization layer. Permissions for each Admin operation can be granted to a client's API user.

Flows

Exposed interfaces

REST API

Swagger: https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-prod/swagger-ui/index.html

Dependent components

ComponentInterfaceFlowDescription
Reconciliation ServiceReconciliation Service APIEntities ReconciliationAdmin uses internal Reconciliation Service API to trigger reconciliations. Passes the same inputs and returns the same results.
Relations Reconciliation
Partials Reconciliation
Precallback ServicePrecallback Service APIPartials ListAdmin fetches a list of partials directly from Precallback Service and returns it to the user or uses it to reconcile all entities stuck in partial state.
Partials Reconciliation
AirflowAirflow APIEvents ResendAdmin allows triggering an Airflow DAG with request parameters/body and checking its status.
Events Resend Complex
KafkaKafka Client/Admin APIKafka OffsetsAdmin allows modifying topic/group offsets.

Configuration

Config Parameter

Default value

Description

airflow-config:
url: https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com
user: admin
password: ${airflow.password}
dag: reconciliation_system_amer_dev

-

Dependent Airflow configuration including external URL, DAG name and credentials. Entities Reload operation will trigger a DAG of configured name in the configured Airflow instance.
services:
reconciliationService: mdmhub-mdm-reconciliation-service-svc:8081
precallbackService: mdmhub-precallback-service-svc:8081
URLs of dependent services. Default values lead to internal Kubernetes services.
" + }, + { + "title": "MDM Integration Tests", + "pageID": "302687584", + "pageLink": "/display/GMDM/MDM+Integration+Tests", + "content": "

Description

The module contains Integration Tests. All Integration Tests are divided into categories based on the environment on which they are executed.

Technology:

Gradle tasks

The table shows which environment uses which gradle task.

EnvironmentGradle taskConfiguration properties
ALL

commonIntegrationTests

-
GBLUS

integrationTestsForCOMPANYModelRegionUS

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_gblus/group_vars/gw-services/int_tests.yml
CHINA

integrationTestsForCOMPANYModelChina

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_devchina_apac/group_vars/gw-services/int_tests.yml
EMEA

integrationTestsForCOMPANYModelRegionEMEA

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_emea/group_vars/gw-services/int_tests.yml

APACintegrationTestsForCOMPANYModelRegionAPAChttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_apac/group_vars/gw-services/int_tests.yml
AMER

integrationTestsForCOMPANYModelRegionAMER

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_amer/group_vars/gw-services/int_tests.yml
OTHERS

integrationTestsForIqviaModel

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_gbl/group_vars/gw-services/int_tests.yml

The Jenkins script with configuration: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/jenkins/k8s_int_test.groovy

Gradle tasks - IT categories

The table shows which test categories are included in gradle tasks.

Gradle taskTest category

commonIntegrationTests

  • CommonIntegrationTest

integrationTestsForCOMPANYModelRegionUS

  • IntegrationTestForCOMPANYModel
  • IntegrationTestForCOMPANYModelRegionUS
integrationTestsForCOMPANYModelChina
  • IntegrationTestForCOMPANYModel
  • IntegrationTestForCOMPANYModelChina
integrationTestsForCOMPANYModel
integrationTestsForCOMPANYModelRegionAMER
integrationTestsForCOMPANYModelRegionAPAC
integrationTestsForCOMPANYModelRegionEMEA
integrationTestsForIqviaModel
  • IntegrationTestForIqiviaModel

Tests are configured in build.gradle file: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/build.gradle?at=refs%2Fheads%2Fproject%2Fboldmove

Test use cases included in categories

Test categoryTest use cases

CommonIntegrationTest

Common Integration Test

IntegrationTestForIqiviaModel

Integration Test For Iqvia Model

IntegrationTestForCOMPANYModel

Integration Test For COMPANY Model

IntegrationTestForCOMPANYModelRegionUS

Integration Test For COMPANY Model Region US

IntegrationTestForCOMPANYModelChina

Integration Test For COMPANY Model China

IntegrationTestForCOMPANYModelRegionAMER

Integration Test For COMPANY Model Region AMER

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

Integration Test For COMPANY Model DCR2Service

IntegrationTestsForCOMPANYModelRegionEMEA

Integration Test For COMPANY Model Region EMEA

" + }, + { + "title": "Nucleus Subscriber", + "pageID": "164469790", + "pageLink": "/display/GMDM/Nucleus+Subscriber", + "content": "

Description

Nucleus subscriber collects events from Amazon AWS S3, modifies them and then transfers them to the right Kafka topic.

Data changes are stored as archive files on S3, from where they are then pulled by the Nucleus subscriber.
The next step is to modify the event from the Reltio format to one accepted by the MDM Hub. The modified data is then transferred to the appropriate Kafka topic.

Data pulls from S3 are performed periodically, so the changes made are visible only after some time.


Part of: Streaming channel

Technology: Java, Spring, Apache Camel

Code link: nucleus-subscriber

Flows

Exposed interfaces


Interface NameTypeEndpoint patternDescription
Kafka topic KAFKA
{env}-internal-nucleus-events
Events pulled from S3 are then transformed and published to the Kafka topic

Dependencies


ComponentInterfaceFlowDescription
AWS S3
Entity change events processing (Nucleus)Stores events regarding data modification in reltio
Entity enricher

Nucleus Subscriber downstream component. Collects events from Kafka and produces events enriched with the targetEntity

Configuration


Config ParameterDefault valueDescription
nucleus_subscriber.server.port

8082

Nucleus subscriber port
nucleus_subscriber.kafka.servers

10.192.71.136:9094

Kafka server
nucleus_subscriber.lockingPolicy.zookeeperServer

null

Zookeeper server
nucleus_subscriber.lockingPolicy.groupName

null

Zookeeper group name
nucleus_subscriber.deduplicationCache.maxSize

100000


nucleus_subscriber.deduplicationCache.expirationTimeSeconds

3600


nucleus_subscriber.kafka.groupId

hub

Kafka group Id
nucleus_subscriber.kafka.username

null

Kafka username
nucleus_subscriber.kafka.password

null

Kafka user password
nucleus_subscriber.publisher.entities.topic

dev-internal-integration-tests


nucleus_subscriber.publisher.dictioneries.topic

dev-internal-reltio-dictionaries-events


nucleus_subscriber.publisher.relationships.topic

dev-internal-integration-tests


nucleus_subscriber.mongoConnectionProp.dbUrl

null

MongoDB url
nucleus_subscriber.mongoConnectionProp.dbName

null

MongoDB database name
nucleus_subscriber.mongoConnectionProp.user

null

MongoDB user
nucleus_subscriber.mongoConnectionProp.password

null

MongoDB user password
nucleus_subscriber.mongoConnectionProp.chechConnectionOnStartup

null

Check connection on startup( yes/no )
nucleus_subscriber.poller.type

file

Source type
nucleus_subscriber.poller.enableOnStartup

yes

Enable on startup( yes/no )
nucleus_subscriber.poller.fileMask

null

Input files mask
nucleus_subscriber.poller.bucketName

candf-mesos

Name of S3 bucket
nucleus_subscriber.poller.processingTimeoutMs

3000000

Timeout in milliseconds
nucleus_subscriber.poller.inputFolder

C:/PROJECTS/COMPANY/GIT/mdm-publishing-hub/nucleus-subscriber/src/test/resources/data

Input directory
nucleus_subscriber.poller.outputFolder

null

Output directory
nucleus_subscriber.poller.key

null

Poller key
nucleus_subscriber.poller.secret

null

Poller secret
nucleus_subscriber.poller.region

EU_WEST_1

Poller region
nucleus_subscriber.poller.alloweSubDirs

null

Allowed subdirectories (e.g. by country code - AU, CA)
nucleus_subscriber.fileFormat.hcp

.*Professional.exp

Input file format for hcp
nucleus_subscriber.fileFormat.hco

.*Organization.exp

Input file format for hco
nucleus_subscriber.fileFormat.dictionary

.*Code_Header.exp

Input file format for dictionary
nucleus_subscriber.fileFormat.dictionaryItem

.*Code_Item.exp

Input file format for dictionary item
nucleus_subscriber.fileFormat.dictionaryItemDesc

.*Code_Item_Description.exp

Input file format for dictionary item description
nucleus_subscriber.fileFormat.dictionaryItemExternal

.*Code_Item_External.exp

Input file format for dictionary item external

nucleus_subscriber.fileFormat.customerMerge

.*customer_merge.exp

Input file format for customer merge

nucleus_subscriber.fileFormat.specialty

.*Specialty.exp

Input file format for specialty

nucleus_subscriber.fileFormat.address

.*Address.exp

Input file format for address

nucleus_subscriber.fileFormat.degree

.*Degree.exp

Input file format for degree

nucleus_subscriber.fileFormat.identifier

.*Identifier.exp

Input file format for identifier

nucleus_subscriber.fileFormat.communication

.*Communication.exp

Input file format for communication

nucleus_subscriber.fileFormat.optout

.*Optout.exp

Input file format for optout
nucleus_subscriber.fileFormat.affiliation

.*Affiliation.exp

Input file format for affiliation
nucleus_subscriber.fileFormat.affiliationRole

.*AffiliationRole.exp

Input file format for affiliation role
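An S3 poller configuration combining the parameters above could look like the following sketch; the bucket name, credentials and allowed subdirectories are hypothetical.

```yaml
# Sketch of the Nucleus subscriber S3 poller configuration.
nucleus_subscriber:
  poller:
    type: s3                        # "file" default switched to S3 polling
    enableOnStartup: yes
    bucketName: mdm-nucleus-exports # hypothetical bucket name
    region: EU_WEST_1
    key: ${AWS_ACCESS_KEY}
    secret: ${AWS_SECRET_KEY}
    processingTimeoutMs: 3000000
    alloweSubDirs:                  # parameter spelling as in the table
      - AU
      - CA
```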

" + }, + { + "title": "OK DCR Service", + "pageID": "164469929", + "pageLink": "/display/GMDM/OK+DCR+Service", + "content": "

Description

Validation of information regarding healthcare institutions and professionals based on ONE KEY webservices database

Flows

Exposed interfaces


Interface NameTypeEndpoint patternDescription
internal onekeyvr inputKAFKA
${env}-internal-onekeyvr-in
Events sent by the event publisher component. Event types considered: HCP_*, HCO_*, ENTITY_MATCHES_CHANGED
internal onekeyvr change requests inputKAFKA
${env}-internal-onekeyvr-change-requests-in

Dependent components


ComponentInterfaceFlowDescription

Manager





GetEntitygetEntitygetting the entity from RELTIO
MDMIntegrationService


getMatchesgetting matches from RELTIO
translateLookupstranslating lookup codes
createEntityDCR entity created in Reltio and the relation between the processed entity and the DCR entity
createResponse
patchEntityupdating the entity in RELTIO

Both ONEKEY service and the Manager service are called with the retry policy.

Configuration


Config ParameterDefault valueDescription
onekey.oneKeyIntegrationService.url${oneKeyClient.url}
onekey.oneKeyIntegrationService.userName${oneKeyClient.userName}
onekey.oneKeyIntegrationService.password${oneKeyClient.password}
onekey.oneKeyIntegrationService.connectionPoint${oneKeyClient.connectionPoint}
onekey.oneKeyIntegrationService.logMessages${oneKeyClient.logMessages}
onekey.oneKeyIntegrationService.retrying.maxAttemts22Limit to the number of attempts -> Exponential Back Off
onekey.oneKeyIntegrationService.retrying.initialIntervalMs1000Initial interval -> Exponential Back Off
onekey.oneKeyIntegrationService.retrying.multiplier2.0Multiplier -> Exponential Back Off
onekey.oneKeyIntegrationService.retrying.maxIntervalMs3600000Max interval -> Exponential Back Off
onekey.gatewayIntegrationService.url${gateway.url}
onekey.gatewayIntegrationService.userName${gateway.userName}
onekey.gatewayIntegrationService.apiKey${gateway.apiKey}
onekey.gatewayIntegrationService.logMessages${gateway.logMessages}
onekey.gatewayIntegrationService.timeoutMs${gateway.timeoutMs}
onekey.gatewayIntegrationService.gatewayRetryConfig.maxAttemts22
onekey.gatewayIntegrationService.gatewayRetryConfig.initialIntervalMs1000
onekey.gatewayIntegrationService.gatewayRetryConfig.multiplier2.0
onekey.gatewayIntegrationService.gatewayRetryConfig.maxIntervalMs3600000
onekey.gatewayIntegrationService.gatewayRetryConfig.maxAttemts22Limit to the number of attempts -> Exponential Back Off
onekey.gatewayIntegrationService.gatewayRetryConfig.initialIntervalMs1000Initial interval -> Exponential Back Off
onekey.gatewayIntegrationService.gatewayRetryConfig.multiplier2.0Multiplier -> Exponential Back Off
onekey.gatewayIntegrationService.gatewayRetryConfig.maxIntervalMs3600000Max interval -> Exponential Back Off
onekey.submitVR.eventInputTopic${env}-internal-onekeyvr-inSubmit Validation input topic
onekey.submitVR.skipEventTypeSuffix

_REMOVED

_INACTIVATED

_LOST_MERGE

Submit Validation event type string endings to skip
onekey.submitVR.storeNamewindow-deduplication-storeInternal kafka topic that stores events to deduplicate
onekey.submitVR.window.duration4hThe size of the windows.
onekey.submitVR.window.name<no value>Internal kafka topic that stores events being grouped by.
onekey.submitVR.window.gracePeriod0The grace period to admit out-of-order events to a window.
onekey.submitVR.window.byteLimit107374182Maximum number of bytes the size-constrained suppression buffer will use.
onekey.submitVR.window.suppressNamedcr-suppressThe specified name for the suppression node in the topology.
onekey.traceVR.enabletrue
onekey.traceVR.minusExportDateTimeMillis3600000
onekey.traceVR.schedule.cron0 0 * ? * * # every hour






quartz.properties.org.quartz.scheduler.instanceNamemdm-onekey-dcr-service

Can be any string, and the value has no meaning to the scheduler itself - but rather serves as a mechanism for client code to distinguish schedulers when multiple instances are used within the same program. If you are using the clustering features, you must use the same name for every instance in the cluster that is ‘logically’ the same Scheduler.

quartz.properties.org.quartz.scheduler.skipUpdateChecktrue

Whether or not to skip running a quick web request to determine if there is an updated version of Quartz available for download. If the check runs, and an update is found, it will be reported as available in Quartz’s logs. You can also disable the update check with the system property “org.terracotta.quartz.skipUpdateCheck=true” (which you can set in your system environment or as a -D on the java command line). It is recommended that you disable the update check for production deployments.

quartz.properties.org.quartz.scheduler.instanceIdGenerator.classorg.quartz.simpl.HostnameInstanceIdGenerator

Only used if org.quartz.scheduler.instanceId is set to "AUTO". Defaults to "org.quartz.simpl.SimpleInstanceIdGenerator", which generates an instance id based upon host name and time stamp. Other InstanceIdGenerator implementations include SystemPropertyInstanceIdGenerator (which gets the instance id from the system property "org.quartz.scheduler.instanceId") and HostnameInstanceIdGenerator, which uses the local host name (InetAddress.getLocalHost().getHostName()). You can also implement the InstanceIdGenerator interface yourself.

quartz.properties.org.quartz.jobStore.classcom.novemberain.quartz.mongodb.MongoDBJobStore
quartz.properties.org.quartz.jobStore.mongoUri${mongo.url}
quartz.properties.org.quartz.jobStore.dbName${mongo.dbName}
quartz.properties.org.quartz.jobStore.collectionPrefix quartz-onekey-dcr
quartz.properties.org.quartz.scheduler.instanceIdAUTO

Can be any string, but must be unique for all schedulers working as if they are the same ‘logical’ Scheduler within a cluster. You may use the value “AUTO” as the instanceId if you wish the Id to be generated for you. Or the value “SYS_PROP” if you want the value to come from the system property “org.quartz.scheduler.instanceId”.

quartz.properties.org.quartz.jobStore.isClusteredtrue
quartz.properties.org.quartz.threadPool.threadCount1
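The exponential back-off retry settings from the table above could be expressed as in the following sketch; the values mirror the listed defaults, and the parameter spelling follows the table.

```yaml
# Sketch of the retry configuration for the ONEKEY client; the gateway
# client (onekey.gatewayIntegrationService.gatewayRetryConfig) takes the
# same four parameters.
onekey:
  oneKeyIntegrationService:
    retrying:
      maxAttemts: 22            # spelling as in the parameter table
      initialIntervalMs: 1000   # 1 s initial wait, doubled on each retry
      multiplier: 2.0
      maxIntervalMs: 3600000    # interval capped at 1 h
```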












" + }, + { + "title": "Publisher", + "pageID": "164469927", + "pageLink": "/display/GMDM/Publisher", + "content": "

Description

Publisher is a member of the Streaming channel. It distributes events to target client topics based on configured routing rules.

Main tasks:


Technology: Java, Spring, Kafka

Code: event-publisher

Flows

Exposed interfaces


Interface NameTypeEndpoint patternDescription

Kafka - input topics for entities data


KAFKA

${env_name}-internal-reltio-proc-events

${env_name}-internal-nucleus-events

Stores events about entities, relations and change requests changes.
Kafka - input topics for dictionaries dataKAFKA

${env_name}-internal-reltio-dictionaries-events

${env_name}-internal-nucleus-dictionaries-events

Stores events about lookup (LOV) changes.

Kafka - output topics

KAFKA

${env_name}-out-*

*(All topics that get events from publisher)

Output topics for Publisher.

After the filtration process, each event is transferred to the appropriate topic based on routing rules defined in the configuration

Resend eventsREST

POST /resendLastEvent

Allows triggering event reconstruction. Events are created based on the current state fetched from MongoDB and then forwarded according to the defined routing rules.
Mongo's collections

Mongo collectionentityHistoryCollection storing the last known state of entities data
Mongo collectionentityRelationsCollection storing the last known state of relations data
Mongo collectionLookupValuesCollection storing the last known state of lookups (LOVs) data

Dependencies


ComponentInterfaceFlowDescription
Callback ServiceKAFKA

Creates input for Publisher

Responsible for following transformations:

  • HCO names calculation
  • Dangling affiliations
  • Crosswalk cleaner
  • Precallback stream
MongoDB
Stores the last known state of objects such as entities and relations. Used as a cache to reduce Reltio load; updated after every entity change event
Kafka Connect Snowflake connectorKAFKA

Snowflake: Events publish flow

Receives events from the publisher and loads them into the Snowflake database
Clients of the HUB

Clients that receive events from MDM HUB

MAPP, China, etc

Configuration


Config ParameterDefault valueDescription

event_publisher.users

null

Publisher users dictionary used to authenticate user in ResendService operations.

User parameters:

  • name,
  • description,
  • roles (list) - currently there is only one role which can be assigned to a user:
    • RESEND_EVENT - a user with this role is allowed to use the resend-last-event operation

event_publisher.activeCountries
- AD
- BL
- FR
- GF
- GP
- MC
- MF
- MQ
- MU
- NC
- PF
- PM
- RE
- WF
- YT
- CN
List of active countries

event_publisher.lookupValuesPoller.

interval

60mInterval of lookups (LOVs) from Reltio

event_publisher.lookupValuesPoller.

batchSize

1000Poller batch size

event_publisher.lookupValuesPoller.

enableOnStartup

yes

Enable on startup

( yes/no )

event_publisher.lookupValuesPoller.

dbCollectionName

LookupValuesName of the Mongo collection storing fetched lookup data

event_publisher.eventRouter.incomingEvents

incomingEvents:
reltio:
topic: dev-internal-reltio-entity-and-relation-events
enableOnStartup: no
startupOrder: 10
properties:
autoOffsetReset: latest
consumersCount: 20
maxPollRecords: 50
pollTimeoutMs: 30000
Configuration of the incoming topic with events regarding entities, relations etc.

event_publisher.eventRouter.dictionaryEvents

dictionaryEvents:
reltio:
topic: dev-internal-reltio-dictionaries-events
enableOnStartup: true
startupOrder: 30
properties:
autoOffsetReset: earliest
consumersCount: 10
maxPollRecords: 5
pollTimeoutMs: 30000

Configuration of incoming topic with events regarding dictionary changes.

event_publisher.eventRouter.historyCollectionName

entityHistoryName of the collection storing entities state

event_publisher.eventRouter.relationCollectionName

entityRelationsName of the collection storing relations state

event_publisher.eventRouter.routingRules.[]

null

List of routing rules. Routing rule definition has following parameters

  • id - unique identifier of rule,
  • selector - conditional expression written in groovy which filters incoming events,
  • destination - topic name.
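A routing rule following the three-field definition above might look like the following sketch. The selector expression, the event fields it references and the topic name are hypothetical.

```yaml
# Sketch of one routing rule; selector is a Groovy expression over the
# incoming event (field names here are assumptions, not the real schema).
event_publisher:
  eventRouter:
    routingRules:
      - id: hcp-events-to-mapp
        selector: "event.type.startsWith('HCP') && event.country == 'US'"
        destination: dev-out-mapp-events
```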
" + }, + { + "title": "Raw data service", + "pageID": "337869880", + "pageLink": "/display/GMDM/Raw+data+service", + "content": "

Description

Raw data service is the component used to process source data. It removes expired data in real time and provides a REST interface for restoring source data in the environment.

Flows



Exposed interfaces

Batch Controller - manage batch instances

Interface nameTypeEndpoint patternDescription
Restore entitiesREST API

POST /restore/entities

Restore entities for selected parameters: entity types, sources, countries, date from

1. Create consumer for entities topic and given offset - date from

2. Poll and filter records

3. Produce data to bundle input topic

Restore relationsREST API

POST /restore/relations

Restore entities for selected parameters: sources, countries, relation types and date from

1. Create consumer for relations topic and given offset - date from

2. Poll and filter records

3. Produce data to bundle input topic

Count entitiesREST API

POST /restore/entities/count

Count entities for selected parameters: entity types, sources, countries, date from

Count relationsREST API

POST /restore/relations/count

Count relations for selected parameters: sources, countries, relation types and date from
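The restore steps listed above can be sketched as follows (the Kafka consumer and producer are replaced by plain lists, and the record fields timestamp/entityType/source/country are assumed names for illustration, not the real record schema):

```python
# Illustrative sketch of the restore flow: step 1 (creating a consumer at
# the offset for the given date) is omitted; steps 2-3 (poll, filter,
# produce to the bundle input topic) are modelled over in-memory records.

def restore_entities(records, entity_types, sources, countries, date_from):
    """Filter polled records and collect those to re-produce."""
    restored = []
    for rec in records:  # step 2: poll and filter
        if (rec["timestamp"] >= date_from
                and rec["entityType"] in entity_types
                and rec["source"] in sources
                and rec["country"] in countries):
            restored.append(rec)  # step 3: would go to the bundle input topic
    return restored
```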

Configuration

Config paramdescription
kafka.groupIdkafka group id
kafkaOtherother kafka consumer/producer properties
entityTopictopic used to store entity data
relationTopictopic used to store relation data
streamConfig.patchKeyStoreNamestate store name used to store entities patch keys
streamConfig.relationStoreNamestate store name used to store relations patch keys
streamConfig.enabledis raw data stream processor enabled
streamConfig.kafkaOtherraw data processor stream kafka other properties
restoreConfig.enabledis restore api enabled
restoreConfig.consumer.pollTimeoutrestore api kafka topic consumer poll timeout
restoreConfig.consumer.kafkaOtherother kafka consumer properties
restoreConfig.producer.outputrestore data producer output topic - manager bundle input topic
restoreConfig.producer.kafkaOtherother kafka producer properties
" + }, + { + "title": "Reconciliation Service", + "pageID": "164469826", + "pageLink": "/display/GMDM/Reconciliation+Service", + "content": "

Reconciliation service is used to consume reconciliation events from Reltio and decide whether an entity or relation should be refreshed in the Mongo cache. After reconciliation this service also produces metrics: it counts the changes and produces an event with all metadata and statistics about the reconciled entity/relation.


Flows

Reconciliation+HUB-Client

Reconciliation metrics

Configuration

Config Parameter

Default value

Description

reconciliation:
eventInputTopic:
eventOutputTopic:
reconciliation:
eventInputTopic: ${env}-internal-reltio-reconciliation-events
eventOutputTopic: ${env}-internal-reltio-events
Consumes events from eventInputTopic, decides about reconciliation and produces events to eventOutputTopic
reconciliation:
eventMetricsInputTopic:
eventMetricsOutputTopic:

metricRules:
- name:
operationRegexp:
pathRegexp:
valueRegexp:
reconciliation:
eventInputTopic: ${env}-internal-reltio-reconciliation-events
eventOutputTopic: ${env}-internal-reltio-events
eventMetricsInputTopic: ${env}-internal-reltio-reconciliation-metrics-event
eventMetricsOutputTopic: ${env}-internal-reconciliation-metrics-efk-transactions

metricRules:
- name: reconciliation.object.missed
operationRegexp: "remove"
pathRegexp: ""
valueRegexp: ".*"
- name: reconciliation.object.added
operationRegexp: "add"
pathRegexp: ""
valueRegexp: ".*"
- name: reconciliation.lookupcode.error
operationRegexp: "add"
pathRegexp: "^.*/lookupCode$"
valueRegexp: ".*"
- name: reconciliation.lookupcode.changed
operationRegexp: "replace"
pathRegexp: "^.*/lookupCode$"
valueRegexp: ".*"
- name: reconciliation.value.changed
operationRegexp: "add|replace|remove"
pathRegexp: "^/attributes/.+$"
valueRegexp: ".*"
- name: reconciliation.other.reason
operationRegexp: ".*"
pathRegexp: ".*"
valueRegexp: ".*"
Consumes events from eventMetricsInputTopic, calculates the diff between the current and previous event and, based on the diff, produces statistics and metrics. Finally, it produces an event with all this information to eventMetricsOutputTopic
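The metricRules above can be read as a classifier over JSON-patch-style diff entries. A sketch, assuming first-match-wins semantics (the trailing catch-all rule reconciliation.other.reason suggests this, but it is an assumption, not confirmed behaviour) and exact-match application of the three regexps:

```python
import re

# Sketch of classifying one diff entry against the configured metricRules.
# Rule order and full-match semantics are assumptions for illustration.

METRIC_RULES = [
    {"name": "reconciliation.object.missed",     "op": "remove",             "path": "",                    "value": ".*"},
    {"name": "reconciliation.object.added",      "op": "add",                "path": "",                    "value": ".*"},
    {"name": "reconciliation.lookupcode.error",  "op": "add",                "path": r"^.*/lookupCode$",    "value": ".*"},
    {"name": "reconciliation.lookupcode.changed","op": "replace",            "path": r"^.*/lookupCode$",    "value": ".*"},
    {"name": "reconciliation.value.changed",     "op": "add|replace|remove", "path": r"^/attributes/.+$",   "value": ".*"},
    {"name": "reconciliation.other.reason",      "op": ".*",                 "path": ".*",                  "value": ".*"},
]

def classify(diff_entry):
    """Return the metric name of the first rule matching the diff entry."""
    for rule in METRIC_RULES:
        if (re.fullmatch(rule["op"], diff_entry["op"])
                and re.fullmatch(rule["path"], diff_entry["path"])
                and re.fullmatch(rule["value"], str(diff_entry["value"]))):
            return rule["name"]
    return None
```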
" + }, + { + "title": "Reltio Subscriber", + "pageID": "164469916", + "pageLink": "/display/GMDM/Reltio+Subscriber", + "content": "

Description

Reltio subscriber is part of the Reltio events streaming flow. It consumes Reltio events from Amazon SQS, filters and maps them, and transfers them to a Kafka topic.


Part of: Streaming channel

Technology: Java, Spring, Apache Camel

Code link: reltio-subscriber

Flows


Exposed interfaces


Interface NameTypeEndpoint patternDescription
Kafka topic KAFKA
${env}-internal-reltio-events
Events pulled from SQS are transformed and published to the Kafka topic

Dependent components


ComponentInterfaceFlowDescription
Sqs - queue
Entity change events processing (Reltio)It stores events about entity modifications in Reltio
Entity enricher

Reltio Subscriber downstream component. Collects events from Kafka and produces events enriched with the target entity

Configuration


Config ParameterDefault valueDescription
reltio_subscriber.reltio.queue
mpe-01_FLy4mo0XAh0YEbN
Reltio queue name
reltio_subscriber.reltio.queueOwner
930358522410
Reltio queue owner number
reltio_subscriber.reltio.concurrentConsumers1Max number of concurrent consumers
reltio_subscriber.reltio.messagesPerPoll10Messages per poll
reltio_subscriber.publisher.topic
dev-internal-reltio-events
Publisher kafka topic
reltio_subscriber.publisher.enableOnStartup
yes
Enable on startup
reltio_subscriber.publisher.filterSelfMerges
no

Filter self merges
( yes/no )

reltio_subscriber.relationshipPublisher.topic
dev-internal-reltio-relations-events
Relationship publisher kafka topic
reltio_subscriber.dcrPublisher.topicnullDCR publisher kafka topic
reltio_subscriber.kafka.servers
10.192.71.136:9094
Kafka servers
reltio_subscriber.kafka.groupId
hub
Kafka group Id
reltio_subscriber.kafka.saslMechanism
PLAIN
Kafka sasl mechanism
reltio_subscriber.kafka.securityProtocol
SASL_SSL
Kafka security protocol
reltio_subscriber.kafka.sslTruststoreLocation
src/test/resources/client.truststore.jks
Kafka truststore location
reltio_subscriber.kafka.sslTuststorePassword
kafka123
Kafka truststore password
reltio_subscriber.kafka.username
null
Kafka username
reltio_subscriber.kafka.passwordnullKafka user password
reltio_subscriber.kafka.compressionCodecnullKafka compression codec
reltio_subscriber.poller.types3Source type
reltio_subscriber.poller.enableOnStartup

no

Enable on startup( yes/no )
reltio_subscriber.poller.fileMask

.*

Input files mask
reltio_subscriber.poller.bucketName

candf-mesos

Name of S3 bucket
reltio_subscriber.poller.processingTimeoutMs

7200000

Timeout in milliseconds
reltio_subscriber.poller.inputFolder

null

Input directory
reltio_subscriber.poller.outputFolder

null

Output directory
reltio_subscriber.poller.key

null

Poller key
reltio_subscriber.poller.secret

null

Poller secret
reltio_subscriber.poller.region

EU_WEST_1

Poller region
reltio_subscriber.allowedEventTypes
- ENTITY_CREATED
- ENTITY_REMOVED
- ENTITY_CHANGED
- ENTITY_LOST_MERGE
- ENTITIES_MERGED
- ENTITIES_SPLITTED
- RELATIONSHIP_CREATED
- RELATIONSHIP_CHANGED
- RELATIONSHIP_REMOVED
- RELATIONSHIP_MERGED
- RELATION_LOST_MERGE
- CHANGE_REQUEST_CHANGED
- CHANGE_REQUEST_CREATED
- CHANGE_REQUEST_REMOVED
- ENTITIES_MATCHES_CHANGED

Event types that are processed when received.

Other event types are being rejected

reltio_subscriber.transactionLogger.kafkaEfk

.enable

nullTransaction logger enabled( true/false)

reltio_subscriber.transactionLogger.kafkaEfk

.logContentOnlyOnFailed

null

Log content only on failed

( true/false)

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.groupId

nullKafka consumer group Id

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.autoOffsetReset

nullKafka consumer auto offset reset

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.consumerCount

null

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.sessionTimeoutMs

nullSession timeout

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.maxPollRecords

null

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.breakOnFirstError

null

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.consumerRequestTimeoutMs

null

reltio_subscriber.transactionLogger.SimpleLog.enable

null
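The allowedEventTypes gate described above can be sketched as follows (the event type is assumed to be available as a type field on each event; the function name is illustrative):

```python
# Minimal sketch of the event-type gate: only events whose type is on the
# allowedEventTypes list are forwarded; all other event types are rejected.

ALLOWED_EVENT_TYPES = {
    "ENTITY_CREATED", "ENTITY_REMOVED", "ENTITY_CHANGED",
    "ENTITY_LOST_MERGE", "ENTITIES_MERGED", "ENTITIES_SPLITTED",
    "RELATIONSHIP_CREATED", "RELATIONSHIP_CHANGED", "RELATIONSHIP_REMOVED",
    "RELATIONSHIP_MERGED", "RELATION_LOST_MERGE",
    "CHANGE_REQUEST_CHANGED", "CHANGE_REQUEST_CREATED",
    "CHANGE_REQUEST_REMOVED", "ENTITIES_MATCHES_CHANGED",
}

def filter_events(events):
    """Split incoming SQS events into (forwarded, rejected) by type."""
    forwarded = [e for e in events if e["type"] in ALLOWED_EVENT_TYPES]
    rejected = [e for e in events if e["type"] not in ALLOWED_EVENT_TYPES]
    return forwarded, rejected
```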
" + }, + { + "title": "Clients", + "pageID": "164470170", + "pageLink": "/display/GMDM/Clients", + "content": "

This section describes clients (systems) that publish or subscribe to data in MDM systems via the MDM HUB.


Active clients


Aggregated Contact List

COMPANY MDM Team


NameContact
Andrew J. Varganin

Andrew.J.Varganin@COMPANY.com

Sowjanya Tirumala

sowjanya.tirumala@COMPANY.com

John AustinJohn.Austin@COMPANY.com
Trivedi Nishith

Nishith.Trivedi@COMPANY.com


GLOBAL

ClientContacts
MAPDL-BT-Production-Engineering@COMPANY.com
KOL

DL-SFA-INF_Support_PforceOL@COMPANY.com

Solanki, Hardik (US - Mumbai) <hsolanki@COMPANY.com>;

Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>;

China

Ming Ming <MingMing.Xu@COMPANY.com>;

Jiang, Dawei <Dawei.Jiang@COMPANY.com>

MAPP

Shashi.Banda@COMPANY.com

Rajesh.K.Chengalpathy@COMPANY.com

Debbie.Gelfand@COMPANY.com

Dinesh.Vs@COMPANY.com

DL-MAPP-Navigator-Hypercare-Support@COMPANY.com

Japan DWHDL-GDM-ServiceOps-Commercial_APAC@COMPANY.com
GRACEDL-AIS-Mule-Integration-Support@COMPANY.com
EngageDL-BTAMS-ENGAGE-PLUS@COMPANY.com;

Amish.Adhvaryu@COMPANY.com

PTRS

Sagar.Bodala@COMPANY.com

OneMed

Marsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.com

Medic

DL-F&BO-MEDIC@COMPANY.com

GBL US

ClientContacts
CDW

Narayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>

Raman, Krishnan <Krishnan.Raman@COMPANY.com>

ETL

Nayan, Rajeev <Rajeev.Nayan3@COMPANY.com>

Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>

KOL

Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>

Brahma, Bagmita <Bagmita.Brahma2@COMPANY.com>

Solanki, Hardik <Hardik.Solanki@COMPANY.com>


US Trade (FLEX COV)

ClientContacts
Main contacts

Dube, Santosh R <santosh.dube@COMPANY.com>

Manseau, Melissa <Melissa.Manseau@COMPANY.com>

Thirumurthy, Bala Subramanyam <BalaSubramanyam.Thirumurthy@COMPANY.com>

Business Team

Max, Deanna <Deanna.Max@COMPANY.com>

Faddah, Laura Jordan <Laura.Faddah@COMPANY.com>

GIS(file transfer)

Mandala, Venkata <venkata.mandala@COMPANY.com>

Srivastava, Jayant <Jayant.Srivastava@COMPANY.com>




" + }, + { + "title": "KOL", + "pageID": "164470183", + "pageLink": "/display/GMDM/KOL", + "content": "\n

Data pushing


\"\" Figure 22. KOL authentication with Identity ManagerKOL system push data to MDM integration service using REST API. To authenticate, KOL uses external Oauth2 authorization service named Identity Manager to fetch access token. Then system sends the REST request to integration service endpoint which validates access token using Identity Manager API.\n
KOL manages data for several countries. Most of these countries are loaded into the default MDM system (Reltio) supported by the integration service, but for GB, PT, DK and CA the data is sent to Nucleus 360. The decision where the data should be loaded is made by the MDM Manager logic: based on the Country attribute value, the MDM Manager selects the right MDM adapter. It is therefore important to set the Country attribute correctly when updating data; the same rule applies to the country query parameter when fetching data. Thanks to this, the MDM Manager processes the right data in the right MDM system. If data is updated with the Country attribute set incorrectly, the REST request will be rejected. When data is fetched without the country query parameter set, the default MDM (Reltio) is used to resolve the data.
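The adapter-selection logic described above can be sketched as follows (function and adapter names are illustrative, not the real MDM Manager API; the GB/PT/DK/CA routing is taken from the text):

```python
# Sketch of country-based MDM adapter selection: GB, PT, DK and CA are
# routed to Nucleus 360, every other country - and a missing country
# query parameter - falls back to the default MDM (Reltio).

NUCLEUS_COUNTRIES = {"GB", "PT", "DK", "CA"}

def select_mdm_adapter(country=None):
    """Pick the target MDM adapter for a given Country attribute value."""
    if country in NUCLEUS_COUNTRIES:
        return "nucleus360"
    return "reltio"
```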


Event processing


The KOL application receives events in one standard way – a Kafka topic. Events from the Reltio MDM system are published to this topic directly after Reltio has processed the changes, sent the events to SQS and the Event Publisher has processed them. This means that Reltio processes changes and sends events in near real time, so a client listening for events does not have to wait long to receive them.
\n\"\" Figure 23. Difference between processing events in Reltio and Nucleus 360The situation changes when the entity changes are processed by Nucleus 360. This MDM publishes changes once in a while, so the events will be delivered to kafka topic with longer delay.

" + }, + { + "title": "Japan DWH", + "pageID": "164470060", + "pageLink": "/display/GMDM/Japan+DWH", + "content": "

Contacts

Japan DWH Feed Support DL: DL-GDM-ServiceOps-Commercial_APAC@COMPANY.com - it is valid until 15/04/2023

DL-ATP-SERVICEOPS-JPN-DATALAKE@COMPANY.com - it will be valid since 15/04/2023 

Flows

Japan DWH has only one batch process, which consumes the incremental file export from the data warehouse, processes it and loads the data to MDM. This process is based on the incremental batch engine and runs on the Airflow platform.

Input files

The input files are delivered by GIS to AWS S3 bucket.


UATPROD
S3 service accountnot createdsvc_gbi-cc_mdm_japan_rw_s3
S3 Access key IDnot createdAKIATCTZXPPJU6VBUUKB
S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
S3 Foldermdm/UAT/inbound/JAPAN/mdm/inbound/JAPAN/
Input data file mask JPDWH_[0-9]+.zipJPDWH_[0-9]+.zip
CompressionZipZip
FormatFlat files, DWH dedicated format Flat files, DWH dedicated format 

Example

JPDWH_20200421202224.zipJPDWH_20200421202224.zip
SchedulenoneAt 08:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5). The input file is not delivered on Japanese holidays (https://www.officeholidays.com/countries/japan/2020)
Airflow jobinc_batch_jp_stageinc_batch_jp_prod


Data mapping 

The detailed field mappings are presented in the document.

Mapping rules:


Configuration

Flow configuration is stored in the MDM Environment configuration repository. For each environment where the flow should be enabled, the configuration file inc_batch_jp.yml has to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/, and the batch name "inc_batch_jp" has to be added to the "airflow_components" list defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the inc_batch_jp.yml file for the UAT and PROD environments:


UATPROD
inc_batch_jp.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_jp.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_jp.yml

Applying configuration changes is done by executing the Deploy Airflow Components procedure.
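For illustration, a minimal fragment of the all.yml registration described above (the surrounding list entries are assumed; only the inc_batch_jp name is documented here):

```yaml
# inventory/[env name]/group_vars/gw-airflow-services/all.yml
# Illustrative fragment - the real list also contains the other batches
# enabled on this environment.
airflow_components:
  - inc_batch_jp
```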

SOPs

There is no particular SOP procedure for this flow. All common SOPs are described in the "Airflow:" chapter.


" + }, + { + "title": "Nucleus", + "pageID": "164470256", + "pageLink": "/display/GMDM/Nucleus", + "content": "

Contacts

Delivery of the data used by Nucleus's processes is maintained by the Iqvia Team: COMPANY-MDM-Support@iqvia.com

Flows

There are several batch processes that load data extracted from Nucleus to Reltio MDM. Data is delivered for the following countries: Canada, South Korea, Australia, United Kingdom, Portugal and Denmark, as a zip archive available in an S3 bucket.

Input files


UATPROD
S3 service accountnot createdsvc_mdm_project_nuc360_rw-s3
S3 Access key IDnot createdAKIATCTZXPPJTFMGRZFM
S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
S3 Folder

mdm/UAT/inbound/APAC_CCV/AU/

mdm/UAT/inbound/APAC_CCV/KR/

mdm/UAT/inbound/nuc360/inc-batch/GB/

mdm/UAT/inbound/nuc360/inc-batch/PT/

mdm/UAT/inbound/nuc360/inc-batch/DK/

mdm/UAT/inbound/nuc360/inc-batch/CA/

mdm/inbound/nuc360/inc-batch/AU/

mdm/inbound/nuc360/inc-batch/KR/

mdm/inbound/nuc360/inc-batch/GB/

mdm/inbound/nuc360/inc-batch/PT/

mdm/inbound/nuc360/inc-batch/DK/

mdm/inbound/nuc360/inc-batch/CA/

Input data file mask NUCLEUS_CCV_[0-9_]+.zipNUCLEUS_CCV_[0-9_]+.zip
CompressionZipZip
FormatFlat files in CCV format 

Flat files in CCV format 

Example

NUCLEUS_CCV_8000000792_20200609_211102.zipNUCLEUS_CCV_8000000792_20200609_211102.zip
Schedulenone

inc_batch_apac_ccv_au_prod - at 17:00 UTC on every day-of-week from Monday through Friday (0 17 * * 1-5)

inc_batch_apac_ccv_kr_prod - at 08:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5)

inc_batch_eu_ccv_gb_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)

inc_batch_eu_ccv_pt_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)

inc_batch_eu_ccv_dk_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)

inc_batch_amer_ccv_ca_prod - at 17:00 UTC on every day-of-week from Monday through Friday (0 17 * * 1-5)

Airflow's DAGS

inc_batch_apac_ccv_au_stage

inc_batch_apac_ccv_kr_stage

inc_batch_eu_ccv_gb_stage

inc_batch_eu_ccv_pt_stage

inc_batch_eu_ccv_dk_stage

inc_batch_amer_ccv_ca_stage

inc_batch_apac_ccv_au_prod

inc_batch_apac_ccv_kr_prod

inc_batch_eu_ccv_gb_stage

inc_batch_eu_ccv_pt_stage

inc_batch_eu_ccv_dk_stage

inc_batch_amer_ccv_ca_prod


Data mapping

Data mapping is described in the following document.


Configuration

Flow configuration is stored in the MDM Environment configuration repository. For each environment where the flows should be enabled, the configuration files have to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/, and the batch name has to be added to the "airflow_components" list defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the flow configuration files for the UAT and PROD environments:

Flow configuration fileUATPROD
inc_batch_apac_ccv_au.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ccv_au.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ccv_au.yml
inc_batch_apac_ccv_kr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ccv_kr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ccv_kr.yml
inc_batch_eu_ccv_gb.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_gb.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_gb.yml
inc_batch_eu_ccv_pt.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_pt.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_pt.yml
inc_batch_eu_ccv_dk.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_dk.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_dk.yml
inc_batch_amer_ccv_ca.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_amer_ccv_ca.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_amer_ccv_ca.yml

To deploy changes to a DAG's configuration, execute the SOP Deploying DAGs

SOPs

There is no particular SOP procedure for this flow. All common SOPs are described in the "Airflow:" chapter.


" + }, + { + "title": "Veeva New Zealand", + "pageID": "164470112", + "pageLink": "/display/GMDM/Veeva+New+Zealand", + "content": "

Contacts

DL-ATP-APC-APACODS-SUPPORT@COMPANY.com

Flow

The flow transforms Veeva's data to the Reltio model and loads the result to MDM. The data contains HCPs and HCOs from New Zealand.

This flow is divided into two steps:

  1. Pre-processing - Copying source files from Veeva's S3 bucket, filtering them and uploading the result to the HUB's bucket,
  2. Incremental batch - Running the standard incremental batch process.

Each of these steps is realized by a separate Airflow DAG.


Input files


UATPROD
Veeva's S3 service accountSRVC-MDMHUB_GBL_NONPRODSRVC-MDMHUB_GBL
Veeva's S3 Access key IDAKIAYCS3RWHN72AQKG6BAKIAYZQEVFARKMXC574Q
Veeva's S3 bucketapacdatalakeprcaspasp55737apacdatalakeprcaspasp63567
Veeva's S3 bucket regionap-southeast-1ap-southeast-1
Veeva's S3 Folder

project_kangaroo/landing/veeva/sf_account/

project_kangaroo/landing/veeva/sf_address_vod__c/

project_kangaroo/landing/veeva/sf_child_account_vod__c/

project_kangaroo/landing/veeva/sf_account/

project_kangaroo/landing/veeva/sf_address_vod__c/

project_kangaroo/landing/veeva/sf_child_account_vod__c/

Veeva's Input data file mask 

* (all files inside above folders)

* (all files inside above folders)
Veeva's Input data file compression
nonenone
HUB's S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
HUB's S3 Foldermdm/UAT/inbound/APAC_VEEVA/mdm/inbound/APAC_PforceRx/
HUB's input data file maskin_nz_[0-9]+.zipin_nz_[0-9]+.zip
HUB's input data file compressionZipZip
Schedule (is set only for pre-processing DAG)noneAt 06:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5)
Pre-processing Airflow's DAGinc_batch_apac_veeva_wrapper_stageinc_batch_apac_veeva_wrapper_prod
Incremental batch Airflow's DAGinc_batch_apac_veeva_stageinc_batch_apac_veeva_prod


Data mapping

Data mapping is described in the following document.


Configuration

Configuration of this flow is defined in two configuration files. The first, inc_batch_apac_veeva_wrapper.yml, specifies the pre-processing DAG configuration, and the second, inc_batch_apac_veeva.yml, defines the configuration of the DAG for the standard incremental batch process. To activate the flow on an environment, the files should be created in the location inventory/[env name]/group_vars/gw-airflow-services/, and the batch names "inc_batch_apac_veeva_wrapper" and "inc_batch_apac_veeva" have to be added to the "airflow_components" list defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made to the configuration are applied on the environment by running the Deploy Airflow Components procedure.

Below table presents the location of flows configuration files for UAT and PROD env:

Configuration fileUATPROD
inc_batch_apac_veeva_wrapper.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_veeva_wrapper.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_veeva_wrapper.yml
inc_batch_apac_veeva.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_veeva.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_veeva.yml


SOPs

There are no dedicated SOP procedures for this flow. However, remember that this flow consists of two DAGs, both of which have to finish successfully.

All common SOPs are described in the "Incremental batch flows: SOP" chapter.


" + }, + { + "title": "ODS", + "pageID": "164470116", + "pageLink": "/display/GMDM/ODS", + "content": "

Contacts

Flow

The flow transforms ODS's data to the Reltio model and loads the result to MDM. The data contains HCPs and HCOs from the following countries: HK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BL, FR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RS.

This flow is divided into two steps:

  1. Pre-processing - Copying source files from ODS's bucket and then uploading them to the HUB's bucket,
  2. Incremental batch - Running the standard incremental batch process.

Each of these steps is realized by a separate Airflow DAG.

Input files


UAT APACUAT EU

PROD APAC

PROD EU
Supported countriesHK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BLFR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RSHK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BLFR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RS
ODS S3 service accountSRVC-GCMDMS3DEVSRVC-GCMDMS3DEVSRVC-GCMDMS3PRDsvc_gbicc_euw1_prod_partner_gcmdm_rw_s3
ODS S3 Access key IDAKIAYCS3RWHN45FC4MOPAKIAYCS3RWHN45FC4MOPAKIAYZQEVFARE64ESXWHAKIA6NIP3JYIMUIQABMX
ODS S3 bucketapacdatalakeintaspasp100939apacdatalakeintaspasp100939apacdatalakeintaspasp104492pfe-gbi-eu-w1-prod-partner-internal
ODS S3 folder/APACODSD/GCMDM//APACODSD/GCMDM//APACODSD/GCMDM//eu-dmart-odsd-file-extracts/gateway/GATEWAY/ODS/PROD/GCMDM/
ODS Input data file mask ****
ODS Input data file compressionzipzipzipzip
HUB's S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectpfe-baiaes-eu-w1-project
HUB's S3 Foldermdm/UAT/inbound/ODS/APAC/mdm/UAT/inbound/ODS/EU/mdm/inbound/ODS/APAC/mdm/inbound/ODS/EU/
HUB's input data file mask****
HUB's input data file compressionzipzipzipzip
Pre-processing Airflow's DAGmove_ods_apac_export_stagemove_ods_eu_export_stagemove_ods_apac_export_prodmove_ods_eu_export_prod
Pre-processing Airflow's DAG schedulenonenone0 6 * * 1-50 7 * * 2  (At 07:00 on Tuesday.)
Incremental batch Airflow's DAGinc_batch_apac_ods_stageinc_batch_eu_ods_stageinc_batch_apac_ods_prodinc_batch_eu_ods_prod
Incremental batch Airflow's DAG schedulenonenone0 8 * * 1-50 8 * * 2 (At 08:00 on Tuesday.)

Data mapping

Data mapping is described in the following document.


Configuration

Configuration of this flow is defined in two configuration files. The first, move_ods_apac_export.yml, specifies the pre-processing DAG configuration, and the second, inc_batch_apac_ods.yml, defines the configuration of the DAG for the standard incremental batch process. To activate the flow on an environment, the files should be created in the location inventory/[env name]/group_vars/gw-airflow-services/, and the batch names "move_ods_apac_export" and "inc_batch_apac_ods" have to be added to the "airflow_components" list defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made to the configuration are applied on the environment by running the Deploy Airflow Components procedure.

Below table presents the location of flows configuration files for UAT and PROD env:

Configuration fileUATPROD
move_ods_apac_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/move_ods_apac_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/move_ods_apac_export.yml
inc_batch_apac_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ods.yml
move_ods_eu_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/move_ods_eu_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/move_ods_eu_export.yml
inc_batch_eu_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ods.yml


SOPs

There are no dedicated SOP procedures for this flow. However, remember that this flow consists of two DAGs, both of which have to finish successfully.

All common SOPs are described in the "Incremental batch flows: SOP" chapter.


" + }, + { + "title": "China", + "pageID": "164470000", + "pageLink": "/display/GMDM/China", + "content": "

ACLs

NameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopic
China client access
china-client
Key AuthN/A
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCO"
- "UPDATE_HCP"
- "GET_ENTITIES"
- CN
- "CN3RDPARTY"
- "MDE"
- "FACE"
- "EVR"
- dev-out-full-mde-cn
- stage-out-full-mde-cn
- dev-out-full-mde-cn

Contacts

QianRu.Zhou@COMPANY.com


Flows

  1. Batch merge & unmerge
  2. DCR generation process (China DCR)
  3. [FL.IN.1] HCP & HCO update processes


Reports

Reports

" + }, + { + "title": "Corrective batch process for EVR", + "pageID": "164470250", + "pageLink": "/display/GMDM/Corrective+batch+process+for+EVR", + "content": "


The corrective batch process for EVR fixes China data using the standard incremental batch mechanism. The process gets data from a CSV file, transforms it to the JSON model and loads it to Reltio. During loading, the following HCP attributes can be changed:

  1. Name,
  2. Title,
  3. SubTypeCode,
  4. ValidationStatus,
  5. Specific Workplace can be ignored or its ValidationStatus can be changed,
  6. Specific MainWorkplace can be ignored.

The load saves the changes in Reltio under crosswalk where:

Thanks to this, it is easy to find the changes that were made by this process.


Input files

The input files are delivered to the S3 bucket:


UATPROD
Input S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Input S3 Folder

mdm/UAT/inbound/CHINA/EVR/

mdm/inbound/CHINA/EVR/
Input data file mask evr_corrective_file_[0-9]*.zipevr_corrective_file_[0-9]*.zip
Compressionzipzip
FormatFlat files in CCV format Flat files in CCV format 
Exampleevr_corrective_file_20201109.zipevr_corrective_file_20201109.zip
Schedulenonenone
Airflow's DAGSinc_batch_china_evr_stageinc_batch_china_evr_prod
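As a small illustration, delivered archive names can be checked against the documented input data file mask (the helper name is hypothetical; the pattern is taken verbatim from the table above):

```python
import re

# Sketch of validating a delivered archive name against the documented
# input data file mask for this flow.

INPUT_FILE_MASK = re.compile(r"evr_corrective_file_[0-9]*.zip")

def is_evr_input_file(name):
    """True if the file name fully matches the documented mask."""
    return INPUT_FILE_MASK.fullmatch(name) is not None
```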


Data mapping

Mapping from CSV to Reltio's JSON is described in this document: evr_corrective_file_format_new.xlsx

Example file presenting input data: evr_corrective_file_20221215.csv


Configuration

Flow configuration is stored in the MDM Environment configuration repository. For each environment where the flow should be enabled, the configuration file inc_batch_china_evr.yml has to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/, and the batch name "inc_batch_china" has to be added to the "airflow_components" list defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the flow configuration files for the UAT and PROD environments:

UATPROD
http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_china_evr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_china_evr.yml


SOPs

There is no particular SOP procedure for this flow. All common SOPs are described in the "Incremental batch flows: SOP" chapter.


" + }, + { + "title": "Reports", + "pageID": "164469873", + "pageLink": "/display/GMDM/Reports", + "content": "

Daily Reports

There are 4 reports whose preparation is triggered by the china_generate_reports_[env] DAG. The DAG starts all dependent report DAGs and then waits for the files published by them on S3. When all required files are delivered to S3, the DAG sends an email with the generated reports to all configured recipients.

china_generate_reports_[env]
|-- china_import_and_gen_dcr_statistics_report_[env]
|-- import_pfdcr_from_reltio_[env]
+-- china_dcr_statistics_report_[env]
|-- china_import_and_gen_merge_report_[env]
|-- import_merges_from_reltio_[env]
+-- china_merge_report_[env]
|-- china_total_entities_report_[env]
+-- china_hcp_by_source_report_[env]


Daily DAGs are triggered by DAG china_generate_reports


UATPROD
Parent DAGchina_generate_reports_stagechina_generate_reports_prod
SchedulenoneEvery day at 00:05.


Filter applied to all reports:

FieldValue
countrycn
statusACTIVE


HCP by source report

The report shows how many HCPs were delivered to MDM by each source.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_hcp_by_source_report_.*.xlsxchina_hcp_by_source_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_hcp_by_source_report_20201113093437.xlsxchina_hcp_by_source_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_hcp_by_source_report_stagechina_hcp_by_source_report_prod
Report Templatechina_hcp_by_source_template.xlsx
Mongo scripthcp_by_source_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
SourceThe source which delivered the HCP
HCPNumber of all HCPs which have the source
Daily IncrementalNumber of HCPs modified during the last UTC day.
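Conceptually, the mongo script hcp_by_source_report.js applies the filters above and counts HCPs per contributing source. A hedged Python sketch of that aggregation over example documents (the flat field names, e.g. `sources`, are assumptions for illustration, not the real MDM schema):

```python
from collections import Counter

def hcp_counts_by_source(entities):
    """Count active Chinese HCPs per contributing source (illustrative fields)."""
    counts = Counter()
    for e in entities:
        if (e.get("country") == "CN"
                and e.get("entityType") == "configuration/entityTypes/HCP"
                and e.get("status") == "ACTIVE"):
            # an HCP delivered by several sources is counted once per source
            for source in set(e.get("sources", [])):
                counts[source] += 1
    return dict(counts)
```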

Total entities report

The report shows the total entity counts, grouped by entity type, validation status and speaker attribute.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_total_entities_report_.*.xlsxchina_total_entities_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_total_entities_report_20201113093437.xlsxchina_total_entities_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_total_entities_report_stagechina_total_entities_report_prod
Report Templatechina_total_entities_template.xlsx
Mongo scripttotal_entities_report.js
Applied filters
"country" : "CN"
"status": "ACTIVE"

Report fields description:

ColumnDescription
Total_Hospital_MDMNumber of total hospital MDM
Total_Dept_MDMNumber of total department MDM
Total_HCP_MDMNumber of total HCP MDM
Validated_HCPNumber of validated HCP
Pending_HCPNumber of pending HCP
Not_Validated_HCPNumber of not validated HCP
Other_Status_HCP?Number of HCP with other status
Total_Speaker Number of total speakers
Total_Speaker_EnabledNumber of enabled speakers
Total_Speaker_DisabledNumber of disabled speakers

DCR statistics report

The report shows statistics about data change requests which were created in MDM.

Generating this report is divided into two steps:

  1. Importing PfDataChangeRequest data from Reltio - this step is performed by the import_pfdcr_from_reltio_[env] DAG. It schedules a data export in Reltio using the Export Entities operation and then waits for the result. After the export file is ready, the DAG loads its content to mongo,
  2. Generating the report - generates the report based on previously imported data. This step is performed by the china_dcr_statistics_report_[env] DAG.

Both of the above steps are run sequentially by the china_import_and_gen_dcr_statistics_report_[env] DAG.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_dcr_statistics_report_.*.xlsxchina_dcr_statistics_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_dcr_statistics_report_20201113093437.xlsxchina_dcr_statistics_report_20201113093437.xlsx
Airflow's DAGSchina_dcr_statistics_report_stagechina_dcr_statistics_report_prod
Report Templatechina_dcr_statistics_template.xlsx
Mongo scriptchina_dcr_statistics_report.js
Applied filtersThere are no additional conditions applied to select data


Report fields description:

ColumnDescription
Total_DCR_MDMTotal number of DCRs
New_HCP_DCRTotal number of DCRs of type NewHCP
New_HCO_L1_DCRTotal number of DCRs of type NewHCOL1
New_HCO_L2_DCRTotal number of DCRs of type NewHCOL2
MultiAffil_DCRTotal number of DCRs of type MultiAffil
New_HCP_DCR_CompletedTotal number of DCRs of type NewHCP which have completed status
New_HCO_L1_DCR_CompletedTotal number of DCRs of type NewHCOL1 which have completed status
New_HCO_L2_DCR_CompletedTotal number of DCRs of type NewHCOL2 which have completed status
MultiAffil_DCR_CompletedTotal number of DCRs of type MultiAffil which have completed status
New_HCP_AcceptTotal number of DCRs of type NewHCP which were accepted
New_HCP_UpdateTotal number of DCRs of type NewHCP which were updated while being responded to
New_HCP_MergeTotal number of DCRs of type NewHCP which were accepted and response had entities to merge
New_HCP_MergeUpdateTotal number of DCRs of type NewHCP which were updated and response had entities to merge
New_HCP_RejectTotal number of DCRs of type NewHCP which were rejected
New_HCP_CloseTotal number of closed DCRs of type NewHCP
Affil_AcceptTotal number of DCRs of type MultiAffil which were accepted
Affil_RejectTotal number of DCRs of type MultiAffil which were rejected
Affil_AddTotal number of DCRs of type MultiAffil whose data were updated while being responded to
MultiAffil_DCR_CloseTotal number of closed DCRs of type MultiAffil
New_HCO_L1_UpdateTotal number of DCRs of type NewHCOL1 whose data were updated while being responded to
New_HCO_L1_RejectTotal number of rejected DCRs of type NewHCOL1 
New_HCO_L1_CloseTotal number of closed DCRs of type NewHCOL1 
New_HCO_L2_AcceptTotal number of accepted DCRs of type NewHCOL2
New_HCO_L2_UpdateTotal number of DCRs of type NewHCOL2 whose data were updated while being responded to
New_HCO_L2_RejectTotal number of rejected DCRs of type NewHCOL2
New_HCO_L2_CloseTotal number of closed DCRs of type NewHCOL2
New_HCP_DCR_OpenedTotal number of opened DCRs of type NewHCP
MultiAffil_DCR_OpenedTotal number of opened DCRs of type MultiAffil
New_HCO_L1_DCR_OpenedTotal number of opened DCRs of type NewHCOL1
New_HCO_L2_DCR_OpenedTotal number of opened DCRs of type NewHCOL2
New_HCP_DCR_FailedTotal number of failed DCRs of type NewHCP
MultiAffil_DCR_FailedTotal number of failed DCRs of type MultiAffil
New_HCO_L1_DCR_FailedTotal number of failed DCRs of type NewHCOL1
New_HCO_L2_DCR_FailedTotal number of failed DCRs of type NewHCOL2

Merge report

The report shows statistics about merges which occurred in MDM.

Generating this report, similarly to the DCR statistics report, is divided into two steps:

  1. Importing merge data from Reltio - this step is performed by the import_merges_from_reltio_[env] DAG. It schedules a data export in Reltio using the Export Merge Tree operation and then waits for the result. After the export file is ready, the DAG loads its content to mongo,
  2. Generating the report - generates the report based on previously imported data. This step is performed by the china_merge_report_[env] DAG.

Both of the above steps are run sequentially by the china_import_and_gen_merge_report_[env] DAG.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_merge_report_.*.xlsxchina_merge_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_merge_report_20201113093437.xlsxchina_merge_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_import_and_gen_merge_report_stagechina_import_and_gen_merge_report_prod
Report Templatechina_daily_merges_template.xlsx
Mongo scriptmerge_report.js
Applied filters
"country" : "CN"


Report fields description:

ColumnDescription
DateDate when the merges occurred
Daily_Merge_HosptialTotal number of merges on HCO
Daily_Merge_HCPTotal number of merges on HCP
Daily_Manually_Merge_HosptialTotal number of manual merges on HCO
Daily_Manually_Merge_HCPTotal number of manual merges on HCP

Monthly Reports

There are 8 monthly reports. All of them are triggered by china_monthly_generate_reports_[env], which then waits for the files generated and published to the S3 bucket by each dependent DAG. When all required files exist on S3, the DAG prepares an email with all the files and sends it to the defined recipients.

china_monthly_generate_reports_[env]
|-- china_monthly_hcp_by_SubTypeCode_report_[env]
|-- china_monthly_hcp_by_channel_report_[env]
|-- china_monthly_hcp_by_city_type_report_[env]
|-- china_monthly_hcp_by_department_report_[env]
|-- china_monthly_hcp_by_gender_report_[env]
|-- china_monthly_hcp_by_hospital_class_report_[env]
|-- china_monthly_hcp_by_province_report_[env]
+-- china_monthly_hcp_by_source_report_[env]


Monthly DAGs are triggered by DAG china_monthly_generate_reports


UATPROD
Parent DAGchina_monthly_generate_reports_stagechina_monthly_generate_reports_prod



HCP by source report

The report shows how many HCPs were delivered by each source.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_source_report_.*.xlsxchina_monthly_hcp_by_source_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_source_report_20201113093437.xlsxchina_monthly_hcp_by_source_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_source_report_stagechina_monthly_hcp_by_source_report_prod
Report Templatechina_monthly_hcp_by_source_template.xlsx
Mongo scriptmonthly_hcp_by_source_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
SourceSource that delivered HCP
HCPNumber of all HCPs which have the source


HCP by channel report

The report presents the number of HCPs which were delivered to MDM through a specific Channel.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_channel_report_.*.xlsxchina_monthly_hcp_by_channel_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_channel_report_20201113093437.xlsxchina_monthly_hcp_by_channel_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_channel_report_stagechina_monthly_hcp_by_channel_report_prod
Report Templatechina_monthly_hcp_by_channel_template.xlsx
Mongo scriptmonthly_hcp_by_channel_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
Channel
Channel name
HCPNumber of all HCPs which match the channel


HCP by SubTypeCode report

The report presents HCPs grouped by their Medical Title (SubTypeCode).

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_SubTypeCode_report_.*.xlsxchina_monthly_hcp_by_SubTypeCode_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsxchina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_SubTypeCode_report_stage china_monthly_hcp_by_SubTypeCode_report_prod
Report Templatechina_monthly_hcp_by_SubTypeCode_template.xlsx
Mongo scriptmonthly_hcp_by_SubTypeCode_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
Medical TitleMedical Title (SubTypeCode) of HCP
HCPNumber of all HCPs which match the medical title


HCP by city type report

The report shows the number of HCPs who work in each city type. The city type is not available in MDM data. To determine the type of a specific city, the report uses an additional collection, chinaGeography, which maps a city's name to its type. Data in the collection can be updated on request of the China team.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_city_type_report_.*.xlsxchina_monthly_hcp_by_city_type_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_city_type_report_20201113093437.xlsxchina_monthly_hcp_by_city_type_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_city_type_report_stage china_monthly_hcp_by_city_type_report_prod
Report Templatechina_monthly_hcp_by_city_type_template.xlsx
Mongo scriptmonthly_hcp_by_city_type_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
City TypeCity Type taken from the chinaGeography collection matching entity.attributes.Workplace.value.MainHCO.value.Address.value.City.value
HCPNumber of all HCPs which match the city type
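The join against chinaGeography can be pictured as a simple lookup. An illustrative Python sketch (the flat field names `city`, `cityType` and `workplaceCity` are assumptions for this example; the real script reads the nested Workplace attribute path):

```python
from collections import Counter

def hcp_counts_by_city_type(hcps, china_geography):
    """Map each HCP's workplace city to its type via the chinaGeography
    collection and count HCPs per city type (illustrative field names)."""
    city_to_type = {g["city"]: g["cityType"] for g in china_geography}
    counts = Counter()
    for hcp in hcps:
        city = hcp.get("workplaceCity")
        counts[city_to_type.get(city, "Unknown")] += 1
    return dict(counts)
```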


HCP by department report

The report presents the HCPs grouped by the department where they work.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_department_report_.*.xlsxchina_monthly_hcp_by_department_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_department_report_20201113093437.xlsxchina_monthly_hcp_by_department_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_department_report_stage china_monthly_hcp_by_department_report_prod
Report Templatechina_monthly_hcp_by_department_template.xlsx
Mongo scriptmonthly_hcp_by_department_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
DeptDepartment's name
HCPNumber of all HCPs which match the dept


HCP by gender report

The report presents the HCPs grouped by gender.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_gender_report_.*.xlsxchina_monthly_hcp_by_gender_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_gender_report_20201113093437.xlsxchina_monthly_hcp_by_gender_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_gender_report_stage china_monthly_hcp_by_gender_report_prod
Report Templatechina_monthly_hcp_by_gender_template.xlsx
Mongo scriptmonthly_hcp_by_gender_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
GenderGender
HCPNumber of all HCPs which match the gender


HCP by hospital class report

The report presents the HCPs grouped by the class of the hospital where they work.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_hospital_class_report_.*.xlsxchina_monthly_hcp_by_hospital_class_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_hospital_class_report_20201113093437.xlsxchina_monthly_hcp_by_hospital_class_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_hospital_class_report_stage china_monthly_hcp_by_hospital_class_report_prod
Report Templatechina_monthly_hcp_by_hospital_class_template.xlsx
Mongo scriptmonthly_hcp_by_hospital_class_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
ClassClassification
HCPNumber of all HCPs which match the class


HCP by province report

The report presents the HCPs grouped by the province where they work.

The output files are delivered to the S3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_province_report_.*.xlsxchina_monthly_hcp_by_province_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_province_report_20201113093437.xlsxchina_monthly_hcp_by_province_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_province_report_stage china_monthly_hcp_by_province_report_prod
Report Templatechina_monthly_hcp_by_province_template.xlsx
Mongo scriptmonthly_hcp_by_province_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"

Report fields description:

ColumnDescription
ProvinceName of province
HCPNumber of all HCPs which match the Province


SOPs

How can I check the status of generating reports?

Status of report generation can be checked by verifying the task statuses on the main DAGs - china_generate_reports_[env] for daily reports or china_monthly_generate_reports_[env] for monthly reports. Both of these DAGs have the task "sendEmailReports" which waits for the files generated by the dependent DAGs. If the required files are not published to S3 within the configured amount of time, the task fails with the following message:

\n
[2020-11-27 12:12:54,085] {{docker_operator.py:252}} INFO - Caught: java.lang.RuntimeException: ERROR: Elapsed time 300 minutes. Timeout exceeded: 300\n[2020-11-27 12:12:54,086] {{docker_operator.py:252}} INFO - java.lang.RuntimeException: ERROR: Elapsed time 300 minutes. Timeout exceeded: 300\n[2020-11-27 12:12:54,086] {{docker_operator.py:252}} INFO - at SendEmailReports.getListOfFilesLoop(sendEmailReports.groovy:221)\n\tat SendEmailReports.processReport(sendEmailReports.groovy:257)\n[2020-11-27 12:12:54,290] {{docker_operator.py:252}} INFO - at SendEmailReports$processReport.call(Unknown Source)\n\tat sendEmailReports.run(sendEmailReports.groovy:279)\n[2020-11-27 12:12:55,552] {{taskinstance.py:1058}} ERROR - docker container failed: {'StatusCode': 1}
\n

In this case you have to check the status of all dependent DAGs to find the reason for the failure, resolve the issue and retry all failed tasks, starting with the tasks in the dependent DAGs and finishing with the task in the main DAG.


Daily reports failed due to an error during importing data from Reltio. What to do?

If you see that the DAG import_pfdcr_from_reltio_[env] or import_merges_from_reltio_[env] is in a failed state, it probably means that the data export from Reltio took longer than usual. To confirm this supposition, open the details of the importing DAG and check the status of the waitingForExportFile task. If it is in a failed state and you can see the following messages in the logs:

\n
[2020-12-04 12:09:10,957] {{s3_key_sensor.py:88}} INFO - Poking for key : s3://pfe-baiaes-eu-w1-project/mdm/reltio_exports/merges_from_reltio_20201204T000718/_SUCCESS\n[2020-12-04 12:09:11,074] {{taskinstance.py:1047}} ERROR - Snap. Time is OUT.\nTraceback (most recent call last):\n  File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 922, in _run_raw_task\n    result = task_copy.execute(context=context)\n  File "/usr/local/lib/python3.7/site-packages/airflow/sensors/base_sensor_operator.py", line 116, in execute\n    raise AirflowSensorTimeout('Snap. Time is OUT.')\nairflow.exceptions.AirflowSensorTimeout: Snap. Time is OUT.\n[2020-12-04 12:09:11,085] {{taskinstance.py:1078}} INFO - Marking task as FAILED.
\n

You can be pretty sure that the export is still being processed on the Reltio side. You can confirm this by using the tasks API. If you can see tasks in a processing state on the returned list, it means that MDM is still working on this export. To fix the issue in the DAG, restart the failed task. The DAG will start checking for the existence of the export file once again.
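Checking the tasks API for an in-flight export amounts to scanning the returned task list for a processing status. A minimal sketch (the `status` field name and its values are assumptions for illustration, not the documented Reltio response schema):

```python
def export_still_running(tasks):
    """Return True if any task descriptor from a tasks-API style
    response is still in flight (illustrative field/values)."""
    return any(t.get("status") in ("processing", "scheduled") for t in tasks)
```

If this returns True, wait (or restart the failed sensor task) rather than re-scheduling a new export.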

" + }, + { + "title": "CDW (AMER)", + "pageID": "164470121", + "pageLink": "/pages/viewpage.action?pageId=164470121", + "content": "

Contacts

Narayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>

Balan, Sakthi <Sakthi.Balan@COMPANY.com>

Raman, Krishnan <Krishnan.Raman@COMPANY.com>

Gateway

AMER(manager)

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

CDW user (NPROD)
cdw
External OAuth2

CDW-MDM_client


["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]
["US"]

["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK",
"IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX",
"MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)",
"EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP",
"867","MYOV_VVA","COMPANY_ACCTS"]

CDW user (PROD)
cdw
External OAuth2
CDW-MDM_client
["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]
["US"]

["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK",
"IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX",
"MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)",
"EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP",
"867","MYOV_VVA","COMPANY_ACCTS"]

Flows

Flow

Description

Snowflake: Events publish flowEvents are published to snowflake
Snowflake: Base tables refresh

Table is refreshed (every 2 hours in prod) with those events

Snowflake MDMTables are read by an ETL process implemented by the COMPANY Team 
Update Usage TagsUpdate BESTCALLEDON used flag on addresses
CDW docs: Best Address Data flow

Client software 



" + }, + { + "title": "ETL - COMPANY (GBLUS)", + "pageID": "164470236", + "pageLink": "/pages/viewpage.action?pageId=164470236", + "content": "

Contacts

Nayan, Rajeev <Rajeev.Nayan3@COMPANY.com>

Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

Batches

ETL batch load user

mdmetl_nprod

OAuth2

SVC-MDMETL_client
- "CREATE_HCP"
- "CREATE_HCO"
- "CREATE_MCO"
- "CREATE_BATCH"
- "GET_BATCH"
- "MANAGE_STAGE"
- "CLEAR_CACHE_BATCH"
US
- "SHS"
- "SHS_MCO"
- "IQVIA_MCO"
- "CENTRIS"
- "ENGAGE 1.0"
- "GRV"
- "IQVIA_DDD"
- "SAP"
- "ONEKEY"
- "IQVIA_RAWDEA"
- "IQVIA_PDRP"
- "COV"
- "IQVIA_RX"
- "MILLIMAN_MCO"
- "ICUE"
- "KOL_OneView"
- "SHS_RX"
- "MMIT"
- "INTEGRICHAIN"

N/A

batches:
"Symphony":
- "HCPLoading"
"Centris":
- "HCPLoading"
"IQVIA_DDD":
- "HCOLoading"
- "RelationLoading"
"SAP":
- "HCOLoading"
"ONEKEY":
- "HCPLoading"
- "HCOLoading"
- "RelationLoading"
"IQVIA_RAWDEA":
- "HCPLoading"
"IQVIA_PDRP":
- "HCPLoading"
"PFZ_CUSTID_SYNC":
- "COMPANYCustIDLoading"
"OneView":
- "HCOLoading"
"HCPM":
- "HCPLoading"
"SHS_MCO":
- "MCOLoading"
- "RelationLoading"
"IQVIA_MCO":
- "MCOLoading"
- "RelationLoading"
"IQVIA_RX":
- "HCPLoading"
"MILLIMAN_MCO":
- "MCOLoading"
- "RelationLoading"
"VEEVA":
- "HCPLoading"
- "HCOLoading"
- "MCOLoading"
- "RelationLoading"
"SHS_RX":
- "HCPLoading"
"MMIT":
- "MCOLoading"
- "RelationLoading"
"DDD_SAP":
- "RelationLoading"
"INTEGRICHAIN":
- "HCOLoading"
...

ETL Get/Resubmit Errors

mdmetl_nprod

OAuth2

SVC-MDMETL_client
- "GET_ERRORS"
- "RESUBMIT_ERRORS"
USALLN/AN/A

Flows

Client software 

SOPs


" + }, + { + "title": "KOL_ONEVIEW (GBLUS)", + "pageID": "164469966", + "pageLink": "/pages/viewpage.action?pageId=164469966", + "content": "

Contacts

Brahma, Bagmita <Bagmita.Brahma2@COMPANY.com>

Solanki, Hardik <Hardik.Solanki@COMPANY.com>

Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>

DL DL-iMed_L3@COMPANY.com

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

KOL_OneView user
kol_oneview

OAuth2

KOL-MDM-PFORCEOL_client
- "CREATE_HCP"
- "UPDATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "LOOKUPS"

US

KOL_OneView

N/A

KOL_OneView TOPICN/AKafka JAASN/A
"(exchange.in.headers.reconciliationTarget==null 
|| exchange.in.headers.reconciliationTarget == 'KOL_ONEVIEW')
&& exchange.in.headers.eventType in ['full']
&& ['KOL_OneView'].intersect(exchange.in.headers.eventSource)
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
US
KOL_OneView
prod-out-full-koloneview-all
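The publisher routing rule above is a boolean expression evaluated against each event's headers. A roughly equivalent Python predicate, as an illustration only (the actual rule is evaluated by the publisher's Groovy engine):

```python
def matches_kol_oneview(headers):
    """Python rendering of the KOL_OneView routing rule shown above."""
    return ((headers.get("reconciliationTarget") in (None, "KOL_ONEVIEW"))
            and headers.get("eventType") in ["full"]
            and bool({"KOL_OneView"} & set(headers.get("eventSource", [])))
            and headers.get("objectType") in ["HCP", "HCO"])
```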

Flows


Client software 


" + }, + { + "title": "GRV (GBLUS)", + "pageID": "164469964", + "pageLink": "/pages/viewpage.action?pageId=164469964", + "content": "

Contacts

Bablani, Vijay <Vijay.Bablani@COMPANY.com>

Jain, Somya <Somya.Jain@COMPANY.com>

Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>

Reynolds, Lori <Lori.Reynolds@COMPANY.com>

Alphonso, Venisa <Venisa.Alphonso@COMPANY.com>

Patel, Jay <Jay.Patel@COMPANY.com>

Anumalasetty, Jayasravani <Jayasravani.Anumalasetty@COMPANY.com>


ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

GRV User
grv

OAuth2

GRV-MDM_client
- "GET_ENTITIES"
- "LOOKUPS"
- "VALIDATE_HCP"
- "CREATE_HCP"
- "UPDATE_HCP"

US

- "GRV"

N/A

GRV-AIS-MDM User
grv_ais
OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
- "GET_ENTITIES"
- "LOOKUPS"
- "VALIDATE_HCP"
- "CREATE_HCP"
- "UPDATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCO"
US
- "GRV"
- "CENTRIS"
- "ENGAGE"
N/A
GRV TOPICN/AKafka JAASN/A
"(exchange.in.headers.reconciliationTarget==null)
&& exchange.in.headers.eventType in ['full_not_trimmed']
&& ['GRV'].intersect(exchange.in.headers.eventSource)
&& exchange.in.headers.objectType in ['HCP']
&& exchange.in.headers.eventSubtype in ['HCP_CHANGED']"
US
GRV
prod-out-full-grv-all

Flows



Client software 


" + }, + { + "title": "GRACE (GBLUS)", + "pageID": "164469962", + "pageLink": "/pages/viewpage.action?pageId=164469962", + "content": "

Contacts

Jeffrey.D.LoVetere@COMPANY.com

william.nerbonne@COMPANY.com

Kalyan.Kanumuru@COMPANY.com

Brigilin.Stanley@COMPANY.com

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

GRACE User
grace

OAuth2

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
- "GET_ENTITIES"
- "LOOKUPS"

US

- "GRV"
- "CENTRIS"
- "ENGAGE"

N/A

Flows

Client software 

" + }, + { + "title": "KOL_ONEVIEW (EMEA, AMER, APAC)", + "pageID": "164470136", + "pageLink": "/pages/viewpage.action?pageId=164470136", + "content": "

Contacts

DL-SFA-INF_Support_PforceOL@COMPANY.com

Solanki, Hardik (US - Mumbai) <hsolanki@COMPANY.com>

Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>

ACLs

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

KOL_ONEVIEW user (NPROD)
kol_oneview
External OAuth2

KOL-MDM-PFORCEOL_client

KOL-MDM_client

[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL",
"BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI",
"CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK",
"DO","DZ","EC","EG","ES","ET","FI","FO","FR","GA","GB",
"GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN",
"IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW",
"LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ",
"MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM",
"PA","PE","PF","PL","PM","PT","PY","QA","RE","RU",
"RW","SA","SD","SE","SL","SM","SN","SV","SY","SZ","TD",
"TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE",
"YT","ZA","ZM","ZW"]
GB
- "KOL_OneView"

KOL_ONEVIEW user (PROD)
kol_oneview
External OAuth2
KOL-MDM-PFORCEOL_client
KOL-MDM_client
[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL",
"BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI",
"CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK",
"DO","DZ","EC","EG","ES","ET","FO","FR","GA","GB",
"GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN",
"IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW",
"LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ",
"MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM",
"PA","PE","PF","PL","PM","PT","PY","QA","RE","RU",
"RW","SA","SD","SL","SM","SN","SV","SY","SZ","TD",
"TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE",
"YT","ZA","ZM","ZW"]
GB
- "KOL_OneView"

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

KOL_ONEVIEW user (NPROD)
kol_oneview
External OAuth2

KOL-MDM-PFORCEOL_client

[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AR","BR","CA","MX","UY"]
CA
- "KOL_OneView"

KOL_ONEVIEW user (PROD)
kol_oneview
External OAuth2
KOL-MDM-PFORCEOL_client
[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AR","BR","CA","MX","UY"]
CA
- "KOL_OneView"

APAC

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

KOL_ONEVIEW user (NPROD)
kol_oneview
External OAuth2

KOL-MDM-PFORCEOL_client

[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AU","IN","KR","NZ","JP"]
JP
- "KOL_OneView"

KOL_ONEVIEW user (PROD)
kol_oneview
External OAuth2
KOL-MDM-PFORCEOL_client
[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AU","IN","KR","NZ","JP"]
JP
- "KOL_OneView"

Kafka

EMEA

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
emea-prod
Kol_oneview
kol_oneview

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_ONEVIEW')
&& exchange.in.headers.eventType in ['full']
&& ['KOL_OneView'].intersect(exchange.in.headers.eventSource)
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& exchange.in.headers.country in ['ie', 'gb']"

-${env}-out-full-koloneview-all
3
emea-dev
Kol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
emea-qaKol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
emea-stageKol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3

AMER

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
gblus-prod
Kol_oneview
kol_oneview

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_OneView')
&& exchange.in.headers.eventType in ['full'] && ['KOL_OneView'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP', 'HCO']"
-${env}-out-full-koloneview-all
3
gblus-dev
Kol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
gblus-qaKol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
gblus-stageKol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
" + }, + { + "title": "GRV (EMEA, AMER)", + "pageID": "164470150", + "pageLink": "/pages/viewpage.action?pageId=164470150", + "content": "

Contacts

TODO

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRV user (NPROD)
grv
External OAuth2
GRV-MDM_client
- GET_ENTITIES
- LOOKUPS
- VALIDATE_HCP
["CA"]
GB
GRV
N/A
GRV user (PROD)
grv
External OAuth2
GRV-MDM_client
- GET_ENTITIES
- LOOKUPS
- VALIDATE_HCP
["CA"]
GB
GRV
N/A

AMER(manager)

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRV user (NPROD)
grv
External OAuth2
GRV-MDM_client
["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]
["US"]

GRV
N/A
GRV user (PROD)
grv
External OAuth2
GRV-MDM_client
["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]
["US"]

GRV
N/A

Kafka

AMER

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
gblus-prod
Grv
grv

"(exchange.in.headers.reconciliationTarget==null)
&& exchange.in.headers.eventType in ['full_not_trimmed'] && ['GRV'].intersect(exchange.in.headers.eventSource)
&& exchange.in.headers.objectType in ['HCP'] && exchange.in.headers.eventSubtype in ['HCP_CHANGED']"

- ${env}-out-full-grv-all


gblus-dev
Grv
grv

- ${local_env}-out-full-grv-all


gblus-qa
Grv
grv

- ${local_env}-out-full-grv-all


gblus-stage
Grv
grv

- ${local_env}-out-full-grv-all


" + }, + { + "title": "GANT (Global, EMEA, AMER, APAC)", + "pageID": "164470148", + "pageLink": "/pages/viewpage.action?pageId=164470148", + "content": "

Contacts

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GANT User
gant
External OAuth2
GANT-MDM_client

- "GET_ENTITIES"
- "LOOKUPS"
["AD", "AG", "AI", "AM", "AN",
"AR", "AT", "AU", "AW", "BA",
"BB", "BE", "BG", "BL", "BM",
"BO", "BQ", "BR", "BS", "BY",
"BZ", "CA", "CH", "CL", "CN",
"CO", "CP", "CR", "CW", "CY",
"CZ", "DE", "DK", "DO", "DZ",
"EC", "EE", "EG", "ES", "FI",
"FO", "FR", "GB", "GF", "GP",
"GR", "GT", "GY", "HK", "HN",
"HR", "HU", "ID", "IE", "IL",
"IN", "IT", "JM", "JP", "KR",
"KY", "KZ", "LC", "LT", "LU",
"LV", "MA", "MC", "MF", "MQ",
"MU", "MX", "MY", "NC", "NI",
"NL", "NO", "NZ", "PA", "PE",
"PF", "PH", "PK", "PL", "PM",
"PN", "PT", "PY", "RE", "RO",
"RS", "RU", "SA", "SE", "SG",
"SI", "SK", "SV", "SX", "TF",
"TH", "TN", "TR", "TT", "TW",
"UA", "UY", "VE", "VG", "VN",
"WF", "XX", "YT", "ZA"]
GB
GRV
N/A

AMER

Action Required

User configuration

PingFederate Username

GANT-MDM_client

Countries

Brazil

Tenant

AMER

Environments (PROD/NON-PROD/ALL)

ALL

API Services

ext-api-gw-amer-stage/entities,  ext-api-gw-amer-stage/lookups.

Sources

ONEKEY,CRMMI,MAPP

Business Justification

We are fetching HCP data from the MDM COMPANY instance; earlier it was the MDM IQVIA instance.

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GANT User
gant
External OAuth2
GANT-MDM_client

- "GET_ENTITIES"
- "LOOKUPS"
["BR"]
BR
- ONEKEY
- CRMMI
- MAPP
N/A

APAC

Action Required

User configuration

PingFederate Username

GANT-MDM_client

Countries

India

Tenant

APAC

Environments (PROD/NON-PROD/ALL)

ALL

API Services

ext-api-gw-apac-stage/entities,  ext-api-gw-apac-stage/lookups.

Sources

ONEKEY,CRMMI,MAPP

Business Justification

We are fetching HCP data from the MDM COMPANY instance; earlier it was the MDM IQVIA instance.

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GANT User
gant
External OAuth2
GANT-MDM_client

- "GET_ENTITIES"
- "LOOKUPS"
["IN"]
IN
- ONEKEY
- CRMMI
- MAPP
N/A
" + }, + { + "title": "Medic (EMEA, AMER, APAC)", + "pageID": "164470140", + "pageLink": "/pages/viewpage.action?pageId=164470140", + "content": "

Contacts

DL-F&BO-MEDIC@COMPANY.com

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

Medic user (NPROD)
medic
External OAuth2

MEDIC-MDM_client

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]
IE
["MEDIC"]

Medic user (PROD)
medic
External OAuth2
MEDIC-MDM_client
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]
IE
["MEDIC"]

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

Medic  user (NPROD)
medic
External OAuth2

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]

Medic user (PROD)
medic
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]

APAC

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

Medic user (NPROD)
medic
External OAuth2

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]
IN
["MEDIC"]

Medic user (PROD)
medic
External OAuth2
MEDIC-MDM_client
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]
IN
["MEDIC"]

" + }, + { + "title": "PTRS (EMEA, AMER, APAC)", + "pageID": "164470165", + "pageLink": "/pages/viewpage.action?pageId=164470165", + "content": "

Requirements

Env
Publisher routing rule
Topic
emea-prod
(ptrs-eu)
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_RECONCILIATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"

01/Mar/23 4:14 AM

[10:13 AM] Shanbhag, Bhushan
Okay in that case we want Turkey market's events to come from emea-prod-out-full-ptrs-global2 topic only. 

${env}-out-full-ptrs-eu
emea-prod and non-prod environments

Adding MC and AD to out-full-ptrs-eu

15/05/2023

Sagar: 

Hi Karol,

Can you please add the below countries for France to the country configuration list for the FRANCE EMEA topics (Prod, Stage, QA & Dev)?

1. Monaco

2. Andorra
\n MR-6236\n -\n Getting issue details...\n STATUS\n

${env}-out-full-ptrs-eu

Contacts

API: Prapti.Nanda@COMPANY.com;Varun.ArunKumar@COMPANY.com

Kafka: Sagar.Bodala@COMPANY.com

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PTRS user (NPROD)
ptrs
External OAuth2
PTRS-MDM_client
["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]
["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR",
"BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF",
"GP","GT","GY","HN","ID","IL","JM","KY","LC","MF",
"MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM",
"PN","PT","PY","RE","SV","SX","TF","TR","TT","UY",
"VE","VG","WF","YT"]

["PTRS"]

PTRS user (PROD)
ptrs
External OAuth2
PTRS-MDM_client
["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]
["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR",
"BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF",
"GP","GT","GY","HN","ID","IL","JM","KY","LC","MF",
"MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM",
"PN","PT","PY","RE","SV","SX","TF","TR","TT","UY",
"VE","VG","WF","YT"]

["PTRS"]

AMER(manager)

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PTRS user (NPROD)
ptrs
External OAuth2
PTRS-MDM_client

["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]

["MX","BR"]

["PTRS"]

PTRS user (PROD)
ptrs
External OAuth2
PTRS-MDM_client

["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]

["MX","BR"]
["PTRS"]

APAC(manager)

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PTRS user (NPROD)
ptrs
External OAuth2
PTRS_RELTIO_Client
PTRS-MDM_client

["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]

["ID","JP","PH"]

["VOC","PTRS"]

PTRS user (PROD)
ptrs
External OAuth2
PTRS_RELTIO_Client
PTRS-MDM_client

["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]

["JP"]
["VOC","PTRS"]

Kafka

EMEA

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
emea-prod
(ptrs-eu)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_RECONCILIATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt', 'ad', 'mc']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"

${env}-out-full-ptrs-eu
3
emea-prod (ptrs-global2)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-global2
3
emea-dev 
(ptrs-global2)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"

${env}-out-full-ptrs-global2

3
emea-qa (ptrs-eu)
Ptrs
ptrs
emea-dev-ptrs-eu
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-eu
3
emea-qa (ptrs-global2)
Ptrs
ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-global2
3
emea-stage (ptrs-eu)
Ptrs
ptrs
emea-stage-ptrs-eu
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'pt', 'id', 'tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-eu
3
emea-stage (ptrs-global2)
Ptrs
ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-global2
3

AMER

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
amer-prod
(ptrs-amer)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['mx', 'br']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-amer
3
amer-dev 
(ptrs-amer)
Ptrs
ptrs
amer-dev-ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['mx', 'br']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-amer
3
amer-qa (ptrs-amer)
Ptrs
ptrs
amer-qa-ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['mx', 'br']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-amer
3
amer-stage (ptrs-amer)
Ptrs
ptrs
amer-stage-ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['mx', 'br']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-amer
3

APAC

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
apac-dev 
(ptrs-apac)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['pk']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-apac

apac-qa (ptrs-apac)
Ptrs
ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['pk']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-apac

apac-stage (ptrs-apac)
Ptrs
ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['pk']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-apac

GBL

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
gbl-prod
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['co', 'mx', 'br', 'ph']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"

- ${env}-out-full-ptrs


gbl-prod (ptrs-eu)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-eu

gbl-prod (ptrs-porind)
Ptrs
ptrs

"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['id', 'pt']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"
${env}-out-full-ptrs-porind

gbl-dev
Ptrs
ptrs

"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['co', 'mx', 'br', 'ph', 'cl', 'tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_REGENERATION')"

- ${env}-out-full-ptrs

20
gbl-dev (ptrs-eu)
Ptrs
ptrs
ptrs_nprod
"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')"

- ${env}-out-full-ptrs-eu


gbl-dev (ptrs-porind)
Ptrs
ptrs

"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['id', 'pt']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"

- ${env}-out-full-ptrs-porind


gbl-qa (ptrs-eu)
Ptrs
ptrs
"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& (exchange.in.headers.reconciliationTarget==null)"
- ${env}-out-full-ptrs-eu
20
gbl-stage
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_LATAM')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['co', 'mx', 'br', 'ph', 'cl','tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
- ${env}-out-full-ptrs

gbl-stage (ptrs-eu)
Ptrs
ptrs
ptrs_nprod
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
- ${env}-out-full-ptrs-eu

gbl-stage (ptrs-porind)
Ptrs
ptrs

"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['id', 'pt']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"
- ${env}-out-full-ptrs-porind

" + }, + { + "title": "OneMed (EMEA)", + "pageID": "164470163", + "pageLink": "/pages/viewpage.action?pageId=164470163", + "content": "

Contacts

Marsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.com

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

OneMed user (NPROD)
onemed
External OAuth2

ONEMED-MDM_client

["GET_ENTITIES","LOOKUPS"]
["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE",
"IL","IN","IT","JP","MX","NZ","PL","SA","TR"]
IE
["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP",
"MDE","OK","Reltio","Rx_Audit"]

OneMed user (PROD)
onemed
External OAuth2
ONEMED-MDM_client
["GET_ENTITIES","LOOKUPS"]
["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE",
"IL","IN","IT","JP","MX","NZ","PL","SA","TR"]
IE
["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP",
"MDE","OK","Reltio","Rx_Audit"]

" + }, + { + "title": "GRACE (EMEA, AMER, APAC)", + "pageID": "164470161", + "pageLink": "/pages/viewpage.action?pageId=164470161", + "content": "

Contacts

DL-AIS-Mule-Integration-Support@COMPANY.com

Requirements

Partial requirements

Sent by Amish Adhvaryu

action needed

Need Plugin Configuration for below usernames

username

GRACE MAVENS SFDC - DEV - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - Dev
GRACE MAVENS SFDC - STG - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - Stage
GRACE MAVENS SFDC - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - Prod

countries

AU,NZ,IN,JP,KR (APAC) and AR, UY, MX (AMER)

tenant

APAC and AMER

environments (prod/nonprods/all)

ALL

API services exposed

HCP HCO MCO Search, Lookups

Sources

Grace

Business justification

Client ID used by the GRACE application to search HCPs and HCOs

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRACE user
grace
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA",
"BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY",
"BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY",
"CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO",
"FR","GB","GD","GF","GL","GP","GR","GT","GY","HK",
"HN","HR","HU","ID","IE","IL","IN","IT","JM","JP",
"KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF",
"MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA",
"PE","PF","PH","PK","PL","PM","PN","PT","PY","RE",
"RO","RS","RU","SA","SE","SG","SI","SK","SR","SV",
"SX","TF","TH","TN","TR","TT","TW","UA","US","UY",
"VE","VG","VN","WF","XX","YT","ZA"]
GB
["NONE"]
N/A
GRACE User
grace
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA",
"BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY",
"BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY",
"CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO",
"FR","GB","GD","GF","GL","GP","GR","GT","GY","HK",
"HN","HR","HU","ID","IE","IL","IN","IT","JM","JP",
"KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF",
"MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA",
"PE","PF","PH","PK","PL","PM","PN","PT","PY","RE",
"RO","RS","RU","SA","SE","SG","SI","SK","SR","SV",
"SX","TF","TH","TN","TR","TT","TW","UA","US","UY",
"VE","VG","VN","WF","XX","YT"]
GB
["NONE"]
N/A

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRACE user
grace
External OAuth2 (all)
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["CA","US","AR","UY","MX"]

["NONE"]
N/A
External OAuth2 (amer-dev)
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
External OAuth2 (gblus-stage)
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
External OAuth2 (amer-stage)
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
GRACE User
grace
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF",
"GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC",
"NZ","PF","PM","RE","SA","TR","US","UY"]

["NONE"]
N/A


APAC

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRACE user
grace
External OAuth2 (all)
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AR","AU","BR","CA","HK","ID","IN","JP","KR","MX",
"MY","NZ","PH","PK","SG","TH","TW","US","UY","VN"]

["NONE"]
N/A
External OAuth2 (apac-stage)
b469b84094724d74adb9ff7224588647
GRACE User
grace
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF",
"GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC",
"NZ","PF","PM","RE","SA","TR","US","UY"]

["NONE"]
N/A
" + }, + { + "title": "Snowflake (Global, GBLUS)", + "pageID": "164469783", + "pageLink": "/pages/viewpage.action?pageId=164469783", + "content": "

Contacts

Narayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

Snowflake topic
Snowflake Topic
Kafka JAAS
N/A
((exchange.in.headers.eventType in ['full_not_trimmed']
&& exchange.in.headers.objectType in ['HCP', 'HCO', 'MCO', 'RELATIONSHIP'])
||
(exchange.in.headers.eventType in ['simple'] && exchange.in.headers.objectType in ['ENTITY']))
ALL
ALL
prod-out-full-snowflake-all

Flows

Snowflake participates in two flows:

  1. Snowflake: Events publish flow
    The event publisher pushes all events regarding entity/relation changes to the Kafka topic created for Snowflake ( ${env}-out-full-snowflake-all ). The Kafka Connect component then pulls those events and loads them into a Snowflake table (flat model).
  2. Reconciliation
    The main goal of the reconciliation process is to synchronise the Snowflake database with MongoDB.
    Snowflake periodically exports entities and creates a CSV file with their identifiers and checksums. The file is sent to S3, from where it is downloaded by the reconciliation process. This process compares the data in the file with the values stored in Mongo.
    A reconciliation event is created and posted on the Kafka topic in two cases:

    1. the checksum has changed
    2. the entity is missing from the CSV file
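The comparison step above can be sketched in a few lines of Python. This is an illustrative sketch only (the event shape and function name are hypothetical, not the actual HUB schema):

```python
import csv
import io

def reconcile(snowflake_csv: str, mongo_checksums: dict) -> list:
    """Compare a Snowflake export (id,checksum rows) with MongoDB values.

    Emits a reconciliation event for an entity when its checksum changed
    or it is missing from the CSV file. Illustrative only.
    """
    # Parse the exported CSV into {entity_id: checksum}
    seen = {}
    for row in csv.reader(io.StringIO(snowflake_csv)):
        entity_id, checksum = row[0], row[1]
        seen[entity_id] = checksum

    events = []
    for entity_id, mongo_sum in mongo_checksums.items():
        if entity_id not in seen:
            events.append({"id": entity_id, "reason": "MISSING_IN_CSV"})
        elif seen[entity_id] != mongo_sum:
            events.append({"id": entity_id, "reason": "CHECKSUM_CHANGED"})
    return events

csv_export = "e1,abc\ne2,def\n"
mongo = {"e1": "abc", "e2": "zzz", "e3": "qqq"}
print(reconcile(csv_export, mongo))
# [{'id': 'e2', 'reason': 'CHECKSUM_CHANGED'}, {'id': 'e3', 'reason': 'MISSING_IN_CSV'}]
```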

Client software 

Kafka Connect is responsible for collecting kafka events and loading them to Snowflake database in flat model.

SOPs

Currently there are no SOPs for Snowflake.

" + }, + { + "title": "Vaccine (GBLUS)", + "pageID": "164469863", + "pageLink": "/pages/viewpage.action?pageId=164469863", + "content": "

Contacts

Vajapeyajula, Venkata Kalyan Ram <Kalyan.Vajapeyajula@COMPANY.com>

BAVISHI, MONICA <MONICA.BAVISHI@COMPANY.com>

Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>

Garg, Nalini <Nalini.Garg@COMPANY.com>

Shah, Himanshu <Himanshu.Shah@COMPANY.com>

Flows


Flow
Description
Snowflake: Events publish flow
Events AUTO_LINK_FOUND and POTENTIAL_LINK_FOUND are published to Snowflake
Snowflake: Base tables refresh

MATCHES table is refreshed (every 2 hours in prod) with those events

The Snowflake MDMMATCHES table is read by an ETL process implemented by the COMPANY team.
ETL Batches

The ETL process creates relations like SAPtoHCOSAffiliations, FlextoDDDAffiliations, and FlextoHCOSAffiliations through the Batch Channel.


NotMatch Callback
For created relations, the NotMatch callback is triggered and removes LINKS using NotMatch Reltio calls

Client software 

ACLs

NameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopic
DerivedAffilations Batch Load user

derivedaffiliations_load

N/AN/A
- "CREATE_RELATION"
- "UPDATE_RELATION"
- US
*

" + }, + { + "title": "ICUE (AMER)", + "pageID": "172301085", + "pageLink": "/pages/viewpage.action?pageId=172301085", + "content": "

Contacts

Gateway

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

ICUE user (NPROD)
icue
External OAuth2

ICUE-MDM_client

["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]
["US"]

["ICUE"]
consumer:
regex:
- "^.*-out-full-icue-all$"
- "^.*-out-full-icue-grv-all$"
groups:
- icue_dev
- icue_qa
- icue_stage
- dev_icue_grv
- qa_icue_grv
- stage_icue_grv
ICUE user (PROD)
icue
External OAuth2
ICUE-MDM_client
["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]
["US"]

["ICUE"]
consumer:
regex:
- "^.*-out-full-icue-all$"
- "^.*-out-full-icue-grv-all$"
groups:
- icue_prod
- prod_icue_grv
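The ICUE consumer ACLs above grant access to topics matching regex patterns. A minimal Python sketch of how such a pattern check works (the `topic_allowed` helper is hypothetical; only the regexes come from the configuration above):

```python
import re

# Topic regexes from the ICUE consumer configuration above.
PATTERNS = [r"^.*-out-full-icue-all$", r"^.*-out-full-icue-grv-all$"]

def topic_allowed(topic: str) -> bool:
    """Return True if a topic name matches any of the configured regexes."""
    return any(re.match(p, topic) for p in PATTERNS)

print(topic_allowed("gblus-prod-out-full-icue-grv-all"))  # True
print(topic_allowed("gblus-prod-out-full-grv-all"))       # False
```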

Kafka

GBLUS (icue-grv-mule)

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
icue - DEV
icue_nprod

"exchange.in.headers.eventType in ['full_not_trimmed']
&& exchange.in.headers.objectType in ['HCP']
&& ['GRV'].intersect(exchange.in.headers.eventSource)
&& !(['ICUE'].intersect(exchange.in.headers.eventSource))
&& exchange.in.headers.eventSubtype in ['HCP_CREATED', 'HCP_CHANGED']"
${local_env}-out-full-icue-grv-all

icue - QA
icue_nprod

${local_env}-out-full-icue-grv-all

icue - STAGE
icue_nprod

${local_env}-out-full-icue-grv-all

icue  - PROD
icuex_prod

${env}-out-full-icue-grv-all

Flows

Client software 


" + }, + { + "title": "ESAMPLES (GBLUS)", + "pageID": "172301089", + "pageLink": "/pages/viewpage.action?pageId=172301089", + "content": "

Contacts

Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>

Jain, Somya <Somya.Jain@COMPANY.com>

Bablani, Vijay <Vijay.Bablani@COMPANY.com>

Reynolds, Lori <Lori.Reynolds@COMPANY.com>

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

MuleSoft - esamples user
esamples

OAuth2

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
- "GET_ENTITIES"
US
all_sources

N/A

Flows

Client software 


" + }, + { + "title": "VEEVA_FIELD (EMEA, AMER)", + "pageID": "172301091", + "pageLink": "/pages/viewpage.action?pageId=172301091", + "content": "

Contacts

Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>

Fani, Chris <Christopher.Fani@COMPANY.com>

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

VEEVA_FIELD user (NPROD)
veeva_field
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA",
"BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY",
"BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY",
"CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO",
"FR","GB","GF","GL","GP","GR","GT","GY","HK","HN",
"HR","HU","ID","IE","IL","IN","IT","JM","JP","KR",
"KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ",
"MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE",
"PF","PH","PK","PL","PM","PN","PT","PY","RE","RO",
"RS","RU","SA","SE","SG","SI","SK","SV","SX","TF",
"TH","TN","TR","TT","TW","UA","UY","VE","VG","VN",
"WF","XX","YT"]
GB
["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY",
"CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP",
"GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH",
"KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK",
"ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit",
"SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]
N/A
VEEVA_FIELD user (PROD)
veeva_field
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA",
"BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY",
"BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY",
"CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO",
"FR","GB","GF","GL","GP","GR","GT","GY","HK","HN",
"HR","HU","ID","IE","IL","IN","IT","JM","JP","KR",
"KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ",
"MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE",
"PF","PH","PK","PL","PM","PN","PT","PY","RE","RO",
"RS","RU","SA","SE","SG","SI","SK","SV","SX","TF",
"TH","TN","TR","TT","TW","UA","UY","VE","VG","VN",
"WF","XX","YT"]
GB
["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY",
"CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP",
"GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH",
"KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY",
"PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP",
"SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]
N/A

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

VEEVA_FIELD   user (NPROD)
veeva_field
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["CA", "US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]
N/A

External OAuth2

(GBLUS-STAGE)

55062bae02364c7598bc3ffbfe38e07b
VEEVA_FIELD user (PROD)
veeva_field
External OAuth2 (ALL)
67b77aa7ecf045539237af0dec890e59
726b6d341f994412a998a3e32fdec17a
["GET_ENTITIES","LOOKUPS"]
["CA", "US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]
N/A

Flows

Client software 


" + }, + { + "title": "PFORCEOL (EMEA, AMER, APAC)", + "pageID": "172301093", + "pageLink": "/pages/viewpage.action?pageId=172301093", + "content": "

Contacts

Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>

Fani, Chris <Christopher.Fani@COMPANY.com>

Requirements

Partial requirements

Sent by Amish Adhvaryu

PforceOL Dev - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
PforceOL Stage - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
PforceOL Prod - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
 PT RO DK BR IL TR GR NO CA JP MX AT AR RU KR DE PL AU HK IN MY PH SG TW TH ES CZ LT UA VN ID KZ HU SK UK SE FI CH SA EG MA ZA BE NL IT DZ CO NZ PE CL EE HR LV RS TN US CN SI FR BG IR WA PK

New Requirements - October 2024

Action needed

Need Access to PFORCEOL - DEV, PFORCEOL - QA, PFORCEOL - STG, PFORCEOL - PROD

PingFederate username

DEV & QA: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
STG: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
PROD: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

Countries

AC, AE, AG, AI, AR, AT, AU, AW, BB, BE, BH, BM, BR, BS, BZ, CA, CH, CN, CO, CR, CU, CW, CY, CZ, DE, DK, DM, DO, DZ, EG, ES, FI, FK, FR, GB, GD, GF, GP, GR, GT, GY, HK, HN, HT, ID, IE, IL, IN, IT, JM, JP, KN, KR, KW, KY, LC, LU, MF, MQ, MS, MX, MY, NI, NL, NO, NZ, OM, PA, PH, PL, PT, QA, RO, SA, SE, SG, SK, SR, SV, SX, TC, TH, TR, TT, TW, UE, UK, US, VC, VG, VN, YE, ZA

AJ: "Keep the other countries for now"

Full list:

AC, AD, AE, AG, AI, AM, AN, AR, AT, AU, AW, BA, BB, BE, BG, BH, BL, BM, BO, BQ, BR, BS, BY, BZ, CA, CH, CL, CN, CO, CP, CR, CU, CW, CY, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, FK, FO, FR, GB, GD, GF, GL, GP, GR, GT, GY, HK, HN, HR, HT, HU, ID, IE, IL, IN, IR, IT, JM, JP, KN, KR, KW, KY, KZ, LC, LT, LU, LV, MA, MC, MF, MQ, MS, MU, MX, MY, NC, NI, NL, NO, NZ, OM, PA, PE, PF, PH, PK, PL, PM, PN, PT, PY, QA, RE, RO, RS, RU, SA, SE, SG, SI, SK, SR, SV, SX, TC, TF, TH, TN, TR, TT, TW, UA, UE, UK, US, UY, VC, VE, VG, VN, WA, WF, XX, YE, YT, ZA

Tenant

AMER, EMEA, APAC, US, EX-US

Environments

DEV, QA, STG, PROD

Permissions range

Read access for HCP Search and HCO Search and MCO Search

Sources

Sources that are configured in OneMed:
MAPP, ONEKEY, OK, PFORCERX_ODS, PFORCERX, VOD, LEGACY_SFA_IDL, PTRS, JPDWH, iCUE, IQVIA_DDD, DCR_SYNC, MDE, MEDPAGESHCP, MEDPAGESHCO

Business justification

These changes are required as part of the OneMed 2.0 Transformation Project. The proposed changes will help the OneMed technical team build a better solution for searching HCP/HCO data within the MDM system through API integration.

Point of contact

Anvesh (anveshvedula.chalapati@COMPANY.com), Aparna (aparna.balakrishna@COMPANY.com)

Excel sheet with countries: \"\"

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PFORCEOL user (NPROD)
pforceol
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW",
"BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS",
"BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW",
"CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES",
"FI","FO","FR","GB","GF","GL","GP","GR","GT","GY",
"HK","HN","HR","HU","ID","IE","IL","IN","IR","IT",
"JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA",
"MC","MF","MQ","MU","MX","MY","NC","NI","NL","false",
"NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT",
"PY","RE","RO","RS","RU","SA","SE","SG","SI","SK",
"SV","SX","TF","TH","TN","TR","TT","TW","UA","UK",
"US","UY","VE","VG","VN","WA","WF","XX","YT","ZA"]
GB
["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY",
"CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP",
"GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH",
"KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK",
"ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit",
"SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]
N/A
PFORCEOL user (PROD)
pforceol
External OAuth2
- ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW",
"BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS",
"BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW",
"CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES",
"FI","FO","FR","GB","GF","GL","GP","GR","GT","GY",
"HK","HN","HR","HU","ID","IE","IL","IN","IR","IT",
"JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA",
"MC","MF","MQ","MU","MX","MY","NC","NI","NL","false",
"NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT",
"PY","RE","RO","RS","RU","SA","SE","SG","SI","SK",
"SV","SX","TF","TH","TN","TR","TT","TW","UA","UK",
"UY","VE","VG","VN","WA","WF","XX","YT","ZA"]
GB
["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY",
"CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP",
"GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH",
"KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY",
"PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP",
"SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]
N/A

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PFORCEOL  user (NPROD)
pforceol
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["CA", "US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]
N/A

External OAuth2

(GBLUS-STAGE)

223ca6b37aef4168afaa35aa2cf39a3e
PFORCEOL user (PROD)
pforceol
External OAuth2 (ALL)
e678c66c02c64b599b351e0ab02bae9f
e6ece8da20284c6987ce3b8564fe9087
["GET_ENTITIES","LOOKUPS"]
["CA", "US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]
N/A

Flows

Client software 


" + }, + { + "title": "1CKOL (Global)", + "pageID": "184688633", + "pageLink": "/pages/viewpage.action?pageId=184688633", + "content": "

Contacts:

Kucherov, Aleksei <Aleksei.Kucherov@COMPANY.com>; Moshin, Nikolay <Nikolay.Moshin@COMPANY.com>

Old Contacts:

Data load support:

First Name: Ilya

Last Name: Enkovich

Office:  ●●●●●●●●●●●●●●●●●●

Mob: ●●●●●●●●●●●●●●●●●●

Internet: www.unit-systems.ru

E-mail: enkovich.i.s@unit-systems.ru


Backup contact:

First Name: Sergey

Last Name: Portnov

Office: ●●●●●●●●●●●●●●●●●●

Mob: ●●●●●●●●●●●●●●●●●●

Internet: www.unit-systems.ru

E-mail: portnov.s.a@unit-systems.ru

Flows

1CKOL has one batch process which consumes export files from the data warehouse, processes them, and loads the data into MDM. This process is based on the incremental batch engine and runs on the Airflow platform.


Input files

The input files are delivered by 1CKOL to an AWS S3 bucket.

MAPP Review - Europe - 1cKOL - All Documents (sharepoint.com)


UATPROD
S3 service accountsvc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3svc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3
S3 Access key IDAKIATCTZXPPJXRNSDOGNAKIATCTZXPPJXRNSDOGN
S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
S3 Foldermdm/UAT/inbound/KOL/RU/mdm/inbound/KOL/RU/
Input data file mask KOL_Extract_Russia_[0-9]+.zipKOL_Extract_Russia_[0-9]+.zip
Compressionzipzip
FormatFlat files, 1CKOL dedicated format Flat files, 1CKOL dedicated format 

Example

KOL_Extract_Russia_07212021.zipKOL_Extract_Russia_07212021.zip
Schedulenonenone
Airflow job inc_batch_eu_kol_ru_stage inc_batch_eu_kol_ru_prod
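As a quick illustration of the input data file mask above, a minimal Python sketch (the helper name is hypothetical; the literal dot in the mask is escaped here for an exact match):

```python
import re

# Input data file mask from the table above (same pattern on UAT and PROD).
INPUT_MASK = re.compile(r"KOL_Extract_Russia_[0-9]+\.zip")

def is_kol_input(filename: str) -> bool:
    """Return True when an S3 object name matches the 1CKOL input mask."""
    return INPUT_MASK.fullmatch(filename) is not None

print(is_kol_input("KOL_Extract_Russia_07212021.zip"))  # True
print(is_kol_input("KOL_Extract_Russia.zip"))           # False
```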

Data mapping 

Data mapping is described in the attached document.

\"\"

Configuration

Flow configuration is stored in the MDM Environment configuration repository. For each environment where the flow should be enabled, the configuration file inc_batch_eu_kol_ru.yml has to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/, and the batch name "inc_batch_eu_kol_ru" has to be added to the "airflow_components" list defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the inc_batch_eu_kol_ru.yml file for the UAT and PROD environments:


inc_batch_eu_kol_ru
UAThttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.yml
PRODhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.yml

Applying configuration changes is done by executing the deploy Airflow's components procedure.
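The per-environment configuration layout described above can be expressed as a small path helper; this is only a sketch of the naming convention (the function name is hypothetical):

```python
def kol_ru_config_path(env: str) -> str:
    """Location of the flow config for a given environment, following the
    inventory/[env name]/group_vars/gw-airflow-services/ convention."""
    return f"inventory/{env}/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.yml"

print(kol_ru_config_path("prod"))
# inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.yml
```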

SOPs


There is no particular SOP procedure for this flow. All common SOPs are described in the "Incremental batch flows: SOP" chapter.



" + }, + { + "title": "Snowflake MDM Data Mart", + "pageID": "164470197", + "pageLink": "/display/GMDM/Snowflake+MDM+Data+Mart", + "content": "

This section describes the MDM Data Mart in Snowflake. The Data Mart contains MDM data from Reltio tenants, published into Snowflake via the MDM HUB.

\"\"



Roles, permissions, warehouses used in MDM Data Mart in Snowflake:
NewMdmSfRoles_231017.xlsx

" + }, + { + "title": "Connect Guide", + "pageID": "196886695", + "pageLink": "/display/GMDM/Connect+Guide", + "content": "


How to add a user to the DATA Role: 

Users accessing Snowflake have to create a ticket and add themselves to the DATA role. This will allow the user to view the CUSTOMER_SL schema (the user access layer to Snowflake):

  1. Go to https://requestmanager.COMPANY.com/
  2. Click on the TOP: "Group Manager" - https://requestmanager1.COMPANY.com/Group/Default.aspx
  3. Click on the "Distribution Lists"
  4. Search for the correct group you want to be added. Check the group name here: "List Of Groups With Access To The DataMart
    1. \"\"
  5. In the search write the "AD Group Name" for selected SF Instance.
  6. Click Request Access
    1. \"\"
  7. Click "Add Myself" and then save 
    1. \"\"
  8. Go to "Cart" and click "Submit Request"
    1. \"\"

How to connect to the DB:


List Of Groups With Access To The DataMart

Since October 2023

NewMdmSfRoles_231017 1.xlsx

[Expired Oct 2023] Groups that have access to CUSTOMER_SL schema:

Role NameSF InstanceDB InstanceEnvAD Group Name
COMM_AMER_MDM_DMART_DEV_DATA_ROLEAMERAMERDEVsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_DEV_DATA_ROLE
COMM_AMER_MDM_DMART_QA_DATA_ROLEAMERAMERQAsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_QA_DATA_ROLE
COMM_AMER_MDM_DMART_STG_DATA_ROLEAMERAMERSTAGEsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_STG_DATA_ROLE
COMM_AMER_MDM_DMART_PROD_DATA_ROLEAMERAMERPRODsfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLE
COMM_MDM_DMART_DEV_DATA_ROLEAMERUSDEVsfdb_us-east-1_amerdev01_COMM_DEV_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_QA_DATA_ROLEAMERUSQAsfdb_us-east-1_amerdev01_COMM_QA_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_STG_DATA_ROLEAMERUSSTAGEsfdb_us-east-1_amerdev01_COMM_STG_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_PROD_DATA_ROLEAMERUSPRODsfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLE
COMM_APAC_MDM_DMART_DEV_DATA_ROLEEMEAAPACDEVsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_DEV_DATA_ROLE
COMM_APAC_MDM_DMART_QA_DATA_ROLEEMEAAPACQAsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_QA_DATA_ROLE
COMM_APAC_MDM_DMART_STG_DATA_ROLEEMEAAPACSTAGEsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_STG_DATA_ROLE
COMM_APAC_MDM_DMART_PROD_DATA_ROLEEMEAAPACPRODsfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLE
COMM_EMEA_MDM_DMART_DEV_DATA_ROLEEMEAEMEADEVsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_DEV_DATA_ROLE
COMM_EMEA_MDM_DMART_QA_DATA_ROLEEMEAEMEAQAsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_QA_DATA_ROLE
COMM_EMEA_MDM_DMART_STG_DATA_ROLEEMEAEMEASTAGEsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_STG_DATA_ROLE
COMM_EMEA_MDM_DMART_PROD_DATA_ROLEEMEAEMEAPRODsfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLE
COMM_MDM_DMART_DEV_DATA_ROLEEMEAEUDEVsfdb_eu-west-1_emeadev01_COMM_DEV_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_QA_DATA_ROLEEMEAEUQAsfdb_eu-west-1_emeadev01_COMM_QA_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_STG_DATA_ROLEEMEAEUSTAGEsfdb_eu-west-1_emeadev01_COMM_STG_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_PROD_DATA_ROLEEMEAEUPRODsfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLE
COMM_GBL_MDM_DMART_DEV_DATA_ROLEEMEAGBLDEVsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_DATA_ROLE
COMM_GBL_MDM_DMART_QA_DATA_ROLEEMEAGBLQAsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_DATA_ROLE
COMM_GBL_MDM_DMART_STG_DATA_ROLEEMEAGBLSTAGEsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_DATA_ROLE
COMM_GBL_MDM_DMART_PROD_DATA_ROLEEMEAGBLPRODsfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLE



" + }, + { + "title": "Data model", + "pageID": "196886989", + "pageLink": "/display/GMDM/Data+model", + "content": "


The data mart contains MDM data in object and relational data models. A fragment of the model is presented in the picture below. 

The object data model includes the latest version of the Reltio JSON documents representing entities, relationships, LOVs, and the merge tree. They are loaded into the ENTITIES, RELATIONS, LOV_DATA, MERGES, and MATCHES tables. 
They are loaded from Reltio using the HUB streaming interface described here.

The object model is transformed into the relational model by a set of dynamic views using Snowflake's JSON processing query language. The views are generated automatically from the Reltio data model. The regeneration process is maintained in Jenkins and triggered weekly or on demand. The generation process starts from root objects like HCP and HCO, walks through the JSON tree, and generates views with the following rules:  


\"The
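The view-generation walk described above can be sketched in Python. This simplified illustration (the document shape and all names are assumptions, not the actual generator, which emits Snowflake views rather than rows) flattens a nested JSON document into per-view rows, following the rule that child views carry the parent view keys plus the nested attribute uri:

```python
def flatten(view, keys, node, out):
    """Emit one row per JSON object: scalar attributes stay in this view's
    row; list-valued (nested) attributes become child views keyed by the
    parent keys plus the nested attribute uri."""
    row = dict(keys)
    for name, value in node.items():
        if isinstance(value, list):            # nested attribute, e.g. Addresses
            for child in value:
                child_keys = dict(keys, attribute_uri=child.pop("uri", None))
                flatten(f"{view}_{name}".upper(), child_keys, child, out)
        else:
            row[name] = value
    out.setdefault(view, []).append(row)
    return out

# Hypothetical Reltio-like fragment:
doc = {"FirstName": "Ann", "Addresses": [{"uri": "a/1", "City": "Oslo"}]}
views = flatten("HCP", {"entity_uri": "entities/e1"}, doc, {})
# views["HCP"] holds the root row; views["HCP_ADDRESSES"] holds the child rows
```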

Model versions

There are two versions of Reltio data model maintained in the data mart:

Key generation strategy

Object model:

ObjectsKey columnsDescription
ENTITIES, MATCHES MERGESentity_uri, country*Reltio entity unique identifier and country
RELATIONSrelation_uri, country*Reltio relationship unique identifier & country
LOV_DATAid, mdm_region*the concatenation of Reltio LOV name + ':' + canonical code as id, and mdm region

  * - only in global data mart

Relational model:


ObjectsKey columnsDescription
root objects like HCP, HCO, MCO, MERGE_HISTORY, MATCH_HISTORYentity_uri, country*Reltio entity unique identifier and country
AFFILIATIONSrelation_uri, country*Reltio relationship unique identifier and country
child views for nested attributes Addresses, Specialties ...parent view keys, nested attribute uri, country* parent view keys + nested attribute uri  + country

  * - only in global data mart
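As a small illustration of the LOV_DATA key rule above (LOV name + ':' + canonical code), a hypothetical helper:

```python
def lov_data_id(lov_name: str, canonical_code: str) -> str:
    """Build the LOV_DATA id by joining the LOV name and the canonical
    code with ':' (key rule from the table above)."""
    return f"{lov_name}:{canonical_code}"

print(lov_data_id("LKUP_IMS_GENDER", "M"))  # LKUP_IMS_GENDER:M
```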


Schemas:


MDM Data Mart contains the following schemas:

Schema nameDescription
LANDINGSchemas used by HUB ETL processes as stage area
CUSTOMERMain schema containing data mart data 
CUSTOMER_SLAccess schema to CUSTOMER schema data
AES_RS_SLContains views presenting data in Redshift data model




" + }, + { + "title": "AES_RS_SL", + "pageID": "203229895", + "pageLink": "/display/GMDM/AES_RS_SL", + "content": "

The schema contains a set of views that mimic the MDM DataMart from Redshift. 

The views integrate both data models, COMPANY and IQVIA, and present data from all countries available in Reltio.


Differences from original Redshift mart




" + }, + { + "title": "CUSTOMER schema", + "pageID": "163919161", + "pageLink": "/display/GMDM/CUSTOMER+schema", + "content": "

This is the main schema containing MDM data in two formats.

The object model represents the Reltio JSON format. Data in this format are kept in the ENTITIES, RELATIONS, and MERGE_TREE tables. 

The relational model is created as a set of views (standard or materialized) derived from the object model. Most of the views are generated in an automated way based on the Reltio Data Model configuration and directly reflect the Reltio object model. There are two sets of views, as there are two models in Reltio: COMPANY and IQVIA. Those views can change dynamically as the Reltio configuration is updated.





" + }, + { + "title": "Customer base objects", + "pageID": "164470194", + "pageLink": "/display/GMDM/Customer+base+objects", + "content": "

ENTITIES

Keeps Reltio entity objects


ColumnTypeDescription

ENTITY_URI

TEXT

Reltio entity uri

COUNTRY

TEXT

Country

ENTITY_TYPE

TEXT

Entity type for example: HCO, HCP

ACTIVE

BOOLEAN

Active flag 

CREATE_TIME

TIMESTAMP_LTZ

Create time

UPDATE_TIME

TIMESTAMP_LTZ

Update time

OBJECT

VARIANT

JSON object

LAST_EVENT_TYPE

TEXT

The last event type that updated the JSON object

LAST_EVENT_TIME

TIMESTAMP_LTZ

Last event time

PARENT

TEXT

Parent entity uri

CHECKSUM

NUMBER

Checksum
COMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global Id
PARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is empty


HIST_INACTIVE_ENTITIES

Used for historical inactive OneKey crosswalks. The structure is a copy of the ENTITIES table.

ColumnTypeDescription

ENTITY_URI

TEXT

Reltio entity uri

COUNTRY

TEXT

Country

ENTITY_TYPE

TEXT

Entity type for example: HCO, HCP

ACTIVE

BOOLEAN

Active flag 

CREATE_TIME

TIMESTAMP_LTZ

Create time

UPDATE_TIME

TIMESTAMP_LTZ

Update time

OBJECT

VARIANT

JSON object

LAST_EVENT_TYPE

TEXT

The last event type that updated the JSON object

LAST_EVENT_TIME

TIMESTAMP_LTZ

Last event time

PARENT

TEXT

Parent entity uri

CHECKSUM

NUMBER

Checksum
COMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global Id
PARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is empty

RELATIONS

Keeps Reltio relation objects


ColumnTypeDescription

RELATION_URI

TEXT

Reltio relation uri

COUNTRY

TEXT

Country

RELATION_TYPE

TEXT

Relation type

ACTIVE

BOOLEAN

Active flag

CREATE_TIME

TIMESTAMP_LTZ

Create time

UPDATE_TIME

TIMESTAMP_LTZ

Update time

START_ENTITY_URI

TEXT

Source entity uri 

END_ENTITY_URI

TEXT

Target entity uri

OBJECT

VARIANT

JSON object 

LAST_EVENT_TYPE

TEXT

The last event type that modified the record

LAST_EVENT_TIME

TIMESTAMP_LTZ

Last event time

PARENT

TEXT

not used

CHECKSUM

NUMBER

Checksum

MATCHES

The table presents active and historical matches found in Reltio for all entities.


ColumnTypeDescription
ENTITY_URITEXTReltio entity uri
TARGET_ENTITY_URITEXTReltio entity uri that ENTITY_URI matches to
MATCH_TYPETEXTMatch type
MATCH_RULE_NAMETEXTMatch rule name
COUNTRYTEXTCountry
LAST_EVENT_TYPETEXTThe last event type that modified the record
LAST_EVENT_TIMETIMESTAMP_LTZLast event time
LAST_EVENT_CHECKSUMNUMBERThe last event checksum
ACTIVEBOOLEANActive flag

MATCH_HISTORY

The view shows match history for active and inactive matches, enriched with merge data. The merge info is available for matches that were inactivated by a merge action triggered by users or Reltio background processes.  

ColumnTypeDescription
ENTITY_URITEXTReltio entity uri
TARGET_ENTITY_URITEXTReltio entity uri that ENTITY_URI matches to
MATCH_TYPETEXTMatch type
MATCH_RULE_NAMETEXTMatch rule name
COUNTRYTEXTCountry
LAST_EVENT_TYPETEXTThe last event type that modified the record
LAST_EVENT_TIMETIMESTAMP_LTZLast event time
LAST_EVENT_CHECKSUMNUMBERThe last event checksum
ACTIVEBOOLEANActive flag
MERGEDBOOLEANMerge indicator, the true value indicates that the merge happened for the match.
MERGE_REASONTEXT Merge reason 
MERGE_USERTEXTReltio user name or process name that executed the merge
MERGE_DATETIMESTAMP_LTZMerge date 
MERGE_RULETEXTMerge rule that triggered the merge

MERGES

The table presents active merges found in Reltio based on the merge_tree export.


ColumnTypeDescription
ENTITY_URITEXTReltio entity uri
LAST_UPDATE_TIMETIMESTAMP_LTZDate of the last update on the selected row
CREATE_TIMETIMESTAMP_LTZCreation date of the selected row

OBJECT

VARIANT

JSON object 

MERGE_HISTORY

The view shows merge history for active entities. The merge history view is built from the merge_tree Reltio export. 

ColumnTypeDescription
ENTITY_URITEXTReltio entity uri
LOSER_ENTITY_URITEXTReltio entity uri for the merge loser
MERGE_REASONTEXT 

Merge reason 


Merge on the flyThis indicates automatic match rules were able to find matches for a newly added entity. Therefore, the new entity was not created as a separate entity in the platform but was merged into an existing one instead.
Merge by crosswalksIf a newly added entity has the same crosswalk as that of an existing entity in the platform, such entities are merged automatically on the fly because the Reltio platform does not allow multiple entities with the same crosswalk.
Automatic merge by crosswalksSometimes, two entities with the same crosswalk may exist in the platform (simultaneously added entities). In this case, such entities are merged automatically using a special background thread.
Group merge (Matches found on object creation)This indicates that several entities are grouped into one merge request because all such entities will be merged at the same time to create a single entity in the platform. The reason for a group merge can be an automatic match rule or same crosswalk or both.
Merges found by background merge processThe background match thread (incremental match processor) modifies entities as a result of create/change/remove events and performs a rematch. During the rematch, if some entities match using the automatic match rules, such entities are merged.
Merge by handThis is a merge performed by a user through the API or from the UI by going through the potential matches.
MERGE_RULETEXTMerge rule that triggered the merge
USERTEXTUser name which executed the merge
MERGE_DATETIMESTAMP_LTZMerge date 

ENTITY_HISTORY

Keeps event history for entities and relations

ColumnTypeDescription
EVENT_KEYTEXTEvent key
EVENT_PARTITIONNUMBERPartition number in Kafka
EVENT_OFFSETNUMBEROffset in Kafka
EVENT_TOPICTEXTName of the topic in Kafka where this event is stored
EVENT_TIMETIMESTAMP_LTZTimestamp when the event was generated
EVENT_TYPETEXTEvent type
COUNTRYTEXTCountry
ENTITY_URITEXTReltio entity uri
CHECKSUMNUMBERChecksum

LOV_DATA

Keeps LOV objects

ColumnTypeDescription
IDTEXTLOV identifier 
OBJECTVARIANTReltio RDM object in JSON format

CODES

ColumnTypeDescription
SOURCETEXTSource MDM system name
CODE_IDTEXTCode id - generated by concatenating the LOV name and canonical code
CANONICAL_CODETEXTCanonical code
LOV_NAMETEXTLOV (Dictionary) name
ACTIVEBOOLEANActive flag
DESCTEXTEnglish description
COUNTRYTEXTCode country
PARENTSTEXTParent code id

CODE_TRANSLATIONS

RDM code translations

ColumnTypeDescription
SOURCETEXTSource MDM system name
CODE_IDTEXTCode id
CANONICAL_CODETEXTCanonical code
LOV_NAMETEXTLOV (Dictionary) name
ACTIVEBOOLEANActive flag
LANG_CODETEXTLanguage code
LAND_DESCTEXTLanguage description
COUNTRYTEXTCountry

CODE_SOURCE_MAPPINGS

Source code mappings to canonical codes in Reltio RDM

ColumnTypeDescription
SOURCETEXTSource MDM system name
CODE_IDTEXTCode id
SOURCE_NAMETEXTSource name
SOURCE_CODETEXTSource code
ACTIVEBOOLEANActive flag (true - active, false - inactive)
IS_CANONICALBOOLEANIs canonical
COUNTRYTEXTCountry
LAST_MODIFIEDTIMESTAMP_LTZLast modified date
PARENTTEXTParent code

ENTITY_CROSSWALKS

Keeps entity crosswalks

ColumnTypeDescription
CROSSWALK_URITEXTCrosswalk uri
ENTITY_URITEXTEntity uri
ENTITY_TYPETEXTEntity type
ACTIVEBOOLEANActive flag
TYPETEXTCrosswalk type
VALUETEXTCrosswalk value
SOURCE_TABLETEXTSource table
CREATE_DATETIMESTAMP_NTZCreate date
UPDATE_DATETIMESTAMP_NTZUpdate date
RELTIO_LOAD_DATETIMESTAMP_NTZDate when this crosswalk was loaded to Reltio
DELETE_DATETIMESTAMP_NTZDelete date
COMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global Id

RELATION_CROSSWALKS

Keeps relation crosswalks

ColumnTypeDescription
CROSSWALK_URITEXTCrosswalk URI
RELATION_URITEXTRelation URI
RELATION_TYPETEXTRelation type
ACTIVEBOOLEANActive flag
TYPETEXTCrosswalk type
VALUETEXTCrosswalk value
SOURCE_TABLETEXTSource table
CREATE_DATETIMESTAMP_NTZCreate date
UPDATE_DATETIMESTAMP_NTZUpdate date
DELETE_DATETIMESTAMP_NTZDelete date
RELTIO_LOAD_DATETIMESTAMP_NTZDate when this relation was loaded to Reltio

ATTRIBUTE_SOURCE

Presents information about which crosswalk provided a given attribute. 

The view can be joined with the views for nested attributes to also get the attribute values.


ColumnTypeDescription
ATTRIBUTE_URITEXTAttribute URI
ENTITY_URITEXT

Entity URI

ACTIVEBOOLEANIs entity active
TYPETEXTCrosswalk type
VALUETEXTCrosswalk value
SOURCE_TABLETEXTCrosswalk source table


ENTITY_UPDATE_DATES

Presents information about update dates of entities in Reltio MDM and Snowflake.

The view can be used to query records updated in a period of time, including root objects like HCP, HCO, MCO, and child objects like IDENTIFIERS, SPECIALTIES, ADDRESSES, etc.


ColumnTypeDescription
ENTITY_URITEXT

Entity URI

ACTIVEBOOLEANIs entity active
ENTITY_TYPETEXTType of entity
COUNTRYTEXTCountry iso code
MDM_CREATE_TIMETIMESTAMP_LTZEntity create time in Reltio
MDM_UPDATE_TIMETIMESTAMP_LTZEntity update time in Reltio
SF_CREATE_TIMETIMESTAMP_LTZEntity create time in Snowflake DB
SF_UPDATE_TIMETIMESTAMP_LTZEntity last update time in Snowflake
LAST_EVENT_TIMETIMESTAMP_LTZLast KAFKA event timestamp
CHECKSUMNUMBERChecksum
COMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global Id
PARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is empty

RELATION_UPDATE_DATES

Presents information about update dates of relations in Reltio MDM and Snowflake.

The view can be used to query all entries updated in a period of time from AFFILIATIONS and child objects like AFFIL_RELATION_TYPE.


ColumnTypeDescription
RELATION_URITEXT

Entity URI

ACTIVEBOOLEANIs entity active
RELATION_TYPETEXTType of entity
COUNTRYTEXTCountry iso code
MDM_CREATE_TIMETIMESTAMP_LTZRelation create time in Reltio
MDM_UPDATE_TIMETIMESTAMP_LTZRelation update time in Reltio
SF_CREATE_TIMETIMESTAMP_LTZRelation create time in Snowflake DB
SF_UPDATE_TIMETIMESTAMP_LTZRelation last update time in Snowflake
LAST_EVENT_TIMETIMESTAMP_LTZLast KAFKA event timestamp
CHECKSUMNUMBERChecksum
" + }, + { + "title": "Data Materialization Process", + "pageID": "347657026", + "pageLink": "/display/GMDM/Data+Materialization+Process", + "content": "

\"\"

" + }, + { + "title": "Dynamic views for IQVIA MDM Model", + "pageID": "164470213", + "pageLink": "/display/GMDM/Dynamic+views++for+IQVIA+MDM+Model", + "content": "


HCP

Health care provider

Column

Type

Description

Reltio Attribute URI

LOV Name

ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



FIRST_NAME

VARCHAR

First Name

configuration/entityTypes/HCP/attributes/FirstName


LAST_NAME

VARCHAR

Last Name

configuration/entityTypes/HCP/attributes/LastName


MIDDLE_NAME

VARCHAR

Middle Name

configuration/entityTypes/HCP/attributes/MiddleName


NAME

VARCHAR

Name

configuration/entityTypes/HCP/attributes/Name


PREFIX

VARCHAR


configuration/entityTypes/HCP/attributes/Prefix

LKUP_IMS_PREFIX

SUFFIX_NAME

VARCHAR

Generation Suffix

configuration/entityTypes/HCP/attributes/SuffixName

LKUP_IMS_SUFFIX

PREFERRED_NAME

VARCHAR


configuration/entityTypes/HCP/attributes/PreferredName


NICKNAME

VARCHAR


configuration/entityTypes/HCP/attributes/Nickname


COUNTRY_CODE

VARCHAR

Country Code

configuration/entityTypes/HCP/attributes/Country

LKUP_IMS_COUNTRY_CODE

GENDER

VARCHAR


configuration/entityTypes/HCP/attributes/Gender

LKUP_IMS_GENDER

TYPE_CODE

VARCHAR

Type code

configuration/entityTypes/HCP/attributes/TypeCode

LKUP_IMS_HCP_CUST_TYPE

ACCOUNT_TYPE

VARCHAR

Account Type

configuration/entityTypes/HCP/attributes/AccountType


SUB_TYPE_CODE

VARCHAR

Sub type code

configuration/entityTypes/HCP/attributes/SubTypeCode

LKUP_IMS_HCP_SUBTYPE

TITLE

VARCHAR


configuration/entityTypes/HCP/attributes/Title

LKUP_IMS_PROF_TITLE

INITIALS

VARCHAR

Initials

configuration/entityTypes/HCP/attributes/Initials


D_O_B

DATE

Date of Birth

configuration/entityTypes/HCP/attributes/DoB


Y_O_B

VARCHAR

Birth Year

configuration/entityTypes/HCP/attributes/YoB


MAPP_HCP_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/MAPPHcpStatus

LKUP_MAPP_HCPSTATUS

GO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/GOStatus

LKUP_GOVOFF_GOSTATUS

PIGO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/PIGOStatus

LKUP_GOVOFF_PIGOSTATUS

NIPPIGO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/NIPPIGOStatus

LKUP_GOVOFF_NIPPIGOSTATUS

PRIMARY_PIGO_RATIONALE

VARCHAR


configuration/entityTypes/HCP/attributes/PrimaryPIGORationale

LKUP_GOVOFF_PIGORATIONALE

SECONDARY_PIGO_RATIONALE

VARCHAR


configuration/entityTypes/HCP/attributes/SecondaryPIGORationale

LKUP_GOVOFF_PIGORATIONALE

PIGOSME_REVIEW

VARCHAR


configuration/entityTypes/HCP/attributes/PIGOSMEReview

LKUP_GOVOFF_PIGOSMEREVIEW

GSQ_DATE

DATE

GSQDate

configuration/entityTypes/HCP/attributes/GSQDate


Column | Type | Description | Reltio Attribute URI | LOV Name
MAPP_DO_NOT_USE | VARCHAR | | configuration/entityTypes/HCP/attributes/MAPPDoNotUse | LKUP_GOVOFF_DONOTUSE
MAPP_CHANGE_DATE | VARCHAR | | configuration/entityTypes/HCP/attributes/MAPPChangeDate |
MAPP_CHANGE_REASON | VARCHAR | | configuration/entityTypes/HCP/attributes/MAPPChangeReason |
IS_EMPLOYEE | BOOLEAN | | configuration/entityTypes/HCP/attributes/IsEmployee |
VALIDATION_STATUS | VARCHAR | Validation Status of the Customer | configuration/entityTypes/HCP/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS
SOURCE_CHANGE_DATE | DATE | Source Change Date | configuration/entityTypes/HCP/attributes/SourceChangeDate |
SOURCE_CHANGE_REASON | VARCHAR | Source Change Reason | configuration/entityTypes/HCP/attributes/SourceChangeReason |
ORIGIN_SOURCE | VARCHAR | Originating Source | configuration/entityTypes/HCP/attributes/OriginSource |
OK_VR_TRIGGER | VARCHAR | | configuration/entityTypes/HCP/attributes/OK_VR_Trigger | LKUP_IMS_SEND_FOR_VALIDATION
BIRTH_CITY | VARCHAR | Birth City | configuration/entityTypes/HCP/attributes/BirthCity |
BIRTH_STATE | VARCHAR | Birth State | configuration/entityTypes/HCP/attributes/BirthState | STATE_CODE
BIRTH_COUNTRY | VARCHAR | Birth Country | configuration/entityTypes/HCP/attributes/BirthCountry | COUNTRY_CD
D_O_D | DATE | | configuration/entityTypes/HCP/attributes/DoD |
Y_O_D | VARCHAR | | configuration/entityTypes/HCP/attributes/YoD |
TAX_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/TaxID |
SSN_LAST4 | VARCHAR | | configuration/entityTypes/HCP/attributes/SSNLast4 |
ME | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/ME |
NPI | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/NPI |
UPIN | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/UPIN |
KAISER_PROVIDER | BOOLEAN | | configuration/entityTypes/HCP/attributes/KaiserProvider |
MAJOR_PROFESSIONAL_ACTIVITY | VARCHAR | | configuration/entityTypes/HCP/attributes/MajorProfessionalActivity | MPA_CD
PRESENT_EMPLOYMENT | VARCHAR | | configuration/entityTypes/HCP/attributes/PresentEmployment | PE_CD
TYPE_OF_PRACTICE | VARCHAR | | configuration/entityTypes/HCP/attributes/TypeOfPractice | TOP_CD
SOLO | BOOLEAN | | configuration/entityTypes/HCP/attributes/Solo |
GROUP | BOOLEAN | | configuration/entityTypes/HCP/attributes/Group |
ADMINISTRATOR | BOOLEAN | | configuration/entityTypes/HCP/attributes/Administrator |
RESEARCH | BOOLEAN | | configuration/entityTypes/HCP/attributes/Research |
CLINICAL_TRIALS | BOOLEAN | | configuration/entityTypes/HCP/attributes/ClinicalTrials |
WEBSITE_URL | VARCHAR | | configuration/entityTypes/HCP/attributes/WebsiteURL |
IMAGE_LINKS | VARCHAR | | configuration/entityTypes/HCP/attributes/ImageLinks |
DOCUMENT_LINKS | VARCHAR | | configuration/entityTypes/HCP/attributes/DocumentLinks |
VIDEO_LINKS | VARCHAR | | configuration/entityTypes/HCP/attributes/VideoLinks |
DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/Description |
CREDENTIALS | VARCHAR | | configuration/entityTypes/HCP/attributes/Credentials | CRED
FORMER_FIRST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/FormerFirstName |
FORMER_LAST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/FormerLastName |
FORMER_MIDDLE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/FormerMiddleName |
FORMER_SUFFIX_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/FormerSuffixName |
SSN | VARCHAR | | configuration/entityTypes/HCP/attributes/SSN |
PRESUMED_DEAD | BOOLEAN | | configuration/entityTypes/HCP/attributes/PresumedDead |
DEA_BUSINESS_ACTIVITY | VARCHAR | | configuration/entityTypes/HCP/attributes/DEABusinessActivity |
STATUS_IMS | VARCHAR | | configuration/entityTypes/HCP/attributes/StatusIMS | LKUP_IMS_STATUS
STATUS_UPDATE_DATE | DATE | | configuration/entityTypes/HCP/attributes/StatusUpdateDate |
STATUS_REASON_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/StatusReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
COMMENTERS | VARCHAR | Commenters | configuration/entityTypes/HCP/attributes/Commenters |
SOURCE_CREATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/SourceCreationDate |
SOURCE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceName |
SUB_SOURCE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/SubSourceName |
EXCLUDE_FROM_MATCH | VARCHAR | | configuration/entityTypes/HCP/attributes/ExcludeFromMatch |
PROVIDER_IDENTIFIER_TYPE | VARCHAR | Provider Identifier Type | configuration/entityTypes/HCP/attributes/ProviderIdentifierType | LKUP_IMS_PROVIDER_IDENTIFIER_TYPE
CATEGORY | VARCHAR | Category Code | configuration/entityTypes/HCP/attributes/Category | LKUP_IMS_HCP_CATEGORY
DEGREE_CODE | VARCHAR | Degree Code | configuration/entityTypes/HCP/attributes/DegreeCode | LKUP_IMS_DEGREE
SALUTATION_NAME | VARCHAR | Salutation Name | configuration/entityTypes/HCP/attributes/SalutationName |
IS_BLACK_LISTED | BOOLEAN | Indicates whether the profile is blacklisted | configuration/entityTypes/HCP/attributes/IsBlackListed |
TRAINING_HOSPITAL | VARCHAR | Training Hospital | configuration/entityTypes/HCP/attributes/TrainingHospital |
ACRONYM_NAME | VARCHAR | Acronym Name | configuration/entityTypes/HCP/attributes/AcronymName |
FIRST_SET_DATE | DATE | Date of 1st Installation | configuration/entityTypes/HCP/attributes/FirstSetDate |
CREATE_DATE | DATE | Individual Creation Date | configuration/entityTypes/HCP/attributes/CreateDate |
UPDATE_DATE | DATE | Date of Last Individual Update | configuration/entityTypes/HCP/attributes/UpdateDate |
CHECK_DATE | DATE | Date of Last Individual Quality Check | configuration/entityTypes/HCP/attributes/CheckDate |
STATE_CODE | VARCHAR | Situation of the healthcare professional (e.g. Active, Inactive, Retired) | configuration/entityTypes/HCP/attributes/StateCode | LKUP_IMS_PROFILE_STATE
STATE_DATE | DATE | Date when the state of the record was last modified | configuration/entityTypes/HCP/attributes/StateDate |
VALIDATION_CHANGE_REASON | VARCHAR | Reason for Validation Status change | configuration/entityTypes/HCP/attributes/ValidationChangeReason | LKUP_IMS_VAL_STATUS_CHANGE_REASON
VALIDATION_CHANGE_DATE | DATE | Date of Validation change | configuration/entityTypes/HCP/attributes/ValidationChangeDate |
APPOINTMENT_REQUIRED | BOOLEAN | Indicates whether sales reps need to make an appointment to see the Professional | configuration/entityTypes/HCP/attributes/AppointmentRequired |
NHS_STATUS | VARCHAR | National Health System Status | configuration/entityTypes/HCP/attributes/NHSStatus | LKUP_IMS_SECTOR_OF_CARE
NUM_OF_PATIENTS | VARCHAR | Number of attached patients | configuration/entityTypes/HCP/attributes/NumOfPatients |
PRACTICE_SIZE | VARCHAR | Practice Size | configuration/entityTypes/HCP/attributes/PracticeSize |
PATIENTS_X_DAY | VARCHAR | Patients Per Day | configuration/entityTypes/HCP/attributes/PatientsXDay |
PREFERRED_LANGUAGE | VARCHAR | Preferred Spoken Language | configuration/entityTypes/HCP/attributes/PreferredLanguage |
POLITICAL_AFFILIATION | VARCHAR | Political Affiliation | configuration/entityTypes/HCP/attributes/PoliticalAffiliation | LKUP_IMS_POL_AFFIL
PRESCRIBING_LEVEL | VARCHAR | Prescribing Level | configuration/entityTypes/HCP/attributes/PrescribingLevel | LKUP_IMS_PRES_LEVEL
EXTERNAL_RATING | VARCHAR | External Rating | configuration/entityTypes/HCP/attributes/ExternalRating |
TARGETING_CLASSIFICATION | VARCHAR | Targeting Classification | configuration/entityTypes/HCP/attributes/TargetingClassification |
KOL_TITLE | VARCHAR | Key Opinion Leader Title | configuration/entityTypes/HCP/attributes/KOLTitle |
SAMPLING_STATUS | VARCHAR | Sampling Status of the HCP | configuration/entityTypes/HCP/attributes/SamplingStatus | LKUP_IMS_SAMPLING_STATUS
ADMINISTRATIVE_NAME | VARCHAR | Administrative Name | configuration/entityTypes/HCP/attributes/AdministrativeName |
PROFESSIONAL_DESIGNATION | VARCHAR | | configuration/entityTypes/HCP/attributes/ProfessionalDesignation | LKUP_IMS_PROF_DESIGNATION
EXTERNAL_INFORMATION_URL | VARCHAR | | configuration/entityTypes/HCP/attributes/ExternalInformationURL |
MATCH_STATUS_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/MatchStatusCode | LKUP_IMS_MATCH_STATUS_CODE
SUBSCRIPTION_FLAG1 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag1 |
SUBSCRIPTION_FLAG2 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag2 |
SUBSCRIPTION_FLAG3 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag3 |
SUBSCRIPTION_FLAG4 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag4 |
SUBSCRIPTION_FLAG5 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag5 |
SUBSCRIPTION_FLAG6 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag6 |
SUBSCRIPTION_FLAG7 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag7 |
SUBSCRIPTION_FLAG8 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag8 |
SUBSCRIPTION_FLAG9 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag9 |
SUBSCRIPTION_FLAG10 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag10 |
MIDDLE_INITIAL | VARCHAR | Middle Initial. This attribute is populated from Middle Name | configuration/entityTypes/HCP/attributes/MiddleInitial |
DELETE_ENTITY | BOOLEAN | Flag used for GDPR removal | configuration/entityTypes/HCP/attributes/DeleteEntity |
PARTY_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/PartyID |
LAST_VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/LastVerificationStatus |
LAST_VERIFICATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/LastVerificationDate |
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCP/attributes/EndDate |
PARTY_LOCALIZATION_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/PartyLocalizationCode |
MATCH_PARTY_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/MatchPartyName |

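Each HCP column above maps to a Reltio attribute URI. As a minimal sketch of how such a flattened row can be produced, the snippet below maps a handful of the attributes to their column names; the entity payload shape (attribute values as lists of `{"value": ...}` objects) follows Reltio's general entity format, but the mapping subset and the sample data are invented for illustration and are not the HUB implementation.

```python
def flatten_hcp(entity: dict) -> dict:
    """Map selected Reltio HCP attributes to their flattened column names."""
    # Illustrative subset of the column mapping documented above.
    mapping = {
        "ValidationStatus": "VALIDATION_STATUS",
        "OriginSource": "ORIGIN_SOURCE",
        "IsEmployee": "IS_EMPLOYEE",
    }
    attrs = entity.get("attributes", {})
    row = {"ENTITY_URI": entity.get("uri")}
    for attr, column in mapping.items():
        values = attrs.get(attr, [])
        # Reltio stores each attribute as a list of {"value": ...} objects;
        # take the first value for a flat, single-valued column.
        row[column] = values[0]["value"] if values else None
    return row

# Invented sample payload, not real data.
sample = {
    "uri": "entities/0001",
    "attributes": {
        "ValidationStatus": [{"value": "VAL"}],
        "OriginSource": [{"value": "AMA"}],
    },
}
print(flatten_hcp(sample))
```

Multi-valued attributes (for example the nested License and Specialities groups below) are flattened into their own tables rather than into single columns.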
LICENSE

Column | Type | Description | Reltio Attribute URI | LOV Name
LICENSE_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CATEGORY | VARCHAR | | configuration/entityTypes/HCP/attributes/License/attributes/Category | LKUP_IMS_LIC_CATEGORY
NUMBER | VARCHAR | State License Number. A unique license number is listed for each license the physician holds. There is no standard format syntax; format examples: 18986, 4301079019, BX1464089. There is also no limit to the number of licenses a physician can hold in a state: a physician can have an inactive resident license plus unlimited active licenses, and residents can have as many as four licenses since some states issue licenses every year | configuration/entityTypes/HCP/attributes/License/attributes/Number |
BOARD_EXTERNAL_ID | VARCHAR | Board External ID | configuration/entityTypes/HCP/attributes/License/attributes/BoardExternalID |
BOARD_CODE | VARCHAR | State License Board Code. For AMA, the board code is always AMA | configuration/entityTypes/HCP/attributes/License/attributes/BoardCode | STLIC_BRD_CD_LOV
STATE | VARCHAR | State License State. Two-character field using USPS standard abbreviations | configuration/entityTypes/HCP/attributes/License/attributes/State | LKUP_IMS_STATE_CODE
ISO_COUNTRY_CODE | VARCHAR | ISO country code | configuration/entityTypes/HCP/attributes/License/attributes/ISOCountryCode | LKUP_IMS_COUNTRY_CODE
DEGREE | VARCHAR | State License Degree. A physician may hold more than one license in a given state, but not more than one MD or one DO license in the same state | configuration/entityTypes/HCP/attributes/License/attributes/Degree | LKUP_IMS_DEGREE
AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/entityTypes/HCP/attributes/License/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS
LICENSE_NUMBER_KEY | VARCHAR | State License Number Key | configuration/entityTypes/HCP/attributes/License/attributes/LicenseNumberKey |
AUTHORITY_NAME | VARCHAR | Authority Name | configuration/entityTypes/HCP/attributes/License/attributes/AuthorityName |
PROFESSION_CODE | VARCHAR | Profession | configuration/entityTypes/HCP/attributes/License/attributes/ProfessionCode | LKUP_IMS_PROFESSION
TYPE_ID | VARCHAR | Authorization Type ID | configuration/entityTypes/HCP/attributes/License/attributes/TypeId |
TYPE | VARCHAR | State License Type. U = Unlimited: no restriction on the physician to practice medicine. L = Limited: implies restrictions of some sort; for example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a licensed physician in state or private hospitals or other settings. T = Temporary: issued to a physician temporarily practicing in an underserved area outside his/her state of licensure, and also granted between board meetings when new licenses are issued; the time span varies from state to state, and temporary licenses typically expire 6-9 months from the date they are issued. R = Resident: granted to a physician in graduate medical education (e.g. residency training) | configuration/entityTypes/HCP/attributes/License/attributes/Type | LKUP_IMS_LICENSE_TYPE
PRIVILEGE_ID | VARCHAR | License Privilege | configuration/entityTypes/HCP/attributes/License/attributes/PrivilegeId |
PRIVILEGE_NAME | VARCHAR | License Privilege Name | configuration/entityTypes/HCP/attributes/License/attributes/PrivilegeName |
PRIVILEGE_RANK | VARCHAR | License Privilege Rank | configuration/entityTypes/HCP/attributes/License/attributes/PrivilegeRank |
STATUS | VARCHAR | State License Status. A = Active: the physician is licensed to practice within the state. I = Inactive: the physician has not re-registered a state license, or the license has been suspended or revoked by the state board. X = Unknown: the state has not provided current information. Note: some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in another state | configuration/entityTypes/HCP/attributes/License/attributes/Status | LKUP_IMS_IDENTIFIER_STATUS
DEACTIVATION_REASON_CODE | VARCHAR | Deactivation Reason Code | configuration/entityTypes/HCP/attributes/License/attributes/DeactivationReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
EXPIRATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/License/attributes/ExpirationDate |
ISSUE_DATE | DATE | State License Issue Date | configuration/entityTypes/HCP/attributes/License/attributes/IssueDate |
BRD_DATE | DATE | State License as-of date or pull date. The as-of date (or stamp date) is the date the current license file is provided to the Database Licensees | configuration/entityTypes/HCP/attributes/License/attributes/BrdDate |
SAMPLE_ELIGIBILITY | VARCHAR | | configuration/entityTypes/HCP/attributes/License/attributes/SampleEligibility |
SOURCE_CD | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/License/attributes/SourceCD |
RANK | VARCHAR | License Rank | configuration/entityTypes/HCP/attributes/License/attributes/Rank |
CERTIFICATION | VARCHAR | Certification | configuration/entityTypes/HCP/attributes/License/attributes/Certification |
REQ_SAMPL_NON_CTRL | VARCHAR | Request Samples Non-Controlled | configuration/entityTypes/HCP/attributes/License/attributes/ReqSamplNonCtrl |
REQ_SAMPL_CTRL | VARCHAR | Request Samples Controlled | configuration/entityTypes/HCP/attributes/License/attributes/ReqSamplCtrl |
RECV_SAMPL_NON_CTRL | VARCHAR | Receives Samples Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/RecvSamplNonCtrl |
RECV_SAMPL_CTRL | VARCHAR | Receives Samples Controlled | configuration/entityTypes/HCP/attributes/License/attributes/RecvSamplCtrl |
DISTR_SAMPL_NON_CTRL | VARCHAR | Distribute Samples Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/DistrSamplNonCtrl |
DISTR_SAMPL_CTRL | VARCHAR | Distribute Samples Controlled | configuration/entityTypes/HCP/attributes/License/attributes/DistrSamplCtrl |
SAMP_DRUG_SCHED_I_FLAG | VARCHAR | Sample Drug Schedule I flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIFlag |
SAMP_DRUG_SCHED_II_FLAG | VARCHAR | Sample Drug Schedule II flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIIFlag |
SAMP_DRUG_SCHED_III_FLAG | VARCHAR | Sample Drug Schedule III flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIIIFlag |
SAMP_DRUG_SCHED_IV_FLAG | VARCHAR | Sample Drug Schedule IV flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIVFlag |
SAMP_DRUG_SCHED_V_FLAG | VARCHAR | Sample Drug Schedule V flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedVFlag |
SAMP_DRUG_SCHED_VI_FLAG | VARCHAR | Sample Drug Schedule VI flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedVIFlag |
PRESCR_NON_CTRL_FLAG | VARCHAR | Prescribe Non-Controlled flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrNonCtrlFlag |
PRESCR_APP_REQ_NON_CTRL_FLAG | VARCHAR | Prescribe Application Request for Non-Controlled Substances Flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrAppReqNonCtrlFlag |
PRESCR_CTRL_FLAG | VARCHAR | Prescribe Controlled flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrCtrlFlag |
PRESCR_APP_REQ_CTRL_FLAG | VARCHAR | Prescribe Application Request for Controlled Substances Flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrAppReqCtrlFlag |
PRESCR_DRUG_SCHED_I_FLAG | VARCHAR | Prescribe Schedule I Flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIFlag |
PRESCR_DRUG_SCHED_II_FLAG | VARCHAR | Prescribe Schedule II Flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIIFlag |
PRESCR_DRUG_SCHED_III_FLAG | VARCHAR | Prescribe Schedule III Flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIIIFlag |
PRESCR_DRUG_SCHED_IV_FLAG | VARCHAR | Prescribe Schedule IV Flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIVFlag |
PRESCR_DRUG_SCHED_V_FLAG | VARCHAR | Prescribe Schedule V Flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedVFlag |
PRESCR_DRUG_SCHED_VI_FLAG | VARCHAR | Prescribe Schedule VI Flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedVIFlag |
SUPERVISORY_REL_CD_NON_CTRL | VARCHAR | Supervisory Relationship for Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/SupervisoryRelCdNonCtrl |
SUPERVISORY_REL_CD_CTRL | VARCHAR | Supervisory Relationship for Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/SupervisoryRelCdCtrl |
COLLABORATIVE_NONCTRL | VARCHAR | Collaboration for Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/CollaborativeNonctrl |
COLLABORATIVE_CTRL | VARCHAR | Collaboration for Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/CollaborativeCtrl |
INCLUSIONARY | VARCHAR | Inclusionary | configuration/entityTypes/HCP/attributes/License/attributes/Inclusionary |
EXCLUSIONARY | VARCHAR | Exclusionary | configuration/entityTypes/HCP/attributes/License/attributes/Exclusionary |
DELEGATION_NON_CTRL | VARCHAR | Delegation for Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/DelegationNonCtrl |
DELEGATION_CTRL | VARCHAR | Delegation for Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/DelegationCtrl |
DISCIPLINARY_ACTION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/License/attributes/DisciplinaryActionStatus |

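The STATUS codes documented above (A = Active, I = Inactive, X = Unknown) lend themselves to a simple preference order when a consumer needs one license per physician. The sketch below is illustrative only, not HUB logic; the ranking policy and sample rows are assumptions.

```python
# Preference order for the documented STATUS codes: Active first, then
# Unknown, then Inactive; anything else sorts last.
STATUS_ORDER = {"A": 0, "X": 1, "I": 2}

def best_license(rows):
    """Pick the preferred license row: best STATUS, then lowest RANK."""
    if not rows:
        return None
    return min(
        rows,
        key=lambda r: (STATUS_ORDER.get(r.get("STATUS"), 3),
                       int(r.get("RANK") or 99)),
    )

# Invented sample rows using the NUMBER format examples from the table.
rows = [
    {"NUMBER": "18986", "STATE": "NY", "STATUS": "I", "RANK": "1"},
    {"NUMBER": "BX1464089", "STATE": "NJ", "STATUS": "A", "RANK": "2"},
]
print(best_license(rows)["NUMBER"])  # the active NJ license wins
```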
ADDRESS

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
PRIMARY_AFFILIATION | VARCHAR | | configuration/relationTypes/HasAddress/attributes/PrimaryAffiliation | LKUP_IMS_YES_NO
SOURCE_ADDRESS_ID | VARCHAR | | configuration/relationTypes/HasAddress/attributes/SourceAddressID |
ADDRESS_TYPE | VARCHAR | | configuration/relationTypes/HasAddress/attributes/AddressType | LKUP_IMS_ADDR_TYPE
CARE_OF | VARCHAR | | configuration/relationTypes/HasAddress/attributes/CareOf |
PRIMARY | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/Primary |
ADDRESS_RANK | VARCHAR | | configuration/relationTypes/HasAddress/attributes/AddressRank |
SOURCE_NAME | VARCHAR | | configuration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceName |
SOURCE_LOCATION_ID | VARCHAR | | configuration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceLocationId |
ADDRESS_LINE1 | VARCHAR | | configuration/entityTypes/Location/attributes/AddressLine1 |
ADDRESS_LINE2 | VARCHAR | | configuration/entityTypes/Location/attributes/AddressLine2 |
ADDRESS_LINE3 | VARCHAR | Address Line 3 | configuration/entityTypes/Location/attributes/AddressLine3 |
ADDRESS_LINE4 | VARCHAR | Address Line 4 | configuration/entityTypes/Location/attributes/AddressLine4 |
PREMISE | VARCHAR | | configuration/entityTypes/Location/attributes/Premise |
STREET | VARCHAR | | configuration/entityTypes/Location/attributes/Street |
FLOOR | VARCHAR | N/A | configuration/entityTypes/Location/attributes/Floor |
BUILDING | VARCHAR | N/A | configuration/entityTypes/Location/attributes/Building |
CITY | VARCHAR | | configuration/entityTypes/Location/attributes/City |
STATE_PROVINCE | VARCHAR | | configuration/entityTypes/Location/attributes/StateProvince |
STATE_PROVINCE_CODE | VARCHAR | | configuration/entityTypes/Location/attributes/StateProvinceCode | LKUP_IMS_STATE_CODE
POSTAL_CODE | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/PostalCode |
ZIP5 | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/Zip5 |
ZIP4 | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/Zip4 |
COUNTRY | VARCHAR | | configuration/entityTypes/Location/attributes/Country | LKUP_IMS_COUNTRY_CODE
CBSA_CODE | VARCHAR | Core Based Statistical Area | configuration/entityTypes/Location/attributes/CBSACode | CBSA_CD
FIPS_COUNTY_CODE | VARCHAR | FIPS County Code | configuration/entityTypes/Location/attributes/FIPSCountyCode |
FIPS_STATE_CODE | VARCHAR | FIPS State Code | configuration/entityTypes/Location/attributes/FIPSStateCode |
DPV | VARCHAR | USPS delivery point validation. R = Range Check; C = Clerk; F = Formally Valid; V = DPV Valid | configuration/entityTypes/Location/attributes/DPV |
MSA | VARCHAR | Metropolitan Statistical Area for a business | configuration/entityTypes/Location/attributes/MSA |
LATITUDE | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/Latitude |
LONGITUDE | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/Longitude |
GEO_ACCURACY | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracy |
GEO_CODING_SYSTEM | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystem |
ADDRESS_INPUT | VARCHAR | | configuration/entityTypes/Location/attributes/AddressInput |
SUB_ADMINISTRATIVE_AREA | VARCHAR | Holds the smallest geographic data element within a country; for instance, USA County | configuration/entityTypes/Location/attributes/SubAdministrativeArea |
POSTAL_CITY | VARCHAR | | configuration/entityTypes/Location/attributes/PostalCity |
LOCALITY | VARCHAR | Holds the most common population center data element within a country; for instance, USA City or Canadian Municipality | configuration/entityTypes/Location/attributes/Locality |
VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/Location/attributes/VerificationStatus |
STATUS_CHANGE_DATE | DATE | Status Change Date | configuration/entityTypes/Location/attributes/StatusChangeDate |
ADDRESS_STATUS | VARCHAR | Status of the Address | configuration/entityTypes/Location/attributes/AddressStatus |
ACTIVE_ADDRESS | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/Active |
LOC_CONF_IND | VARCHAR | | configuration/relationTypes/HasAddress/attributes/LocConfInd | LKUP_IMS_LOCATION_CONFIDENCE
BEST_RECORD | VARCHAR | | configuration/relationTypes/HasAddress/attributes/BestRecord |
RELATION_STATUS_CHANGE_DATE | DATE | | configuration/relationTypes/HasAddress/attributes/RelationStatusChangeDate |
VALIDATION_STATUS | VARCHAR | Validation status of the Address. When Addresses are merged, the loser Address is set to INVL | configuration/relationTypes/HasAddress/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS
STATUS | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Status | LKUP_IMS_ADDR_STATUS
HCO_NAME | VARCHAR | | configuration/relationTypes/HasAddress/attributes/HcoName |
MAIN_HCO_NAME | VARCHAR | | configuration/relationTypes/HasAddress/attributes/MainHcoName |
BUILD_LABEL | VARCHAR | | configuration/relationTypes/HasAddress/attributes/BuildLabel |
PO_BOX | VARCHAR | | configuration/relationTypes/HasAddress/attributes/POBox |
VALIDATION_REASON | VARCHAR | | configuration/relationTypes/HasAddress/attributes/ValidationReason | LKUP_IMS_VAL_STATUS_CHANGE_REASON
VALIDATION_CHANGE_DATE | DATE | | configuration/relationTypes/HasAddress/attributes/ValidationChangeDate |
STATUS_REASON_CODE | VARCHAR | | configuration/relationTypes/HasAddress/attributes/StatusReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
PRIMARY_MAIL | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/PrimaryMail |
VISIT_ACTIVITY | VARCHAR | | configuration/relationTypes/HasAddress/attributes/VisitActivity |
DERIVED_ADDRESS | VARCHAR | | configuration/relationTypes/HasAddress/attributes/derivedAddress |
NEIGHBORHOOD | VARCHAR | | configuration/entityTypes/Location/attributes/Neighborhood |
AVC | VARCHAR | | configuration/entityTypes/Location/attributes/AVC |
COUNTRY_CODE | VARCHAR | | configuration/entityTypes/Location/attributes/Country | LKUP_IMS_COUNTRY_CODE
GEO_LOCATION.LATITUDE | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/Latitude |
GEO_LOCATION.LONGITUDE | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/Longitude |
GEO_LOCATION.GEO_ACCURACY | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracy |
GEO_LOCATION.GEO_CODING_SYSTEM | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystem |

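The ADDRESS table carries both a PRIMARY flag and an ADDRESS_RANK on the HasAddress relation. A sketch of how a consumer might pick one address to display, assuming (this is an illustration, not HUB behavior) that the PRIMARY flag wins and ADDRESS_RANK breaks ties, with inactive rows skipped:

```python
def pick_display_address(rows):
    """Return the primary address if present, else the best-ranked one."""
    # Prefer rows still flagged active on the HasAddress relation.
    active = [r for r in rows if r.get("ACTIVE_ADDRESS")]
    candidates = active or rows
    # PRIMARY=True sorts first (not True == False), then lowest rank.
    return min(
        candidates,
        key=lambda r: (not r.get("PRIMARY"), int(r.get("ADDRESS_RANK") or 99)),
    )

# Invented sample rows, column names as documented above.
rows = [
    {"ADDRESS_LINE1": "12 Main St", "PRIMARY": False, "ADDRESS_RANK": "1",
     "ACTIVE_ADDRESS": True},
    {"ADDRESS_LINE1": "9 Elm Ave", "PRIMARY": True, "ADDRESS_RANK": "2",
     "ACTIVE_ADDRESS": True},
]
print(pick_display_address(rows)["ADDRESS_LINE1"])  # 9 Elm Ave
```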
ADDRESS_PHONE

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | generated key description | |
PHONE_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TYPE_IMS | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/TypeIMS | LKUP_IMS_COMMUNICATION_TYPE
NUMBER | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/Number |
EXTENSION | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/Extension |
RANK | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/Rank |
ACTIVE_ADDRESS_PHONE | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/Active |
BEST_PHONE_INDICATOR | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/BestPhoneIndicator |

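A phone row carries both a RANK and a BEST_PHONE_INDICATOR. A hypothetical selection helper, shown only to illustrate how the columns combine; the "Y" indicator value and the tie-breaking policy are assumptions, not documented HUB semantics:

```python
def pick_phone(rows):
    """Choose a contact phone: best-phone indicator first, then rank."""
    rows = [r for r in rows if r.get("ACTIVE_ADDRESS_PHONE")]
    if not rows:
        return None
    # Assumed convention: BEST_PHONE_INDICATOR == "Y" marks the best phone.
    return min(rows, key=lambda r: (r.get("BEST_PHONE_INDICATOR") != "Y",
                                    int(r.get("RANK") or 99)))

# Invented sample rows.
rows = [
    {"NUMBER": "555-0100", "RANK": "1", "BEST_PHONE_INDICATOR": "N",
     "ACTIVE_ADDRESS_PHONE": True},
    {"NUMBER": "555-0199", "RANK": "2", "BEST_PHONE_INDICATOR": "Y",
     "ACTIVE_ADDRESS_PHONE": True},
]
print(pick_phone(rows)["NUMBER"])  # 555-0199
```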
ADDRESS_DEA

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | generated key description | |
DEA_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NUMBER | VARCHAR | | configuration/relationTypes/HasAddress/attributes/DEA/attributes/Number |
EXPIRATION_DATE | DATE | | configuration/relationTypes/HasAddress/attributes/DEA/attributes/ExpirationDate |
STATUS | VARCHAR | | configuration/relationTypes/HasAddress/attributes/DEA/attributes/Status | LKUP_IMS_IDENTIFIER_STATUS
DRUG_SCHEDULE | VARCHAR | | configuration/relationTypes/HasAddress/attributes/DEA/attributes/DrugSchedule |
BUSINESS_ACTIVITY_CODE | VARCHAR | Business Activity Code | configuration/relationTypes/HasAddress/attributes/DEA/attributes/BusinessActivityCode |
SUB_BUSINESS_ACTIVITY_CODE | VARCHAR | Sub Business Activity Code | configuration/relationTypes/HasAddress/attributes/DEA/attributes/SubBusinessActivityCode |
DEA_CHANGE_REASON_CODE | VARCHAR | DEA Change Reason Code | configuration/relationTypes/HasAddress/attributes/DEA/attributes/DEAChangeReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/relationTypes/HasAddress/attributes/DEA/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS

ADDRESS_OFFICE_INFORMATION

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | generated key description | |
OFFICE_INFORMATION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
BEST_TIMES | VARCHAR | | configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/BestTimes |
APPT_REQUIRED | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/ApptRequired |
OFFICE_NOTES | VARCHAR | | configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/OfficeNotes |

SPECIALITIES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SPECIALTIES_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| SPECIALTY_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/SpecialtyType, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyType | LKUP_IMS_SPECIALTY_TYPE |
| SPECIALTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty | LKUP_IMS_SPECIALTY |
| RANK | VARCHAR | Specialty Rank | configuration/entityTypes/HCP/attributes/Specialities/attributes/Rank, configuration/entityTypes/HCO/attributes/Specialities/attributes/Rank | |
| DESC | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Specialities/attributes/Desc | |
| GROUP | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/Group, configuration/entityTypes/HCO/attributes/Specialities/attributes/Group | |
| SOURCE_CD | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Specialities/attributes/SourceCD | |
| SPECIALTY_DETAIL | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/SpecialtyDetail, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyDetail | |
| PROFESSION_CODE | VARCHAR | Profession | configuration/entityTypes/HCP/attributes/Specialities/attributes/ProfessionCode | LKUP_IMS_PROFESSION |
| PRIMARY_SPECIALTY_FLAG | BOOLEAN | | configuration/entityTypes/HCP/attributes/Specialities/attributes/PrimarySpecialtyFlag, configuration/entityTypes/HCO/attributes/Specialities/attributes/PrimarySpecialtyFlag | |
| SORT_ORDER | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/SortOrder, configuration/entityTypes/HCO/attributes/Specialities/attributes/SortOrder | |
| BEST_RECORD | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/BestRecord, configuration/entityTypes/HCO/attributes/Specialities/attributes/BestRecord | |
| SUB_SPECIALTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/SubSpecialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/SubSpecialty | LKUP_IMS_SPECIALTY |
| SUB_SPECIALTY_RANK | VARCHAR | SubSpecialty Rank | configuration/entityTypes/HCP/attributes/Specialities/attributes/SubSpecialtyRank, configuration/entityTypes/HCO/attributes/Specialities/attributes/SubSpecialtyRank | |
| TRUSTED_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/TrustedIndicator, configuration/entityTypes/HCO/attributes/Specialities/attributes/TrustedIndicator | LKUP_IMS_YES_NO |
| RAW_SPECIALTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/RawSpecialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/RawSpecialty | |
| RAW_SPECIALTY_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/RawSpecialtyDescription, configuration/entityTypes/HCO/attributes/Specialities/attributes/RawSpecialtyDescription | |


IDENTIFIERS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| IDENTIFIERS_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Type | LKUP_IMS_HCP_IDENTIFIER_TYPE, LKUP_IMS_HCO_IDENTIFIER_TYPE |
| ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ID | |
| ORDER | VARCHAR | Displays the order of priority for an MPN for those facilities that share an MPN. Valid values are: P - the MPN on a business record is the primary identifier for the business, and O - the MPN is a secondary identifier. (Using P for the MPN supports aggregating clinical volumes and avoids double counting.) | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Order, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Order | |
| CATEGORY | VARCHAR | Additional information about the identifier. For a DDD identifier, the DDD subcategory code (e.g. H4, D1, A2). For a DEA identifier, contains the DEA activity code (e.g. M for Mid Level Practitioner). | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Category, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Category | LKUP_IMS_IDENTIFIERS_CATEGORY |
| STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Status, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Status | LKUP_IMS_IDENTIFIER_STATUS |
| AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/entityTypes/HCP/attributes/Identifiers/attributes/AuthorizationStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS |
| DEACTIVATION_REASON_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationReasonCode, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE |
| DEACTIVATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationDate | |
| REACTIVATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ReactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ReactivationDate | |
| NATIONAL_ID_ATTRIBUTE | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/NationalIdAttribute, configuration/entityTypes/HCO/attributes/Identifiers/attributes/NationalIdAttribute | |
| AMAMDDO_FLAG | VARCHAR | AMA MD-DO Flag | configuration/entityTypes/HCP/attributes/Identifiers/attributes/AMAMDDOFlag | |
| MAJOR_PROF_ACT | VARCHAR | Major Professional Activity Code | configuration/entityTypes/HCP/attributes/Identifiers/attributes/MajorProfAct | |
| HOSPITAL_HOURS | VARCHAR | HospitalHours | configuration/entityTypes/HCP/attributes/Identifiers/attributes/HospitalHours | |
| AMA_HOSPITAL_ID | VARCHAR | AMAHospitalID | configuration/entityTypes/HCP/attributes/Identifiers/attributes/AMAHospitalID | |
| PRACTICE_TYPE_CODE | VARCHAR | PracticeTypeCode | configuration/entityTypes/HCP/attributes/Identifiers/attributes/PracticeTypeCode | |
| EMPLOYMENT_TYPE_CODE | VARCHAR | EmploymentTypeCode | configuration/entityTypes/HCP/attributes/Identifiers/attributes/EmploymentTypeCode | |
| BIRTH_CITY | VARCHAR | BirthCity | configuration/entityTypes/HCP/attributes/Identifiers/attributes/BirthCity | |
| BIRTH_STATE | VARCHAR | BirthState | configuration/entityTypes/HCP/attributes/Identifiers/attributes/BirthState | |
| BIRTH_COUNTRY | VARCHAR | BirthCountry | configuration/entityTypes/HCP/attributes/Identifiers/attributes/BirthCountry | |
| MEDICAL_SCHOOL | VARCHAR | MedicalSchool | configuration/entityTypes/HCP/attributes/Identifiers/attributes/MedicalSchool | |
| GRADUATION_YEAR | VARCHAR | GraduationYear | configuration/entityTypes/HCP/attributes/Identifiers/attributes/GraduationYear | |
| NUM_OF_PYSICIANS | VARCHAR | NumOfPysicians | configuration/entityTypes/HCP/attributes/Identifiers/attributes/NumOfPysicians | |
| STATE | VARCHAR | LicenseState | configuration/entityTypes/HCP/attributes/Identifiers/attributes/State, configuration/entityTypes/HCO/attributes/Identifiers/attributes/State | LKUP_IMS_STATE_CODE |
| TRUSTED_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/TrustedIndicator, configuration/entityTypes/HCO/attributes/Identifiers/attributes/TrustedIndicator | LKUP_IMS_YES_NO |
| HARD_LINK_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/HardLinkIndicator, configuration/entityTypes/HCO/attributes/Identifiers/attributes/HardLinkIndicator | LKUP_IMS_YES_NO |
| LAST_VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/LastVerificationStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/LastVerificationStatus | |
| LAST_VERIFICATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/LastVerificationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/LastVerificationDate | |
| ACTIVATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ActivationDate | |
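The IDENTIFIERS columns flatten the nested Reltio `Identifiers` attribute into one row per identifier. A minimal sketch of that flattening, assuming the usual Reltio payload shape (each nested attribute is a list of `{"value": ...}` entries); the function name and the sample entity are illustrative, not part of the HUB code:

```python
# Hypothetical sketch: flatten a Reltio-style entity's nested "Identifiers"
# attribute into rows matching the IDENTIFIERS columns above.

def flatten_identifiers(entity):
    rows = []
    for group in entity.get("attributes", {}).get("Identifiers", []):
        sub = group.get("value", {})

        def first(name):
            # Each sub-attribute holds a list of {"value": ...} entries; take the first.
            values = sub.get(name, [])
            return values[0]["value"] if values else None

        rows.append({
            "IDENTIFIERS_URI": group.get("uri"),
            "ENTITY_URI": entity.get("uri"),
            "TYPE": first("Type"),
            "ID": first("ID"),
            "STATUS": first("Status"),
        })
    return rows

# Illustrative entity payload (assumed shape, not real HUB data).
entity = {
    "uri": "entities/123",
    "attributes": {
        "Identifiers": [
            {"uri": "entities/123/attributes/Identifiers/1",
             "value": {"Type": [{"value": "NPI"}],
                       "ID": [{"value": "1234567890"}],
                       "Status": [{"value": "ACTIVE"}]}}
        ]
    },
}
print(flatten_identifiers(entity))
```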


SPEAKER

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SPEAKER_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| IS_SPEAKER | BOOLEAN | | configuration/entityTypes/HCP/attributes/Speaker/attributes/IsSpeaker | |
| IS_COMPANY_APPROVED_SPEAKER | BOOLEAN | Attribute to track if an HCP is a COMPANY approved speaker | configuration/entityTypes/HCP/attributes/Speaker/attributes/IsCOMPANYApprovedSpeaker | |
| LAST_BRIEFING_DATE | DATE | Track the last date that the HCP received the briefing/training to be certified as an approved COMPANY Speaker | configuration/entityTypes/HCP/attributes/Speaker/attributes/LastBriefingDate | |
| SPEAKER_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerStatus | LKUP_SPEAKERSTATUS |
| SPEAKER_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerType | LKUP_SPEAKERTYPE |
| SPEAKER_LEVEL | VARCHAR | | configuration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerLevel | LKUP_SPEAKERLEVEL |

HCP_WORKPLACE_MAIN_HCO

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| WORKPLACE_URI | VARCHAR | generated key description | | |
| MAINHCO_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| NAME | VARCHAR | Name | configuration/entityTypes/HCO/attributes/Name | |
| OTHER_NAMES | VARCHAR | Other Names | configuration/entityTypes/HCO/attributes/OtherNames | |
| TYPE_CODE | VARCHAR | Customer Type | configuration/entityTypes/HCO/attributes/TypeCode | LKUP_IMS_HCO_CUST_TYPE |
| SOURCE_ID | VARCHAR | Source ID | configuration/entityTypes/HCO/attributes/SourceID | |
| VALIDATION_STATUS | VARCHAR | | configuration/relationTypes/RLE.MAI/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS |
| VALIDATION_CHANGE_DATE | DATE | | configuration/relationTypes/RLE.MAI/attributes/ValidationChangeDate | |
| AFFILIATION_STATUS | VARCHAR | | configuration/relationTypes/RLE.MAI/attributes/AffiliationStatus | LKUP_IMS_STATUS |
| COUNTRY | VARCHAR | Country Code | configuration/relationTypes/RLE.MAI/attributes/Country | LKUP_IMS_COUNTRY_CODE |

HCP_WORKPLACE_MAIN_HCO_CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| WORKPLACE_URI | VARCHAR | generated key description | | |
| MAINHCO_URI | VARCHAR | generated key description | | |
| CLASSOFTRADEN_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority | |
| CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION |
| FACILITY_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE |
| SPECIALTY | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY |

HCP_MAIN_WORKPLACE_CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| MAINWORKPLACE_URI | VARCHAR | generated key description | | |
| CLASSOFTRADEN_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority | |
| CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION |
| FACILITY_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE |
| SPECIALTY | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY |

PHONE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| PHONE_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| TYPE_IMS | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/TypeIMS, configuration/entityTypes/HCO/attributes/Phone/attributes/TypeIMS | LKUP_IMS_COMMUNICATION_TYPE |
| NUMBER | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/Number, configuration/entityTypes/HCO/attributes/Phone/attributes/Number | |
| EXTENSION | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/Extension, configuration/entityTypes/HCO/attributes/Phone/attributes/Extension | |
| RANK | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/Rank, configuration/entityTypes/HCO/attributes/Phone/attributes/Rank | |
| COUNTRY_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/CountryCode, configuration/entityTypes/HCO/attributes/Phone/attributes/CountryCode | LKUP_IMS_COUNTRY_CODE |
| AREA_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/AreaCode, configuration/entityTypes/HCO/attributes/Phone/attributes/AreaCode | |
| LOCAL_NUMBER | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/LocalNumber | |
| FORMATTED_NUMBER | VARCHAR | Formatted number of the phone | configuration/entityTypes/HCP/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/FormattedNumber | |
| VALIDATION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationStatus | |
| VALIDATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Phone/attributes/ValidationDate, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationDate | |
| LINE_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/LineType, configuration/entityTypes/HCO/attributes/Phone/attributes/LineType | |
| FORMAT_MASK | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/FormatMask, configuration/entityTypes/HCO/attributes/Phone/attributes/FormatMask | |
| DIGIT_COUNT | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/DigitCount, configuration/entityTypes/HCO/attributes/Phone/attributes/DigitCount | |
| GEO_AREA | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/GeoArea, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoArea | |
| GEO_COUNTRY | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoCountry | |
| DQ_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/DQCode, configuration/entityTypes/HCO/attributes/Phone/attributes/DQCode | |
| ACTIVE_PHONE | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Phone/attributes/Active | |
| BEST_PHONE_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/BestPhoneIndicator, configuration/entityTypes/HCO/attributes/Phone/attributes/BestPhoneIndicator | |
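FORMATTED_NUMBER is derived from the phone parts (COUNTRY_CODE, AREA_CODE, LOCAL_NUMBER). The real layout is governed per country by FORMAT_MASK, which this table does not specify, so the following is only a naive illustrative sketch of the assembly, not the HUB's formatting logic:

```python
# Naive illustration: assemble a formatted number from the phone parts above.
# The "+CC (AREA) LOCAL" layout is an assumption; the actual format is
# mask-driven (see FORMAT_MASK) and country-specific.

def format_number(country_code, area_code, local_number):
    parts = []
    if country_code:
        parts.append(f"+{country_code}")
    if area_code:
        parts.append(f"({area_code})")
    parts.append(local_number)
    return " ".join(parts)

print(format_number("1", "212", "5551234"))  # -> +1 (212) 5551234
print(format_number(None, None, "5551234"))  # -> 5551234
```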


PHONE_SOURCE_DATA

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| PHONE_URI | VARCHAR | generated key description | | |
| SOURCE_DATA_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| DATASET_IDENTIFIER | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetIdentifier, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetIdentifier | |
| DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetPartyIdentifier, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetPartyIdentifier | |
| DATASET_PHONE_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetPhoneType, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetPhoneType | LKUP_IMS_COMMUNICATION_TYPE |
| RAW_DATASET_PHONE_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/RawDatasetPhoneType, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/RawDatasetPhoneType | |
| BEST_PHONE_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/BestPhoneIndicator, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/BestPhoneIndicator | |


EMAIL

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| EMAIL_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| TYPE_IMS | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/TypeIMS, configuration/entityTypes/HCO/attributes/Email/attributes/TypeIMS | LKUP_IMS_EMAIL_TYPE |
| EMAIL | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/Email, configuration/entityTypes/HCO/attributes/Email/attributes/Email | |
| DOMAIN | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/Domain, configuration/entityTypes/HCO/attributes/Email/attributes/Domain | |
| DOMAIN_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/DomainType, configuration/entityTypes/HCO/attributes/Email/attributes/DomainType | |
| USERNAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/Username, configuration/entityTypes/HCO/attributes/Email/attributes/Username | |
| RANK | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/Rank, configuration/entityTypes/HCO/attributes/Email/attributes/Rank | |
| VALIDATION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationStatus | |
| VALIDATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Email/attributes/ValidationDate, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationDate | |
| ACTIVE_EMAIL_HCP | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/Active | |
| DQ_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/DQCode, configuration/entityTypes/HCO/attributes/Email/attributes/DQCode | |
| SOURCE_CD | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Email/attributes/SourceCD | |
| ACTIVE_EMAIL_HCO | BOOLEAN | | configuration/entityTypes/HCO/attributes/Email/attributes/Active | |


DISCLOSURE

Disclosure - Reporting derived attributes

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| DISCLOSURE_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| DGS_CATEGORY | VARCHAR | | configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory, configuration/entityTypes/HCO/attributes/Disclosure/attributes/DGSCategory | LKUP_BENEFITCATEGORY_HCP, LKUP_BENEFITCATEGORY_HCO |
| DGS_TITLE | VARCHAR | | configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle | LKUP_BENEFITTITLE |
| DGS_QUALITY | VARCHAR | | configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality | LKUP_BENEFITQUALITY |
| DGS_SPECIALTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty | LKUP_BENEFITSPECIALTY |
| CONTRACT_CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassification | LKUP_CONTRACTCLASSIFICATION |
| CONTRACT_CLASSIFICATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationDate | |
| MILITARY | BOOLEAN | | configuration/entityTypes/HCP/attributes/Disclosure/attributes/Military | |
| LEGALSTATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Disclosure/attributes/LEGALSTATUS | LKUP_LEGALSTATUS |

THIRD_PARTY_VERIFY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| THIRD_PARTY_VERIFY_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| SEND_FOR_VERIFY | VARCHAR | | configuration/entityTypes/HCP/attributes/ThirdPartyVerify/attributes/SendForVerify, configuration/entityTypes/HCO/attributes/ThirdPartyVerify/attributes/SendForVerify | LKUP_IMS_SEND_FOR_VALIDATION |
| VERIFY_DATE | VARCHAR | | configuration/entityTypes/HCP/attributes/ThirdPartyVerify/attributes/VerifyDate, configuration/entityTypes/HCO/attributes/ThirdPartyVerify/attributes/VerifyDate | |


PRIVACY_PREFERENCES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| PRIVACY_PREFERENCES_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| OPT_OUT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOut | |
| OPT_OUT_START_DATE | DATE | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutStartDate | |
| ALLOWED_TO_CONTACT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AllowedToContact | |
| PHONE_OPT_OUT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PhoneOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/PhoneOptOut | |
| EMAIL_OPT_OUT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/EmailOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/EmailOptOut | |
| FAX_OPT_OUT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FaxOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/FaxOptOut | |
| VISIT_OPT_OUT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/VisitOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/VisitOptOut | |
| AMA_NO_CONTACT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AMANoContact | |
| PDRP | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRP | |
| PDRP_DATE | DATE | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPDate | |
| TEXT_MESSAGE_OPT_OUT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/TextMessageOptOut | |
| MAIL_OPT_OUT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/MailOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/MailOptOut | |
| OPT_OUT_CHANGE_DATE | DATE | The date the opt out indicator was changed | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutChangeDate | |
| REMOTE_OPT_OUT | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/RemoteOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/RemoteOptOut | |
| OPT_OUT_ONE_KEY | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutOneKey, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/OptOutOneKey | |
| OPT_OUT_SAFE_HARBOR | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutSafeHarbor | |
| KEY_OPINION_LEADER | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/KeyOpinionLeader | |
| RESIDENT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/ResidentIndicator | |
| ALLOW_SAFE_HARBOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/AllowSafeHarbor | |
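Consumers typically combine these flags into a per-channel "may contact" decision. A minimal sketch under assumed precedence (a global OPT_OUT, or ALLOWED_TO_CONTACT set to false, overrides the channel-level flags); this precedence is illustrative, not a documented HUB business rule:

```python
# Hypothetical helper: derive a per-channel contact decision from the
# PRIVACY_PREFERENCES flags above. Precedence is an assumption for illustration.

CHANNEL_FLAGS = {
    "phone": "PHONE_OPT_OUT",
    "email": "EMAIL_OPT_OUT",
    "fax": "FAX_OPT_OUT",
    "visit": "VISIT_OPT_OUT",
    "mail": "MAIL_OPT_OUT",
}

def may_contact(prefs, channel):
    # Global opt-out or an explicit "not allowed to contact" wins outright.
    if prefs.get("OPT_OUT") or prefs.get("ALLOWED_TO_CONTACT") is False:
        return False
    # Otherwise fall back to the channel-specific opt-out flag.
    return not prefs.get(CHANNEL_FLAGS[channel], False)

prefs = {"OPT_OUT": False, "ALLOWED_TO_CONTACT": True, "EMAIL_OPT_OUT": True}
print(may_contact(prefs, "email"))  # False
print(may_contact(prefs, "phone"))  # True
```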


SANCTION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SANCTION_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| SANCTION_ID | VARCHAR | Court sanction ID for a case | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionId | |
| ACTION_CODE | VARCHAR | Court sanction code for a case | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionCode | |
| ACTION_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionDescription | |
| BOARD_CODE | VARCHAR | Court case board ID | configuration/entityTypes/HCP/attributes/Sanction/attributes/BoardCode | |
| BOARD_DESC | VARCHAR | Court case board description | configuration/entityTypes/HCP/attributes/Sanction/attributes/BoardDesc | |
| ACTION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionDate | |
| SANCTION_PERIOD_START_DATE | DATE | | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodStartDate | |
| SANCTION_PERIOD_END_DATE | DATE | | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodEndDate | |
| MONTH_DURATION | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanction/attributes/MonthDuration | |
| FINE_AMOUNT | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanction/attributes/FineAmount | |
| OFFENSE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseCode | |
| OFFENSE_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDescription | |
| OFFENSE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDate | |


HCP_SANCTIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SANCTIONS_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| IDENTIFIER_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/IdentifierType | LKUP_IMS_HCP_IDENTIFIER_TYPE |
| IDENTIFIER_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/IdentifierID | |
| TYPE_CODE | VARCHAR | Type of sanction/restriction for a given provider | configuration/entityTypes/HCP/attributes/Sanctions/attributes/TypeCode | LKUP_IMS_SNCTN_RSTR_ACTN |
| DEACTIVATION_REASON_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/DeactivationReasonCode | LKUP_IMS_SNCTN_RSTR_DACT_RSN |
| DISPOSITION_CATEGORY_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/DispositionCategoryCode | LKUP_IMS_SNCTN_RSTR_DSP_CATG |
| EXCLUSION_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/ExclusionCode | LKUP_IMS_SNCTN_RSTR_EXCL |
| DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/Description | |
| URL | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/URL | |
| ISSUED_DATE | DATE | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/IssuedDate | |
| EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/EffectiveDate | |
| REINSTATEMENT_DATE | DATE | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/ReinstatementDate | |
| IS_STATE_WAIVER | BOOLEAN | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/IsStateWaiver | |
| STATUS_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/StatusCode | LKUP_IMS_IDENTIFIER_STATUS |
| SOURCE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/SourceCode | LKUP_IMS_SNCTN_RSTR_SRC |
| PUBLICATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/PublicationDate | |
| GOVERNMENT_LEVEL_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Sanctions/attributes/GovernmentLevelCode | LKUP_IMS_GOVT_LVL |

HCP_GSA_SANCTION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| GSA_SANCTION_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| SANCTION_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/SanctionId | |
| FIRST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/FirstName | |
| MIDDLE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/MiddleName | |
| LAST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/LastName | |
| SUFFIX_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/SuffixName | |
| CITY | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/City | |
| STATE | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/State | |
| ZIP | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Zip | |
| ACTION_DATE | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/ActionDate | |
| TERM_DATE | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/TermDate | |
| AGENCY | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Agency | |
| CONFIDENCE | VARCHAR | | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Confidence | |


DEGREES

DO NOT USE THIS ATTRIBUTE - will be deprecated

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| DEGREES_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| DEGREE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Degrees/attributes/Degree | DEGREE |
| BEST_DEGREE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Degrees/attributes/BestDegree | |


CERTIFICATES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| CERTIFICATES_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| CERTIFICATE_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/CertificateId | |
| NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/Name | |
| BOARD_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/BoardId | |
| BOARD_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/BoardName | |
| INTERNAL_HCP_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalHCPStatus | |
| INTERNAL_HCP_INACTIVE_REASON_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalHCPInactiveReasonCode | |
| INTERNAL_SAMPLING_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalSamplingStatus | |
| PVS_ELIGIBILTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/PVSEligibilty | |


EMPLOYMENT

Column | Type | Description | Reltio Attribute URI | LOV Name
EMPLOYMENT_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TITLE | VARCHAR | | configuration/relationTypes/Employment/attributes/Title |
SUMMARY | VARCHAR | | configuration/relationTypes/Employment/attributes/Summary |
IS_CURRENT | BOOLEAN | | configuration/relationTypes/Employment/attributes/IsCurrent |
NAME | VARCHAR | Name | configuration/entityTypes/Organization/attributes/Name |


CREDENTIAL

DO NOT USE THIS ATTRIBUTE - will be deprecated

Column | Type | Description | Reltio Attribute URI | LOV Name
CREDENTIAL_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
RANK | VARCHAR | | configuration/entityTypes/HCP/attributes/Credential/attributes/Rank |
CREDENTIAL | VARCHAR | | configuration/entityTypes/HCP/attributes/Credential/attributes/Credential | CRED

PROFESSION

Column | Type | Description | Reltio Attribute URI | LOV Name
PROFESSION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
PROFESSION_CODE | VARCHAR | Profession | configuration/entityTypes/HCP/attributes/Profession/attributes/ProfessionCode | LKUP_IMS_PROFESSION
RANK | VARCHAR | Profession Rank | configuration/entityTypes/HCP/attributes/Profession/attributes/Rank |


EDUCATION

Column | Type | Description | Reltio Attribute URI | LOV Name
EDUCATION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SCHOOL_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/SchoolName | LKUP_IMS_SCHOOL_CODE
TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/Type |
DEGREE | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/Degree |
YEAR_OF_GRADUATION | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduation |
GRADUATED | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/Graduated |
GPA | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/GPA |
YEARS_IN_PROGRAM | VARCHAR | Year in Grad Training Program, Year in training in current program | configuration/entityTypes/HCP/attributes/Education/attributes/YearsInProgram |
START_YEAR | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/StartYear |
END_YEAR | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/EndYear |
FIELDOF_STUDY | VARCHAR | Specialty Focus or Specialty Training | configuration/entityTypes/HCP/attributes/Education/attributes/FieldofStudy |
ELIGIBILITY | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/Eligibility |
EDUCATION_TYPE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/EducationType |
RANK | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/Rank |
MEDICAL_SCHOOL | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/MedicalSchool |


TAXONOMY

Column | Type | Description | Reltio Attribute URI | LOV Name
TAXONOMY_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TAXONOMY | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Taxonomy | TAXONOMY_CD, LKUP_IMS_JURIDIC_CATEGORY
TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Type, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Type | TAXONOMY_TYPE
PROVIDER_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ProviderType |
CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Classification, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Classification |
SPECIALIZATION | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Specialization, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Specialization |
PRIORITY | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Priority, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Priority | TAXONOMY_PRIORITY
STR_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/Taxonomy/attributes/StrType | LKUP_IMS_STRUCTURE_TYPE

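Every table in this dictionary carries the same four linkage columns (ENTITY_URI, COUNTRY, ACTIVE, ENTITY_TYPE); ENTITY_URI holds the Reltio entity URI and acts as the foreign key back to the parent entity record. A minimal sketch of that relationship, using an in-memory SQLite database and dummy values purely for illustration (the document does not specify the actual storage engine or any sample data):

```python
import sqlite3

# Illustrative only: two of the tables described above, reduced to a few
# columns. ENTITY_URI is the shared join key in every child table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE HCO (ENTITY_URI TEXT, NAME TEXT)")
con.execute("CREATE TABLE TAXONOMY (ENTITY_URI TEXT, TAXONOMY TEXT, ACTIVE TEXT)")

# Dummy records, not real MDM data.
con.execute("INSERT INTO HCO VALUES ('entities/0001', 'General Hospital')")
con.execute("INSERT INTO TAXONOMY VALUES ('entities/0001', '282N00000X', 'Y')")

# Join a child table back to its parent entity via ENTITY_URI.
row = con.execute("""
    SELECT h.NAME, t.TAXONOMY
    FROM HCO h JOIN TAXONOMY t ON t.ENTITY_URI = h.ENTITY_URI
    WHERE t.ACTIVE = 'Y'
""").fetchone()
print(row)  # ('General Hospital', '282N00000X')
```

The same join shape applies to any of the child tables (DP_PRESENCE, CLASSIFICATION, TAG, ...), since they all expose the identical ENTITY_URI / COUNTRY / ACTIVE / ENTITY_TYPE quartet.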
DP_PRESENCE

Column | Type | Description | Reltio Attribute URI | LOV Name
DP_PRESENCE_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CHANNEL_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelCode, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelCode | LKUP_IMS_DP_CHANNEL
CHANNEL_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelName, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelName |
CHANNEL_URL | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelURL, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelURL |
CHANNEL_REGISTRATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelRegistrationDate, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelRegistrationDate |
PRESENCE_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/PresenceType, configuration/entityTypes/HCO/attributes/DPPresence/attributes/PresenceType | LKUP_IMS_DP_PRESENCE_TYPE
ACTIVITY | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/Activity, configuration/entityTypes/HCO/attributes/DPPresence/attributes/Activity | LKUP_IMS_DP_SCORE_CODE
AUDIENCE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/Audience, configuration/entityTypes/HCO/attributes/DPPresence/attributes/Audience | LKUP_IMS_DP_SCORE_CODE

DP_SUMMARY

Column | Type | Description | Reltio Attribute URI | LOV Name
DP_SUMMARY_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SUMMARY_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPSummary/attributes/SummaryType, configuration/entityTypes/HCO/attributes/DPSummary/attributes/SummaryType | LKUP_IMS_DP_SUMMARY_TYPE
SCORE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPSummary/attributes/ScoreCode, configuration/entityTypes/HCO/attributes/DPSummary/attributes/ScoreCode | LKUP_IMS_DP_SCORE_CODE

ADDITIONAL_ATTRIBUTES

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDITIONAL_ATTRIBUTES_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
ATTRIBUTE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeName, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeName |
ATTRIBUTE_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeType, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeType | LKUP_IMS_TYPE_CODE
ATTRIBUTE_VALUE | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeValue, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeValue |
ATTRIBUTE_RANK | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeRank, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeRank |
ADDITIONAL_INFO | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AdditionalInfo, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AdditionalInfo |


DATA_QUALITY

Data Quality

Column | Type | Description | Reltio Attribute URI | LOV Name
DATA_QUALITY_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SEVERITY_LEVEL | VARCHAR | | configuration/entityTypes/HCP/attributes/DataQuality/attributes/SeverityLevel, configuration/entityTypes/HCO/attributes/DataQuality/attributes/SeverityLevel | LKUP_IMS_DQ_SEVERITY
SOURCE | VARCHAR | | configuration/entityTypes/HCP/attributes/DataQuality/attributes/Source, configuration/entityTypes/HCO/attributes/DataQuality/attributes/Source |
SCORE | VARCHAR | | configuration/entityTypes/HCP/attributes/DataQuality/attributes/Score, configuration/entityTypes/HCO/attributes/DataQuality/attributes/Score |


CLASSIFICATION

Column | Type | Description | Reltio Attribute URI | LOV Name
CLASSIFICATION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CLASSIFICATION_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationType, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE
CLASSIFICATION_VALUE | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationValue, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationValue |
CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationValueNumericQuantity, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationValueNumericQuantity |
STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/Status, configuration/entityTypes/HCO/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Classification/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Classification/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCP/attributes/Classification/attributes/EndDate, configuration/entityTypes/HCO/attributes/Classification/attributes/EndDate |
NOTES | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/Notes, configuration/entityTypes/HCO/attributes/Classification/attributes/Notes |


TAG

Column | Type | Description | Reltio Attribute URI | LOV Name
TAG_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TAG_TYPE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Tag/attributes/TagTypeCode, configuration/entityTypes/HCO/attributes/Tag/attributes/TagTypeCode | LKUP_IMS_TAG_TYPE_CODE
TAG_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Tag/attributes/TagCode, configuration/entityTypes/HCO/attributes/Tag/attributes/TagCode |
STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Tag/attributes/Status, configuration/entityTypes/HCO/attributes/Tag/attributes/Status | LKUP_IMS_TAG_STATUS
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Tag/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Tag/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCP/attributes/Tag/attributes/EndDate, configuration/entityTypes/HCO/attributes/Tag/attributes/EndDate |
NOTES | VARCHAR | | configuration/entityTypes/HCP/attributes/Tag/attributes/Notes, configuration/entityTypes/HCO/attributes/Tag/attributes/Notes |


EXCLUSIONS

Column | Type | Description | Reltio Attribute URI | LOV Name
EXCLUSIONS_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
PRODUCT_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ProductId, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ProductId | LKUP_IMS_PRODUCT_ID
EXCLUSION_STATUS_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ExclusionStatusCode, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ExclusionStatusCode | LKUP_IMS_EXCL_STATUS_CODE
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Exclusions/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/EndDate, configuration/entityTypes/HCO/attributes/Exclusions/attributes/EndDate |
NOTES | VARCHAR | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/Notes, configuration/entityTypes/HCO/attributes/Exclusions/attributes/Notes |
EXCLUSION_RULE_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ExclusionRuleId, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ExclusionRuleId |


ACTION

Column | Type | Description | Reltio Attribute URI | LOV Name
ACTION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
ACTION_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionCode, configuration/entityTypes/HCO/attributes/Action/attributes/ActionCode | LKUP_IMS_ACTION_CODE
ACTION_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionName, configuration/entityTypes/HCO/attributes/Action/attributes/ActionName |
ACTION_REQUESTED_DATE | DATE | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionRequestedDate, configuration/entityTypes/HCO/attributes/Action/attributes/ActionRequestedDate |
ACTION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionStatus, configuration/entityTypes/HCO/attributes/Action/attributes/ActionStatus | LKUP_IMS_ACTION_STATUS
ACTION_STATUS_DATE | DATE | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionStatusDate, configuration/entityTypes/HCO/attributes/Action/attributes/ActionStatusDate |


ALTERNATE_NAME

Column | Type | Description | Reltio Attribute URI | LOV Name
ALTERNATE_NAME_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NAME_TYPE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/NameTypeCode, configuration/entityTypes/HCO/attributes/AlternateName/attributes/NameTypeCode | LKUP_IMS_NAME_TYPE_CODE
NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/Name, configuration/entityTypes/HCO/attributes/AlternateName/attributes/Name |
FIRST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/FirstName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/FirstName |
MIDDLE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/MiddleName |
LAST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/LastName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/LastName |
SUFFIX_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/SuffixName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/SuffixName |


LANGUAGE

Column | Type | Description | Reltio Attribute URI | LOV Name
LANGUAGE_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Language/attributes/LanguageCode, configuration/entityTypes/HCO/attributes/Language/attributes/LanguageCode |
PROFICIENCY_LEVEL | VARCHAR | | configuration/entityTypes/HCP/attributes/Language/attributes/ProficiencyLevel, configuration/entityTypes/HCO/attributes/Language/attributes/ProficiencyLevel |


SOURCE_DATA

Column | Type | Description | Reltio Attribute URI | LOV Name
SOURCE_DATA_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CLASS_OF_TRADE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/ClassOfTradeCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/ClassOfTradeCode |
RAW_CLASS_OF_TRADE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/RawClassOfTradeCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/RawClassOfTradeCode |
RAW_CLASS_OF_TRADE_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/RawClassOfTradeDescription, configuration/entityTypes/HCO/attributes/SourceData/attributes/RawClassOfTradeDescription |
DATASET_IDENTIFIER | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/DatasetIdentifier, configuration/entityTypes/HCO/attributes/SourceData/attributes/DatasetIdentifier |
DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/DatasetPartyIdentifier, configuration/entityTypes/HCO/attributes/SourceData/attributes/DatasetPartyIdentifier |
PARTY_STATUS_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/PartyStatusCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/PartyStatusCode |


NOTES

Column | Type | Description | Reltio Attribute URI | LOV Name
NOTES_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NOTE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Notes/attributes/NoteCode, configuration/entityTypes/HCO/attributes/Notes/attributes/NoteCode | LKUP_IMS_NOTE_CODE
NOTE_TEXT | VARCHAR | | configuration/entityTypes/HCP/attributes/Notes/attributes/NoteText, configuration/entityTypes/HCO/attributes/Notes/attributes/NoteText |


HCO

Health care provider

Column

Type

Description

Reltio Attribute URI

LOV Name

ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



NAME

VARCHAR

Name

configuration/entityTypes/HCO/attributes/Name


TYPE_CODE

VARCHAR

Customer Type

configuration/entityTypes/HCO/attributes/TypeCode

LKUP_IMS_HCO_CUST_TYPE

SUB_TYPE_CODE

VARCHAR

Customer Sub Type

configuration/entityTypes/HCO/attributes/SubTypeCode

LKUP_IMS_HCO_SUBTYPE

EXCLUDE_FROM_MATCH

VARCHAR


configuration/entityTypes/HCO/attributes/ExcludeFromMatch


OTHER_NAMES

VARCHAR

Other Names

configuration/entityTypes/HCO/attributes/OtherNames


SOURCE_ID

VARCHAR

Source ID

configuration/entityTypes/HCO/attributes/SourceID


VALIDATION_STATUS

VARCHAR


configuration/entityTypes/HCO/attributes/ValidationStatus

LKUP_IMS_VAL_STATUS

ORIGIN_SOURCE

VARCHAR

Originating Source

configuration/entityTypes/HCO/attributes/OriginSource


COUNTRY_CODE

VARCHAR

Country Code

configuration/entityTypes/HCO/attributes/Country

LKUP_IMS_COUNTRY_CODE

FISCAL

VARCHAR


configuration/entityTypes/HCO/attributes/Fiscal


SITE

VARCHAR


configuration/entityTypes/HCO/attributes/Site


GROUP_PRACTICE

BOOLEAN


configuration/entityTypes/HCO/attributes/GroupPractice


GEN_FIRST

VARCHAR

String

configuration/entityTypes/HCO/attributes/GenFirst

LKUP_IMS_HCO_GENFIRST

SREP_ACCESS

VARCHAR

String

configuration/entityTypes/HCO/attributes/SrepAccess

LKUP_IMS_HCO_SREPACCESS

ACCEPT_MEDICARE

BOOLEAN


configuration/entityTypes/HCO/attributes/AcceptMedicare


ACCEPT_MEDICAID

BOOLEAN


configuration/entityTypes/HCO/attributes/AcceptMedicaid


PERCENT_MEDICARE

VARCHAR


configuration/entityTypes/HCO/attributes/PercentMedicare


PERCENT_MEDICAID

VARCHAR


configuration/entityTypes/HCO/attributes/PercentMedicaid


PARENT_COMPANY

VARCHAR

Replacement Parent Satellite

configuration/entityTypes/HCO/attributes/ParentCompany


HEALTH_SYSTEM_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/HealthSystemName


VADOD

BOOLEAN


configuration/entityTypes/HCO/attributes/VADOD


GPO_MEMBERSHIP

BOOLEAN


configuration/entityTypes/HCO/attributes/GPOMembership


ACADEMIC

BOOLEAN


configuration/entityTypes/HCO/attributes/Academic


MKT_SEGMENT_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/MktSegmentCode


TOTAL_LICENSE_BEDS

VARCHAR


configuration/entityTypes/HCO/attributes/TotalLicenseBeds


TOTAL_CENSUS_BEDS

VARCHAR


configuration/entityTypes/HCO/attributes/TotalCensusBeds


NUM_PATIENTS

VARCHAR


configuration/entityTypes/HCO/attributes/NumPatients


TOTAL_STAFFED_BEDS

VARCHAR


configuration/entityTypes/HCO/attributes/TotalStaffedBeds


TOTAL_SURGERIES

VARCHAR


configuration/entityTypes/HCO/attributes/TotalSurgeries


TOTAL_PROCEDURES

VARCHAR


configuration/entityTypes/HCO/attributes/TotalProcedures


OR_SURGERIES

VARCHAR


configuration/entityTypes/HCO/attributes/ORSurgeries


RESIDENT_PROGRAM

BOOLEAN


configuration/entityTypes/HCO/attributes/ResidentProgram


RESIDENT_COUNT

VARCHAR


configuration/entityTypes/HCO/attributes/ResidentCount


NUMS_OF_PROVIDERS

VARCHAR

Num_of_providers displays the total number of distinct providers affiliated with a business. Current Data: Value between 1 and 422816

configuration/entityTypes/HCO/attributes/NumsOfProviders


CORP_PARENT_NAME

VARCHAR

Corporate Parent Name

configuration/entityTypes/HCO/attributes/CorpParentName


MANAGER_HCO_ID

VARCHAR

Manager Hco Id

configuration/entityTypes/HCO/attributes/ManagerHcoId


MANAGER_HCO_NAME

VARCHAR

Manager Hco Name

configuration/entityTypes/HCO/attributes/ManagerHcoName


OWNER_SUB_NAME

VARCHAR

Owner Sub Name

configuration/entityTypes/HCO/attributes/OwnerSubName


FORMULARY

VARCHAR


configuration/entityTypes/HCO/attributes/Formulary

LKUP_IMS_HCO_FORMULARY

E_MEDICAL_RECORD

VARCHAR


configuration/entityTypes/HCO/attributes/EMedicalRecord

LKUP_IMS_HCO_EREC

E_PRESCRIBE

VARCHAR


configuration/entityTypes/HCO/attributes/EPrescribe

LKUP_IMS_HCO_EREC

PAY_PERFORM

VARCHAR


configuration/entityTypes/HCO/attributes/PayPerform

LKUP_IMS_HCO_PAYPERFORM

CMS_COVERED_FOR_TEACHING

BOOLEAN


configuration/entityTypes/HCO/attributes/CMSCoveredForTeaching


COMM_HOSP

BOOLEAN

Indicates whether the facility is a short-term (average length of stay is less than 30 days) acute care, or non federal hospital. Values: Yes and Null

configuration/entityTypes/HCO/attributes/CommHosp


EMAIL_DOMAIN

VARCHAR


configuration/entityTypes/HCO/attributes/EmailDomain


STATUS_IMS

VARCHAR


configuration/entityTypes/HCO/attributes/StatusIMS

LKUP_IMS_STATUS

DOING_BUSINESS_AS_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/DoingBusinessAsName


COMPANY_TYPE

VARCHAR


configuration/entityTypes/HCO/attributes/CompanyType

LKUP_IMS_ORG_TYPE

CUSIP

VARCHAR


configuration/entityTypes/HCO/attributes/CUSIP


SECTOR_IMS

VARCHAR

Sector

configuration/entityTypes/HCO/attributes/SectorIMS

LKUP_IMS_HCO_SECTORIMS

INDUSTRY

VARCHAR


configuration/entityTypes/HCO/attributes/Industry


FOUNDED_YEAR

VARCHAR


configuration/entityTypes/HCO/attributes/FoundedYear


END_YEAR

VARCHAR


configuration/entityTypes/HCO/attributes/EndYear


IPO_YEAR

VARCHAR


configuration/entityTypes/HCO/attributes/IPOYear


LEGAL_DOMICILE

VARCHAR

State of Legal Domicile

configuration/entityTypes/HCO/attributes/LegalDomicile


OWNERSHIP_STATUS

VARCHAR


configuration/entityTypes/HCO/attributes/OwnershipStatus

LKUP_IMS_HCO_OWNERSHIPSTATUS

PROFIT_STATUS

VARCHAR

The profit status of the facility. Values include: For Profit, Not For Profit, Government, Armed Forces, or NULL (If data is unknown or Not Confidential and Proprietary to IMS Health. Field Name Data Type Field Description Applicable).

configuration/entityTypes/HCO/attributes/ProfitStatus

LKUP_IMS_HCO_PROFITSTATUS

CMI

VARCHAR

CMI is the Case Mix Index for an organization. This is a government-assigned measure of the complexity of medical and surgical care provided to Medicare inpatients by a hospital under the prospective payment system (PPS). It factors in a hospital?s use of technology for patient care and medical services? level of acuity required by the patient population.

configuration/entityTypes/HCO/attributes/CMI


SOURCE_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/SourceName


SUB_SOURCE_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/SubSourceName


DEA_BUSINESS_ACTIVITY

VARCHAR


configuration/entityTypes/HCO/attributes/DEABusinessActivity


IMAGE_LINKS

VARCHAR


configuration/entityTypes/HCO/attributes/ImageLinks


VIDEO_LINKS

VARCHAR


configuration/entityTypes/HCO/attributes/VideoLinks


DOCUMENT_LINKS

VARCHAR


configuration/entityTypes/HCO/attributes/DocumentLinks


WEBSITE_URL

VARCHAR


configuration/entityTypes/HCO/attributes/WebsiteURL


TAX_ID

VARCHAR


configuration/entityTypes/HCO/attributes/TaxID


DESCRIPTION

VARCHAR


configuration/entityTypes/HCO/attributes/Description


STATUS_UPDATE_DATE

DATE


configuration/entityTypes/HCO/attributes/StatusUpdateDate


STATUS_REASON_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/StatusReasonCode

LKUP_IMS_SRC_DEACTIVE_REASON_CODE

COMMENTERS

VARCHAR

Commenters

configuration/entityTypes/HCO/attributes/Commenters


CLIENT_TYPE_CODE

VARCHAR

Client Customer Type

configuration/entityTypes/HCO/attributes/ClientTypeCode

LKUP_IMS_HCO_CLIENT_CUST_TYPE

OFFICIAL_NAME

VARCHAR

Official Name

configuration/entityTypes/HCO/attributes/OfficialName


VALIDATION_CHANGE_REASON

VARCHAR


configuration/entityTypes/HCO/attributes/ValidationChangeReason

LKUP_IMS_VAL_STATUS_CHANGE_REASON

VALIDATION_CHANGE_DATE

DATE


configuration/entityTypes/HCO/attributes/ValidationChangeDate


CREATE_DATE

DATE


configuration/entityTypes/HCO/attributes/CreateDate


UPDATE_DATE

DATE


configuration/entityTypes/HCO/attributes/UpdateDate


CHECK_DATE

DATE


configuration/entityTypes/HCO/attributes/CheckDate


STATE_CODE

VARCHAR

Situation of the workplace: Open/Closed

configuration/entityTypes/HCO/attributes/StateCode

LKUP_IMS_PROFILE_STATE

STATE_DATE

DATE

Date when state of the record was last modified.

configuration/entityTypes/HCO/attributes/StateDate


STATUS_CHANGE_REASON

VARCHAR

Reason the status of the Organization changed

configuration/entityTypes/HCO/attributes/StatusChangeReason


NUM_EMPLOYEES

VARCHAR


configuration/entityTypes/HCO/attributes/NumEmployees


NUM_MED_EMPLOYEES

VARCHAR


configuration/entityTypes/HCO/attributes/NumMedEmployees


TOTAL_BEDS_INTENSIVE_CARE

VARCHAR


configuration/entityTypes/HCO/attributes/TotalBedsIntensiveCare


NUM_EXAMINATION_ROOM

VARCHAR


configuration/entityTypes/HCO/attributes/NumExaminationRoom


NUM_AFFILIATED_SITES

VARCHAR


configuration/entityTypes/HCO/attributes/NumAffiliatedSites


NUM_ENROLLED_MEMBERS

VARCHAR


configuration/entityTypes/HCO/attributes/NumEnrolledMembers


NUM_IN_PATIENTS

VARCHAR


configuration/entityTypes/HCO/attributes/NumInPatients


NUM_OUT_PATIENTS

VARCHAR


configuration/entityTypes/HCO/attributes/NumOutPatients


NUM_OPERATING_ROOMS

VARCHAR


configuration/entityTypes/HCO/attributes/NumOperatingRooms


NUM_PATIENTS_X_WEEK

VARCHAR


configuration/entityTypes/HCO/attributes/NumPatientsXWeek


ACT_TYPE_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/ActTypeCode

LKUP_IMS_ACTIVITY_TYPE

DISPENSE_DRUGS

BOOLEAN


configuration/entityTypes/HCO/attributes/DispenseDrugs


NUM_PRESCRIBERS

VARCHAR


configuration/entityTypes/HCO/attributes/NumPrescribers


PATIENTS_X_YEAR

VARCHAR


configuration/entityTypes/HCO/attributes/PatientsXYear


ACCEPTS_NEW_PATIENTS

VARCHAR

Y/N field indicating whether the workplace accepts new patients

configuration/entityTypes/HCO/attributes/AcceptsNewPatients


EXTERNAL_INFORMATION_URL

VARCHAR


configuration/entityTypes/HCO/attributes/ExternalInformationURL


MATCH_STATUS_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/MatchStatusCode

LKUP_IMS_MATCH_STATUS_CODE

SUBSCRIPTION_FLAG1

BOOLEAN

Used for setting a profile eligible for certain subscription

configuration/entityTypes/HCO/attributes/SubscriptionFlag1


| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SUBSCRIPTION_FLAG2 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag2 | |
| SUBSCRIPTION_FLAG3 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag3 | |
| SUBSCRIPTION_FLAG4 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag4 | |
| SUBSCRIPTION_FLAG5 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag5 | |
| SUBSCRIPTION_FLAG6 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag6 | |
| SUBSCRIPTION_FLAG7 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag7 | |
| SUBSCRIPTION_FLAG8 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag8 | |
| SUBSCRIPTION_FLAG9 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag9 | |
| SUBSCRIPTION_FLAG10 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag10 | |
| ROLE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/RoleCode | LKUP_IMS_ORG_ROLE_CODE |
| ACTIVATION_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivationDate | |
| PARTY_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/PartyID | |
| LAST_VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/LastVerificationStatus | |
| LAST_VERIFICATION_DATE | DATE | | configuration/entityTypes/HCO/attributes/LastVerificationDate | |
| EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCO/attributes/EffectiveDate | |
| END_DATE | DATE | | configuration/entityTypes/HCO/attributes/EndDate | |
| PARTY_LOCALIZATION_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/PartyLocalizationCode | |
| MATCH_PARTY_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchPartyName | |
| DELETE_ENTITY | BOOLEAN | DeleteEntity flag to identify GDPR compliant data | configuration/entityTypes/HCO/attributes/DeleteEntity | |
| OK_VR_TRIGGER | VARCHAR | | configuration/entityTypes/HCO/attributes/OK_VR_Trigger | LKUP_IMS_SEND_FOR_VALIDATION |
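The values in the Reltio Attribute URI column follow a regular shape, so the mapping from a URI to its entity type and (possibly nested) attribute path can be derived mechanically. A minimal sketch, assuming that shape holds; the function name is illustrative and not part of the HUB API:

```python
def parse_attribute_uri(uri: str) -> tuple[str, list[str]]:
    """Split a Reltio attribute URI into its entity type and nested attribute path."""
    parts = uri.split("/")
    # Expected shape:
    # configuration/entityTypes/<Type>/attributes/<Attr>[/attributes/<Nested>...]
    if parts[:2] != ["configuration", "entityTypes"]:
        raise ValueError(f"unexpected attribute URI: {uri}")
    entity_type = parts[2]
    # Attribute names sit at every second position after the entity type.
    path = parts[4::2]
    return entity_type, path

# "configuration/entityTypes/HCO/attributes/SubscriptionFlag2" -> ("HCO", ["SubscriptionFlag2"])
```

For nested sub-attributes such as ClassofTradeN/Priority the path has two segments, which matches the two-level column naming used in the tables below.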

HCO_MAIN_HCO_CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| MAINHCO_URI | VARCHAR | generated key description | | |
| CLASSOFTRADEN_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority | |
| CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION |
| FACILITY_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE |
| SPECIALTY | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY |

HCO_ADDRESS_UNIT

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ADDRESS_URI | VARCHAR | generated key description | | |
| UNIT_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| UNIT_NAME | VARCHAR | | configuration/entityTypes/Location/attributes/Unit/attributes/UnitName | |
| UNIT_VALUE | VARCHAR | | configuration/entityTypes/Location/attributes/Unit/attributes/UnitValue | |


HCO_ADDRESS_BRICK

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ADDRESS_URI | VARCHAR | generated key description | | |
| BRICK_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| TYPE | VARCHAR | | configuration/entityTypes/Location/attributes/Brick/attributes/Type | LKUP_IMS_BRICK_TYPE |
| BRICK_VALUE | VARCHAR | | configuration/entityTypes/Location/attributes/Brick/attributes/BrickValue | LKUP_IMS_BRICK_VALUE |
| SORT_ORDER | VARCHAR | | configuration/entityTypes/Location/attributes/Brick/attributes/SortOrder | |


KEY_FINANCIAL_FIGURES_OVERVIEW

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| KEY_FINANCIAL_FIGURES_OVERVIEW_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| FINANCIAL_STATEMENT_TO_DATE | DATE | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDate | |
| FINANCIAL_PERIOD_DURATION | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDuration | |
| SALES_REVENUE_CURRENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrency | |
| SALES_REVENUE_CURRENCY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCode | |
| SALES_REVENUE_RELIABILITY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCode | |
| SALES_REVENUE_UNIT_OF_SIZE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSize | |
| SALES_REVENUE_AMOUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmount | |
| PROFIT_OR_LOSS_CURRENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrency | |
| PROFIT_OR_LOSS_RELIABILITY_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityText | |
| PROFIT_OR_LOSS_UNIT_OF_SIZE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSize | |
| PROFIT_OR_LOSS_AMOUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmount | |
| SALES_TURNOVER_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRate | |
| SALES3YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRate | |
| SALES5YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRate | |
| EMPLOYEE3YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRate | |
| EMPLOYEE5YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRate | |


CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| CLASSOF_TRADE_N_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority | |
| CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION |
| FACILITY_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE |
| SPECIALTY | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY |

SPECIALTY

DO NOT USE THIS ATTRIBUTE - will be deprecated

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SPECIALTY_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| SPECIALTY | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCO/attributes/Specialty/attributes/Specialty | |
| TYPE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCO/attributes/Specialty/attributes/Type | |


GSA_EXCLUSION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| GSA_EXCLUSION_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| SANCTION_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/SanctionId | |
| ORGANIZATION_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/OrganizationName | |
| ADDRESS_LINE1 | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine1 | |
| ADDRESS_LINE2 | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine2 | |
| CITY | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/City | |
| STATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/State | |
| ZIP | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Zip | |
| ACTION_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/ActionDate | |
| TERM_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/TermDate | |
| AGENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Agency | |
| CONFIDENCE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Confidence | |


OIG_EXCLUSION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| OIG_EXCLUSION_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| SANCTION_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/SanctionId | |
| ACTION_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionCode | |
| ACTION_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDescription | |
| BOARD_CODE | VARCHAR | Court case board id | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardCode | |
| BOARD_DESC | VARCHAR | Court case board description | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardDesc | |
| ACTION_DATE | DATE | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDate | |
| OFFENSE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseCode | |
| OFFENSE_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseDescription | |


BRICK

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| BRICK_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/Brick/attributes/Type | LKUP_IMS_BRICK_TYPE |
| BRICK_VALUE | VARCHAR | | configuration/entityTypes/HCO/attributes/Brick/attributes/BrickValue | LKUP_IMS_BRICK_VALUE |

EMR

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| EMR_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| NOTES | BOOLEAN | Y/N field indicating whether the workplace uses EMR software to write notes | configuration/entityTypes/HCO/attributes/EMR/attributes/Notes | |
| PRESCRIBES | BOOLEAN | Y/N field indicating whether the workplace uses EMR software to write prescriptions | configuration/entityTypes/HCO/attributes/EMR/attributes/Prescribes | LKUP_IMS_EMR_PRESCRIBES |
| ELABS_X_RAYS | BOOLEAN | Y/N field indicating whether the workplace uses EMR software for eLabs/Xrays | configuration/entityTypes/HCO/attributes/EMR/attributes/ElabsXRays | LKUP_IMS_EMR_ELABS_XRAYS |
| NUMBER_OF_PHYSICIANS | VARCHAR | Number of physicians that use EMR software in the workplace | configuration/entityTypes/HCO/attributes/EMR/attributes/NumberOfPhysicians | |
| POLICYMAKER | VARCHAR | Individual who makes decisions regarding EMR software | configuration/entityTypes/HCO/attributes/EMR/attributes/Policymaker | |
| SOFTWARE_TYPE | VARCHAR | Name of the EMR software used at the workplace | configuration/entityTypes/HCO/attributes/EMR/attributes/SoftwareType | |
| ADOPTION | VARCHAR | When the EMR software was adopted at the workplace | configuration/entityTypes/HCO/attributes/EMR/attributes/Adoption | |
| BUYING_FACTOR | VARCHAR | Buying factor which influenced the workplace's decision to purchase the EMR | configuration/entityTypes/HCO/attributes/EMR/attributes/BuyingFactor | |
| OWNER | VARCHAR | Individual who made the decision to purchase EMR software | configuration/entityTypes/HCO/attributes/EMR/attributes/Owner | |
| AWARE | BOOLEAN | | configuration/entityTypes/HCO/attributes/EMR/attributes/Aware | LKUP_IMS_EMR_AWARE |
| SOFTWARE | BOOLEAN | | configuration/entityTypes/HCO/attributes/EMR/attributes/Software | LKUP_IMS_EMR_SOFTWARE |
| VENDOR | VARCHAR | | configuration/entityTypes/HCO/attributes/EMR/attributes/Vendor | LKUP_IMS_EMR_VENDOR |

BUSINESS_HOURS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| BUSINESS_HOURS_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| DAY | VARCHAR | | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/Day | |
| PERIOD | VARCHAR | | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/Period | |
| TIME_SLOT | VARCHAR | | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/TimeSlot | |
| START_TIME | VARCHAR | | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/StartTime | |
| END_TIME | VARCHAR | | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/EndTime | |
| APPOINTMENT_ONLY | BOOLEAN | | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/AppointmentOnly | |
| PERIOD_START | VARCHAR | | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodStart | |
| PERIOD_END | VARCHAR | | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodEnd | |
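A workplace can have several BUSINESS_HOURS rows, one per time slot, so consumers typically group them by day. A minimal sketch of that grouping; the row values and dict shape are illustrative, not actual HUB output:

```python
from collections import defaultdict

# Hypothetical flat rows using the BUSINESS_HOURS columns above.
rows = [
    {"DAY": "MON", "START_TIME": "09:00", "END_TIME": "12:00", "APPOINTMENT_ONLY": "N"},
    {"DAY": "MON", "START_TIME": "14:00", "END_TIME": "17:00", "APPOINTMENT_ONLY": "Y"},
    {"DAY": "TUE", "START_TIME": "09:00", "END_TIME": "17:00", "APPOINTMENT_ONLY": "N"},
]

# Collect the slots for each day, flagging appointment-only slots.
schedule = defaultdict(list)
for r in rows:
    slot = f"{r['START_TIME']}-{r['END_TIME']}"
    if r["APPOINTMENT_ONLY"] == "Y":
        slot += " (appointment only)"
    schedule[r["DAY"]].append(slot)

# schedule["MON"] -> ["09:00-12:00", "14:00-17:00 (appointment only)"]
```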


ACO_DETAILS

ACO Details

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ACO_DETAILS_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| ACO_TYPE_CODE | VARCHAR | AcoTypeCode | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeCode | LKUP_IMS_ACO_TYPE |
| ACO_TYPE_CATG | VARCHAR | AcoTypeCatg | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeCatg | |
| ACO_TYPE_MDEL | VARCHAR | AcoTypeMdel | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeMdel | |
| ACO_DETAIL_ID | VARCHAR | AcoDetailId | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailId | |
| ACO_DETAIL_CODE | VARCHAR | AcoDetailCode | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailCode | LKUP_IMS_ACO_DETAIL |
| ACO_DETAIL_GROUP_CODE | VARCHAR | AcoDetailGroupCode | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailGroupCode | LKUP_IMS_ACO_DETAIL_GROUP |
| ACO_VAL | VARCHAR | AcoVal | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoVal | |


TRADE_STYLE_NAME

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| TRADE_STYLE_NAME_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| ORGANIZATION_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/OrganizationName | |
| LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/LanguageCode | |
| FORMER_ORGANIZATION_PRIMARY_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/FormerOrganizationPrimaryName | |
| DISPLAY_SEQUENCE | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/DisplaySequence | |
| TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/Type | |


PRIOR_DUNS_NUMBER

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| PRIOR_DUNSN_UMBER_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| TRANSFER_DUNS_NUMBER | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDUNSNumber | |
| TRANSFER_REASON_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonText | |
| TRANSFER_REASON_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonCode | |
| TRANSFER_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDate | |
| TRANSFERRED_FROM_DUNS_NUMBER | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumber | |
| TRANSFERRED_TO_DUNS_NUMBER | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumber | |


INDUSTRY_CODE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| INDUSTRY_CODE_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| DNB_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DNBCode | |
| INDUSTRY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCode | |
| INDUSTRY_CODE_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeDescription | |
| INDUSTRY_CODE_LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeLanguageCode | |
| INDUSTRY_CODE_WRITING_SCRIPT | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeWritingScript | |
| DISPLAY_SEQUENCE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DisplaySequence | |
| SALES_PERCENTAGE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/SalesPercentage | |
| TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/Type | |
| INDUSTRY_TYPE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryTypeCode | |
| IMPORT_EXPORT_AGENT | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/ImportExportAgent | |


ACTIVITIES_AND_OPERATIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ACTIVITIES_AND_OPERATIONS_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| LINE_OF_BUSINESS_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescription | |
| LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LanguageCode | |
| WRITING_SCRIPT_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/WritingScriptCode | |
| IMPORT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ImportIndicator | |
| EXPORT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ExportIndicator | |
| AGENT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/AgentIndicator | |


EMPLOYEE_DETAILS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| EMPLOYEE_DETAILS_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| INDIVIDUAL_EMPLOYEE_FIGURES_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDate | |
| INDIVIDUAL_TOTAL_EMPLOYEE_QUANTITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantity | |
| INDIVIDUAL_RELIABILITY_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualReliabilityText | |
| TOTAL_EMPLOYEE_QUANTITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeQuantity | |
| TOTAL_EMPLOYEE_RELIABILITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeReliability | |
| PRINCIPALS_INCLUDED | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/PrincipalsIncluded | |


MATCH_QUALITY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| MATCH_QUALITY_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| CONFIDENCE_CODE | VARCHAR | DnB Match Quality Confidence Code | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/ConfidenceCode | |
| DISPLAY_SEQUENCE | VARCHAR | DnB Match Quality Display Sequence | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/DisplaySequence | |
| MATCH_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchCode | |
| BEMFAB | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/BEMFAB | |
| MATCH_GRADE | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchGrade | |


ORGANIZATION_DETAIL

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ORGANIZATION_DETAIL_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| MEMBER_ROLE | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/MemberRole | |
| STANDALONE | BOOLEAN | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/Standalone | |
| CONTROL_OWNERSHIP_DATE | DATE | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/ControlOwnershipDate | |
| OPERATING_STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatus | |
| START_YEAR | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StartYear | |
| FRANCHISE_OPERATION_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/FranchiseOperationType | |
| BONEYARD_ORGANIZATION | BOOLEAN | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/BoneyardOrganization | |
| OPERATING_STATUS_COMMENT | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusComment | |


DUNS_HIERARCHY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| DUNS_HIERARCHY_URI | VARCHAR | generated key description | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| GLOBAL_ULTIMATE_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateDUNS | |
| GLOBAL_ULTIMATE_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateOrganization | |
| DOMESTIC_ULTIMATE_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateDUNS | |
| DOMESTIC_ULTIMATE_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateOrganization | |
| PARENT_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentDUNS | |
| PARENT_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentOrganization | |
| HEADQUARTERS_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersDUNS | |
| HEADQUARTERS_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersOrganization | |


AFFILIATIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| RELATION_TYPE | VARCHAR | Reltio Relation Type | | |
| START_ENTITY_URI | VARCHAR | Reltio Start Entity URI | | |
| END_ENTITY_URI | VARCHAR | Reltio End Entity URI | | |
| REL_GROUP | VARCHAR | HCRS relation group from the relationship type; each rel group refers to one relation id | configuration/relationTypes/AffiliatedPurchasing/attributes/RelGroup, configuration/relationTypes/Managed/attributes/RelGroup | LKUP_IMS_RELGROUP_TYPE |
| REL_ORDER_AFFILIATEDPURCHASING | VARCHAR | Order | configuration/relationTypes/AffiliatedPurchasing/attributes/RelOrder | |
| STATUS_REASON_CODE | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/StatusReasonCode, configuration/relationTypes/Activity/attributes/StatusReasonCode, configuration/relationTypes/Managed/attributes/StatusReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE |
| STATUS_UPDATE_DATE | DATE | | configuration/relationTypes/AffiliatedPurchasing/attributes/StatusUpdateDate, configuration/relationTypes/Activity/attributes/StatusUpdateDate, configuration/relationTypes/Managed/attributes/StatusUpdateDate | |
| VALIDATION_CHANGE_REASON | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeReason, configuration/relationTypes/Activity/attributes/ValidationChangeReason, configuration/relationTypes/Managed/attributes/ValidationChangeReason | LKUP_IMS_VAL_STATUS_CHANGE_REASON |
| VALIDATION_CHANGE_DATE | DATE | | configuration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeDate, configuration/relationTypes/Activity/attributes/ValidationChangeDate, configuration/relationTypes/Managed/attributes/ValidationChangeDate | |
| VALIDATION_STATUS | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/ValidationStatus, configuration/relationTypes/Activity/attributes/ValidationStatus, configuration/relationTypes/Managed/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS |
| AFFILIATION_STATUS | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/AffiliationStatus, configuration/relationTypes/Activity/attributes/AffiliationStatus, configuration/relationTypes/Managed/attributes/AffiliationStatus | LKUP_IMS_STATUS |
| COUNTRY | VARCHAR | Country Code | configuration/relationTypes/AffiliatedPurchasing/attributes/Country, configuration/relationTypes/Activity/attributes/Country, configuration/relationTypes/Managed/attributes/Country | LKUP_IMS_COUNTRY_CODE |
| AFFILIATION_NAME | VARCHAR | Affiliation Name | configuration/relationTypes/AffiliatedPurchasing/attributes/AffiliationName, configuration/relationTypes/Activity/attributes/AffiliationName | |
| SUBSCRIPTION_FLAG1 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag1, configuration/relationTypes/Activity/attributes/SubscriptionFlag1, configuration/relationTypes/Managed/attributes/SubscriptionFlag1 | |
| SUBSCRIPTION_FLAG2 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag2, configuration/relationTypes/Activity/attributes/SubscriptionFlag2, configuration/relationTypes/Managed/attributes/SubscriptionFlag2 | |
| SUBSCRIPTION_FLAG3 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag3, configuration/relationTypes/Activity/attributes/SubscriptionFlag3, configuration/relationTypes/Managed/attributes/SubscriptionFlag3 | |
| SUBSCRIPTION_FLAG4 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag4, configuration/relationTypes/Activity/attributes/SubscriptionFlag4, configuration/relationTypes/Managed/attributes/SubscriptionFlag4 | |
| SUBSCRIPTION_FLAG5 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag5, configuration/relationTypes/Activity/attributes/SubscriptionFlag5, configuration/relationTypes/Managed/attributes/SubscriptionFlag5 | |
| SUBSCRIPTION_FLAG6 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag6, configuration/relationTypes/Activity/attributes/SubscriptionFlag6, configuration/relationTypes/Managed/attributes/SubscriptionFlag6 | |
| SUBSCRIPTION_FLAG7 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag7, configuration/relationTypes/Activity/attributes/SubscriptionFlag7, configuration/relationTypes/Managed/attributes/SubscriptionFlag7 | |
| SUBSCRIPTION_FLAG8 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag8, configuration/relationTypes/Activity/attributes/SubscriptionFlag8, configuration/relationTypes/Managed/attributes/SubscriptionFlag8 | |
| SUBSCRIPTION_FLAG9 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag9, configuration/relationTypes/Activity/attributes/SubscriptionFlag9, configuration/relationTypes/Managed/attributes/SubscriptionFlag9 | |
| SUBSCRIPTION_FLAG10 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag10, configuration/relationTypes/Activity/attributes/SubscriptionFlag10, configuration/relationTypes/Managed/attributes/SubscriptionFlag10 | |
| BEST_RELATIONSHIP_INDICATOR | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/BestRelationshipIndicator, configuration/relationTypes/Activity/attributes/BestRelationshipIndicator, configuration/relationTypes/Managed/attributes/BestRelationshipIndicator | LKUP_IMS_YES_NO |
| RELATIONSHIP_RANK | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipRank, configuration/relationTypes/Activity/attributes/RelationshipRank, configuration/relationTypes/Managed/attributes/RelationshipRank | |
| RELATIONSHIP_VIEW_CODE | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewCode, configuration/relationTypes/Activity/attributes/RelationshipViewCode, configuration/relationTypes/Managed/attributes/RelationshipViewCode | |
| RELATIONSHIP_VIEW_TYPE_CODE | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewTypeCode, configuration/relationTypes/Activity/attributes/RelationshipViewTypeCode, configuration/relationTypes/Managed/attributes/RelationshipViewTypeCode | |
| RELATIONSHIP_STATUS | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStatus, configuration/relationTypes/Activity/attributes/RelationshipStatus, configuration/relationTypes/Managed/attributes/RelationshipStatus | LKUP_IMS_RELATIONSHIP_STATUS |
| RELATIONSHIP_CREATE_DATE | DATE | | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipCreateDate, configuration/relationTypes/Activity/attributes/RelationshipCreateDate, configuration/relationTypes/Managed/attributes/RelationshipCreateDate | |
| UPDATE_DATE | DATE | | configuration/relationTypes/AffiliatedPurchasing/attributes/UpdateDate, configuration/relationTypes/Activity/attributes/UpdateDate, configuration/relationTypes/Managed/attributes/UpdateDate | |
| RELATIONSHIP_START_DATE | DATE | | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStartDate, configuration/relationTypes/Activity/attributes/RelationshipStartDate, configuration/relationTypes/Managed/attributes/RelationshipStartDate | |
| RELATIONSHIP_END_DATE | DATE | | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipEndDate, configuration/relationTypes/Activity/attributes/RelationshipEndDate, configuration/relationTypes/Managed/attributes/RelationshipEndDate | |
| CHECKED_DATE | DATE | | configuration/relationTypes/Activity/attributes/CheckedDate | |
| PREFERRED_MAIL_INDICATOR | BOOLEAN | | configuration/relationTypes/Activity/attributes/PreferredMailIndicator | |
| PREFERRED_VISIT_INDICATOR | BOOLEAN | | configuration/relationTypes/Activity/attributes/PreferredVisitIndicator | |
| COMMITTEE_MEMBER | VARCHAR | | configuration/relationTypes/Activity/attributes/CommitteeMember | LKUP_IMS_MEMBER_MED_COMMITTEE |
| APPOINTMENT_REQUIRED | BOOLEAN | | configuration/relationTypes/Activity/attributes/AppointmentRequired | |
| AFFILIATION_TYPE_CODE | VARCHAR | Affiliation Type Code | configuration/relationTypes/Activity/attributes/AffiliationTypeCode | |
| WORKING_STATUS | VARCHAR | | configuration/relationTypes/Activity/attributes/WorkingStatus | LKUP_IMS_WORKING_STATUS |
| TITLE | VARCHAR | | configuration/relationTypes/Activity/attributes/Title | LKUP_IMS_PROF_TITLE |
| RANK | VARCHAR | | configuration/relationTypes/Activity/attributes/Rank | |
| PRIMARY_AFFILIATION_INDICATOR | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrimaryAffiliationIndicator | |
| ACT_WEBSITE_URL | VARCHAR | | configuration/relationTypes/Activity/attributes/ActWebsiteURL | |
| ACT_VALIDATION_STATUS | VARCHAR | | configuration/relationTypes/Activity/attributes/ActValidationStatus | LKUP_IMS_VAL_STATUS |
| PREF_OR_ACTIVE | VARCHAR | | configuration/relationTypes/Activity/attributes/PrefOrActive | |
| COMMENTERS | VARCHAR | Commenters | configuration/relationTypes/Activity/attributes/Commenters | |
| REL_ORDER_MANAGED | BOOLEAN | Order | configuration/relationTypes/Managed/attributes/RelOrder | |


PURCHASING_CLASSIFICATION

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
CLASSIFICATION_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
CLASSIFICATION_TYPE | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE
CLASSIFICATION_INDICATOR | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationIndicator | LKUP_IMS_CLASSIFICATION_INDICATOR
CLASSIFICATION_VALUE | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValue |
CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValueNumericQuantity |
STATUS | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS
EFFECTIVE_DATE | DATE | | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EffectiveDate |
END_DATE | DATE | | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EndDate |
NOTES | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/Notes |

PURCHASING_SOURCE_DATA

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
SOURCE_DATA_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
DATASET_IDENTIFIER | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/DatasetIdentifier |
START_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifier |
END_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifier |
RANK | VARCHAR | | configuration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/Rank |

ACTIVITY_PHONE

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
ACT_PHONE_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
TYPE_IMS | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/TypeIMS | LKUP_IMS_COMMUNICATION_TYPE
NUMBER | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/Number |
EXTENSION | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/Extension |
RANK | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/Rank |
COUNTRY_CODE | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/CountryCode | LKUP_IMS_COUNTRY_CODE
AREA_CODE | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/AreaCode |
LOCAL_NUMBER | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/LocalNumber |
FORMATTED_NUMBER | VARCHAR | Formatted number of the phone | configuration/relationTypes/Activity/attributes/ActPhone/attributes/FormattedNumber |
VALIDATION_STATUS | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/ValidationStatus |
LINE_TYPE | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/LineType |
FORMAT_MASK | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/FormatMask |
DIGIT_COUNT | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/DigitCount |
GEO_AREA | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/GeoArea |
GEO_COUNTRY | VARCHAR | | configuration/relationTypes/Activity/attributes/ActPhone/attributes/GeoCountry |
ACTIVE | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/relationTypes/Activity/attributes/ActPhone/attributes/Active |

ACTIVITY_PRIVACY_PREFERENCES

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
PRIVACY_PREFERENCES_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
PHONE_OPT_OUT | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/PhoneOptOut |
ALLOWED_TO_CONTACT | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/AllowedToContact |
EMAIL_OPT_OUT | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/EmailOptOut |
MAIL_OPT_OUT | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/MailOptOut |
FAX_OPT_OUT | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/FaxOptOut |
REMOTE_OPT_OUT | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/RemoteOptOut |
OPT_OUT_ONEKEY | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/OptOutOnekey |
VISIT_OPT_OUT | BOOLEAN | | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/VisitOptOut |

ACTIVITY_SPECIALITIES

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
SPECIALITIES_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
SPECIALTY_TYPE | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyType | LKUP_IMS_SPECIALTY_TYPE
SPECIALTY | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/Specialty | LKUP_IMS_SPECIALTY
EMAIL_OPT_OUT | BOOLEAN | | configuration/relationTypes/Activity/attributes/Specialities/attributes/EmailOptOut |
DESC | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/Desc |
GROUP | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/Group |
SOURCE_CD | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/SourceCD |
SPECIALTY_DETAIL | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyDetail |
PROFESSION_CODE | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/ProfessionCode |
RANK | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/Rank |
PRIMARY_SPECIALTY_FLAG | BOOLEAN | Primary Specialty flag to be populated by client teams according to business rules | configuration/relationTypes/Activity/attributes/Specialities/attributes/PrimarySpecialtyFlag |
SORT_ORDER | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/SortOrder |
BEST_RECORD | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/BestRecord |
SUB_SPECIALTY | VARCHAR | | configuration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialty | LKUP_IMS_SPECIALTY
SUB_SPECIALTY_RANK | VARCHAR | SubSpecialty Rank | configuration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialtyRank |

ACTIVITY_IDENTIFIERS

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
ACT_IDENTIFIERS_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
ID | VARCHAR | | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/ID |
TYPE | VARCHAR | | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/Type | LKUP_IMS_HCP_IDENTIFIER_TYPE
ORDER | VARCHAR | Displays the order of priority for an MPN for those facilities that share an MPN. Valid values are: P (the MPN on a business record is the primary identifier for the business) and O (the MPN is a secondary identifier). Using P for the MPN supports aggregating clinical volumes and avoids double counting. | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/Order |
AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS
NATIONAL_ID_ATTRIBUTE | VARCHAR | | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/NationalIdAttribute |

ACTIVITY_ADDITIONAL_ATTRIBUTES

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
ADDITIONAL_ATTRIBUTES_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
ATTRIBUTE_NAME | VARCHAR | | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeName |
ATTRIBUTE_TYPE | VARCHAR | | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeType | LKUP_IMS_TYPE_CODE
ATTRIBUTE_VALUE | VARCHAR | | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeValue |
ATTRIBUTE_RANK | VARCHAR | | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeRank |
ADDITIONAL_INFO | VARCHAR | | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AdditionalInfo |

ACTIVITY_BUSINESS_HOURS

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
BUSINESS_HOURS_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
DAY | VARCHAR | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/Day |
PERIOD | VARCHAR | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/Period |
TIME_SLOT | VARCHAR | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/TimeSlot |
START_TIME | VARCHAR | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/StartTime |
END_TIME | VARCHAR | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/EndTime |
APPOINTMENT_ONLY | BOOLEAN | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/AppointmentOnly |
PERIOD_START | VARCHAR | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodStart |
PERIOD_END | VARCHAR | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodEnd |
PERIOD_OF_DAY | VARCHAR | | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodOfDay |

ACTIVITY_AFFILIATION_ROLE

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
AFFILIATION_ROLE_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
ROLE_RANK | VARCHAR | | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleRank |
ROLE_NAME | VARCHAR | | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleName | LKUP_IMS_ROLE
ROLE_ATTRIBUTE | VARCHAR | | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleAttribute |
ROLE_TYPE_ATTRIBUTE | VARCHAR | | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleTypeAttribute |
ROLE_STATUS | VARCHAR | | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleStatus |
BEST_ROLE_INDICATOR | VARCHAR | | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/BestRoleIndicator |

ACTIVITY_EMAIL

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
ACT_EMAIL_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
TYPE_IMS | VARCHAR | | configuration/relationTypes/Activity/attributes/ActEmail/attributes/TypeIMS | LKUP_IMS_COMMUNICATION_TYPE
EMAIL | VARCHAR | | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Email |
DOMAIN | VARCHAR | | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Domain |
DOMAIN_TYPE | VARCHAR | | configuration/relationTypes/Activity/attributes/ActEmail/attributes/DomainType |
USERNAME | VARCHAR | | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Username |
RANK | VARCHAR | | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Rank |
VALIDATION_STATUS | VARCHAR | | configuration/relationTypes/Activity/attributes/ActEmail/attributes/ValidationStatus |
ACTIVE | BOOLEAN | | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Active |

ACTIVITY_BRICK

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
BRICK_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
TYPE | VARCHAR | | configuration/relationTypes/Activity/attributes/Brick/attributes/Type | LKUP_IMS_BRICK_TYPE
BRICK_VALUE | VARCHAR | | configuration/relationTypes/Activity/attributes/Brick/attributes/BrickValue | LKUP_IMS_BRICK_VALUE
SORT_ORDER | VARCHAR | | configuration/relationTypes/Activity/attributes/Brick/attributes/SortOrder |

ACTIVITY_CLASSIFICATION

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
CLASSIFICATION_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
CLASSIFICATION_TYPE | VARCHAR | | configuration/relationTypes/Activity/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE
CLASSIFICATION_INDICATOR | VARCHAR | | configuration/relationTypes/Activity/attributes/Classification/attributes/ClassificationIndicator | LKUP_IMS_CLASSIFICATION_INDICATOR
CLASSIFICATION_VALUE | VARCHAR | | configuration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValue |
CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR | | configuration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValueNumericQuantity |
STATUS | VARCHAR | | configuration/relationTypes/Activity/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS
EFFECTIVE_DATE | DATE | | configuration/relationTypes/Activity/attributes/Classification/attributes/EffectiveDate |
END_DATE | DATE | | configuration/relationTypes/Activity/attributes/Classification/attributes/EndDate |
NOTES | VARCHAR | | configuration/relationTypes/Activity/attributes/Classification/attributes/Notes |

ACTIVITY_SOURCE_DATA

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
SOURCE_DATA_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
DATASET_IDENTIFIER | VARCHAR | | configuration/relationTypes/Activity/attributes/SourceData/attributes/DatasetIdentifier |
START_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/relationTypes/Activity/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifier |
END_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/relationTypes/Activity/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifier |
RANK | VARCHAR | | configuration/relationTypes/Activity/attributes/SourceData/attributes/Rank |

MANAGED_CLASSIFICATION

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
CLASSIFICATION_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
CLASSIFICATION_TYPE | VARCHAR | | configuration/relationTypes/Managed/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE
CLASSIFICATION_INDICATOR | VARCHAR | | configuration/relationTypes/Managed/attributes/Classification/attributes/ClassificationIndicator | LKUP_IMS_CLASSIFICATION_INDICATOR
CLASSIFICATION_VALUE | VARCHAR | | configuration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValue |
CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR | | configuration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValueNumericQuantity |
STATUS | VARCHAR | | configuration/relationTypes/Managed/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS
EFFECTIVE_DATE | DATE | | configuration/relationTypes/Managed/attributes/Classification/attributes/EffectiveDate |
END_DATE | DATE | | configuration/relationTypes/Managed/attributes/Classification/attributes/EndDate |
NOTES | VARCHAR | | configuration/relationTypes/Managed/attributes/Classification/attributes/Notes |

MANAGED_SOURCE_DATA

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
SOURCE_DATA_URI | VARCHAR | generated key description | |
RELATION_URI | VARCHAR | Reltio Relation URI | |
DATASET_IDENTIFIER | VARCHAR | | configuration/relationTypes/Managed/attributes/SourceData/attributes/DatasetIdentifier |
START_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/relationTypes/Managed/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifier |
END_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/relationTypes/Managed/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifier |
RANK | VARCHAR | | configuration/relationTypes/Managed/attributes/SourceData/attributes/Rank |

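Every Reltio Attribute URI in the tables above follows one pattern: configuration/<relationTypes|entityTypes>/<TypeName>, then alternating attributes/<Name> segments for nested attributes. A minimal sketch of splitting such a URI into its parts (the function name and return shape are our own, not part of the HUB API):

```python
def parse_attribute_uri(uri: str) -> dict:
    """Split a Reltio attribute URI into object kind, type name, and attribute path.

    Example input:
    configuration/relationTypes/Activity/attributes/ActPhone/attributes/Number
    """
    parts = uri.split("/")
    if parts[0] != "configuration" or "attributes" not in parts:
        raise ValueError(f"not a Reltio attribute URI: {uri}")
    kind, type_name = parts[1], parts[2]
    # after the type name, segments alternate "attributes"/<attribute name>;
    # keep only the names to get the nesting path
    attr_path = [p for i, p in enumerate(parts[3:]) if i % 2 == 1]
    return {"kind": kind, "type": type_name, "path": attr_path}
```

For a nested attribute the path has more than one element, e.g. the ACTIVITY_PHONE NUMBER column maps to path ["ActPhone", "Number"] under relation type Activity.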
" + }, + { + "title": "Dynamic views for COMPANY MDM Model", + "pageID": "163917858", + "pageLink": "/display/GMDM/Dynamic+views+for+COMPANY+MDM+Model", + "content": "

HCP

Health care provider

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
COUNTRY_HCP | VARCHAR | Country | configuration/entityTypes/HCP/attributes/Country |
COMPANY_CUST_ID | VARCHAR | An auto-generated unique COMPANY id assigned to an HCP | configuration/entityTypes/HCP/attributes/COMPANYCustID |
PREFIX | VARCHAR | Prefix added before the name, e.g., Mr, Ms, Dr | configuration/entityTypes/HCP/attributes/Prefix | HCPPrefix
NAME | VARCHAR | Name | configuration/entityTypes/HCP/attributes/Name |
FIRST_NAME | VARCHAR | First Name | configuration/entityTypes/HCP/attributes/FirstName |
LAST_NAME | VARCHAR | Last Name | configuration/entityTypes/HCP/attributes/LastName |
MIDDLE_NAME | VARCHAR | Middle Name | configuration/entityTypes/HCP/attributes/MiddleName |
CLEANSED_MIDDLE_NAME | VARCHAR | Cleansed Middle Name | configuration/entityTypes/HCP/attributes/CleansedMiddleName |
STATUS | VARCHAR | Status, e.g., Active or Inactive | configuration/entityTypes/HCP/attributes/Status | HCPStatus
STATUS_DETAIL | VARCHAR | Deactivation reason | configuration/entityTypes/HCP/attributes/StatusDetail | HCPStatusDetail
DEACTIVATION_CODE | VARCHAR | Deactivation reason | configuration/entityTypes/HCP/attributes/DeactivationCode | HCPDeactivationReasonCode
SUFFIX_NAME | VARCHAR | Generation Suffix | configuration/entityTypes/HCP/attributes/SuffixName | SuffixName
GENDER | VARCHAR | Gender | configuration/entityTypes/HCP/attributes/Gender | Gender
NICKNAME | VARCHAR | Nickname | configuration/entityTypes/HCP/attributes/Nickname |
PREFERRED_NAME | VARCHAR | Preferred Name | configuration/entityTypes/HCP/attributes/PreferredName |
FORMATTED_NAME | VARCHAR | Formatted Name | configuration/entityTypes/HCP/attributes/FormattedName |
TYPE_CODE | VARCHAR | HCP Type Code | configuration/entityTypes/HCP/attributes/TypeCode | HCPType
SUB_TYPE_CODE | VARCHAR | HCP SubType Code | configuration/entityTypes/HCP/attributes/SubTypeCode | HCPSubTypeCode
IS_COMPANY_APPROVED_SPEAKER | BOOLEAN | Is COMPANY Approved Speaker | configuration/entityTypes/HCP/attributes/IsCOMPANYApprovedSpeaker |
SPEAKER_LAST_BRIEFING_DATE | DATE | Last Briefing Date | configuration/entityTypes/HCP/attributes/SpeakerLastBriefingDate |
SPEAKER_TYPE | VARCHAR | Speaker type | configuration/entityTypes/HCP/attributes/SpeakerType |
SPEAKER_STATUS | VARCHAR | Speaker Status | configuration/entityTypes/HCP/attributes/SpeakerStatus | HCPSpeakerStatus
SPEAKER_LEVEL | VARCHAR | Speaker Level | configuration/entityTypes/HCP/attributes/SpeakerLevel |
SPEAKER_EFFECTIVE_DATE | DATE | Speaker Effective Date | configuration/entityTypes/HCP/attributes/SpeakerEffectiveDate |
SPEAKER_DEACTIVATE_REASON | VARCHAR | Speaker Deactivate Reason | configuration/entityTypes/HCP/attributes/SpeakerDeactivateReason |
DELETION_DATE | DATE | Deletion Date | configuration/entityTypes/HCP/attributes/DeletionDate |
ACCOUNT_BLOCKED | BOOLEAN | Indicates whether the account is blocked | configuration/entityTypes/HCP/attributes/AccountBlocked |
Y_O_B | VARCHAR | Birth Year | configuration/entityTypes/HCP/attributes/YoB |
D_O_D | DATE | | configuration/entityTypes/HCP/attributes/DoD |
Y_O_D | VARCHAR | | configuration/entityTypes/HCP/attributes/YoD |
TERRITORY_NUMBER | VARCHAR | Territory Number | configuration/entityTypes/HCP/attributes/TerritoryNumber |
WEBSITE_URL | VARCHAR | Website URL | configuration/entityTypes/HCP/attributes/WebsiteURL |
TITLE | VARCHAR | Title of HCP | configuration/entityTypes/HCP/attributes/Title | HCPTitle
EFFECTIVE_END_DATE | DATE | | configuration/entityTypes/HCP/attributes/EffectiveEndDate |
COMPANY_WATCH_IND | BOOLEAN | COMPANY Watch Ind | configuration/entityTypes/HCP/attributes/COMPANYWatchInd |
KOL_STATUS | BOOLEAN | KOL Status | configuration/entityTypes/HCP/attributes/KOLStatus |
THIRD_PARTY_DECIL | VARCHAR | Third Party Decil | configuration/entityTypes/HCP/attributes/ThirdPartyDecil |
FEDERAL_EMP_LETTER_DATE | DATE | Federal Emp Letter Date | configuration/entityTypes/HCP/attributes/FederalEmpLetterDate |
MARKETING_CONTRACT_CODE | VARCHAR | Marketing Contract Code | configuration/entityTypes/HCP/attributes/MarketingContractCode |
CURRICULUM_VITAE_LINK | VARCHAR | Curriculum Vitae Link | configuration/entityTypes/HCP/attributes/CurriculumVitaeLink |
SPEAKER_TRAVEL_INDICATOR | VARCHAR | Speaker Travel Indicator | configuration/entityTypes/HCP/attributes/SpeakerTravelIndicator |
SPEAKER_INFO | VARCHAR | Speaker Information | configuration/entityTypes/HCP/attributes/SpeakerInfo |
DEGREE | VARCHAR | Degree Information | configuration/entityTypes/HCP/attributes/Degree |
PRESENT_EMPLOYMENT | VARCHAR | Present Employment | configuration/entityTypes/HCP/attributes/PresentEmployment | PE_CD
EMPLOYMENT_TYPE_CODE | VARCHAR | Employment Type Code | configuration/entityTypes/HCP/attributes/EmploymentTypeCode |
EMPLOYMENT_TYPE_DESC | VARCHAR | Employment Type Description | configuration/entityTypes/HCP/attributes/EmploymentTypeDesc |
TYPE_OF_PRACTICE | VARCHAR | Type Of Practice | configuration/entityTypes/HCP/attributes/TypeOfPractice | TOP_CD
TYPE_OF_PRACTICE_DESC | VARCHAR | Type Of Practice Description | configuration/entityTypes/HCP/attributes/TypeOfPracticeDesc |
SCHOOL_SEQ_NUMBER | VARCHAR | School Sequence Number | configuration/entityTypes/HCP/attributes/SchoolSeqNumber |
MRM_DELETE_FLAG | BOOLEAN | MRM Delete Flag | configuration/entityTypes/HCP/attributes/MRMDeleteFlag |
MRM_DELETE_DATE | DATE | MRM Delete Date | configuration/entityTypes/HCP/attributes/MRMDeleteDate |
CNCY_DATE | DATE | CNCY Date | configuration/entityTypes/HCP/attributes/CNCYDate |
AMA_HOSPITAL | VARCHAR | AMA Hospital Info | configuration/entityTypes/HCP/attributes/AMAHospital |
AMA_HOSPITAL_DESC | VARCHAR | AMA Hospital Desc | configuration/entityTypes/HCP/attributes/AMAHospitalDesc |
PRACTISE_AT_HOSPITAL | VARCHAR | Practise At Hospital | configuration/entityTypes/HCP/attributes/PractiseAtHospital |
SEGMENT_ID | VARCHAR | Segment ID | configuration/entityTypes/HCP/attributes/SegmentID |
SEGMENT_DESC | VARCHAR | Segment Desc | configuration/entityTypes/HCP/attributes/SegmentDesc |
DCR_STATUS | VARCHAR | Status of HCP profile | configuration/entityTypes/HCP/attributes/DCRStatus | DCRStatus
PREFERRED_LANGUAGE | VARCHAR | Language preference | configuration/entityTypes/HCP/attributes/PreferredLanguage |
SOURCE_TYPE | VARCHAR | Type of the source | configuration/entityTypes/HCP/attributes/SourceType |
STATE_UPDATE_DATE | DATE | Update date of state | configuration/entityTypes/HCP/attributes/StateUpdateDate |
SOURCE_UPDATE_DATE | DATE | Update date at source | configuration/entityTypes/HCP/attributes/SourceUpdateDate |
COMMENTERS | VARCHAR | Commenters | configuration/entityTypes/HCP/attributes/Commenters |
IMAGE_GALLERY | VARCHAR | | configuration/entityTypes/HCP/attributes/ImageGallery |
BIRTH_CITY | VARCHAR | Birth City | configuration/entityTypes/HCP/attributes/BirthCity |
BIRTH_STATE | VARCHAR | Birth State | configuration/entityTypes/HCP/attributes/BirthState | State
BIRTH_COUNTRY | VARCHAR | Birth Country | configuration/entityTypes/HCP/attributes/BirthCountry | Country
D_O_B | DATE | Date of Birth | configuration/entityTypes/HCP/attributes/DoB |
ORIGINAL_SOURCE_NAME | VARCHAR | Original Source Name | configuration/entityTypes/HCP/attributes/OriginalSourceName |
SOURCE_MATCH_CATEGORY | VARCHAR | Source Match Category | configuration/entityTypes/HCP/attributes/SourceMatchCategory |

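As the HCP table shows, view column names are derived from the Reltio attribute names by upper snake-casing the CamelCase name (SpeakerLastBriefingDate becomes SPEAKER_LAST_BRIEFING_DATE, COMPANYCustID becomes COMPANY_CUST_ID). A rough sketch of that convention, assuming the regular cases only; a few short attributes (YoB, DoB, DoD) are split letter by letter in the views (Y_O_B) and are not covered by this simple rule:

```python
import re

def column_name(attr: str) -> str:
    """Derive the view column name from a CamelCase Reltio attribute name.

    Note: irregular short attributes such as YoB (-> Y_O_B in the views)
    are NOT handled by this sketch.
    """
    # break between a lowercase/digit and the following uppercase letter
    s = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", attr)
    # break before the last capital of an acronym run (MRMDelete -> MRM_Delete)
    s = re.sub(r"(?<=[A-Z])(?=[A-Z][a-z])", "_", s)
    return s.upper()
```

This mirrors the mapping visible in the rows above, e.g. MRMDeleteFlag to MRM_DELETE_FLAG.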
ALTERNATE_NAME

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
ALTERNATE_NAME_URI | VARCHAR | Generated Key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NAME_TYPE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/NameTypeCode | HCPAlternateNameType
FULL_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/FullName |
FIRST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/FirstName |
MIDDLE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleName |
LAST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/LastName |
VERSION | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/Version |

ADDRESSES

Column | Type | Description | Reltio Attribute URI | LOV Name
------ | ---- | ----------- | -------------------- | --------
ADDRESSES_URI | VARCHAR | Generated Key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
ADDRESS_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressType, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressType, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressType | AddressType
COMPANY_ADDRESS_ID | VARCHAR | COMPANY Address ID | configuration/entityTypes/HCP/attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes/MCO/attributes/Addresses/attributes/COMPANYAddressID |
ADDRESS_LINE1 | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine1, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine1 |
ADDRESS_LINE2 | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine2, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine2 |
ADDRESS_LINE3 | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine3, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine3, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine3 |
ADDRESS_LINE4 | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine4, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine4, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine4 |
CITY | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/City, configuration/entityTypes/HCO/attributes/Addresses/attributes/City, configuration/entityTypes/MCO/attributes/Addresses/attributes/City |
STATE_PROVINCE | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvince, configuration/entityTypes/HCO/attributes/Addresses/attributes/StateProvince, configuration/entityTypes/MCO/attributes/Addresses/attributes/StateProvince | State
COUNTRY_ADDRESSES | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/Country, configuration/entityTypes/HCO/attributes/Addresses/attributes/Country, configuration/entityTypes/MCO/attributes/Addresses/attributes/Country | Country
PO_BOX | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/POBox, configuration/entityTypes/HCO/attributes/Addresses/attributes/POBox, configuration/entityTypes/MCO/attributes/Addresses/attributes/POBox |
ZIP5 | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5, configuration/entityTypes/HCO/attributes/Addresses/attributes/Zip5, configuration/entityTypes/MCO/attributes/Addresses/attributes/Zip5 |
ZIP4 | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip4, configuration/entityTypes/HCO/attributes/Addresses/attributes/Zip4, configuration/entityTypes/MCO/attributes/Addresses/attributes/Zip4 |
STREET | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/Street, configuration/entityTypes/HCO/attributes/Addresses/attributes/Street, configuration/entityTypes/MCO/attributes/Addresses/attributes/Street |
POSTAL_CODE_EXTENSION | VARCHAR | Postal Code Extension | configuration/entityTypes/HCP/attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes/HCO/attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes/MCO/attributes/Addresses/attributes/PostalCodeExtension |
ADDRESS_USAGE_TAG | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressUsageTag, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressUsageTag | AddressUsageTag
CNCY_DATE | DATE | CNCY Date | configuration/entityTypes/HCP/attributes/Addresses/attributes/CNCYDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/CNCYDate |
CBSA_CODE | VARCHAR | Core Based Statistical Area | configuration/entityTypes/HCP/attributes/Addresses/attributes/CBSACode, configuration/entityTypes/HCO/attributes/Addresses/attributes/CBSACode, configuration/entityTypes/MCO/attributes/Addresses/attributes/CBSACode |
PREMISE | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/Premise, configuration/entityTypes/HCO/attributes/Addresses/attributes/Premise |
ISO3166-2 | VARCHAR | This field holds the ISO 3166 2-character country code. | configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-2 |
ISO3166-3 | VARCHAR | This field holds the ISO 3166 3-character country code. | configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-3 |
ISO3166-N | VARCHAR | This field holds the ISO 3166 N-digit numeric country code. | configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-N |
LATITUDE | VARCHAR | Latitude | configuration/entityTypes/HCP/attributes/Addresses/attributes/Latitude, configuration/entityTypes/HCO/attributes/Addresses/attributes/Latitude, configuration/entityTypes/MCO/attributes/Addresses/attributes/Latitude |
LONGITUDE | VARCHAR | Longitude | configuration/entityTypes/HCP/attributes/Addresses/attributes/Longitude, configuration/entityTypes/HCO/attributes/Addresses/attributes/Longitude, configuration/entityTypes/MCO/attributes/Addresses/attributes/Longitude |
GEO_ACCURACY | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/GeoAccuracy, configuration/entityTypes/HCO/attributes/Addresses/attributes/GeoAccuracy, configuration/entityTypes/MCO/attributes/Addresses/attributes/GeoAccuracy |
VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus, configuration/entityTypes/HCO/attributes/Addresses/attributes/VerificationStatus, configuration/entityTypes/MCO/attributes/Addresses/attributes/VerificationStatus |
VERIFICATION_STATUS_DETAILS | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes/HCO/attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes/MCO/attributes/Addresses/attributes/VerificationStatusDetails |
AVC | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/AVC, configuration/entityTypes/HCO/attributes/Addresses/attributes/AVC, configuration/entityTypes/MCO/attributes/Addresses/attributes/AVC |
SETTING_TYPE | VARCHAR | Setting Type | configuration/entityTypes/HCP/attributes/Addresses/attributes/SettingType, configuration/entityTypes/HCO/attributes/Addresses/attributes/SettingType |
ADDRESS_SETTING_TYPE_DESC | VARCHAR | Address Setting Type Desc | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressSettingTypeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressSettingTypeDesc |
CATEGORY | VARCHAR | Category | configuration/entityTypes/HCP/attributes/Addresses/attributes/Category, configuration/entityTypes/HCO/attributes/Addresses/attributes/Category | AddressCategory
FIPS_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCode |
FIPS_COUNTY_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCountyCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCountyCode |
FIPS_COUNTY_CODE_DESC | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCountyCodeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCountyCodeDesc |
FIPS_STATE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSStateCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSStateCode |
FIPS_STATE_CODE_DESC | VARCHAR | | configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSStateCodeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSStateCodeDesc |
CARE_OF | VARCHAR | Care Of | configuration/entityTypes/HCP/attributes/Addresses/attributes/CareOf, configuration/entityTypes/HCO/attributes/Addresses/attributes/CareOf |
MAIN_PHYSICAL_OFFICE | VARCHAR | Main Physical Office | configuration/entityTypes/HCP/attributes/Addresses/attributes/MainPhysicalOffice, configuration/entityTypes/HCO/attributes/Addresses/attributes/MainPhysicalOffice |
DELIVERABILITY_CONFIDENCE

VARCHAR

Deliverability Confidence

configuration/entityTypes/HCP/attributes/Addresses/attributes/DeliverabilityConfidence, configuration/entityTypes/HCO/attributes/Addresses/attributes/DeliverabilityConfidence


APPLID

VARCHAR

APPLID

configuration/entityTypes/HCP/attributes/Addresses/attributes/APPLID, configuration/entityTypes/HCO/attributes/Addresses/attributes/APPLID


SMPLDLV_IND

BOOLEAN

SMPLDLV Ind

configuration/entityTypes/HCP/attributes/Addresses/attributes/SMPLDLVInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/SMPLDLVInd


STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Addresses/attributes/Status, configuration/entityTypes/HCO/attributes/Addresses/attributes/Status

AddressStatus

STARTER_ELIGIBLE_FLAG

VARCHAR

StarterEligibleFlag

configuration/entityTypes/HCP/attributes/Addresses/attributes/StarterEligibleFlag, configuration/entityTypes/HCO/attributes/Addresses/attributes/StarterEligibleFlag


DEA_FLAG

BOOLEAN

DEA Flag

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEAFlag, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEAFlag


USAGE_TYPE

VARCHAR

Usage Type

configuration/entityTypes/HCP/attributes/Addresses/attributes/UsageType, configuration/entityTypes/HCO/attributes/Addresses/attributes/UsageType


PRIMARY

BOOLEAN

Primary Address

configuration/entityTypes/HCP/attributes/Addresses/attributes/Primary, configuration/entityTypes/HCO/attributes/Addresses/attributes/Primary


EFFECTIVE_START_DATE

DATE

Effective Start Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/EffectiveStartDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/EffectiveStartDate


EFFECTIVE_END_DATE

DATE

Effective End Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/EffectiveEndDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/EffectiveEndDate


ADDRESS_RANK

VARCHAR

Address Rank for priority

configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressRank, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressRank


SOURCE_SEGMENT_CODE

VARCHAR

Source Segment Code

configuration/entityTypes/HCP/attributes/Addresses/attributes/SourceSegmentCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/SourceSegmentCode


SEGMENT1

VARCHAR

Segment1

configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment1, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment1


SEGMENT2

VARCHAR

Segment2

configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment2, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment2


SEGMENT3

VARCHAR

Segment3

configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment3, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment3


ADDRESS_IND

BOOLEAN

AddressInd

configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressInd


SCRIPT_UTILIZATION_WEIGHT

VARCHAR

Script Utilization Weight

configuration/entityTypes/HCP/attributes/Addresses/attributes/ScriptUtilizationWeight, configuration/entityTypes/HCO/attributes/Addresses/attributes/ScriptUtilizationWeight


BUSINESS_ACTIVITY_CODE

VARCHAR

Business Activity Code

configuration/entityTypes/HCP/attributes/Addresses/attributes/BusinessActivityCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/BusinessActivityCode


BUSINESS_ACTIVITY_DESC

VARCHAR

Business Activity Desc

configuration/entityTypes/HCP/attributes/Addresses/attributes/BusinessActivityDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/BusinessActivityDesc


PRACTICE_LOCATION_RANK

VARCHAR

Practice Location Rank

configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationRank

PracticeLocationRank

PRACTICE_LOCATION_CONFIDENCE_IND

VARCHAR

Practice Location Confidence Ind

configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationConfidenceInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationConfidenceInd


PRACTICE_LOCATION_CONFIDENCE_DESC

VARCHAR

Practice Location Confidence Desc

configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationConfidenceDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationConfidenceDesc


SINGLE_ADDRESS_IND

BOOLEAN

Single Address Ind

configuration/entityTypes/HCP/attributes/Addresses/attributes/SingleAddressInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/SingleAddressInd


SUB_ADMINISTRATIVE_AREA

VARCHAR

This field holds the smallest geographic data element within a country. For instance, USA County.

configuration/entityTypes/HCP/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes/HCO/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes/MCO/attributes/Addresses/attributes/SubAdministrativeArea


SUPER_ADMINISTRATIVE_AREA

VARCHAR

This field holds the largest geographic data element within a country.

configuration/entityTypes/HCO/attributes/Addresses/attributes/SuperAdministrativeArea


ADMINISTRATIVE_AREA

VARCHAR

This field holds the most common geographic data element within a country. For instance, USA State and Canadian Province.

configuration/entityTypes/HCO/attributes/Addresses/attributes/AdministrativeArea


UNIT_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/UnitName


UNIT_VALUE

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/UnitValue


FLOOR

VARCHAR

N/A

configuration/entityTypes/HCO/attributes/Addresses/attributes/Floor


BUILDING

VARCHAR

N/A

configuration/entityTypes/HCO/attributes/Addresses/attributes/Building


SUB_BUILDING

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/SubBuilding


NEIGHBORHOOD

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/Neighborhood


PREMISE_NUMBER

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/PremiseNumber


ADDRESSES_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESSES_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceRank


SOURCE_ADDRESS_ID

VARCHAR

Source Address ID

configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceAddressID


LEGACY_IQVIA_ADDRESS_ID

VARCHAR

Legacy address id

configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressID


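Nested sub-tables such as ADDRESSES_SOURCE correspond to nested attributes in the Reltio entity payload (e.g. Addresses → Source → SourceName). As a minimal sketch, assuming the standard Reltio shape where each attribute maps to a list of `{"value": ...}` items (the sample payload and helper below are illustrative, not the actual HUB schema):

```python
# Hypothetical sketch: collect every value at a nested attribute path like
# ["Addresses", "Source", "SourceName"] in a Reltio-style entity payload.
def get_nested_values(attributes, path):
    head, *rest = path
    results = []
    for item in attributes.get(head, []):
        value = item.get("value")
        if not rest:
            results.append(value)            # leaf attribute value
        elif isinstance(value, dict):
            results.extend(get_nested_values(value, rest))
    return results

entity = {
    "attributes": {
        "Addresses": [
            {"value": {
                "AddressLine1": [{"value": "1 Main St"}],
                "Source": [{"value": {
                    "SourceName": [{"value": "ONEKEY"}],
                    "SourceRank": [{"value": "1"}],
                }}],
            }}
        ]
    }
}

print(get_nested_values(entity["attributes"], ["Addresses", "Source", "SourceName"]))
```

Each row of the flattened SOURCE sub-table would carry one such nested value, keyed back to its parent via the generated ADDRESSES_URI/SOURCE_URI columns.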
ADDRESSES_DEA

DEA

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESSES_URI

VARCHAR

Generated Key



DEA_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



NUMBER

VARCHAR

Number

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Number, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/Number


EXPIRATION_DATE

DATE

Expiration Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/ExpirationDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/ExpirationDate


STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/Status

AddressDEAStatus

STATUS_DETAIL

VARCHAR

Deactivation Reason Code

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDetail, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDetail

HCPDEAStatusDetail

DRUG_SCHEDULE

VARCHAR

Drug Schedule

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DrugSchedule

App-LSCustomer360DEADrugSchedule

EFFECTIVE_DATE

DATE

Effective Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/EffectiveDate


STATUS_DATE

DATE

Status Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDate


DEA_BUSINESS_ACTIVITY

VARCHAR

Business Activity

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity

DEABusinessActivity

SUB_BUSINESS_ACTIVITY

VARCHAR

Sub Business Activity

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity

DEABusinessSubActivity

BUSINESS_ACTIVITY_DESC

VARCHAR

Business Activity Desc

configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/BusinessActivityDesc


SUB_BUSINESS_ACTIVITY_DESC

VARCHAR

Sub Business Activity Desc

configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivityDesc


ADDRESSES_OFFICE_INFORMATION

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESSES_URI

VARCHAR

Generated Key



OFFICE_INFORMATION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



BEST_TIMES

VARCHAR

Best Times

configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/BestTimes, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/BestTimes


APPT_REQUIRED

BOOLEAN

Appointment Required or not

configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequired, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequired


OFFICE_NOTES

VARCHAR

Office Notes

configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotes, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotes


COMPLIANCE

Compliance

Column

Type

Description

Reltio Attribute URI

LOV Name

COMPLIANCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



GO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/GOStatus

HCPComplianceGOStatus

PIGO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/PIGOStatus

HCPPIGOStatus

NIPPIGO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/NIPPIGOStatus

HCPNIPPIGOStatus

PRIMARY_PIGO_RATIONALE

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/PrimaryPIGORationale

HCPPIGORationale

SECONDARY_PIGO_RATIONALE

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/SecondaryPIGORationale

HCPPIGORationale

PIGOSME_REVIEW

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/PIGOSMEReview

HCPPIGOSMEReview

GSQ_DATE

DATE


configuration/entityTypes/HCP/attributes/Compliance/attributes/GSQDate


DO_NOT_USE

BOOLEAN


configuration/entityTypes/HCP/attributes/Compliance/attributes/DoNotUse


CHANGE_DATE

DATE


configuration/entityTypes/HCP/attributes/Compliance/attributes/ChangeDate


CHANGE_REASON

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/ChangeReason


MAPPHCP_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/MAPPHCPStatus


MAPP_MAIL

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/MAPPMail


DISCLOSURE

Disclosure

Column

Type

Description

Reltio Attribute URI

LOV Name

DISCLOSURE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



BENEFIT_CATEGORY

VARCHAR

Benefit Category

configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitCategory

HCPBenefitCategory

BENEFIT_TITLE

VARCHAR

Benefit Title

configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitTitle

HCPBenefitTitle

BENEFIT_QUALITY

VARCHAR

Benefit Quality

configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitQuality

HCPBenefitQuality

BENEFIT_SPECIALTY

VARCHAR

Benefit Specialty

configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitSpecialty

HCPBenefitSpecialty

CONTRACT_CLASSIFICATION

VARCHAR

Contract Classification

configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassification


CONTRACT_CLASSIFICATION_DATE

DATE

Contract Classification Date

configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationDate


MILITARY

BOOLEAN

Military

configuration/entityTypes/HCP/attributes/Disclosure/attributes/Military


CIVIL_SERVANT

BOOLEAN

Civil Servant

configuration/entityTypes/HCP/attributes/Disclosure/attributes/CivilServant


CREDENTIAL

Credential Information

Column

Type

Description

Reltio Attribute URI

LOV Name

CREDENTIAL_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



CREDENTIAL

VARCHAR


configuration/entityTypes/HCP/attributes/Credential/attributes/Credential

Credential

OTHER_CDTL_TXT

VARCHAR

Other Credential Text

configuration/entityTypes/HCP/attributes/Credential/attributes/OtherCdtlTxt


PRIMARY_FLAG

BOOLEAN

Primary Flag

configuration/entityTypes/HCP/attributes/Credential/attributes/PrimaryFlag


EFFECTIVE_END_DATE

DATE

Effective End Date

configuration/entityTypes/HCP/attributes/Credential/attributes/EffectiveEndDate


PROFESSION

Profession Information

Column

Type

Description

Reltio Attribute URI

LOV Name

PROFESSION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



PROFESSION

VARCHAR


configuration/entityTypes/HCP/attributes/Profession/attributes/Profession

HCPSpecialtyProfession

PROFESSION_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

PROFESSION_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Profession/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/Profession/attributes/Source/attributes/SourceRank


SPECIALITIES

Column

Type

Description

Reltio Attribute URI

LOV Name

SPECIALITIES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SPECIALTY

VARCHAR

Specialty of the entity, e.g., Adult Congenital Heart Disease

configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

HCPSpecialty,App-LSCustomer360Specialty

PROFESSION

VARCHAR


configuration/entityTypes/HCP/attributes/Specialities/attributes/Profession

HCPSpecialtyProfession

PRIMARY

BOOLEAN

Whether Primary Specialty or not

configuration/entityTypes/HCP/attributes/Specialities/attributes/Primary, configuration/entityTypes/HCO/attributes/Specialities/attributes/Primary


RANK

VARCHAR

Rank

configuration/entityTypes/HCP/attributes/Specialities/attributes/Rank


TRUST_INDICATOR

VARCHAR


configuration/entityTypes/HCP/attributes/Specialities/attributes/TrustIndicator


DESC

VARCHAR

DO NOT USE THIS ATTRIBUTE - will be deprecated

configuration/entityTypes/HCP/attributes/Specialities/attributes/Desc


SPECIALTY_TYPE

VARCHAR

Type of Specialty, e.g. Secondary

configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyType

App-LSCustomer360SpecialtyType

GROUP

VARCHAR

Group the Specialty belongs to

configuration/entityTypes/HCO/attributes/Specialities/attributes/Group


SPECIALTY_DETAIL

VARCHAR

Description of Specialty

configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyDetail


SPECIALITIES_SOURCE

Column

Type

Description

Reltio Attribute URI

LOV Name

SPECIALITIES_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Specialities/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

Rank

configuration/entityTypes/HCP/attributes/Specialities/attributes/Source/attributes/SourceRank


SUB_SPECIALITIES

Column

Type

Description

Reltio Attribute URI

LOV Name

SUB_SPECIALITIES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SPECIALTY_CODE

VARCHAR

Sub specialty code of the entity

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/SpecialtyCode


SUB_SPECIALTY

VARCHAR

Sub specialty of the entity

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/SubSpecialty


PROFESSION_CODE

VARCHAR

Profession Code

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/ProfessionCode


SUB_SPECIALITIES_SOURCE

Column

Type

Description

Reltio Attribute URI

LOV Name

SUB_SPECIALITIES_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

Rank

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source/attributes/SourceRank


EDUCATION

Column

Type

Description

Reltio Attribute URI

LOV Name

EDUCATION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SCHOOL_CD

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/SchoolCD


SCHOOL_NAME

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/SchoolName


YEAR_OF_GRADUATION

VARCHAR

DO NOT USE THIS ATTRIBUTE - will be deprecated

configuration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduation


STATE

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/State


COUNTRY_EDUCATION

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/Country


TYPE

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/Type


GPA

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/GPA


GRADUATED

BOOLEAN

DO NOT USE THIS ATTRIBUTE - will be deprecated

configuration/entityTypes/HCP/attributes/Education/attributes/Graduated


EMAIL

Column

Type

Description

Reltio Attribute URI

LOV Name

EMAIL_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR

Type of Email, e.g., Home

configuration/entityTypes/HCP/attributes/Email/attributes/Type, configuration/entityTypes/HCO/attributes/Email/attributes/Type, configuration/entityTypes/MCO/attributes/Email/attributes/Type

EmailType

EMAIL

VARCHAR

Email address

configuration/entityTypes/HCP/attributes/Email/attributes/Email, configuration/entityTypes/HCO/attributes/Email/attributes/Email, configuration/entityTypes/MCO/attributes/Email/attributes/Email


RANK

VARCHAR

Rank used to assign priority to an Email

configuration/entityTypes/HCP/attributes/Email/attributes/Rank, configuration/entityTypes/HCO/attributes/Email/attributes/Rank, configuration/entityTypes/MCO/attributes/Email/attributes/Rank


EMAIL_USAGE_TAG

VARCHAR


configuration/entityTypes/HCP/attributes/Email/attributes/EmailUsageTag, configuration/entityTypes/HCO/attributes/Email/attributes/EmailUsageTag, configuration/entityTypes/MCO/attributes/Email/attributes/EmailUsageTag

EmailUsageTag

USAGE_TYPE

VARCHAR

Usage Type of an Email

configuration/entityTypes/HCP/attributes/Email/attributes/UsageType, configuration/entityTypes/HCO/attributes/Email/attributes/UsageType, configuration/entityTypes/MCO/attributes/Email/attributes/UsageType


DOMAIN

VARCHAR


configuration/entityTypes/HCP/attributes/Email/attributes/Domain, configuration/entityTypes/HCO/attributes/Email/attributes/Domain, configuration/entityTypes/MCO/attributes/Email/attributes/Domain


VALIDATION_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/MCO/attributes/Email/attributes/ValidationStatus


DOMAIN_TYPE

VARCHAR

Type of Email domain

configuration/entityTypes/HCO/attributes/Email/attributes/DomainType, configuration/entityTypes/MCO/attributes/Email/attributes/DomainType


USERNAME

VARCHAR

Username part of the Email address

configuration/entityTypes/HCO/attributes/Email/attributes/Username, configuration/entityTypes/MCO/attributes/Email/attributes/Username


EMAIL_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

EMAIL_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Email/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Email/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Email/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/Email/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Email/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Email/attributes/Source/attributes/SourceRank


IDENTIFIERS

Column

Type

Description

Reltio Attribute URI

LOV Name

IDENTIFIERS_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR

Identifier Type

configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Type, configuration/entityTypes/MCO/attributes/Identifiers/attributes/Type

HCPIdentifierType,HCOIdentifierType

ID

VARCHAR

Identifier ID

configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ID, configuration/entityTypes/MCO/attributes/Identifiers/attributes/ID


EXTL_DATE

DATE

External Date

configuration/entityTypes/HCP/attributes/Identifiers/attributes/EXTLDate


ACTIVATION_DATE

DATE

Activation Date

configuration/entityTypes/HCP/attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ActivationDate


REFER_BACK_ID_STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Identifiers/attributes/ReferBackIDStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ReferBackIDStatus


DEACTIVATION_DATE

DATE

Identifier Deactivation Date

configuration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationDate


STATE

VARCHAR

Identifier State

configuration/entityTypes/HCP/attributes/Identifiers/attributes/State

State

SOURCE_NAME

VARCHAR

Name of the Identifier source

configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceName, configuration/entityTypes/HCO/attributes/Identifiers/attributes/SourceName, configuration/entityTypes/MCO/attributes/Identifiers/attributes/SourceName


TRUST

VARCHAR

Trust

configuration/entityTypes/HCP/attributes/Identifiers/attributes/Trust, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Trust, configuration/entityTypes/MCO/attributes/Identifiers/attributes/Trust


SOURCE_START_DATE

DATE

Start date at source

configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceStartDate


SOURCE_UPDATE_DATE

DATE

Update date at source

configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceUpdateDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/SourceUpdateDate


STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Identifiers/attributes/Status, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Status

HCPIdentifierStatus,HCOIdentifierStatus

STATUS_DETAIL

VARCHAR

Identifier Deactivation Reason Code

configuration/entityTypes/HCP/attributes/Identifiers/attributes/StatusDetail, configuration/entityTypes/HCO/attributes/Identifiers/attributes/StatusDetail

HCPIdentifierStatusDetail,HCOIdentifierStatusDetail

DRUG_SCHEDULE

VARCHAR

Drug Schedule

configuration/entityTypes/HCP/attributes/Identifiers/attributes/DrugSchedule


TAXONOMY

VARCHAR


configuration/entityTypes/HCP/attributes/Identifiers/attributes/Taxonomy


SEQUENCE_NUMBER

VARCHAR


configuration/entityTypes/HCP/attributes/Identifiers/attributes/SequenceNumber


MCRPE_CODE

VARCHAR


configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPECode


MCRPE_START_DATE

DATE


configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEStartDate


MCRPE_END_DATE

DATE


configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEEndDate


MCRPE_IS_OPTED

BOOLEAN


configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEIsOpted


EXPIRATION_DATE

DATE


configuration/entityTypes/HCP/attributes/Identifiers/attributes/ExpirationDate


ORDER

VARCHAR

Order

configuration/entityTypes/HCO/attributes/Identifiers/attributes/Order


REASON

VARCHAR

Reason

configuration/entityTypes/HCO/attributes/Identifiers/attributes/Reason


START_DATE

DATE

Identifier Start Date

configuration/entityTypes/HCO/attributes/Identifiers/attributes/StartDate


END_DATE

DATE

Identifier End Date

configuration/entityTypes/HCO/attributes/Identifiers/attributes/EndDate


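As with the other sub-tables, each IDENTIFIERS row repeats the entity-level columns (ENTITY_URI, COUNTRY, ACTIVE, ENTITY_TYPE) alongside a generated key and the identifier attributes. A minimal flattening sketch, assuming an illustrative input shape (field names mirror the table; the payload layout is our assumption, not the HUB's actual feed format):

```python
# Hypothetical sketch: flatten nested Identifiers attributes into rows shaped
# like the IDENTIFIERS table above. Entity-level columns repeat on every row.
def flatten_identifiers(entity):
    rows = []
    for ident in entity["attributes"].get("Identifiers", []):
        v = ident["value"]
        rows.append({
            "IDENTIFIERS_URI": ident["uri"],          # generated key
            "ENTITY_URI": entity["uri"],
            "COUNTRY": entity["country"],
            "ACTIVE": entity["active"],
            "ENTITY_TYPE": entity["type"],
            "TYPE": v.get("Type", [{}])[0].get("value"),
            "ID": v.get("ID", [{}])[0].get("value"),
        })
    return rows

entity = {
    "uri": "entities/0001",
    "country": "US",
    "active": "Y",
    "type": "configuration/entityTypes/HCP",
    "attributes": {
        "Identifiers": [
            {"uri": "entities/0001/attributes/Identifiers/1",
             "value": {"Type": [{"value": "NPI"}],
                       "ID": [{"value": "1234567890"}]}}
        ]
    },
}

for row in flatten_identifiers(entity):
    print(row["TYPE"], row["ID"])
```

The same pattern applies to the other one-to-many tables in this dictionary (LICENSE, EDUCATION, EMAIL, and so on).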
DATA_QUALITY

Column

Type

Description

Reltio Attribute URI

LOV Name

DATA_QUALITY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



DQ_DESCRIPTION

VARCHAR

DQ Description

configuration/entityTypes/HCP/attributes/DataQuality/attributes/DQDescription, configuration/entityTypes/HCO/attributes/DataQuality/attributes/DQDescription, configuration/entityTypes/MCO/attributes/DataQuality/attributes/DQDescription

DQDescription
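The Reltio attribute URIs listed throughout these tables follow a regular pattern: `configuration/entityTypes/<Entity>/attributes/.../attributes/<Name>`. A minimal sketch of splitting such a URI into its entity type and dotted attribute path (the function name is illustrative, not part of the HUB code):

```python
def parse_reltio_uri(uri: str):
    """Split a Reltio attribute URI into (entity_type, attribute_path).

    Assumes the 'configuration/entityTypes/<Entity>/attributes/...' layout
    used throughout this data dictionary.
    """
    parts = uri.split("/")
    entity_type = parts[2]  # e.g. 'HCP'
    # Every second segment after the entity is the literal keyword
    # 'attributes'; keep only the real attribute names.
    attr_path = [p for i, p in enumerate(parts[3:]) if i % 2 == 1]
    return entity_type, ".".join(attr_path)

uri = "configuration/entityTypes/HCP/attributes/License/attributes/Status"
print(parse_reltio_uri(uri))  # ('HCP', 'License.Status')
```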

LICENSE

Column

Type

Description

Reltio Attribute URI

LOV Name

LICENSE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



CATEGORY

VARCHAR

Category License belongs to, e.g., International

configuration/entityTypes/HCP/attributes/License/attributes/Category


PROFESSION_CODE

VARCHAR

Profession Information

configuration/entityTypes/HCP/attributes/License/attributes/ProfessionCode

HCPProfession

NUMBER

VARCHAR

State License Number. A unique license number is listed for each license the physician holds. There is no standard format syntax. Format examples: 18986, 4301079019, BX1464089. There is also no limit to the number of licenses a physician can hold in a state. Example: A physician can have an inactive resident license plus unlimited active licenses. Residents can have as many as four licenses, since some states issue licenses every year.

configuration/entityTypes/HCP/attributes/License/attributes/Number, configuration/entityTypes/HCO/attributes/License/attributes/Number


REG_AUTH_ID

VARCHAR

RegAuthID

configuration/entityTypes/HCP/attributes/License/attributes/RegAuthID


STATE_BOARD

VARCHAR

State Board

configuration/entityTypes/HCP/attributes/License/attributes/StateBoard


STATE_BOARD_NAME

VARCHAR

State Board Name

configuration/entityTypes/HCP/attributes/License/attributes/StateBoardName


STATE

VARCHAR

State License State. Two character field. USPS standard abbreviations.

configuration/entityTypes/HCP/attributes/License/attributes/State, configuration/entityTypes/HCO/attributes/License/attributes/State


TYPE

VARCHAR

State License Type. U = Unlimited there is no restriction on the physician to practice medicine; L = Limited implies restrictions of some sort. For example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license in state or private hospitals or other settings; T = Temporary issued to a physician temporarily practicing in an underserved area outside his/her state of licensure. Also granted between board meetings when new licenses are issued. Time span for a temporary license varies from state to state. Temporary licenses typically expire 6-9 months from the date they are issued; R = Resident License granted to a physician in graduate medical education (e.g., residency training).

configuration/entityTypes/HCP/attributes/License/attributes/Type

ST_LIC_TYPE

STATUS

VARCHAR

State License Status. A = Active. Physician is licensed to practice within the state; I = Inactive. If the physician has not reregistered a state license OR if the license has been suspended or revoked by the state board; X = unknown. If the state has not provided current information Note: Some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in another state.

configuration/entityTypes/HCP/attributes/License/attributes/Status

HCPLicenseStatus
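The TYPE and STATUS code values spelled out in the two descriptions above lend themselves to small lookup tables. A sketch with the mappings transcribed from those descriptions (names and the helper are illustrative, not part of the HUB code):

```python
# State License Type codes, as described in the TYPE column.
LICENSE_TYPE = {
    "U": "Unlimited",   # no restriction on the physician's practice
    "L": "Limited",     # restricted (county, hospital, supervision, ...)
    "T": "Temporary",   # typically expires 6-9 months after issue
    "R": "Resident",    # granted during graduate medical education
}

# State License Status codes, as described in the STATUS column.
LICENSE_STATUS = {
    "A": "Active",      # licensed to practice within the state
    "I": "Inactive",    # not reregistered, suspended, or revoked
    "X": "Unknown",     # state has not provided current information
}

def describe_license(type_code: str, status_code: str) -> str:
    """Human-readable summary of a (TYPE, STATUS) code pair."""
    return (f"{LICENSE_TYPE.get(type_code, 'Unknown type')} / "
            f"{LICENSE_STATUS.get(status_code, 'Unknown status')}")

print(describe_license("U", "A"))  # Unlimited / Active
```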

STATUS_DETAIL

VARCHAR

Deactivation Reason Code

configuration/entityTypes/HCP/attributes/License/attributes/StatusDetail

HCPLicenseStatusDetail

TRUST

VARCHAR

Trust flag

configuration/entityTypes/HCP/attributes/License/attributes/Trust


DEACTIVATION_REASON_CODE

VARCHAR

Deactivation Reason Code

configuration/entityTypes/HCP/attributes/License/attributes/DeactivationReasonCode

HCPLicenseDeactivationReasonCode

EXPIRATION_DATE

DATE

License Expiration Date

configuration/entityTypes/HCP/attributes/License/attributes/ExpirationDate


ISSUE_DATE

DATE

State License Issue Date

configuration/entityTypes/HCP/attributes/License/attributes/IssueDate


STATE_LICENSE_PRIVILEGE

VARCHAR

State License Privilege

configuration/entityTypes/HCP/attributes/License/attributes/StateLicensePrivilege


STATE_LICENSE_PRIVILEGE_NAME

VARCHAR

State License Privilege Name

configuration/entityTypes/HCP/attributes/License/attributes/StateLicensePrivilegeName


STATE_LICENSE_STATUS_DATE

DATE

State License Status Date

configuration/entityTypes/HCP/attributes/License/attributes/StateLicenseStatusDate


RANK

VARCHAR

Rank of License

configuration/entityTypes/HCP/attributes/License/attributes/Rank


CERTIFICATION_CODE

VARCHAR

Certification Code

configuration/entityTypes/HCP/attributes/License/attributes/CertificationCode

HCPLicenseCertification

LICENSE_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

LICENSE_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/License/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/License/attributes/Source/attributes/SourceRank
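LICENSE_SOURCE links back to LICENSE through the generated LICENSE_URI key, giving one row per (license, contributing source) pair. A minimal join sketch using an in-memory SQLite database (table and column names are taken from this dictionary; the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE LICENSE (
    LICENSE_URI VARCHAR PRIMARY KEY,
    ENTITY_URI  VARCHAR,
    NUMBER      VARCHAR,
    STATE       VARCHAR
);
CREATE TABLE LICENSE_SOURCE (
    LICENSE_URI VARCHAR,  -- generated key shared with LICENSE
    SOURCE_URI  VARCHAR,
    SOURCE_NAME VARCHAR,
    SOURCE_RANK VARCHAR
);
INSERT INTO LICENSE VALUES ('lic/1', 'entities/abc', '4301079019', 'NY');
INSERT INTO LICENSE_SOURCE VALUES ('lic/1', 'src/1', 'SULE', '1');
""")

# One row per (license, contributing source) pair.
rows = conn.execute("""
    SELECT l.NUMBER, l.STATE, s.SOURCE_NAME, s.SOURCE_RANK
    FROM LICENSE l
    JOIN LICENSE_SOURCE s ON s.LICENSE_URI = l.LICENSE_URI
""").fetchall()
print(rows)  # [('4301079019', 'NY', 'SULE', '1')]
```

The same LICENSE_URI join applies to LICENSE_REGULATORY below, which adds REGULATORY_URI as a second generated key.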


LICENSE_REGULATORY

License Regulatory

Column

Type

Description

Reltio Attribute URI

LOV Name

LICENSE_URI

VARCHAR

Generated Key



REGULATORY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



REQ_SAMPL_NON_CTRL

VARCHAR

Req Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ReqSamplNonCtrl


REQ_SAMPL_CTRL

VARCHAR

Req Sampl Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ReqSamplCtrl


RECV_SAMPL_NON_CTRL

VARCHAR

Recv Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/RecvSamplNonCtrl


RECV_SAMPL_CTRL

VARCHAR

Recv Sampl Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/RecvSamplCtrl


DISTR_SAMPL_NON_CTRL

VARCHAR

Distr Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DistrSamplNonCtrl


DISTR_SAMPL_CTRL

VARCHAR

Distr Sampl Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DistrSamplCtrl


SAMP_DRUG_SCHED_I_FLAG

VARCHAR

Samp Drug Sched I Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIFlag


SAMP_DRUG_SCHED_II_FLAG

VARCHAR

Samp Drug Sched II Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIFlag


SAMP_DRUG_SCHED_III_FLAG

VARCHAR

Samp Drug Sched III Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIIFlag


SAMP_DRUG_SCHED_IV_FLAG

VARCHAR

Samp Drug Sched IV Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIVFlag


SAMP_DRUG_SCHED_V_FLAG

VARCHAR

Samp Drug Sched V Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedVFlag


SAMP_DRUG_SCHED_VI_FLAG

VARCHAR

Samp Drug Sched VI Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedVIFlag


PRESCR_NON_CTRL_FLAG

VARCHAR

Prescr Non Ctrl Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrNonCtrlFlag


PRESCR_APP_REQ_NON_CTRL_FLAG

VARCHAR

Prescr App Req Non Ctrl Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrAppReqNonCtrlFlag


PRESCR_CTRL_FLAG

VARCHAR

Prescr Ctrl Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrCtrlFlag


PRESCR_APP_REQ_CTRL_FLAG

VARCHAR

Prescr App Req Ctrl Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrAppReqCtrlFlag


PRESCR_DRUG_SCHED_I_FLAG

VARCHAR

Prescr Drug Sched I Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIFlag


PRESCR_DRUG_SCHED_II_FLAG

VARCHAR

Prescr Drug Sched II Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIFlag


PRESCR_DRUG_SCHED_III_FLAG

VARCHAR

Prescr Drug Sched III Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlag


PRESCR_DRUG_SCHED_IV_FLAG

VARCHAR

Prescr Drug Sched IV Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIVFlag


PRESCR_DRUG_SCHED_V_FLAG

VARCHAR

Prescr Drug Sched V Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVFlag


PRESCR_DRUG_SCHED_VI_FLAG

VARCHAR

Prescr Drug Sched VI Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVIFlag


SUPERVISORY_REL_CD_NON_CTRL

VARCHAR

Supervisory Rel Cd Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrl


SUPERVISORY_REL_CD_CTRL

VARCHAR

Supervisory Rel Cd Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdCtrl


COLLABORATIVE_NONCTRL

VARCHAR

Collaborative Non ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/CollaborativeNonctrl


COLLABORATIVE_CTRL

VARCHAR

Collaborative ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/CollaborativeCtrl


INCLUSIONARY

VARCHAR

Inclusionary

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/Inclusionary


EXCLUSIONARY

VARCHAR

Exclusionary

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/Exclusionary


DELEGATION_NON_CTRL

VARCHAR

Delegation Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DelegationNonCtrl


DELEGATION_CTRL

VARCHAR

Delegation Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DelegationCtrl


CSR

Column

Type

Description

Reltio Attribute URI

LOV Name

CSR_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



PROFESSION_CODE

VARCHAR

Profession Information

configuration/entityTypes/HCP/attributes/CSR/attributes/ProfessionCode

HCPProfession

AUTHORIZATION_NUMBER

VARCHAR

Authorization number of CSR

configuration/entityTypes/HCP/attributes/CSR/attributes/AuthorizationNumber


REG_AUTH_ID

VARCHAR

RegAuthID

configuration/entityTypes/HCP/attributes/CSR/attributes/RegAuthID


STATE_BOARD

VARCHAR

State Board

configuration/entityTypes/HCP/attributes/CSR/attributes/StateBoard


STATE_BOARD_NAME

VARCHAR

State Board Name

configuration/entityTypes/HCP/attributes/CSR/attributes/StateBoardName


STATE

VARCHAR

State of CSR.

configuration/entityTypes/HCP/attributes/CSR/attributes/State


CSR_LICENSE_TYPE

VARCHAR

CSR License Type

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseType


CSR_LICENSE_TYPE_NAME

VARCHAR

CSR License Type Name

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypeName


CSR_LICENSE_PRIVILEGE

VARCHAR

CSR License Privilege

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicensePrivilege


CSR_LICENSE_PRIVILEGE_NAME

VARCHAR

CSR License Privilege Name

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicensePrivilegeName


CSR_LICENSE_EFFECTIVE_DATE

DATE

CSR License Effective Date

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseEffectiveDate


CSR_LICENSE_EXPIRATION_DATE

DATE

CSR License Expiration Date

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseExpirationDate


CSR_LICENSE_STATUS

VARCHAR

CSR License Status

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseStatus

HCPLicenseStatus

STATUS_DETAIL

VARCHAR

Deactivation Reason Code

configuration/entityTypes/HCP/attributes/CSR/attributes/StatusDetail

HCPLicenseStatusDetail

CSR_LICENSE_DEACTIVATION_REASON

VARCHAR

CSR License Deactivation Reason

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseDeactivationReason

HCPCSRLicenseDeactivationReason

CSR_LICENSE_CERTIFICATION

VARCHAR

CSR License Certification

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseCertification

HCPLicenseCertification

CSR_LICENSE_TYPE_PRIVILEGE_RANK

VARCHAR

CSR License Type Privilege Rank

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypePrivilegeRank


CSR_REGULATORY

CSR Regulatory

Column

Type

Description

Reltio Attribute URI

LOV Name

CSR_URI

VARCHAR

Generated Key



REGULATORY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



REQ_SAMPL_NON_CTRL

VARCHAR

Req Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ReqSamplNonCtrl


REQ_SAMPL_CTRL

VARCHAR

Req Sampl Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ReqSamplCtrl


RECV_SAMPL_NON_CTRL

VARCHAR

Recv Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/RecvSamplNonCtrl


RECV_SAMPL_CTRL

VARCHAR

Recv Sampl Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/RecvSamplCtrl


DISTR_SAMPL_NON_CTRL

VARCHAR

Distr Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DistrSamplNonCtrl


DISTR_SAMPL_CTRL

VARCHAR

Distr Sampl Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DistrSamplCtrl


SAMP_DRUG_SCHED_I_FLAG

VARCHAR

Samp Drug Sched I Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIFlag


SAMP_DRUG_SCHED_II_FLAG

VARCHAR

Samp Drug Sched II Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIIFlag


SAMP_DRUG_SCHED_III_FLAG

VARCHAR

Samp Drug Sched III Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIIIFlag


SAMP_DRUG_SCHED_IV_FLAG

VARCHAR

Samp Drug Sched IV Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIVFlag


SAMP_DRUG_SCHED_V_FLAG

VARCHAR

Samp Drug Sched V Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedVFlag


SAMP_DRUG_SCHED_VI_FLAG

VARCHAR

Samp Drug Sched VI Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedVIFlag


PRESCR_NON_CTRL_FLAG

VARCHAR

Prescr Non Ctrl Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrNonCtrlFlag


PRESCR_APP_REQ_NON_CTRL_FLAG

VARCHAR

Prescr App Req Non Ctrl Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrAppReqNonCtrlFlag


PRESCR_CTRL_FLAG

VARCHAR

Prescr Ctrl Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrCtrlFlag


PRESCR_APP_REQ_CTRL_FLAG

VARCHAR

Prescr App Req Ctrl Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrAppReqCtrlFlag


PRESCR_DRUG_SCHED_I_FLAG

VARCHAR

Prescr Drug Sched I Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIFlag


PRESCR_DRUG_SCHED_II_FLAG

VARCHAR

Prescr Drug Sched II Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIIFlag


PRESCR_DRUG_SCHED_III_FLAG

VARCHAR

Prescr Drug Sched III Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlag


PRESCR_DRUG_SCHED_IV_FLAG

VARCHAR

Prescr Drug Sched IV Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIVFlag


PRESCR_DRUG_SCHED_V_FLAG

VARCHAR

Prescr Drug Sched V Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedVFlag


PRESCR_DRUG_SCHED_VI_FLAG

VARCHAR

Prescr Drug Sched VI Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedVIFlag


SUPERVISORY_REL_CD_NON_CTRL

VARCHAR

Supervisory Rel Cd Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrl


SUPERVISORY_REL_CD_CTRL

VARCHAR

Supervisory Rel Cd Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SupervisoryRelCdCtrl


COLLABORATIVE_NONCTRL

VARCHAR

Collaborative Non ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/CollaborativeNonctrl


COLLABORATIVE_CTRL

VARCHAR

Collaborative ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/CollaborativeCtrl


INCLUSIONARY

VARCHAR

Inclusionary

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/Inclusionary


EXCLUSIONARY

VARCHAR

Exclusionary

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/Exclusionary


DELEGATION_NON_CTRL

VARCHAR

Delegation Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DelegationNonCtrl


DELEGATION_CTRL

VARCHAR

Delegation Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DelegationCtrl


PRIVACY_PREFERENCES

Column

Type

Description

Reltio Attribute URI

LOV Name

PRIVACY_PREFERENCES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



AMA_NO_CONTACT

BOOLEAN

Can be Contacted through AMA or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AMANoContact


FTC_NO_CONTACT

BOOLEAN

Can be Contacted through FTC or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FTCNoContact


PDRP

BOOLEAN

Physician Data Restriction Program enrolled or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRP


PDRP_DATE

DATE

Physician Data Restriction Program enrollment date

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPDate


OPT_OUT_START_DATE

DATE

Opt Out Start Date

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutStartDate


ALLOWED_TO_CONTACT

BOOLEAN

Indicator whether allowed to contact

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AllowedToContact


PHONE_OPT_OUT

BOOLEAN

Opted Out for being contacted on Phone or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PhoneOptOut


EMAIL_OPT_OUT

BOOLEAN

Opted Out for being contacted through Email or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/EmailOptOut


FAX_OPT_OUT

BOOLEAN

Opted Out for being contacted through Fax or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FaxOptOut


MAIL_OPT_OUT

BOOLEAN

Opted Out for being contacted through Mail or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/MailOptOut


NO_CONTACT_REASON

VARCHAR

Reason for no contact

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/NoContactReason


NO_CONTACT_EFFECTIVE_DATE

DATE

Effective date of no contact

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/NoContactEffectiveDate
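The channel-level opt-out flags above combine naturally into a single "may we contact via this channel" check. A sketch assuming the flags have been loaded as booleans (column names come from PRIVACY_PREFERENCES; the helper and its precedence logic are illustrative, not part of the HUB code):

```python
# Channel -> opt-out column, as listed in PRIVACY_PREFERENCES.
OPT_OUT_FLAGS = {
    "phone": "PHONE_OPT_OUT",
    "email": "EMAIL_OPT_OUT",
    "fax":   "FAX_OPT_OUT",
    "mail":  "MAIL_OPT_OUT",
}

def may_contact(prefs: dict, channel: str) -> bool:
    """True when the profile allows contact at all and the given
    channel has not been opted out. Missing flags default to
    'contact allowed' (an assumption, not a documented rule)."""
    if not prefs.get("ALLOWED_TO_CONTACT", True):
        return False
    return not prefs.get(OPT_OUT_FLAGS[channel], False)

prefs = {"ALLOWED_TO_CONTACT": True, "EMAIL_OPT_OUT": True}
print(may_contact(prefs, "email"), may_contact(prefs, "phone"))  # False True
```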


CERTIFICATES

Column

Type

Description

Reltio Attribute URI

LOV Name

CERTIFICATES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



CERTIFICATE_ID

VARCHAR

Certificate Id of Certificate received by HCP

configuration/entityTypes/HCP/attributes/Certificates/attributes/CertificateId


SPEAKER

Column

Type

Description

Reltio Attribute URI

LOV Name

SPEAKER_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



LEVEL

VARCHAR

Level

configuration/entityTypes/HCP/attributes/Speaker/attributes/Level

HCPTierLevel

TIER_STATUS

VARCHAR

Tier Status

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierStatus

HCPTierStatus

TIER_APPROVAL_DATE

DATE

Tier Approval Date

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierApprovalDate


TIER_UPDATED_DATE

DATE

Tier Updated Date

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierUpdatedDate


TIER_APPROVER

VARCHAR

Tier Approver

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierApprover


EFFECTIVE_DATE

DATE

Speaker Effective Date

configuration/entityTypes/HCP/attributes/Speaker/attributes/EffectiveDate


DEACTIVATE_REASON

VARCHAR

Speaker Deactivate Reason

configuration/entityTypes/HCP/attributes/Speaker/attributes/DeactivateReason


IS_SPEAKER

BOOLEAN


configuration/entityTypes/HCP/attributes/Speaker/attributes/IsSpeaker


SPEAKER_TIER_RATIONALE

Tier Rationale

Column

Type

Description

Reltio Attribute URI

LOV Name

SPEAKER_URI

VARCHAR

Generated Key



TIER_RATIONALE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TIER_RATIONALE

VARCHAR

Tier Rationale

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierRationale/attributes/TierRationale

HCPTierRational

RAWDEA

Column

Type

Description

Reltio Attribute URI

LOV Name

RAWDEA_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



DEA_NUMBER

VARCHAR

RAW DEA Number

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/DEANumber
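DEA numbers carry a standard check digit: two letters followed by seven digits, where the sum of digits 1, 3, 5 plus twice the sum of digits 2, 4, 6 must end in digit 7. That rule is general DEA-number knowledge rather than anything this dictionary specifies, so treat this sketch as illustrative validation only:

```python
def dea_checksum_ok(dea_number: str) -> bool:
    """Validate the check digit of a DEA number such as 'AB1234563'.

    Standard rule: (d1 + d3 + d5) + 2 * (d2 + d4 + d6) must end in d7.
    """
    if len(dea_number) != 9 or not dea_number[2:].isdigit():
        return False
    d = [int(c) for c in dea_number[2:]]
    total = (d[0] + d[2] + d[4]) + 2 * (d[1] + d[3] + d[5])
    return total % 10 == d[6]

print(dea_checksum_ok("AB1234563"))  # True
```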


DEA_BUSINESS_ACTIVITY

VARCHAR

DEA Business Activity

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/DEABusinessActivity


EFFECTIVE_DATE

DATE

RAW DEA Effective Date

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/EffectiveDate


EXPIRATION_DATE

DATE

RAW DEA Expiration Date

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/ExpirationDate


NAME

VARCHAR

RAW DEA Name

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Name


ADDITIONAL_COMPANY_INFO

VARCHAR

Additional Company Info

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/AdditionalCompanyInfo


ADDRESS1

VARCHAR

RAW DEA Address 1

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Address1


ADDRESS2

VARCHAR

RAW DEA Address 2

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Address2


CITY

VARCHAR

RAW DEA City

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/City


STATE

VARCHAR

RAW DEA State

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/State


ZIP

VARCHAR

RAW DEA Zip

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Zip


BUSINESS_ACTIVITY_SUB_CD

VARCHAR

Business Activity Sub Cd

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/BusinessActivitySubCd


PAYMT_IND

VARCHAR

Paymt Indicator

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/PaymtInd

HCPRAWDEAPaymtInd

RAW_DEA_SCHD_CLAS_CD

VARCHAR

Raw Dea Schd Clas Cd

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/RawDeaSchdClasCd


STATUS

VARCHAR

Raw Dea Status

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Status


PHONE

Column

Type

Description

Reltio Attribute URI

LOV Name

PHONE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/Type, configuration/entityTypes/HCO/attributes/Phone/attributes/Type, configuration/entityTypes/MCO/attributes/Phone/attributes/Type

PhoneType

NUMBER

VARCHAR

Phone number

configuration/entityTypes/HCP/attributes/Phone/attributes/Number, configuration/entityTypes/HCO/attributes/Phone/attributes/Number, configuration/entityTypes/MCO/attributes/Phone/attributes/Number


FORMATTED_NUMBER

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/MCO/attributes/Phone/attributes/FormattedNumber


EXTENSION

VARCHAR

Extension, if any

configuration/entityTypes/HCP/attributes/Phone/attributes/Extension, configuration/entityTypes/HCO/attributes/Phone/attributes/Extension, configuration/entityTypes/MCO/attributes/Phone/attributes/Extension


RANK

VARCHAR

Rank used to assign priority to a Phone number

configuration/entityTypes/HCP/attributes/Phone/attributes/Rank, configuration/entityTypes/HCO/attributes/Phone/attributes/Rank, configuration/entityTypes/MCO/attributes/Phone/attributes/Rank


PHONE_USAGE_TAG

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes/HCO/attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes/MCO/attributes/Phone/attributes/PhoneUsageTag

PhoneUsageTag

USAGE_TYPE

VARCHAR

Usage Type of a Phone number

configuration/entityTypes/HCP/attributes/Phone/attributes/UsageType, configuration/entityTypes/HCO/attributes/Phone/attributes/UsageType, configuration/entityTypes/MCO/attributes/Phone/attributes/UsageType


AREA_CODE

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/AreaCode, configuration/entityTypes/HCO/attributes/Phone/attributes/AreaCode, configuration/entityTypes/MCO/attributes/Phone/attributes/AreaCode


LOCAL_NUMBER

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/MCO/attributes/Phone/attributes/LocalNumber


VALIDATION_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/MCO/attributes/Phone/attributes/ValidationStatus


LINE_TYPE

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/LineType, configuration/entityTypes/HCO/attributes/Phone/attributes/LineType, configuration/entityTypes/MCO/attributes/Phone/attributes/LineType


FORMAT_MASK

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/FormatMask, configuration/entityTypes/HCO/attributes/Phone/attributes/FormatMask, configuration/entityTypes/MCO/attributes/Phone/attributes/FormatMask


DIGIT_COUNT

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/DigitCount, configuration/entityTypes/HCO/attributes/Phone/attributes/DigitCount, configuration/entityTypes/MCO/attributes/Phone/attributes/DigitCount


GEO_AREA

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/GeoArea, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoArea, configuration/entityTypes/MCO/attributes/Phone/attributes/GeoArea


GEO_COUNTRY

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/MCO/attributes/Phone/attributes/GeoCountry


COUNTRY_CODE

VARCHAR

Two digit code for a Country

configuration/entityTypes/HCO/attributes/Phone/attributes/CountryCode, configuration/entityTypes/MCO/attributes/Phone/attributes/CountryCode
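A PHONE row stores both the raw NUMBER and its parts (COUNTRY_CODE, AREA_CODE, LOCAL_NUMBER, EXTENSION). A sketch of composing a display string from the parts when FORMATTED_NUMBER is absent (column names come from this table; the formatting itself is illustrative):

```python
def format_phone(row: dict) -> str:
    """Compose a display number from PHONE columns; prefer the
    pre-formatted value when the source supplied one."""
    if row.get("FORMATTED_NUMBER"):
        return row["FORMATTED_NUMBER"]
    parts = []
    if row.get("COUNTRY_CODE"):
        parts.append(f"+{row['COUNTRY_CODE']}")
    if row.get("AREA_CODE"):
        parts.append(f"({row['AREA_CODE']})")
    parts.append(row.get("LOCAL_NUMBER") or row.get("NUMBER", ""))
    out = " ".join(p for p in parts if p)
    if row.get("EXTENSION"):
        out += f" x{row['EXTENSION']}"
    return out

row = {"COUNTRY_CODE": "1", "AREA_CODE": "212",
       "LOCAL_NUMBER": "5550100", "EXTENSION": "42"}
print(format_phone(row))  # +1 (212) 5550100 x42
```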


PHONE_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

PHONE_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceRank


SOURCE_ADDRESS_ID

VARCHAR

SourceAddressID

configuration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceAddressID


HCP_ADDRESS_ZIP

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESS_URI

VARCHAR

Generated Key



ZIP_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



POSTAL_CODE

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/PostalCode


ZIP5

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/Zip5


ZIP4

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/Zip4
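The HCP_ADDRESS_ZIP columns split a US postal code into its five-digit and four-digit parts. A sketch of recombining ZIP5 and ZIP4 into the ZIP+4 display form (column names come from this table; the helper name is illustrative):

```python
def postal_code(zip5, zip4=None):
    """Recombine ZIP5/ZIP4 into the ZIP+4 display form, falling back
    to the bare five-digit code when ZIP4 is absent."""
    return f"{zip5}-{zip4}" if zip4 else zip5

print(postal_code("10001", "1234"))  # 10001-1234
print(postal_code("10001"))          # 10001
```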


DEA

Column

Type

Description

Reltio Attribute URI

LOV Name

DEA_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



NUMBER

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/Number, configuration/entityTypes/HCO/attributes/DEA/attributes/Number


STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/DEA/attributes/Status

App-LSCustomer360DEAStatus

EXPIRATION_DATE

DATE


configuration/entityTypes/HCP/attributes/DEA/attributes/ExpirationDate, configuration/entityTypes/HCO/attributes/DEA/attributes/ExpirationDate


DRUG_SCHEDULE

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/DEA/attributes/DrugSchedule

App-LSCustomer360DEADrugSchedule

DRUG_SCHEDULE_DESCRIPTION

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/DrugScheduleDescription, configuration/entityTypes/HCO/attributes/DEA/attributes/DrugScheduleDescription


BUSINESS_ACTIVITY

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivity, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivity

App-LSCustomer360DEABusinessActivity

BUSINESS_ACTIVITY_PLUS_SUB_CODE

VARCHAR

Business Activity SubCode

configuration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivityPlusSubCode, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityPlusSubCode

App-LSCustomer360DEABusinessActivitySubcode

BUSINESS_ACTIVITY_DESCRIPTION

VARCHAR

Business Activity Description

configuration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivityDescription, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityDescription

App-LSCustomer360DEABusinessActivityDescription

PAYMENT_INDICATOR

VARCHAR

Payment Indicator

configuration/entityTypes/HCP/attributes/DEA/attributes/PaymentIndicator, configuration/entityTypes/HCO/attributes/DEA/attributes/PaymentIndicator

App-LSCustomer360DEAPaymentIndicator

TAXONOMY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| TAXONOMY_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TAXONOMY | VARCHAR | Taxonomy related to HCP, e.g., Obstetrics & Gynecology | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Taxonomy | App-LSCustomer360Taxonomy,TAXONOMY_CD |
| TYPE | VARCHAR | Type of Taxonomy, e.g., Primary | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Type, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Type | App-LSCustomer360TaxonomyType,TAXONOMY_TYPE |
| STATE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/StateCode |  |
| GROUP | VARCHAR | Group Taxonomy belongs to | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Group |  |
| PROVIDER_TYPE | VARCHAR | Taxonomy Provider Type | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ProviderType |  |
| CLASSIFICATION | VARCHAR | Classification of Taxonomy | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Classification, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Classification |  |
| SPECIALIZATION | VARCHAR | Specialization of Taxonomy | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Specialization, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Specialization |  |
| PRIORITY | VARCHAR | Taxonomy Priority | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Priority, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Priority | TAXONOMY_PRIORITY |
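Every Reltio attribute URI in these tables follows the pattern configuration/entityTypes/<type>/attributes/<name>(/attributes/<name>…), where the entity type (HCP, HCO, Location, …) and the nested attribute path identify the source of a flattened column. A minimal sketch of that decomposition (the helper name is illustrative, not part of the HUB codebase):

```python
# Sketch: split a Reltio attribute URI, as listed in the tables here,
# into its entity type and nested attribute path.
def parse_attribute_uri(uri: str):
    parts = uri.split("/")
    entity_type = parts[2]  # e.g. "HCP" or "HCO"
    # after the entity type, segments alternate "attributes"/<name>;
    # keep only the names
    attr_path = [p for i, p in enumerate(parts[3:]) if i % 2 == 1]
    return entity_type, attr_path

print(parse_attribute_uri(
    "configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Priority"))
```

For the PRIORITY row above this yields ("HCP", ["Taxonomy", "Priority"]), i.e. the nested Priority attribute of the Taxonomy attribute on the HCP entity type.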

SANCTION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| SANCTION_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SANCTION_ID | VARCHAR | Court sanction Id for any case | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionId |  |
| ACTION_CODE | VARCHAR | Court sanction code for a case | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionCode |  |
| ACTION_DESCRIPTION | VARCHAR | Court sanction Action Description | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionDescription |  |
| BOARD_CODE | VARCHAR | Court case board id | configuration/entityTypes/HCP/attributes/Sanction/attributes/BoardCode |  |
| BOARD_DESC | VARCHAR | Court case board description | configuration/entityTypes/HCP/attributes/Sanction/attributes/BoardDesc |  |
| ACTION_DATE | DATE | Court sanction Action Date | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionDate |  |
| SANCTION_PERIOD_START_DATE | DATE | Sanction Period Start Date | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodStartDate |  |
| SANCTION_PERIOD_END_DATE | DATE | Sanction Period End Date | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodEndDate |  |
| MONTH_DURATION | VARCHAR | Sanction Duration in Months | configuration/entityTypes/HCP/attributes/Sanction/attributes/MonthDuration |  |
| FINE_AMOUNT | VARCHAR | Fine Amount for Sanction | configuration/entityTypes/HCP/attributes/Sanction/attributes/FineAmount |  |
| OFFENSE_CODE | VARCHAR | Offense Code for Sanction | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseCode |  |
| OFFENSE_DESCRIPTION | VARCHAR | Offense Description for Sanction | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDescription |  |
| OFFENSE_DATE | DATE | Offense Date for Sanction | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDate |  |


GSA_SANCTION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| GSA_SANCTION_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SANCTION_ID | VARCHAR | Sanction Id of HCP as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/SanctionId |  |
| FIRST_NAME | VARCHAR | First Name of HCP as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/FirstName |  |
| MIDDLE_NAME | VARCHAR | Middle Name of HCP as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/MiddleName |  |
| LAST_NAME | VARCHAR | Last Name of HCP as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/LastName |  |
| SUFFIX_NAME | VARCHAR | Suffix Name of HCP as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/SuffixName |  |
| CITY | VARCHAR | City of HCP as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/City |  |
| STATE | VARCHAR | State of HCP as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/State |  |
| ZIP | VARCHAR | Zip of HCP as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Zip |  |
| ACTION_DATE | VARCHAR | Action Date for GSA Sanction | configuration/entityTypes/HCP/attributes/GSASanction/attributes/ActionDate |  |
| TERM_DATE | VARCHAR | Term Date for GSA Sanction | configuration/entityTypes/HCP/attributes/GSASanction/attributes/TermDate |  |
| AGENCY | VARCHAR | Agency that imposed Sanction | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Agency |  |
| CONFIDENCE | VARCHAR | Confidence as per GSA Sanction list | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Confidence |  |


MULTI_CHANNEL_COMMUNICATION_CONSENT

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| MULTI_CHANNEL_COMMUNICATION_CONSENT_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| CHANNEL_TYPE | VARCHAR | Channel type for the consent, e.g. email, SMS | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelType |  |
| CHANNEL_VALUE | VARCHAR | Value of the channel for the consent, e.g. john.doe@email.com | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelValue |  |
| CHANNEL_CONSENT | VARCHAR | Consent for the corresponding channel and id: yes or no | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelConsent | ChannelConsent |
| START_DATE | DATE | Start date of the consent | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/StartDate |  |
| EXPIRATION_DATE | DATE | Expiration date of the consent | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ExpirationDate |  |
| COMMUNICATION_TYPE | VARCHAR | Communication type the individual prefers, e.g. New Product Launches, Sales/Discounts, Brand-level News | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/CommunicationType |  |
| COMMUNICATION_FREQUENCY | VARCHAR | How frequently the individual may be contacted, e.g. daily/weekly/monthly | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/CommunicationFrequency |  |
| CHANNEL_PREFERENCE_FLAG | BOOLEAN | When checked, denotes the preferred channel of communication | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelPreferenceFlag |  |
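Taken together, CHANNEL_TYPE, CHANNEL_CONSENT and EXPIRATION_DATE let a consumer decide which channels an HCP may currently be contacted on. A minimal sketch of that check, with made-up field values shaped like the columns above (the function name is illustrative, not part of the HUB codebase):

```python
from datetime import date

# Illustrative rows shaped like the MULTI_CHANNEL_COMMUNICATION_CONSENT
# columns; the values are invented for the example.
rows = [
    {"CHANNEL_TYPE": "email", "CHANNEL_CONSENT": "yes",
     "EXPIRATION_DATE": date(2030, 1, 1)},
    {"CHANNEL_TYPE": "SMS", "CHANNEL_CONSENT": "no",
     "EXPIRATION_DATE": date(2030, 1, 1)},
]

def consented_channels(rows, today):
    """Channels with an affirmative, unexpired consent."""
    return [r["CHANNEL_TYPE"] for r in rows
            if r["CHANNEL_CONSENT"] == "yes"
            and r["EXPIRATION_DATE"] >= today]

print(consented_channels(rows, date(2025, 1, 1)))
```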


EMPLOYMENT

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| EMPLOYMENT_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| NAME | VARCHAR | Name | configuration/entityTypes/Organization/attributes/Name |  |
| TITLE | VARCHAR |  | configuration/relationTypes/Employment/attributes/Title |  |
| SUMMARY | VARCHAR |  | configuration/relationTypes/Employment/attributes/Summary |  |
| IS_CURRENT | BOOLEAN |  | configuration/relationTypes/Employment/attributes/IsCurrent |  |


HCO

Health care organization

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE_CODE | VARCHAR | Type Code | configuration/entityTypes/HCO/attributes/TypeCode | HCOType |
| COMPANY_CUST_ID | VARCHAR | COMPANY Customer ID | configuration/entityTypes/HCO/attributes/COMPANYCustID |  |
| SUB_TYPE_CODE | VARCHAR | SubType Code | configuration/entityTypes/HCO/attributes/SubTypeCode | HCOSubType |
| SUB_CATEGORY | VARCHAR | SubCategory | configuration/entityTypes/HCO/attributes/SubCategory | HCOSubCategory |
| STRUCTURE_TYPE_CODE | VARCHAR | Structure Type Code | configuration/entityTypes/HCO/attributes/StructureTypeCode | HCOStructureTypeCode |
| NAME | VARCHAR | Name | configuration/entityTypes/HCO/attributes/Name |  |
| DOING_BUSINESS_AS_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/DoingBusinessAsName |  |
| FLEX_RESTRICTED_PARTY_IND | VARCHAR | Restricted party indicator for FLEX | configuration/entityTypes/HCO/attributes/FlexRestrictedPartyInd |  |
| TRADE_PARTNER | VARCHAR | String | configuration/entityTypes/HCO/attributes/TradePartner |  |
| SHIP_TO_SR_PARENT_NAME | VARCHAR | String | configuration/entityTypes/HCO/attributes/ShipToSrParentName |  |
| SHIP_TO_JR_PARENT_NAME | VARCHAR | String | configuration/entityTypes/HCO/attributes/ShipToJrParentName |  |
| SHIP_FROM_JR_PARENT_NAME | VARCHAR | String | configuration/entityTypes/HCO/attributes/ShipFromJrParentName |  |
| TEACHING_HOSPITAL | VARCHAR | Teaching Hospital | configuration/entityTypes/HCO/attributes/TeachingHospital |  |
| OWNERSHIP_STATUS | VARCHAR |  | configuration/entityTypes/HCO/attributes/OwnershipStatus | HCOOwnershipStatus |
| PROFIT_STATUS | VARCHAR | Profit Status | configuration/entityTypes/HCO/attributes/ProfitStatus | HCOProfitStatus |
| CMI | VARCHAR | CMI | configuration/entityTypes/HCO/attributes/CMI |  |
| COMPANY_HCOS_FLAG | VARCHAR | COMPANY HCOS Flag | configuration/entityTypes/HCO/attributes/COMPANYHCOSFlag |  |
| SOURCE_MATCH_CATEGORY | VARCHAR | Source Match Category | configuration/entityTypes/HCO/attributes/SourceMatchCategory |  |
| COMM_HOSP | VARCHAR | CommHosp | configuration/entityTypes/HCO/attributes/CommHosp |  |
| GEN_FIRST | VARCHAR | String | configuration/entityTypes/HCO/attributes/GenFirst | HCOGenFirst |
| SREP_ACCESS | VARCHAR | String | configuration/entityTypes/HCO/attributes/SrepAccess | HCOSrepAccess |
| OUT_PATIENTS_NUMBERS | VARCHAR |  | configuration/entityTypes/HCO/attributes/OutPatientsNumbers |  |
| UNIT_OPER_ROOM_NUMBER | VARCHAR |  | configuration/entityTypes/HCO/attributes/UnitOperRoomNumber |  |
| PRIMARY_GPO | VARCHAR | Primary GPO | configuration/entityTypes/HCO/attributes/PrimaryGPO |  |
| TOTAL_PRESCRIBERS | VARCHAR | Total Prescribers | configuration/entityTypes/HCO/attributes/TotalPrescribers |  |
| NUM_IN_PATIENTS | VARCHAR | Total InPatients | configuration/entityTypes/HCO/attributes/NumInPatients |  |
| TOTAL_LIVES | VARCHAR | Total Lives | configuration/entityTypes/HCO/attributes/TotalLives |  |
| TOTAL_PHARMACISTS | VARCHAR | Total Pharmacists | configuration/entityTypes/HCO/attributes/TotalPharmacists |  |
| TOTAL_M_DS | VARCHAR | Total MDs | configuration/entityTypes/HCO/attributes/TotalMDs |  |
| TOTAL_REVENUE | VARCHAR | Total Revenue | configuration/entityTypes/HCO/attributes/TotalRevenue |  |
| STATUS | VARCHAR |  | configuration/entityTypes/HCO/attributes/Status | HCOStatus |
| STATUS_DETAIL | VARCHAR | Deactivation Reason | configuration/entityTypes/HCO/attributes/StatusDetail | HCOStatusDetail |
| ACCOUNT_BLOCK_CODE | VARCHAR | Account Block Code | configuration/entityTypes/HCO/attributes/AccountBlockCode |  |
| TOTAL_LICENSE_BEDS | VARCHAR | Total License Beds | configuration/entityTypes/HCO/attributes/TotalLicenseBeds |  |
| TOTAL_CENSUS_BEDS | VARCHAR |  | configuration/entityTypes/HCO/attributes/TotalCensusBeds |  |
| TOTAL_STAFFED_BEDS | VARCHAR |  | configuration/entityTypes/HCO/attributes/TotalStaffedBeds |  |
| TOTAL_SURGERIES | VARCHAR | Total Surgeries | configuration/entityTypes/HCO/attributes/TotalSurgeries |  |
| TOTAL_PROCEDURES | VARCHAR | Total Procedures | configuration/entityTypes/HCO/attributes/TotalProcedures |  |
| NUM_EMPLOYEES | VARCHAR | Number of Employees | configuration/entityTypes/HCO/attributes/NumEmployees |  |
| RESIDENT_COUNT | VARCHAR | Resident Count | configuration/entityTypes/HCO/attributes/ResidentCount |  |
| FORMULARY | VARCHAR | Formulary | configuration/entityTypes/HCO/attributes/Formulary | HCOFormulary |
| E_MEDICAL_RECORD | VARCHAR | e-Medical Record | configuration/entityTypes/HCO/attributes/EMedicalRecord |  |
| E_PRESCRIBE | VARCHAR | e-Prescribe | configuration/entityTypes/HCO/attributes/EPrescribe | HCOEPrescribe |
| PAY_PERFORM | VARCHAR | Pay Perform | configuration/entityTypes/HCO/attributes/PayPerform | HCOPayPerform |
| DEACTIVATION_REASON | VARCHAR | Deactivation Reason | configuration/entityTypes/HCO/attributes/DeactivationReason | HCODeactivationReason |
| INTERNATIONAL_LOCATION_NUMBER | VARCHAR | International location number (part 1) | configuration/entityTypes/HCO/attributes/InternationalLocationNumber |  |
| DCR_STATUS | VARCHAR | Status of HCO profile | configuration/entityTypes/HCO/attributes/DCRStatus | DCRStatus |
| COUNTRY_HCO | VARCHAR | Country | configuration/entityTypes/HCO/attributes/Country |  |
| ORIGINAL_SOURCE_NAME | VARCHAR | Original Source | configuration/entityTypes/HCO/attributes/OriginalSourceName |  |
| SOURCE_UPDATE_DATE | DATE |  | configuration/entityTypes/HCO/attributes/SourceUpdateDate |  |


CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| CLASSOF_TRADE_N_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SOURCE_COTID | VARCHAR | Source COT ID | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SourceCOTID | COT |
| PRIORITY | VARCHAR | Priority | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority |  |
| SPECIALTY | VARCHAR | Specialty of Class of Trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | COTSpecialty |
| CLASSIFICATION | VARCHAR | Classification of Class of Trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | COTClassification |
| FACILITY_TYPE | VARCHAR | Facility Type of Class of Trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | COTFacilityType |
| COT_ORDER | VARCHAR | COT Order | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/COTOrder |  |
| START_DATE | DATE | Start Date | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/StartDate |  |
| SOURCE | VARCHAR | Source | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Source |  |
| PRIMARY | VARCHAR | Primary | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Primary |  |


HCO_ADDRESS_ZIP

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ADDRESS_URI | VARCHAR | Generated Key |  |  |
| ZIP_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| POSTAL_CODE | VARCHAR |  | configuration/entityTypes/Location/attributes/Zip/attributes/PostalCode |  |
| ZIP5 | VARCHAR |  | configuration/entityTypes/Location/attributes/Zip/attributes/Zip5 |  |
| ZIP4 | VARCHAR |  | configuration/entityTypes/Location/attributes/Zip/attributes/Zip4 |  |
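The ZIP5 and ZIP4 columns carry the two components of a US "ZIP+4" postal code held in POSTAL_CODE. A minimal sketch of that split, assuming POSTAL_CODE holds values like "10017-5601" (the helper name is illustrative, not part of the HUB codebase):

```python
# Sketch: derive the ZIP5/ZIP4 components from a US "ZIP+4" POSTAL_CODE.
def split_postal_code(postal_code: str):
    zip5, _, zip4 = postal_code.partition("-")
    # 5-digit codes have no ZIP4 component
    return zip5, zip4 or None

print(split_postal_code("10017-5601"))
print(split_postal_code("10017"))
```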


340B

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| 340B_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| 340BID | VARCHAR | 340B ID | configuration/entityTypes/HCO/attributes/340b/attributes/340BID |  |
| ENTITY_SUB_DIVISION_NAME | VARCHAR | Entity Sub-Division Name | configuration/entityTypes/HCO/attributes/340b/attributes/EntitySubDivisionName |  |
| PROGRAM_CODE | VARCHAR | Program Code | configuration/entityTypes/HCO/attributes/340b/attributes/ProgramCode | 340BProgramCode |
| PARTICIPATING | BOOLEAN | Participating | configuration/entityTypes/HCO/attributes/340b/attributes/Participating |  |
| AUTHORIZING_OFFICIAL_NAME | VARCHAR | Authorizing Official Name | configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialName |  |
| AUTHORIZING_OFFICIAL_TITLE | VARCHAR | Authorizing Official Title | configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTitle |  |
| AUTHORIZING_OFFICIAL_TEL | VARCHAR | Authorizing Official Tel | configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTel |  |
| AUTHORIZING_OFFICIAL_TEL_EXT | VARCHAR | Authorizing Official Tel Ext | configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTelExt |  |
| CONTACT_NAME | VARCHAR | Contact Name | configuration/entityTypes/HCO/attributes/340b/attributes/ContactName |  |
| CONTACT_TITLE | VARCHAR | Contact Title | configuration/entityTypes/HCO/attributes/340b/attributes/ContactTitle |  |
| CONTACT_TELEPHONE | VARCHAR | Contact Telephone | configuration/entityTypes/HCO/attributes/340b/attributes/ContactTelephone |  |
| CONTACT_TELEPHONE_EXT | VARCHAR | Contact Telephone Ext | configuration/entityTypes/HCO/attributes/340b/attributes/ContactTelephoneExt |  |
| SIGNED_BY_NAME | VARCHAR | Signed By Name | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByName |  |
| SIGNED_BY_TITLE | VARCHAR | Signed By Title | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTitle |  |
| SIGNED_BY_TELEPHONE | VARCHAR | Signed By Telephone | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTelephone |  |
| SIGNED_BY_TELEPHONE_EXT | VARCHAR | Signed By Telephone Ext | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTelephoneExt |  |
| SIGNED_BY_DATE | DATE | Signed By Date | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByDate |  |
| CERTIFIED_DECERTIFIED_DATE | DATE | Certified/Decertified Date | configuration/entityTypes/HCO/attributes/340b/attributes/CertifiedDecertifiedDate |  |
| RURAL | VARCHAR | Rural | configuration/entityTypes/HCO/attributes/340b/attributes/Rural |  |
| ENTRY_COMMENTS | VARCHAR | Entry Comments | configuration/entityTypes/HCO/attributes/340b/attributes/EntryComments |  |
| NATURE_OF_SUPPORT | VARCHAR | Nature Of Support | configuration/entityTypes/HCO/attributes/340b/attributes/NatureOfSupport |  |
| EDIT_DATE | VARCHAR | Edit Date | configuration/entityTypes/HCO/attributes/340b/attributes/EditDate |  |


340B_PARTICIPATION_DATES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| 340B_URI | VARCHAR | Generated Key |  |  |
| PARTICIPATION_DATES_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| PARTICIPATING_START_DATE | DATE | Participating Start Date | configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/ParticipatingStartDate |  |
| TERMINATION_DATE | DATE | Termination Date | configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/TerminationDate |  |
| TERMINATION_CODE | VARCHAR | Termination Code | configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/TerminationCode | 340BTerminationCode |
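PARTICIPATING_START_DATE and TERMINATION_DATE bound a participation interval, so a consumer can derive whether an HCO's 340B participation is active on a given day. A minimal sketch under the assumption that a missing TERMINATION_DATE means the participation is still open (the function name is illustrative, not part of the HUB codebase):

```python
from datetime import date

# Sketch: is a 340B participation row active on a given day?
# Assumption: termination is None for still-open participations.
def is_participating(start, termination, on):
    return start <= on and (termination is None or on < termination)

print(is_participating(date(2020, 1, 1), None, date(2024, 6, 1)))
print(is_participating(date(2020, 1, 1), date(2022, 1, 1), date(2024, 6, 1)))
```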

OTHER_NAMES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| OTHER_NAMES_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE | VARCHAR | Type | configuration/entityTypes/HCO/attributes/OtherNames/attributes/Type |  |
| NAME | VARCHAR | Name | configuration/entityTypes/HCO/attributes/OtherNames/attributes/Name |  |


ACO

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACO_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE | VARCHAR | Type | configuration/entityTypes/HCO/attributes/ACO/attributes/Type | HCOACOType |
| ACO_TYPE_CATEGORY | VARCHAR | Type Category | configuration/entityTypes/HCO/attributes/ACO/attributes/ACOTypeCategory | HCOACOTypeCategory |
| ACO_TYPE_GROUP | VARCHAR | Type Group of ACO | configuration/entityTypes/HCO/attributes/ACO/attributes/ACOTypeGroup | HCOACOTypeGroup |

ACO_ACODETAIL

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACO_URI | VARCHAR | Generated Key |  |  |
| ACO_DETAIL_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| ACO_DETAIL_CODE | VARCHAR | Detail Code for ACO | configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailCode | HCOACODetail |
| ACO_DETAIL_VALUE | VARCHAR | Detail Value for ACO | configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailValue |  |
| ACO_DETAIL_GROUP_CODE | VARCHAR | Detail Group Code for ACO | configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailGroupCode | HCOACODetailGroup |

WEBSITE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| WEBSITE_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| WEBSITE_URL | VARCHAR | URL of the website | configuration/entityTypes/HCO/attributes/Website/attributes/WebsiteURL |  |


WEBSITE_SOURCE

Source

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| WEBSITE_URI | VARCHAR | Generated Key |  |  |
| SOURCE_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SOURCE_NAME | VARCHAR | Source Name | configuration/entityTypes/HCO/attributes/Website/attributes/Source/attributes/SourceName |  |
| SOURCE_RANK | VARCHAR | Source Rank | configuration/entityTypes/HCO/attributes/Website/attributes/Source/attributes/SourceRank |  |


SALES_ORGANIZATION

Sales Organization

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| SALES_ORGANIZATION_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SALES_ORGANIZATION_CODE | VARCHAR | Sales Organization Code | configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/SalesOrganizationCode |  |
| CUSTOMER_ORDER_BLOCK | VARCHAR | Customer Order Block | configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/CustomerOrderBlock |  |
| CUSTOMER_GROUP | VARCHAR | Customer Group | configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/CustomerGroup |  |


HCO_BUSINESS_UNIT_TAG

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| BUSINESSUNITTAG_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| BUSINESS_UNIT | VARCHAR | Business Unit | configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/BusinessUnit |  |
| SEGMENT | VARCHAR | Segment | configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/Segment |  |
| CONTRACT_TYPE | VARCHAR | Contract Type | configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/ContractType |  |


GLN

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| GLN_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE | VARCHAR | GLN Type | configuration/entityTypes/HCO/attributes/GLN/attributes/Type |  |
| ID | VARCHAR | GLN ID | configuration/entityTypes/HCO/attributes/GLN/attributes/ID |  |
| STATUS | VARCHAR | GLN Status | configuration/entityTypes/HCO/attributes/GLN/attributes/Status | HCOGLNStatus |
| STATUS_DETAIL | VARCHAR | GLN Status Detail | configuration/entityTypes/HCO/attributes/GLN/attributes/StatusDetail | HCOGLNStatusDetail |

HCO_REFER_BACK

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| REFERBACK_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| REFER_BACK_ID | VARCHAR | Refer Back ID | configuration/entityTypes/HCO/attributes/ReferBack/attributes/ReferBackID |  |
| REFER_BACK_HCOSID | VARCHAR | Refer Back HCOS ID | configuration/entityTypes/HCO/attributes/ReferBack/attributes/ReferBackHCOSID |  |
| DEACTIVATION_REASON | VARCHAR | Deactivation Reason | configuration/entityTypes/HCO/attributes/ReferBack/attributes/DeactivationReason |  |


BED

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| BED_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE | VARCHAR | Type | configuration/entityTypes/HCO/attributes/Bed/attributes/Type | HCOBedType |
| LICENSE_BEDS | VARCHAR | License Beds | configuration/entityTypes/HCO/attributes/Bed/attributes/LicenseBeds |  |
| CENSUS_BEDS | VARCHAR | Census Beds | configuration/entityTypes/HCO/attributes/Bed/attributes/CensusBeds |  |
| STAFFED_BEDS | VARCHAR | Staffed Beds | configuration/entityTypes/HCO/attributes/Bed/attributes/StaffedBeds |  |


GSA_EXCLUSION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| GSA_EXCLUSION_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SANCTION_ID | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/SanctionId |  |
| ORGANIZATION_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/OrganizationName |  |
| ADDRESS_LINE1 | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine1 |  |
| ADDRESS_LINE2 | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine2 |  |
| CITY | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/City |  |
| STATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/State |  |
| ZIP | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Zip |  |
| ACTION_DATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/ActionDate |  |
| TERM_DATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/TermDate |  |
| AGENCY | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Agency |  |
| CONFIDENCE | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Confidence |  |


OIG_EXCLUSION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| OIG_EXCLUSION_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SANCTION_ID | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/SanctionId |  |
| ACTION_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionCode |  |
| ACTION_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDescription |  |
| BOARD_CODE | VARCHAR | Court case board id | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardCode |  |
| BOARD_DESC | VARCHAR | Court case board description | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardDesc |  |
| ACTION_DATE | DATE |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDate |  |
| OFFENSE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseCode |  |
| OFFENSE_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseDescription |  |


BUSINESS_DETAIL

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| BUSINESS_DETAIL_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| DETAIL | VARCHAR | Detail | configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/Detail | HCOBusinessDetail |
| GROUP | VARCHAR | Group | configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/Group | HCOBusinessDetailGroup |
| DETAIL_VALUE | VARCHAR | Detail Value | configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailValue |  |
| DETAIL_COUNT | VARCHAR | Detail Count | configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailCount |  |


HIN

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| HIN_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| HIN | VARCHAR | HIN | configuration/entityTypes/HCO/attributes/HIN/attributes/HIN |  |


TICKER

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| TICKER_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SYMBOL | VARCHAR |  | configuration/entityTypes/HCO/attributes/Ticker/attributes/Symbol |  |
| STOCK_EXCHANGE | VARCHAR |  | configuration/entityTypes/HCO/attributes/Ticker/attributes/StockExchange |  |


TRADE_STYLE_NAME

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| TRADE_STYLE_NAME_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| ORGANIZATION_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/OrganizationName |  |
| LANGUAGE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/LanguageCode |  |
| FORMER_ORGANIZATION_PRIMARY_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/FormerOrganizationPrimaryName |  |
| DISPLAY_SEQUENCE | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/DisplaySequence |  |
| TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/Type |  |


PRIOR_DUNS_NUMBER

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| PRIOR_DUNS_NUMBER_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TRANSFER_DUNS_NUMBER | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDUNSNumber |  |
| TRANSFER_REASON_TEXT | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonText |  |
| TRANSFER_REASON_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonCode |  |
| TRANSFER_DATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDate |  |
| TRANSFERRED_FROM_DUNS_NUMBER | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumber |  |
| TRANSFERRED_TO_DUNS_NUMBER | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumber |  |


INDUSTRY_CODE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| INDUSTRY_CODE_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| DNB_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DNBCode |  |
| INDUSTRY_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCode |  |
| INDUSTRY_CODE_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeDescription |  |
| INDUSTRY_CODE_LANGUAGE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeLanguageCode |  |
| INDUSTRY_CODE_WRITING_SCRIPT | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeWritingScript |  |
| DISPLAY_SEQUENCE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DisplaySequence |  |
| SALES_PERCENTAGE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/SalesPercentage |  |
| TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/Type |  |
| INDUSTRY_TYPE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryTypeCode |  |
| IMPORT_EXPORT_AGENT | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/ImportExportAgent |  |


ACTIVITIES_AND_OPERATIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACTIVITIES_AND_OPERATIONS_URI | VARCHAR | Generated Key | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| LINE_OF_BUSINESS_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescription | |
| LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LanguageCode | |
| WRITING_SCRIPT_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/WritingScriptCode | |
| IMPORT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ImportIndicator | |
| EXPORT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ExportIndicator | |
| AGENT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/AgentIndicator | |
EMPLOYEE_DETAILS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| EMPLOYEE_DETAILS_URI | VARCHAR | Generated Key | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| INDIVIDUAL_EMPLOYEE_FIGURES_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDate | |
| INDIVIDUAL_TOTAL_EMPLOYEE_QUANTITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantity | |
| INDIVIDUAL_RELIABILITY_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualReliabilityText | |
| TOTAL_EMPLOYEE_QUANTITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeQuantity | |
| TOTAL_EMPLOYEE_RELIABILITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeReliability | |
| PRINCIPALS_INCLUDED | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/PrincipalsIncluded | |
KEY_FINANCIAL_FIGURES_OVERVIEW

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| KEY_FINANCIAL_FIGURES_OVERVIEW_URI | VARCHAR | Generated Key | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| FINANCIAL_STATEMENT_TO_DATE | DATE | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDate | |
| FINANCIAL_PERIOD_DURATION | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDuration | |
| SALES_REVENUE_CURRENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrency | |
| SALES_REVENUE_CURRENCY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCode | |
| SALES_REVENUE_RELIABILITY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCode | |
| SALES_REVENUE_UNIT_OF_SIZE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSize | |
| SALES_REVENUE_AMOUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmount | |
| PROFIT_OR_LOSS_CURRENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrency | |
| PROFIT_OR_LOSS_RELIABILITY_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityText | |
| PROFIT_OR_LOSS_UNIT_OF_SIZE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSize | |
| PROFIT_OR_LOSS_AMOUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmount | |
| SALES_TURNOVER_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRate | |
| SALES3YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRate | |
| SALES5YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRate | |
| EMPLOYEE3YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRate | |
| EMPLOYEE5YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRate | |
MATCH_QUALITY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| MATCH_QUALITY_URI | VARCHAR | Generated Key | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| CONFIDENCE_CODE | VARCHAR | DnB Match Quality Confidence Code | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/ConfidenceCode | |
| DISPLAY_SEQUENCE | VARCHAR | DnB Match Quality Display Sequence | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/DisplaySequence | |
| MATCH_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchCode | |
| BEMFAB | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/BEMFAB | |
| MATCH_GRADE | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchGrade | |
ORGANIZATION_DETAIL

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ORGANIZATION_DETAIL_URI | VARCHAR | Generated Key | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| MEMBER_ROLE | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/MemberRole | |
| STANDALONE | BOOLEAN | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/Standalone | |
| CONTROL_OWNERSHIP_DATE | DATE | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/ControlOwnershipDate | |
| OPERATING_STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatus | |
| START_YEAR | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StartYear | |
| FRANCHISE_OPERATION_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/FranchiseOperationType | |
| BONEYARD_ORGANIZATION | BOOLEAN | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/BoneyardOrganization | |
| OPERATING_STATUS_COMMENT | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusComment | |
DUNS_HIERARCHY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| DUNS_HIERARCHY_URI | VARCHAR | Generated Key | | |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| GLOBAL_ULTIMATE_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateDUNS | |
| GLOBAL_ULTIMATE_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateOrganization | |
| DOMESTIC_ULTIMATE_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateDUNS | |
| DOMESTIC_ULTIMATE_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateOrganization | |
| PARENT_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentDUNS | |
| PARENT_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentOrganization | |
| HEADQUARTERS_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersDUNS | |
| HEADQUARTERS_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersOrganization | |
MCO

Managed Care Organization

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ENTITY_URI | VARCHAR | Reltio Entity URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type | | |
| COMPANY_CUST_ID | VARCHAR | COMPANY Customer ID | configuration/entityTypes/MCO/attributes/COMPANYCustID | |
| NAME | VARCHAR | Name | configuration/entityTypes/MCO/attributes/Name | |
| TYPE | VARCHAR | Type | configuration/entityTypes/MCO/attributes/Type | MCOType |
| MANAGED_CARE_CHANNEL | VARCHAR | Managed Care Channel | configuration/entityTypes/MCO/attributes/ManagedCareChannel | MCOManagedCareChannel |
| PLAN_MODEL_TYPE | VARCHAR | Plan Model Type | configuration/entityTypes/MCO/attributes/PlanModelType | MCOPlanModelType |
| SUB_TYPE | VARCHAR | Sub Type | configuration/entityTypes/MCO/attributes/SubType | MCOSubType |
| SUB_TYPE2 | VARCHAR | Sub Type 2 | configuration/entityTypes/MCO/attributes/SubType2 | |
| SUB_TYPE3 | VARCHAR | Sub Type 3 | configuration/entityTypes/MCO/attributes/SubType3 | |
| NUM_LIVES_MEDICARE | VARCHAR | Medicare Number of Lives | configuration/entityTypes/MCO/attributes/NumLives_Medicare | |
| NUM_LIVES_MEDICAL | VARCHAR | Medical Number of Lives | configuration/entityTypes/MCO/attributes/NumLives_Medical | |
| NUM_LIVES_PHARMACY | VARCHAR | Pharmacy Number of Lives | configuration/entityTypes/MCO/attributes/NumLives_Pharmacy | |
| OPERATING_STATE | VARCHAR | State Operating from | configuration/entityTypes/MCO/attributes/Operating_State | |
| ORIGINAL_SOURCE_NAME | VARCHAR | Original Source Name | configuration/entityTypes/MCO/attributes/OriginalSourceName | |
| DISTRIBUTION_CHANNEL | VARCHAR | Distribution Channel | configuration/entityTypes/MCO/attributes/DistributionChannel | |
| ACCESS_LANDSCAPE_FORMULARY_CHANNEL | VARCHAR | Access Landscape Formulary Channel | configuration/entityTypes/MCO/attributes/AccessLandscapeFormularyChannel | |
| EFFECTIVE_START_DATE | DATE | Effective Start Date | configuration/entityTypes/MCO/attributes/EffectiveStartDate | |
| EFFECTIVE_END_DATE | DATE | Effective End Date | configuration/entityTypes/MCO/attributes/EffectiveEndDate | |
| STATUS | VARCHAR | Status | configuration/entityTypes/MCO/attributes/Status | MCOStatus |
| SOURCE_MATCH_CATEGORY | VARCHAR | Source Match Category | configuration/entityTypes/MCO/attributes/SourceMatchCategory | |
| COUNTRY_MCO | VARCHAR | Country | configuration/entityTypes/MCO/attributes/Country | |
AFFILIATIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| RELATION_TYPE | VARCHAR | Reltio Relation Type | | |
| START_ENTITY_URI | VARCHAR | Reltio Start Entity URI | | |
| END_ENTITY_URI | VARCHAR | Reltio End Entity URI | | |
| SOURCE | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/Source, configuration/relationTypes/Ownership/attributes/Source, configuration/relationTypes/PAYERtoPLAN/attributes/Source, configuration/relationTypes/PBMVendortoMCO/attributes/Source, configuration/relationTypes/ACOAffiliations/attributes/Source, configuration/relationTypes/MCOtoPLAN/attributes/Source, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Source, configuration/relationTypes/FlextoSAPAffiliations/attributes/Source, configuration/relationTypes/MCOtoMMITORG/attributes/Source, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Source, configuration/relationTypes/EnterprisetoBOB/attributes/Source, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Source, configuration/relationTypes/ContactAffiliations/attributes/Source, configuration/relationTypes/VAAffiliations/attributes/Source, configuration/relationTypes/PBMtoPLAN/attributes/Source, configuration/relationTypes/Purchasing/attributes/Source, configuration/relationTypes/BOBtoMCO/attributes/Source, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Source, configuration/relationTypes/Distribution/attributes/Source, configuration/relationTypes/ProviderAffiliations/attributes/Source, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/Source | |
| LINKED_BY | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoHCOSAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoSAPAffiliations/attributes/LinkedBy, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/LinkedBy | |
| COUNTRY_AFFILIATIONS | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/Country, configuration/relationTypes/Ownership/attributes/Country, configuration/relationTypes/PAYERtoPLAN/attributes/Country, configuration/relationTypes/PBMVendortoMCO/attributes/Country, configuration/relationTypes/ACOAffiliations/attributes/Country, configuration/relationTypes/MCOtoPLAN/attributes/Country, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Country, configuration/relationTypes/FlextoSAPAffiliations/attributes/Country, configuration/relationTypes/MCOtoMMITORG/attributes/Country, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Country, configuration/relationTypes/EnterprisetoBOB/attributes/Country, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Country, configuration/relationTypes/ContactAffiliations/attributes/Country, configuration/relationTypes/VAAffiliations/attributes/Country, configuration/relationTypes/PBMtoPLAN/attributes/Country, configuration/relationTypes/Purchasing/attributes/Country, configuration/relationTypes/BOBtoMCO/attributes/Country, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Country, configuration/relationTypes/Distribution/attributes/Country, configuration/relationTypes/ProviderAffiliations/attributes/Country, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/Country | |
| AFFILIATION_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/AffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/AffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/AffiliationType, configuration/relationTypes/MCOtoMMITORG/attributes/AffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/AffiliationType, configuration/relationTypes/VAAffiliations/attributes/AffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/AffiliationType, configuration/relationTypes/BOBtoMCO/attributes/AffiliationType | |
| PBM_AFFILIATION_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoMMITORG/attributes/PBMAffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/PBMAffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/BOBtoMCO/attributes/PBMAffiliationType | |
| PLAN_MODEL_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/PlanModelType, configuration/relationTypes/PBMVendortoMCO/attributes/PlanModelType, configuration/relationTypes/MCOtoPLAN/attributes/PlanModelType, configuration/relationTypes/MCOtoMMITORG/attributes/PlanModelType, configuration/relationTypes/EnterprisetoBOB/attributes/PlanModelType, configuration/relationTypes/PBMtoPLAN/attributes/PlanModelType, configuration/relationTypes/BOBtoMCO/attributes/PlanModelType | MCOPlanModelType |
| MANAGED_CARE_CHANNEL | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/PBMVendortoMCO/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoMMITORG/attributes/ManagedCareChannel, configuration/relationTypes/EnterprisetoBOB/attributes/ManagedCareChannel, configuration/relationTypes/PBMtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/BOBtoMCO/attributes/ManagedCareChannel | MCOManagedCareChannel |
| EFFECTIVE_START_DATE | DATE | | configuration/relationTypes/MCOtoPLAN/attributes/EffectiveStartDate | |
| EFFECTIVE_END_DATE | DATE | | configuration/relationTypes/MCOtoPLAN/attributes/EffectiveEndDate | |
| STATUS | VARCHAR | | configuration/relationTypes/VAAffiliations/attributes/Status | |
AFFIL_RELATION_TYPE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| RELATION_TYPE_URI | VARCHAR | Generated Key | | |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| RELATIONSHIP_GROUP_OWNERSHIP | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_OWNERSHIP | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_ORDER | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipOrder | |
| RANK | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/Rank, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/Rank, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/Distribution/attributes/RelationType/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Rank | |
| AMA_HOSPITAL_ID | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AMAHospitalID | |
| AMA_HOSPITAL_HOURS | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AMAHospitalHours | |
| EFFECTIVE_START_DATE | DATE | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/Distribution/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/EffectiveStartDate | |
| EFFECTIVE_END_DATE | DATE | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/Distribution/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/EffectiveEndDate | |
| ACTIVE_FLAG | BOOLEAN | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/Distribution/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/ActiveFlag | |
| PRIMARY_AFFILIATION | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/Distribution/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/PrimaryAffiliation | |
| AFFILIATION_CONFIDENCE_CODE | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode | |
| RELATIONSHIP_GROUP_ACOAFFILIATIONS | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCPRelationGroup |
| RELATIONSHIP_DESCRIPTION_ACOAFFILIATIONS | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCPRelationshipDescription |
| RELATIONSHIP_STATUS_CODE | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipStatusCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipStatusCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipStatusCode | HCPtoHCORelationshipStatus |
| RELATIONSHIP_STATUS_REASON_CODE | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode | HCPtoHCORelationshipStatusReasonCode |
| WORKING_STATUS | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/WorkingStatus, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/WorkingStatus, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/WorkingStatus | WorkingStatus |
| RELATIONSHIP_GROUP_HCOSTODDDAFFILIATIONS | VARCHAR | | configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_HCOSTODDDAFFILIATIONS | VARCHAR | | configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_OTHERHCOTOHCOAFFILIATIONS | VARCHAR | | configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_OTHERHCOTOHCOAFFILIATIONS | VARCHAR | | configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_CONTACTAFFILIATIONS | VARCHAR | | configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCPRelationGroup |
| RELATIONSHIP_DESCRIPTION_CONTACTAFFILIATIONS | VARCHAR | | configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCPRelationshipDescription |
| RELATIONSHIP_GROUP_PURCHASING | VARCHAR | | configuration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_PURCHASING | VARCHAR | | configuration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_DDDTOSAPAFFILIATIONS | VARCHAR | | configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_DDDTOSAPAFFILIATIONS | VARCHAR | | configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_DISTRIBUTION | VARCHAR | | configuration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_DISTRIBUTION | VARCHAR | | configuration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_PROVIDERAFFILIATIONS | VARCHAR | | configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCPRelationGroup |
| RELATIONSHIP_DESCRIPTION_PROVIDERAFFILIATIONS | VARCHAR | | configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCPRelationshipDescription |
AFFIL_ACO

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACO_URI | VARCHAR | Generated Key | | |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| ACO_TYPE | VARCHAR | | configuration/relationTypes/Ownership/attributes/ACO/attributes/ACOType, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOType, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOType, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOType | HCOACOType |
| ACO_TYPE_CATEGORY | VARCHAR | | configuration/relationTypes/Ownership/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeCategory | HCOACOTypeCategory |
| ACO_TYPE_GROUP | VARCHAR | | configuration/relationTypes/Ownership/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeGroup | HCOACOTypeGroup |
AFFIL_RELATION_TYPE_ROLE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| RELATION_TYPE_URI | VARCHAR | Generated Key | | |
| ROLE_URI | VARCHAR | Generated Key | | |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| ROLE | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role/attributes/Role, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role/attributes/Role, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Role/attributes/Role | RoleType |
| RANK | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role/attributes/Rank, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Role/attributes/Rank | |
AFFIL_USAGE_TAG

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| USAGE_TAG_URI | VARCHAR | Generated Key | | |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| USAGE_TAG | VARCHAR | | configuration/relationTypes/ProviderAffiliations/attributes/UsageTag/attributes/UsageTag | |
" + }, + { + "title": "CUSTOMER_SL schema", + "pageID": "163924327", + "pageLink": "/display/GMDM/CUSTOMER_SL+schema", + "content": "

The schema plays the role of an access layer for clients reading MDM data. It includes a set of views directly inherited from the CUSTOMER schema.

Views have the same structure as the views in the CUSTOMER schema. For view definitions, see CUSTOMER schema

In regional data marts, the schema views have an MDM prefix. 

In the CUSTOMER_SL schema in the Global Data Mart, views are prefixed with 'P' for the COMPANY Reltio model, 'I' for the IQVIA Reltio model, and 'P_HI' for Historical Inactive data for the COMPANY Reltio model.


To speed up access, most views are materialized to physical tables. The process is transparent to users: access views are switched to physical tables automatically when the tables are available. The refresh process is incremental and tied to the loading process. 




" + }, + { + "title": "LANDING schema", + "pageID": "163920137", + "pageLink": "/display/GMDM/LANDING+schema", + "content": "

The LANDING schema plays the role of a staging database for publishing MDM data from Reltio tenants through the MDM HUB

HUB_KAFKA_DATA


Target table for Kafka events published through a Snowflake pipe.


ColumnTypeDescription
RECORD_METADATAVARIANTMetadata of the Kafka event: Kafka key, topic, partition, create time
RECORD_CONTENTVARIANTEvent payload
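As a hedged illustration of what lands in the two VARIANT columns, the sketch below parses one hypothetical HUB_KAFKA_DATA row in Python. The event shape (field names like `topic`, `partition`, `CreateTime`, and the payload keys) is an assumption for illustration, not the actual HUB payload contract.

```python
# Hypothetical example of one HUB_KAFKA_DATA row: both columns are VARIANT
# in Snowflake, shown here as already-parsed JSON objects.
row = {
    "RECORD_METADATA": {
        "key": "entities/T9u7Ej4",      # assumed Kafka key
        "topic": "hub-events",          # assumed topic name
        "partition": 3,
        "CreateTime": 1718000000000,
    },
    "RECORD_CONTENT": {"type": "ENTITY_CHANGED", "uri": "entities/T9u7Ej4"},
}

def event_summary(row: dict) -> str:
    """Build a short human-readable summary of a landed Kafka event."""
    meta, payload = row["RECORD_METADATA"], row["RECORD_CONTENT"]
    return f"{meta['topic']}[{meta['partition']}] {payload['type']} {payload['uri']}"

print(event_summary(row))
```

In Snowflake itself the equivalent access would be dot-path queries over the VARIANT columns (e.g. `RECORD_METADATA:topic`).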

LOV_DATA

Target table for LOV data publishing 

ColumnTypeDescription 
IDTEXTLOV object id
OBJECTVARIANTReltio RDM JSON object

MERGE_TREE_DATA

Target table for merge_tree exports from Reltio

ColumnTypeDescription 
FILENAMETEXTFull S3 file path
OBJECTVARIANTReltio MERGE_TREE JSON object

HI_DATA

Target table for ad-hoc historical inactive data

ColumnTypeDescription 
OBJECTVARIANTHistorical Inactive json object
" + }, + { + "title": "PTE_SL", + "pageID": "302687546", + "pageLink": "/display/GMDM/PTE_SL", + "content": "

The schema plays the role of an access layer for clients reading data required for PT&E reports, and mimics the structure and logic of those reports. 

To connect to the PTE_SL schema you need a proper role assigned:

COMM_GBL_MDM_DMART_DEV_PTE_ROLE
COMM_GBL_MDM_DMART_QA_PTE_ROLE
COMM_GBL_MDM_DMART_STG_PTE_ROLE
COMM_GBL_MDM_DMART_PROD_PTE_ROLE

that are connected with groups:

sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_PTE_ROLE\nsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_PTE_ROLE\nsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_PTE_ROLE\nsfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLE

Information on how to request access is described here: Snowflake - connection guide

Snowflake path to the client report: "COMM_GBL_MDM_DMART_PROD_DB"."PTE_SL"."PTE_REPORT"

General assumptions for view creation:

  1. The views integrate both data models, COMPANY and IQVIA, via a UNION: they are calculated separately and then combined. 
  2. driven_table1.iso_code = entity_uri.country 
  3. The lang_code from the code translations is always 'en'
  4. If the HCP identifiers aren't provided by the client, there is an option to calculate them dynamically, ordered by the number of HCPs having the identifier.


Driven tables:

DRIVEN_TABLE1

This is a view selecting data from the country_config table for countries that need to be added to the PTE_REPORT

Column nameDescription
ISO_CODEISO2 code of the country
NAMECountry name
LABELCountry label (name + iso_code)
RELTIO_TENANTEither 'IQVIA' or the region of the Reltio tenant (EMEA/AMER...)
HUB_TENANTIndicator of the HUB database the data comes from
SF_INSTANCEName of the Snowflake instance the data comes from (

emeaprod01.eu-west-1...)

SF_TENANTDATABASEFull database name from which the data comes
CUSTOMERSL_PREFIXEither 'i_' for the IQVIA data model or 'p_' for the COMPANY data model

DRIVEN_TABLEV2 / DRIVEN_TABLE2_STATIC

DRIVEN_TABLEV2 is a view used to get the HCP identifiers and sort them by the count of HCPs that have the identifier. DRIVEN_TABLE2_STATIC is a table containing the list of identifiers used per country and the order in which they're placed in the PTE_REPORT view. If the country isn't available in DRIVEN_TABLE2_STATIC the report will use DRIVEN_TABLEV2 to get them calculated dynamically every time the report is used.

Column nameDescription
ISO_CODEISO2 code of the country
CANONICAL_CODECanonical code of the identifier
LANG_DESCCode description in English
CODE_IDCode id
MODELEither 'i' for the IQVIA data model or 'p' for the COMPANY data model
ORDER_IDOrder in which the identifier will be available in the PTE_REPORT view. Only identifiers from 1 to 5 will be used.
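The dynamic fallback (DRIVEN_TABLEV2) ranks identifier codes by how many HCPs carry them and keeps only ORDER_IDs 1 to 5. A minimal Python sketch of that ordering rule, using made-up (entity_uri, identifier code) pairs rather than real CUSTOMER_SL data:

```python
from collections import Counter

# Hypothetical pairs: which identifier types the HCPs of one country carry.
hcp_identifiers = [
    ("T1", "RPPS"), ("T2", "RPPS"), ("T3", "RPPS"),
    ("T1", "ADELI"), ("T2", "ADELI"),
    ("T3", "FINESS"),
]

def order_identifiers(pairs, limit=5):
    """Mimic DRIVEN_TABLEV2: rank identifier codes by the number of HCPs
    carrying them (ties broken alphabetically) and keep ORDER_IDs 1..limit."""
    counts = Counter(code for _, code in pairs)
    ranked = sorted(counts, key=lambda code: (-counts[code], code))
    return {code: order_id for order_id, code in enumerate(ranked[:limit], start=1)}

print(order_identifiers(hcp_identifiers))
```

Countries present in DRIVEN_TABLE2_STATIC skip this computation and use the static ORDER_ID list instead.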

DRIVEN_TABLE3

Specialty dictionary provided by the client for the IQVIA data model only. Used for calculating the is_prescriber data. See: 'IS PRESCRIBER' calculation method for IQIVIA model

The path to the dictionary files on S3: pfe-baiaes-eu-w1-project/mdm/config/PTE_Dictionaries

Column nameDescription
COUNTRY_CODEISO2 code of the country
HEADER_NAMECode name
MDM_CODECode id
CANONICAL_CODECanonical code of the identifier
LONG_DESCRIPTIONCode description in English
PROFESSIONAL_TYPEWhether the specialty indicates a prescriber or not 

PTE_REPORT:

The PTE_REPORT is the view from which the clients should get their data. It's a UNION of the reports for the IQVIA data model and the COMPANY data model. Calculation details may be found in the respective articles:

IQVIA: PTE_SL IQVIA MODEL

COMPANY: PTE_SL COMPANY MODEL


" + }, + { + "title": "Data Sourcing", + "pageID": "347664788", + "pageLink": "/display/GMDM/Data+Sourcing", + "content": "
CountryIso CodeMDM Region

Data Model

Snowflake View

FranceFREMEACOMPANYPTE_REPORT
ArgentinaARGBL

IQVIA

PTE_REPORT

BrazilBRAMERCOMPANYPTE_REPORT
MexicoMXGBLIQVIAPTE_REPORT
ChileCLGBLIQVIAPTE_REPORT
ColombiaCOGBL

IQVIA

PTE_REPORT

SlovakiaSKGBL

IQVIA

PTE_REPORT

PhilippinesPHGBL

IQVIA

PTE_REPORT

RéunionREEMEA

COMPANY

PTE_REPORT

Saint Pierre and MiquelonPMEMEA

COMPANY

PTE_REPORT

MayotteYTEMEA

COMPANY

PTE_REPORT

French PolynesiaPFEMEA

COMPANY

PTE_REPORT

French GuianaGFEMEA

COMPANY

PTE_REPORT

Wallis and FutunaWFEMEA

COMPANY

PTE_REPORT

GuadeloupeGPEMEA

COMPANY

PTE_REPORT

New CaledoniaNCEMEA

COMPANY

PTE_REPORT

MartiniqueMQEMEA

COMPANY

PTE_REPORT

MauritiusMUEMEA

COMPANY

PTE_REPORT

MonacoMCEMEA

COMPANY

PTE_REPORT

AndorraADEMEA

COMPANY

PTE_REPORT

TurkeyTREMEA

COMPANY

PTE_REPORT_TR

South KoreaKRAPAC

COMPANY

PTE_REPORT_KR

All views are available in the global database in the PTE_SL schema.

" + }, + { + "title": "PTE_SL IQVIA MODEL", + "pageID": "218432348", + "pageLink": "/display/GMDM/PTE_SL+IQVIA+MODEL", + "content": "

IQVIA data model specification:

name typedescription Reltio attribute URILOV Name additional query conditions (IQVIA model)additional query conditions (COMPANY model)
HCP_IDVARCHARReltio Entity URI

i_hcp.entity_uri or i_affiliations.start_entity_uri

only active hcp are returned (customer_sl.i_hcp.active ='TRUE')

i_hcp.entity_uri or i_affiliations.start_entity_uri

only active hcp are returned

HCO_IDVARCHARReltio Entity URI

For the IQVIA model, all affiliations with i_affiliation.active = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned.

i_hco.entity_uri 


select END_ENTITY_URI from customer_sl.i_affiliations where start_entity_uri = 'T9u7Ej4' and active = 'TRUE' and relation_type in ('Activity','HasHealthCareRole');


select * from customer_sl.p_affiliations where active=TRUE and relation_type = 'ContactAffiliations';

WORKPLACE_NAMEVARCHARReltio workplace name or reltio workplace parent name.configuration/entityTypes/HCO/attributes/Name

For the IQVIA model, all affiliations with i_affiliation.active = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned.

i_hco.name must be returned

select hco.name from 
customer_sl.i_affiliations a,
customer_sl.i_hco hco
where a.end_entity_uri = hco.entity_uri 
and a.start_entity_uri = 'T9u7Ej4' and a.active = 'TRUE' and a.relation_type in ('Activity','HasHealthCareRole');

For the COMPANY model, all affiliations with p_affiliation.active = TRUE and relation_type = 'ContactAffiliations' must be returned.

i_hco.name

STATUSBOOLEANReltio Entity status

customer_sl.i_hcp.active

mapping rule TRUE = ACTIVE

customer_sl.p_hcp.active

mapping rule TRUE = ACTIVE

LAST_MODIFICATION_DATETIMESTAMP_LTZEntity update time in Snowflakeconfiguration/entityTypes/HCP/updateTime

customer_sl.i_entity_update_dates.SF_UPDATE_TIME

customer_sl.p_entity_update.SF_UPDATE_TIME
FIRST_NAMEVARCHAR
configuration/entityTypes/HCP/attributes/FirstName
customer_sl.i_hcp.first_namecustomer_sl.p_hcp.first_name
LAST_NAMEVARCHAR
configuration/entityTypes/HCP/attributes/LastName
customer_sl.i_hcp.last_namecustomer_sl.p_hcp.last_name
TITLE_CODEVARCHAR
configuration/entityTypes/HCP/attributes/Title

LOV Name COMPANY = HCPTitle

LOV Name IQVIA = LKUP_IMS_PROF_TITLE


select  c.canonical_code  from 
customer_sl.i_hcp hcp,
customer_sl.i_code_translations c
where 
hcp.title_lkp = c.code_id

e.g.

select c.canonical_code from
customer_sl.i_hcp hcp,
customer_sl.i_code_translations c
where
hcp.title_lkp = c.code_id
and hcp.entity_uri='T9u7Ej4'
and c.country='FR';

select c.canonical_code from 
customer_sl.p_hcp hcp,
customer_sl.p_codes c
where 
hcp.title_lkp = c.code_id
TITLE_DESCVARCHAR
configuration/entityTypes/HCP/attributes/Title

LOV Name COMPANY = THCPTitle

LOV Name IQVIA = LKUP_IMS_PROF_TITLE

select  c.lang_desc  from 
customer_sl.i_hcp hcp,
customer_sl.i_code_translations c
where 
hcp.title_lkp = c.code_id


e.g.

select c.lang_desc from
customer_sl.i_hcp hcp,
customer_sl.i_code_translations c
where
hcp.title_lkp = c.code_id
and hcp.entity_uri='T9u7Ej4'
and c.country='FR';

select c.desc from 
customer_sl.p_hcp hcp,
customer_sl.p_codes c
where 
hcp.title_lkp = c.code_id
IS_PRESCRIBER



'IS PRESCRIBER' calculation method for IQIVIA model

CASE
    WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.PRES' THEN 'Y'
    WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.NPRS' THEN 'N'
    ELSE -- to be defined
END
                                                

COUNTRY
Country codeconfiguration/entityTypes/Location/attributes/country
customer_sl.i_hcp.countrycustomer_sl.p_hcp.country
PRIMARY_ADDRESS_LINE_1

IQVIA: configuration/entityTypes/Location/attributes/AddressLine1

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1


select address_line1 from customer_sl.i_address where address_rank=1

select address_line1 from customer_sl.i_address where address_rank=1 and entity_uri='T9u7Ej4';

select a.address_line1 from customer_sl.p_addresses a where a.address_rank = 1
PRIMARY_ADDRESS_LINE_2

IQVIA: configuration/entityTypes/Location/attributes/AddressLine2

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2


select address_line2 from customer_sl.i_address where address_rank=1select a.address_line2 from customer_sl.p_addresses a where a.address_rank = 1
PRIMARY_ADDRESS_CITY

IQVIA: configuration/entityTypes/Location/attributes/City

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/City


select city from customer_sl.i_address where address_rank=1select a.city from customer_sl.p_addresses a where a.address_rank =1
PRIMARY_ADDRESS_POSTAL_CODE

IQVIA: configuration/entityTypes/Location/attributes/Zip/attributes/ZIP5

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5


select ZIP5 from customer_sl.i_address where address_rank=1select a.ZIP5 from customer_sl.p_addresses a where a.address_rank =1
PRIMARY_ADDRESS_STATE

IQVIA: configuration/entityTypes/Location/attributes/StateProvince

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvince

LOV Name COMPANY = Stateselect state_province from customer_sl.i_address where address_rank=1select c.desc from
customer_sl.p_codes c,
customer_sl.p_addresses a
where 
a.address_rank=1
and
a.STATE_PROVINCE_LKP = c.code_id 
PRIMARY_ADDR_STATUS

IQVIA: configuration/entityTypes/Location/attributes/VerificationStatus

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus


customer_sl.i_address.verification_statuscustomer_sl.p_addresses.verification_status
PRIMARY_SPECIALTY_CODE

configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

LOV Name COMPANY = HCPSpecialty

LOV Name IQVIA = LKUP_IMS_SPECIALTY

e.g.

select c.canonical_code from 
customer_sl.i_specialities s,
customer_sl.i_code_translations c
where 
s.specialty_lkp = c.code_id
and s.entity_uri ='T9liLpi'

and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' 
and c.lang_code = 'en'
and c.country = 'FR';

select c.canonical_code from 
customer_sl.p_specialities s,
customer_sl.p_codes c
where s.specialty_lkp =c.code_id
and s.rank = 1 
;

There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. 

PRIMARY_SPECIALTY_DESC

configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

LOV Name COMPANY = LKUP_IMS_SPECIALTY

LOV Name IQVIA = LKUP_IMS_SPECIALTY

e.g

select  c.lang_desc from 
customer_sl.i_specialities s,
customer_sl.i_code_translations c
where 
s.specialty_lkp = c.code_id
and s.entity_uri ='T9liLpi'

and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' 
and c.lang_code = 'en'
and c.country = 'FR';

select c.desc from 
customer_sl.p_specialities s,
customer_sl.p_codes c
where s.specialty_lkp =c.code_id
and s.rank = 1 
;

There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. 

GO_STATUSVARCHAR
configuration/entityTypes/HCP/attributes/Compliance/attributes/GOStatus

go_status <> ''


CASE
    WHEN i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'Yes'
    WHEN i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'No'
    ELSE NULL
END

go_status <> ''


CASE
    WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'Y'
    WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'N'
    ELSE 'Not defined'
END

\"(lightbulb)\"(now this is an empty table)

IDENTIFIER1_CODEVARCHAR

Reltio identifier code.


configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

select ct.canonical_code from 
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP


There is a need to set steering parameters that match the country code with the proper code identifiers, according to driven_table2 described below. This is the place for the first one.


e.g.

select ct.canonical_code, ct.lang_desc, d.id, ct.*,d.* from 
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP
and 
d.entity_uri='T9v0e54'
and
ct.lang_code='en'
and 
ct.country ='FR'
;

select ct.canonical_code from 
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP


There is a need to set steering parameters that match the country code with the proper code identifiers, according to driven_table2 described below. This is the place for the first one.

IDENTIFIER1_CODE_DESCVARCHAR
configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type
select ct.lang_desc from 
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP
select ct.desc from 
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP
IDENTIFIER1_VALUEVARCHAR
configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID
select id from customer_sl.i_identifiers select id from customer_sl.p_identifiers
IDENTIFIER2_CODEVARCHAR
configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

select ct.canonical_code from 
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP

A maximum of two identifiers can be returned

There is a need to set steering parameters that match the country code with the proper code identifiers, according to driven_table2 described below. This is the place for the second one.

select ct.canonical_code from 
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

Maximum two identifiers can be returned

There is a need to set steering parameters that match the country code with the proper code identifiers, according to driven_table2 described below. This is the place for the second one.

IDENTIFIER2_CODE_DESCVARCHAR
configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type
select ct.lang_desc from 
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP
select ct.desc from 
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP
IDENTIFIER2_VALUEVARCHAR
configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID
select id from customer_sl.i_identifiersselect id from customer_sl.p_identifiers
DGSCATEGORYVARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory

COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitCategory

LKUP_BENEFITCATEGORY_HCP,

LKUP_BENEFITCATEGORY_HCO

select ct.lang_desc from 
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.dgs_category_lkp
select DisclosureBenefitCategory from p_hcp
DGSCATEGORY_CODEVARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory

LKUP_BENEFITCATEGORY_HCP,

LKUP_BENEFITCATEGORY_HCO

select ct.canonical_code from 
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.dgs_category_lkp
comment: select i_code.canonical_code for the value returned from DisclosureBenefitCategory 
DGSTITLEVARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle

COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitTitle

LKUP_BENEFITTITLEselect ct.lang_desc from 
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_TITLE_LKP

select DisclosureBenefitTitle from p_hcp


DGSTITLE_CODEVARCHAR
configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLE

select ct.canonical_code from 
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_TITLE_LKP

comment: select i_code.canonical_code for the value returned from DisclosureBenefitTitle 
DGSQUALITYVARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality

COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitQuality



LKUP_BENEFITQUALITY

select ct.lang_desc from 
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_QUALITY_LKP
select DisclosureBenefitQuality from p_hcp
DGSQUALITY_CODEVARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality


LKUP_BENEFITQUALITYselect ct.canonical_code from 
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_QUALITY_LKP
comment: select i_code.canonical_code for the value returned from DisclosureBenefitQuality 
DGSSPECIALTYVARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty

COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitSpecialty

LKUP_BENEFITSPECIALTYselect ct.lang_desc from 
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_SPECIALTY_LKP
DisclosureBenefitSpecialty
DGSSPECIALTY_CODEVARCHAR
configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYselect canonical_code from 
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_SPECIALTY_LKP
comment: select i_code.canonical_code for the value returned from DisclosureBenefitSpecialty
SECONDARY_SPECIALTY_DESCVARCHAR


A query should return values like:


select c.LANG_DESC from 
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_SPECIALITIES" s,
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_CODE_TRANSLATIONS" c
where s.SPECIALTY_LKP =c.CODE_ID
and s.RANK=2
and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC'
and c.LANG_CODE ='en' ← lang code condition
and c.country ='PH' ← country condition
and s.ENTITY_URI ='ENTITI_URI'; ← entity uri condition


EMAILVARCHAR


A query should return values like:

select EMAIL from 
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_EMAIL" 
where rank= 1 
and entity_uri ='ENTITI_URI';  ← entity uri condition


CAUTION: When multiple values are returned, the first one must be returned as the query result.


PHONEVARCHAR


A query should return values like:

select FORMATTED_NUMBER from 
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_PHONE" 
where RANK=1 
and entity_uri ='ENTITI_URI'; ← entity uri condition

CAUTION: When multiple values are returned, the first one must be returned as the query result.
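The EMAIL and PHONE columns above must collapse to a single value even when the rank-1 query returns several rows. A minimal Python sketch of that "take the first returned value" rule (the sample addresses are invented):

```python
def first_value(rows):
    """Return the first of the returned values, or None when there are none."""
    return rows[0] if rows else None

# Hypothetical rank-1 query results for one entity.
emails = ["john.doe@example.org", "j.doe@example.org"]
print(first_value(emails))
```

In Snowflake the same effect could be achieved with e.g. ROW_NUMBER() and a filter on the first row, but the rule itself is just "keep the first value".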



" + }, + { + "title": "'IS PRESCRIBER' calculation method for IQIVIA model", + "pageID": "218434836", + "pageLink": "/display/GMDM/%27IS+PRESCRIBER%27+calculation+method+for+IQIVIA+model", + "content": "

Parameters contained in the SF model:

SF xml parameter name in calculation method e.g. value from SF model
customer_sl.i_hcp.type_code_lkp hcp.professional_type_cdi_hcp.type_code_lkp LKUP_IMS_HCP_CUST_TYPE:PRES
select c.canonical_code from 
customer_sl.i_hcp s,
customer_sl.i_codes c
where
s.SUB_TYPE_CODE_LKP = c.code_id 
hcp.professional_subtype_cdprof_subtype_codeWFR.TYP.I
select c.canonical_code from 
customer_sl.i_specialities s,
customer_sl.i_codes c
where
s.specialty_lkp = c.code_id and s.rank=1 and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.parents='SPEC'
spec.specialty_codespec_codeWFR.SP.IE
customer_sl.i_hcp.countryhcp.countryi_hcp.countryFR

Dictionaries parameters:

profesion_type_subtype.csv as dict_subtypes

profesion_type_subtype_fr.csv as dict_subtypes

professions_type_subtype.xlsxxml

value from file to calculate SF view

e.g. value to calculate SF view

mdm_codedict_subtypes.mdm_codecanonical_codeWAR.TYP.A
professional_typedict_subtypes.professional_typeprofessional_typeNon-Prescriber, Prescriber

country_code

dict_subtypes.country_codecountry_codeFR

profesion_type_speciality.csv as dict_specialties

profesion_type_speciality_fr.csv as dict_specialties

professions_type_subtype.xlsxxml

value from file to calculate SF view

e.g. value to calculate SF view

mdm_codedict_subtypes.mdm_codecanonical_codeWAC.SP.24
professional_typedict_subtypes.professional_typeprofessional_typeNon-Prescriber, Prescriber

country_code

dict_subtypes.country_codecountry_codeFR

In the new PTE_SL view, the files mentioned above are migrated to driven_table3. So in the method description there is an extra condition that matches on the profession subtype or specialty.

Method description:

Query condition: 

  1. driven_table3.country_code = i_hcp.country and driven_table3.canonical_code = prof_subtype_code and driven_table3.header_name = 'LKUP_IMS_HCP_SUBTYPE'
  2. driven_table3.country_code = i_hcp.country and driven_table3.canonical_code = spec_code and driven_table3.header_name = 'LKUP_IMS_SPECIALTY'

CASE
    WHEN i_hcp.type_code_lkp = 'LKUP_IMS_HCP_CUST_TYPE:PRES' THEN 'Y'
    WHEN coalesce(prof_subtype_code, spec_code, '') = '' THEN 'N'
    WHEN coalesce(prof_subtype_code, '') <> '' THEN
        -- profession subtype check, for driven_table3.header_name = 'LKUP_IMS_HCP_SUBTYPE'
        CASE
            WHEN coalesce(driven_table3.canonical_code, '') = '' THEN 'N@1'
            WHEN coalesce(driven_table3.canonical_code, '') <> '' THEN
                CASE
                    WHEN driven_table3.professional_type = 'Prescriber' THEN 'Y'
                    WHEN driven_table3.professional_type = 'Non-Prescriber' THEN 'N'
                    ELSE 'N@2'
                END
        END
    WHEN coalesce(spec_code, '') <> '' THEN
        -- specialty check, for driven_table3.header_name = 'LKUP_IMS_SPECIALTY'
        CASE
            WHEN coalesce(driven_table3.canonical_code, '') = '' THEN 'N@3'
            WHEN coalesce(driven_table3.canonical_code, '') <> '' THEN
                CASE
                    WHEN driven_table3.professional_type = 'Prescriber' THEN 'Y'
                    WHEN driven_table3.professional_type = 'Non-Prescriber' THEN 'N'
                    ELSE 'N@4'
                END
        END
    ELSE 'N@99'
END AS IS_PRESCRIBER
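To make the decision flow easier to follow, here is an illustrative Python translation of the CASE expression above. The parameter names `subtype_match` and `specialty_match` are assumptions standing in for the driven_table3 lookups on LKUP_IMS_HCP_SUBTYPE and LKUP_IMS_SPECIALTY (None means no dictionary row matched); this is a sketch, not the production view logic.

```python
def is_prescriber(type_code_lkp, prof_subtype_code, spec_code,
                  subtype_match=None, specialty_match=None):
    """Mirror the IS_PRESCRIBER CASE expression, including the N@n debug codes."""
    if type_code_lkp == "LKUP_IMS_HCP_CUST_TYPE:PRES":
        return "Y"                       # explicit prescriber type
    if not (prof_subtype_code or spec_code):
        return "N"                       # nothing to look up
    if prof_subtype_code:                # profession subtype branch
        if subtype_match is None:
            return "N@1"                 # subtype not found in dictionary
        return {"Prescriber": "Y", "Non-Prescriber": "N"}.get(subtype_match, "N@2")
    if spec_code:                        # specialty branch
        if specialty_match is None:
            return "N@3"                 # specialty not found in dictionary
        return {"Prescriber": "Y", "Non-Prescriber": "N"}.get(specialty_match, "N@4")
    return "N@99"                        # fall-through guard

print(is_prescriber("LKUP_IMS_HCP_CUST_TYPE:PRES", None, None))
```

The N@1..N@99 values are kept as in the SQL so that unexpected paths remain distinguishable from a plain 'N'.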

" + }, + { + "title": "PTE_SL COMPANY MODEL", + "pageID": "234711638", + "pageLink": "/display/GMDM/PTE_SL+COMPANY+MODEL", + "content": "


COMPANY data model specification:

name typedescription Reltio attribute URILOV Name additional query conditions (COMPANY model)
HCP_IDVARCHARReltio Entity URI

\"(tick)\"p_hcp.entity_uri or p_affiliations.start_entity_uri

only active HCPs are returned (customer_sl.p_hcp.active = 'TRUE')

HCO_IDVARCHARReltio Entity URI

\"(warning)\"SELECT HCO.ENTITY_URI
FROM CUSTOMER_SL.P_HCP HCP
INNER JOIN CUSTOMER_SL.P_AFFILIATIONS AF
    ON HCP.ENTITY_URI= AF.START_ENTITY_URI
INNER JOIN CUSTOMER_SL.P_HCO HCO
    ON AF.END_ENTITY_URI = HCO.ENTITY_URI
WHERE AF.relation_type = 'ContactAffiliations'
AND AF.ACTIVE = 'TRUE';


TO-DO: additional conditions that should be included:

  • the query needs to return only HCP-HCO pairs for which "P_AFFIL_RELATION_TYPE.RELATIONSHIPDESCRIPTION_LKP" = 'HCPRelationshipDescription:CON' \"(question)\"


An HCP plus HCO pair must be unique.

WORKPLACE_NAMEVARCHARReltio workplace name or reltio workplace parent name.configuration/entityTypes/HCO/attributes/Name

\"(tick)\"SELECT HCO.NAME
FROM CUSTOMER_SL.P_HCP HCP
INNER JOIN CUSTOMER_SL.P_AFFILIATIONS AF
    ON HCP.ENTITY_URI= AF.START_ENTITY_URI
INNER JOIN CUSTOMER_SL.P_HCO HCO
    ON AF.END_ENTITY_URI = HCO.ENTITY_URI
WHERE AF.relation_type = 'ContactAffiliations'
AND AF.ACTIVE = 'TRUE';


An HCP plus HCO pair must be unique.

STATUSBOOLEANReltio Entity status

\"(tick)\"customer_sl.p_hcp.active

mapping rule TRUE = ACTIVE

LAST_MODIFICATION_DATETIMESTAMP_LTZEntity update time in Snowflakeconfiguration/entityTypes/HCP/updateTime
\"(tick)\"p_entity_update.SF_UPDATE_TIME
FIRST_NAMEVARCHAR
configuration/entityTypes/HCP/attributes/FirstName
\"(tick)\"customer_sl.p_hcp.first_name
LAST_NAMEVARCHAR
configuration/entityTypes/HCP/attributes/LastName
\"(tick)\"customer_sl.p_hcp.last_name
TITLE_CODEVARCHAR
configuration/entityTypes/HCP/attributes/Title

LOV Name COMPANY = HCPTitle

LOV Name IQVIA = LKUP_IMS_PROF_TITLE


select c.canonical_code from 
customer_sl.p_hcp hcp,
customer_sl.p_codes c
where 
hcp.title_lkp = c.code_id
TITLE_DESCVARCHAR
configuration/entityTypes/HCP/attributes/Title

LOV Name COMPANY = THCPTitle

LOV Name IQVIA = LKUP_IMS_PROF_TITLE

select c.desc from 
customer_sl.p_hcp hcp,
customer_sl.p_codes c
where 
hcp.title_lkp = c.code_id
IS_PRESCRIBER



CASE
    WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.PRES' THEN 'Y'
    WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.NPRS' THEN 'N'
    ELSE -- to be defined
END
                                                

COUNTRY
Country codeconfiguration/entityTypes/Location/attributes/country
customer_sl.p_hcp.country
PRIMARY_ADDRESS_LINE_1

IQVIA: configuration/entityTypes/Location/attributes/AddressLine1

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1


select a.address_line1 from customer_sl.p_addresses a where a.address_rank = 1
PRIMARY_ADDRESS_LINE_2

IQVIA: configuration/entityTypes/Location/attributes/AddressLine2

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2


select a.address_line2 from customer_sl.p_addresses a where a.address_rank = 1
PRIMARY_ADDRESS_CITY

IQVIA: configuration/entityTypes/Location/attributes/City

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/City


select a.city from customer_sl.p_addresses a where a.address_rank =1
PRIMARY_ADDRESS_POSTAL_CODE

IQVIA: configuration/entityTypes/Location/attributes/Zip/attributes/ZIP5

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5


select a.ZIP5 from customer_sl.p_addresses a where a.address_rank =1
PRIMARY_ADDRESS_STATE

IQVIA: configuration/entityTypes/Location/attributes/StateProvince

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvince

LOV Name COMPANY = Stateselect c.desc from
customer_sl.p_codes c,
customer_sl.p_addresses a
where 
a.address_rank=1
and
a.STATE_PROVINCE_LKP = c.code_id 
PRIMARY_ADDR_STATUS

IQVIA: configuration/entityTypes/Location/attributes/VerificationStatus

COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus


customer_sl.p_addresses.verification_status
PRIMARY_SPECIALTY_CODE

configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

LOV Name COMPANY = HCPSpecialty

LOV Name IQVIA = LKUP_IMS_SPECIALTY

select c.canonical_code from 
customer_sl.p_specialities s,
customer_sl.p_codes c
where s.specialty_lkp =c.code_id
and s.rank = 1 
;

There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. 

PRIMARY_SPECIALTY_DESC

configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

LOV Name COMPANY = LKUP_IMS_SPECIALTY

LOV Name IQVIA = LKUP_IMS_SPECIALTY

select c.desc from 
customer_sl.p_specialities s,
customer_sl.p_codes c
where s.specialty_lkp =c.code_id
and s.rank = 1 
;

There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. 

GO_STATUSVARCHAR
configuration/entityTypes/HCP/attributes/Compliance/attributes/GOStatus

go_status <> ''


CASE
    WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'Y'
    WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'N'
    ELSE 'Not defined'
END

\"(lightbulb)\"(now this is an empty table)

IDENTIFIER1_CODEVARCHAR

Reltio identifier code.


configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

select ct.canonical_code from 
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP


There is a need to set steering parameters that match the country code with the proper code identifiers, according to driven_table2 described below. This is the place for the first one.

IDENTIFIER1_CODE_DESCVARCHAR
configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type
select ct.desc from 
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP
IDENTIFIER1_VALUEVARCHAR
configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID
select id from customer_sl.p_identifiers
IDENTIFIER2_CODEVARCHAR
configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

select ct.canonical_code from 
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

A maximum of two identifiers can be returned.

There is a need to set steering parameters that match the country code with the proper code identifiers, according to driven_tabel2 described below. This is the place for the second one.

IDENTIFIER2_CODE_DESC (VARCHAR)
configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type
select ct.desc from 
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP
IDENTIFIER2_VALUE (VARCHAR)
configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID
select id from customer_sl.p_identifiers
DGSCATEGORY (VARCHAR)

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory

COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitCategory

LKUP_BENEFITCATEGORY_HCP,

LKUP_BENEFITCATEGORY_HCO

select DisclosureBenefitCategory from p_hcp
DGSCATEGORY_CODE (VARCHAR)

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory

LKUP_BENEFITCATEGORY_HCP,

LKUP_BENEFITCATEGORY_HCO

comment: select i_code.canonical_code for a value returned from DisclosureBenefitCategory
DGSTITLE (VARCHAR)

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle

COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitTitle

LKUP_BENEFITTITLE

select DisclosureBenefitTitle from p_hcp


DGSTITLE_CODE (VARCHAR)
configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle
LKUP_BENEFITTITLE
comment: select i_code.canonical_code for a value returned from DisclosureBenefitTitle
DGSQUALITY (VARCHAR)

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality

COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitQuality



LKUP_BENEFITQUALITY

select DisclosureBenefitQuality from p_hcp
DGSQUALITY_CODE (VARCHAR)

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality


LKUP_BENEFITQUALITY
comment: select i_code.canonical_code for a value returned from DisclosureBenefitQuality
DGSSPECIALTY (VARCHAR)

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty

COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitSpecialty

LKUP_BENEFITSPECIALTY

select DisclosureBenefitSpecialty from p_hcp
DGSSPECIALTY_CODE (VARCHAR)
configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty
LKUP_BENEFITSPECIALTY
comment: select i_code.canonical_code for a value returned from DisclosureBenefitSpecialty
SECONDARY_SPECIALTY_DESC (VARCHAR)



EMAIL (VARCHAR)



PHONE (VARCHAR)




" + }, + { + "title": "Global Data Mart", + "pageID": "196886082", + "pageLink": "/display/GMDM/Global+Data+Mart", + "content": "

This section describes the structure of the MDM GLOBAL Data Mart in Snowflake. The GLOBAL Data Mart contains consolidated data from multiple regional data marts.

\"\"

Databases:

The Global MDM Data Mart connects all markets using Snowflake DB Replication (if in a different zone) or a local DB (if in the same zone).

<ENV>: DEV/QA/STG/PROD

MDM_REGIONMDM Region detailsSnowflake  InstanceSnowflake DB nameTypeModel
EMEAlink

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com

COMM_EMEA_MDM_DMART_<ENV>_DBlocalP / P_HI
AMERlink

https://amerdev01.us-east-1.privatelink.snowflakecomputing.com

https://amerprod01.us-east-1.privatelink.snowflakecomputing.com

COMM_AMER_MDM_DMART_<ENV>_DBreplicaP / P_HI
USlink

https://amerdev01.us-east-1.privatelink.snowflakecomputing.com

https://amerprod01.us-east-1.privatelink.snowflakecomputing.com

COMM_GBL_MDM_DMART_<ENV>replicaP / P_HI
APAClink

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com

COMM_APAC_MDM_DMART_<ENV>_DBlocalP / P_HI
EUlink

https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com

COMM_EU_MDM_DMART_<ENV>_DBlocalI

Consolidated GLOBAL Schema:

The COMM_GBL_MDM_DMART_<ENV>_DB database includes the following schema:


Users accessing the CUSTOMER_SL schema can query across all markets, keeping in mind the following details:

P_ prefixed viewsP_HI prefixed viewsI_ prefixed views

Consolidated view from all markets that use the "P" Model.

The first column in each view is MDM_REGION, identifying which market a given row belongs to.

Each market may contain a different number of columns, and some columns that exist in one market may not be available in another. The consolidated views aggregate all columns from all markets.



Corresponding data model: Dynamic views for COMPANY MDM Model


Consolidated view from all markets that use the "P_HI" Model.

The first column in each view is MDM_REGION, identifying which market a given row belongs to.

Each market may contain a different number of columns, and some columns that exist in one market may not be available in another. The consolidated views aggregate all columns from all markets.

View built based on the Legacy IQVIA Reltio Model, from the EU market, which uses the "I" Model.







Corresponding data model: Dynamic views for IQVIA MDM Model
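The consolidation rule described above (each consolidated view prepends MDM_REGION and aggregates all columns from all markets, leaving gaps where a market lacks a column) can be sketched as follows. This is a minimal Python illustration, not the actual view definition, and the sample market data is hypothetical:

```python
def consolidate(markets):
    """Union rows from several market views into one consolidated view.

    `markets` maps an MDM_REGION name to a list of row dicts. The output
    aggregates all columns seen in any market, padding missing ones with
    None, and prepends MDM_REGION as the first column of every row.
    """
    # Collect the union of all columns, preserving first-seen order.
    columns = []
    for rows in markets.values():
        for row in rows:
            for col in row:
                if col not in columns:
                    columns.append(col)
    # Emit each market's rows with MDM_REGION first and gaps padded.
    result = []
    for region, rows in markets.items():
        for row in rows:
            out = {"MDM_REGION": region}
            for col in columns:
                out[col] = row.get(col)
            result.append(out)
    return result
```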


GLOBAL

Instance details

ENV

Snowflake Instance

Snowflake DB Name

Reltio Tenant

Refresh time

DEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

COMM_GBL_MDM_DMART_DEV_DB

EMEA + AMER + US+ APAC + EUonce per day
QAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_QA_DBEMEA + AMER + US+ APAC + EUonce per day
STGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_STG_DBEMEA + AMER + US+ APAC + EUonce per day
PRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com

COMM_GBL_MDM_DMART_PROD_DB

EMEA + AMER + US+ APAC + EUevery 2h

Roles

NPROD

<ENV> = DEV/QA/STG

Role Name

Landing

Customer

Customer SL

AES RS SL

Account Mapping

Metrics

Sandbox

PTE_SL

Warehouse

AD Group Name

COMM_GBL_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_DEVOPS_ROLE
COMM_GBL_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE
COMM_GBL_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_METRIC_ROLE
COMM_GBL_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_MDM_ROLE
COMM_GBL_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_READ_ROLE
COMM_GBL_MDM_DMART_<ENV>_DATA_ROLE

Read-Only

Read-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_DATA_ROLE
COMM_GBL_MDM_DMART_<ENV>_PTE_ROLE

Read-Only



Read-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_PTE_ROLE

PROD

Role Name

Landing

Customer

Customer SL

AES RS SL

Account Mapping

Metrics

Sandbox

PTE_SL

Warehouse

AD Group Name

COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLE
COMM_GBL_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PRD_MTCHAFFIL_ROLE
COMM_GBL_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_METRIC_ROLE
COMM_GBL_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_MDM_ROLE
COMM_GBL_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_READ_ROLE
COMM_GBL_MDM_DMART_PROD_DATA_ROLE

Read-Only

Read-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLE
COMM_GBL_MDM_DMART_PROD_PTE_ROLE

Read-Only



Read-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLE




" + }, + { + "title": "Global Data Materialization Process", + "pageID": "356800042", + "pageLink": "/display/GMDM/Global+Data+Materialization+Process", + "content": "

\"\"

" + }, + { + "title": "Regional Data Marts", + "pageID": "196886987", + "pageLink": "/display/GMDM/Regional+Data+Marts", + "content": "

A regional data mart presents MDM data from one region. Data is loaded from one selected Reltio instance.

Regional marts are refreshed more frequently than the global mart, making them a good choice for clients operating in local markets.


EMEA

Instance details

ENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh time
DEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

COMM_EMEA_MDM_DMART_DEV_DB

wn60kG248ziQSMW

every day between 2 am - 4 am EST
QAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_QA_DB

vke5zyYwTifyeJS

every day between 2 am - 4 am EST
STGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_STG_DB

Dzueqzlld107BVW

every day between 2 am - 4 am EST. *Due to many projects running on the environment, the refresh time has been temporarily changed to "every 2 hours" for the clients' convenience.
PRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/

COMM_EMEA_MDM_DMART_PROD_DB

Xy67R0nDA10RUV6

every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_EMEA_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_DEVOPS_ROLE
COMM_EMEA_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE
COMM_EMEA_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_METRIC_ROLE
COMM_EMEA_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_MDM_ROLE
COMM_EMEA_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_READ_ROLE
COMM_EMEA_MDM_DMART_<ENV>_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_DATA_ROLE

PROD

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLE
COMM_EMEA_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PRD_MTCHAFFIL_ROLE
COMM_EMEA_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_METRIC_ROLE
COMM_EMEA_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_MDM_ROLE
COMM_EMEA_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_READ_ROLE
COMM_EMEA_MDM_DMART_PROD_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLE



AMER

Instance details

ENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh time
DEVhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/

COMM_AMER_MDM_DMART_DEV_DB

wJmSQ8GWI8Q6Fl1

every day between 2 am - 4 am EST
QAhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/COMM_AMER_MDM_DMART_QA_DB

805QOf1Xnm96SPj

every day between 2 am - 4 am EST
STGhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/COMM_AMER_MDM_DMART_STG_DB

K7I3W3xjg98Dy30

every day between 2 am - 4 am EST
PRODhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.com

COMM_AMER_MDM_DMART_PROD_DB

Ys7joaPjhr9DwBJ

every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_AMER_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_DEVOPS_ROLE
COMM_AMER_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE
COMM_AMER_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_METRIC_ROLE
COMM_AMER_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_MDM_ROLE
COMM_AMER_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_READ_ROLE
COMM_AMER_MDM_DMART_<ENV>_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_DATA_ROLE

PROD

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLE
COMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_RORead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_RO
COMM_AMER_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_METRIC_ROLE
COMM_AMER_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MDM_ROLE
COMM_AMER_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_READ_ROLE
COMM_AMER_MDM_DMART_PROD_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLE




US

Instance details

ENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh time
DEVhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com

COMM_GBL_MDM_DMART_DEV

sw8BkTZqjzGr7hn

every day between 2 am - 4 am EST
QAhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_QA

rEAXRHas2ovllvT

every day between 2 am - 4 am EST
STGhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_STG

48ElTIteZz05XwT

every day between 2 am - 4 am EST
PRODhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.com

COMM_GBL_MDM_DMART_PROD

9kL30u7lFoDHp6X

every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_<ENV>_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_DEVOPS_ROLE
COMM_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_MTCH_AFFIL_ROLE
COMM_<ENV>_MDM_DMART_ANALYSIS_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only

sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_ANALYSIS_ROLE
COMM_<ENV>_MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_METRIC_ROLE
COMM_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_MDM_ROLE
COMM_<ENV>_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_READ_ROLE
COMM_MDM_DMART_<ENV>_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_DATA_ROLE


PROD

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_PROD_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLE
COMM_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLE
COMM_PROD_MDM_DMART_ANALYSIS_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_ANALYSIS_ROLE
COMM_PROD_MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_METRIC_ROLE
COMM_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MDM_ROLE
COMM_PROD_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_READ_ROLE
COMM_MDM_DMART_PROD_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLE




APAC

Instance details

ENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh time
DEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

COMM_APAC_MDM_DMART_DEV_DB

w2NBAwv1z2AvlkgS

every day between 2 am - 4 am EST
QAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_QA_DB

xs4oRCXpCKewNDK

every day between 2 am - 4 am EST
STGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_STG_DB

Y4StMNK3b0AGDf6

every day between 2 am - 4 am EST
PRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/

COMM_APAC_MDM_DMART_PROD_DB

sew6PfkTtSZhLdW

every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_APAC_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_DEVOPS_ROLE
COMM_APAC_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE
COMM_APAC_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_METRIC_ROLE
COMM_APAC_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_MDM_ROLE
COMM_APAC_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_READ_ROLE
COMM_APAC_MDM_DMART_<ENV>_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_DATA_ROLE

PROD

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLE
COMM_APAC_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PRD_MTCHAFFIL_ROLE
COMM_APAC_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_METRIC_ROLE
COMM_APAC_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_MDM_ROLE
COMM_APAC_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_READ_ROLE
COMM_APAC_MDM_DMART_PROD_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLE




EU (ex-US)

Instance details

ENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh time
DEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

COMM_EU_MDM_DMART_DEV_DB

FLy4mo0XAh0YEbN

every day between 2 am - 4 am EST
QAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_QA_DB

AwFwKWinxbarC0Z

every day between 2 am - 4 am EST
STGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_STG_DB

FW4YTaNQTJEcN2g

every day between 2 am - 4 am EST
PRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/

COMM_EU_MDM_DMART_PROD_DB

FW2ZTF8K3JpdfFl

every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_<ENV>_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_DEVOPS_ROLE
COMM_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_MTCH_AFFIL_ROLE
COMM_EU_<ENV>_MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EU_<ENV>_MDM_DMART_METRIC_ROLE
COMM_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_MDM_ROLE
COMM_EU_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_READ_ROLE
COMM_MDM_DMART_<ENV>_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_DATA_ROLE

PROD

Role NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group Name
COMM_PROD_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLE
COMM_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLE
COMM_EU_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EU_PROD_MDM_DMART_METRIC_ROLE
COMM_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MDM_ROLE
COMM_PROD_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_READ_ROLE
COMM_MDM_DMART_PROD_DATA_ROLE

Read-Only

Read-Only
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLE





" + }, + { + "title": "MDM Admin Management API", + "pageID": "294663752", + "pageLink": "/display/GMDM/MDM+Admin+Management+API", + "content": "" + }, + { + "title": "Description", + "pageID": "294663759", + "pageLink": "/display/GMDM/Description", + "content": "

MDM Admin is a management API that automates numerous repeatable tasks and enables end users to perform them without raising a request and waiting for one of MDM Hub's engineers to pick it up.

In its current state, MDM Hub provides the following services:

Each functionality is described in detail in the following chapters.

API URL list

TenantEnvironmentMDM Admin API Base URLSwagger URL - API Documentation
GBL (EX-US)


DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-dev/swagger-ui/index.html 

QA
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-qa/

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-qa/swagger-ui/index.html 

STAGE
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-stage/

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-stage/swagger-ui/index.html 

PROD
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-prod/

https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-prod/swagger-ui/index.html 

GBLUS


DEV
https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-dev/swagger-ui/index.html 

QA
https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-qa/

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-qa/swagger-ui/index.html 

STAGE
https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-stage/

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-stage/swagger-ui/index.html 

PROD
https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-prod/

https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-prod/swagger-ui/index.html 

EMEA


DEV
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html 

QA
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-qa/

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-qa/swagger-ui/index.html 

STAGE
https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-stage/

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-stage/swagger-ui/index.html 

PROD
https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-emea-prod/

https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-prod/swagger-ui/index.html 

AMER


DEV
https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-dev/

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-dev/swagger-ui/index.html 

QA
https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-qa/

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-qa/swagger-ui/index.html 

STAGE
https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-stage/

https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-stage/swagger-ui/index.html 

PROD
https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-amer-prod/

https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-prod/swagger-ui/index.html 

APAC


DEV
https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-dev/

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-dev/swagger-ui/index.html 

QA
https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-qa/

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-qa/swagger-ui/index.html 

STAGE
https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-stage/

https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-stage/swagger-ui/index.html 

PROD
https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-apac-prod/

https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-prod/swagger-ui/index.html 

Modify Kafka offset

If you are consuming from MDM Hub's outbound topic, you can now modify the offsets to skip/re-send messages. Please refer to the Swagger Documentation for additional details.

Example 1

The environment is EMEA DEV. A user wants to consume the last 100 messages from their topic again, using topic "emea-dev-out-full-test-topic-1" and consumer group "emea-dev-consumergroup-1".

Steps:

  1. Disable the consumer. Kafka will not allow offset manipulation if the topic/consumer group is in use.
  2. Send the request below:

    \n
    POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\n{\n  "topic": "emea-dev-out-full-test-topic-1", \n  "groupId": "emea-dev-consumergroup-1",\n  "shiftBy": -100\n}
    \n
  3. Enable the consumer. The last 100 events will be re-consumed.

Example 2

A user wants to consume all available messages from the topic again.

Steps:

  1. Disable the consumer. Kafka will not allow offset manipulation if the topic/consumer group is in use.
  2. Send the request below:

    \n
    POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\n{\n  "topic": "emea-dev-out-full-test-topic-1", \n  "groupId": "emea-dev-consumergroup-1",\n  "offset": "earliest"\n}
    \n
  3. Enable the consumer. All events from the topic will be available for consumption again.
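Both examples send the same endpoint a small JSON body, differing only in whether the move is relative (shiftBy) or absolute (offset). As a minimal sketch, a payload builder might look like this; the helper name and its validation are illustrative, not part of the API:

```python
def build_offset_request(topic, group_id, shift_by=None, offset=None):
    """Build the JSON body for POST .../kafka/offset.

    Exactly one of `shift_by` (a relative move, e.g. -100) or `offset`
    (an absolute position such as "earliest") should be given, matching
    the two examples above.
    """
    if (shift_by is None) == (offset is None):
        raise ValueError("provide exactly one of shift_by or offset")
    body = {"topic": topic, "groupId": group_id}
    if shift_by is not None:
        body["shiftBy"] = shift_by
    else:
        body["offset"] = offset
    return body
```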

Resend Events

Allows re-sending events to MDM Hub's outbound Kafka topics, with filtering by entity type (entity or relation), modification date, country and source. Please refer to the Swagger Documentation for more details. An example scenario is described below.

Generated events are filtered by the topic routing rule (by country, event type etc.). Generating events for some country may not result in anything being produced on the topic if that country is not added to the filter.

Before starting a Resend Events job, please make sure that the country is already added to the routing rule. Otherwise, request the additional country to be added (TODO: link to the instruction).

Example

For development purposes, a user needs to generate 10k events on their "emea-dev-out-full-test-topic-1" topic for a new market, Belgium (BE).

Steps:

  1. Send the request below:

    \n
    POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend\n{\n  "countries": [\n    "be"\n  ],\n  "objectType": "ENTITY",\n  "limit": 10000,\n  "reconciliationTarget": "emea-dev-out-full-test-topic-1"\n}
    \n
  2. A process will start on MDM Hub's side, generating events on this topic. Response to the request will contain the process ID (dag_run_id):

    \n
    {\n  "dag_id": "reconciliation_system_amer_dev",\n  "dag_run_id": "manual__2022-11-30T14:12:07.780320+00:00",\n  "execution_date": "2022-11-30T14:12:07.780320+00:00",\n  "state": "queued"\n}
    \n
  3. You can check the status of this process by sending below request:

    \n
    GET https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/status/manual__2022-11-30T14:12:07.780320+00:00
    \n


    Response:

    \n
    {\n  "dag_id": "reconciliation_system_amer_dev",\n  "dag_run_id": "manual__2022-11-30T14:12:07.780320+00:00",\n  "execution_date": "2022-11-30T14:12:07.780320+00:00",\n  "state": "started"\n}
    \n
  4. Once the process is completed, all the requested events will have been sent to the topic.
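As a minimal sketch, the request body and the status-check URL from the example above can be assembled as follows; the helper names and the base-URL handling are illustrative assumptions, not part of the API:

```python
def build_resend_request(countries, object_type, limit, target_topic):
    """Build the JSON body for POST .../events/resend."""
    return {
        "countries": countries,
        "objectType": object_type,
        "limit": limit,
        "reconciliationTarget": target_topic,
    }

def status_url(base_url, resend_response):
    """Derive the status-check URL from the dag_run_id in the resend response."""
    return base_url.rstrip("/") + "/events/resend/status/" + resend_response["dag_run_id"]
```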


" + }, + { + "title": "Requesting Access", + "pageID": "294663762", + "pageLink": "/display/GMDM/Requesting+Access", + "content": "

Access to MDM Admin Management API should be requested via email sent to MDM Hub's DL: DL-ATP_MDMHUB_SUPPORT@COMPANY.com.

The chapters below contain the required details and email templates.

Modify Kafka Offset

Required details:

Email template:


Hi Team,

Please provide us with access to the MDM Admin API. Details below:

API: Kafka Offset
Team name: MDM Hub
Topics:
  - emea-dev-out-full-test-topic
  - emea-qa-out-full-test-topic
  - emea-stage-out-full-test-topic
Consumergroups:
  - emea-dev-hub
  - emea-qa-hub
  - emea-stage-hub
Username: mdm-hub-user

Best Regards,
Piotr


Resend Events

Required details:

Email template:


Hi Team,

Please provide us with access to the MDM Admin API. Details below:

API: Resend Events
Team name: MDM Hub
Topics:
  - emea-dev-out-full-test-topic
Username: mdm-hub-user

Best Regards,
Piotr


" + }, + { + "title": "Flows", + "pageID": "164470069", + "pageLink": "/display/GMDM/Flows", + "content": "


" + }, + { + "title": "Batch clear ETL data load cache", + "pageID": "333154693", + "pageLink": "/display/GMDM/Batch+clear+ETL+data+load+cache", + "content": "

Description

This is the batch operation to clear the batch cache. The process was designed to clear the Mongo cache (it removes records from batchEntityProcessStatus) for a specified batch name, sourceId type and value. This process is an adapter to the /batchController/{batchName}/_clearCache operation exposed by the mdmhub batch service that allows the user to clear the cache.

Link to the clear batch cache by crosswalks documentation exposed by the Batch Service: Clear Cache by crosswalks

Link to HUB UI documentation: HUB UI User Guide

 Flow: 

File load through UI details:

MAX Size

Max file size is 128MB

How to prepare the file to avoid unexpected errors:

File format description

The file needs to be encoded as UTF-8 without BOM.

Input file

File format: CSV 

Encoding: UTF-8

EOL: Unix

How to set this up using Notepad++:

Set encoding:

\"\"

Set EOL to Unix:

\"\"

Check (bottom right corner):

\"\"



Column headers:


Input file example


SourceType;SourceValue
Reltio;upIP01W
SAP;3000201428

\"\"clear_cache_ex.csv

Internals

Airflow process name: clear_batch_service_cache_{{ env }}

" + }, + { + "title": "Batch merge & unmerge", + "pageID": "164470091", + "pageLink": "/pages/viewpage.action?pageId=164470091", + "content": "

Description

This is the batch operation to merge/unmerge entities in Reltio. The process was designed to execute the force merge operation between Reltio objects. In Reltio, there are merge rules that automatically merge objects, but the user may also explicitly define a merge between objects. This process is an adapter to the _merge or _unmerge operation that allows the user to specify a CSV file with multiple entries, so there is no need to execute the API multiple times. 

 Flow: 

File load through UI details:

MAX Size

Max file size is 128MB or 10k records

How to prepare the file to avoid unexpected errors:

File format description

The file needs to be encoded as UTF-8 without BOM.

Merge operation 

Input file

File format: CSV 

Encoding: UTF-8

EOL: Unix

How to set this up using Notepad++:

Set encoding:

\"\"

Set EOL to Unix:

\"\"

Check (bottom right corner):

\"\"

File name format: merge_YYYYMMDD.csv


Drop location: 


Column headers:

The column names are kept for backward compatibility. The winner of the merge is always the entity that was created earlier. There is currently no possibility to select an explicit winner via the merge_unmerge batch.

The output file contains two additional fields:


Merge input file example
WinnerSourceName;WinnerId;LoserSourceName;LoserId
RELTIO;15hgDlsd;RELTIO;1JRPpffH
RELTI;15hgDlsd;RELTIO;1JRPpffH
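A merge input file in the format above can be generated like this (a sketch; the helper name and the sample pair are illustrative, while the header, semicolon separator, and file name format come from this page):

```python
from datetime import date

HEADER = "WinnerSourceName;WinnerId;LoserSourceName;LoserId"

def merge_file_content(pairs):
    """pairs: iterable of (winner_source, winner_id, loser_source, loser_id).
    Returns the file body with Unix (LF) line endings."""
    lines = [HEADER]
    lines += [";".join(p) for p in pairs]
    return "\n".join(lines) + "\n"

# File name format from this page: merge_YYYYMMDD.csv
filename = f"merge_{date.today():%Y%m%d}.csv"
content = merge_file_content([("RELTIO", "15hgDlsd", "RELTIO", "1JRPpffH")])
```

Remember that the winner/loser column names are kept only for backward compatibility; as noted above, the merge winner is always the entity created earlier.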

Output file

File format: CSV 

Encoding: UTF-8

File name format: status_merge_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1. 

Drop location: 

Column headers:

Merge output file example
sourceId.type,sourceId.value,status,errorCode,errorMessage
merge_RELTIO_RELTIO,0009e93_00Ff82E,updated,,
merge_GRV_GRV,6422af22f7c95392db313216_23f45427-8cdc-43e6-9aea-0896d4cae5f8,updated,,
merge_RELTI_RELTIO,15hgDlsd_1JRPpffH,notFound,EntityNotFoundByCrosswalk,Entity not found by crosswalk in getEntityByCrosswalk [Type:RELTI Value:15hgDlsd]

Unmerge operation 

Input file

File format: CSV 

Encoding: UTF-8

File name format: unmerge_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1. 

Drop location: 

Column headers:


Unmerge input file example
SourceURI;TargetURI
15hgG6nP;15hgG6nQ1
15hgG6qc;15hgG6rq

Output file

File format: CSV 

Encoding: UTF-8

File name format: status_unmerge_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1. 

Column headers:


Unmerge output file example
sourceId.type,sourceId.value,status,errorCode,errorMessage
unmerge_RELTIO_RELTIO,01lAEll_01jIfxx,updated,,
unmerge_RELTIO_RELTIO,0144V4D_01EFVyb,updated,,

Internals

Airflow process name: merge_unmerge_entities


" + }, + { + "title": "Batch reload MapChannel data", + "pageID": "407896553", + "pageLink": "/display/GMDM/Batch+reload+MapChannel+data", + "content": "


Description

This process is used to reload source data from GCP/GRV systems. The user has two ways to indicate the data they want to reload:

An Airflow DAG is used to control the flow of the process. 


 Flow: 


File load through UI details:

MAX Size

Max file size is 128MB



Input file example

\"\"reload_map_channel_data.csv

Output file

File format: CSV 

Encoding: UTF-8

File name format: report__reload_map_channel_data_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1. 

Column headers: TODO




Output file example TODO

SourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,status,errorCode,errorMessage
Reltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)
SAP,3000201428,HCOIT.SAP,3000201428,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:3000201428]
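The report above shares its status/errorCode/errorMessage tail with the other batch reports, so failures can be summarized with a small script (a sketch; the comma separator and column names come from the example above, the helper name is illustrative):

```python
import csv
import io

def failed_rows(report_text):
    """Return rows from a status report whose status column is 'failed'.
    The report is comma-separated with a header row, as in the sample output."""
    reader = csv.DictReader(io.StringIO(report_text))
    return [row for row in reader if row.get("status") == "failed"]

sample = (
    "SourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,"
    "status,errorCode,errorMessage\n"
    "Reltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,"
    "Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)\n"
)
print(len(failed_rows(sample)))  # prints 1
```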


Internals

Airflow process name: reload_map_channel_data_{{ env }}

" + }, + { + "title": "Batch Reltio Reindex", + "pageID": "337846347", + "pageLink": "/display/GMDM/Batch+Reltio+Reindex", + "content": "

Description

This is the operation to execute the Reltio Reindex API. The process was designed to take an input CSV file with entity URIs and schedule the Reltio Reindex API. 

More details about the Reltio API are available here: 5. Reltio Reindex

HUB wraps the Entity URIs and schedules Reltio Task. 

 Flow: 

File load through UI details:

MAX Size

Max file size is 128MB. The user should be able to fit around 7.4M entity URI lines in one 128MB file. Please check the file size before uploading. Larger files will be rejected.

Please be aware that 128MB file upload may take a few minutes depending on the user network performance. Please wait until processing is finished and the response appears.

How to prepare the file to avoid unexpected errors:

File format description

The file needs to be encoded as UTF-8 without BOM.

Input file

File format: CSV 

Encoding: UTF-8

EOL: Unix

How to set this up using Notepad++:

Set encoding:

\"\"

Set EOL to Unix:

\"\"

Check (bottom right corner):

\"\"

Column headers:


Input file example


entities/E0pV5Xm
entities/1CsgdXN4
entities/2O5RmRi

\"\"reltio_reindex.csv

Internals

Airflow process name: reindex_entities_mdm_{{ env }}



" + }, + { + "title": "Batch update identifiers", + "pageID": "234704200", + "pageLink": "/display/GMDM/Batch+update+identifiers", + "content": "

Description

This is the batch operation to update identifiers in Reltio. The process was designed to update identifiers selected by identifier lookup code. This process is an adapter to the /entities/_updateAttributes operation exposed by the mdmhub manager service that allows the user to modify nested attributes using specific filters.

The source for the batch process is a CSV file in which each row corresponds to a single identifier that should be changed.

The batch service is used to control the flow of the process. 


 Flow: 


File load through UI details:

MAX Size

Max file size is 128MB or 10k records

How to prepare the file to avoid unexpected errors:

File format description

The file needs to be encoded as UTF-8 without BOM.

Input file

File format: CSV 

Encoding: UTF-8

EOL: Unix

How to set this up using Notepad++:

Set encoding:

\"\"

Set EOL to Unix:

\"\"

Check (bottom right corner):

\"\"

File name format: update_identifiers_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1. 

Drop location: 

GBL:

EMEA:


Column headers:



Input file example


SourceCrosswalkType;SourceCrosswalkValue;IdentifierType;IdentifierValue;IdentifierTrust;IdentifierSourceName;Action;TargetCrosswalkType
Reltio;upIP01W;HCOIT.PFORCERX;TEST9_OEG_1000005218888;;;update;
SAP;3000201428;HCOIT.SAP;3000201428;Yes;SAP;update;

\"\"update_identifier_20220323.csv

Output file

File format: CSV 

Encoding: UTF-8

File name format: report__update_identifiers_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1. 

Column headers:




Output file example

SourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,status,errorCode,errorMessage
Reltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)
SAP,3000201428,HCOIT.SAP,3000201428,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:3000201428]


Internals

Airflow process name: update_identifiers_{{ env }}

" + }, + { + "title": "Callbacks", + "pageID": "164469861", + "pageLink": "/display/GMDM/Callbacks", + "content": "

Description

The HUB Callbacks are divided into the following two sections:

  1. The PreCallback process is responsible for the ranking of the selected attributes (RankSorters). This callback is based on the full enriched events from the "${env}-internal-reltio-full-events" topic. Only events that do not require additional ranking updates in Reltio are published to the next processing stage. Some ranking calculations, like OtherHCOtoHCO, are delayed and processed in PreDylayCallbackService; this functionality was required to gather all changes for relations in time windows and send events to Reltio only after the aggregation window is closed. This limits the number of events and updates to Reltio.
    1. OtherHCOtoHCOAffiliations Rankings - more details related to the OtherHCOtoHCO relation ranking with PreDylayCallbackService and DelayRankActivationProcessor
      1. rank details OtherHCOtoHCOAffiliations RankSorter
  2. "Post" Callback process is responsible for the specific logic and is based on the events published by the Event Publisher component. Here are the processes executed in the post callback process:
    1. AttributeSetter Callback - based on the "{env}-internal--callback-attributes-setter-in" events. Sets additional attributes for EMEA COMPANY France market  e.g. ComplianceMAPPHCPStatus
    2. CrosswalkActivator Callback  - based on the "${env}-internal-callback-activator-in" events. Activates selected crosswalk or soft-delete specific crosswalks based on the configuration. 
    3. CrosswalkCleaner Callback - based on the "${env}-internal-callback-cleaner-in" events. Cleans orphan HUB_Callback crosswalk or soft-delete specific crosswalks based on the configuration. 
    4. CrosswalkCleanerWithDelay Callback - based on the "${env}-internal-callback-cleaner-with-delay-in" events. Cleans orphan HUB_Callback crosswalk or soft-delete specific crosswalks based on the configuration with delay (aggregate events in time window)
    5. DanglingAffiliations Callback - based on the "${env}-internal-callback-orphan-clean-in" events. Removes orphan affiliations once one of the start or end objects was removed. 
    6. Derived Addresses Callback - based on the "${env}-internal-callback-derived-addresses-in" events. Rewrites an Address from HCO to HCP, connected to each other with some type of Relationship. Used on the IQVIA tenant.
    7. HCONames Callback for IQVIA model - based on the "${env}-internal-callback-hconame-in" events. Calculates HCO Names. 
    8. HCONames Callback for COMPANY model - based on the "${env}-internal-callback-hconame-in" events. Calculates HCO Names in the COMPANY Model.
    9. NotMatch Callback - based on the "${env}-internal-callback-potential-match-cleaner-in" events. Based on the created relationships between two matched objects, removes the match using _notMatch operation. 

More details about the HUB callbacks are described in the sub-pages. 

Flow diagram



\"\"



" + }, + { + "title": "AttributeSetter Callback", + "pageID": "250150261", + "pageLink": "/display/GMDM/AttributeSetter+Callback", + "content": "

Description

Callback auto-fills configured static Attributes, as long as the profile's attribute values meet the requirements. If no requirement (rule) is met, an optional cleaner deletes the existing, Hub-provided value for this attribute. AttributeSetter uses Manager's Update Attributes async interface.

Flow Diagram

\"\"

Steps


  1. After an event has been routed from EventPublisher, check the following:
    1. Entity must be active and have at least one active crosswalk 
    2. Event Type must match configured allowedEventTypes
    3. Country must match configured allowedCountries
  2. For each configured setAttribute do the following:
    1. Check if the entityType matches 
    2. For each rule do the following:
      1. Check if the criteria are met
      2. If the criteria are met:
        1. Check if Hub crosswalk already provides the AutoFill value (either Attribute's value or lookupCode must match)
        2. If attribute value is already present, do nothing
        3. If attribute is not present:
          1. Add inserting AutoFill attribute to the list of changes
          2. Check if Hub crosswalk provides another value for this attribute
          3. If Hub crosswalk provides another value, add deleting that attribute value to the list of changes
    3. If no rules were matched for this setAttribute and cleaner is enabled:
      1. Find the Hub-provided value of this attribute and add deleting this value to the list of changes (if exists)
    4. Map the list of changes into a single AttributeUpdateRequest object and send to Manager inbound topic.
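The rule matching in the steps above can be sketched as follows (the attribute shape `{name: [value, ...]}` with nested instances as dicts, and the helper names, are assumptions for illustration; the rule structure mirrors the configuration example on this page):

```python
# Minimal sketch of the rule evaluation: the first rule whose "where"
# criteria all match provides the AutoFill value; a rule without "where"
# acts as the catch-all. If nothing matches, the cleaner may delete the
# Hub-provided value.
def criteria_met(attrs, where):
    for cond in where or []:
        values = attrs.get(cond["attribute"], [])
        if "nested" in cond:
            # at least one nested instance must satisfy all sub-conditions
            if not any(criteria_met(v, cond["nested"])
                       for v in values if isinstance(v, dict)):
                return False
        elif not any(v in cond["values"] for v in values):
            return False
    return True

def resolve_value(attrs, rules):
    for rule in rules:
        if criteria_met(attrs, rule.get("where")):
            return rule["setValue"]
    return None  # no rule matched: the cleaner may delete the Hub-provided value

rules = [
    {"setValue": "HCPMHS.Non-HCP",
     "where": [{"attribute": "SubTypeCode",
                "values": ["HCPST.A", "HCPST.C", "HCPST.CO", "HCPST.TC"]}]},
    {"setValue": "HCPMHS.HCP"},  # "AutoFill HCPMHS.HCP for all others"
]
print(resolve_value({"SubTypeCode": ["HCPST.A"]}, rules))  # prints HCPMHS.Non-HCP
```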

Configuration

Example AttributeSetter rule (multiple allowed):

      - setAttribute: "ComplianceMAPPHCPStatus"
        entityType: "HCP"
        cleanerEnabled: true
        rules:
          - name: "AutoFill HCPMHS.Non-HCP IF SubTypeCode = Administrator (HCPST.A) / Researcher/Scientist (HCPST.C) / Counselor/Social Worker (HCPST.CO) / Technician/Technologist (HCPST.TC)"
            setValue: "HCPMHS.Non-HCP"
            where:
              - attribute: "SubTypeCode"
                values: [ "HCPST.A", "HCPST.C", "HCPST.CO", "HCPST.TC" ]

          - name: "AutoFill HCPMHS.Non-HCP IF SubTypeCode = Allied Health Professionals (HCPST.R) AND PrimarySpecialty = Psychology (SP.PSY)"
            setValue: "HCPMHS.Non-HCP"
            where:
              - attribute: "SubTypeCode"
                values: [ "HCPST.R" ]
              - attribute: "Specialities"
                nested:
                  - attribute: "Primary"
                    values: [ "true" ]
                  - attribute: "Specialty"
                    values: [ "SP.PSY" ]

          - name: "AutoFill HCPMHS.HCP for all others"
            setValue: "HCPMHS.HCP"

Rule inserts ComplianceMAPPHCPStatus attribute for every HCP:

Dependent Components

ComponentUsage
Callback ServiceMain component with flow implementation
PublisherGeneration of incoming events
ManagerAsynchronous processing of generated AttributeUpdateRequest events



" + }, + { + "title": "CrosswalkActivator Callback", + "pageID": "302701827", + "pageLink": "/display/GMDM/CrosswalkActivator+Callback", + "content": "

Description

CrosswalkActivator is the opposite of CrosswalkCleaner. There are 4 main processing branches (described in more detail in the "Algorithm" section):

Algorithm

For each event from ${env}-internal-callback-activator-in topic, do:

  1. filter by event country (configured),
  2. filter by event type (configured, usually only CHANGED events),
  3. Processing: WhenOneKeyExistsAndActive
    1. find all active Onekey crosswalks (exact Onekey source name is fetched from configuration)
    2. for each crosswalk in the input event entity do:
      1. if crosswalk type is in the configured list (getWhenOneKeyExistsAndActive) and crosswalk value is the same as one of active Onekey crosswalks, send activator request to Manager,
      2. activator request contains
        • entityType,
        • activated crosswalk with empty string ("") in deleteDate,
        • Country attribute rewritten from the input event,
      3. Manager processes the request as partialOverride.

  4. Processing: WhenAnyOneKeyExistsAndActive
    1. find all active Onekey crosswalks (exact Onekey source name is fetched from configuration)
    2. for each crosswalk in the input event entity do:
      1. if crosswalk type is in the configured list (getWhenAnyOneKeyExistsAndActive) and active Onekey crosswalks list is not empty, send activator request to Manager,
      2. activator request contains
        • entityType,
        • activated crosswalk with empty string ("") in deleteDate,
        • Country attribute rewritten from the input event,
      3. Manager processes the request as partialOverride.

  5. Processing: WhenAnyCrosswalksExistsAndActive
    1. find all active crosswalks (sources in the configuration's except list are filtered out)
    2. for each crosswalk in the input event entity do:
      1. if crosswalk type is in the configured list (getWhenAnyCrosswalksExistsAndActive) and the active crosswalks list is not empty, send activator request to Manager,
      2. activator request contains
        • entityType,
        • activated crosswalk with empty string ("") in deleteDate,
        • Country attribute rewritten from the input event,
      3. Manager processes the request as partialOverride.
  6. Processing: ActivateOneKeyReferbackCrosswalkWhenRelatedOneKeyCrosswalkExistsAndActive
    1. find all OneKey crosswalks,
    2. check for active OneKey crosswalk with lookupCode included in the configured list oneKeyLookupCodes,
    3. check for related inactive OneKey referback crosswalk with lookupCode included in the configured list referbackLookupCodes,
    4. if above conditions are met, send activator request to Manager,
    5. activator request contains:
      • entityType,
      • activated OneKey referback crosswalk with empty string ("") in deleteDate,
      • Country attribute rewritten from the input event,
    6. Manager processes the request as partialOverride.
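The WhenOneKeyExistsAndActive branch above can be sketched as follows (the crosswalk shape, source names, and configured type list are illustrative assumptions; the empty-string `deleteDate` convention comes from this page):

```python
# Sketch of the WhenOneKeyExistsAndActive branch: reactivate configured
# crosswalks whose value matches an active Onekey crosswalk on the entity.
ONEKEY_SOURCE = "configuration/sources/ONEKEY"          # fetched from configuration
ACTIVATE_WHEN_ONEKEY = {"configuration/sources/GRV"}    # getWhenOneKeyExistsAndActive

def is_active(xwalk):
    # an empty/absent deleteDate means the crosswalk is active
    return not xwalk.get("deleteDate")

def activator_requests(entity):
    active_onekey_values = {x["value"] for x in entity["crosswalks"]
                            if x["type"] == ONEKEY_SOURCE and is_active(x)}
    requests = []
    for xwalk in entity["crosswalks"]:
        if xwalk["type"] in ACTIVATE_WHEN_ONEKEY and xwalk["value"] in active_onekey_values:
            requests.append({
                "entityType": entity["type"],
                "crosswalk": {**xwalk, "deleteDate": ""},   # empty string reactivates
                "country": entity["country"],               # rewritten from the event
            })
    return requests
```

Manager would then process each generated request as a partialOverride, as described above.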

Dependent components

Component

Usage

Callback ServiceMain component with flow implementation
PublisherRoutes incoming events
ManagerAsync processing of generated activator requests
" + }, + { + "title": "CrosswalkCleaner Callback", + "pageID": "164469744", + "pageLink": "/display/GMDM/CrosswalkCleaner+Callback", + "content": "

Description

This process removes crosswalks on Entity or Relation objects using the hard-delete or soft-delete operation. There are the following sections in this process.

  1. Hard Delete Crosswalks - Entities
    1. Based on the input configuration, removes the crosswalk from Reltio once all other crosswalks were removed or inactivated. Once the source decides to inactivate the crosswalk, associated attributes are removed from the Golden Profile (OV), and in that case Rank attributes delivered by the HUB have to be removed. The process is used to remove orphan HUB_CALLBACK crosswalks that are used in the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) process
  2. Hard Delete Crosswalks - Relationships
    1. This is similar to the above. The only difference here is that the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) process is adding new Rank attributes to the relationship between two objects. Once the relationship is deactivated by the Source, the orphan HUB_CALLBACK crosswalk is removed. 
  3. Soft Delete Crosswalks 
    1. This process does not remove the crosswalk from Reltio. It updates the existing crosswalk, providing an additional deleteDate attribute on the soft-deleted crosswalk. In that case the corresponding crosswalk becomes inactive in Reltio. There are three types of soft-deletes:
      1. always - soft-delete crosswalks based on the configuration once all other crosswalks are removed or inactivated,
      2. whenOneKeyNotExists - soft-delete crosswalks based on the configuration once the ONEKEY crosswalk is removed or inactivated. This process is similar to the "always" process, but the activation is only based on the ONEKEY crosswalk inactivation,
      3. softDeleteOneKeyReferbackCrosswalkWhenOneKeyCrosswalkIsInactive - soft-delete ONEKEY referback crosswalk (lookupCode in configuration) once ONEKEY crosswalk is inactivated.

Flow diagram


\"\"

Steps


Triggers

Trigger actionComponentActionDefault time
IN Events incoming mdm-callback-service:CrosswalkCleanerStream (callback package)Process events and calculate hard or soft-delete requests and publish to the next processing stage. realtime - events stream

Dependent components

ComponentUsage
Callback ServiceMain component with flow implementation
PublisherEvents publisher generates incoming events
ManagerAsynchronous process of generated events
" + }, + { + "title": "CrosswalkCleanerWithDelay Callback", + "pageID": "302701874", + "pageLink": "/display/GMDM/CrosswalkCleanerWithDelay+Callback", + "content": "

Description

CrosswalkCleanerWithDelay works similarly to CrosswalkCleaner. It uses the same Kafka Streams topology, but events are trimmed (the eliminateNeedlessData parameter removes all fields other than crosswalks) and, most importantly, a deduplication window is added.

The deduplication window's parameters are configured per environment; there are no default parameters. EMEA PROD example:

This means that the delay is equal to 8-9 hours.
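The windowed-deduplication idea can be sketched as follows; the window size and class shape here are purely illustrative, since the real EMEA PROD parameters are environment-specific configuration and the actual implementation uses Kafka Streams:

```python
import time

class DedupWindow:
    """Toy model of a deduplication window: events for the same key are
    buffered, repeated events replace the previous one, and only the latest
    event is released once the window closes."""

    def __init__(self, window_seconds=3600, clock=time.time):
        self.window_seconds = window_seconds
        self.clock = clock
        self.pending = {}  # key -> (latest_event, window_open_time)

    def offer(self, key, event):
        """Record an event; repeats within the window replace the previous one."""
        open_time = self.pending.get(key, (None, self.clock()))[1]
        self.pending[key] = (event, open_time)

    def drain(self):
        """Return events whose window has closed and remove them from the buffer."""
        now = self.clock()
        closed = [k for k, (_, t) in self.pending.items()
                  if now - t >= self.window_seconds]
        return [self.pending.pop(k)[0] for k in closed]
```

This shows why the observed delay above is a range rather than a fixed value: an event's wait depends on where it lands inside an already-open window.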

Algorithm

For more details on algorithm steps, see CrosswalkCleaner Callback.

Dependencies

ComponentUsage
Callback ServiceMain component with flow implementation
PublisherRoutes incoming events
ManagerAsync processing of generated requests
" + }, + { + "title": "DanglingAffiliations Callback", + "pageID": "164469754", + "pageLink": "/display/GMDM/DanglingAffiliations+Callback", + "content": "

Description


DanglingAffiliation Callback consists of two sub-processes:

" + }, + { + "title": "DanglingAffiliations Based On Inactive Objects", + "pageID": "347635836", + "pageLink": "/display/GMDM/DanglingAffiliations+Based+On+Inactive+Objects", + "content": "

Description

The process soft-deletes active relationships whose start or end objects have been inactivated. Based on the configuration, only REMOVED or INACTIVATED events are processed. This means that once the Start or End object becomes inactive, the process checks for the orphan relationship and sends the soft-delete request to the next processing stage. 

Flow diagram

\"\"

Steps


Triggers

Trigger actionComponentActionDefault time
IN Events incoming mdm-callback-service:DanglingAffiliationsStream (callback package)Process events for inactive entities and calculate soft-delete requests and publish to the next processing stage. realtime - events stream

Dependent components

ComponentUsage
Callback ServiceMain component with flow implementation
PublisherEvents publisher generates incoming events
ManagerAsynchronous process of generated events
Hub StoreRelationship Cache
" + }, + { + "title": "DanglingAffiliations Based On Same Start And End Objects", + "pageID": "347635839", + "pageLink": "/display/GMDM/DanglingAffiliations+Based+On+Same+Start+And+End+Objects", + "content": "

Description

This process soft-deletes looping relations - active relations having the same startObject and endObject.

Such loops can be created in one of two ways:

Both of these create a RELATIONSHIP_CHANGED event, so the process is based on RELATIONSHIP_CREATED and RELATIONSHIP_CHANGED events.

Unlike the other DanglingAffiliations sub-process, this one does not query the cache for relations, because all the required information is in the processed event.
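The loop detection described above reduces to comparing the start and end objects of the relation carried in the event (the event shape below is an assumption for illustration):

```python
# A relation event qualifies for soft-delete when its start and end
# objects are identical (a "looping" relation).
def is_looping_relation(event):
    rel = event["relation"]
    return (event["type"] in ("RELATIONSHIP_CREATED", "RELATIONSHIP_CHANGED")
            and rel["startObject"]["uri"] == rel["endObject"]["uri"])

event = {"type": "RELATIONSHIP_CHANGED",
         "relation": {"startObject": {"uri": "entities/15hgG6nP"},
                      "endObject": {"uri": "entities/15hgG6nP"}}}
print(is_looping_relation(event))  # prints True
```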

Flow diagram

\"\"

Steps


Triggers

Trigger actionComponentActionDefault time
IN Events incoming mdm-callback-service:DanglingAffiliationsStream (callback package)Process events for relations and calculate soft-delete requests and publish to the next processing stage. realtime - events stream

Dependent components

ComponentUsage
Callback ServiceMain component with flow implementation
PublisherEvents publisher generates incoming events
ManagerAsynchronous process of generated events
" + }, + { + "title": "Derived Addresses Callback", + "pageID": "294677441", + "pageLink": "/display/GMDM/Derived+Addresses+Callback", + "content": "

Description

The Callback is a tool for rewriting an Address from HCO to HCP, connected to each other with some type of Relationship.

Sequence Diagram

\"\"

Flow

Process is a callback. It operates on four Kafka topics:

Steps

Algorithm has 3 stages:

  1. Stage I – Event Publisher
    1. Event Publisher routes all of the above event types to the ${env}-internal-callback-derived-addresses-in topic, with optional filtering by country/source.
  2. Stage II – Callback Service – Preprocessing Stage
    1. If event subType ~ HCP_*:
      1. pass the targetEntity URI to ${env}-internal-callback-derived-addresses-hcp4calc
    2. If event subtype ~ HCO_*:
      1. Find all ACTIVE relations of types ${walkRelationType} ending at this HCO in the entityRelations collection.
      2. Extract URIs of all HCPs at the starts of these relations and send them to the topic ${env}-internal-callback-derived-addresses-hcp4calc
    3. If event subtype ~ RELATIONSHIP_*:
      1. Find the relation by URI in the entityRelations collection.
      2. Check if the relation type matches the configured ${walkRelationType}
      3. Extract the URI of the startObject (HCP) and send it to the topic ${env}-internal-callback-derived-addresses-hcp4calc
  3. Stage III – Callback Service – Main Stage
    1. Input is HCP URI.
    2. Find HCP by URI in entityHistory collection.
    3.  Check:
      1. If we cannot find entity in entityHistory, log error and skip
      2. If found entity has other type than “configuration/entityTypes/HCP”, log error and skip
      3. If entity has status LOST_MERGE/DELETED/INACTIVE, skip
    4. In entityHistory, find all relations of types ${walkRelationType} starting at this HCP, extract HCO at the end of relation
    5. For each extracted HCO (Hospital) do:
      1. Find HCO in entityHistory collection
      2. Wrap HCO Addresses in a Create HCP Request:
        1. Rewrite all sub-attributes from each ov==true Hospital’s Address
        2. Add attributes from ${staticAddedFields}, according to strategy: overwrite or underwrite (add if missing)
        3. Add the required Country attribute (rewrite from HCP)
        4. Add two crosswalks:
          1. Data provider ${hubCrosswalk} with value: ${hcpId}_${hcoId}.
          2. Contributor provider Reltio type with HCP uri.
        5. Send the Create HCP Request to Manager through the bundle topic
    6. If HCP has a crosswalk of type and sourceTable as below:

      type: ${hubCrosswalk.type}
      sourceTable: ${hubCrosswalk.sourceTable}
      value: ${hcpId}_${hcoId}

      but its hcoUri suffix does not match any Hospital found, send request to delete the crosswalk to MDM Manager.

Configuration

Following configurations have to be made (examples are for GBL tenants).

Callback Service

Add and handle following section to CallbackService application.yml in GBL:

callback:
...
  derivedAddresses:
    enabled: true
    walkRelationType: 
      - configuration/relationTypes/HasHealthCareRole
    hubCrosswalk:
      type: HUB_Callback
      sourceTable: DerivedAddresses
    staticAddedFields:
      - attributeName: AddressType
        attributeValue: TYS.P
        strategy: over
    inputTopic: ${env}-internal-callback-derived-addresses-in
    hcp4calcTopic: ${env}-internal-callback-derived-addresses-hcp4calc
    outputTopic: ${env}-internal-derived-addresses-hcp-create
    cleanerTopic: ${env}-internal-async-all-cleaner-callbacks

Since we are adding a new crosswalk, cleaning of which will be handled by the Derived Addresses callback itself, we should exclude this crosswalk from the Crosswalk Cleaner config (similar to HcoNames one):

callback:
  crosswalkCleaner:
    ...
    hardDeleteCrosswalkTypes:
      ...
      exclude:
        - type: configuration/sources/HUB_Callback
          sourceTable: DerivedAddresses

Manager

Add below to the MDM Manager bundle config:

bundle:
...
  inputs:
...
    - topic: "${env}-internal-derived-addresses-hcp-create"
      username: "mdm_callback_service_user"
      defaultOperation: hcp-create


Check DQ Rules configuration.

Event Publisher

Routing rule has to be added:

- id: derived_addresses_callback
  destination: "${env}-internal-derived-addresses-in"
  selector: "(exchange.in.headers.reconciliationTarget==null)
              && exchange.in.headers.eventType in ['simple']
              && exchange.in.headers.country in ['cn']
              && exchange.in.headers.eventSubtype in ['HCP_CREATED', 'HCP_CHANGED', 'HCO_CREATED', 'HCO_CHANGED', 'HCO_REMOVED', 'HCO_INACTIVATED', 'RELATIONSHIP_CREATED', 'RELATIONSHIP_CHANGED', 'RELATIONSHIP_REMOVED']"

Dependent Components

ComponentUsage
Callback ServiceMain component with flow implementation
ManagerProcessing HCP Create, Crosswalk Delete operations
Event PublisherGeneration of incoming events
" + }, + { + "title": "HCONames Callback for IQVIA model", + "pageID": "164469742", + "pageLink": "/display/GMDM/HCONames+Callback+for+IQVIA+model", + "content": "

Description

The HCO Names callback is responsible for calculating HCO Names. First, events are filtered and deduplicated, and the list of impacted HCPs is evaluated. Then the new HCO Names are calculated. Finally, if an update is needed, the updates are sent for asynchronous processing under the HUB Callback Source.

Flow diagram

\"\"

Steps

1. Impacted HCP Generator

  1. Listen for the events on the ${env}-internal-callback-hconame-in topic.
  2. Filter out against the list of predefined countries (AI, AN, AG, AR, AW, BS, BB, BZ, BM, BO, BR,
    CL, CO, CR, CW, DO, EC, GT, GY, HN, JM, KY, LC,
    MX, NI, PA, PY, PE, PN, SV, SX, TT, UY, VG, VE).
  3. Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED,
    RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED).
  4. Split into two following branches. Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc.

Entity Event Stream

1. extract the "Name" attribute from the target entity.

2. reject the event if "Name" does not exist

3. check if there was already a record with the identical Key + Name pair (a duplicate)

4. reject the duplicate

5. find the list of impacted HCPs based on the key

6. return a flat stream of the key and the list
e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)

Relation Event Stream

1. map the Event to RelationWrapper(type, uRI, country, startURI, endURI, active, startObjectType, endObjectType)

2. reject if any of fields missing

3. check if there was already a record with the identical Key + Name pair (a duplicate)

4. reject the duplicate

5. find the list of impacted HCPs based on the key

6. return a flat stream of the key and the list
e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)

2. HCO Names Update Stream

  1. Listen for the events on the ${env}-internal-callback-hconame-hcp4calc.
  2. The incoming list of HCPs is passed to the calculator (described below).
  3. The HcoMainCalculatorResult contains hcpUri, a list of entityAddresses and the mainWorkplaceUri (to update)
  4. The result is mapped to the RelationRequest.
  5. The RelationRequest is sent to the "${env}-internal-hconames-rel-create" topic.

3. HCP Calc Algorithm

calculate HCO Name

  1. HCOL1: get HCO from mongo where uri equals HCP.attributes.Workplace.refEntity.uri
  2. return HCOL1.Name

calculate MainHCOName

  1. get all target HCOs for relations (parameter traverseRelationTypes) where the start object id equals the HCOL1 uri.
  2. for each target HCO (curHCO) do
    1. if target HCO is last in hierarchy then
      1. return HCO.attributes.Name
    2. else if target HCO.attributes.TypeCode.lookupCode is on the configured list defined by parameter mainHCOTypeCodes for selected country
      1. return HCO.attributes.Name
    3. else if target HCO.attributes.Taxonomy.StrType.lookupCode is on the configured list defined by parameter mainHCOStructurTypeCodes for selected country
      1. return HCO.attributes.Name
    4. else if target HCO.attributes.ClassofTradeN.FacilityType.lookupCode is on the configured list defined by parameter mainHCOFacilityTypeCodes for selected country
      1. return HCO.attributes.Name
    5. else
      1. get all target HCO when start object id is curHCO.uri (recursive call)
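The recursive MainHCOName traversal above can be sketched as follows. All names here are assumptions for illustration: `Hco` is a stand-in for the entity record, the `hierarchy` map plays the role of the relation lookup driven by `traverseRelationTypes`, and only the type-code stop condition is shown (the Taxonomy and ClassofTradeN checks would be analogous branches).

```java
import java.util.List;
import java.util.Map;

// Sketch of the recursive MainHCOName lookup described in the steps above.
public class MainHcoCalculator {
    record Hco(String uri, String name, String typeCode) {}

    // hierarchy: start HCO uri -> target HCOs one level up the affiliation chain
    public static String mainHcoName(Hco hco, Map<String, List<Hco>> hierarchy,
                                     List<String> mainHcoTypeCodes) {
        List<Hco> targets = hierarchy.getOrDefault(hco.uri(), List.of());
        for (Hco target : targets) {
            boolean lastInHierarchy = hierarchy.getOrDefault(target.uri(), List.of()).isEmpty();
            if (lastInHierarchy || mainHcoTypeCodes.contains(target.typeCode())) {
                return target.name();   // STOP conditions from the algorithm
            }
            String name = mainHcoName(target, hierarchy, mainHcoTypeCodes); // recursive call
            if (name != null) return name;
        }
        return null;  // no affiliated HCOs above this one
    }
}
```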

update HCP addresses

  1. find address in HCP.attributes.Address when Address.refEntity.uri=HCOL1.uri
  2. if found and address.HCOName<>calcHCOName or address.MainHcoName<>calcMainHCOName then
  3. create/update HasAddress relation using HUBCallback source

Triggers


Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service:HCONamesUpdateStream (callback package) | Evaluates the list of affected HCPs and, based on that, sends HCO updates when needed | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Asynchronous processing of generated events
Hub Store | Cache




" + }, + { + "title": "HCONames Callback for COMPANY model", + "pageID": "243863711", + "pageLink": "/display/GMDM/HCONames+Callback+for+COMPANY+model", + "content": "

Description

HCONames Callback for COMPANY data model differs from the one for IQVIA model.

Callback consists of two stages: preprocessing and main processing. Main processing stage takes in HCP URIs, so the preprocessing stage logic extracts such affected HCPs from HCO, HCP, RELATIONSHIP events.

During main processing, Callback calculates trees whose nodes are HCOs (the tree root is always the input HCP) and whose edges are Relationships. HCOs and MainHCOs are extracted from this tree. MainHCOs are chosen according to a business specification from the Callback config. Direct Relationships from HCPs to MainHCOs are created (or cleaned if no longer applicable). If any of the HCP's Addresses matches an HCO/MainHCO Address, an appropriate sub-attribute is added to this Address.

Algorithm

Stage I - preprocessing

Input topic: ${env}-internal-callback-hconame-in

Input event types:

For each HCO event from the topic:

  1. Deduplicate events by key (deduplication window size is configurable),
  2. using MongoDB entityRelations collection, build maximum dependency tree (recursive algorithm) consisting of HCPs and HCOs connected with:
    1. relations of type equal to hcoHcoTraverseRelationTypes from configuration,
    2. relations of type equal to hcoHcpTraverseRelationTypes from configuration,
  3. return all HCPs from the dependency tree (all visited HCPs),
  4. generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).

For each RELATIONSHIP event from the topic:

  1. Deduplicate events by key (deduplication window size is configurable),
  2. if relation's startObject is HCP:
    1. add HCP's entityURI to result list,
  3. if relation's startObject is HCO: 
    1. similarly to HCO events preprocessing, build dependency tree and return all HCPs from the tree. HCP URIs are added to the result list,
  4. for each HCP on the result list, generate an event and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).

For each HCP event from the topic:

  1. Deduplicate events by key (deduplication window size is configurable),
  2. generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).
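The Stage I dependency-tree walk common to the HCO and RELATIONSHIP branches can be sketched like this. It is a simplified stand-in: the `Relation` record and the `"HCP/"` uri-prefix convention are assumptions, and the in-memory adjacency map replaces the MongoDB entityRelations lookup filtered by the configured traverse relation types.

```java
import java.util.*;

// Sketch of Stage I: starting from a changed HCO, walk the relation graph and
// collect every visited HCP (to be sent to the hcp4calc topic).
public class DependencyTreeWalker {
    record Relation(String fromUri, String toUri) {}

    public static Set<String> impactedHcps(String startUri, List<Relation> relations) {
        Map<String, List<String>> adjacency = new HashMap<>();
        for (Relation r : relations) {            // treat relations as undirected for traversal
            adjacency.computeIfAbsent(r.fromUri(), k -> new ArrayList<>()).add(r.toUri());
            adjacency.computeIfAbsent(r.toUri(), k -> new ArrayList<>()).add(r.fromUri());
        }
        Set<String> visited = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>(List.of(startUri));
        Set<String> hcps = new LinkedHashSet<>();
        while (!stack.isEmpty()) {
            String uri = stack.pop();
            if (!visited.add(uri)) continue;       // skip already-visited nodes (cycles)
            if (uri.startsWith("HCP/")) hcps.add(uri);  // assumption: HCP uris are tagged
            stack.addAll(adjacency.getOrDefault(uri, List.of()));
        }
        return hcps;
    }
}
```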

Stage II - main processing

Input topic: ${env}-internal-callback-hconame-hcp4calc

For each HCP from the topic:

  1. Deduplicate by entity URI (deduplication window size is configurable),
  2. fetch current state of HCP from MongoDB, entityHistory collection,
  3. traversing by HCP-HCO relation type from config, find all affiliated HCOs with "CON" descriptors,
  4. traversing by HCO-HCO relation type from config, find all affiliated HCOs with MainHCO: "REL.MAI" or "REL.HIE" descriptors,
  5. from the "CON" HCO list, find all MainHCO candidates - MainHCO candidate must pass the configured specification. Below is MainHCO spec in EMEA PROD:
    \"\"
  6. if not yet existing, create new HcoNames relationship to MainHCO candidates by generating a request and sending to Manager async topic: ${env}-internal-hconames-rel-create,
  7. if existing, but not on candidates list, delete the relationship by generating a request and sending to Manager async topic: ${env}-internal-async-all-cleaner-callbacks,
  8. if one of input HCP's Addresses matches HCO Address or MainHCO Address, generate a request adding "HCO" or "MainHCO" sub-attribute to the Address and send to Manager async topic: ${env}-internal-hconames-hcp-create.

Processing events

\"\"

1. Find Impacted HCP

  1. Listen for the events on the ${env}-internal-callback-hconame-in topic.
  2. Filter out against the list of predefined countries (GB, IE).
  3. Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED,
    RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED).
  4. Split into two following branches. Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc.

Entity Event Stream

1. Extract the "Name" attribute from the target entity.

2. Reject the event if "Name" does not exist.

3. Check if there was already a record with the identical Key + Name pair (a duplicate).

4. Reject the duplicate.

5. Find the list of impacted HCPs based on the key.

6. Return a flat stream of the key and the list,
e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 returns (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)

Relation Event Stream

1. Map the Event to RelationWrapper(type, uri, country, startURI, endURI, active, startObjectType, endObjectType).

2. Reject the event if any of these fields is missing.

3. Check if there was already a record with the identical Key + Name pair (a duplicate).

4. Reject the duplicate.

5. Find the list of impacted HCPs based on the key.

6. Return a flat stream of the key and the list,
e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 returns (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)

2. Select HCOs affiliated with HCP

  1. Listen for incoming list of HCPs on the ${env}-internal-callback-hconame-hcp4calc.
  2. For each HCP a list of affiliated HCOs is retrieved from a database. HCP-HCO relation is based on type:
    configuration/relationTypes/ContactAffiliations
    and description:
    "CON"

3. Find Main HCO traversing HCO-HCO hierarchy

  1. For each HCO from the list of selected HCOs above a list of HCO is retrieved from the database.  HCO-HCO relation is based on type:
    configuration/relationTypes/OtherHCOtoHCOAffiliations
    and description:
    "RLE.MAI", "RLE.HIE"
    The step is being repeated recursively until there are no affiliated HCOs or the Subtype matches the one provided in configuration.
    mainHcoIndicator.subTypeCode (STOP condition)
  2. The result is being mapped to the RelationRequest
  3.  The RelationRequest is generated to the "${env}-internal-hconames-rel-create" topic.

4. Populate HcoName / Main HCO Name in HCP addresses if required 

  1. So far there are two HCO lists: HCOs affiliated with HCP and Main HCOs.
  2. There is a check whether the HCP fields HCOName and MainHCOName (which are also two lists) match the HCO names.
  3. If not, then the HCP update event is being generated.
  4. Address is a nested attribute in the model.
    Matching by uri must be replaced by matching by a key on attribute values.
    The match key will include AddressType, AddressLine1, AddressLine2, City, StateProvinance, Zip5.
    The same key is configured in Reltio for address deduping.
    Changes to the address key in Reltio must be consulted with the HUB team.

    The target attributes in addresses will be populated by creating a new HCP address having the same match key + HCOName and MainHCOName via the HubCallback source. Reltio will match the new address with the existing one based on the match key.

    Each HCP address will have its own HUBCallback crosswalk {type=HUB_Callback, value={Address Attribute URI}, sourceTable=HCO_NAME}


5. Create HCP -> Main HCO affiliation if it does not exist 

  1. Also there's a check if the HCP outgoing relations point to Main HCOs. Only relations with the type 
    "configuration/relationTypes/ContactAffiliations"
    and description
    "MainHCO"
     are being considered.
  2. Missing relations are created and no-longer-appropriate ones are removed.
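The reconciliation in step 2 boils down to two set differences: candidates without an existing relation must be created, and existing relations that are no longer candidates must be deleted. A minimal sketch (all names illustrative):

```java
import java.util.Set;
import java.util.TreeSet;

// Sketch: compare existing MainHCO relationships with the freshly computed
// candidate list and derive the create/delete plan described above.
public class MainHcoRelationReconciler {
    public record Plan(Set<String> toCreate, Set<String> toDelete) {}

    public static Plan reconcile(Set<String> existingMainHcoUris, Set<String> candidateUris) {
        Set<String> create = new TreeSet<>(candidateUris);
        create.removeAll(existingMainHcoUris);   // candidates with no relation yet
        Set<String> delete = new TreeSet<>(existingMainHcoUris);
        delete.removeAll(candidateUris);         // relations no longer applicable
        return new Plan(create, delete);
    }
}
```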


Data model

 \"\"

Dependencies

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Routes incoming events
Manager | Async processing of generated requests
" + }, + { + "title": "NotMatch Callback", + "pageID": "164469859", + "pageLink": "/display/GMDM/NotMatch+Callback", + "content": "

Description

The NotMatch callback was created to clear the potential match queue of suspect matches when a linkage has been created by the DerivedAffiliations batch process. During this batch process, affiliations are created between COV and ONEKEY HCO objects. The potential match queue is not cleared, which impacts the Data Steward process because the DS does not know which matches have to be processed through the UI. The potential match queue is cleared during RELATIONSHIP event processing using the "NotMatch callback" process. The process invokes the _notMatch operation in MDM and removes these matches from Reltio. All "_notMatch" matches are visible in the UI in the "Potential Matches" > "Not a Match" tab. 

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service:PotentialMatchLinkCleanerStream | processes relationship events in streaming mode and sets _notMatch in MDM | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Reltio Adapter for _notMatch operation in asynchronous mode
Hub Store | Matches Store
Hub StoreMatches Store
" + }, + { + "title": "PotentialMatchLinkCleaner Callback", + "pageID": "302702435", + "pageLink": "/display/GMDM/PotentialMatchLinkCleaner+Callback", + "content": "

Description

Algorithm

Callback accepts relationship events - this is configurable, usually:

For each event from inbound topic (${env}-internal-callback-potential-match-cleaner-in):

  1. event is filtered by eventType (acceptedRelationEventTypes list in configuration),
  2. event is filtered by relationship type (acceptedRelationObjectTypes list in configuration),
  3. extract startObjectURI and endObjectURI from event targetRelation,
  4. search MongoDB, collection entityMatchesHistory, for records having both URIs in matches and having same matchType (matchTypesInCache list in configuration),
  5. if a record is found in the cache, check whether it has already been sent (boolean field in the document),
  6. if the record has not yet been sent, generate an EntitiesNotMatchRequest containing two fields:
    • sourceEntityURI,
    • targetEntityURI,
  7. add the operation header and send the Request to Manager.
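The "send only once" part of steps 5-7 can be sketched as below. This is an assumed simplification: the in-memory set stands in for the boolean "sent" flag stored on the MongoDB entityMatchesHistory document, and the request record mirrors the two fields named above.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: only generate a not-match request the first time a cached match pair is seen.
public class NotMatchRequestGenerator {
    record EntitiesNotMatchRequest(String sourceEntityURI, String targetEntityURI) {}

    // stand-in for the "already sent" boolean field persisted in MongoDB
    private final Set<String> alreadySent = new HashSet<>();

    public EntitiesNotMatchRequest maybeGenerate(String startObjectURI, String endObjectURI) {
        String pairKey = startObjectURI + "|" + endObjectURI;
        if (!alreadySent.add(pairKey)) return null;   // already sent, skip
        return new EntitiesNotMatchRequest(startObjectURI, endObjectURI);
    }
}
```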

Dependencies

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Routes incoming events
Manager | Async processing of generated requests
" + }, + { + "title": "PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType)", + "pageID": "164469756", + "pageLink": "/pages/viewpage.action?pageId=164469756", + "content": "

Description

The main part of the process is responsible for setting up the Rank attributes on specific Attributes in Reltio. Based on the input JSON events, the difference between the RAW entity and the Ranked entity is calculated, and the changes are shared through the asynchronous topic to Manager. Only events that contain no changes are published to the next processing stage, which limits the number of events sent to external Clients. Only data that is ranked and contains the correct callback is shared further. During processing, if changes are detected, the main events are skipped and a callback is executed. This causes the generation of new events in Reltio and the next calculation. The next calculation should detect 0 changes, but the process could otherwise fall into an infinite loop. To prevent such a situation, an MD5 checksum is implemented on the Entity and AttributeUpdate request. 
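The MD5 loop guard could look roughly like the following minimal sketch. Class and method names are illustrative, not the actual HUB implementation: the idea is to remember the checksum of the last update request per entity and skip a request whose checksum is identical, so a callback cannot re-trigger itself forever.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Sketch of the MD5-based loop guard described above.
public class ChecksumGuard {
    private final Map<String, String> lastChecksum = new HashMap<>();

    static String md5(String payload) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(payload.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // MD5 is always available in the JDK
        }
    }

    /** Returns true when the request differs from the last one sent for this entity. */
    public boolean shouldSend(String entityUri, String requestPayload) {
        String checksum = md5(requestPayload);
        return !checksum.equals(lastChecksum.put(entityUri, checksum));
    }
}
```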

The PreCallback is the setup with the chain of responsibility with the following steps:

  1. Enricher Processor - enrich the object with the RefLookup service
  2. MultMergeProcessor - change the ID of the main entity to the loser Id when the Main Entity is different from the Target Entity - it means that a merge happened between the timestamp when Reltio generated the EVENT and when HUB retrieved the Entity from Reltio. In that case the outcome entity contains 3 IDs <New Winner, Old Winner as loser, loser>
  3. RankSorters - calculate rankings - transform the entity with correct Ranks attributes
  4. Based on the calculated rank, generate pre-callback events that will be sent to Manager
  5. Global COMPANY ID callback - generation of changes on COMPANYGlobalCustomerIDs <if required when there is a need to fix the ID>
  6. Canada Micro-Bricks - autofill Canada Micro-Bricks
  7. HCPType Callback - calculate the HCPType attribute based on Speciality and SubTypeCode canonical Reltio codes. 
  8. Cleaner Processor - clean reference attributes enriched in the first step (saved in mongo only when cleanAdditionalRefAttributes is false)
  9. Inactivation Generator - generation of inactivated events (for each changed event)
  10. OtherHCOtoHCOAffiliations Rankings - generation of the event to the full-delay topic to process Ranking changes on relationship objects 


Flow diagram

\"\"


Steps


Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service:PrecallbackStream (precallback package) | Processes full events, executes ranking services, generates callbacks, and publishes calculated events to the EventPublisher component | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Entity Enricher | Generates incoming full events
Manager | Processes callbacks generated by this service
Hub Store | Cache-Store
" + }, + { + "title": "Global COMPANY ID callback", + "pageID": "218447103", + "pageLink": "/display/GMDM/Global+COMPANY+ID+callback", + "content": "

The process provides a unique Global COMPANY ID to each entity. The current solution on the Reltio side overwrites an entity's Global COMPANY ID when it loses a merge. 

The Global COMPANY ID pre-callback solution was created to keep the Global COMPANY Id unique per entity_uri.

To fulfill the requirement a solution based on COMPANY Global ID Registry is prepared. It includes elements like below:

  1. Modification on Orchestrator/Manager side - during the entity creation process
  2. Creation of COMPANYGloballId Pre-callback 
  3. Modification on entity history to enrich search process


Logical Architecture

\"\"


Modification on Orchestrator/Manager side - during the entity creation process

  1. Process description
    1. The request is sent to the HUB Manager - it may come from each source allowed. Like ETL loading or direct channel. 
    2. The getCOMPANYIdOrRegister service is called and the entityURI with COMPANYGlobalId is stored in the COMPANYIdRegistry 
  2. From an external system point of view, the response to a client is modified. COMPANY Global Id is a part of the main attributes section in the JSON file (not in a nest). 
    1. The response contains entries with OV true and false

{
    "uri": "entities/19EaDJ5L",
    "status": "created",
    "errorCode": null,
    "errorMessage": null,
    "COMPANYGlobalCustomerID": "04-125652694",
    "crosswalk": {
        "type": "configuration/sources/RX_AUDIT",
        "value": "test1_104421022022_RX_AUDIT_1",
        "deleteDate": ""
    }
}



{
    "uri": "entities/entityURI",
    "type": "configuration/entityTypes/HCP",
    "createdBy": "username",
    "createdTime": 1000000000000,
    "updatedBy": "username",
    "updatedTime": 1000000000000,

"attributes": {
        "COMPANYGlobalCustomerID": [
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": true,
                "value": "04-111855581",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrkG2D"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-123653905",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrosrm"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-124022162",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrhcNY"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-117260591",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrnM10"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-129895294",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1mrOsvf6P"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-112615849",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2ZNzEowk3"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-111851893",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2LG7Grmul"
            }
        ],


3. How to store GlobalCOMPANYId process diagram - business level.

\"\"


Creation of COMPANYGlobalId Pre-callback

A publisher event model is extended with two new values:

    1. COMPANYGlobalCustomerIDs - a list of IDs. For some merge events there are two entityURI IDs. The order of the IDs must match the order of the IDs in the entitiesURIs field.
    2. parentCOMPANYGlobalCustomerID - it has a value only for the LOST_MERGE event type. It contains the winner entityURI.

data class PublisherEvent(val eventType: EventType?,
                          val eventTime: Long? = null,
                          val entityModificationTime: Long? = null,
                          val countryCode: String? = null,
                          val entitiesURIs: List<String> = emptyList(),
                          val targetEntity: Entity? = null,
                          val targetRelation: Relation? = null,
                          val targetChangeRequest: ChangeRequest? = null,
                          val dictionaryItem: DictionaryItem? = null,
                          val mdmSource: String?,
                          val viewName: String? = DEFAULT_VIEW_NAME,
                          val matches: List<MatchItem>? = null,
                          val COMPANYGlobalCustomerIDs: List<String> = emptyList(),
                          val parentCOMPANYGlobalCustomerID: String? = null,
                          @JsonIgnore
                          val checksumChanged: Boolean = false,
                          @JsonIgnore
                          val isPartialUpdate: Boolean = false,
                          @JsonIgnore
                          val isReconciliation: Boolean = false
)


Changes were made in the entityHistory collection on the MongoDB side.

For each object in the collection, we also store the COMPANYGlobalCustomerID:

Additionally, new fields are stored in the Snowflake structure in %_HCP and %_HCO views in the CUSTOMER_SL schema, like:

From an external system point of view, those internal changes are prepared to make the GlobalCOMPANYID field unique.

In case of overwriting the GlobalCOMPANYID on the Reltio MDM side (lost merge), the pre-callback's main task is to search for the original value in the COMPANYIdRegistry. It will then insert this value into the entity in Reltio MDM that has been overwritten due to the lost merge.

Process diagram:

\"\" 


Search LOST_MERGE entity with its first Global COMPANY ID

Process diagram:

\"\"

Process description:

  1. MDM HUB gets SEARCH calls from an external system. The search parameter is the Global COMPANY ID.
  2. Verify the entity status.  
  3. If the entity status is 'LOST_MERGE', then replace COMPANYGlobalCustomerId with parentCOMPANYGlobalCustomerId in the search request
  4. Make a search call in Reltio with the enriched data


Dependent components

" + }, + { + "title": "Canada Micro-Bricks", + "pageID": "250138445", + "pageLink": "/display/GMDM/Canada+Micro-Bricks", + "content": "

Description

The process was designed to auto-fill the Micro Brick values on Addresses for Canadian market entities. The process is based on event streaming: the main event is recalculated based on the current state, and by comparison against the current mapping file the changes are generated. The generated change (partial event) updates Reltio, which leads to another change. Only when the entity is fully updated is the main event published to the output topic and processed in the next stage in the event publisher. The process also registers Changelog events on the topic. The Changelog events are saved only when the state of the entity is not partial. The Changelog events are required by the ReloadService, which is triggered by the Airflow DAG. Business users may change the mapping file; this triggers the reload process, the changelog events are processed, and the updates are generated in Reltio.

For Canada, we created a new brick type "Micro Brick" and implemented a new pre-callback service to populate the brick codes based on the postal code mapping file:

The mapping file will be delivered monthly, usually with no change. However, 1-2 times a year the Business will go through a re-mapping exercise that could cause significant change. Also, a few minor changes may happen (e.g., adding a new pair). 

A monthly change process will be added to the Airflow scheduler as a DAG. This DAG will be scheduled and will generate the export from Snowflake; when there are mapping changes, changelog events will trigger updates to the existing MicroBrick codes in Reltio. 

A new BrickType code has been added for Micro Brick - "UGM"

Flow diagram

Logical Architecture

\"\"


PreCallback Logic

\"\"


Reload Logic

\"\"

Steps

Overview Reltio attributes


Brick:

"uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Brick",

Brick Type:

RDM: A new BrickType code has been added for Micro Brick - "UGM"

"uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Brick/attributes/Type",
"lookupCode": "rdm/lookupTypes/BrickType",

Brick Value:

"uri": "configuration/entityTypes/HCO/attributes/Addresses/attributes/Brick/attributes/Value",
"lookupCode": "rdm/lookupTypes/BrickValue",

PostalCode:

"uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5",


Canada postal codes format:

e.g: K1A 0B1



PreCallback Logic

Flow:

  1. Activation:
    1. Check if the feature flag activation is true and the acceptedCountries list contains the entity country
    2. Take into account only the CHANGED and CREATED events in this pre-callback implementation
  2. Steps:
    1. For each address in the entity check:
      1. Check if the Address contains BrickType= microBrickType and BrickValue!=null and PostalCode!=null
        1. Check if PostalCode is in the micro-bricks-mapping.csv file
          1. if true compare
            1. if different generate UPDATE_ATTRIBUTE
            2. if in sync add AddressChange with all attributes to MicroBrickChangelog
          2. if false compare BrickValue with “numberOfPostalCodeCharacters” from PostalCode
            1. if different generate UPDATE_ATTRIBUTE
            2. if in sync add AddressChange with all attributes to MicroBrickChangelog
      2. Check if Address does not contain BrickType= microBrickType and BrickValue==null and PostalCode !=null
        1. check if PostalCode is in the micro-bricks-mapping.csv file
          1. if true generate INSERT_ATTRIBUTE
          2. if false get “numberOfPostalCodeCharacters” from PostalCode and generate INSERT_ATTRIBUTE


  1. After the Addresses array is checked, the main event is blocked when partial. The main event is forwarded only when there are 0 changes
    1. if there are changes, send a partialUpdate and skip the main event, depending on forwardMainEventsDuringPartialUpdate
    2. if there are 0 changes, send the MainEvent and push the MicroBrickChangelog to the changelog topic

Note: The service has 2 roles. The main role is to check the PostalCode of each address against the mapping file and generate MicroBrick changes (INSERT (initial), UPDATE (changes)). The second role is to push MicroBrickChangelog events when 0 changes are detected; this flow should keep the changelog topic in sync with all changes happening in Reltio (address added/removed/changed). Because the ReloadService works on these changelog events and requires the exact URI of the BrickValue, this service needs to push all MicroBrickChangelog events with calculatedMicroBrickUri, calculatedMicroBrickValue, and the current postalCode value for the specific address represented by the address URI.



Reload Logic (Airflow DAG)

Flow: 

  1. Activation
    1. Business users make changes on the Snowflake side to micro bricks mapping.
  2. Steps
    1. DAG is scheduled once a month and process changes made by the Business users, this triggers the Reload Logic on Callback-Service components
    2. Get changes from snowflake and generate the micro-bricks-mapping.csv file
    3. If there are 0 changes END the process
    4. If there are changes in the micro-bricks-mapping.csv file, push the changes to Consul. Load the current Configuration to GIT and push micro-bricks-mapping.csv to Consul.
    5. Trigger an API call on Callback-Service to reload the Consul configuration - this causes the Pre-Callback processors and the ReloadService to use the new mapping file. Only after this operation is successful go to the next step:
    6. Copy events from current topic to reload topic using temporary file
      1. Note: the micro-brick process is divided into 2 steps 
        1. Pre-Callback generated ChangeLog events to the $env-internal-microbricks-changelog-events
        2. Reload service is reading the events from $env-internal-microbricks-changelog-reload-events
      2. The main goal here is to copy events from one topic to another using the Kafka Console Producer and Consumer. The Kafka Console Consumer writes all events to a temporary file; the Consumer has to poll all events and wait 2 min until no new events appear in the topic. After this time the Kafka Console Producer sends all events to the target topic.
    7. After events are in the target $env-internal-microbricks-changelog-reload-events topic the next step described below starts automatically. 

Reload Logic (Callback-Service)

Flow:

  1. Activation:
    1. Callback-Service exposes an API to reload the Consul Configuration - because these changes are made at most once per month, there is no need to schedule this process internally in the service. The reload is made by the DAG and reloads the mapping file inside callback-service.
    2. Only after the Consul Configuration is reloaded are the events pushed from $env-internal-microbricks-changelog-events to $env-internal-microbricks-changelog-reload-events.
    3. This triggers the MicroBrickReloadService because it is based on Kafka Streams - the service subscribes to events in real time
  2. Steps:
    1. New events to the $env-internal-microbricks-changelog-reload-events will trigger the following:
    2. Kafka Stream consumer that will read the changelogTopic
    3. For each MicroBrickChangelog event check:
      1. for each address in addresses changes check:
        1. check if PostalCode is in the micro-bricks-mapping.csv file
          1. if true and the current mapping value is different than calculatedMicroBrickValue  → generate UPDATE_ATTRIBUTE
          2. if false and calculatedMicroBrickValue is different than “numberOfPostalCodeCharacters” from PostalCode → generate UPDATE_ATTRIBUTE
      2. Gather all changes and push them to the $env-internal-async-all-bulk-callbacks


The reload is required because it may happen that:


Note: The data model requires the calculatedMicroBrickUri because we need to trigger UPDATE_ATTRIBUTE on the specified BrickValue on a specific Address, so an exact URI is required to work properly with the Reltio UPDATE_ATTRIBUTE operation. Only INSERT_ATTRIBUTE requires the URI only on the address attribute; the body will contain BrickType and BrickValue (this insert is handled in the pre-callback implementation). The changes made by the ReloadService will generate further changes after the mapping file is updated. Once we trigger this event, Reltio will generate the change, and this change will be processed by the pre-callback service (MicroBrickProcessor). The result of this processor will be no-change-detected (entity and mapping file are in sync) and new CHANGELOG event generation. It may happen that during a ReloadService run new Changelog events are constantly generated, but this does not impact the current process because events from the original topic are moved to the target topic by the manual copy during reloading. Additionally, the 24h compaction window on Kafka will overwrite old changes with new changes generated from the pre-callback, so after this time we will have only the newest event per key on the Kafka topic, and these changes will be copied into the reload process after the next business change (1-2 times a year)


Attachment docs with more details:

IMPL:\"\" TEST:\"\"



Data Model and Configuration


ChangeLog Event
CHANGELOG Event:

Kafka KEY: entityUri

Body:
data class MicroBrickChangelog(
        val entityUri: String,
        val addressesChanges: List<AddressChange>,
)
data class AddressChange(
        val addressUri: String,
        val postalCode: String,
        val calculatedMicroBrickUri: String,
        val calculatedMicroBrickValue: String,
)




Triggers

Trigger action | Component | Action | Default time
IN Events incoming | Callback Service: Pre-Callback: Canada Micro-Brick Logic | Full events trigger the pre-callback stream and, during processing, partial events are processed with generated changes. If data is in sync, the partial event is not generated and the main event is forwarded to external clients | realtime - events stream
User - triggers a change in mapping | API: Callback-service - sync Consul Configuration; Pre-Callback: ReloadService - streaming | The business user changes the mapping file. The process refreshes the Consul store, copies data to the changelog topic, and this triggers real-time processing on the Reload service | Manual Trigger by Business User; realtime - events stream

Dependent components

Component / Usage

Callback Service: main component of the flow implementation
Entity Enricher: generates incoming full events
Manager: processes callbacks generated by this service
" + }, + { + "title": "RankSorters", + "pageID": "302687133", + "pageLink": "/display/GMDM/RankSorters", + "content": "" + }, + { + "title": "Address RankSorter", + "pageID": "164469761", + "pageLink": "/display/GMDM/Address+RankSorter", + "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. an Address provided by the &quot;Reltio&quot; source is higher in the hierarchy than an Address provided by the &quot;CRMMI&quot; source. Based on this configuration, each address will be sorted in the following order:

addressSource:
"Reltio": 1
"EVR": 2
"OK": 3
"AMPCO": 4
"JPDWH": 5
"NUCLEUS": 6
"CMM": 7
"MDE": 8
"LocalMDM": 9
"PFORCERX": 10
"VEEVA_NZ": 11
"VEEVA_AU": 12
"VEEVA_PHARMACY_AU": 13
"CRMMI": 14
"FACE": 15
"KOL_OneView": 16
"GRV": 17
"GCP": 18
"MAPP": 19
"CN3RDPARTY": 20
"Rx_Audit": 21
"PCMS": 22
"CICR": 23

Additionally, Address Rank Sorting is based on the following configuration:

addressType:
"[TYS.P]": 1
"[TYS.PHYS]": 2
"[TYS.S]": 3
"[TYS.L]": 4
"[TYS.M]": 5
"[Mailing]": 6
"[TYS.F]": 7
"[TYS.HEAD]": 8
"[TYS.PHAR]": 9
"[Unknown]": 10
addressValidationStatus:
"[STA.3]": 1
"[validated]": 2
"[Y]": 3
"[STA.0]": 4
"[pending]": 5
"[NEW]": 6
"[RNEW]": 7
"[selfvalidated]": 8
"[SVALD]": 9
"[preregister]": 10
"[notapplicable]": 11
"[N]": 97
"[notvalidated]": 98
"[STA.9]": 99
addressStatus:
"[VALD]": 1
"[ACTV]": 2
"[INAC]": 98
"[INVL]": 99


Address rank sort process operates under the following conditions:

First, before address ranking, the Affiliation RankSorter has to be executed. This is required to obtain the appropriate value of the Workplace.PrimaryAffiliationIndicator attribute

  1. Each address is sorted with the following rules:
    1. sort by the PrimaryAffiliationIndicator value. The address with "true" values is ranked higher in the hierarchy. The attribute used in this step is taken from the Workplace.PrimaryAffiliationIndicator
    2. sort by Validation Status (lowest rank from the configuration on TOP) - attribute Address.ValidationStatus
    3. sort by Status (lowest rank from the configuration on TOP) - attribute Address.Status
    4. sort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the Address.RefEntity.crosswalks, which means that each address is associated with the appropriate crosswalk and, based on the input configuration, the order is calculated.
    5. sort by Primary Affiliation (true value wins against false value) - attribute Address.PrimaryAffiliation
    6. sort by Address Type (lowest rank from the configuration on TOP) - attribute Address.AddressType
    7. sort by Rank (lowest rank on TOP) in ascending order 1 -> 99 - attribute Address.AddressRank
    8. sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute Address.RefEntity.crosswalks.updateDate
    9. sort by Label value alphabetically in ascending order A -> Z - attribute Address.label
  2. Sorted addresses are recalculated for the new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest.

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting process

When recalculated Address Rank has a value equal to "1" then BestRecord attribute is added with the value set to "true"


Address rank sort process fallback operates under the following conditions:

  1. During Validation Status sorting (1.b), when the ValidationStatus attribute is missing, the address is placed at position 90 (which means that an empty validation status ranks higher than e.g. the STA.9 status)
  2. During Status sorting (1.c), when the Status attribute is missing, the address is placed at position 90 (which means that an empty status ranks higher than e.g. the INAC status)
  3. When the source system name (1.d) is missing, the address is placed at position 99
  4. When the Address Type (1.e) is empty, the address is placed at position 99
  5. When the Rank (1.f) is empty, the address is placed at position 99
  6. For multiple Address Types on the same relation, the address with the higher rank is taken
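The sorting rules and fallbacks above can be sketched as one composite sort key. This is an illustrative Python sketch only; the real service is a JVM component reading Reltio attributes, and all field names and the flat-dict address shape here are simplifying assumptions:

```python
SOURCE_ORDER = {"Reltio": 1, "EVR": 2, "OK": 3}                    # excerpt of addressSource
VALIDATION_ORDER = {"[STA.3]": 1, "[validated]": 2, "[STA.9]": 99} # excerpt
STATUS_ORDER = {"[VALD]": 1, "[ACTV]": 2, "[INAC]": 98, "[INVL]": 99}
TYPE_ORDER = {"[TYS.P]": 1, "[TYS.PHYS]": 2, "[Unknown]": 10}      # excerpt

def sort_key(addr):
    # Tuple mirrors rules 1.a-1.i; missing ValidationStatus/Status fall back
    # to position 90, missing source/type/rank to 99, per the fallback section.
    return (
        not addr.get("primary_affiliation_indicator", False),  # "true" first
        VALIDATION_ORDER.get(addr.get("validation_status"), 90),
        STATUS_ORDER.get(addr.get("status"), 90),
        SOURCE_ORDER.get(addr.get("source"), 99),
        not addr.get("primary_affiliation", False),
        TYPE_ORDER.get(addr.get("address_type"), 99),
        addr.get("rank", 99),                                   # lowest rank on top
        -addr.get("update_ts", 0),                              # newest LUD first
        addr.get("label", ""),                                  # A -> Z fallback
    )

def rank_addresses(addresses):
    # Addresses with a crosswalk deleteDate are excluded before sorting;
    # ranks are then reassigned 1..n and rank 1 gets BestRecord = true.
    active = [a for a in addresses if not a.get("delete_date")]
    ranked = sorted(active, key=sort_key)
    for i, a in enumerate(ranked, start=1):
        a["rank"] = i
        if i == 1:
            a["best_record"] = True
    return ranked
```

The stable multi-key sort reproduces the "first rule wins, later rules break ties" behaviour described above.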


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*



" + }, + { + "title": "Addresses RankSorter", + "pageID": "164469759", + "pageLink": "/display/GMDM/Addresses+RankSorter", + "content": "

GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Address provided by source "ONEKEY" is higher in the hierarchy than the Address provided by "COV" source. Configuration is divided by country and source lists, for which this order is applicable.  Based on this configuration, each address will be sorted in the following order:

addressesSource:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"Reltio" : 1
"ONEKEY" : 2
"IQVIA_RAWDEA" : 3
"IQVIA_DDD" : 4
"HCOS" : 5
"SAP" : 6
"SAPVENDOR" : 7
"COV" : 8
"DVA" : 9
"ENGAGE" : 10
"KOL_OneView" : 11
"ONEMED" : 11
"ICUE" : 12
"DDDV" : 13
"MMIT" : 14
"MILLIMAN_MCO" : 15
"SHS": 16
"COMPANY_ACCTS" : 17
"IQVIA_RX" : 18
"SEAGEN": 19
"CENTRIS" : 20
"ASTELAS" : 21
"EMD_SERONO" : 22
"MAPP" : 23
"VEEVALINK" : 24
"VALKRE" : 25
"THUB" : 26
"PTRS" : 27
"MEDISPEND" : 28
"PORZIO" : 29

 Additionally, Addresses Rank Sorting is based on the following configuration:

addressType:
"[OFFICE]": 1
"[PHYSICAL]": 2
"[MAIN]": 3
"[SHIPPING]": 4
"[MAILING]": 5
"[BILLING]": 6
"[SOLD_TO]": 7
"[HOME]": 8
"[PO_BOX]": 9


Address rank sort process operates under the following conditions:

  1. Each address is sorted with the following rules:
    1. sort by address status (active addresses on top) - attribute Status (is Active)
    2. sort by the source order number from the input source order configuration (lowest rank from the configuration on TOP) - the source is taken from the last updated crosswalk (Addresses.RefEntity.crosswalks.updateDate) when multiple crosswalks come from the same source
    3. sort by DEA flag (HCP only with DEA flag set to true on top) - attribute DEAFlag
    4. sort by SingleAddressIndicator (true on top) - attribute SingleAddressInd
    5. sort by Source Rank (lowest rank on TOP) in ascending order 1 -> 99 - for ONEKEY the rank is calculated with a minus sign - attribute Source.SourceRank
    6. sort by address type of HCO and MCO only (lowest rank from the configuration on TOP) - attribute AddressType
    7. sort by COMPANYAddressId (addresses with this attribute are on top) - attribute COMPANYAddressID
  2. Sorted addresses are recalculated for new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRank

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting process


MORAWM03 explaining reverse rankings for ONEKEY Addresses:

Here is the clarification:


The minus rank applies only to the ONEKEY source and relates to the lowest-precedence address.


All other sources, different than ONEKEY, contain the normal SourceRank source precedence - it means that SourceRank 1 will be on top. We will sort the SourceRank attribute in ascending order 1 -> 99 (lowest source rank on TOP), so SourceRank 1 will be first, SourceRank 2 second, and so on.


Due to the ONEKEY data in the US - that rank code is a number from 10 to -10, with the larger number (i.e., 10) being the top-ranked - we have logic that applies the opposite ranking to the ONEKEY SourceRank attribute. We sort it in descending order …10 -> -10…, meaning that rank 10 will be on TOP (highest source rank on TOP).


We have reversed the SourceRank logic for ONEKEY; otherwise it led to the -10 SourceRank being ranked on TOP.

In the US, ONEKEY Addresses contain a minus sign and are ranked in descending order (10, 9, 8 … -1, -2 … -10).


I am sorry for the confusion in the previous explanation.


This opposite logic for ONEKEY SourceRank data is in:

Addresses: https://confluence.COMPANY.com/display/GMDM/Addresses+RankSorter
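Isolating just the SourceRank criterion described above, the reversed ONEKEY behaviour can be sketched as follows (illustrative Python; the tuple layout is an assumption, and in the real sorter this key is only one tie-breaker among several):

```python
def source_rank_key(source, source_rank):
    # ONEKEY US SourceRank runs 10 .. -10 with the larger number ranked
    # highest, so it is negated before the usual ascending sort; every
    # other source keeps plain ascending order (rank 1 on top).
    return -source_rank if source == "ONEKEY" else source_rank

ranks = [("ONEKEY", -10), ("ONEKEY", 10), ("SAP", 1), ("SAP", 2)]
ranks.sort(key=lambda r: source_rank_key(*r))
# ONEKEY 10 now sorts ahead of ONEKEY -10, while SAP keeps 1 before 2
```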



DOC:

\"\"



EMEA/AMER/APAC


This feature requires the following configuration:

This map contains sources with appropriate sort numbers, and the configuration is divided by country and source lists for which this order is applicable. For example, an Address provided by the &quot;Reltio&quot; source is higher in the hierarchy than an Address provided by the &quot;ONEKEY&quot; source. Based on this configuration, each address will be sorted in the following order:


EMEA

addressesSource:
- countries:
- GB
- IE
- FK
- FR
- BL
- GP
- MF
- MQ
- NC
- PF
- PM
- RE
- TF
- WF
- ES
- DE
- IT
- VA
- SM
- TR
- RU
rankSortOrder:
Reltio: 1
ONEKEY: 2
SAP: 3
SAPVENDOR: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
SEAGEN: 9
GRV: 10
GCP: 11
SSE: 12
BIODOSE: 13
BUPA: 14
CH: 15
HCH: 16
CSL: 17
1CKOL: 18
VEEVALINK: 19
VALKRE: 20
THUB: 21
PTRS: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
MEDPAGESHCP: 3
MEDPAGESHCO: 3
SAP: 4
SAPVENDOR: 5
ENGAGE: 6
MAPP: 7
PFORCERX: 8
PFORCERX_ODS: 8
KOL_OneView: 9
ONEMED: 9
SEAGEN: 10
GRV: 11
GCP: 12
SSE: 13
SDM: 14
PULSE_KAM: 15
WEBINAR: 16
DREAMWEAVER: 17
EVENTHUB: 18
SPRINKLR: 19
VEEVALINK: 20
VALKRE: 21
THUB: 22
PTRS: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL
AMER
addressesSource:
- countries:
- ALL
rankSortOrder:
Reltio: 1
DCR_SYNC: 2
ONEKEY: 3
IMSO: 4
CS: 5
PFCA: 6
WSR: 7
PFORCERX: 8
PFORCERX_ODS: 8
SAP: 9
SAPVENDOR: 10
LEGACY_SFA_IDL: 11
ENGAGE: 12
MAPP: 13
SEAGEN: 14
GRV: 15
KOL_OneView: 16
ONEMED: 16
GCP: 17
SSE: 18
RX_AUDIT: 19
VEEVALINK: 20
VALKRE: 21
THUB: 22
PTRS: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL


APAC

addressesSource:
- countries:
- CN
rankSortOrder:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
PFORCERX: 7
PFORCERX_ODS: 7
KOL_OneView: 8
ONEMED: 8
ENGAGE: 9
MAPP: 10
GCP: 11
SSE: 12
VEEVALINK: 13
THUB: 14
PTRS: 15
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
JPDWH: 3
VOD: 4
PFORCERX: 5
PFORCERX_ODS: 5
SAP: 6
SAPVENDOR: 7
KOL_OneView: 8
ONEMED: 8
ENGAGE: 9
MAPP: 10
SEAGEN: 11
GRV: 12
GCP: 13
SSE: 14
PCMS: 15
WEBINAR: 16
DREAMWEAVER: 17
EVENTHUB: 18
SPRINKLR: 19
VEEVALINK: 20
VALKRE: 21
THUB: 22
PTRS: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL


This map contains AddressType attribute values with appropriate sort numbers, which means e.g. Address Type AT.OFF is higher in the hierarchy than the AddressType AT.MAIL. Based on this configuration, each address will be sorted in the following order:

addressType:
"[OFF]": 1
"[BUS]": 2
"[DEL]": 3
"[LGL]": 4
"[MAIL]": 5
"[BILL]": 6
"[HOM]": 7
"[UNSP]": 99

 
  1. Each address is sorted with the following rules: 
    1. sort by Primary affiliation indicator - address related to affiliation with primary usage tag on top, HCP and HCO addresses are compared by fields: AddressType, AddressLine1, AddressLine2, City, StateProvince and Zip5
    2. sort by Addresses.Primary attribute - primary addresses on TOP - applicable only for HCO entities
    3. sort by address status Addresses.Status (contains the AddressStatus configuration)
    4. sort by the source order number from the input source order configuration (lowest rank from the configuration on TOP) - the source is taken from the last updated crosswalk (Addresses.RefEntity.crosswalks.updateDate) when multiple crosswalks come from the same source
    5. sort by address type (lowest rank from the configuration on TOP) - attribute Addresses.AddressType
    6. sort by Source Rank (lowest rank on TOP) in ascending order 1 -> 99 - attribute Addresses.Source.SourceRank
    7. sort by COMPANYAddressId (addresses with this attribute are on top) - attribute Addresses.COMPANYAddressID
    8. sort by address label (alphabetically from A to Z)
  2. Sorted addresses are recalculated for new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRank

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting process
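The country-scoped configuration entries above can be resolved with a first-match lookup: specific country lists are tried before the catch-all "ALL" entry. A sketch under stated assumptions (illustrative Python; the entry shape mirrors the YAML above, but the lookup mechanism itself is an assumption):

```python
# Excerpt of the EMEA configuration: a specific country list first,
# then the catch-all "ALL" entry.
ADDRESSES_SOURCE = [
    {"countries": ["GB", "IE", "FR"],
     "rankSortOrder": {"Reltio": 1, "ONEKEY": 2, "SAP": 3}},
    {"countries": ["ALL"],
     "rankSortOrder": {"Reltio": 1, "ONEKEY": 2, "MEDPAGESHCP": 3}},
]

def resolve_rank_order(country):
    # The first entry whose country list contains the country (or "ALL") wins.
    for entry in ADDRESSES_SOURCE:
        if country in entry["countries"] or "ALL" in entry["countries"]:
            return entry["rankSortOrder"]
    return {}
```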


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"


" + }, + { + "title": "Affiliation RankSorter", + "pageID": "164469770", + "pageLink": "/display/GMDM/Affiliation+RankSorter", + "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Workplace provided by the &quot;Reltio&quot; source is higher in the hierarchy than a Workplace provided by the &quot;CRMMI&quot; source. Based on this configuration, each workplace will be sorted in the following order:

affiliation:
"Reltio": 1
"EVR": 2
"OK": 3
"AMPCO": 4
"JPDWH": 5
"NUCLEUS": 6
"CMM": 7
"MDE": 8
"LocalMDM": 9
"PFORCERX": 10
"VEEVA_NZ": 11
"VEEVA_AU": 12
"VEEVA_PHARMACY_AU": 13
"CRMMI": 14
"FACE": 15
"KOL_OneView": 16
"GRV": 17
"GCP": 18
"MAPP": 19
"CN3RDPARTY": 20
"Rx_Audit": 21
"PCMS": 22
"CICR": 23

The affiliation rank sort process operates under the following conditions:

  1. Each workplace is sorted with the following rules:
    1. sort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the Workplace.RefEntity.crosswalks, which means that each workplace is associated with the appropriate crosswalk and, based on the input configuration, the order is calculated.
    2. sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute Workplace.RefEntity.crosswalks.updateDate
    3. sort by Label value alphabetically in ascending order A -> Z - attribute Workplace.label
  2. Sorted workplaces are recalculated for the new PrimaryAffiliationIndicator attribute – each Workplace is reassigned with an appropriate value. The winner gets &quot;true&quot; on the PrimaryAffiliationIndicator; any loser, if one exists, is reassigned to &quot;false&quot;

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the workplace is excluded from the sorting process
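The winner/loser reassignment in step 2 can be sketched as follows (illustrative Python; the sort key mirrors rules 1.a-1.c, and field names are assumptions):

```python
SOURCE_ORDER = {"Reltio": 1, "EVR": 2, "OK": 3}  # excerpt of the affiliation map

def assign_primary(workplaces):
    # Soft-deleted workplaces (crosswalk deleteDate) never compete; the rest
    # are sorted by source order, then newest crosswalk update, then label
    # A -> Z, and only the top workplace gets PrimaryAffiliationIndicator
    # set to "true" - every other workplace is reassigned to "false".
    active = [w for w in workplaces if not w.get("delete_date")]
    active.sort(key=lambda w: (SOURCE_ORDER.get(w.get("source"), 99),
                               -w.get("update_ts", 0),
                               w.get("label", "")))
    for i, w in enumerate(active):
        w["primary_affiliation_indicator"] = "true" if i == 0 else "false"
    return active
```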



GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. FacilityType with name "35" is higher in the hierarchy than FacilityType with the name "27". Based on this configuration, each affiliation will be sorted in the following order:

facilityType:
"35": 1
"MHS": 1
"34": 1
"27": 2

Each affiliation before sorting is enriched with the ProviderAffiliation attribute which contains information about HCO because there are attributes that are needed during sorting.

Affiliation rank sort process operates under the following conditions:

  1. Each affiliation is sorted with the following rules
    1. sort by facility type (the lower number is on top) - attribute ClassofTradeN.FacilityType
    2. sort by affiliation confidence code in descending order (a present or higher number ranks on top) - attribute RelationType.AffiliationConfidenceCode
    3. sort by staffed beds (present values rank higher; a higher number on top) - attribute Bed.Type(&quot;StaffedBeds&quot;).Total
    4. sort by total prescribers (present values rank higher; a higher number on top) - attribute TotalPrescribers
    5. sort by org identifier (present values rank higher; values are compared as strings) - attribute Identifiers.Type(&quot;HCOS_ORG_ID&quot;).ID
  2. Sorted affiliation are recalculated for new Rank - each Affiliation Rank is reassigned with an appropriate number from lowest to highest - attribute Rank
    1. Affiliation with Rank = "1" is enriched with the UsageTag attribute with the "Primary" value.

Additionally:

  1. If facility type is not found it is set to 99


EMEA/AMER/APAC


This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Contact Affiliation provided by the &quot;Reltio&quot; source is higher in the hierarchy than a Contact Affiliation provided by the &quot;ONEKEY&quot; source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each contact affiliation will be sorted in the following order:

EMEA



affiliation:
- countries:
- GB
- IE
- FK
- FR
- BL
- GP
- MF
- MQ
- NC
- PF
- PM
- RE
- TF
- WF
- ES
- DE
- IT
- VA
- SM
- TR
- RU
rankSortOrder:
Reltio: 1
ONEKEY: 2
SAP: 3
SAPVENDOR: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
SEAGEN: 9
VALKRE: 10
GRV: 11
GCP: 12
SSE: 13
BIODOSE: 14
BUPA: 15
CH: 16
HCH: 17
CSL: 18
THUB: 19
PTRS: 20
1CKOL: 21
MEDISPEND: 22
VEEVALINK: 23
PORZIO: 24
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
MEDPAGESHCP: 3
MEDPAGESHCO: 3
SAP: 4
SAPVENDOR: 5
PFORCERX: 6
PFORCERX_ODS: 6
KOL_OneView: 7
ONEMED: 7
ENGAGE: 8
MAPP: 9
SEAGEN: 10
VALKRE: 11
GRV: 12
GCP: 13
SSE: 14
SDM: 15
PULSE_KAM: 16
WEBINAR: 17
DREAMWEAVER: 18
EVENTHUB: 19
SPRINKLR: 20
THUB: 21
PTRS: 22
VEEVALINK: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL
 

AMER

affiliation:
- countries:
- ALL
rankSortOrder:
Reltio: 1
DCR_SYNC: 2
ONEKEY: 3
SAP: 4
SAPVENDOR: 5
PFORCERX: 6
PFORCERX_ODS: 6
KOL_OneView: 7
ONEMED: 7
LEGACY_SFA_IDL: 8
ENGAGE: 9
MAPP: 10
SEAGEN: 11
VALKRE: 12
GRV: 13
GCP: 14
SSE: 15
IMSO: 16
CS: 17
PFCA: 18
WSR: 19
THUB: 20
PTRS: 21
RX_AUDIT: 22
VEEVALINK: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL

APAC

affiliation:
- countries:
- CN
rankSortOrder:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
GCP: 7
SSE: 8
PFORCERX: 9
PFORCERX_ODS: 9
KOL_OneView: 10
ONEMED: 10
ENGAGE: 11
MAPP: 12
VALKRE: 13
THUB: 14
PTRS: 15
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
JPDWH: 3
VOD: 4
SAP: 5
SAPVENDOR: 6
PFORCERX: 7
PFORCERX_ODS: 7
KOL_OneView: 8
ONEMED: 8
ENGAGE: 9
MAPP: 10
SEAGEN: 11
VALKRE: 12
GRV: 13
GCP: 14
SSE: 15
PCMS: 16
WEBINAR: 17
DREAMWEAVER: 18
EVENTHUB: 19
SPRINKLR: 20
THUB: 21
PTRS: 22
VEEVALINK: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL


The affiliation rank sort process operates under the following conditions:

  1. Each contact affiliation is sorted with the following rules:
    1. sort by affiliation status - active on top
    2. sort by source priority
    3. sort by source rank - attribute ContactAffiliation.RelationType.Source.SourceRank, ascending
    4. sort by confidence level - attribute ContactAffiliation.RelationType.AffiliationConfidenceCode
    5. sort by attribute last updated date - newest at the top
    6. sort by Label value alphabetically in ascending order A -> Z - attribute ContactAffiliation.label
  2. Sorted contact affiliations are recalculated for the new primary usage tag attribute – each contact affiliation is reassigned with an appropriate value. The winner gets the "true" on the primary usage tag.

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the contact affiliation is excluded from the sorting process


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"

" + }, + { + "title": "Email RankSorter", + "pageID": "164469768", + "pageLink": "/display/GMDM/Email+RankSorter", + "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "1CKOL" is higher in the hierarchy than Email provided by any other source. Based on this configuration, each email address will be sorted in the following order:

email:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"1CKOL": 1

Email rank sort process operates under the following conditions:

  1. Each email is sorted with the following rules
  2. Group by the TypeIMS attribute and sort each group:
    1. sort by source rank (lower number on top; an email with this attribute ranks above one without it)
    2. sort by the validation status (VALID value is the winner) - attribute ValidationStatus
    3. sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDate
    4. sort by email value alphabetically in ascending order A -> Z - attribute Email.email
  3. Sorted emails are recalculated for the new Rank - each Email Rank is reassigned with an appropriate number
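The grouping step is what distinguishes this sorter: ranks restart at 1 inside each TypeIMS group. A sketch of the grouped sort (illustrative Python with simplified field names; the real service operates on Reltio attributes):

```python
from itertools import groupby

def rank_emails(emails, source_order):
    # Emails are grouped by TypeIMS; each group is sorted independently by
    # source rank, validation status (VALID wins), newest update date and
    # email value A -> Z, then Rank is renumbered 1..n inside the group.
    out = []
    by_type = sorted(emails, key=lambda e: e.get("type_ims", ""))
    for _, group in groupby(by_type, key=lambda e: e.get("type_ims", "")):
        ranked = sorted(group, key=lambda e: (
            source_order.get(e.get("source"), 99),
            0 if e.get("validation_status") == "VALID" else 1,
            -e.get("update_ts", 0),
            e.get("email", ""),
        ))
        for i, e in enumerate(ranked, start=1):
            e["rank"] = i
        out.extend(ranked)
    return out
```

Note that `itertools.groupby` requires the input to be pre-sorted on the grouping key, which is why the emails are first sorted by TypeIMS.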



GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "GRV" is higher in the hierarchy than Email provided by "ONEKEY" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each email address will be sorted in the following order:

email:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"Reltio" : 1
"GRV" : 2
"ENGAGE" : 3
"KOL_OneView" : 4
"ONEMED" : 4
"ICUE" : 5
"MAPP" : 6
"ONEKEY" : 7
"SHS" : 8
"VEEVALINK": 9
"SEAGEN": 10
"CENTRIS" : 11
"ASTELAS" : 12
"EMD_SERONO" : 13
"IQVIA_RX" : 14
"IQVIA_RAWDEA" : 15
"COV" : 16
"THUB" : 17
"PTRS" : 18
"SAP" : 19
"SAPVENDOR": 20
"IQVIA_DDD" : 22
"VALKRE": 23
"MEDISPEND" : 24
"PORZIO" : 25

Email rank sort process operates under the following conditions:

  1. Each email is sorted with the following rules
    1. sort by source order (the lower number on top)
    2. sort by source rank (lower number on top; an email with this attribute ranks above one without it)
  2. Sorted email are recalculated for new Rank - each Email Rank is reassigned with an appropriate number




EMEA/AMER/APAC

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "Reltio" is higher in the hierarchy than Email provided by "GCP" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each email address will be sorted in the following order:


EMEA

email:
- countries:
- GB
- IE
- FK
- FR
- BL
- GP
- MF
- MQ
- NC
- PF
- PM
- RE
- TF
- WF
- ES
- DE
- IT
- VA
- SM
- TR
- RU
rankSortOrder:
Reltio: 1
1CKOL: 2
GCP: 3
GRV: 4
SSE: 5
ENGAGE: 6
MAPP: 7
VEEVALINK: 8
SEAGEN: 9
KOL_OneView: 10
ONEMED: 10
PFORCERX: 11
PFORCERX_ODS: 11
THUB: 12
PTRS: 13
ONEKEY: 14
SAP: 15
SAPVENDOR: 16
SDM: 17
BIODOSE: 18
BUPA: 19
CH: 20
HCH: 21
CSL: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
GCP: 2
GRV: 3
SSE: 4
ENGAGE: 5
MAPP: 6
VEEVALINK: 7
SEAGEN: 8
KOL_OneView: 9
ONEMED: 9
PULSE_KAM: 10
SPRINKLR: 11
WEBINAR: 12
DREAMWEAVER: 13
EVENTHUB: 14
PFORCERX: 15
PFORCERX_ODS: 15
THUB: 16
PTRS: 17
ONEKEY: 18
MEDPAGESHCP: 19
MEDPAGESHCO: 19
SAP: 20
SAPVENDOR: 21
SDM: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL

AMER

email:
- countries:
- ALL
rankSortOrder:
Reltio: 1
DCR_SYNC: 2
GCP: 3
GRV: 4
SSE: 5
ENGAGE: 6
MAPP: 7
VEEVALINK: 8
SEAGEN: 9
KOL_OneView: 10
ONEMED: 10
PFORCERX: 11
PFORCERX_ODS: 11
ONEKEY: 12
IMSO: 13
CS: 14
PFCA: 15
WSR: 16
THUB: 17
PTRS: 18
SAP: 19
SAPVENDOR: 20
LEGACY_SFA_IDL: 21
RX_AUDIT: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL

APAC

email:
- countries:
- CN
rankSortOrder:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
ENGAGE: 7
MAPP: 8
VEEVALINK: 9
KOL_OneView: 10
ONEMED: 10
PFORCERX: 11
PFORCERX_ODS: 11
THUB: 12
PTRS: 13
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
JPDWH: 2
PCMS: 3
GCP: 4
GRV: 5
SSE: 6
ENGAGE: 7
MAPP: 8
VEEVALINK: 9
SEAGEN: 10
KOL_OneView: 11
ONEMED: 11
SPRINKLR: 12
WEBINAR: 13
DREAMWEAVER: 14
EVENTHUB: 15
PFORCERX: 16
PFORCERX_ODS: 16
THUB: 17
PTRS: 18
ONEKEY: 19
VOD: 20
SAP: 21
SAPVENDOR: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL


Email rank sort process operates under the following conditions:

  1. Each email is sorted with the following rules 
    1. sort by cleanser status - valid/invalid
    2. sort by source order (the lower number on top)
    3. sort by source rank (lower number on top; an email with this attribute ranks above one without it)
    4. sort by last updated date - newest at the top
    5. sort by email value alphabetically in ascending order A -> Z - attribute Email.label
  2. Sorted email are recalculated for new Rank - each Email Rank is reassigned with an appropriate number



Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"

" + }, + { + "title": "Identifier RankSorter", + "pageID": "164469766", + "pageLink": "/display/GMDM/Identifier+RankSorter", + "content": "

IQVIA Model (Global)

Algorithm

The identifier rank sort process operates under the following conditions:

  1. Each Identifier is grouped by Identifier Type: e.g GRV_ID / GCP ID / MI_ID / Physician_Code /. .. – each group is sorted separately.
  2. Each group is sorted with the following rules:
    1. By identifier "Source System order configuration" (lowest rank from the configuration on TOP)
    2. By identifier Order (lowest rank on TOP) in ascending order 1 -> 99 - attribute Order
    3. By update date (LUD) (highest LUD date on TOP) in descending order 2017.07 -> 2017.06  - attribute crosswalks.updateDate
    4. By Identifier value (alphabetically in ascending order A -> Z)
  3. Sorted identifiers are optionally deduplicated (by Identifier Type in each group) – from each group, the lower-ranked duplicated identifier is removed. Currently isIgnoreAndRemoveDuplicates = False, which means that groups are not deduplicated; duplicates are removed by Reltio.
  4. Sorted identifiers are recalculated for the new Rank – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. - attribute - Order
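The group-then-renumber flow above can be sketched as follows (illustrative Python; field names and the flat-dict identifier shape are assumptions):

```python
from collections import defaultdict

def rank_identifiers(identifiers, source_order):
    # Identifiers are bucketed by Type; identifiers without a Type form the
    # "EMPTY" bucket and are sorted separately, per the fallback rules.
    buckets = defaultdict(list)
    for ident in identifiers:
        buckets[ident.get("type") or "EMPTY"].append(ident)
    for group in buckets.values():
        group.sort(key=lambda i: (
            source_order.get(i.get("source"), 99),  # missing source -> 99
            i.get("order", 99),                     # rule 2.b, ascending
            -i.get("update_ts", 0),                 # newest LUD first
            i.get("id", ""),                        # A -> Z fallback
        ))
        for n, ident in enumerate(group, start=1):  # renumber Order per group
            ident["order"] = n
    return dict(buckets)
```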

Identifier rank sort process fallback operates under the following conditions:

  1. When Identifier Type is empty, all identifiers with an empty type are grouped together: each one is added to the &quot;EMPTY&quot; group and sorted and deduplicated separately.
  2. During source system sorting (2.a), when the source system is missing, the identifier is placed at position 99
  3. During Order sorting (2.b), when the attribute is missing, the identifier is placed at position 99

Source Order Configuration 

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Identifier provided by source "Reltio" is higher in the hierarchy than the Identifier provided by the "CRMMI" source. Based on this configuration each identifier will be sorted in the following order:

Updated: 2023-12-29

Environment: Global (EX-US)

Countries (in environment): CN and Others

Source Order

CN:
Reltio: 1
EVR: 2
MDE: 3
MAPP: 4
FACE: 5
CRMMI: 6
KOL_OneView: 7
GRV: 8
CN3RDPARTY: 9

Others:
Reltio: 1
EVR: 2
OK: 3
AMPCO: 4
JPDWH: 5
NUCLEUS: 6
CMM: 7
MDE: 8
LocalMDM: 9
PFORCERX: 10
VEEVA_NZ: 11
VEEVA_AU: 12
VEEVA_PHARMACY_AU: 13
CRMMI: 14
FACE: 15
KOL_OneView: 16
GRV: 17
GCP: 18
MAPP: 19
CN3RDPARTY: 20
Rx_Audit: 21
PCMS: 22
CICR: 23

COMPANY Model

Algorithm

Identifier Rank sort algorithm slightly varies from the IQVIA model one:

  1. Identifiers are grouped by Type (Identifiers.Type field). Identifiers without a Type count as a separate group.
  2. Each group is sorted separately according to following rules:
    1. By Trust flag (Identifiers.Trust field). "Yes" takes precedence over "No". If Trust flag is missing, it's as if it was equal to "No".
    2. By Source Order (table below). Lowest rank from configuration takes precedence. If a Source is missing in configuration, it gets the lowest possible order (99).
    3. By Status (Identifiers.Status). Valid/Active status takes precedence over Invalid/Inactive/missing status. List of status codes is configurable. Currently (2023-12-29), the following codes are configured in all COMPANY environments:
      1. Valid codes: [HCPIS.VLD], [HCPIS.ACTV], [HCOIS.VLD], [HCOIS.ACTV]
      2. Invalid codes: [HCPIS.INAC], [HCPIS.INVLD], [HCOIS.INAC], [HCOIS.INVLD]
    4. By Source Rank (Identifiers.SourceRank field). Lowest rank takes precedence.

    5. By LUD. Latest LUD takes precedence. LUD is equal to the highest of 3 dates: 
      1. providing crosswalk's createDate
      2. providing crosswalk's updateDate
      3. providing crosswalk's singleAttributeUpdateDate for this Identifier (if present)
    6. By ID alphabetically. This is a fallback mechanism.
  3. Sorted identifiers are recalculated for the new Rank – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. - attribute - Rank.
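The LUD comparison in step 2.e takes the latest of up to three crosswalk dates. A minimal sketch (illustrative Python; modelling singleAttributeUpdateDate as an optional per-identifier value is an assumption):

```python
def identifier_lud(crosswalk, single_attr_update=None):
    # LUD = the highest of the providing crosswalk's createDate, updateDate,
    # and (when present) the singleAttributeUpdateDate for this Identifier.
    candidates = [crosswalk.get("createDate"),
                  crosswalk.get("updateDate"),
                  single_attr_update]
    return max(d for d in candidates if d is not None)
```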

Source Order Configuration

Updated: 2023-12-29

Environment: US / AMER / EMEA / APAC

Countries (in environment):

US: ALL
AMER: ALL
EMEA: EU countries (GB, IE, FR, BL, GP, MF, MQ, NC, PF, PM, RE, TF, WF, ES, DE, IT, VA, SM, TR, RU) and Others (AfME)
APAC: CN and Others

Source Order

US (ALL):
Reltio: 1
ONEKEY: 2
ICUE: 3
ENGAGE: 4
KOL_OneView: 5
ONEMED: 5
GRV: 6
SHS: 7
IQVIA_RX: 8
IQVIA_RAWDEA: 9
SEAGEN: 10
CENTRIS: 11
MAPP: 12
ASTELAS: 13
EMD_SERONO: 14
COV: 15
SAP: 16
SAPVENDOR: 17
IQVIA_DDD: 18
PTRS: 19

AMER (ALL):
Reltio: 1
ONEKEY: 2
PFORCERX: 3
PFORCERX_ODS: 3
KOL_OneView: 4
ONEMED: 4
LEGACY_SFA_IDL: 5
ENGAGE: 6
MAPP: 7
SEAGEN: 8
GRV: 9
GCP: 10
SSE: 11
IMSO: 12
CS: 13
PFCA: 14
SAP: 15
SAPVENDOR: 16
PTRS: 17
RX_AUDIT: 18

EMEA (EU countries):
Reltio: 1
ONEKEY: 2
PFORCERX: 3
PFORCERX_ODS: 3
KOL_ONEVIEW: 4
ENGAGE: 5
MAPP: 6
SEAGEN: 7
GRV: 8
GCP: 9
SSE: 10
1CKOL: 11
SAP: 12
SAPVENDOR: 13
BIODOSE: 14
BUPA: 15
CH: 16
HCH: 17
CSL: 18

EMEA (Others, AfME):
Reltio: 1
ONEKEY: 2
MEDPAGES: 3
MEDPAGESHCP: 3
MEDPAGESHCO: 3
PFORCERX: 4
PFORCERX_ODS: 4
KOL_ONEVIEW: 5
ENGAGE: 6
MAPP: 7
SEAGEN: 8
GRV: 9
GCP: 10
SSE: 11
PULSE_KAM: 12
WEBINAR: 13
SAP: 14
SAPVENDOR: 15
SDM: 16
PTRS: 17

APAC (CN):
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
GCP: 7
PFORCERX: 8
PFORCERX_ODS: 8
KOL_OneView: 9
ONEMED: 9
ENGAGE: 10
MAPP: 11
PTRS: 12

APAC (Others):
Reltio: 1
ONEKEY: 2
JPDWH: 3
VOD: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
SEAGEN: 9
GRV: 10
GCP: 11
SSE: 12
PCMS: 13
PTRS: 14
SAP: 15
SAPVENDOR: 16



Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"

" + }, + { + "title": "OtherHCOtoHCOAffiliations RankSorter", + "pageID": "319291956", + "pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+RankSorter", + "content": "

APAC COMPANY (currently for AU and NZ)


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"


The functionality is configured in the callback delay service. It allows a different sorting setup to be defined for each country. The configuration for AU and NZ is shown below.


rankSortOrder:
affiliation:
- countries:
- AU
- NZ
rankExecutionOrder:
- type: ATTRIBUTE
attributeName: RelationType/RelationshipDescription
lookupCode: true
order:
REL.HIE: 1
REL.MAI: 2
REL.FPA: 3
REL.BNG: 4
REL.BUY: 5
REL.PHN: 6
REL.GPR: 7
REL.MBR: 8
REL.REM: 9
REL.GPSS: 10
REL.WPC: 11
REL.WPIC: 12
REL.DOU: 13
- type: ACTIVE
- type: SOURCE
order:
Reltio: 1
ONEKEY: 2
JPDWH: 3
SAP: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
GRV: 9
GCP: 10
SSE: 11
PCMS: 12
PTRS: 13
- type: LUD
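The typed rankExecutionOrder steps above could be folded into one composite sort key, evaluated in configuration order. A sketch under stated assumptions (illustrative Python; record field names such as relationship_description are assumptions):

```python
def build_key(steps):
    # Builds a tuple-valued sort key: one component per configured step,
    # in the order the steps appear (ATTRIBUTE, ACTIVE, SOURCE, LUD).
    def key(rel):
        parts = []
        for step in steps:
            if step["type"] == "ATTRIBUTE":
                parts.append(step["order"].get(rel.get("relationship_description"), 99))
            elif step["type"] == "ACTIVE":
                parts.append(0 if rel.get("active") else 1)   # active on top
            elif step["type"] == "SOURCE":
                parts.append(step["order"].get(rel.get("source"), 99))
            elif step["type"] == "LUD":
                parts.append(-rel.get("update_ts", 0))        # newest first
        return tuple(parts)
    return key

# Excerpt of the AU/NZ configuration shown above
STEPS = [{"type": "ATTRIBUTE", "order": {"REL.HIE": 1, "REL.MAI": 2}},
         {"type": "ACTIVE"},
         {"type": "SOURCE", "order": {"Reltio": 1, "ONEKEY": 2}},
         {"type": "LUD"}]
```

Because the key is a tuple built in step order, earlier steps always dominate later ones, matching the "executed in order" semantics of rankExecutionOrder.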

Relationships are grouped by endObjectId, then the whole bundle is sorted and ranked. The relationship's position on the list (its rank) for AU and NZ is calculated based on the following algorithm:

" + }, + { + "title": "Phone RankSorter", + "pageID": "164469748", + "pageLink": "/display/GMDM/Phone+RankSorter", + "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Phone provided by the &quot;Reltio&quot; source is higher in the hierarchy than a Phone provided by the &quot;EVR&quot; source. Based on this configuration, each phone will be sorted in the following order:

phone:
  - countries:
      - "ALL"
    sources:
      - "ALL"
    rankSortOrder:
      "Reltio": 1
      "EVR": 2
      "OK": 3
      "AMPCO": 4
      "JPDWH": 5
      "NUCLEUS": 6
      "CMM": 7
      "MDE": 8
      "LocalMDM": 9
      "PFORCERX": 10
      "VEEVA_NZ": 11
      "VEEVA_AU": 12
      "VEEVA_PHARMACY_AU": 13
      "CRMMI": 14
      "FACE": 15
      "KOL_OneView": 16
      "GRV": 17
      "GCP": 18
      "MAPP": 19
      "CN3RDPARTY": 20
      "Rx_Audit": 21
      "PCMS": 22
      "CICR": 23

Phone rank sort process operates under the following conditions:

  1. Each phone is sorted with the following rules
  2. Group by the TypeIMS attribute and sort each group:
    1. sort by "Source System order configuration" (lowest rank from the configuration on TOP)
    2. sort by source rank (the lower number on top of the one with this attribute)
    3. sort by the validation status (VALID value is the winner) - attribute ValidationStatus
    4. sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDate
    5. sort by number value alphabetically in ascending order A -> Z - attribute Phone.number
  3. Sorted phones are recalculated for the new Rank - each Phone Rank is reassigned with an appropriate number
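
The grouping and five sort keys above can be sketched as follows. This is a minimal illustration, not the production implementation; the flat field names (TypeIMS, source, SourceRank, ValidationStatus, updateDate, number) are assumptions standing in for the nested Reltio attribute paths.

```python
from itertools import groupby

SOURCE_ORDER = {"Reltio": 1, "EVR": 2, "OK": 3}  # excerpt of the config above

def sort_phones(phones):
    """Group phones by TypeIMS, apply the documented sort keys within each
    group, then reassign Rank with consecutive numbers."""
    phones.sort(key=lambda p: p["TypeIMS"])  # prepare for groupby
    ranked = []
    for _, group in groupby(phones, key=lambda p: p["TypeIMS"]):
        bundle = sorted(group, key=lambda p: (
            SOURCE_ORDER.get(p["source"], 99),             # a. source system order
            p.get("SourceRank", 99),                       # b. source rank
            0 if p["ValidationStatus"] == "VALID" else 1,  # c. VALID wins
            -p["updateDate"],                              # d. newest LUD first
            p["number"],                                   # e. number A -> Z
        ))
        for i, p in enumerate(bundle, start=1):
            p["Rank"] = i
        ranked.extend(bundle)
    return ranked
```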

GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Phone provided by the "ONEKEY" source ranks higher in the hierarchy than a Phone provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each phone number will be sorted in the following order:

phone:
  - countries:
      - "ALL"
    sources:
      - "ALL"
    rankSortOrder:
      "Reltio": 1
      "ONEKEY": 2
      "ICUE": 3
      "VEEVALINK": 4
      "ENGAGE": 5
      "KOL_OneView": 6
      "ONEMED": 6
      "GRV": 7
      "SHS": 8
      "IQVIA_RX": 9
      "IQVIA_RAWDEA": 10
      "SEAGEN": 11
      "CENTRIS": 12
      "MAPP": 13
      "ASTELAS": 14
      "EMD_SERONO": 15
      "COV": 16
      "SAP": 17
      "SAPVENDOR": 18
      "IQVIA_DDD": 19
      "VALKRE": 20
      "THUB": 21
      "PTRS": 22
      "MEDISPEND": 23
      "PORZIO": 24



Phone number rank sort process operates under the following conditions:

  1. Each phone number is first grouped by type, then sorted with the following rules.
  2. Group by the Type attribute and sort each group 
    1. sort by source order (the lower number on top) - source name is taken from the last updated crosswalk for this Phone attribute
    2. sort by source rank (the lower number on top or the one with this attribute) - attribute Source.SourceRank for this Phone attribute
  3. Sorted phone numbers are recalculated for new Rank - each Phone Rank is reassigned with an appropriate number - attribute Rank for Phone attribute



EMEA/AMER/APAC

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Phone provided by the "ONEKEY" source ranks higher in the hierarchy than a Phone provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each phone number will be sorted in the following order:


EMEA

phone:
  - countries:
      - GB
      - IE
      - FK
      - FR
      - BL
      - GP
      - MF
      - MQ
      - NC
      - PF
      - PM
      - RE
      - TF
      - WF
      - ES
      - DE
      - IT
      - VA
      - SM
      - TR
      - RU
    rankSortOrder:
      Reltio: 1
      ONEKEY: 2
      PFORCERX: 3
      PFORCERX_ODS: 3
      VEEVALINK: 4
      KOL_OneView: 5
      ONEMED: 5
      ENGAGE: 6
      MAPP: 7
      SEAGEN: 8
      GRV: 9
      GCP: 10
      SSE: 11
      1CKOL: 12
      THUB: 13
      PTRS: 14
      SAP: 15
      SAPVENDOR: 16
      BIODOSE: 17
      BUPA: 18
      CH: 19
      HCH: 20
      CSL: 21
      MEDISPEND: 22
      PORZIO: 23
    sources:
      - ALL
  - countries:
      - ALL
    rankSortOrder:
      Reltio: 1
      ONEKEY: 2
      MEDPAGESHCP: 3
      MEDPAGESHCO: 3
      PFORCERX: 4
      PFORCERX_ODS: 4
      VEEVALINK: 5
      KOL_OneView: 6
      ONEMED: 6
      ENGAGE: 7
      MAPP: 8
      SEAGEN: 9
      GRV: 10
      GCP: 11
      SSE: 12
      PULSE_KAM: 13
      SPRINKLR: 14
      WEBINAR: 15
      DREAMWEAVER: 16
      EVENTHUB: 17
      SAP: 18
      SAPVENDOR: 19
      SDM: 20
      THUB: 21
      PTRS: 22
      MEDISPEND: 23
      PORZIO: 24
    sources:
      - ALL


AMER


phone:
  - countries:
      - ALL
    rankSortOrder:
      Reltio: 1
      DCR_SYNC: 2
      ONEKEY: 3
      PFORCERX: 4
      PFORCERX_ODS: 4
      VEEVALINK: 5
      KOL_OneView: 6
      ONEMED: 6
      LEGACY_SFA_IDL: 7
      ENGAGE: 8
      MAPP: 8
      SEAGEN: 9
      GRV: 10
      GCP: 11
      SSE: 12
      IMSO: 13
      CS: 14
      PFCA: 15
      WSR: 16
      SAP: 17
      SAPVENDOR: 18
      THUB: 19
      PTRS: 20
      RX_AUDIT: 21
      MEDISPEND: 22
      PORZIO: 23
    sources:
      - ALL

APAC


phone:
  - countries:
      - CN
    rankSortOrder:
      Reltio: 1
      EVR: 2
      MDE: 3
      FACE: 4
      GRV: 5
      CN3RDPARTY: 6
      GCP: 7
      PFORCERX: 8
      PFORCERX_ODS: 8
      VEEVALINK: 9
      KOL_OneView: 10
      ONEMED: 10
      ENGAGE: 11
      MAPP: 12
      PTRS: 13
    sources:
      - ALL
  - countries:
      - ALL
    rankSortOrder:
      Reltio: 1
      ONEKEY: 2
      JPDWH: 3
      VOD: 4
      PFORCERX: 5
      PFORCERX_ODS: 5
      VEEVALINK: 6
      KOL_OneView: 7
      ONEMED: 7
      ENGAGE: 8
      MAPP: 9
      SEAGEN: 10
      GRV: 11
      GCP: 12
      SSE: 13
      PCMS: 14
      THUB: 15
      PTRS: 16
      SAP: 17
      SAPVENDOR: 18
      SPRINKLR: 19
      WEBINAR: 20
      DREAMWEAVER: 21
      EVENTHUB: 22
      MEDISPEND: 23
      PORZIO: 24
    sources:
      - ALL
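
Configuration entries like the ones above are matched per country, with "ALL" acting as the fallback entry. A minimal sketch of that resolution, under the assumption that entries are checked top-down and the first match wins (entry shapes are illustrative):

```python
def resolve_rank_order(config_entries, country):
    """Return the rankSortOrder of the first entry whose countries list
    contains the given country, or the 'ALL' fallback entry."""
    for entry in config_entries:
        countries = entry.get("countries", [])
        if country in countries or "ALL" in countries:
            return entry["rankSortOrder"]
    return {}

# Condensed example mirroring the APAC config: a CN-specific entry
# followed by the ALL fallback.
APAC_PHONE = [
    {"countries": ["CN"], "rankSortOrder": {"Reltio": 1, "EVR": 2}},
    {"countries": ["ALL"], "rankSortOrder": {"Reltio": 1, "ONEKEY": 2}},
]
```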



Phone number rank sort process operates under the following conditions:

  1. Each phone number is first grouped by type, then sorted with the following rules.
  2. Group by the Type attribute and sort each group  
    1. sort by cleanser status - valid/invalid
    2. sort by source order (the lower number on top) - source name is taken from the last updated crosswalk for this Phone attribute
    3. sort by source rank (the lower number on top or the one with this attribute) - attribute Source.SourceRank for this Phone attribute
    4. last update date - newest to oldest
    5. sort by label - alphabetical order A-Z
  3. Sorted phone numbers are recalculated for new Rank - each Phone Rank is reassigned with an appropriate number - attribute Rank for Phone attribute


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"

" + }, + { + "title": "Speaker RankSorter", + "pageID": "337862629", + "pageLink": "/display/GMDM/Speaker+RankSorter", + "content": "

Description

Unlike other RankSorters, Speaker Rank is expressed not by a nested "Rank" or "Order" field, but by the "ignore" flag.

The "ignore" flag sets the attribute's "ov" to false. By operating this flag, we ensure that only the most valuable attribute is visible and sent downstream from the Hub.

Algorithm

  1. Sort all Speaker nests
    1. Sort by source hierarchy
    2. If same source, sort by Last Update Date (higher of crosswalk.updateDate / crosswalk.singleAttributeUpdateDates/{speaker attribute uri})
    3. If same source and LUD, sort by attribute URI (fallback strategy)
  2. Process sorted group
    1. If first Speaker nest has ignored == true, set ignored := false for that nest
    2. If every next Speaker nest does not have ignored == true, set ignored := true for that nest
    3. Post the list of changes to Manager's async interface using Kafka topic
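
The algorithm above can be sketched as follows. This is an illustrative sketch (the flat nest shape with source, lud, uri and ignored fields is an assumption); it sorts the Speaker nests and flips the "ignored" flag so only the top nest remains visible, returning the change set that would be posted to the Manager's async interface.

```python
def apply_speaker_ignore_flags(nests, source_order):
    """Sort Speaker nests by source hierarchy, then LUD (newest first),
    then attribute URI as a fallback; keep only the first nest visible."""
    nests.sort(key=lambda n: (
        source_order.get(n["source"], 99),  # 1a. source hierarchy
        -n["lud"],                          # 1b. higher LUD wins
        n["uri"],                           # 1c. fallback: attribute URI
    ))
    changes = []
    for i, nest in enumerate(nests):
        want_ignored = i != 0               # only the top nest stays visible
        if nest["ignored"] != want_ignored:
            nest["ignored"] = want_ignored
            changes.append(nest["uri"])     # would be posted via Kafka
    return changes
```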

Global - IQVIA Model

Speaker RankSorter is active only for China. Source hierarchy is as follows:

speaker:
  "Reltio": 1
  "MAPP": 2
  "FACE": 3
  "EVR": 4
  "MDE": 5
  "CRMMI": 6
  "KOL_OneView": 7
  "GRV": 8
  "CN3RDPARTY": 9

Specific Configuration

Unlike other PreCallback flows, Speaker RankSorter requires both ov=true and ov=false attribute values to work correctly.

This is why:


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


" + }, + { + "title": "Specialty RankSorter", + "pageID": "164469746", + "pageLink": "/display/GMDM/Specialty+RankSorter", + "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Specialty provided by the "Reltio" source ranks higher in the hierarchy than a Specialty provided by the "CRMMI" source. Additionally, for Specialties there is a difference between countries: the configuration for RU and TR contains only 4 sources and differs from the base configuration. Based on this configuration each specialty will be sorted in the following order:

specialities:
  - countries:
      - "RU"
      - "TR"
    sources:
      - "ALL"
    rankSortOrder:
      "GRV": 1
      "GCP": 2
      "OK": 3
      "KOL_OneView": 4
  - countries:
      - "ALL"
    sources:
      - "ALL"
    rankSortOrder:
      "Reltio": 1
      "EVR": 2
      "OK": 3
      "AMPCO": 4
      "JPDWH": 5
      "NUCLEUS": 6
      "CMM": 7
      "MDE": 8
      "LocalMDM": 9
      "PFORCERX": 10
      "VEEVA_NZ": 11
      "VEEVA_AU": 12
      "VEEVA_PHARMACY_AU": 13
      "CRMMI": 14
      "FACE": 15
      "KOL_OneView": 16
      "GRV": 17
      "GCP": 18
      "MAPP": 19
      "CN3RDPARTY": 20
      "Rx_Audit": 21
      "PCMS": 22
      "CICR": 23


The specialty rank sort process operates under the following conditions:

  1. Each Specialty is grouped by Specialty Type: SPEC/TEND/QUAL/EDUC – each group is sorted separately.
  2. Each group is sorted with the following rules:
    1. By specialty "Source System order configuration" (lowest rank from the configuration on TOP)
    2. By specialty Rank (lower ranks on TOP) in descending order 1 -> 99
    3. By update date (LUD) (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDate
    4. By Specialty Value (alphabetically in ascending order A -> Z)
  3. Sorted specialties are optionally deduplicated (by Specialty Type in each group): from each group, the duplicated specialty with the lowest rank is removed. Currently isIgnoreAndRemoveDuplicates is set to False, which means that groups are not deduplicated; duplicates are removed by Reltio.
  4. Sorted specialties are recalculated for the new Ranks – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest.
  5. Additionally, for the Specialty Rank = 1 the best record is set to true - attribute - PrimarySpecialtyFlag

Specialty rank sort process fallback operates under the following conditions:

  1. When the Specialty Type is empty, each such specialty is added to the "EMPTY" group, which is sorted and deduplicated separately.
  2. During source-system sorting (rule 2.a), when the source system is missing from the configuration, the specialty is placed at position 99.
  3. During Rank sorting (rule 2.b), when the Rank is missing, the specialty is placed at position 99.



GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Specialty provided by the "ONEKEY" source ranks higher in the hierarchy than a Specialty provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each Specialty will be sorted in the following order:

specialities:
  - countries:
      - "ALL"
    sources:
      - "ALL"
    rankSortOrder:
      "Reltio": 1
      "ONEKEY": 2
      "IQVIA_RAWDEA": 3
      "VEEVALINK": 4
      "ENGAGE": 5
      "KOL_OneView": 6
      "ONEMED": 6
      "SPEAKER": 7
      "ICUE": 8
      "SHS": 9
      "IQVIA_RX": 10
      "SEAGEN": 11
      "CENTRIS": 12
      "ASTELAS": 13
      "EMD_SERONO": 14
      "MAPP": 15
      "GRV": 16
      "THUB": 17
      "PTRS": 18
      "VALKRE": 19
      "MEDISPEND": 20
      "PORZIO": 21


The specialty rank sort process operates under the following conditions:

  1. Each specialty is first grouped by the Speciality.SpecialityType attribute, then sorted within its group.
  2. Group by the Speciality.SpecialityType attribute and sort each group:
    1. sort by specialty unspecified status value (higher value on the top) - attribute Specialty with value Unspecified
    2. sort by source order number (the lower number on the top) - the source name is taken from the crosswalk that was last updated
    3. sort by source rank (the lower on the top) - attribute Source.SourceRank
    4. sort by last update date (the earliest on the top) - the last update date is taken from the most recently updated crosswalk
    5. sort by specialty attribute value (string comparison) - attribute Specialty
  3. Sorted specialties are recalculated for the new Rank - each Specialty Rank is reassigned with an appropriate number - attribute Rank

Additionally:

  1. If the source is not found in the configuration, its order is set to 99
  2. If the specialty unspecified attribute name or value is not set, it is set to 99
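
The grouping, sort keys, and 99 fallbacks above can be sketched as follows. This is a minimal illustration (the flat field names, including unspecifiedRank as a stand-in for the "unspecified status value", are assumptions, not the production model):

```python
from itertools import groupby

SOURCE_ORDER = {"Reltio": 1, "ONEKEY": 2, "ENGAGE": 5}  # excerpt of the config

def sort_specialties(specs):
    """Group by SpecialityType, sort each group by the documented keys
    (unknown sources and missing 'unspecified' info fall back to 99),
    then reassign Rank within each group."""
    specs.sort(key=lambda s: s["SpecialityType"])  # prepare for groupby
    out = []
    for _, grp in groupby(specs, key=lambda s: s["SpecialityType"]):
        bundle = sorted(grp, key=lambda s: (
            s.get("unspecifiedRank", 99),       # a. unspecified status (99 when not set)
            SOURCE_ORDER.get(s["source"], 99),  # b. source order (99 when unknown)
            s.get("SourceRank", 99),            # c. source rank
            s["updateDate"],                    # d. last update date
            s["Specialty"],                     # e. value, string comparison
        ))
        for i, s in enumerate(bundle, start=1):
            s["Rank"] = i
        out.extend(bundle)
    return out
```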



EMEA/AMER/APAC

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Specialty provided by the "ONEKEY" source ranks higher in the hierarchy than a Specialty provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each Specialty will be sorted in the following order:

EMEA


specialities:
  - countries:
      - GB
      - IE
      - FK
      - FR
      - BL
      - GP
      - MF
      - MQ
      - NC
      - PF
      - PM
      - RE
      - TF
      - WF
      - ES
      - DE
      - IT
      - VA
      - SM
      - TR
      - RU
    rankSortOrder:
      Reltio: 1
      ONEKEY: 2
      PFORCERX: 3
      PFORCERX_ODS: 3
      VEEVALINK: 4
      KOL_OneView: 5
      ONEMED: 5
      ENGAGE: 6
      MAPP: 7
      SEAGEN: 8
      GRV: 9
      GCP: 10
      SSE: 11
      THUB: 12
      PTRS: 13
      1CKOL: 14
      MEDISPEND: 15
      PORZIO: 16
    sources:
      - ALL
  - countries:
      - ALL
    sources:
      - ALL
    rankSortOrder:
      Reltio: 1
      ONEKEY: 2
      MEDPAGESHCP: 3
      MEDPAGESHCO: 3
      PFORCERX: 4
      PFORCERX_ODS: 4
      VEEVALINK: 5
      KOL_OneView: 6
      ONEMED: 6
      ENGAGE: 7
      MAPP: 8
      SEAGEN: 9
      GRV: 10
      GCP: 11
      SSE: 12
      PULSE_KAM: 13
      WEBINAR: 14
      DREAMWEAVER: 15
      EVENTHUB: 16
      SPRINKLR: 17
      THUB: 18
      PTRS: 19
      MEDISPEND: 20
      PORZIO: 21


AMER


specialities:
  - countries:
      - ALL
    rankSortOrder:
      Reltio: 1
      DCR_SYNC: 2
      ONEKEY: 3
      PFORCERX: 4
      PFORCERX_ODS: 4
      VEEVALINK: 5
      KOL_OneView: 6
      ONEMED: 6
      LEGACY_SFA_IDL: 7
      ENGAGE: 8
      MAPP: 9
      SEAGEN: 10
      GRV: 11
      GCP: 12
      SSE: 13
      THUB: 14
      PTRS: 15
      RX_AUDIT: 16
      PFCA: 17
      WSR: 18
      MEDISPEND: 19
      PORZIO: 20
    sources:
      - ALL

APAC


specialities:
  - countries:
      - CN
    rankSortOrder:
      Reltio: 1
      EVR: 2
      MDE: 3
      FACE: 4
      GRV: 5
      CN3RDPARTY: 6
      GCP: 7
      SSE: 8
      PFORCERX: 9
      PFORCERX_ODS: 9
      VEEVALINK: 10
      KOL_OneView: 11
      ONEMED: 11
      ENGAGE: 12
      MAPP: 13
      THUB: 14
      PTRS: 15
    sources:
      - ALL
  - countries:
      - ALL
    rankSortOrder:
      Reltio: 1
      ONEKEY: 2
      JPDWH: 3
      VOD: 4
      PFORCERX: 5
      PFORCERX_ODS: 5
      VEEVALINK: 6
      KOL_OneView: 7
      ONEMED: 7
      ENGAGE: 8
      MAPP: 9
      SEAGEN: 10
      GRV: 11
      GCP: 12
      SSE: 13
      PCMS: 14
      WEBINAR: 15
      DREAMWEAVER: 16
      EVENTHUB: 17
      SPRINKLR: 18
      THUB: 19
      PTRS: 20
      MEDISPEND: 21
      PORZIO: 22
    sources:
      - ALL


The specialty rank sort process operates under the following conditions:

  1. Each specialty is first grouped by the Speciality.SpecialityType attribute, then sorted within its group.
  2. Group by the Speciality.SpecialityType attribute and sort each group:
    1. sort by specialty unspecified status value (higher value on the top) - attribute Specialty with value Unspecified
    2. sort by source order number (the lower number on the top) - the source name is taken from the crosswalk that was last updated
    3. sort by source rank (the lower on the top) - attribute Source.SourceRank
    4. sort by last update date (the earliest on the top) - the last update date is taken from the most recently updated crosswalk
    5. sort by specialty attribute value (string comparison) - attribute Specialty
  3. Sorted specialties are recalculated for the new Rank - each Specialty Rank is reassigned with an appropriate number - attribute Rank. The primary flag is set for the top-ranked specialty.

Additionally:

  1. If the source is not found in the configuration, its order is set to 99
  2. If the specialty unspecified attribute name or value is not set, it is set to 99


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*



\"\"

" + }, + { + "title": "Enricher Processor", + "pageID": "302687243", + "pageLink": "/display/GMDM/Enricher+Processor", + "content": "

EnricherProcessor is the first PreCallback processor applied to incoming events. It enriches reference attributes with refEntity attributes for Rank calculation purposes. Usually, the enriched attributes are removed after all PreCallbacks are applied; this is configurable using the cleanAdditionalRefAttributes flag. The only exception is GBL (EX-US), where the attributes remain for CN. Removing the "borrowed" attributes is carried out by the Cleaner Processor.

Algorithm

For targetEntity:

  1. Find reference attributes matching configuration
  2. For each such attribute:
    1. Walk the relation to get endObject entity
    2. Fetch endObject entity's current state through Manager (using cache)
    3. Rewrite entity's attributes to this reference attribute, inserting them in <Attribute>.refEntity.attributes path
      steps a-b are applied recursively, according to configured maxDepth.
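
The recursive walk above can be sketched as follows. This is an illustrative sketch only: fetch_entity stands in for the Manager lookup (with its cache), and the flat attribute shapes are assumptions, not the real Reltio payload model.

```python
def enrich(entity, fetch_entity, attributes, max_depth):
    """For each configured reference attribute, resolve the endObject
    entity and copy its attributes under <Attribute>.refEntity.attributes,
    recursing up to max_depth levels."""
    if max_depth == 0:
        return
    for name in attributes:
        for ref_attr in entity.get("attributes", {}).get(name, []):
            # Walk the relation and fetch the endObject's current state
            end_object = fetch_entity(ref_attr["endObjectUri"])
            ref_attr.setdefault("refEntity", {})["attributes"] = end_object["attributes"]
            # Steps a-b applied recursively, bounded by maxDepth
            enrich(end_object, fetch_entity, attributes, max_depth - 1)
```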

Example

Below is EnricherProcessor config from APAC PROD's Precallback Service:

refLookupConfig:
    - cleanAdditionalRefAttributes: true
      country:
          - AU
          - IN
          - JP
          - KR
          - NZ
      entities:
          - attributes:
                - ContactAffiliations
            type: HCP
      maxDepth: 2

How to read the config:

" + }, + { + "title": "Cleaner Processor", + "pageID": "302687603", + "pageLink": "/display/GMDM/Cleaner+Processor", + "content": "

The Cleaner Processor removes the attributes enriched by the Enricher Processor. It is one of the last processors in the Precallback Service's execution order. The processor checks the cleanAdditionalRefAttributes flag in the config.

Algorithm

For targetEntity:

  1. Find all refLookupConfig entries applicable for this Country.
  2. For all attributes in found entries, remove refEntity.attributes map.
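
The two steps above can be sketched as follows (an illustrative sketch under the same assumed flat shapes as the Enricher example; not the production implementation):

```python
def clean(entity, country, ref_lookup_config):
    """For refLookupConfig entries applicable to the entity's country and
    marked cleanAdditionalRefAttributes, drop the refEntity maps that the
    Enricher Processor added."""
    for entry in ref_lookup_config:
        if not entry.get("cleanAdditionalRefAttributes"):
            continue  # flag disabled: enriched attributes stay
        if country not in entry.get("country", []):
            continue  # entry not applicable for this country
        for ent_cfg in entry.get("entities", []):
            for name in ent_cfg.get("attributes", []):
                for ref_attr in entity.get("attributes", {}).get(name, []):
                    ref_attr.pop("refEntity", None)
```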
" + }, + { + "title": "Inactivation Generator", + "pageID": "302697554", + "pageLink": "/display/GMDM/Inactivation+Generator", + "content": "

Inactivation Generator is one of the Precallback Service's event Processors. It checks the input event's targetEntity and changes the event type to INACTIVATED if it detects one of the conditions below:

Algorithm

For each event:

  1. If targetEntity not null and targetEntity.endDate is null, skip event,
  2. If targetRelation not null:
    1. If targetRelation.endDate is null or targetRelation.startRefIgnored is null or targetRelation.endRefIgnored is null, skip event,
  3. Search the mapping for adequate output event type, according to table below. If no match found, skip event,

    Inbound event typeOutbound event type
    HCP_CREATEDHCP_INACTIVATED
    HCP_CHANGED
    HCO_CREATEDHCO_INACTIVATED
    HCO_CHANGED
    MCO_CREATEDMCO_INACTIVATED
    MCO_CHANGED
    RELATIONSHIP_CREATEDRELATIONSHIP_INACTIVATED
    RELATIONSHIP_CHANGED
  4. Return same event with new event type, according to table above.
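
The algorithm and mapping table above can be sketched as follows (field access on plain dicts is an illustrative assumption; the event-type mapping is taken directly from the table):

```python
INACTIVATION_MAP = {
    "HCP_CREATED": "HCP_INACTIVATED", "HCP_CHANGED": "HCP_INACTIVATED",
    "HCO_CREATED": "HCO_INACTIVATED", "HCO_CHANGED": "HCO_INACTIVATED",
    "MCO_CREATED": "MCO_INACTIVATED", "MCO_CHANGED": "MCO_INACTIVATED",
    "RELATIONSHIP_CREATED": "RELATIONSHIP_INACTIVATED",
    "RELATIONSHIP_CHANGED": "RELATIONSHIP_INACTIVATED",
}

def inactivate(event):
    """Return the event unchanged (skip) or with its type rewritten to the
    INACTIVATED variant, following steps 1-4 above."""
    entity = event.get("targetEntity")
    relation = event.get("targetRelation")
    if entity is not None and entity.get("endDate") is None:
        return event  # 1. entity has no endDate: skip
    if relation is not None and (
        relation.get("endDate") is None
        or relation.get("startRefIgnored") is None
        or relation.get("endRefIgnored") is None
    ):
        return event  # 2. relation not fully ended: skip
    new_type = INACTIVATION_MAP.get(event["type"])
    if new_type is None:
        return event  # 3. no mapping for this event type: skip
    return {**event, "type": new_type}  # 4. rewrite the event type
```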
" + }, + { + "title": "MultiMerge Processor", + "pageID": "302697588", + "pageLink": "/display/GMDM/MultiMerge+Processor", + "content": "

MultiMerge Processor is one of Precallback Service's event Processors.

For MERGED events, it checks whether targetEntity.uri is equal to the first URI from entitiesURIs. If it differs, entitiesURIs is adjusted by inserting targetEntity.uri at the beginning. This ensures that entitiesURIs[0] always contains the merge winner, even in cases of multiple merges.

Algorithm

For each event of type:

do:

  1. if targetEntity.uri is null, skip event,
  2. if entitiesURIs[0] and targetEntity.uri are equal, skip event,
  3. insert targetEntity.uri at the beginning of entitiesURIs and return the event.
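
The three steps above can be sketched as follows (plain-dict event shape is an illustrative assumption):

```python
def fix_merge_winner(event):
    """Keep the merge winner at entitiesURIs[0], following steps 1-3."""
    uri = event.get("targetEntity", {}).get("uri")
    if uri is None:
        return event                          # 1. no target URI: skip
    uris = event.get("entitiesURIs", [])
    if uris and uris[0] == uri:
        return event                          # 2. winner already first: skip
    event["entitiesURIs"] = [uri] + uris      # 3. insert the winner at the front
    return event
```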
" + }, + { + "title": "OtherHCOtoHCOAffiliations Rankings", + "pageID": "319291954", + "pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+Rankings", + "content": "

Description

The process was designed to rank OtherHCOtoHCOAffiliations with rules specific to the country. The current configuration contains an Activator and Rankers for the AU and NZ countries and the OtherHCOtoHCOAffiliations type. Compared to ContactAffiliations, this process was designed to consume RELATIONSHIP_CHANGE events, which are single events carrying one piece of information about a specific relation. The process builds a cache with the hierarchy of objects, where the main object is the Reltio EndObject (the direction in which we check and implement the Rankings is (child) END_OBJECT -> START_OBJECT (parent)). A change in a relation does not generate HCO_CHANGE events, so we need to watch relation events: relation change/create/remove events may change the hierarchy and the ranking order.

Compared to the ContactAffiliations ranking logic, a change on an HCP object carried information about the whole hierarchy in one event, so we could calculate and generate events based on HCP_CHANGE alone.

The new logic builds this hierarchy from RELATIONSHIP events, compacts the changes within a time window, and generates events after aggregation to limit the number of changes in Reltio and API calls.


DATA VERIFICATION:

Snowflake queries:

SELECT COUNT(*) FROM (
    SELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_RELATIONS
    WHERE COUNTRY = 'AU' AND RELATION_TYPE = 'OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE
    GROUP BY END_ENTITY_URI
)

SELECT COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_ENTITIES
WHERE ENTITY_TYPE = 'HCO' AND COUNTRY = 'AU' AND ACTIVE = TRUE

SELECT COUNT(*) FROM (
    SELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_RELATIONS
    WHERE COUNTRY = 'NZ' AND RELATION_TYPE = 'OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE
    GROUP BY END_ENTITY_URI
)



A few example cases from APAC QA:

010Xcxi    NZ    2
00zxT2O    NZ    2
008NxIA    NZ    2
1CVfmxOm   NZ    2
VCMuTvz    NZ    2
cvoyNhG    NZ    2
VCMnOvP    NZ    2
00yZOis    NZ    2
00JoRnN    NZ    2


SELECT END_ENTITY_URI, COUNTRY, COUNT(*) AS count FROM CUSTOMER_SL.MDM_RELATIONS
WHERE RELATION_TYPE = 'OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE
AND COUNTRY IN ('AU','NZ')
GROUP BY END_ENTITY_URI, COUNTRY
ORDER BY count DESC


Cq2pWio    AU    5
00KcdEA    AU    3
T5NxyUa    AU    3
ZsTdYcS    AU    3
XhGoqwo    AU    3
00wMWdy    AU    3
Cq1wjj8    AU    3


The direction in which we should check and implement the Rankings:

(child) END_OBJECT -> START_OBJECT (parent)

We start with child objects and check whether a child is connected to multiple parents, and rank accordingly. In most cases (about 99%) there will be a single relation, which is auto-filled with rank=1 during load. Otherwise we rank using the implementation below:

Example:

https://mpe-02.reltio.com/nui/xs4oRCXpCKewNDK/profile?entityUri=entities%2F00KcdEA

\"\"


REQUIREMENTS:

\"\"

Flow diagram


Logical Architecture

\"\"

PreDelayCallback Logic

\"\"


Steps

Overview Reltio attributes

ATTRIBUTES TO UPDATE/INSERT

RANK
                {
                    "label": "Rank",
                    "name": "Rank",
                    "description": "Rank",
                    "type": "Int",
                    "hidden": false,
                    "important": false,
                    "system": false,
                    "required": false,
                    "faceted": true,
                    "searchable": true,
                    "attributeOrdering": {
                        "orderType": "ASC",
                        "orderingStrategy": "LUD"
                    },
                    "uri": "configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Rank",
                    "skipInDataAccess": false
                },

PreCallback Logic - RANK Activator

DelayRankActivationProcessor:

The purpose of this activator is to pick specific events and push them to the delay-events topics. Events from these topics are then ranked using the algorithm described on this page (OtherHCOtoHCOAffiliations Rankings); the flow is also described below.


Example configuration for AU and NZ:

delayRankActivationCallback:
  featureActivation: true
  activators:
    - description: "Delay OtherHCOtoHCOAffiliations RELATION events from AU and NZ country to calculate Rank in delay service"
      acceptedEventTypes:
        - RELATIONSHIP_CHANGED
        - RELATIONSHIP_CREATED
        - RELATIONSHIP_REMOVED
        - RELATIONSHIP_INACTIVATED
      acceptedRelationObjectTypes:
        - configuration/relationTypes/OtherHCOtoHCOAffiliations
      acceptedCountries:
        - AU
        - NZ
      additionalFunctions:
        - RelationEndObjectAsKafkaKey




PreDelayCallback - RANK Logic

The purpose of this pre-delay-callback service is to Rank specific objects (currently available OtherHCOToHCO ranking for AU and NZ - OtherHCOtoHCOAffiliations Rankings)

CallbackWithDelay and CurrentStateCache advantages:


Logic:


Data Model and Configuration

RelationData cache model:
[
   Id: endObjectId
   relations:
        - relationUri: relations/13pTXPR0
          endObjectUri: endObjectId
          country: AU
          crosswalks:
              - type: ONEKEY
                value: WSK123sdcF
                deleteDate: 123324521243
          RankUri: e.g. relations/13pTXPR0/attributes/Rank
          Rank: null
          Attributes:
              Status:
                  - ACTIVE
              RelationType/RelationshipDescription:
                  - REL.MAI
                  - REL.CON
]

Triggers

RankActivation

Trigger action: IN Events incoming
Component: Callback Service: Pre-Callback: DelayRankActivationProcessor (topic: $env-internal-reltio-full-events)
Action: Full events trigger the pre-callback stream and the activation logic that routes events to the next processing state
Default time: realtime - events stream

Trigger action: OUT Activated events to be sorted
Component: Callback Service: Pre-Callback: DelayRankActivationProcessor (topic: $env-internal-reltio-full-delay-events)
Action: Output topic
Default time: realtime - events stream

Trigger action: IN Events incoming
Component: mdm-callback-delay-service: Pre-Delay-Callback: PreCallbackDelayStream (topics: $env-internal-reltio-full-delay-events, DELAY: ${env}-internal-reltio-full-callback-delay-events)
Action: Full events trigger the pre-delay-callback stream and the ranking logic
Default time: realtime - events stream

Trigger action: OUT Sorted events with the correct state
Component: mdm-callback-delay-service: Pre-Delay-Callback: PreCallbackDelayStream (topic: $env-internal-reltio-proc-events)
Action: Output topic with correct events
Default time: realtime - events stream

Trigger action: OUT Reltio Updates
Component: mdm-callback-delay-service: Pre-Delay-Callback: PostCallbackStream (topic: $env-internal-async-all-bulk-callbacks)
Action: Output topic with Reltio updates
Default time: realtime - events stream

Dependent components

Callback Service: RELATION ranking activator that pushes events to the delay service
Callback Delay Service: main service with the OtherHCOtoHCOAffiliations Rankings logic
Entity Enricher: generates incoming full events
Manager: processes callbacks generated by this service

Attachment docs with more technical implementation details:

\"\"\"\"example-reqeusts.json

" + }, + { + "title": "HCPType Callback", + "pageID": "347637202", + "pageLink": "/display/GMDM/HCPType+Callback", + "content": "

Description

The process was designed to update the HCPType RDM code in the TypeCode attribute on HCP profiles. The process is based on event streaming: the main event is recalculated against the current state, and a callback is generated when the existing TypeCode on the profile differs from the calculated value. This process (like all processes in the PreCallback Service) blocks the main event and sends the update to external clients only when the update is visible in Reltio and TypeCode contains the correct code. The process uses RDM as an internal cache and calculates the output value based on the current mapping. To limit the number of requests to RDM we use an internal Mongo cache, refreshed every 2 hours on PROD. Additionally, we designed an in-memory cache to store the 2 required codes (PRES/NON-PRESC) with their HUB_CALLBACK source code values.

This logic is related to these 2 values in Reltio HCP profiles:

Type-  Prescriber (HCPT.PRES)

Type - Non-Prescriber (HCPT.NPRS)


Why this process was designed:

With the addition of the Eastern Cluster LOVs, we have hit the limit where the HCP Type Prescriber & Non-Prescriber canonical codes no longer fit into RDM.

The issue is a size limit in RDM’s underlying GCP tech stack. It is a physical GCP limitation and cannot be increased. We cannot add new RDM codes to the PRES/NON-PRESC codes, and this will cause issues in HCP data.

The previous logic:

In the ingestion service layer (all API calls) there was a DQ rule called “HCP TypeCode”. This logic set TypeCode as a concatenation of SubTypeCode and the Specialty ranked 1: it took the source codes and put the concatenation in the TypeCode attribute. The number of source-code combinations is reaching the limit, so we are building new logic.

For future reference, the old DQ rules, which will be removed after we deploy the new process, are included below.

DQ rules (sort rank):

\"\"

- name: Sort specialities by source rank
  category: OTHER
  createdDate: 20-10-2022
  modifiedDate: 20-10-2022
  preconditions:
    - type: operationType
      values:
        - create
        - update
    - type: not
      preconditions:
        - type: source
          values:
            - HUB_CALLBACK
            - NUCLEUS
            - LEGACYMDM
            - PFORCERX_ID
    - type: not
      preconditions:
        - type: match
          attribute: TypeCode
          values:
            - "^.+$"
  action:
    type: sort
    key: Specialities
    sorter: SourceRankSorter


DQ rules (add sub type code):

\"\"

- name: Autofill sub type code when sub type is null/empty
  category: AUTOFILL_BASE
  createdDate: 20-10-2022
  modifiedDate: 20-10-2022
  preconditions:
    - type: operationType
      values:
        - create
        - update
    - type: not
      preconditions:
        - type: source
          values:
            - HUB_CALLBACK
            - NUCLEUS
            - LEGACYMDM
            - PFORCERX_ID
            - KOL_OneView
  action:
    type: modify
    attributes:
      - TypeCode
    value: "{SubTypeCode}-{Specialities.Specialty}"
    replaceNulls: true
    when:
      - ""
      - "NULL"


Example of previous input values:

attributes:
  "TypeCode": [
    {
      "value": "TYP.M-SP.WDE.04"
    }
  ]

TYP.M is a SubTypeCode
SP.WDE.04 is a Speciality

calculated value - PRESC:
\"\"
As this screenshot from EMEA PROD shows, there are 2920 combinations for the ONEKEY source alone that generate the PRESC value.



The new logic:

The new logic was designed in the pre-callback service in hybrid mode. It uses the same assumptions as the previous version, but relies on Reltio canonical codes, which limits the number of combinations. We provide this value using only one source, HUB_CALLBACK, so there is no need to configure ONEKEY, GRV and all the other sources that provide multiple combinations.

Advantages:

Service populates HCP Type with SubType & Specialty canonical codes

HCP Type LOVs reduced to single source (HUB_CALLBACK) and canonical codes


The change in HCP Type RDM will be processed using standard reindex process.

This change is impacting the Historical Inactive flow – change described Snowflake: HI HCPType enrichment


Key features in new logic and what you should know:

  1. The change in HCP Type RDM will be processed using standard reindex process.
  2. Calculation of the HCP TypeCode is based on the OV profile and Reltio canonical codes
    1. Previously each source delivered data and the ingestion service calculated TypeCode from the RAW JSON data delivered by the source.
    2. Now we calculate on the OV profile, not on the source level.
      1. We deliver only one value, using the HUB_CALLBACK crosswalk.
    3. Once we receive the event we have access to the ov:true golden profile
      1. Specialties is a list; each entry has SourceName and SourceRank, so we pick the one with Rank 1 for the selected profile.
      2. SubTypeCode is a single attribute, so we pick only the ov:true value.
    4. The 2 canonical codes are mapped to the TypeCode attribute as in the example below
  3. Activation/Deactivation profiles in Reltio and Historical Inactive flow
    1. Snowflake: HI HCPType enrichment
    2. Snowflake: History Inactive 
    3. When the whole profile is deactivated, the HUB_CALLBACK technical crosswalks are hard-deleted, so the HCP TypeCode is hard-deleted as well
    4. This impacts HI views because the HUB_CALLBACK value will be dropped
    5. We implemented logic in the HI view that rebuilds the TypeCode attribute and puts the PRES/NON-PRESC value in the JSON file visible in the HI view.
  4. Reltio contains checksum logic and does not generate an event when the source code changes but maps to the same canonical code
    1. We implemented delta detection logic and send an update only when a change is detected
      1. The lookup to RDM requires logic to resolve the HUB_CALLBACK code to the canonical code.
      2. A change is sent only when:
        1. the Type does not exist
        2. the Type changes from PRESC to NON-PRESC
        3. the Type changes from NON-PRESC to PRESC


Example of new input values:

attributes:
"TypeCode": [
  {
    "value": "HCPST.M-SP.AN"
  }
]

TYP.M is a SubTypeCode source code mapped to HCPST.M
SP.WDE.04 is a Speciality source code mapped to SP.AN

rdm/lookupTypes/HCPSubTypeCode:HCPST.M
rdm/lookupTypes/HCPSpecialty:SP.AN
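Putting the example together, the TypeCode value is simply the two canonical codes joined with a hyphen. A minimal sketch (the lookup maps below are illustrative, taken only from the example values on this page; the real mappings live in the RDM lookup types):

```python
# Illustrative RDM lookups taken from the example above; the real values live in
# rdm/lookupTypes/HCPSubTypeCode and rdm/lookupTypes/HCPSpecialty.
SUBTYPE_RDM = {"TYP.M": "HCPST.M"}      # source code -> canonical code
SPECIALTY_RDM = {"SP.WDE.04": "SP.AN"}  # source code -> canonical code

def build_type_code(subtype_source, specialty_source):
    """Combine the two canonical codes with a hyphen, as the service does."""
    subtype = SUBTYPE_RDM.get(subtype_source, "")
    specialty = SPECIALTY_RDM.get(specialty_source, "")
    return f"{subtype}-{specialty}"

print(build_type_code("TYP.M", "SP.WDE.04"))  # -> HCPST.M-SP.AN
```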

Flow diagram

Logical Architecture

\"\"

HCPType PreCallback Logic

\"\"


Steps

Overview Reltio attributes and RDM

                {
                    "label": "Type",
                    "name": "TypeCode",
                    "description": "HCP Type Code",
                    "type": "String",
                    "hidden": false,
                    "important": false,
                    "system": false,
                    "required": false,
                    "faceted": true,
                    "searchable": true,
                    "attributeOrdering": {
                        "orderType": "ASC",
                        "orderingStrategy": "LUD"
                    },
                    "uri": "configuration/entityTypes/HCP/attributes/TypeCode",
                    "lookupCode": "rdm/lookupTypes/HCPType",
                    "skipInDataAccess": false
                },

Based on:

SubTypeCode:

                {
                    "label": "Sub Type",
                    "name": "SubTypeCode",
                    "description": "HCP SubType Code",
                    "type": "String",
                    "hidden": false,
                    "important": false,
                    "system": false,
                    "required": false,
                    "faceted": true,
                    "searchable": true,
                    "attributeOrdering": {
                        "orderType": "ASC",
                        "orderingStrategy": "LUD"
                    },
                    "uri": "configuration/entityTypes/HCP/attributes/SubTypeCode",
                    "lookupCode": "rdm/lookupTypes/HCPSubTypeCode",
                    "skipInDataAccess": false
                },

Speciality:

                        {
                            "label": "Specialty",
                            "name": "Specialty",
                            "description": "Specialty of the entity, e.g., Adult Congenital Heart Disease",
                            "type": "String",
                            "hidden": false,
                            "important": false,
                            "system": false,
                            "required": false,
                            "faceted": true,
                            "searchable": true,
                            "attributeOrdering": {
                                "orderingStrategy": "LUD"
                            },
                            "cardinality": {
                                "minValue": 0,
                                "maxValue": 1
                            },
                            "uri": "configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty",
                            "lookupCode": "rdm/lookupTypes/HCPSpecialty",
                            "skipInDataAccess": false
                        },

RDM

Codes:

rdm/lookupTypes/HCPType:HCPT.NPRS

rdm/lookupTypes/HCPType:HCPT.PRES

\"\"


HCPType PreCallback Logic

Flow:

  1. Component Startup
    1. During the Pre-Callback component startup we initialize an in-memory cache that stores the two PRESC and NPRES values for the HUB_CALLBACK source
      1. This implementation limits the number of requests to Reltio RDM made through the manager
      2. It also limits the number of API calls from the pre-callback service to the manager service
    2. The cache has a TTL configuration and is invalidated after the TTL expires
  2. Activation
    1. Check if the feature flag activation is true
    2. Take into account only CHANGED and CREATED events; this pre-callback implementation is limited to HCP objects
    3. Take into account only profiles whose crosswalks are not on the following list. When a profile contains only crosswalks from this configuration list, skip the TypeCode generation. When the profile contains one of the following crosswalks and additionally a valid crosswalk such as ONEKEY, generate a TypeCode.
      1. - type: not
        preconditions:
        - type: source
        values:
        - HUB_CALLBACK
        - NUCLEUS
        - LEGACYMDM
        - PFORCERX_ID
  3. Steps
    1. Each CHANGED or CREATED event triggers the following logic:
      1. Get the canonical code from HCP/attributes/SubTypeCode
        1. pick the lookupCode
        2. <fallback 1> if the lookupCode is missing and a lookupError exists, pick the value
        3. <fallback 2> if SubTypeCode does not exist, put an empty value = ""
      2. Get the canonical code from the HCP/attributes/Specialities/attributes/Specialty array
        1. pick the specialty with Rank equal to 1
        2. pick the lookupCode
        3. <fallback 1> if the lookupCode is missing and a lookupError exists, pick the value
        4. <fallback 2> if Specialty does not exist, put an empty value = ""
      3. Combine the two canonical codes, using the "-" hyphen character as a separator.
      4. possible values:
        1. <subtypecode_canonicalCode>-<speciality_canonicalCode>
        2. <subtypecode_canonicalCode>-""
        3. ""-<speciality_canonicalCode>
        4. ""-""
      5. Execute the delta detection logic:
        1. <transformation function>: using the RDM cache, translate the generated value to the PRESC or NPRES code
        2. Compare the generated value with HCP/attributes/TypeCode
          1. pick the lookupCode and compare it to the generated and translated value
          2. <fallback 1> if the lookupCode is missing and a lookupError exists, pick the value and compare it to the generated, untranslated value
        3. Generate:
          1. INSERT_ATTRIBUTE: when TypeCode does not exist
          2. UPDATE_ATTRIBUTE: when the value is different
        4. Forward the main event to the next processing topic when there are 0 changes.

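The steps above can be condensed into a small end-to-end sketch: attribute extraction with the two fallbacks, hyphen concatenation, translation through the cached RDM mapping, and the delta decision. Everything below is a simplified illustration, not the real service code; the attribute shape mirrors Reltio OV attributes and the RDM_CACHE mapping stands in for the in-memory cache initialized at component startup.

```python
# Simplified sketch of the pre-callback steps above. Attributes are dicts like
# {"lookupCode": ..., "value": ..., "lookupError": ...}; all names are illustrative.
RDM_CACHE = {"HCPST.M-SP.AN": "HCPT.PRES"}  # combined canonical code -> HCPType code

def canonical(attr):
    """Pick the lookupCode; fall back to the raw value on lookupError; else ""."""
    if attr is None:
        return ""                       # <fallback 2>: attribute missing
    if attr.get("lookupCode"):
        return attr["lookupCode"]
    if attr.get("lookupError"):
        return attr.get("value", "")    # <fallback 1>
    return ""

def decide(subtype_attr, rank1_specialty_attr, current_type_attr):
    # combine the two canonical codes with a hyphen
    generated = canonical(subtype_attr) + "-" + canonical(rank1_specialty_attr)
    translated = RDM_CACHE.get(generated)          # PRESC/NPRES canonical code
    current = canonical(current_type_attr)
    if not current:
        return "INSERT_ATTRIBUTE"                  # TypeCode does not exist
    if translated is not None and current != translated:
        return "UPDATE_ATTRIBUTE"                  # value is different
    return "FORWARD"                               # 0 changes: forward main event

print(decide({"lookupCode": "HCPST.M"}, {"lookupCode": "SP.AN"}, None))  # -> INSERT_ATTRIBUTE
```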
Triggers

Trigger action

Component

Action

Default time

IN Events incoming | Callback Service: Pre-Callback: HCP Type Callback logic | Full events trigger the pre-callback stream; during processing, partial events with the generated changes are produced. If the data is in sync, no partial event is generated and the main event is forwarded to external clients | realtime - events stream

Dependent components

Component

Usage

Callback Service | Main component of the flow implementation
Entity Enricher | Generates the incoming full events
Manager | Processes callbacks generated by this service
Hub Store | HUB Mongo cache
LOV read | Lookup RDM values flow


" + }, + { + "title": "China IQVIA<->COMPANY", + "pageID": "263501508", + "pageLink": "/display/GMDM/China+IQVIA%3C-%3ECOMPANY", + "content": "

Description

The section and all subpages describe HUB adjustments for China clients with transformation to the COMPANY model. HUB created logic that allows China clients to make a transparent transition between the IQVIA and COMPANY models. Additionally, the DCR process will be adjusted to the new COMPANY model. The new DCR process will eliminate many of the DCRs that are currently created in the IQVIA tenant. The changes and all flows are described in this section and its subpages; the links are displayed below.

HUB processed all the changes under MR-4191 – the MAIN task. To verify and track progress, please check Jira.

Flows

Triggers

Described in separate sub-pages for each process.

Dependent components

Described in separate sub-pages for each process.


Documents with HUB details

mapping China_attributes.xlsx

API: China_HUB_Changes.docx

dcr: China_HUB_DCR_Changes.docx


" + }, + { + "title": "China IQVIA - current flow and user properties + COMPANY changes", + "pageID": "284805827", + "pageLink": "/pages/viewpage.action?pageId=284805827", + "content": "

Description

This page describes the current IQVIA flow. It contains the full API description and the complex API on the IQVIA end, with all details about the HUB configuration and properties used for the China IQVIA model.

In the next section of this page, the COMPANY changes are described in a generic way. More details of the new COMPANY complex model and the API adjustments are described in the other subpages.

IQVIA

COMPANY

The key concepts and general description of COMPANY adjustments:




" + }, + { + "title": "China Selective Router - model transformation flow", + "pageID": "284800572", + "pageLink": "/display/GMDM/China+Selective+Router+-+model+transformation+flow", + "content": "

Description

The China selective router was created to enrich and transform events from the COMPANY model to the IQVIA model. The component is also able to connect a related mainHco with an hco, based on the Reltio connections API; in the IQVIA model this is reflected as MainHco in the Workplace attribute.

Flow diagram

\"\"


\"\"

Steps

Triggers

Trigger action

Component

Action

Default time

kafka message | eventTransformerTopology | transform the event to the IQVIA model | realtime


Dependent components

Component

Usage

Mdm manager | getEntitisByUri, getEntityConnectionsByUri
HCPModelConverter | toIqviaModel
" + }, + { + "title": "Create HCP/HCO complex methods - IQVIA model (legacy)", + "pageID": "284800564", + "pageLink": "/pages/viewpage.action?pageId=284800564", + "content": "

Description

The IQVIA China user uses the following methods to create HCP and HCO objects - Create/Update HCP/HCO/MCO. The linked page describes the API call flow. The most complex and important sections for China users are the following:

The IQVIA China user also activates the DCR logic using this Create HCP method. The complete description of this flow is here: DCR IQVIA flow

Currently, the DCR activation process from the IQVIA flow is described here - DCR generation process (China DCR)

New DCR COMPANY flow is described here: DCR COMPANY flow


The flow diagram and step descriptions below contain a detailed description of all the cases used in the HCP, HCO, and DCR methods in the legacy code.

Flow diagram

\"\"

Steps

HCP Service = China logic / STEPS:




 

HCO Service = China logic / STEPS:

Triggers

Trigger action

Component

Action

Default time

operation link
REST call | Manager: POST/PATCH /hco /hcp /mco | create specific objects in the MDM system | API synchronous requests - realtime | Create/Update HCP/HCO/MCO
REST call | Manager: GET /lookup | get the lookup code from Reltio | API synchronous requests - realtime | LOV read
REST call | Manager: GET /entity?filter=(criteria) | search specific objects in the MDM system | API synchronous requests - realtime | Search Entity
REST call | Manager: GET /entity | get the object from Reltio | API synchronous requests - realtime | Get Entity
Kafka Request DCR | Manager: Push Kafka DCR event | push the Kafka DCR event | Kafka asynchronous event - realtime | DCR IQVIA flow


Dependent components

Component

Usage

Manager | search entities in MDM systems
API Gateway | proxy REST and secure access
Reltio | Reltio MDM system
DCR Service | Old legacy DCR processor
" + }, + { + "title": "Create HCP/HCO complex V2 methods - COMPANY model", + "pageID": "284800566", + "pageLink": "/pages/viewpage.action?pageId=284800566", + "content": "

Description


This API is used to process complex HCP/HCO requests. It supports the management of MDM entities with the relationships between them. The user can provide data in the IQVIA or COMPANY model.


Flow diagram

\"\"

Flow diagram HCP (overview)

(details on main diagram)


\"\"

Steps HCP 

  1. Map the HCP to the COMPANY model
  2. Extract the parent HCO - the MainHCO attribute of the affiliated HCO entity
  3. Execute the search service for the affiliated HCO and the parent HCO
    1. If the affiliated HCO or the parent HCO is not found in the MDM system: execute the trigger service
    2. Otherwise set the entity URI of the found objects
  4. Execute the HCO complex service for the HCO request - the affiliated HCO and parent HCO entities
  5. Map the HCO response to the HCP contact affiliations attribute
    1. create the relation between the HCP and the affiliated HCO
    2. create the relation between the HCP and the parent HCO
  6. Execute the HCP simple service


HCP API search entity service

The search entity service is used to search for existing entities in the MDM system. This feature is configured per user via the searchConfigHcpApi attribute. The configuration is split into sections for the affiliated HCO and the parent HCO and contains a list of searcher implementations - searcher types.

attribute | description
HCO | search configuration for the affiliated HCO entity
MAIN_HCO | search configuration for the parent HCO entity
searcherType | type of searcher implementation
attributes | attributes used by the attribute search implementation



HCP trigger service

The trigger service is used to execute an action when entities are missing in the MDM system. This feature is configured per user via the triggerType attribute.


trigger type | description
CREATE | create the missing HCO or parent HCO via the HCO complex API
DCR | create a DCR request for the missing objects
IGNORE | ignore the missing objects; the flow will continue, and the missing objects and relations will not be created
REJECT | reject the request, stop processing, and return a response to the client

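The trigger-type dispatch in the table above can be sketched as follows (a hedged illustration with hypothetical function and message names; the real trigger service implementation is not shown on this page):

```python
# Hypothetical dispatch over the configured triggerType; it only illustrates
# the table above and is not the real service implementation.
def handle_missing_entity(trigger_type, entity):
    if trigger_type == "CREATE":
        return f"create {entity} via HCO complex API"
    if trigger_type == "DCR":
        return f"create DCR request for {entity}"
    if trigger_type == "IGNORE":
        return f"skip {entity}; relations will not be created"
    if trigger_type == "REJECT":
        # stop processing and return an error response to the client
        raise ValueError(f"request rejected: {entity} not found")
    raise ValueError(f"unknown triggerType: {trigger_type}")

print(handle_missing_entity("DCR", "parent HCO"))  # -> create DCR request for parent HCO
```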

Flow diagram HCO (overview)

(details on main diagram)

\"\"

Steps HCO

  1. Map the HCO request to the COMPANY model
  2. If the hco.uri attribute is null then create the HCO entity
  3. Create relations
    1. if parentHCO.uri is not null then use it to create other affiliations
    2. if parentHCO.uri is null then use the search service to find the entity
      1. if the entity is found then use it to create other affiliations
      2. if the entity is not found then create the parentHCO and use it to create other affiliations
    3. if the relation exists then do nothing
    4. if the relation doesn't exist then create the relation

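The HCO steps above boil down to two small decisions: resolve the parent HCO URI (request value, then search, then create) and create the relation only when it does not already exist. A hedged sketch, where `search`, `create`, and the relation callbacks are stand-ins, not real HUB APIs:

```python
# Illustrative condensation of the HCO steps above; callables are stand-ins.
def resolve_parent_uri(parent_uri, search, create):
    """Use the request URI, else search the MDM, else create the parent HCO."""
    if parent_uri is not None:
        return parent_uri
    found = search()                      # search service lookup
    return found if found is not None else create()

def ensure_relation(relation_exists, create_relation):
    """Create the relation only when it does not already exist."""
    if relation_exists:
        return "noop"                     # relation already present: do nothing
    create_relation()
    return "created"

uri = resolve_parent_uri(None, search=lambda: None, create=lambda: "entities/new1")
print(uri, ensure_relation(False, create_relation=lambda: None))  # -> entities/new1 created
```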

Triggers

Trigger action

Component

Action

Default time

REST call | manager POST/PATCH v2/hcp/complex | create HCP, HCO objects and relations | API synchronous requests - realtime
REST call | manager POST/PATCH v2/hco/complex | create HCO objects and relations | API synchronous requests - realtime


Dependent components

Component

Usage

Entity search service | search entity HCP API operation
Trigger service | get trigger result operation
Entity management service | get entity connections
" + }, + { + "title": "Create HCP/HCO simple V2 methods - COMPANY model", + "pageID": "284806830", + "pageLink": "/pages/viewpage.action?pageId=284806830", + "content": "

Description

V2 API simple methods are used to manage the Reltio entities - HCP/HCO/MCO.

They support basic HCP/HCO/MCO requests with the COMPANY model.

Flow diagram

\"\"

Steps

  1. Crosswalk generator - auto-create a crosswalk if it does not exist
  2. Entity validation
    • Authorize request - check if the user has the appropriate permission, country, and source
    • GetEntityByCrosswalk operation - check if the entity exists in Reltio; applicable for the PATCH operation
    • Quality service - checks the entity attributes against the validation pipeline
    • DataProviderCrosswalkCheck - check if the entity contributor provider exists in Reltio

  3. Execute the HTTP request - post entities Reltio operation
  4. Execute the GetOrRegister COMPANYGlobalCustomerID operation


 

Crosswalk generator service

The crosswalk generator service is used to create a crosswalk when the entity crosswalk is missing. This feature is configured per user via the crosswalkGeneratorConfig attribute.


attribute | description
crosswalkGeneratorType | crosswalk generator implementation
type | crosswalk type value
sourceTable | crosswalk source table value

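For the UUID generator type, a plausible shape follows directly from the table. A hedged sketch, where the crosswalk structure and the example `type`/`sourceTable` values are assumptions, not taken from the real service:

```python
import uuid

# Assumed crosswalk shape; `type` and `sourceTable` come from the user's
# crosswalkGeneratorConfig as described in the table above. Illustrative only.
def generate_crosswalk(config, entity):
    """Attach a generated crosswalk only when the entity has none."""
    if entity.get("crosswalks"):
        return entity                             # crosswalk present: nothing to do
    entity["crosswalks"] = [{
        "type": config["type"],                   # crosswalk type value
        "sourceTable": config["sourceTable"],     # crosswalk source table value
        "value": str(uuid.uuid4()),               # UUID generator implementation
    }]
    return entity

cfg = {"crosswalkGeneratorType": "UUID", "type": "configuration/sources/HUB", "sourceTable": "HCP"}
entity = generate_crosswalk(cfg, {"attributes": {}})
print(entity["crosswalks"][0]["sourceTable"])  # -> HCP
```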

Triggers

Trigger action | Component | Action | Default time
REST call | Manager: POST/PATCH /v2/hcp | create HCP objects in the MDM system | API synchronous requests - realtime
REST call | Manager: POST/PATCH /v2/hco | create HCO objects in the MDM system | API synchronous requests - realtime
REST call | Manager: POST/PATCH /v2/mco | create MCO objects in the MDM system | API synchronous requests - realtime

Dependent components

Component

Usage

COMPANY Global Customer ID Registry | getOrRegister operation
Crosswalk generator service | generate crosswalk operation
" + }, + { + "title": "DCR IQVIA flow", + "pageID": "284800568", + "pageLink": "/display/GMDM/DCR+IQVIA+flow", + "content": "

Description

The following page contains a detailed description of the IQVIA DCR flow for China clients. The logic is complex and involves multiple relations.

Currently, it contains the following:

Complex business rules for generating DCRs,

Limited flexibility with IQVIA tenants,

Complex end-to-end technical processes (e.g., hand-offs, transfers, etc.)


The flow involves numerous file transfers and hand-offs.

The idea is to build a simplified flow in the COMPANY model - the details are described here - DCR COMPANY flow


The diagrams and descriptions below show the current state, which will be deprecated in the future.

Flow diagram - Overview - high level

\"\"

Flow diagram - Overview - simplified view


\"\"

Steps

\"\"



HUB LOGIC

HUB Configuration overview:

DCR CONFIG AND CLASSES:

Logic is in the MDM-MANAGER

 

Config:

dcrConfig:
  dcrProcessing: yes
  routeEnableOnStartup: yes
  deadLetterEndpoint: "file:///opt/app/log/rejected/"
  externalLogActive: yes
  activationCriteria:
    NEW_HCO:
      - country: "CN"
        sources:
          - "CN3RDPARTY"
          - "FACE"
          - "GRV"
    NEW_HCP:
      - country: "CN"
        sources:
          - "GRV"
    NEW_WORKPLACE:
      - country: "CN"
        sources:
          - "GRV"
          - "MDE"
          - "FACE"
          - "CN3RDPARTY"
          - "EVR"

  continueOnHCONotFoundActivationCriteria:
    - country: "CN"
      sources:
        - "GCP"
    - countries:
        - AD
        - BL
        - BR
        - DE
        - ES
        - FR
        - FR
        - GF
        - GP
        - IT
        - MC
        - MF
        - MQ
        - MU
        - MX
        - NC
        - NL
        - PF
        - PM
        - RE
        - RU
        - TR
        - WF
        - YT
      sources:
        - GRV
        - GCP
  validationStatusesMap:
    VALID: validated
    NOT_VALID: notvalidated
    PENDING: pending

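The activationCriteria block in the config is a list of country-plus-sources entries per DCR type. Matching an incoming request against it can be sketched as follows (simplified: the real config also supports a `countries` list, as in continueOnHCONotFoundActivationCriteria; the dict below holds only two entries from the config above):

```python
# Simplified matcher for dcrConfig.activationCriteria shown above: a DCR of a
# given type is activated when some entry matches both the request country and
# the request source. Illustrative only.
ACTIVATION_CRITERIA = {
    "NEW_HCP": [{"country": "CN", "sources": ["GRV"]}],
    "NEW_HCO": [{"country": "CN", "sources": ["CN3RDPARTY", "FACE", "GRV"]}],
}

def dcr_activated(dcr_type, country, source):
    return any(
        entry["country"] == country and source in entry["sources"]
        for entry in ACTIVATION_CRITERIA.get(dcr_type, [])
    )

print(dcr_activated("NEW_HCP", "CN", "GRV"))   # -> True
print(dcr_activated("NEW_HCP", "CN", "FACE"))  # -> False
```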
Flow diagram - DCR Activation

\"\"

Steps

IQVIA/China  ACTIVATION LOGIC/ACTIVATION CRITERIA:


Kafka DCR sender - produce event to Kafka Topic



Flow diagram - DCR event Receiver (DCR processor)

\"\"

Steps


NewHCPDCRService - STEPS  - Process DCR Custom Logic (NEW_HCP)


NewHCODCRService - STEPS  - Process DCR Custom Logic (NEW_HCO, NEW_HCO_L1,NEW_HCO_L2)


NewWorkplaceDCRService - STEPS  - Process DCR Custom Logic (NEW_WORKPLACE)


Flow diagram - DCR Response - process DCR Response from API client

\"\"

Steps


IQVIA/China DCRResponseRoute:


DCR response processing:

REST api

Activated by china_apps user based on the IQVIA EVRs export


Used by the China client to accept/reject (action) a DCR in Reltio


Triggers

Trigger action

Component

Action

Default time

Operation link | Details
REST call | Manager: POST/PATCH /hcp | create specific objects in the MDM system | API synchronous requests - realtime | Create/Update HCP/HCO/MCO | Initializes the DCR request
Kafka Request DCR | Manager: Push Kafka DCR event | push the Kafka DCR event | Kafka asynchronous event - realtime | DCR IQVIA flow | Push the DCR event to the DCR processor
Kafka Request DCR | DCRServiceRoute: Poll Kafka DCR event | consume Kafka DCR events | Kafka asynchronous event - realtime | DCR IQVIA flow | Polls/consumes DCR events and processes them
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/accept | updates the DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to accept a DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/updateHCP | updates the DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to update an HCP through a DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/updateHCO | updates the DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to update an HCO through a DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/updateAffiliations | updates the DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to update HCO-to-HCO affiliations through a DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/reject | updates the DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to reject a DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/merge | updates the DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to merge DCR HCP entities


Dependent components

Component

Usage

Manager | search entities in MDM systems
API Gateway | proxy REST and secure access
Reltio | Reltio MDM system
Manager | Old legacy DCR processor
" + }, + { + "title": "DCR COMPANY flow", + "pageID": "284800570", + "pageLink": "/display/GMDM/DCR+COMPANY+flow", + "content": "

Description

TBD 

Flow diagram (drafts)

\"\"




\"\"




Steps

TBD


Triggers

Trigger action

Component

Action

Default time






Dependent components

Component

Usage



" + }, + { + "title": "Model Mapping (IQVIA<->COMPANY)", + "pageID": "284800575", + "pageLink": "/pages/viewpage.action?pageId=284800575", + "content": "

Description

The interface is used to map MDM entities between the IQVIA and COMPANY models.

Flow diagram

-

Mapping

Address ↔ Addresses attribute mapping

IQVIA MODEL ATTRIBUTE [Address] | COMPANY MODEL ATTRIBUTE [Addresses]
Address.Premise | Addresses.Premise
Address.Building | Addresses.Building
Address.VerificationStatus | Addresses.VerificationStatus
Address.StateProvince | Addresses.StateProvince
Address.Country | Addresses.Country
Address.AddressLine1 | Addresses.AddressLine1
Address.AddressLine2 | Addresses.AddressLine2
Address.AVC | Addresses.AVC
Address.City | Addresses.City
Address.Neighborhood | Addresses.Neighborhood
Address.Street | Addresses.Street
Address.Geolocation.Latitude | Addresses.Latitude
Address.Geolocation.Longitude | Addresses.Longitude
Address.Geolocation.GeoAccuracy | Addresses.GeoAccuracy
Address.Zip.Zip4 | Addresses.Zip4
Address.Zip.Zip5 | Addresses.Zip5
Address.Zip.PostalCode | Addresses.POBox


Phone attribute mappings

IQVIA MODEL ATTRIBUTE | COMPANY MODEL ATTRIBUTE
Phone.LineType | Phone.LineType
Phone.LocalNumber | Phone.LocalNumber
Phone.Number | Phone.Number
Phone.FormatMask | Phone.FormatMask
Phone.GeoCountry | Phone.GeoCountry
Phone.DigitCount | Phone.DigitCount
Phone.CountryCode | Phone.CountryCode
Phone.GeoArea | Phone.GeoArea
Phone.FormattedNumber | Phone.FormattedNumber
Phone.AreaCode | Phone.AreaCode
Phone.ValidationStatus | Phone.ValidationStatus
Phone.TypeIMS | Phone.Type
Phone.Active | Phone.Privacy.OptOut


Email attribute mappings

IQVIA MODEL ATTRIBUTE | COMPANY MODEL ATTRIBUTE
Email | Email
Email.Domain | Email.Domain
Email.DomainType | Email.DomainType
Email.ValidationStatus | Email.ValidationStatus
Email.TypeIMS | Email.Type
Email.Active | Email.Privacy.OptOut
Email.Username | Email.Source.SourceName

HCO mappings

IQVIA MODEL ATTRIBUTE | COMPANY MODEL ATTRIBUTE
Country | Country
Name | Name
TypeCode | TypeCode
SubTypeCode | SubTypeCode
CMSCoveredForTeaching | CMSCoveredForTeaching
Commenters | Commenters
CommHosp | CommHosp
Description | Description
Fiscal | Fiscal
GPOMembership | GPOMembership
HealthSystemName | HealthSystemName
NumInPatients | NumInPatients
ResidentProgram | ResidentProgram
TotalLicenseBeds | TotalLicenseBeds
TotalSurgeries | TotalSurgeries
VADOD | VADOD
Academic | Academic
KeyFinancialFiguresOverview.SalesRevenueUnitOfSize | KeyFinancialFiguresOverview.SalesRevenueUnitOfSize
ClassofTradeN.Specialty | ClassofTradeN.Specialty
ClassofTradeN.Classification | ClassofTradeN.Classification
Identifiers.ID | Identifiers.ID
Identifiers.Type | Identifiers.Type
SourceName | OriginalSourceName
NumOutPatients | OutPatientsNumbers
Status | ValidationStatus
UpdateDate | SourceUpdateDate
WebsiteURL | Website.WebsiteURL
OtherNames | OtherNames.Name
- | OtherNames.Type (constant: OTHER_NAMES)
OfficialName | OtherNames.Name
- | OtherNames.Type (constant: OFFICIAL_NAME)
Address* | Addresses*
Phone* | Phone*


HCP mappings

IQVIA MODEL ATTRIBUTE | COMPANY MODEL ATTRIBUTE | DESCRIPTION
Country | Country |
DoB | DoB |
FirstName | FirstName | case: (IQVIA -> COMPANY), if IQVIA(FirstName) is empty then IQVIA(Name) is used as the COMPANY(FirstName) mapping result
LastName | LastName | case: (IQVIA -> COMPANY), if IQVIA(LastName) is empty then IQVIA(Name) is used as the COMPANY(LastName) mapping result
Name | Name |
NickName | NickName |
Gender | Gender |
PrefferedLanguage | PrefferedLanguage |
Prefix | Prefix |
SubTypeCode | SubTypeCode |
Title | Title |
TypeCode | TypeCode |
PresentEmployment | PresentEmployment |
Certificates | Certificates |
License | License |
Identifiers.ID | Identifiers.ID |
Identifiers.Type | Identifiers.Type |
UpdateDate | SourceUpdateDate |
SourceName | SourceValidation.SourceName |
ValidationChangeDate | SourceValidation.ChangeDate |
ValidationStatus | SourceValidation.Status |
Speaker.SpeakerLevel | SpeakerLevel |
Speaker.SpeakerType | SpeakerType |
Speaker.SpeakerStatus | SpeakerStatus |
Speaker.IsSpeaker | IsSpeaker |
DPPresenceChannelCode | DigitalPresenceChannelCode |
METHOD PARAM<Workplaces> | ContactAffiliations | case: (IQVIA -> COMPANY), the workplaces param is converted to HCO and added to ContactAffiliations
METHOD PARAM<MainWorkplaces> | ContactAffiliations | case: (IQVIA -> COMPANY), the main workplaces param is converted to HCO and added to ContactAffiliations
Workplace | METHOD PARAM<Workplaces> | case: (COMPANY -> IQVIA), the workplaces param is converted to HCO and assigned to Workplace
MainWorkplace | METHOD PARAM<MainWorkplaces> | case: (COMPANY -> IQVIA), the main workplaces param is converted to HCO and assigned to MainWorkplace
Address* | Addresses* |
Phone* | Phone* |
Email* | Email* |

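The FirstName/LastName fallback rule described in the HCP mapping table can be sketched as follows (illustrative only; attribute access is simplified to a plain dict, not the real EntityKt model):

```python
# Sketch of the IQVIA -> COMPANY name mapping rule from the table above: when
# FirstName (or LastName) is empty on the IQVIA side, Name is used as the
# mapping result. Simplified to plain dicts for illustration.
def map_names(iqvia):
    name = iqvia.get("Name", "")
    return {
        "FirstName": iqvia.get("FirstName") or name,  # fall back to Name
        "LastName": iqvia.get("LastName") or name,    # fall back to Name
        "Name": name,
    }

print(map_names({"Name": "Li Wei", "LastName": "Li"}))
```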
Triggers

Trigger action

Component

Action

Default time

Method invocation | HCPModelConverter.class | toCOMPANYModel(EntityKt iqiviaModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces, List<AttributeValueKt> addresses) | realtime
Method invocation | HCPModelConverter.class | toCOMPANYModel(EntityKt iqiviaModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces) | realtime
Method invocation | HCPModelConverter.class | toIqiviaModel(EntityKt COMPANYModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces) | realtime
Method invocation | HCOModelConverter.class | toCOMPANYModel(EntityKt iqiviaModel) | realtime
Method invocation | HCOModelConverter.class | toIqiviaModel(EntityKt COMPANYModel) | realtime


Dependent components

Component

Usage

data-model | Mapper uses the models to convert between them
" + }, + { + "title": "User Profile (China user)", + "pageID": "284800562", + "pageLink": "/pages/viewpage.action?pageId=284800562", + "content": "

Description

The user profile got new attributes that are used in the V2 API.


Attribute | Description
searchConfigHcpApi | config of the search entity service for the HCP API - contains the HCO/MAIN_HCO search entity type configuration
searchConfigHcoApi | config of the search entity service for the HCO API
searcherType | type of searcher implementation; available values: [UriEntitySearch/CrosswalkEntitySearch/AttributesEntitySearch]
attributes | attribute names used in AttributesEntitySearch
triggerType | V2 HCP/HCO complex API trigger configuration - the action executed when entities are missing in the request; available values: [REJECT/IGNORE/DCR/CREATE]
crosswalkGeneratorConfig | auto-create the entity crosswalk - if missing in the request
crosswalkGeneratorType | type of crosswalk generator; available values: [UUID]
type | auto-generated crosswalk type value
sourceTable | auto-generated crosswalk source table value
sourceModel | source model of the entity provided by the user for the V2 HCP/HCO complex API; available values: [COMPANY, IQIVIA]


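Assembled from the table above, a user-profile fragment using these attributes might look as follows. This is a hedged illustration: the values and the nesting of the searcher entries are assumptions, not copied from a real profile.

```yaml
# Illustrative user-profile fragment; values and nesting are examples only
sourceModel: COMPANY
triggerType: DCR
searchConfigHcpApi:
  HCO:
    searcherType: CrosswalkEntitySearch
  MAIN_HCO:
    searcherType: AttributesEntitySearch
    attributes:
      - Name
      - Addresses.City
crosswalkGeneratorConfig:
  crosswalkGeneratorType: UUID
  type: configuration/sources/HUB   # assumed example value
  sourceTable: HCP                  # assumed example value
```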

\"\"


Flow diagram

TBD

Steps

TBD


Triggers

Trigger action

Component

Action

Default time






Dependent components

Component

Usage



" + }, + { + "title": "User", + "pageID": "284811104", + "pageLink": "/display/GMDM/User", + "content": "


The user is configured with a profile that is shared across all MDM services. The configuration is provided via YAML files and loaded at boot time. To use the profile in any application, import the com.COMPANY.mdm.user.UserConfiguration configuration from the mdm-user module. This allows you to use the UserService class, which is used to retrieve users.


User profile configuration

attribute | description
name | user name
description | user description
token | token used for authentication
getEntityUsesMongoCache | retrieve the entity from the Mongo cache in the get entity operation
lookupsUseMongoCache | retrieve lookups from the Mongo cache in LookupService
trim | trimming of entities/relationships in the response to the client
guardrailsEnabled | check if a contributor provider crosswalk exists together with the data provider crosswalk
roles | user permissions
countries | user allowed countries
sources | user allowed crosswalks
defaultClient | default MDM client name
validationRulesForValidateEntityService | validation rules configuration
batches | user allowed batches configuration
defaultCountry | user default country, used in the api-router when the country is not provided in the request
overrideZones | user country-zone configuration that overrides the default api-router behavior
kafka | user Kafka configuration, used in the Kafka management service
reconciliationTargets | reconciliation targets, used in the event resend service



" + }, + { + "title": "Country Cluster", + "pageID": "234715057", + "pageLink": "/display/GMDM/Country+Cluster", + "content": "

General assumptions

Example of mapping: 

Country | countryCluster
Andorra (AD) | France (FR)
Monaco (MC) | France (FR)

Changes in MDM HUB

1. Enrichment of Kafka events with the extra parameter defaultCountryCluster

2. Add a new column COUNTRY_CLUSTER representing the default country cluster in views:

3. Handling cluster country sent by PforceRx in DCR process in a transparent way

Change in the event model

{
  "eventType": "HCP_CHANGED",
  "eventTime": 1514976138977,
  "countryCode": "MC",
  "defaultCountryCluster": "FR",
  "entitiesURIs": ["entities/ysCkGNx"],
  "targetEntity": {
    "uri": "entities/ytY3wd9",
    "type": "configuration/entityTypes/HCP",
Changes on client-side

  1. MULE
    • MULE must map defaultCountryCluster to country sent to PforceRx in the GRV pipeline.
  2. ODS

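The event enrichment from point 1 above can be sketched as follows. The country-to-cluster map holds only the example rows from this page, and the behavior for a country that belongs to no cluster (falling back to the country's own code) is an assumption here, not documented behavior.

```python
# Sketch of the Kafka event enrichment described above: before publishing,
# the HUB adds defaultCountryCluster based on a country -> cluster map.
# The map below only holds the example rows from this page.
COUNTRY_CLUSTER = {"AD": "FR", "MC": "FR"}

def enrich_event(event):
    country = event.get("countryCode")
    # ASSUMPTION: a non-clustered country defaults to its own code
    event["defaultCountryCluster"] = COUNTRY_CLUSTER.get(country, country)
    return event

print(enrich_event({"eventType": "HCP_CHANGED", "countryCode": "MC"}))
```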

" + }, + { + "title": "Create/Update HCP/HCO/MCO", + "pageID": "164470018", + "pageLink": "/pages/viewpage.action?pageId=164470018", + "content": "

Description

The REST interfaces exposed through the MDM Manager component are used by clients to update or create HCP/HCO/MCO objects. The update process is supported by all connected MDMs – Reltio and Nucleus360 – with some limitations. At this moment Reltio MDM is fully supported for the entity types HCP, HCO, and MCO, while Nucleus360 supports only the HCP update process. The decision which MDM should process the update request is controlled by configuration: a configuration map defines the assignment of each country to the MDM that stores that country's data. Based on this map, MDM Manager selects the correct MDM system to forward the update request to.

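The country-to-MDM routing described above can be sketched as follows (the map contents and function names are illustrative, not the real configuration):

```python
# Illustrative routing map: country -> MDM system that stores that country's
# data. MDM Manager forwards the update request based on this assignment.
COUNTRY_TO_MDM = {"US": "RELTIO", "JP": "NUCLEUS360"}  # example entries only

def select_mdm(country_code, entity_type):
    mdm = COUNTRY_TO_MDM.get(country_code)
    if mdm is None:
        raise ValueError(f"no MDM configured for country {country_code}")
    # Nucleus360 supports only the HCP update process (see the description above)
    if mdm == "NUCLEUS360" and entity_type != "HCP":
        raise ValueError("Nucleus360 supports only HCP updates")
    return mdm

print(select_mdm("US", "HCO"))  # -> RELTIO
```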
The difference between the Create and Update operations is an additional API request during the update operation: during the update, the entity is retrieved from the MDM by its crosswalk value for validation purposes.

Diagrams 1 and 2 present the standard flow. On diagrams 3, 4, 5, and 6 the additional logic is optional and is activated once a specific condition or attribute is provided.

The diagrams below present a sequence of steps in processing client calls.

Update 2023-09:

To increase Update HCP/HCO/MCO performance, the logic was slightly altered:

Flow diagram

1Create HCP/HCO/MCO

\"\"

2 Update HCP/HCO/MCO

\"\"

3 (additional optional logic) Create/Update HCO with ParentHCO 

\"\"

4 (additional optional logic) Create/Update HCP with AffiliatedHCO&Relation

\"\"

5 (additional optional logic) Create/Update HCO with ParentHCO 



\"\"


6 (additional optional logic) Create/Update HCP with source crosswalk replace 

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
REST call | Manager: POST/PATCH /hco /hcp /mco | create specific objects in MDM system | API synchronous requests - realtime

Dependent components

Component | Usage
Manager | create/update Entities in MDM systems
API Gateway | proxy REST and secure access
Reltio | Reltio MDM system
Nucleus | Nucleus MDM system





" + }, + { + "title": "Create/Update Relations", + "pageID": "164469796", + "pageLink": "/pages/viewpage.action?pageId=164469796", + "content": "

Description

The operation creates or updates Relations. MDM Manager manages the relations in the Reltio MDM system. The user can update a specific relation using a crosswalk to match, or create a new object using unique crosswalks and information about the start and end objects.

The detailed process flow is shown below.

Flow diagram

Create/Update Relation

\"\"

Steps

  1. The client sends HTTP requests to the MDM Manager endpoint.
  2. Kong Gateway receives requests and handles authentication.
  3. If the authentication succeeds, the request is forwarded to the MDM Manager component.
  4. MDM Manager checks user permissions to call createRelation/updateRelation operation and the correctness of the request.
  5. If the user's permissions are correct, MDM Manager proceeds with the create/update operation.
  6. OPTIONALLY: after a successful update (ResponseStatus != failed), relations are cached in MongoDB; the relations are then reused by the ReferenceAttributeEnrichment Service (currently configured for the GBLUS ONEKEY Affiliations). This is required to enrich these relations on the HCP/HCO objects during the update, which prevents losing reference attributes during the HCP create operation.
  7. OPTIONALLY: the PATCH operation adds the PARTIAL_OVERRIDE header to the Reltio request, switching it to a partial update operation.
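The POST-versus-PATCH distinction in the steps above can be sketched as a header-building helper. The exact header name and value shape sent to Reltio are assumptions here; only the PARTIAL_OVERRIDE keyword comes from the flow description:

```python
def build_relation_headers(http_method):
    """Headers for the Manager -> Reltio relation call.

    PATCH adds the PARTIAL_OVERRIDE header (exact header shape is
    an assumption), switching Reltio to a partial update operation;
    POST performs a full create/update without it.
    """
    headers = {"Content-Type": "application/json"}
    if http_method == "PATCH":
        headers["PARTIAL_OVERRIDE"] = "true"
    return headers
```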


Triggers

Trigger action | Component | Action | Default time
REST call | Manager: POST/PATCH /relations | creates or updates the Relations in MDM system | API synchronous requests - realtime

Dependent components

Component | Usage
Manager | creates or updates the Relations in MDM system
" + }, + { + "title": "Create/Update/Delete tags", + "pageID": "172295228", + "pageLink": "/pages/viewpage.action?pageId=172295228", + "content": "

The REST interfaces exposed through the MDM Manager component are used by clients to create, update or delete tags assigned to entity objects. The difference between create and update is that tags are added, and if the option returnObjects is set to true, all previously added and new tags are returned. The delete action removes a single tag.
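The tag semantics above can be sketched as set operations; the function names are illustrative, not the API's:

```python
def upsert_tags(existing, new_tags, return_objects=False):
    """Add tags to an entity's tag set.

    With return_objects=True the full resulting set (previously
    added plus new tags) is returned, mirroring the returnObjects
    option described above.
    """
    merged = set(existing) | set(new_tags)
    return merged if return_objects else None

def delete_tag(existing, tag):
    """The delete action removes exactly one tag."""
    return set(existing) - {tag}
```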

The diagrams below present a sequence of steps in processing client calls.

Flow diagram

  1. Create tag
  2. Update tag
  3. Delete tag


Steps

Triggers

Trigger action | Component | Action | Default time
REST call | Manager: POST/PATCH/DELETE /entityTags | create specific objects in MDM system | API synchronous requests - realtime

Dependent components

Component | Usage
Manager | create/update/delete Entity Tags in MDM systems
API Gateway | proxy REST and secure access
Reltio | Reltio MDM system



" + }, + { + "title": "DCR flows", + "pageID": "415205424", + "pageLink": "/display/GMDM/DCR+flows", + "content": "
\n
\n
\n
\n

Overview

DCR (Data Change Request) process helps to improve existing data in source systems. A proposal for change is created by a source system as a DCR object (sometimes also called VR - Validation Request), which is usually routed by MDM HUB to DS (Data Stewards) either in Reltio or in third-party validators (OneKey, Veeva OpenData). The response is provided twofold:

  • response for specific DCR - metadata
  • profile data update as a direct effect of a DCR processing - payload


General DCR process flow

High level solution architecture for DCR flow


\"\"

Source: Lucid



\n
\n
\n
\n
\n
\n

Solution for OneKey (OK)

\"\"

\n
\n
\n
\n

Solution for Veeva OpenData (VOD)

\"\"

\n
\n
\n
\n
\n
\n

Architecture highlights

  • Actors involved: PforceRX, Reltio, HUB, OneKey
  • Key components: DCR Service 2 (second version) for AMER, EMEA, APAC, US tenants
  • Process details:
    • DCRs are created directly by PforceRx using DCR's HUB API
    • PforceRx checks for DCR status updates every 24h → finds out which DCRs have been updated (since the last check 24h ago) and then pulls details from each one with /dcr/_status 
    • Integration with OneKey is realized by APIs - DCRs are created with /vr/submit and their status is verified every 8h with /vr/trace
    • Data profile updates (payload) are being delivered via CSV and S3 and ETLed (VOD batch) to Reltio with COMPANY's help
    • DCRRegistry & DCRRegistryVeeva collections are used in Mongo for tracking purposes




\n
\n
\n
\n

Architecture highlights

  • Actors involved: Data Stewards in Reltio, HUB, Veeva OpenData (VOD)
  • Key components: DCR Service 2 (second version) for AMER, EMEA, APAC, US tenants
  • Process details:
    • DCRs are created by Data Stewards (DSRs) in Reltio via Suggest / Send to 3rd Party Validation - input for DSRs is being provided by reports from PforceRx
    • Communication with Veeva via S3<>SFTP and synchronization GMTF jobs. DCRs are sent and received in batches every 24h 
    • DCRs metadata is being exchanged via multiple CSV files ZIPed
    • Data profile updates (payload) are being delivered via CSV and S3 and ETLed (VOD batch) to Reltio with COMPANY's help  
    • DCRRegistry & DCRRegistryONEKEY collections are used in Mongo for tracking purposes
\n
\n
\n
\n
\n
\n

Solution for IQVIA Highlander (HL) 

\"\"


\n
\n
\n
\n

Solution for OneKey on GBLUS - sources ICEU, Engage, GRV

\n
\n
\n
\n
\n
\n

Architecture highlights

  • Actors involved: Veeva on behalf of PforceRX, Reltio, HUB, IQVIA wrapper
  • Key components: DCR Service (first version) for GBLUS tenant
  • Process details:
    • DCRs are created by sending CSV requests by Veeva - based on information acquired from PforceRx
    • Integration HUB <> Veeva → via files and S3<>SFTP. HUB confirms DCR creation by returning file reports back to Veeva
    • Integration HUB <> IQVIA wrapper → via files and S3
    • HUB is responsible for translation of Veeva DCR CSV format to IQVIA CSV wrapper which then creates DCR in Reltio
    • Data Stewards approve or reject the DCRs in Reltio which updates data profiles accordingly. 
    • PforceRx receives update about changes in Reltio
    • DCRRequest collection is used in Mongo for tracking purposes
\n
\n
\n
\n

Architecture highlights (draft)

  • Actors involved: HUB, IQVIA wrapper
  • Key components: DCR Service (first version) for GBLUS tenant
  • Process details:
    • POST events from sources are captured - some of them are translated to direct DCRs, some of them are gathered and then pushed via flat files to be transformed into DCRs to OneKey

 


\n
\n
\n
" + }, + { + "title": "DCR generation process (China DCR)", + "pageID": "164470008", + "pageLink": "/pages/viewpage.action?pageId=164470008", + "content": "

The gateway supports the following DCR types:


DCR generation processes are handled in two steps:

  1. During HCP modification – if initial activation criteria are met, then a DCR request is generated and published to KAFKA <env>-gw-dcr-requests topic.
  2. In the next step, the internal Camel route DCRServiceRoute reads requests generated from the topic and processes as follows:
    1. checks if the time specified by delayPrcInSeconds has elapsed since request generation – this makes sure that the Reltio batch match process has finished and newly inserted profiles have merged with existing ones.
    2. checks if an entity, that caused DCR generation, still exists;
    3. checks full activation criteria (table below) on the latest state of the target entity, if criteria are not met then the request is closed
    4. creates DCR in Reltio
    5. updates external info
    6. creates COMPANYDataChangeRequest entity in Reltio for tracking and exporting purposes.
  3. Created DCRs are exported by the Informatica ETL process managed by IQVIA
  4. DCR applying processes (reject/approve actions) are executed through the MDM HUB DCR response API by an external app managed by the MDE team.
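The delay gate in step 2a can be condensed into a small check. The field name delayPrcInSeconds comes from the text; the helper itself is an illustrative sketch:

```python
import time

def ready_to_process(request_created_at, delay_prc_in_seconds, now=None):
    """A generated DCR request is processed only after
    delayPrcInSeconds has elapsed since its creation, giving the
    Reltio batch match process time to merge newly inserted
    profiles with existing ones."""
    if now is None:
        now = time.time()
    return now - request_created_at >= delay_prc_in_seconds
```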


The table below presents DCR activation criteria handled by system.

Table 9. DCR activation criteria





Rule | NewHCP | MultiAffiliation | NewHCOL2 | NewHCOL1
Country in | CN | CN | CN | CN
Source in | GRV | GRV, MDE, FACE, EVR, CN3RDPARTY | GRV, FACE, CN3RDPARTY | GRV, FACE, CN3RDPARTY
ValidationStatus in | pending, partial-validated (or, if merged: OV: notvalidated, GRV nonOV: pending/partial-validated) | validated, pending | validated, pending | validated, pending
SpeakerStatus in | enabled, null | enabled, null | enabled, null | enabled, null
Workplaces count | - | >1 | - | -
Hospital found | true | true | false | true
Department found | true | true | - | false
Similar DCR created in the past | false | false | false | false


Update: December 2021

\"\"

" + }, + { + "title": "HL DCR [Decommissioned April 2025]", + "pageID": "164470085", + "pageLink": "/pages/viewpage.action?pageId=164470085", + "content": "

Contacts

Vendor | Contact
PforceRX | DL-PForceRx-SUPPORT@COMPANY.com
IQVIA (DCR Wrapper) | COMPANY-MDM-Support@iqvia.com


As a part of the Highlander project, the DCR processing flow was created, which realizes the following scenarios:

  1. Update HCP account details i.e. specialty, address, name (different sources of elements),
  2. Add new HCP account with primary affiliation to an existing organization,
  3. Add new HCP account with a new business account,
  4. Update HCP and add affiliation to a new HCO,
  5. Update HCP account details and remove existing details i.e. birth date, national id, …,
  6. Update HCP account and add new non primary affiliation to an existing organization,
  7. Update HCP account and add new primary affiliation to an existing organization,
  8. Update HCP account inactivate primary affiliation. Person account has more than 1 affiliation,
  9. Update HCP account inactivate non primary affiliation. Person account has more than 1 affiliation,
  10. Inactivate HCP account,
  11. Update HCP and add a private address,
  12. Update HCP and update existing private address,
  13. Update HCP and inactivate a private address,
  14. Update HCO details i.e. address, name (different sources of elements),
  15. Add new HCO account,
  16. Update HCO and remove details,
  17. Inactivate HCO account,
  18. Update HCO address,
  19. Update HCO and add new address,
  20. Update HCO and inactivate address,
  21. Update HCP's existing affiliation.


The above cases have been aggregated into six generic types in the internal HUB model:

  1. NEW_HCP_GENERIC - represents cases when a new HCP object is created with or without an affiliation to an HCO,
  2. UPDATE_HCP_GENERIC - aggregates cases when an existing HCP object is changed,
  3. DELETE_HCP_GENERIC - represents the case when an HCP is deactivated,
  4. NEW_HCO_GENERIC - aggregates scenarios when a new HCO object is created with or without affiliations to a parent HCO,
  5. UPDATE_HCO_GENERIC - represents cases when an existing HCO object is changed,
  6. DELETE_HCO_GENERIC - represents the case when an HCO is deactivated.


General Process Overview

\"\"


Process steps:

  1. Veeva uploads the DCR request file to the FTP location,
  2. PforceRx Channel component downloads the DCR request file,
  3. PforceRx Channel validates and maps each DCR request to the internal model,
  4. PforceRx Channel sends the request to DCR Service,
  5. DCR Service processes the request: validating, enriching and mapping to the Iqvia DCR Wrapper format,
  6. PforceRx Channel prepares the report file containing the technical status of DCR processing - at this time, the report contains only requests which don't pass validation,
  7. A scheduled process in DCR Service prepares the Wrapper requests file and uploads it to the S3 location,
  8. DCR Wrapper processes the file: creating DCRs in Reltio or rejecting the request due to errors. After that the response file is published to the S3 location,
  9. DCR Service downloads the response and updates the DCR statuses,
  10. A scheduled process in PforceRx Channel gets DCR requests and prepares the next technical report - at this time the report has the technical status which comes from DCR Wrapper,
  11. DCRs that were created by DCR Wrapper are reviewed by Data Stewards. A DCR can be accepted or rejected,
  12. After accepting or rejecting a DCR, Reltio publishes a message about this event,
  13. DCR Service consumes the message and updates the DCR status,
  14. PforceRx Channel gets DCR data to prepare a response file. The response file contains the final status of DCR processing in Reltio.


Veeva DCR request file specification

The specification is available at following location:

https://COMPANY-my.sharepoint.com/:x:/r/personal/chinj2_COMPANY_com/Documents/Mig%20In-Prog/Highlander/PMO/09%20Integration/LATAM%20Reltio%20DCR/DCR_Reltio_T144_Field_Mapping_Reltio.xlsx


DCR Wrapper request file specification

The specification is available at following link:

https://COMPANY.sharepoint.com/:x:/r/sites/HLDCR/Shared%20Documents/ReltioCloudMDM_LATAM_Highlander_DCR_DID_COMPANY__DEVMapping_v2.1.xlsx





" + }, + { + "title": "OK DCR flows (GBLUS)", + "pageID": "164469877", + "pageLink": "/pages/viewpage.action?pageId=164469877", + "content": "

Description

The process is responsible for creating DCRs in Reltio and starting the Change Request Workflow for singleton entities created in Reltio. During this process, communication with the IQVIA OneKey VR API is established. The SubmitVR operation is executed to create a new Validation Request; the TraceVR operation is executed to check the status of the VR in OneKey. All DCRs are saved in a dedicated collection in the HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. Some changes can be suggested by the DS using the "Suggest" operation in Reltio and the "Send to Third Party Validation" button; the process "Data Steward OK Validation Request" processes these changes and sends them to the OneKey service. 

The process is divided into 4 sections:

  1. Submit Validation Request
  2. Trace Validation Request
  3. Data Steward Response
  4. Data Steward OK Validation Request

The below diagram presents an overview of the entire process. Detailed descriptions are available in separate subpages.

Flow diagram

\"\"

Model diagram

\"\"

Steps

Triggers

Described in the separate sub-pages for each process.

Dependent components

Described in the separate sub-pages for each process.

" + }, + { + "title": "Data Steward OK Validation Request", + "pageID": "172306908", + "pageLink": "/display/GMDM/Data+Steward+OK+Validation+Request", + "content": "

Description

The process handles DS-suggested changes based on the Change Request events received from Reltio (publishing) that are marked with the ThirdPartyValidation flag. The "suggested" changes are retrieved using the "preview" method and sent to IQVIA OneKey or Veeva OpenData for validation. After a successful submitVR response, HUB closes/rejects the existing DCR in Reltio and additionally creates a new DCR object with a relation to the entity in Reltio for tracking and status purposes. 

Because of an ONEKEY interface limitation, removal of attributes is sent to IQVIA as a comment.
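Since the ONEKEY interface cannot express attribute removals directly, they travel as free text. A sketch of that translation follows; the function name and comment wording are assumptions, only the removal-as-comment behaviour comes from the text above:

```python
def removals_as_comment(before, after):
    """Attributes present before the suggested change but absent
    (or emptied) afterwards are listed in a free-text comment for
    IQVIA, because ONEKEY cannot represent removals directly."""
    removed = [k for k, v in before.items() if v and not after.get(k)]
    if not removed:
        return ""
    return "Please remove attribute(s): " + ", ".join(sorted(removed))
```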

Flow diagram

\"\"


Steps


 ONEKEY Comparator (suggested changes)

HCP

Reltio Attribute | ONEKEY attribute | mandatory type | attribute type
FirstName | individual.firstName | optional | simple value
LastName | individual.lastName | mandatory | simple value
Country | isoCod2 | mandatory | simple value
Gender | individual.genderCode | optional | simple lookup
Prefix | individual.prefixNameCode | optional | simple lookup
Title | individual.titleCode | optional | simple lookup
MiddleName | individual.middleName | optional | simple value
YoB | individual.birthYear | optional | simple value
Dob | individual.birthDay | optional | simple value
TypeCode | individual.typeCode | optional | simple lookup
PreferredLanguage | individual.languageEid | optional | simple value
WebsiteURL | individual.website | optional | simple value
Identifier value 1 | individual.externalId1 | optional | simple value
Identifier value 2 | individual.externalId2 | optional | simple value
Addresses[] | address.country, address.city, address.addressLine1, address.addressLine2, address.Zip5 | mandatory | complex (nested)
Specialities[] | individual.speciality1 / 2 / 3 | optional | complex (nested)
Phone[] | individual.phone | optional | complex (nested)
Email[] | individual.email | optional | complex (nested)
Contact Affiliations[] | workplace.usualName, workplace.officialName, workplace.workplaceEid | optional | Contact Affiliation
ONEKEY crosswalk | individual.individualEid | mandatory | ID

HCO

Reltio Attribute | ONEKEY attribute | mandatory type | attribute type
Name | workplace.usualName, workplace.officialName | optional | simple value
Country | isoCod2 | mandatory | simple value
OtherNames.Name | workplace.usualName2 | optional | complex (nested)
TypeCode | workplace.typeCode | optional | simple lookup
WebsiteURL | workplace.website | optional | complex (nested)
Addresses[] | address.country, address.city, address.addressLine1, address.addressLine2, address.Zip5 | mandatory | complex (nested)
Specialities[] | workplace.speciality1 / 2 / 3 | optional | complex (nested)
Phone[] (!FAX) | workplace.telephone | optional | complex (nested)
Phone[] (FAX) | workplace.fax | optional | complex (nested)
Email[] | workplace.email | optional | complex (nested)
ONEKEY crosswalk | workplace.workplaceEid | mandatory | ID



Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-onekey-dcr-service:ChangeRequestStream | process publisher full change request events in the stream that contain ThirdPartyValidation flag | realtime: events stream processing

Dependent components

Component | Usage
OK DCR Service | Main component with flow implementation
Veeva DCR Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Hub Store | DCR and Entities Cache
" + }, + { + "title": "Data Steward Response", + "pageID": "164469841", + "pageLink": "/display/GMDM/Data+Steward+Response", + "content": "

Description

The process updates the DCRs based on the Change Request events received from Reltio (publishing). Based on the Data Steward decision, the state attribute contains the relevant information to update the DCR status.
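The update described above amounts to mapping the Change Request state attribute onto a DCR status. The state and status names in this sketch are assumptions for illustration:

```python
# Hypothetical mapping from the Reltio change-request "state"
# attribute to the DCR status stored in the HUB registry.
STATE_TO_DCR_STATUS = {
    "APPROVED": "ACCEPTED",
    "REJECTED": "REJECTED",
}

def dcr_status_from_event(event):
    """Derive the DCR status from a Data Steward decision event;
    anything unrecognised leaves the DCR open."""
    return STATE_TO_DCR_STATUS.get(event.get("state", ""), "OPEN")
```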

Flow diagram


\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-onekey-dcr-service:OneKeyResponseStream, mdm-veeva-dcr-service:veevaResponseStream | process publisher full change request events in stream | realtime: events stream processing

Dependent components

Component | Usage
OK DCR Service | Main component with flow implementation
Veeva DCR Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Hub Store | DCR and Entities Cache
" + }, + { + "title": "Submit Validation Request", + "pageID": "164469875", + "pageLink": "/display/GMDM/Submit+Validation+Request", + "content": "

Description

The process of submitting new validation requests to the OneKey service based on the Reltio change events aggregated in time windows. During this process, new DCRs are created in Reltio.
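The time-window aggregation of Reltio change events (a 4h window per the triggers table) can be sketched as bucketing events by window start; deduplicating entity URIs per window is an assumption of this sketch, so that one VR is submitted per entity:

```python
from collections import defaultdict

WINDOW_SECONDS = 4 * 3600  # 4h aggregation window, per the triggers table

def aggregate_events(events):
    """Group (timestamp, entity_uri) change events into 4h buckets,
    keeping one entry per entity URI within each window."""
    buckets = defaultdict(set)
    for ts, entity_uri in events:
        window_start = ts - (ts % WINDOW_SECONDS)
        buckets[window_start].add(entity_uri)
    return dict(buckets)
```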

Flow diagram


\"\"

Steps


Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-onekey-dcr-service:OneKeyStream | process publisher simple events in stream | events stream processing with 4h time window events aggregation
OUT API request | one-key-client:OneKeyIntegrationService.submitValidation | submit VR request to OneKey | invokes API request for each accepted event

Dependent components

Component | Usage
OK DCR Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Reltio Adapter for getMatches and create operations
OneKey Adapter | Submits Validation Request
Hub Store | DCR and Entities Cache

Mappings

Reltio → OK mapping file: onkey_mappings.xlsx

OK mandatory / required fields: VR - Business Fields Requirements(COMPANY).xlsx

OneKey Documentation

\"\"




" + }, + { + "title": "Trace Validation Request", + "pageID": "164469983", + "pageLink": "/display/GMDM/Trace+Validation+Request", + "content": "

Description

The process traces VR changes in OneKey. During this process, the HUB DCR Cache is queried every <T> hours for SENT DCRs and the VR status is checked using the OneKey web service. After verification, the DCR is updated in Reltio, or a new Workflow is started in Reltio for manual Data Steward validation. 
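One periodic trace pass can be sketched as follows; the record fields and status names are assumptions, while the "query SENT DCRs, check each via OneKey" shape comes from the description above:

```python
def trace_pass(dcr_records, trace_vr):
    """For every DCR still in SENT state, ask OneKey (via the
    trace_vr callable, standing in for the TraceVR operation) for
    the current VR status, and return the records that changed."""
    updated = []
    for dcr in dcr_records:
        if dcr["status"] != "SENT":
            continue  # only SENT DCRs are traced
        vr_status = trace_vr(dcr["vrId"])
        if vr_status != dcr["status"]:
            updated.append({**dcr, "status": vr_status})
    return updated
```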

Flow diagram


\"\"


Steps



Triggers

Trigger action | Component | Action | Default time
IN Timer (cron) | mdm-onekey-dcr-service:TraceVRService | query mongo to get all SENT DCR's related to OK_VR process | every <T> hour
OUT API request | one-key-client:OneKeyIntegrationService.traceValidation | trace VR request to OneKey | invokes API request for each DCR

Dependent components

Component | Usage
OK DCR Service | Main component with flow implementation
Manager | Reltio Adapter for GET /changeRequests and POST /workflow/_initiate operations
OneKey Adapter | TraceValidation Request
Hub Store | DCR and Entities Cache



" + }, + { + "title": "PforceRx DCR flows", + "pageID": "209949183", + "pageLink": "/display/GMDM/PforceRx+DCR+flows", + "content": "

Description

MDM HUB exposes a REST API to create and check the status of DCRs. The process is responsible for creating DCRs in Reltio and starting the Change Request Workflow for DCRs created in Reltio, or for creating the DCRs (submitVR operation) in ONEKEY. DCR requests can be routed to an external MDM HUB instance handling the requested country; the action is transparent to the caller. During this process, communication with the IQVIA OneKey VR API / Reltio API is established. The routing decision depends on the market, operation type, or changed profile attributes.

Reltio API: the createEntity (with ChangeRequest) operation is executed to create a completely new entity in a new Change Request in Reltio. The attributesUpdate (with ChangeRequest) operation is executed after calculating the specific changes on complex or simple attributes of an existing entity - this also creates a new Change Request. The Start Workflow operation is requested at the end; it starts the Workflow for the DCR in Reltio so the change requests appear in the Reltio Inbox for Data Steward review.

IQVIA API: SubmitVR operation is executed to create a new Validation Request. The TraceVR operation is executed to check the status of the VR in OneKey.

All DCRs are saved in a dedicated collection in the HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. The DCR statuses are updated by consuming events generated by Reltio or by a periodic query of open DCRs in OneKey.

The Data Steward can decide to route a DCR to IQVIA as well - some changes can be suggested by the DS using the "Suggest" operation in Reltio and the "Send to Third Party Validation" button; the process "Data Steward OK Validation Request" processes these changes and sends them to the OneKey service. 

The below diagram presents an overview of the entire process. Detailed descriptions are available in separate subpages.

API doc URL: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-dev/swagger-ui/index.html

Flow diagram

DCR Service High-Level Architecture

\"\"

DCR HUB Logical Architecture

\"\"


Model diagram


\"\"

Flows:

Triggers

Described in the separate sub-pages for each process.

Dependent components

Described in the separate sub-pages for each process.


" + }, + { + "title": "Create DCR", + "pageID": "209949185", + "pageLink": "/display/GMDM/Create+DCR", + "content": "

Description

The process creates change requests received from the PforceRx Client and sends the DCR to the specified target service - Reltio, OneKey or Veeva OpenData (VOD). The DCR is created in the system and then processed by the data stewards. The status is asynchronously updated by the HUB processes. The Client identifies the DCR using a unique extDCRRequestId value; using this value the Client can check the status of the DCR (Get DCR status). 

Flow diagram

\"\"

Source: Lucid

\"\"

Source: Lucid


DCR Service component perspective


Steps


  1. Clients execute the API POST /dcr request
  2. Kong receives requests and handles authentication
  3. If the authentication succeeds the request is forwarded to the dcr-service-2 component,
  4. DCR Service checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executed:
    1. Parse and validate the dcr request. The validation logic checks the following: 
      1. Check if the list of DCRRequests contains unique extDCRRequestId.
        1. Duplicate requests will be rejected with the error message - "Found duplicated request(s)"
      2. For each DCRRequest in the input list execute the following checks:
        1. Users can define the following number of entities in the Request:
          1. at least one entity has to be defined, otherwise, the request will be rejected with an error message - "No entities found in the request"
          2. single HCP
          3. single HCO
          4. single HCP with single HCO
          5. two HCOs
        2. Check if the main reference objects exist in Reltio for update and delete action
          1. HCP.refId or HCO.refId; the user has to specify one of:
            1. CrosswalkTargetObjectId - then the entity is retrieved from Reltio using get entity by crosswalk operation
            2. EntityURITargetObjectId - then the entity is retrieved from Reltio using get entity by uri operation
            3. COMPANYCustomerIdTargetObjectId - then the entity is retrieved from Reltio using search operation by the COMPANYGlobalCustomerID
        3. Attributes validation:
          1. Simple attributes - like firstName/lastName, etc.
            1. for update action on the main object:
              1. if the input parameter is defined with an empty value - "" - this will result in the removal of the target attribute
              2. if the input parameter is defined with a non-empty value - this will result in the update of the target attribute
          2. Nested attributes - like Specialties/Addresses, etc.
            1. for each attribute, the user has to define the refId to uniquely identify the attribute
              1. For action "update" - if the refId is not found in the target object request will be rejected with a detailed error message 
              2. For action "insert" - the refId is not required - new reference attribute will be added to the target object
        4. Changes validation:
          1. If the validation detected 0 changes (during comparison of applying changes and the target entity) -  the request is rejected with an error message - "No changes detected"
    2. Evaluate dcr service (based on the decision table config)
      1. The following decision table is defined to choose the target service
        1. LIST OF the following combination of attributes:

          attribute | description
          userName | the user name that executes the request
          sourceName | the source name of the Main object
          country | the country defined in the request
          operationType | the operation type for the Main object { insert, update, delete }
          affectedAttributes | the list of attributes that the user is changing
          affectedObjects | { HCP, HCO, HCP_HCO }

          RESULT → TargetType {Reltio, OneKey, Veeva}

        2. Each attribute in the configuration is optional. 

        3. The decision table performs the validation based on the input request and the main object - the main object is the HCP; if the HCP is empty, then the decision table checks the HCO. 
        4. The result of the decision table is the TargetType, the routing to the Reltio MDM system, OneKey or Veeva service. 
    3. Execute target service (reltio/onekey/veeva)
      1. Reltio: create DCR method - direct
      2. OneKey: create DCR method (submitVR) - direct
      3. Veeva: create DCR method (storeVR)
    4. Create DCR in Reltio and save DCR in DCR Registry 
      • If the submission is successful then: 
        • DCR entity is created in Reltio and the relation between the processed entity and the DCR entity
          • Reltio source name (crosswalk.type): DCR
          • Reltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)
            • for "create" and "delete" operation the Relation have to be created between objects
            • if this is just the "insert" operation the Relation will be created after the acceptance of the Change Request in Reltio - Reltio: process DCR Change Events
          • DCR entity attributes once sent to OneKey

            DCR entity attributes | Mapping
            DCRID | extDCRRequestId
            EntityURI | the processed entity URI
            VRStatus | "OPEN"
            VRStatusDetail | "SENT_TO_OK"
            CreatedBy | MDM HUB
            SentDate | current time
            CreateDate | current time
            CloseDate | if REJECTED/ACCEPTED -> current time
            dcrType | evaluated based on config:

            dcrTypeRules:
            - type: CR0
              size: 1
              action: insert
              entity: com.COMPANY.mdm.api.dcr2.HCP

            \"\"

          • DCR entity attributes once sent to Veeva

            DCR entity attributes | Mapping
            DCRID | extDCRRequestId
            EntityURI | the processed entity URI
            VRStatus | "OPEN"
            VRStatusDetail | "SENT_TO_VEEVA"
            CreatedBy | MDM HUB
            SentDate | current time
            CreateDate | current time
            CloseDate | if REJECTED/ACCEPTED -> current time
            dcrType | evaluated based on config:

            dcrTypeRules:
            - type: CR0
              size: 1
              action: insert
              entity: com.COMPANY.mdm.api.dcr2.HCP

            \"\"

          • DCR entity attributes once sent to Reltio → action is passed to DS and workflow is started. 

            DCR entity attributes | Mapping
            DCRID | extDCRRequestId
            EntityURI | the processed entity URI
            VRStatus | "OPEN"
            VRStatusDetail | "DS_ACTION_REQUIRED"
            CreatedBy | MDM HUB
            SentDate | current time
            CreateDate | current time
            CloseDate | if REJECTED/ACCEPTED -> current time
            dcrType | evaluated based on config:

            dcrTypeRules:
            - type: CR0
              size: 1
              action: insert
              entity: com.COMPANY.mdm.api.dcr2.HCP

            \"\"

        • Mongo Update: DCRRequest.status is updated to SENT with OneKey or Veeva request and response details, or to DS_ACTION_REQUIRED with all Reltio details
      • Otherwise FAILED status is recorded in DCRRequest with a detailed error message.
        • Mongo Update:  DCRRequest.status is updated to FAILED with all required attributes, request, and exception response details 
    5. Initialize Workflow in Reltio (only requests that TargetType is Reltio)
      1. POST /workflow/_initiate operation is invoked to init new Workflow in Reltio

        Workflow attributes | Mapping
        changeRequest.uri | ChangeRequest Reltio URI
        changeRequest.changes | Entity URI
    6. Then the Auto close logic is invoked to evaluate whether the DCR request meets the conditions to be auto-accepted or auto-rejected. The logic is based on the decision table PreCloseConfig. If DCRRequest.country is contained in PreCloseConfig.acceptCountries or PreCloseConfig.rejectCountries, then the DCR is accepted or rejected respectively. 
    7. return DCRResponse to the Client - during the flow, a DCRResponse may be returned to the Client with a specific errorCode or requestStatus. The description of all response codes is presented on this page: Get DCR status
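The decision-table evaluation in step 4b can be sketched as a first-match rule lookup in which every rule attribute is optional and an absent attribute matches anything. The rule contents below are illustrative, not the real configuration:

```python
# Illustrative decision-table rules; each attribute is optional and
# an absent attribute matches any request, as described above.
RULES = [
    {"country": "US", "operationType": "insert", "target": "Veeva"},
    {"country": "FR", "target": "OneKey"},
    {"target": "Reltio"},  # catch-all default
]

def evaluate_target(request):
    """Return the TargetType (Reltio, OneKey or Veeva) for the
    first rule whose defined attributes all match the request."""
    for rule in RULES:
        if all(request.get(k) == v for k, v in rule.items() if k != "target"):
            return rule["target"]
    return "Reltio"
```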

Triggers

Trigger actionComponentActionDefault time
REST callDCR Service: POST /dcrcreate DCRs in the Reltio, OneKey or Veeva systemAPI synchronous requests - realtime


Dependent components

ComponentUsage
DCR ServiceMain component with flow implementation
OK DCR ServiceOneKey Adapter - API operations
Veeva DCR ServiceVeeva Adapter - API operations and S3/SFTP communication 
ManagerReltio Adapter - API operations
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "DCR state change", + "pageID": "218438617", + "pageLink": "/display/GMDM/DCR+state+change", + "content": "

Description

The following diagram represents the DCR state changes. The DCR object state is saved in HUB and in the Reltio DCR entity object. The state of the DCR changes based on the Reltio/IQVIA/Veeva Data Steward action.

Flow diagram

\"\"

Steps

  1. DCR is created (OPEN)  - Create DCR
    1. DCR is sent to Reltio, OneKey or Veeva
      1. When sent to Reltio
        1. Pre Close logic is invoked to auto accept (PRE_ACCEPT) or auto reject (PRE_REJECT) DCR
        2. Reltio Data Steward processes the DCR - Reltio: process DCR Change Events
      2. OneKey Data Steward processes the DCR - OneKey: process DCR Change Events
      3. Veeva Data Steward processes the DCR - Veeva: process DCR Change Events
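A minimal sketch of the transitions above — status names are taken from this page, while the transition map itself is illustrative:

```java
import java.util.Map;
import java.util.Set;

// Illustrative DCR state machine: an OPEN DCR is either auto-closed by the
// Pre Close logic (PRE_ACCEPT / PRE_REJECT) or closed by a Data Steward
// decision (ACCEPTED / REJECTED). Closed states are terminal here.
public class DcrStateMachine {
    private static final Map<String, Set<String>> TRANSITIONS = Map.of(
            "OPEN", Set.of("PRE_ACCEPT", "PRE_REJECT", "ACCEPTED", "REJECTED"));

    public static boolean canTransition(String from, String to) {
        return TRANSITIONS.getOrDefault(from, Set.of()).contains(to);
    }
}
```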


Data Steward DCR status change perspective

\"\"

Transaction Log

There are the following main assumptions regarding the transaction log in DCR service: 


Log appenders:


Triggers

Trigger actionComponentActionDefault time
REST callDCR Service: POST /dcrcreate DCRs in the Reltio system or in OneKeyAPI synchronous requests - realtime
IN Events incoming dcr-service-2:DCRReltioResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing 
IN Events incoming dcr-service-2:DCROneKeyResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing 
IN Events incoming dcr-service-2:DCRVeevaResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing 


Dependent components

ComponentUsage
DCR ServiceMain component with flow implementation
OK DCR ServiceOneKey Adapter  - API operations
Veeva DCR ServiceVeeva Adapter  - API operations
ManagerReltio Adapter  - API operations
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "Get DCR status", + "pageID": "209949187", + "pageLink": "/display/GMDM/Get+DCR+status", + "content": "

Description

The client creates DCRs in Reltio, OneKey or Veeva OpenData using the Create DCR operation. The status is then updated asynchronously in the DCR Registry. This operation retrieves the current status of the DCRs whose update date is between the 'updateFrom' and 'updateTo' input parameters. PforceRx first asks which DCRs have changed since the last check (usually 24h) and then iterates over each DCR to get detailed info.
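The delta-polling pattern can be sketched as building the status query for the last 24 hours; the path and parameter names follow this page, while the helper itself is hypothetical:

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical helper building the GET /dcr/_status query for a delta poll:
// 'updateFrom' is the time of the previous check (usually now minus 24h) and
// 'updateTo' is the current time.
public class DcrStatusPoll {
    public static String buildStatusUrl(Instant lastCheck, Instant now, int limit, int offset) {
        return "/dcr/_status?updateFrom=" + lastCheck
                + "&updateTo=" + now
                + "&limit=" + limit
                + "&offset=" + offset;
    }

    // 24h lookback with the recommended limit of 25 and no offset.
    public static String dailyPollUrl(Instant now) {
        return buildStatusUrl(now.minus(Duration.ofHours(24)), now, 25, 0);
    }
}
```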

Flow diagram

\"\",

\"\"

Source: Lucid



Dependent flows:
  1. The DCRRegistry is enriched by the DCR events that are generated by Reltio - the flow description is here - Reltio: process DCR Change Events
  2. The DCRRegistry is enriched by the DCR events generated in OneKey DCR service component - after submitVR operation is invoked to ONEKEY, each DCR is traced asynchronously in this process - OneKey: process DCR Change Events
  3. The DCRRegistry is enriched by the DCR events generated in Veeva OpenData DCR service component - after submitVR operation is invoked to VEEVA, each DCR is traced asynchronously in this process - Veeva: process DCR Change Events

Steps

Status

There are the following request statuses that users may receive during Create DCR operation or during checking the updated status using GET /dcr/_status operation described below:

RequestStatusDCRStatus Internal Cache statusDescription
REQUEST_ACCEPTEDCREATEDSENT_TO_OKDCR was sent to the ONEKEY system for validation and pending the processing by Data Steward in the system
REQUEST_ACCEPTEDCREATEDSENT_TO_VEEVADCR was sent to the VEEVA system for validation and pending the processing by Data Steward in the system
REQUEST_ACCEPTEDCREATEDDS_ACTION_REQUIREDDCR is pending Data Steward validation in Reltio, waiting for approval or rejection
REQUEST_ACCEPTEDCREATEDOK_NOT_FOUNDUsed when ONEKEY profile was not found after X retries
REQUEST_ACCEPTEDCREATEDVEEVA_NOT_FOUNDUsed when VEEVA profile was not found after X retries
REQUEST_ACCEPTEDCREATEDWAITING_FOR_ETL_DATA_LOADUsed when waiting for actual data profile load from 3rd Party to appear in Reltio
REQUEST_ACCEPTEDACCEPTEDACCEPTEDData Steward accepted the DCR, changes were applied
REQUEST_ACCEPTEDACCEPTEDPRE_ACCEPTEDPreClose logic was invoked and automatically accepted DCR according to decision table in PreCloseConfig
REQUEST_REJECTEDREJECTED REJECTEDData Steward rejected the changes presented in the Change Request
REQUEST_REJECTEDREJECTED PRE_REJECTEDPreClose logic was invoked and automatically rejected DCR according to decision table in PreCloseConfig
REQUEST_FAILED-FAILEDDCR request failed due to a validation error, an unexpected error, etc. - details in the errorCode and errorMessage
Error codes:

There are the following classes of exception that users may receive during Create DCR operation:

ClasserrorCodeDescriptionHTTP code
1DUPLICATE_REQUESTrequest rejected - extDCRRequestId  is registered - this is a duplicate request403
2NO_CHANGES_DETECTEDentities are the same (request is the same) - no changes400
3VALIDATION_ERRORref object does not exist (not able to find HCP/HCO target object)404
3VALIDATION_ERRORref attribute does not exist - not able to find nested attribute in the target object400
3VALIDATION_ERRORwrong number of HCP/HCO entities in the input request400


  1. Clients execute the API GET/dcr/_status request
  2. Kong receives requests and handles authentication
  3. If the authentication succeeds, the request is forwarded to the dcr-service-2 component.
  4. DCR Service checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executed
    1. Query on mongo is executed to get all DCRs matching input parameters:
      1. updateFrom (date-time) - DCR last update from - DCRRequestDetails.status.changeDate
      2. updateTo (date-time) - DCR last update to - DCRRequestDetails.status.changeDate
      3. limit (int) the maximum number of results returned through API - the recommended value is 25. The max value for a single request is 50.
      4. offset(int) - result offset - the parameter used to query through results that exceeded the limit. 
    2. Resulted values are aggregated and returned to the Client.
    3. The client receives the List<DCRResponse> body.
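The limit/offset handling in step 4a can be sketched as below; the default and clamping behaviour are assumptions derived from the recommended and maximum values above:

```java
// Sketch of the paging parameters for GET /dcr/_status: the recommended limit
// is 25 and the maximum for a single request is 50; offset pages through
// results that exceed the limit. The clamping rule itself is an assumption.
public class DcrStatusPaging {
    static final int DEFAULT_LIMIT = 25;
    static final int MAX_LIMIT = 50;

    public static int effectiveLimit(int requested) {
        if (requested <= 0) return DEFAULT_LIMIT; // fall back to the recommended value
        return Math.min(requested, MAX_LIMIT);    // cap at the single-request maximum
    }

    /** Offset of the next page, given the current offset and page size. */
    public static int nextOffset(int offset, int limit) {
        return offset + limit;
    }
}
```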

Triggers

Trigger actionComponentActionDefault time
REST callDCR Service: GET/dcr/_statusget status of created DCRs. Limit the results using query parameters like dates and offsetAPI synchronous requests - realtime


Dependent components

ComponentUsage
DCR ServiceMain component with flow implementation
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "OneKey: create DCR method (submitVR) - direct", + "pageID": "209949294", + "pageLink": "/display/GMDM/OneKey%3A+create+DCR+method+%28submitVR%29+-+direct", + "content": "

Description

Rest API method exposed in the OK DCR Service component responsible for submitting the VR to OneKey

Flow diagram

\"\"

Steps


  1. Receive the API request
  2. Validate - check if the OneKey crosswalk exists when there is an update on the profile; otherwise reject the request
  3. The DCR is mapped to OK VR Request and it's submitted using API REST method POST /vr/submit. (mapping described below)
    1. If the submission is successful then:
      • DCRRequest is updated to SENT_TO_OK with OK request and response details. The DCRRegistryONEKEY collection is saved for tracing purposes. The process that reads and checks ONEKEY VRs is described here: OneKey: generate DCR Change Events (traceVR)
    2. Otherwise FAILED status is recorded and the response is returned with an OK error response

Mapping


VR - Business Fields Requirements_UK.xlsx - file that contains VR UK requirements and mapping to IQVIA model


HUB

ONEKEY

attributesattributescodes
mandatoryattributesvalues

HCO













YentityTypeWORKPLACE





Yvalidation.clientRequestIdHUB_GENERATED_ID





Yvalidation.processQ





Yvalidation.requestDate1970-01-01T00:00Z





Yvalidation.callDate1970-01-01T00:00Z
attributes



Yvalidation.requestProcessI

extDCRComment



validation.requestComment









country


YisoCod2

















reference EntitycrosswalkONEKEY

workplace.workplaceEid









name



workplace.usualName






workplace.officialName

otherHCOAffiliationsparentUsualName


workplace.parentUsualName

subTypeCode

COTFacilityType

(TET.W.*)



workplace.typeCode

typeCodeno value in PFORCERX

HCOSubType

(LEX.W.*)



workplace.activityLocationCode

addresses







sourceAddressId


N/A


addressType


N/A


addressLine1


address.longLabel


addressLine2


address.longLabel2


addressLine3


N/A


stateProvince

AddressState

(DPT.W.*)



address.countyCode


city

Yaddress.city


zip


address.longPostalCode


country

Yaddress.country


rank


get address with rank=1 

emails







type


N/A


email


workplace.email


rank


get email with rank=1 

otherHCOAffiliations







type


N/A


rank


get affiliation with rank=1 

reference EntityotherHCOAffiliations reference entity onekeyID ONEKEY

workplace.parentWorkplaceEid

phones







typecontains FAX





number


workplace.telephone


rank


get phone with rank=1 










typenot contains FAX





number


workplace.fax


rank


get phone with rank=1 

HCP













YentityTypeACTIVITY





Yvalidation.clientRequestIdHUB_GENERATED_ID





Yvalidation.processQ





Yvalidation.requestDate1970-01-01T00:00Z





Yvalidation.callDate1970-01-01T00:00Z
attributes



Yvalidation.requestProcessI

extDCRComment



validation.requestComment









country


YisoCod2

















reference EntitycrosswalkONEKEY

individual.individualEid









firstName



individual.firstName

lastName


Yindividual.lastName

middleName



individual.middleName

typeCode



N/A



subTypeCode

HCPSubTypeCode

(TYP..*)



individual.typeCode

title

HCPTitle

(TIT.*)



individual.titleCode

prefix

HCPPrefix

(APP.*)



individual.prefixNameCode

suffix



N/A

gender

Gender

(.*)



individual.genderCode

specialties







typeCode

HCPSpecialty

(SP.W.*)



individual.speciality1


type


N/A


rank


get speciality with rank=1 


typeCode

HCPSpecialty

(SP.W.*)



individual.speciality2


type


N/A


rank


get speciality with rank=2 


typeCode

HCPSpecialty

(SP.W.*)



individual.speciality3


type


N/A


rank


get speciality with rank=3 

addresses







sourceAddressId


N/A


addressType


N/A


addressLine1


address.longLabel


addressLine2


address.longLabel2


addressLine3


N/A


stateProvince

AddressState

(DPT.W.*)



address.countyCode


city

Yaddress.city


zip


address.longPostalCode


country

Yaddress.country


rank


get address with rank=1 

identifiers







type


N/A


id


N/A

phones







type


N/A


number


individual.mobilePhone


rank


get phone with rank=1 

emails







type


N/A


email


individual.email


rank


get phone with rank=1 

contactAffiliationsno value in PFORCERX






type

RoleType

(TIH.W.*)



activity.role


primary


N/A


rank


get affiliation with rank=1 

contactAffiliations reference EntitycrosswalksONEKEY

workplace.workplaceEid

HCP & HCO













YentityTypeACTIVITY

For HCP full mapping check the HCP section above

Yvalidation.clientRequestIdHUB_GENERATED_ID

For HCO full mapping check the HCO section above

Yvalidation.processQ





Yvalidation.requestDate1970-01-01T00:00Z





Yvalidation.callDate1970-01-01T00:00Z
attributes



Yvalidation.requestProcessI

extDCRComment



validation.requestComment









country


YisoCod2

addresses







If the HCO address exists map to ONEKEY address

address (mapping HCO)


else






If the HCP address exists map to ONEKEY address

address (mapping HCP)

contactAffiliationsno value in PFORCERX






type

RoleType

(TIH.W.*)


activity.role


primary


N/A


rank


get affiliation with rank=1 


Triggers

Trigger actionComponentActionDefault time
REST callDCR Service: POST /dcrcreate DCRs in the ONEKEYAPI synchronous requests - realtime


Dependent components

ComponentUsage
DCR Service 2Main component with flow implementation
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "OneKey: generate DCR Change Events (traceVR)", + "pageID": "209950500", + "pageLink": "/pages/viewpage.action?pageId=209950500", + "content": "

Description

This process is triggered after the DCR was routed to OneKey based on the decision table configuration. Tracing is based on the OneKey VR changes: during this process the HUB DCR Cache is queried every <T> hours for SENT DCRs and the VR status is checked using the OneKey web service. After verification, a DCR Change event is generated. The event is processed in OneKey: process DCR Change Events and the DCR is updated in Reltio with the Accepted or Rejected status.
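One trace cycle can be sketched as below; checkVrStatus and publishEvent stand in for the OneKey web-service client and the event producer, both of which are assumptions:

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Sketch of one traceVR cycle: for every SENT DCR the VR status is fetched
// from OneKey; when a status is available, a DCR Change event is emitted.
// The callback signatures are illustrative, not the real service interfaces.
public class TraceVrJob {
    public static int run(List<String> sentDcrIds,
                          Function<String, String> checkVrStatus,
                          Consumer<String> publishEvent) {
        int published = 0;
        for (String dcrId : sentDcrIds) {
            String vrStatus = checkVrStatus.apply(dcrId); // OneKey web service call (stub)
            if (vrStatus != null) {                       // VR processed by the IQVIA DS
                publishEvent.accept(dcrId + ":" + vrStatus); // DCR change event (illustrative)
                published++;
            }
        }
        return published;
    }
}
```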

Flow diagram

\"\"

Steps


Event Model

data class OneKeyDCREvent(
    val eventType: String? = null,
    val eventTime: Long? = null,
    val eventPublishingTime: Long? = null,
    val countryCode: String? = null,
    val dcrId: String? = null,
    val targetChangeRequest: OneKeyChangeRequest,
)

data class OneKeyChangeRequest(
    val vrStatus: String? = null,
    val vrStatusDetail: String? = null,
    val oneKeyComment: String? = null,
    val individualEidValidated: String? = null,
    val workplaceEidValidated: String? = null,
    val vrTraceRequest: String? = null,
    val vrTraceResponse: String? = null,
)

Triggers

Trigger actionComponentActionDefault time
IN Timer (cron)dcr-service:TraceVRServicequery mongo to get all SENT DCR's related to the PFORCERX processevery <T> hour
OUT Eventsdcr-service:TraceVRServicegenerate the OneKeyDCREventevery <T> hour


Dependent components

ComponentUsage
DCR ServiceMain component with flow implementation
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "OneKey: process DCR Change Events", + "pageID": "209949303", + "pageLink": "/display/GMDM/OneKey%3A+process+DCR+Change+Events", + "content": "

Description

The process updates the DCRs based on the Change Request events received from [ONEKEY|VOD] (after the trace VR method result). Based on the [IQVIA|VEEVA] Data Steward decision, the state attribute contains the relevant information to update the DCR status. During this process the comments created by the IQVIA DS are also retrieved, and the relationship (optional step) between the DCR object and the newly created entity is created. The DCR status is accepted only after the [ONEKEY|VOD] profile is created in Reltio; only then will the Client receive the ACCEPTED status. The process checks Reltio with a <T> delay and retries if the ETL load is still in progress waiting for the [ONEKEY|VOD] profile. 

Flow diagram


OneKey variant

\"\"


Veeva variant: \"\"


Steps

  • OneKey: generate DCR Change Events (traceVR) publishes simple events to $env-internal-onekey-dcr-change-events-in: DCR_CHANGED
  • Events are aggregated in a time window (the recommended window length is 24 hours) and the last event is returned to the process after the window is closed.
  • Events are processed in the Stream and based on the OneKeyDCREvent.OneKeyChangeRequest.vrStatus | VeevaDCREvent.VeevaChangeRequestDetails.vrStatus attribute decision is made
  • DCR is retrieved from the cache based on the _id of the DCR
  • If the event state is ACCEPTED
    • Get Reltio entity COMPANYCustomerID by [ONEKEY|VOD] crosswalk
    • If such crosswalk entity exists in Reltio:
      • COMPANYGlobalCustomerId is saved in Registry and will be returned to the Client 
      • During the process, the optional check is triggered - create the relation between the DCR object and newly created entities
        • if DCRRegistry contains an empty list of entityUris, or some of the newly created entities are not present in the list, the relation between this object and the DCR has to be created
          • DCR entity is updated in Reltio and the relation between the processed entity and the DCR entity
            • Reltio source name (crosswalk. type): DCR
            • Reltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)
          • Newly created entity URIs should be retrieved via the individualEidValidated or workplaceEidValidated (it may be both) attributes from the events that represent the HCP or HCO crosswalks.
      • The status in Reltio and in Mongo is updated

        DCR entity attributes

        Mapping for OneKey

        Mapping for Veeva

        VRStatusCLOSED
        VRStatusDetail

        state: ACCEPTED

        CommentsONEKEY comments ({VR.rsp.responseComments})
        ONEKEY ID = individualEidValidated or workplaceEidValidated
        VEEVA comments = VR.rsp.responseComments
        VEEVA ID = entityUris
        COMPANYGlobalCustomerIdThis is required in ACCEPTED status

         

    • If the [ONEKEY|VOD] profile does not exist in Reltio
      • Regenerate the Event with a new timestamp to the input topic so this will be processed in the next <T> hours
      • Update the Reltio DCR status
        • DCR entity attributes

          Mapping

          VRStatusOPEN
          VRStatusDetail

          ACCEPTED

      • update the Mongo status to the OK_NOT_FOUND | VEEVA_NOT_FOUND and increase the "retryCounter" attribute
  • If the event state is REJECTED
    • If a Reltio DS has already seen this request, REJECT the DCR and end the flow (if the initial target type is Reltio)

      The status in Reltio and in Mongo is updated

      DCR entity attributes

      Mapping

      VRStatusCLOSED
      VRStatusDetail

      state: REJECTED

      Comments[ONEKEY|VOD] comments ({VR.rsp.responseComments})
    • If the target was decided by the routing table and the DCR was never sent to the Reltio DS, then create the DCR workflow and send it to the Reltio DS. Add an information comment that it was rejected by OneKey, so the Reltio DS now has to decide whether it should be REJECTED or APPLIED in Reltio. Add a comment that executing the sendTo3PartyValidation button is not possible in this case. Steps:
      • Check if the initial target type is [ONEKEY|VOD]
      • Use the DCR Request that was initially received from PforceRx and is a Domain Model request (after validation) 
      • Send the DCR to Reltio the service returns the following response:
        • ACCEPTED (change request accepted by Reltio)
          • update the status to DS_ACTION_REQUIRED and in the comment add the following: "This DCR was REJECTED by the [ONEKEY|VOD] Data Steward with the following comment: <[ONEKEY|VOD] reject comment>. Please review this DCR in Reltio and APPLY or REJECT. It is not possible to execute the sendTo3PartyValidation button in this case"
          • initialize new Workflow in Reltio with the comment.
          • save data in the DCR entity status in Reltio and update Mongo DCR Registry with workflow ID and other attributes that were used in this Flow.
        • REJECTED  (failure or error response from Reltio)
          • CLOSE the DCR with the information that DCR was REJECTED by the [ONEKEY|VOD] and Reltio also REJECTED the DCR. Add the error message from both systems in the comment. 
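The aggregation step above — keeping only the last event per DCR within the time window — can be sketched as follows; the reduced Event record is an assumption that keeps just the fields the aggregation needs:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the time-window aggregation: within one window only the latest
// change event per DCR id survives and is forwarded when the window closes.
public class DcrEventWindow {
    public record Event(String dcrId, long eventTime, String vrStatus) {}

    public static Map<String, Event> aggregate(List<Event> window) {
        Map<String, Event> latest = new LinkedHashMap<>();
        for (Event e : window) {
            // keep the event with the greatest eventTime per DCR id
            latest.merge(e.dcrId(), e,
                    (older, newer) -> newer.eventTime() >= older.eventTime() ? newer : older);
        }
        return latest; // one event per DCR survives the window
    }
}
```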

Triggers

Trigger actionComponentActionDefault time
IN Events incoming 

dcr-service-2:DCROneKeyResponseStream

dcr-service-2:DCRVeevaResponseStream ($env-internal-veeva-dcr-change-events-in)

process publisher full change request events in the streamrealtime: events stream processing 

Dependent components

ComponentUsage
DCR Service 2Main component with flow implementation
ManagerReltio Adapter  - API operations
PublisherEvents publisher generates incoming events
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "Reltio: create DCR method - direct", + "pageID": "209949292", + "pageLink": "/display/GMDM/Reltio%3A+create+DCR+method+-+direct", + "content": "

Description

Rest API method exposed in the Manager component responsible for submitting the Change Request to Reltio

Flow diagram

\"\"

Steps

  1. Receive the DCR request generated by DCR Service 2 component
  2. Depending on the Action execute the method in the Manager component:
    1. insert - Execute standard Create/Update HCP/HCO/MCO operation with additional changeRequest.id parameter
    2. update - Execute Update Attributes operation with additional changeRequest.id parameter
      1. the combination of IGNORE_ATTRIBUTE & INSERT_ATTRIBUTE when updating an existing attribute in Reltio
      2. the INSERT_ATTRIBUTE when adding a new attribute to Reltio
    3. delete - Execute Update Attributes operation with additional changeRequest.id parameter
      1. the UPDATE_END_DATE on the entity to inactivate the profile
  3. Based on the Reltio response the DCR Response is returned:
    1. REQUEST_ACCEPTED - Reltio processed the request successfully 
    2. REQUEST_FAILED - Reltio returned the exception, Client will receive the detailed description in the errorMessage
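Step 2's action dispatch can be sketched as below; the returned strings describe the operations listed on this page, not the actual Manager method names:

```java
// Illustrative dispatch of the DCR action to the corresponding Manager
// operation; the returned strings describe the operations, they are not the
// real Manager API method names.
public class DcrActionDispatcher {
    public static String dispatch(String action) {
        return switch (action) {
            case "insert" -> "Create/Update HCP/HCO/MCO with changeRequest.id";
            case "update" -> "Update Attributes with changeRequest.id";
            case "delete" -> "Update Attributes with UPDATE_END_DATE";
            default -> throw new IllegalArgumentException("Unknown DCR action: " + action);
        };
    }
}
```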

Triggers

Trigger actionComponentActionDefault time
REST callDCR Service: POST /dcr2Create change Requests in ReltioAPI synchronous requests - realtime


Dependent components

ComponentUsage
DCR ServiceMain component with flow implementation
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "Reltio: process DCR Change Events", + "pageID": "209949300", + "pageLink": "/display/GMDM/Reltio%3A+process+DCR+Change+Events", + "content": "

Description

The process updates the DCRs based on the Change Request events received from Reltio (publishing). Based on the Data Steward decision, the state attribute contains the relevant information to update the DCR status. During this process the comments created by the DS are also retrieved, and the relationship (optional step) between the DCR object and the newly created entity is created.


Flow diagram

\"\"

Steps

Triggers

Trigger actionComponentActionDefault time
IN Events incoming dcr-service-2:DCRReltioResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing 

Dependent components

ComponentUsage
DCR Service

DCR Service 2

Main component with flow implementation
ManagerReltio Adapter  - API operations
PublisherEvents publisher generates incoming events
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "Reltio: Profiles created by DCR", + "pageID": "510266969", + "pageLink": "/display/GMDM/Reltio%3A+Profiles+created+by+DCR", + "content": "
DCR typeApproval/Reject Record visibility in MDMCrosswalk TypeCrosswalk ValueSource
DCR create for HCP/HCOApproved by OneKey/VODHCP/HCO created in MDMONEKEY|VODonekey id ONEKEY|VOD
Approved by DSRHCP/HCO created in MDMSystem source name from DCR (KOL_OneView, PforceRx, etc)DCR IDSystem source name from DCR (KOL_OneView, PforceRx, etc)
DCR edit for HCP/HCOApproved by OneKey/VODHCP/HCO requested attribute updated in MDMONEKEY|VOD
ONEKEY|VOD
Approved by DSRHCP/HCO requested attribute updated in MDMReltioentity uriReltio
DCR edit for HCPaddress/HCO addressApproved by OneKey/VODNew address created in MDM, existing address marked as inactiveONEKEY|VOD
ONEKEY|VOD
Approved by DSRNew address created in MDM, existing address marked as inactiveReltioentity uriReltio
" + }, + { + "title": "Veeva DCR flows", + "pageID": "379332475", + "pageLink": "/display/GMDM/Veeva+DCR+flows", + "content": "

Description

The process is responsible for creating DCRs, which are stored (Store VR) to be further transferred and processed by Veeva. Changes can be suggested by the DS using the "Suggest" operation in Reltio and the "Send to Third Party Validation" button. All DCRs are saved in a dedicated collection in the HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. During this process, communication with Veeva OpenData is established via S3/SFTP. The SubmitVR operation creates new ZIP files with DCR requests spread across multiple CSV files. The TraceVR operation checks whether Veeva responded to the initial DCR requests via a ZIP file placed in the Inbound S3 directory. 

The process is divided into 3 sections:

  1. Create DCR request - Veeva
  2. Submit DCR Request - Veeva
  3. Trace Validation Request - Veeva

The diagram below presents an overview of the entire process. Detailed descriptions are available on the separate subpages.

Business process diagram for R1 phase

\"\"


Flow diagram

\"\"

Steps

Triggers

DCR Service 2 is triggered via /dcr API calls, which are in turn triggered by Data Steward actions (R1 phase) → "Suggest 3rd party validation", which pushes the DCR from Reltio to HUB.

Dependent components

Described in the separate sub-pages for each process.

Design document for HUB development 

  1. Design → VeevaOpenData-implementation.docx

  2. Reltio HUB-VOD mapping → VeevaOpenDataAPACDataDictionary.xlsx
  3. VOD model description (v4) → Veeva_OpenData_APAC_Data_Dictionary v4.xlsx
" + }, + { + "title": "Create DCR request - Veeva", + "pageID": "386814533", + "pageLink": "/display/GMDM/Create+DCR+request+-+Veeva", + "content": "

Description

The process of creating new DCR requests for Veeva OpenData. During this process, new DCRs are created in the DCRRegistryVeeva mongo collection.

Flow diagram

\"\"

Steps

Mappings

DCR domain model→ VOD mapping file: VeevaOpenDataAPACDataDictionary-mmor-mapping.xlsx

Veeva integration guide

\"\"

" + }, + { + "title": "Submit DCR Request - Veeva", + "pageID": "379333348", + "pageLink": "/display/GMDM/Submit+DCR+Request+-+Veeva", + "content": "

Description

The process of submitting new validation requests to the Veeva OpenData service via VeevaAdapter (communication with S3/SFTP) based on the DCRRegistryVeeva mongo collection. During this process, new DCRs are created in the VOD system.

Flow diagram


\"\"

Steps

Veeva DCR service flow:

SFTP integration service flow:

Triggers

Trigger actionComponentActionDefault time
Spring schedulermdm-veeva-dcr-service:VeevaDCRRequestSenderprepare ZIP files for VOD systemCalled every specified interval

Dependent components

ComponentUsage
Veeva adapterUpload DCR request to s3 location
" + }, + { + "title": "Trace Validation Request - Veeva", + "pageID": "379333358", + "pageLink": "/display/GMDM/Trace+Validation+Request+-+Veeva", + "content": "

Description

The process of tracing the VR changes based on the Veeva VR changes. During this process the HUB DCRRegistryVeeva Cache is queried every <T> hours for SENT DCRs and the VR status is checked using the Veeva Adapter (S3/SFTP integration). After verification, a DCR event is sent to the DCR Service 2 Veeva response stream.

Flow diagram


\"\"


Steps


Triggers

Trigger actionComponentActionDefault time
IN Spring schedulermdm-veeva-dcr-service:VeevaDCRRequestTracestart trace validation request processevery <T> hour
OUT Kafka topicmdm-dcr-service-2:VeevaResponseStreamupdate DCR status in Reltio, create relationsinvokes Kafka producer for each veeva DCR response

Dependent components

ComponentUsage
DCR Service 2Process response event
" + }, + { + "title": "Veeva: create DCR method (storeVR)", + "pageID": "379332642", + "pageLink": "/pages/viewpage.action?pageId=379332642", + "content": "

Description

Rest API method exposed in the Veeva DCR Service component responsible for creating new DCR requests specific to Veeva OpenData (VOD) and storing them in a dedicated collection for a later submit. Since VOD enables communication only via S3/SFTP, a dedicated mechanism is required to actually trigger CSV/ZIP file creation and file placement in the outbound directory. A periodic call to the Submit VR method is scheduled once a day (with cron), which in the end calls the VeevaAdapter createChangeRequest method.

Flow diagram

\"\"

Steps


  1. Receive the API request
  2. Validate initial request
    1. check if the Veeva crosswalk exists when there is an update on the profile
    2. otherwise it's required to prepare a DCR to create a new Veeva profile
    3. If any formal attribute is missing or incorrect: skip the request
  3. Then the DCR is mapped to a Veeva Request by invoking the mapper between the HUB DCR → VEEVA model 
    1. For mapping purposes the mapping table below should be used 
    2. If there is no proper LOV mapping between HUB and Veeva, the default fallback should be set to a question mark → ?  
  4. Once a proper request has been created, it should be stored as a VeevaVRDetails entry in the dedicated DCRRegistryVeeva collection, ready to be sent via the Submit VR job and for future tracing purposes
  5. Prepare the return response for the initial API request with the following logic
    1. Generate the success response after a successful mongo insert →  generateResponse(dcrRequest, RequestStatus.REQUEST_ACCEPTED, null, null)
    2. Generate the error response on a validation failure or exception →  generateResponse(dcrRequest, RequestStatus.REQUEST_FAILED, getErrorDetails(), null);
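Two details from the steps above — the LOV fallback and the response shape — can be sketched together; the mapLov helper and the response record are assumptions, not the actual service code:

```java
import java.util.Map;

// Sketch of two details from the storeVR steps: the LOV mapping falls back to
// "?" when no HUB → Veeva code mapping exists, and the response carries
// REQUEST_ACCEPTED after a successful store or REQUEST_FAILED with error
// details. All names are illustrative.
public class VeevaStoreVr {
    public record DcrResponse(String requestStatus, String errorMessage) {}

    public static String mapLov(Map<String, String> hubToVeeva, String hubCode) {
        return hubToVeeva.getOrDefault(hubCode, "?"); // default fallback per the mapping rule
    }

    public static DcrResponse generateResponse(boolean stored, String errorDetails) {
        return stored
                ? new DcrResponse("REQUEST_ACCEPTED", null)
                : new DcrResponse("REQUEST_FAILED", errorDetails);
    }
}
```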

Mapping HUB DCR → Veeva model 

ReltioHUBVEEVA
Attribute PathDetailsDCR Request pathDetailsFile NameField NameRequired for Add Request?Required for Change Request?DescriptionReference (RDM/LOV)NOTE
HCO
N/A
Mongo Generated ID for this DCR | Kafka KEYonce mapping from HUB Domain DCRRequest take this from DCRRequestD.dcrRequestId: String, // HUB DCR request id - Mongo ID - required in ONEKEY servicechange_requestdcr_keyYYCustomer's internal identifier for this request

Change Requests comments 
extDCRComment
change_requestdescriptionYYRequester free-text comments explaining the DCR

targetChangeRequest.createdBy
createdBy
change_requestcreated_byYYFor requestor identification

N/A
if a new object → ADD, if a Veeva ID exists → CHANGE
change_requestchange_request_typeYYADD_REQUEST or CHANGE_REQUEST

N/Adepends on suggested changes (check use-cases)main entity object type HCP or HCO
change_requestentity_typeYNHCP or HCOEntityType
N/A
Mongo Generated ID for this DCR | Kafka KEY
change_request_hcodcr_keyYYCustomer's internal identifier for this request

Reltio Uri and Reltio Typewhen insert new profileentities.HCO.updateCrosswalk.type (Reltio)
entities.HCO.updateCrosswalk.value (Reltio id)
and
refId.entityURI
concatenate Reltio:rvu44dmchange_request_hcoentity_keyYYCustomer's internal HCO identifier

Crosswalks - VEEVA crosswalkwhen update on VEEVAentities.HCO.updateCrosswalk.type (VEEVA)
entities.HCO.updateCrosswalk.value (VEEVA ID)

change_request_hcovid__vYNVeeva ID of existing HCO to update; if blank, the request will be interpreted as an add request

configuration/entityTypes/HCO/attributes/OtherNames/attributes/Namefirst elementTODO - add new attribute
change_request_hcoalternate_name_1__vYN


??
??
change_request_hcobusiness_type__vYN
HCOBusinessTypeTO BE CONFIRMED
configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType
HCO.subTypeCode
change_request_hcpmajor_class_of_trade__vNN
COTFacilityType

In PforceRx - Account Type, more info: MR-9512

configuration/entityTypes/HCO/attributes/Name
name
change_request_hcocorporate_name__vNY


configuration/entityTypes/HCO/attributes/TotalLicenseBeds
TODO - add new attribute
change_request_hcocount_beds__vNY


configuration/entityTypes/HCO/attributes/Email/attributes/Emailemail with rank 1emails
change_request_hcoemail_1__vNN


configuration/entityTypes/HCO/attributes/Email/attributes/Emailemail with rank 2
change_request_hcoemail_2__vNN


configuration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.FAX with best rankphones
change_request_hcofax_1__vNN


configuration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.FAX with worst rank
change_request_hcofax_2__vNN


configuration/entityTypes/HCO/attributes/StatusDetail
TODO - add new attribute
change_request_hcohco_status__vNN
HCOStatus
configuration/entityTypes/HCO/attributes/TypeCode
typecode
change_request_hcohco_type__vNN
HCOType
configuration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with best rankphones
change_request_hcophone_1__vNN


configuration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with worst rank
change_request_hcophone_2__vNN


configuration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with worst rank
change_request_hcophone_3__vNN


configuration/entityTypes/HCO/attributes/Country
DCRRequest.country
change_request_hcoprimary_country__vNN


configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtyelements from COT specialties
change_request_hcospecialty_1__vNN


configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_10__vNN
Speciality
configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_2__vNN

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_3__vNN

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_4__vNN

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_5__vNN

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_6__vNN

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_7__vNN

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_8__vNN

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty
change_request_hcospecialty_9__vNN

configuration/entityTypes/HCO/attributes/Website/attributes/WebsiteURLfirst elementwebsiteURL
change_request_hcoURL_1__vNN


configuration/entityTypes/HCO/attributes/Website/attributes/WebsiteURLN/AN/A
change_request_hcoURL_2__vNN


HCP
 
N/A
Mongo Generated ID for this DCR | Kafka KEY
change_request_hcpdcr_keyYYCustomer's internal identifier for this request

Reltio Uri and Reltio Typewhen insert new profileentities.HCO.updateCrosswalk.type (Reltio)
entities.HCO.updateCrosswalk.value (Reltio id)
and
refId.entityURI
concatenate Reltio:rvu44dmchange_request_hcpentity_keyYYCustomer's internal HCP identifier

configuration/entityTypes/HCP/attributes/Country
DCRRequest.country
change_request_hcpprimary_country__vYY


Crosswalks - VEEVA crosswalkwhen update on VEEVAentities.HCO.updateCrosswalk.type (VEEVA)
entities.HCO.updateCrosswalk.value (VEEVA ID)

change_request_hcpvid__vNY


configuration/entityTypes/HCP/attributes/FirstName
firstName
change_request_hcpfirst_name__vYN


configuration/entityTypes/HCP/attributes/Middle
middleName
change_request_hcpmiddle_name__vNN


configuration/entityTypes/HCP/attributes/LastName
lastName
change_request_hcplast_name__vYN


configuration/entityTypes/HCP/attributes/Nickname
TODO - add new attribute
change_request_hcpnickname__vNN


configuration/entityTypes/HCP/attributes/Prefix
prefix
change_request_hcpprefix__vNN
HCPPrefix
configuration/entityTypes/HCP/attributes/SuffixName
suffix
change_request_hcpsuffix__vNN


configuration/entityTypes/HCP/attributes/Title
title
change_request_hcpprofessional_title__vNN
HCPProfessionalTitle
configuration/entityTypes/HCP/attributes/SubTypeCode
subTypeCode
change_request_hcphcp_type__vYN
HCPType
configuration/entityTypes/HCP/attributes/StatusDetail
TODO - add new attribute
change_request_hcphcp_status__vNN
HCPStatus
configuration/entityTypes/HCP/attributes/AlternateName/attributes/FirstName
TODO - add new attribute
change_request_hcpalternate_first_name__vNN


configuration/entityTypes/HCP/attributes/AlternateName/attributes/LastName
TODO - add new attribute
change_request_hcpalternate_last_name__vNN


configuration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleName
TODO - add new attribute
change_request_hcpalternate_middle_name__vNN


??
TODO - add new attribute
change_request_hcpfamily_full_name__vNN

TO BE CONFIRMED
configuration/entityTypes/HCP/attributes/DoB
birthYear
change_request_hcpbirth_year__vNN


configuration/entityTypes/HCP/attributes/Credential/attributes/Credentialby rank 1TODO - add new attribute
change_request_hcpcredentials_1__vNN

TO BE CONFIRMED
configuration/entityTypes/HCP/attributes/Credential/attributes/Credential2TODO - add new attribute
change_request_hcpcredentials_2__vNN

In Reltio there is an attribute but it is not used
configuration/entityTypes/HCP/attributes/Credential/attributes/Credential3TODO - add new attribute
change_request_hcpcredentials_3__vNN

configuration/entityTypes/HCP/attributes/Credential/attributes/Credential4TODO - add new attribute
change_request_hcpcredentials_4__vNN

configuration/entityTypes/HCP/attributes/Credential/attributes/Credential5TODO - add new attribute
change_request_hcpcredentials_5__vNN
HCPCredentials
??
TODO - add new attribute
change_request_hcpfellow__vNN
BooleanReferenceTO BE CONFIRMED
configuration/entityTypes/HCP/attributes/Gender
gender
change_request_hcpgender__vNN
HCPGender
?? Education ??
TODO - add new attribute
change_request_hcpeducation_level__vNN
HCPEducationLevelTO BE CONFIRMED
configuration/entityTypes/HCP/attributes/Education/attributes/SchoolName
TODO - add new attribute
change_request_hcpgrad_school__vNN


configuration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduationTODO - add new attribute
change_request_hcpgrad_year__vNN


??


change_request_hcphcp_focus_area_10__vNN

TO BE CONFIRMED
??


change_request_hcphcp_focus_area_1__vNN


??


change_request_hcphcp_focus_area_2__vNN


??


change_request_hcphcp_focus_area_3__vNN


??


change_request_hcphcp_focus_area_4__vNN


??


change_request_hcphcp_focus_area_5__vNN


??


change_request_hcphcp_focus_area_6__vNN


??


change_request_hcphcp_focus_area_7__vNN


??


change_request_hcphcp_focus_area_8__vNN


??


change_request_hcphcp_focus_area_9__vNN
HCPFocusArea
??


change_request_hcpmedical_degree_1__vNN

TO BE CONFIRMED
??


change_request_hcpmedical_degree_2__vNN
HCPMedicalDegree
configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyby rank from 1 to 100specialties
change_request_hcpspecialty_1__vYN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_10__vNN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_2__vNN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_3__vNN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_4__vNN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_5__vNN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_6__vNN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_7__vNN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_8__vNN


configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialties
change_request_hcpspecialty_9__vNN
Specialty
configuration/entityTypes/HCP/attributes/WebsiteURL
TODO - add new attribute
change_request_hcpURL_1__vNN


ADDRESS


Mongo Generated ID for this DCR | Kafka KEY
change_request_addressdcr_keyYYCustomer's internal identifier for this request

Reltio Uri and Reltio Typewhen insert new profileentities.HCP OR HCO.updateCrosswalk.type (Reltio)
entities.HCP OR HCO.updateCrosswalk.value (Reltio id)
and
refId.entityURI
concatenate Reltio:rvu44dmchange_request_addressentity_keyYYCustomer's internal HCO/HCP identifier

attributes/Addresses/attributes/COMPANYAddressID
address.refId
change_request_addressaddress_keyYYCustomer's internal address identifier

attributes/Addresses/attributes/AddressLine1
addressLine1
change_request_addressaddress_line_1__vYN


attributes/Addresses/attributes/AddressLine2
addressLine2
change_request_addressaddress_line_2__vNN


attributes/Addresses/attributes/AddressLine3
addressLine3
change_request_addressaddress_line_3__vNN


N/A
N/AAchange_request_addressaddress_status__vNN
AddressStatus
attributes/Addresses/attributes/AddressType
addressType
change_request_addressaddress_type__vYN
AddressType
attributes/Addresses/attributes/StateProvince
stateProvince
change_request_addressadministrative_area__vYN
AddressAdminArea
attributes/Addresses/attributes/Country
country
change_request_addresscountry__vYN


attributes/Addresses/attributes/City
city
change_request_addresslocality__vYY


attributes/Addresses/attributes/Zip5
zip
change_request_addresspostal_code__vYN


attributes/Addresses/attributes/Source/attributes/SourceName
attributes/Addresses/attributes/Source/attributes/SourceAddressID
when VEEVA map VEEVA ID to sourceAddressId
change_request_addressvid__vNY


map from
relationTypes/OtherHCOtoHCOAffiliations
or
relationTypes/ContactAffiliations

This will be HCP.ContactAffiliation or HCO.OtherHcoToHCO affiliation








Mongo Generated ID for this DCR | Kafka KEY
change_request_parenthcodcr_keyYYCustomer's internal identifier for this request



HCO.otherHCOAffiliations.relationUri
or
HCP.contactAffiliations.relationUri
 (from Domain model)
information about Reltio Relation ID
change_request_parenthcoparenthco_keyYYCustomer's internal identifier for this relationshipRELATION ID


KEY entity_key from HCP or HCO (start object)
change_request_parenthcochild_entity_keyYYChild Identifier in the HCO/HCP fileSTART OBJECT ID
endObject entity uri mapped to refId.EntityURITargetObjectId
KEY entity_key from HCP or HCO (end object, by affiliation)
change_request_parenthcoparent_entity_keyYYParent identifier in the HCO fileEND OBJECT ID

changes in Domain model mappingmap Relation.Source.SourceName - VEEVA
map Relation.Source.SourceValue - VEEVA ID
add to Domain model
map if relation is from VEEVA ID 
change_request_parenthcovid__vNY




start object entity type 
change_request_parenthcoentity_type__vYN


attributes/RelationType/attributes/PrimaryAffiliation
if is primary
TODO - add new attribute to otherHcoToHCO

change_request_parenthcois_primary_relationship__vNN
BooleanReference


HCO_HCO or HCP_HCO
change_request_parenthcohierarchy_type__v


RelationHierarchyType
attributes/RelationType/attributes/RelationshipDescription
type from affiliation
based on ContactAffliation or OtherHCOToHCO affiliation
I think it will be 14-Employed for HCP_HCO
and 4-Manages for HCO_HCO
but maybe we can map from affiliation.type
change_request_parenthcorelationship_type__vYN
RelationType


Mongo collection

All DCRs initiated by the dcr-service-2 API and to be sent to Veeva will be stored in Mongo in a new collection, DCRRegistryVeeva. The idea is to gather all DCRs requested by the client throughout the day and schedule a ‘SubmitVR’ process that will communicate with the Veeva adapter.

Typical use case: 


In this store we are going to keep both types of DCRs:

\n
initiated by PforceRX - PFORCERX_DCR("PforceRxDCR")\ninitiated by Reltio SubmitVR - SENDTO3PART_DCR("ReltioSuggestedAndSendTo3PartyDCR");
\n
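The two DCR types listed above can be captured as a small enum sketch; the actual DCRType declaration in dcr-service-2 may carry more members than shown here.

```kotlin
// Sketch only: the real DCRType in dcr-service-2 may differ.
enum class DCRType(val label: String) {
    PFORCERX_DCR("PforceRxDCR"),
    SENDTO3PART_DCR("ReltioSuggestedAndSendTo3PartyDCR")
}
```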


Store class idea:


VeevaVRDetails
\n
@Document("DCRRegistryVEEVA")\n@JsonIgnoreProperties(ignoreUnknown = true)\n@JsonInclude(JsonInclude.Include.NON_NULL)\ndata class VeevaVRDetails(\n    @JsonProperty("_id")\n    @Id\n    val id: String? = null,\n    val type: DCRType,\n    val status: DCRRequestStatusDetails,\n    val createdBy: String? = null,\n    val createTime: ZonedDateTime? = null,\n    val endTime: ZonedDateTime? = null,\n    val veevaRequestTime: ZonedDateTime? = null,\n    val veevaResponseTime: ZonedDateTime? = null,\n    val veevaRequestFileName: String? = null,\n    val veevaResponseFileName: String? = null,\n    val veevaResponseFileTime: ZonedDateTime? = null,\n    val country: String? = null,\n    val source: String? = null,\n    val extDCRComment: String? = null, // external DCR Comment (client comment)\n    val trackingDetails: List<DCRTrackingDetails> = mutableListOf(),\n\n    // RAW FILE LINES mapped from DCRRequest to the Veeva model\n    val change_request_csv: String,\n    val change_request_hcp_csv: String,\n    val change_request_hco_csv: List<String>,\n    val change_request_address_csv: List<String>,\n    val change_request_parenthco_csv: List<String>,\n\n    // RAW FILE LINES mapped from the Veeva response model\n    val change_request_response_csv: String,\n    val change_request_response_hcp_csv: String,\n    val change_request_response_hco_csv: List<String>,\n    val change_request_response_address_csv: List<String>,\n    val change_request_response_parenthco_csv: List<String>\n)
\n

Mapping Reltio canonical codes → Veeva source codes

There are a couple of steps performed to find a mapping from a Reltio canonical code to a source code understood by VOD. The steps below are performed (in this order) until a code is found. 

Veeva Defaults 

Configuration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/defaults

The purpose of this logic is to select one of possibly multiple source codes on the VOD end for a single code on the COMPANY side (1:N). The other scenario is when there is no actual source code for a canonical code on the VOD end (1:0); however, this is usually covered by the fallback code logic.

There are a couple of files, each containing source codes for a specific attribute. The ones related to HCO.Specialty and HCP.Specialty contain logic which selects the proper code.

RDM lookups with RegExp

This is the main logic used to find the proper source code for a canonical code. We use the codes configured in RDM; however, the Mongo collection LookupValues is used. For a specific canonical code (code) we look for sourceMappings with source = VOD. Often the country is embedded within the source code, so we apply regexpConfig (more in the Veeva Fallback section) to extract the specific source code for a particular country.
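The lookup step can be sketched as follows. The class and function names (SourceMapping, LookupValue, resolveVodCode) are illustrative assumptions, not the actual service types; only the selection logic follows the description above.

```kotlin
// Illustrative sketch of the RDM lookup step, not the actual service code.
data class SourceMapping(val source: String, val code: String)
data class LookupValue(val canonicalCode: String, val sourceMappings: List<SourceMapping>)

// Pick the sourceMapping with source = VOD and apply the configured
// regexp to extract the country-specific source code (capture group 1).
fun resolveVodCode(lookup: LookupValue, regexpConfig: Regex): String? =
    lookup.sourceMappings
        .firstOrNull { it.source == "VOD" }
        ?.let { regexpConfig.find(it.code)?.groupValues?.getOrNull(1) }
```

For example, with a hypothetical regexpConfig of `Regex("AU__(.+)")`, a VOD source code `AU__GP-01` would resolve to `GP-01`.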

Veeva Fallback

Configuration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/fallback


Triggers

Trigger action

Component

Action

Default time

REST callmdm-veeva-dcr-service: POST /dcr → veevaDCRService.createChangeRequest(request)

Creates the DCR and stores it in the collection without actually sending it to Veeva. 

API synchronous requests - realtime


Dependent components

Component

Usage

DCR Service 2Main component with flow implementation
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "Veeva: create DCR method (submitVR)", + "pageID": "386796763", + "pageLink": "/pages/viewpage.action?pageId=386796763", + "content": "

Description

Gathers all DCR entities stored in the DCRRegistryVeeva collection (status = NEW) and sends them via S3/SFTP to Veeva OpenData (VOD). This method triggers CSV/ZIP file creation and file placement in the outbound directory. The method is triggered from a cron which invokes VeevaDCRRequestSender.sendDCRs() from the Veeva DCR Service 

Flow diagram

\"\"

Steps


  1. The process is triggered via a scheduler, usually every 24h (senderConfiguration.schedulerConfig.fixedDelay) at a specific time of day (senderConfiguration.schedulerConfig.initDelay)
  2. All DCR entities (VeevaVRDetails) with status NEW are being retrieved from DCRRegistryVeeva collection 
  3. Then a VeevaCreateChangeRequest object is created, which aggregates all the CSV content to be placed in the actual CSV files. 
    1. Each object contains only DCRs specific for country
    2. Each country has its own S3/SFTP directory structure as well as dedicated SFTP server instance
  4. Once the CSV files are created with header and content, they are packed into a single ZIP file
  5. The ZIP file is placed in the outbound S3 directory
  6. If the file was placed
    1. successfully - then VeevaChangeRequestACK status = SUCCESS
    2. otherwise - VeevaChangeRequestACK status = FAILURE and the process ends
  7. Finally, the status of the VeevaVRDetails entity in the DCRRegistryVeeva collection is updated and set to SENT_TO_VEEVA
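The packing step (CSV files into a single ZIP payload) can be sketched as below. This is a minimal sketch: the map key is the CSV file name and the value its full text (header included); the actual S3/SFTP placement is out of scope here.

```kotlin
import java.io.ByteArrayOutputStream
import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

// Pack the generated per-country CSV contents (file name -> CSV text)
// into a single ZIP payload, ready for upload to the outbound directory.
fun packDcrCsvFiles(csvFiles: Map<String, String>): ByteArray {
    val buffer = ByteArrayOutputStream()
    ZipOutputStream(buffer).use { zip ->
        for ((name, content) in csvFiles) {
            zip.putNextEntry(ZipEntry(name))
            zip.write(content.toByteArray(Charsets.UTF_8))
            zip.closeEntry()
        }
    }
    return buffer.toByteArray()
}
```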

Triggers

Trigger action

Component

Action

Default time

Timer (cron)mdm-veeva-dcr-service: VeevaDCRRequestSender.sendDCRs()

Takes all unsent entities (status = NEW) from the Veeva collection and puts the file in the S3/SFTP directory via veevaAdapter.createDCRs



Usually every 24h (senderConfiguration.schedulerConfig.fixedDelay) at specific time of day (senderConfiguration.schedulerConfig.initDelay)


Dependent components

Component

Usage

DCR Service 2Main component with flow implementation
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "Veeva: generate DCR Change Events (traceVR)", + "pageID": "379329922", + "pageLink": "/pages/viewpage.action?pageId=379329922", + "content": "

Description

The process is responsible for gathering DCR responses from Veeva OpenData (VOD). Responses are provided via CSV/ZIP files placed on the S3/SFTP server in country-specific inbound directories. During this process the files should be retrieved, mapped from the VOD model to the HUB DCR model, and published to a Kafka topic to be properly processed by DCR Service 2, Veeva: process DCR Change Events.

Flow diagram

\"\"

Source: Lucid

Steps

  1. The method is triggered via cron, usually every 24h (traceConfiguration.schedulerConfig.fixedDelay) at a specific time of day (traceConfiguration.schedulerConfig.initDelay)
  2. For each country, each inbound directory is scanned for ZIP files
  3. Each ZIP file (<country>_DCR_Response_<Date>.zip) should be unpacked and processed. A set of CSV files should be extracted. Specifically:
    1. change_request_response.csv → it's a manifest file with general information in specific columns
      1. dcr_key → ID of DCR which was established during DCR request creation 
      2. entity_key → ID of entity in Reltio, the same one we provided during DCR request creation
      3. entity_type → type of entity (HCO, HCP) which is being modified via this DCR
      4. resolution → has information whether DCR was accepted or rejected. Full list of values is below.
        1. resolution value | Description
           CHANGE_PENDING | This change is still processing and hasn't been resolved
           CHANGE_ACCEPTED | This change has been accepted without modification
           CHANGE_PARTIAL | This change has been accepted with additional changes made by the steward, or some parts of the change request have been rejected
           CHANGE_REJECTED | This change has been rejected in its entirety
           CHANGE_CANCELLED | This change has been cancelled
      5. change_request_type 
        1. change_request_type value | Description
           ADD_REQUEST | the DCR caused creation of a new profile in VOD with a new vid__v (Veeva id)
           CHANGE_REQUEST | an update of an existing profile in VOD with an existing, already known vid__v (Veeva id)
    2. change_request_hcp_response.csv - contains information about DCR related to HCP
    3. change_request_hco_response.csv - contains information about DCR related to HCO
    4. change_request_address_response.csv - contains information about DCR related to addresses which are related to specific HCP or HCO
    5. change_request_parenthco_response.csv - contains information about DCR which correspond to relations between HCP and HCO, and HCO and HCO
    6. File with log: <country>_DCR_Request_Job_Log.csv can be skipped. It does not contain any useful information to be processed automatically
  4. For each DCR response from VOD, the corresponding DCR entity (VeevaVRDetails) should be selected from the DCRRegistryVeeva collection. 
  5. In general, the specific response files are not that important (VOD profile updates will be ingested into HUB via the ETL channel); however, when new profiles are created (change_request_response.csv.change_request_type = ADD_REQUEST) we need to extract their Veeva IDs. 
    1. We need to deep dive into change_request_hcp_response.csv or change_request_hco_response.csv to find vid__v (Veeva ID) for specific dcr_key 
    2. This new Veeva ID should be stored in VeevaDCREvent.vrDetails.veevaHCPIds
    3. It should be further used as a crosswalk value in Reltio:

      1. entities.HCO.updateCrosswalk.type (VEEVA)
      2. entities.HCO.updateCrosswalk.value (VEEVA ID)
  6. Once data has been properly mapped from Veeva to the HUB DCR model, a new VeevaDCREvent entity should be created and published to the dedicated Kafka topic $env-internal-veeva-dcr-change-events-in
    1. Please be advised: when the resolution status is not yet final (i.e., not one of CHANGE_ACCEPTED, CHANGE_REJECTED, CHANGE_CANCELLED, CHANGE_PARTIAL), we should not send an event to DCR-service-2
  7. Then, for each successfully processed DCR, the corresponding entity (VeevaVRDetails) in the Mongo DCRRegistryVeeva collection should be updated 
    1. Veeva CSV: resolution | Mongo DCRRegistryVeeva, VeevaVRDetails.status (DCRRequestStatusDetails) | Event VeevaDCREvent.vrDetails.vrStatus (topic $env-internal-veeva-dcr-change-events-in) | Event VeevaDCREvent.vrDetails.vrStatusDetail (topic $env-internal-veeva-dcr-change-events-in)
       CHANGE_PENDING | status should not be updated at all (stays as SENT) | do not send events to DCR-service-2 | do not send events to DCR-service-2
       CHANGE_ACCEPTED | ACCEPTED | CLOSED | ACCEPTED
       CHANGE_PARTIAL | ACCEPTED | CLOSED | ACCEPTED (resolutionNotes / veevaComment should contain more information about what was rejected by the VEEVA DS)
       CHANGE_REJECTED | REJECTED | CLOSED | REJECTED
       CHANGE_CANCELLED | REJECTED | CLOSED | REJECTED
  8. Once files are processed, ZIP file should be moved from inbound to archive directory
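The status mapping from step 7 can be sketched as a small pure function. The function name mapResolution is an illustrative assumption; a null result corresponds to the non-final CHANGE_PENDING case, where the status stays SENT and no event is published to DCR-service-2.

```kotlin
// Sketch of the step-7 mapping: Veeva resolution ->
// (VeevaVRDetails.status, event vrStatus, event vrStatusDetail).
// null means the resolution is not final yet (CHANGE_PENDING):
// keep the status as SENT and publish no event.
fun mapResolution(resolution: String): Triple<String, String, String>? =
    when (resolution) {
        "CHANGE_ACCEPTED", "CHANGE_PARTIAL" -> Triple("ACCEPTED", "CLOSED", "ACCEPTED")
        "CHANGE_REJECTED", "CHANGE_CANCELLED" -> Triple("REJECTED", "CLOSED", "REJECTED")
        else -> null
    }
```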


Event VeevaDCREvent Model

\n
data class VeevaDCREvent (val eventType: String? = null,\n                          val eventTime: Long? = null,\n                          val eventPublishingTime: Long? = null,\n                          val countryCode: String? = null,\n                          val dcrId: String? = null,\n                          val vrDetails: VeevaChangeRequestDetails)\n\ndata class VeevaChangeRequestDetails (\n    val vrStatus: String? = null, // HUB codes\n    val vrStatusDetail: String? = null, // HUB codes\n    val veevaComment: String? = null,\n    val veevaHCPIds: List<String>? = null,\n    val veevaHCOIds: List<String>? = null)
\n


Triggers

Trigger action

Component

Action

Default time

IN Timer (cron)mdm-veeva-dcr-service: VeevaDCRRequestTrace.traceDCRs()get DCR responses from S3/SFTP directory, extract CSV files from ZIP file and publish events to kafka topic

every <T> hour

usually every 6h (traceConfiguration.schedulerConfig.fixedDelay) at specific time of day (traceConfiguration.schedulerConfig.initDelay)

OUT Events on Kafka Topic

mdm-veeva-dcr-service: VeevaDCRRequestTrace.traceDCRs()

$env-internal-veeva-dcr-change-events-in

VeevaDCREvent event published to topic to be consumed by DCR Service 2

every <T> hour

usually every 6h (traceConfiguration.schedulerConfig.fixedDelay) at specific time of day (traceConfiguration.schedulerConfig.initDelay)


Dependent components

Component

Usage

DCR Service 2Main component with flow implementation
Hub StoreDCR and Entities Cache 
" + }, + { + "title": "ETL Batches", + "pageID": "164470046", + "pageLink": "/display/GMDM/ETL+Batches", + "content": "

Description

The process is responsible for managing batch instances/stages and loading data received from the ETL channel into the MDM system. The Batch Service is a complex component: it contains predefined JOBS and a Batch Workflow configuration that uses the JOBS implementations; using asynchronous communication over Kafka topics, it updates data in the MDM system and gathers the acknowledgment events. The Mongo cache stores the BatchInstances with the corresponding stages and EntityProcessStatus objects that contain metadata about the loaded objects.


The diagram below presents an overview of the entire process. Detailed descriptions are available in the separate subpages.

Flow diagram

\"\"

Model diagram

\"\"


\"\"

Steps

Triggers

Described in separate sub-pages for each process.

Dependent components

ComponentUsage
Batch ServiceMain component with flow implementation
ManagerAsynchronous events processing
Hub StoreDatastore and cache



" + }, + { + "title": "ACK Collector", + "pageID": "164469774", + "pageLink": "/display/GMDM/ACK+Collector", + "content": "

Description

The flow processes the ACK response messages and updates the cache. Based on these responses, the Processing flow checks the cache status and blocks the workflow until all responses are received. This process updates the "status" attribute with the MDM system response and the "updateDateMDM" attribute with the corresponding update timestamp. 
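The cache update can be sketched as below. The field names mirror the description above ("status", "updateDateMDM"); the EntityProcessStatus shape shown here is a hypothetical simplification of the real Hub Store class.

```kotlin
import java.time.ZonedDateTime

// Hypothetical simplified cache record; the real EntityProcessStatus differs.
data class EntityProcessStatus(
    val entityId: String,
    val status: String? = null,
    val updateDateMDM: ZonedDateTime? = null
)

// Stamp the MDM ACK response status and its timestamp on the cached record.
fun applyAck(cached: EntityProcessStatus, ackStatus: String, ackTime: ZonedDateTime): EntityProcessStatus =
    cached.copy(status = ackStatus, updateDateMDM = ackTime)
```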

Flow diagram

\"\"

Steps

Triggers

Trigger actionComponentActionDefault time
IN Events incoming batch-service:AckProcessorupdate the cache based on the ACK responserealtime

Dependent components

ComponentUsage
Batch ServiceThe main component
ManagerAsync route with ACK responses
Hub StoreCache
" + }, + { + "title": "Batch Controller: creating and updating batch instance", + "pageID": "164469788", + "pageLink": "/display/GMDM/Batch+Controller%3A+creating+and+updating+batch+instance", + "content": "

Description

The batch controller is responsible for managing Batch Instances. The service allows the creation of a new batch instance for a specific Batch, the creation of a new Stage in the batch, and updating a stage with statistics. The Batch Controller component manages the batch instances and validates the requests. Only authorized users are allowed to manage specific batches or stages. Additionally, it is not possible to START multiple instances of the same batch at the same time. Once a batch is started, the Client should load the data and, at the end, complete the current batch instance. When a user creates a new batch instance, a new unique ID is assigned; the user has to use this ID in subsequent requests to update the workflow. By default, once the batch instance is created, all stages are initialized with status PENDING. The Batch controller also manages the dependent stages and marks the whole batch as COMPLETED at the end. 
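The controller rules above can be sketched as a small in-memory registry. This is a hypothetical sketch: the class name BatchInstanceRegistry is an assumption, and the real controller persists instances in the Hub Store rather than in memory.

```kotlin
import java.util.UUID

// Hypothetical in-memory sketch: one running instance per batch, a unique
// instance ID assigned on start, and all stages initialized as PENDING.
class BatchInstanceRegistry {
    private val running = mutableMapOf<String, String>() // batch name -> instance id

    fun start(batchName: String, stages: List<String>): Pair<String, Map<String, String>> {
        require(batchName !in running) { "Batch $batchName already has a running instance" }
        val instanceId = UUID.randomUUID().toString()
        running[batchName] = instanceId
        return instanceId to stages.associateWith { "PENDING" }
    }

    fun complete(batchName: String) {
        running.remove(batchName)
    }
}
```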

Flow diagram

\"\"

Steps

Triggers

Trigger actionComponentActionDefault time
API requestbatch-service.RestBatchControllerRoute

User initializes the new batch instance, updates the STAGE, saves the statistics, and completes the corresponding STAGE.

User is able to get batch instance details and wait for the load completion

user API request dependent, triggered by an external client

Dependent components

ComponentUsage
Batch ServiceThe main component that exposes the REST API
Hub StoreBatch Instances Cache
" + }, + { + "title": "Batches registry", + "pageID": "234695693", + "pageLink": "/display/GMDM/Batches+registry", + "content": "

There is a list of batches configured as of 01.02.2022.

ONEKEY

TenantCountrySource NameBatch NameStageDetails
EMEAAlgeriaONEKEYONEKEY_DZHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
TunisiaONEKEYONEKEY_TNHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
MoroccoONEKEYONEKEY_MAHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
GermanyONEKEYONEKEY_DEHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
France, AD, MCONEKEYONEKEY_FRHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
France (DOMTOM) = RE,MQ,GP,PF,YT,GF,PM,WF,MU,NCONEKEYONEKEY_PFHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
ItalyONEKEYONEKEY_ITHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
SpainONEKEYONEKEY_ESHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
Turkey ONEKEYONEKEY_TRHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
Denmark
(Plus Faroe Islands and Greenland)
ONEKEYONEKEY_DKHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
PortugalONEKEYONEKEY_PTHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
RussiaONEKEYONEKEY_RUHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
APACAustraliaONEKEYONEKEY_AUHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
New ZealandONEKEYONEKEY_NZHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
South KoreaONEKEYONEKEY_KRHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
AMERCanadaONEKEYONEKEY_CAHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
BrazilONEKEYONEKEY_BRHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
MexicoONEKEYONEKEY_MXHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
Argentina/UruguayONEKEYONEKEY_ARHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)

PFORCE_RX

TenantCountrySource NameBatch NameStageDetails
AMERBrazilPFORCERX_ODSPFORCERX_ODSHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
Mexico
Argentina/Uruguay
Canada
APACJapan PFORCERX_ODSPFORCERX_ODSHCPLoading
HCOLoading
RelationLoading
It will be an incremental file load; there is no need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
Australia /New Zealand
India
South Korea
EMEASaudi ArabiaPFORCERX_ODSPFORCERX_ODSHCPLoading
HCOLoading
RelationLoading
It will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)
Germany
France
Italy
Spain
Russia
Turkey 
Denmark
Portugal

GRV

Tenant | Country | Source Name | Batch Name | Stage
EMEA | GR, IT, FR, ES, RU, TR, SA, DK, GL, FO, PT | GRV | GRV | HCP: Loading
AMER | CA, BR, MX, AR | GRV | GRV | HCP: Loading
APAC | AU, NZ, IN, JP, KR | GRV | GRV | HCP: Loading

GCP

Tenant | Country | Source Name | Batch Name | Stage
EMEA | GR, IT, FR, ES, RU, TR, SA, DK, GL, FO, PT | GCP | GCP | HCP: Loading
AMER | CA, BR, MX, AR | GCP | GCP | HCP: Loading
APAC | AU, NZ, IN, JP, KR | GCP | GCP | HCP: Loading

ENGAGE

Tenant | Country | Source Name | Batch Name | Stage
AMER | CA | ENGAGE | ENGAGE | HCP: Loading, HCO: Loading, Relation: Loading
" + }, + { + "title": "Bulk Service: loading bulk data", + "pageID": "164469786", + "pageLink": "/display/GMDM/Bulk+Service%3A+loading+bulk+data", + "content": "

Description

The bulk service is responsible for loading bundled data, using the REST API as the input and Kafka stage topics as the output. This process is strictly connected to the Batch Controller's create-and-update batch instance flow, which means that the Client should first initialize a new batch instance and stage. Using API requests, data is then loaded into the next processing stages. 

Flow diagram

\"\"

Steps


Triggers

Trigger action | Component | Action | Default time
API request | batch-service.RestBulkControllerRoute | Clients send the data to the bulk service | user API request dependent, triggered by an external client

Dependent components

Component | Usage
Batch Service | The main component that exposes the REST API
Hub Store | Batch Instances Cache



" + }, + { + "title": "Clear Cache", + "pageID": "164469784", + "pageLink": "/display/GMDM/Clear+Cache", + "content": "

Description

This flow is used to clear the Mongo cache (removes records from batchEntityProcessStatus) for a specified batch name, object type, and entity type. An optional comma-separated list of countries allows filtering by country.

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
API Request | batch-service.RestBatchControllerRoute | External client sends a request to clear the cache | user API request dependent, triggered by an external client

Dependent components

Component | Usage
Batch Service | The main component that exposes the REST API
Hub Store | Batch entities/relations cache
" + }, + { + "title": "Clear Cache by croswalks", + "pageID": "282663410", + "pageLink": "/display/GMDM/Clear+Cache+by+croswalks", + "content": "

Description

This flow is used to clear the Mongo cache (removes records from batchEntityProcessStatus) for a specified batch name and sourceId type and/or value.

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
API Request | batch-service.RestBatchControllerRoute | External client sends a request to clear the cache | user API request dependent, triggered by an external client

Dependent components

Component | Usage
Batch Service | The main component that exposes the REST API
Hub Store | Batch entities/relations cache
" + }, + { + "title": "PATCH Operation", + "pageID": "355371021", + "pageLink": "/display/GMDM/PATCH+Operation", + "content": "

Description

Entity PATCH (UpdateHCP/UpdateHCO/UpdateMCO) operation differs slightly from the standard POST (CreateHCP/CreateHCO/CreateMCO) operation:

Algorithm

The PATCH operation logic consists of the following steps:

" + }, + { + "title": "Processing JOB", + "pageID": "164469780", + "pageLink": "/display/GMDM/Processing+JOB", + "content": "

Description

The flow checks the Cache using a poller that executes the query every <T> minutes. During this processing, the count decreases until it reaches 0. 

The following query is used to check the count of objects that were not yet delivered. The process ends when the query returns 0 objects, which means that an ACK has been received for each object and it is possible to go to the next dependent stage. 

"{'batchName': ?0 ,'sendDateMDM':{ $gt: ?1 }, '$or':[ {'updateDateMDM':{ $lt: ?1 } }, { 'updateDateMDM':{ $exists : false } } ] }"

Using a Mongo query it is possible to find which objects are still not processed. In that case, the user should provide batchName = "currently loading batch" and use the batch start date as the date parameter. 
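The poller's check can be sketched as a pymongo-style filter; a minimal sketch assuming the field names from the query above (the function name is illustrative):

```python
from datetime import datetime, timezone

def pending_objects_filter(batch_name, batch_start):
    """Build the Processing JOB filter: objects sent to MDM after the batch
    started (sendDateMDM > batchStart) but not yet acknowledged
    (updateDateMDM older than batchStart, or absent)."""
    return {
        "batchName": batch_name,
        "sendDateMDM": {"$gt": batch_start},
        "$or": [
            {"updateDateMDM": {"$lt": batch_start}},
            {"updateDateMDM": {"$exists": False}},
        ],
    }

# A count of 0 for this filter means every object received its ACK and the
# next dependent stage can start.
```

The same filter can be run manually (e.g. with count_documents) to inspect which objects of a running batch are still awaiting an ACK.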

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
The previous dependent JOB is completed; triggered by the Scheduler mechanism | batch-service:ProcessingJob | Queries Mongo and checks the number of objects that are not yet processed | every 60 seconds

Dependent components

Component | Usage
Batch Service | The main component with the Processing JOB implementation
Hub Store | The cache that stores all information about the loaded objects
" + }, + { + "title": "Sending JOB", + "pageID": "164469778", + "pageLink": "/display/GMDM/Sending+JOB", + "content": "

Description

The JOB is responsible for sending data from the Stage Kafka topics to the manager component. During this process the data is checked, and a checksum is calculated and compared to the previous state, so only the changes are applied to MDM. The Cache (Batch data store) contains multiple metadata attributes, such as sourceIngestionDate (the time when this entity was most recently shared by the Client) and the ACK response status (create/update/failed). 

The checksum calculation is skipped for "failed" objects. This means there is no need to clear the cache for failed objects; the user just needs to reload the data. 

The JOB is triggered once the previous dependent job is completed or started. There are two modes of dependency between the Loading STAGE and the Sending STAGE.

The purpose of the hard dependency is the case where the user has to load HCP/HCO and Relation objects: the sending of relations has to start only after the HCP and HCO load is COMPLETED. 

The process finishes once the Batch stage queue has been empty for 1 minute (no new events in the queue).

The following query is used to retrieve a processing object from the cache, where batchName is the corresponding Batch Instance and sourceId holds the information about the loaded source crosswalk.

{'batchName': ?0, 'sourceId.type': ?1, 'sourceId.value': ?2, 'sourceId.sourceTable': ?3 }
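The change-detection step of the Sending JOB can be sketched as follows; the cache field names (`status`, `checksum`) and MD5 over canonical JSON are illustrative assumptions:

```python
import hashlib
import json

def should_send(record, cached):
    """Decide whether the Sending JOB forwards a record to MDM.

    cached is the Batch data store document for this crosswalk, or None.
    Failed objects are always resent (the checksum comparison is skipped),
    so a simple reload fixes them without clearing the cache.
    """
    checksum = hashlib.md5(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    if cached is None:
        return True                        # never seen before: create
    if cached.get("status") == "failed":
        return True                        # checksum check skipped for failures
    return cached.get("checksum") != checksum  # send only real changes
```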

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
The previous dependent JOB is completed; triggered by the Scheduler mechanism | batch-service:SendingJob | Gets entries from the stage topic, saves data in Mongo, and creates/updates profiles using a Kafka producer (asynchronous channel) | once the dependent JOB is completed

Dependent components

Component | Usage
Batch Service | The main component with the Sending JOB implementation
Hub Store | The cache that stores all information about the loaded objects
" + }, + { + "title": "SoftDeleting JOB", + "pageID": "164469776", + "pageLink": "/display/GMDM/SoftDeleting+JOB", + "content": "

Description

This JOB is responsible for the soft-delete process for full file loads. Batches configured with this JOB always have to deliver the full set of data. The process is triggered at the end of the workflow and soft-deletes objects in the MDM system. 

The following query is used to check how many objects are going to be removed, and also to fetch those objects and send the soft-delete requests. 

{'batchName': ?0, 'deleted': false, 'objectType': 'ENTITY OR RELATION', 'sourceIngestionDate':{ $lt: ?1 } }

Once an object is soft-deleted, its "deleted" flag is changed to "true".
Using a Mongo query it is possible to check which objects were soft-deleted by this process. In that case, the Administrator should provide batchName = "currently loading batch" and deleted = "true".
The process removes all objects that were not delivered in the current load, i.e. whose "sourceIngestionDate" is lower than the "batchStartDate".
It may happen that the number of objects to soft-delete exceeds the limit; in that case the process is aborted and the Administrator should verify which objects are blocked and notify the client.
The production limit is a maximum of 10000 objects in one load.
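The candidate selection and the safety limit can be sketched as below; field names follow the query above, function names are illustrative:

```python
MAX_SOFT_DELETES = 10_000  # production limit per load

def soft_delete_candidates_filter(batch_name, object_type, batch_start):
    """Objects not re-delivered in the current full load:
    sourceIngestionDate < batchStartDate and not yet deleted."""
    return {
        "batchName": batch_name,
        "deleted": False,
        "objectType": object_type,        # 'ENTITY' or 'RELATION'
        "sourceIngestionDate": {"$lt": batch_start},
    }

def within_delete_limit(candidate_count, limit=MAX_SOFT_DELETES):
    """The job aborts when the candidate count exceeds the limit."""
    return candidate_count <= limit
```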

Flow diagram

\"\"

Steps 

2023-07 Update: Set Soft-Delete Limit by Country

DeletingJob now allows additional configuration:

deletingJob:
  "TestDeletesPerCountryBatch":
    "EntitiesUnseenDeletion":
      maxDeletesLimit: 20
      queryBatchSize: 5
      reltioRequestTopic: "local-internal-async-all-testbatch"
      reltioResponseTopic: "local-internal-async-all-testbatch-ack"
      maxDeletesLimitPerCountry:
        enabled: true
        overrides:
          CA: 10
          BR: 30

If maxDeletesLimitPerCountry.enabled == true (default false):

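Resolving the effective limit from this configuration can be sketched as follows (function name is illustrative):

```python
def effective_delete_limit(country, default_limit, per_country_cfg):
    """maxDeletesLimitPerCountry: use the per-country override when the
    feature is enabled, otherwise fall back to maxDeletesLimit."""
    if not per_country_cfg.get("enabled", False):   # default: false
        return default_limit
    return per_country_cfg.get("overrides", {}).get(country, default_limit)
```

With the sample configuration above, CA resolves to 10, BR to 30, and any other country to the default of 20.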

Triggers

Trigger actionComponentActionDefault time
The previous dependent JOB is completed. Triggered by the Scheduler mechanismbatch-service:AbstractDeletingJob (DeletingJob/DeletingRelationJob)Triggers mongo and soft-delete profiles using Kafka producer (asynchronous channel)once the dependence JOB is completed

Dependent components

ComponentUsage
Batch ServiceThe main component with the SoftDeleting JOB implementation
ManagerAsynchronous channel 
Hub StoreThe cache that stores all information about the loaded objects
" + }, + { + "title": "Event filtering and routing rules", + "pageID": "164470034", + "pageLink": "/display/GMDM/Event+filtering+and+routing+rules", + "content": "

At various stages of processing, events can be filtered based on configurable criteria. This lessens the load on the Hub and client systems, and simplifies processing on the client side by avoiding event types that are of no interest to the target application. There are three places where event filtering is applied:

Event type filtering

Each event received from the SQS queue has a "type" attribute. Reltio Subscriber has an "allowedEventTypes" configuration parameter (in the application.yml config file) that lists the event types processed by the application. Currently, the complete list of supported types is:

An event that does not match this list is ignored, and a "Message skipped" entry is added to the log file.
Please keep in mind that while it is easy to remove an event type from this list in order to ignore it, adding a new event type is a whole different story – it might not be possible without changes to the application source code.
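The type check itself is trivial; a minimal sketch, where the event type names shown are illustrative placeholders for the configured list:

```python
# Illustrative subset; the authoritative list is the allowedEventTypes
# parameter in Reltio Subscriber's application.yml.
ALLOWED_EVENT_TYPES = {"ENTITY_CREATED", "ENTITY_CHANGED", "ENTITIES_MERGED"}

def accept_event(event, allowed=ALLOWED_EVENT_TYPES):
    """Process only events whose "type" attribute is configured;
    everything else is skipped (and logged as "Message skipped")."""
    return event.get("type") in allowed
```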

Duplicate detection (Nucleus)

There is an in-memory cache that stores the entityUri and the type of the event previously sent for that uri. This allows duplicate detection. The cache is cleared after successful processing of the whole zip file.
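A minimal sketch of that cache (class and method names are illustrative):

```python
class DuplicateDetector:
    """In-memory map of entityUri -> type of the event last sent for that uri."""

    def __init__(self):
        self._last_sent = {}

    def is_duplicate(self, entity_uri, event_type):
        """True when the same event type was already sent for this uri."""
        if self._last_sent.get(entity_uri) == event_type:
            return True
        self._last_sent[entity_uri] = event_type
        return False

    def clear(self):
        """Called after the whole zip file is processed successfully."""
        self._last_sent.clear()
```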

Entity data-based filtering

The Event Publisher component receives events from an internal Kafka topic. After fetching the current Entity state from Reltio (via MDM Integration Gateway), it imposes a few additional filtering rules based on the fetched data. Those rules are:

  1. Filtering based on the Country that the entity belongs to. This is based on the value of the ISO country code, extracted from the Country attribute of an entity. The list of allowed codes is maintained as the "activeCountries" parameter in the application.yml config file.
  2. Filtering based on Entity type. This is controlled by the "allowedEntityTypes" configuration parameter, which currently lists two values: "HCP" and "HCO". Those values are matched against the "entityType" attribute of the Entity (the prefix "configuration/entityTypes/" is added automatically, so it does not need to be included in the configuration file).
  3. Filtering out events that have an empty "targetEntity" attribute – such events are considered outdated, plus they lack some mandatory information that would normally be extracted from targetEntity, such as the originating country and source system. They are filtered out because the Hub would not be able to process them correctly anyway.
  4. Filtering out events that have a value mismatch between the "entitiesURIs" attribute of an event and the "uri" attribute of targetEntity – for all event types except HCP_LOST_MERGE and HCO_LOST_MERGE. A uri mismatch may arise when Event Publisher is processing events with significant delay (e.g. due to downtime, or when reprocessing events) – Event Publisher might be processing an HCP_CHANGED (HCO_CHANGED) event for an Entity that was merged with another Entity since then, so the HCP_CHANGED event is considered outdated, and an HCP_LOST_MERGE event is expected for the same Entity.

This filter is controlled by the eventRouter.filterMismatchedURIs configuration parameter, which takes Boolean values (yes/no, true/false).

  5. Filtering out events based on timestamps. When an HCP_CHANGED or HCO_CHANGED event arrives with an "eventTime" timestamp older than the "updatedTime" of the targetEntity, it is assumed that another change for the same entity has already happened and that another event is waiting in the queue to be processed. By ignoring the current event, Event Publisher ensures that only the most recent change is forwarded to client systems.

This filter is controlled by the eventRouter.filterOutdatedChanges configuration parameter, which can take Boolean values (yes/no, true/false).
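The five rules combined can be sketched as follows; the event and entity field paths are illustrative simplifications of the real Reltio payload:

```python
LOST_MERGE_TYPES = {"HCP_LOST_MERGE", "HCO_LOST_MERGE"}

def passes_entity_filters(event, config):
    """Sketch of the five Event Publisher filtering rules."""
    target = event.get("targetEntity")
    if not target:
        return False  # rule 3: outdated event, mandatory data missing
    # rule 1: ISO country code must be listed in activeCountries
    country = target["attributes"]["Country"][0]["lookupCode"]
    if country not in config["activeCountries"]:
        return False
    # rule 2: entity type must be allowed (prefix added automatically)
    if target["type"].rsplit("/", 1)[-1] not in config["allowedEntityTypes"]:
        return False
    # rule 4: URI mismatch means the entity was merged since the event was emitted
    if (config.get("filterMismatchedURIs")
            and event["type"] not in LOST_MERGE_TYPES
            and target["uri"] not in event.get("entitiesURIs", [])):
        return False
    # rule 5: a newer change for the same entity is already queued
    if (config.get("filterOutdatedChanges")
            and event["type"] in {"HCP_CHANGED", "HCO_CHANGED"}
            and event["eventTime"] < target["updatedTime"]):
        return False
    return True
```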

Event routing

Publishing Hub supports multiple client systems subscribing to Entity change events. Since those clients might be interested in different subsets of events, the event routing mechanism was created to allow configurable, content-based routing of the events to specific client systems. The routing mechanics consist of three main parts:

  1. Kafka topics – each client system can have one or more dedicated topics where events of interest for that system are published
  2. Metadata extraction – as one of the processing steps, some pieces of information are extracted from the Event and related Entity and put into the processing context (as headers), so they can be easily accessed.
  3. Configurable routing rules – Event Publisher's configuration file contains a whole section for defining rules, which makes use of the Groovy scripting language and the metadata.

Available metadata is described in the table below.

Table 10. Routing headers

Header | Type | Values | Source Field | Description
eventType | String | full, simple | none | Type of an event. "full" means Event Sourcing mode, with full targetEntity data. "simple" is just an event with basic data, without targetEntity
eventSubtype | String | HCP_CREATED, HCP_CHANGED, … | event.eventType | The full list of available event subtypes is specified in the MDM Publishing Hub Streaming Interface document.
country | String | CN, FR | event.targetEntity.attributes.Country.lookupCode | Country of origin for the Entity
eventSource | Array of String | ["OK", "GRV"] | event.targetEntity.crosswalks.type | Array containing names of all the source systems as defined by Reltio crosswalks
mdmSource | String | "RELTIO", "NUCLEUS" | None | System of origin for the Entity.
selfMerge | Boolean | true, false | None | Is the event a "self-merge"? Enables filtering out merges on the fly.


Routing rules configuration is found in the eventRouter.routingRules section of the application.yml configuration file. Here's an example of such a rule:
\"\"
Elements of this configuration are described below.

Selector syntax can include, among others, the elements listed in the table below.

Table 11. Selector syntax

Element | Example | Description
comparison operators | ==, !=, <, > | Standard Groovy syntax
boolean operators | && | 
set operators | in, intersect | 
Message headers | exchange.in.headers.country | See Table 10 for the list of available headers. "exchange.in.headers" is the standard prefix that must be used to access them


The full syntax reference can be found in the Apache Camel documentation: http://camel.apache.org/groovy.html .
The limitation here is that the whole snippet should return a single boolean value.
The destination name can be literal, but it can also reference any of the message headers from Table 10, with the following syntax:
\"\"

" + }, + { + "title": "FLEX COV Flows", + "pageID": "172301002", + "pageLink": "/display/GMDM/FLEX+COV+Flows", + "content": "" + }, + { + "title": "Address rank callback", + "pageID": "164470175", + "pageLink": "/display/GMDM/Address+rank+callback", + "content": "

The Address Rank Callback is used only in the FLEX COV environment to update the Rank attribute on Addresses. This process sends the callback to Reltio only when the specific source exists on the profile. The Rank is then used by the Business Team or Data Stewards in Reltio, or by the downstream FLEX system. 

The Address Rank Callback is triggered whenever the getEntity operation is invoked. The purpose of this process is to synchronize Reltio with the correct address rank sort order.

Currently the functionality is configured only for the US Trade instance. Below is a diagram outlining the whole process.

\"\" 
Process steps description:

  1. Event Publisher receives events from the internal Kafka topic and calls the MDM Gateway API to retrieve the latest state of the Entity from Reltio.
  2. The Event Publisher internal user is authorized in MDM Manager, which checks the source, country, and appropriate access roles. MDM Manager invokes the get entity operation in Reltio. The returned JSON is then passed to the Address Rank sort process, so the client will always get the entity with sorted address rank order, but only when this feature is activated in the configuration.
  3. When the Address Rank Sort process is activated, the addresses in the entity are sorted. In this case the "AddressRank" and "BestRecord" attributes are set. When AddressRank is equal to "1", the BestRecord attribute will always have the value "1".
  4. When the Address Rank Callback process is activated, the relation operation is invoked in Reltio. The Relation Request object contains a Relation object for each sorted address. Each Relation is created with the "AddrCalc" source, where the start object is the current entity id and the end object is the id of the Location entity. In this way a relation between the entity and the Location is created with additional rank attributes. There is no need to send multiple callback requests every time the get entity operation is invoked, so the Callback operation is invoked only when the address rank sort order has changed.
  5. Entity data is stored in the MongoDB NoSQL database, for later use in Simple mode (publication of events that contain only the entityURI and require the client to retrieve the full Entity via REST API).
  6. For every Reltio event, two Publishing Hub events are created: one in Simple mode and one in Event Sourcing (full) mode. Based on metadata and the Routing Rules provided as part of the application configuration, the list of target destinations for those events is created. The event is sent to all matched destinations.
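The sort-and-rank step (3) can be sketched as below; the sort key (`score`) is an illustrative placeholder for the real ranking criteria:

```python
def rank_addresses(addresses):
    """Sort addresses and assign AddressRank; BestRecord is "1" only for rank 1."""
    ranked = sorted(addresses, key=lambda a: a.get("score", 0), reverse=True)
    for position, address in enumerate(ranked, start=1):
        address["AddressRank"] = str(position)
        address["BestRecord"] = "1" if position == 1 else "0"
    return ranked
```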


" + }, + { + "title": "DEA Flow", + "pageID": "164470009", + "pageLink": "/display/GMDM/DEA+Flow", + "content": "

This flow processes DEA files published by GIS Team to S3 Bucket. Flow steps are presented on the sequence diagram below.

\"\" 
Process steps description:

  1. DEA files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for DEA files.
  2. The Batch Channel component monitors the S3 location and processes the files uploaded to it.
  3. The folder structure for DEA is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.
  4. Files downloaded from S3 are processed in streaming mode. The processing of a file can be started before it is fully downloaded. This solution speeds up processing of big files, because there is no need to wait until the file is fully downloaded.
  5. The DEA file load Start Time is saved for the specific load – as loadStartDate.
  6. Each line in the file is parsed in the Batch Channel component and mapped to a dedicated DEA object. The DEA file is saved in Fixed Width Data Format; one DEA record is saved in one line of the file, so there is no need to use a record aggregator. Each line has a specified length, and each column has a specified start and end position in the row.
  7. A BatchContext is downloaded from MongoDB for each DEA record. This context contains the DEA crosswalk ID, the line from the file, the MD5 checksum, the last modification date, and the delete flag. When the BatchContext is empty, it means that this DEA record is being created for the first time – such an object is sent to the Kafka Topic. When the BatchContext is not empty, the MD5 from the source DEA file is compared to the MD5 from the BatchContext (Mongo). If the MD5 checksums are equal, the object is skipped; otherwise it is sent to the Kafka Topic. For each modified object, lastModificationDate is updated in Mongo – this is required to detect deleted records in the final step.
  8. Only when the record's MD5 checksum has changed is the DEA record published to the Kafka topic dedicated to DEA record events. These records are then processed by the MDM Manager component. The first step is an authorization check to verify that the event was produced by the Batch Channel component with the appropriate source name, country, and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section.
  9. The TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. Additionally, each log is saved in MongoDB to create a full report of the current load and to correlate record flow between the Batch Channel and MDM Manager components.
  10. After the DEA file is successfully processed, the DEA delete record processor is started. From the Mongo database, each record with lastModificationDate less than loadStartDate and the delete flag equal to false is downloaded. When the result count is greater than 1000, the delete record processor is stopped – this is a protective feature in case of a wrong file upload, which could otherwise cause multiple unexpected DEA profile deletions. Otherwise, when the result count is less than 1000, each record from MongoDB is parsed and sent to the Kafka Topic with the deleteDate attribute on the crosswalk. These records are then processed by the MDM Manager component. The first step is an authorization check to verify that the event was produced by the Batch Channel component with the appropriate source name, country, and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section. Profiles created with the deleteDate attribute on the crosswalk are soft-deleted in Reltio.
  11. Finally, the DEA file is moved to the archive subtree in the S3 bucket.
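Steps 6 and 7 (fixed-width parsing and MD5 change detection) can be sketched as below; the column names and start/end offsets are illustrative, the real layout comes from the DEA file specification:

```python
import hashlib

# Illustrative fixed-width layout: {column: (start, end)}
DEA_LAYOUT = {"deaNumber": (0, 9), "name": (9, 29), "state": (29, 31)}

def parse_dea_line(line):
    """Slice one fixed-width line into named columns
    (one record per line, so no aggregator is needed)."""
    return {name: line[start:end].strip()
            for name, (start, end) in DEA_LAYOUT.items()}

def needs_publish(line, cached_md5):
    """New record (no BatchContext, cached_md5 is None) or changed MD5
    -> send to Kafka; unchanged -> skip."""
    return cached_md5 != hashlib.md5(line.encode()).hexdigest()
```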


" + }, + { + "title": "FLEX Flow", + "pageID": "164470035", + "pageLink": "/display/GMDM/FLEX+Flow", + "content": "

This flow processes FLEX files published by Flex Team to S3 Bucket. Flow steps are presented on the sequence diagram below.

\"\"
Process steps description:

  1. FLEX files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for FLEX files.
  2. The Batch Channel component monitors the S3 location and processes the files uploaded to it.
  3. The folder structure for FLEX is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.
  4. Files downloaded from S3 are processed in streaming mode. The processing of a file can be started before it is fully downloaded. This solution speeds up processing of big files, because there is no need to wait until the file is fully downloaded.
  5. Each line in the file is parsed in the Batch Channel component and mapped to a dedicated FLEX object. The FLEX file is saved in CSV Data Format; one FLEX record is saved in one line of the file, so there is no need to use a record aggregator. The first line in the file is always the header line with column names; each following line is a FLEX record with "," (comma character) as the delimiter. The most complex part of FLEX mapping is Identifiers mapping. When a Flex record contains the "GROUP_KEY" ("Address Key") attribute, it means that the Identifiers saved in "Other Active IDs" will be added to the FlexID.Identifiers nested attributes. "Other Active IDs" is a one-line string with key-value pairs separated by "," (comma character), with ":" (colon character) as the key-value delimiter. Additionally, for each type of customer, the Flex identifier is always saved in the FlexID section.
  6. The FLEX record is published to the Kafka topic dedicated to FLEX record events. These records are then processed by the MDM Manager component. The first step is an authorization check to verify that the event was produced by the Batch Channel component with the appropriate source name, country, and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section.
  7. The TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. Additionally, each log is saved in MongoDB to create a full report of the current load and to correlate record flow between the Batch Channel and MDM Manager components.
  8. After the FLEX file is successfully processed, it is moved to the archive subtree in the S3 bucket.
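The "Other Active IDs" parsing described in step 5 can be sketched as follows (the sample identifier types in the test are illustrative):

```python
def parse_other_active_ids(raw):
    """Split the one-line "Other Active IDs" string: pairs separated by ","
    (comma), with ":" (colon) as the key-value delimiter."""
    return {
        key.strip(): value.strip()
        for key, _, value in (item.partition(":")
                              for item in raw.split(",") if item.strip())
    }
```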



" + }, + { + "title": "HIN Flow", + "pageID": "164469995", + "pageLink": "/display/GMDM/HIN+Flow", + "content": "

This flow processes HIN files published by HIN Team to S3 Bucket. Flow steps are presented on the sequence diagram below.

\"\"
Process steps description:

  1. HIN files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for HIN files.
  2. The Batch Channel component monitors the S3 location and processes the files uploaded to it.
  3. The folder structure for HIN is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.
  4. Files downloaded from S3 are processed in streaming mode. The processing of a file can be started before it is fully downloaded. This solution speeds up processing of big files, because there is no need to wait until the file is fully downloaded.
  5. Each line in the file is parsed in the Batch Channel component and mapped to a dedicated HIN object. The HIN file is saved in Fixed Width Data Format; one HIN record is saved in one line of the file, so there is no need to use a record aggregator. Each line has a specified length, and each column has a specified start and end position in the row.
  6. The HIN record is published to the Kafka topic dedicated to HIN record events. These records are then processed by the MDM Manager component. The first step is an authorization check to verify that the event was produced by the Batch Channel component with the appropriate source name, country, and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section.
  7. The TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. Additionally, each log is saved in MongoDB to create a full report of the current load and to correlate record flow between the Batch Channel and MDM Manager components.
  8. After the HIN file is successfully processed, it is moved to the archive subtree in the S3 bucket.


" + }, + { + "title": "SAP Flow", + "pageID": "164469997", + "pageLink": "/display/GMDM/SAP+Flow", + "content": "

This flow processes SAP files published by GIS system to S3 Bucket. Flow steps are presented on the sequence diagram below.

\"\"
Process steps description:

  1. SAP files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for SAP files.
  2. The Batch Channel component monitors the S3 location and processes the files uploaded to it.
    Important note: To facilitate fault tolerance, the Batch Channel component will be deployed in multiple instances on different machines. However, to avoid conflicts, such as processing the same file twice, only one instance is allowed to do the processing at any given time. This is implemented via the standard Apache Camel mechanism of Route Policy, which is backed by the Zookeeper distributed key-value store. When a new file is picked up by a Batch Channel instance, the first processing step is to create a key in Zookeeper, acting as a lock. Only one instance will succeed in creating the key, therefore only one instance will be allowed to proceed.
  3. The folder structure for SAP is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.
  4. Files downloaded from S3 are processed in streaming mode. The processing of a file can be started before it is fully downloaded. This solution speeds up processing of big files, because there is no need to wait until the file is fully downloaded.
  5. Each line in the file is parsed in the Batch Channel component and mapped to a dedicated SAP object. In the case of SAP files, where one SAP record is saved across multiple lines in the file, the SAPRecordAggregator has to be used. This class reads each line of the SAP file and aggregates the lines to create a full SAP record. Each line starts with a Record Type character; the separator for SAP is "~" (tilde character). Only lines that start with one of the following characters are parsed into the full SAP record:

    • 1 – Header
    • 4 – Sales Organization
    • E – License
    • C – Notes
    When the header line is parsed, the Account Type attribute is checked. Only SAP records with the "Z031" type are filtered through and posted to Reltio.

  6. A BatchContext is downloaded from MongoDB for each SAP record. This context contains the Start Date for the SAP and 340B Identifiers. When the BatchContext is empty, the current timestamp is saved for each of the Identifiers; otherwise the start date for the identifiers is replaced by the one saved in the Mongo cache. This Start Date must always be overwritten with the initial dates from the Mongo cache.
  7. The aggregated SAP record is published to the Kafka topic dedicated to SAP record events. These records are then processed by the MDM Manager component. The first step is an authorization check to verify that the event was produced by the Batch Channel component with the appropriate source name, country, and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO POST section.
  8. The TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. Additionally, each log is saved in MongoDB to create a full report of the current load and to correlate record flow between the Batch Channel and MDM Manager components.
  9. After the SAP file is successfully processed, it is moved to the archive subtree in the S3 bucket.
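The aggregation described in step 5 can be sketched as below; the field positions in the test data (account type as the second field of the header line) are illustrative assumptions about the SAP layout:

```python
PARSED_RECORD_TYPES = {"1", "4", "E", "C"}  # Header, Sales Organization, License, Notes

def aggregate_sap_records(lines):
    """Group "~"-separated lines into full SAP records.

    Each header ("1") line starts a new record; lines with other record
    type characters are ignored.
    """
    record = []
    for line in lines:
        fields = line.split("~")
        if fields[0] not in PARSED_RECORD_TYPES:
            continue  # record types outside the list are skipped
        if fields[0] == "1" and record:
            yield record  # a new header closes the previous record
            record = []
        record.append(fields)
    if record:
        yield record
```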




" + }, + { + "title": "US overview", + "pageID": "164470019", + "pageLink": "/display/GMDM/US+overview", + "content": "

\"\"

" + }, + { + "title": "Generic Batch", + "pageID": "164469994", + "pageLink": "/display/GMDM/Generic+Batch", + "content": "

The generic batch offers the functionality of configuring processes for loading HCP/HCO data from text files (CSV) into MDM.
The loading processes are defined in the configuration, without the need for changes in the implementation.

Description of the process


\"\"


Definition of single data flow 

The configuration (definition) of each data flow contains:


Currently defined data flows:



Flow nameCountrySource systemInput files (with names required after preprocessing stage)Detailed columns to entity attribute mapping file

TH HCP

THCICR
  • hcpEntities

fileNamePattern: '(TH_Contact_In)+(\\.(?i)(txt))$'

  • hcpAddresses

fileNamePattern: '(TH_Contact_Address_In_JOINED)+(\\.(?i)(txt))$'

  • hcpSpecialties

fileNamePattern: '(TH_Contact_Speciality_In)+(\\.(?i)(txt))$'

mdm-gateway\\batch-channel\\src\\main\\resources\\flows.yml

SA HCP

SALocalMDM
  • hcpEntities

fileNamePattern: '(KSA_HCPs)+(\\.(?i)(csv))$'

mdm-gateway\\batch-channel\\src\\main\\resources\\flows.yml
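The fileNamePattern values above are regular expressions matched against incoming file names. A small sketch of equivalent matching in Python (the mid-pattern `(?i)` inline flag is replaced by a case-insensitive compile flag, since inline flags not at the start of a pattern are rejected by recent Python versions):

```python
import re

# Case-insensitive equivalents of the TH HCP patterns above.
HCP_ENTITIES = re.compile(r"(TH_Contact_In)+(\.(txt))$", re.IGNORECASE)
HCP_ADDRESSES = re.compile(r"(TH_Contact_Address_In_JOINED)+(\.(txt))$", re.IGNORECASE)

def matches(pattern, file_name):
    """Return True when the file name satisfies the flow's pattern."""
    return pattern.search(file_name) is not None
```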





" + }, + { + "title": "Get Entity", + "pageID": "164470021", + "pageLink": "/display/GMDM/Get+Entity", + "content": "

Description

The getEntity operation of MDM Manager fetches the current state of the OV from the MongoDB store.

The detailed process flow is shown below.

Flow diagram

Get Entity


\"\"


Steps

  1. Client sends HTTP request to MDM Manager endpoint.
  2. Kong Gateway receives requests and handles authentication.
  3. If the authentication succeeds, the request is forwarded to MDM Manager component.
  4. MDM Manager checks user permissions to call getEntity operation and the correctness of the request.
  5. If user's permissions are correct, MDM Manager proceeds with searching for the specified entity by id.
  6. MDM Manager checks user profile configuration for getEntity operation to determine whether to return results based on MongoDB state or call Reltio directly.
  7. For clients configured to use MongoDB – if the entity is found, its status is checked. For entities with LOST_MERGE status, the parentEntityId attribute is used to fetch and return the parent entity instead. This is in line with default Reltio behavior, since MDM Manager is supposed to mirror Reltio.
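The lookup in step 7 can be sketched as follows, assuming a simple id-keyed store and the status/parentEntityId fields described above:

```python
def get_entity(entity_id, store):
    """Fetch an entity from the MongoDB-backed store. LOST_MERGE entities
    are resolved to their surviving parent, mirroring default Reltio
    behaviour."""
    entity = store.get(entity_id)
    if entity is None:
        return None
    if entity.get("status") == "LOST_MERGE":
        return store.get(entity["parentEntityId"])
    return entity
```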


Triggers

Trigger actionComponentActionDefault time
REST callManager: GET /entity/{entityId}get specific objects from MDM systemAPI synchronous requests - realtime

Dependent components

ComponentUsage
Managerget Entities in MDM systems









" + }, + { + "title": "GRV & GCP events processing", + "pageID": "164470032", + "pageLink": "/pages/viewpage.action?pageId=164470032", + "content": "


Contacts

VendorContact
MAP/DEG API supportMatej.Dolanc@COMPANY.com


This flow processes events from the GRV and GCP systems distributed through Event Hub. Processing is split into three stages. Since each stage is implemented as a separate Apache Camel route and separated from the other stages by a persistent message store (Kafka), each stage can be turned on/off separately using the Admin Console.

SQS subscription

The first processing stage receives data published by Event Hub to Amazon SQS queues, as shown in the diagram below.


\"\"

Figure 5. First processing stage


Process steps description:

  1. Data changes in GRV and GCP are captured by Event Hub and distributed via queues to MAP Channel components using SQS queues with names:
    1. eh-out-reltio-gcp-update-<env_code>
    2. eh-out-reltio-gcp-batch-update-<env_code>
    3. eh-out-reltio-grv-update-<env_code>
  2. Events pulled from SQS queue are published to Kafka topic as a way of persisting them (allowing reprocessing) and to do event prioritizing and control throughput to Reltio. The following topics are used:
    1. <env_code>-gw-internal-gcp-events-raw
    2. <env_code>-gw-internal-grv-events-raw
  3. To ensure correct ordering of messages in Kafka, there is a custom message key generated. It is a concatenation of market code and unique Contact/User id.
  4. Once the message is published to Kafka, it is confirmed in SQS and deleted from the queue.
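The ordering guarantee in step 3 follows from Kafka's key-based partitioning: all events with the same key land on the same partition. A sketch (the separator in the key is an assumption; the hash below only illustrates key-to-partition stickiness, not Kafka's actual partitioner):

```python
import hashlib

def message_key(market_code: str, record_id: str) -> str:
    """Custom Kafka message key: a concatenation of the market code and
    the unique Contact/User id (separator is an assumption)."""
    return f"{market_code}:{record_id}"

def partition_for(key: str, partitions: int) -> int:
    # A key-hash partitioner: events sharing a key always land on the
    # same partition, which preserves per-record ordering.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % partitions
```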

Enrichment with DEG data


\"\"

Figure 6. Second processing stage

The second processing stage is focused on getting data from the DEG system. The control flow is presented in the figure above.


Process steps description:

  1. MAPChannel receives events from Kafka topic on which they were published in previous stage.
  2. MAPChannel filters events based on country activation criteria – events coming from not activated countries are skipped. A list of active countries is controlled by configuration parameter, separately for each source (GRV, GCP);
  3. Next, MapChannel calls DEG REST services (INT2.1 or INT 2.2 depending on whether it is a GRV or GCP event) to get detailed information about changed record. DEG always returns current state of GRV and GCP records.
  4. Data from DEG is published to Kafka topic (again, as a way of persisting them and separating processing stages). The topics used are:
    1. <env_code>-gw-internal-gcp-events-deg
    2. <env_code>-gw-internal-grv-events-deg
  5. Again, a custom message key (a concatenation of the market code and the unique Contact/User id) is used to ensure correct ordering of messages in Kafka.
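The country activation filter in step 2 can be sketched as below (the country sets are hypothetical; in reality they come from configuration parameters, one per source):

```python
# Hypothetical per-source activation lists; the real values come from
# MAP Channel configuration parameters, one per source (GRV, GCP).
ACTIVE_COUNTRIES = {
    "GRV": {"US", "GB", "DE"},
    "GCP": {"DE", "FR"},
}

def is_active(source: str, country: str) -> bool:
    """Events from countries not activated for a source are skipped."""
    return country in ACTIVE_COUNTRIES.get(source, set())
```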

Creating HCP entities

The last processing stage maps data to the Reltio format and calls the MDM Gateway API to create HCP entities in Reltio. A process overview is shown below.


\"\"

Figure 7. Third processing stage



Process steps description:

  1. MAPChannel receives events from Kafka topic on which they were published in previous stage.
  2. MAPChannel filters events based on country activation criteria; events coming from countries that are not activated are skipped. The list of active countries is controlled by a configuration parameter, separately for each source (GRV, GCP) – this is exactly the same parameter as in the previous stage.
  3. MapChannel maps data from GCP/GRV to HCP:
    1. EMEA mapping
    2. GLOBAL mapping
  4. Validation status of mapped HCP is checked – if it matches a configurable list of inactive statuses, then deleteCrosswalk operation is called on MDM Manager. As a result entity data originating from GCP/GRV is deleted from Reltio.
  5. Otherwise, Map Channel calls REST operation POST /hcp on MDM Manager (INT4.1) to create or replace HCP profile in Reltio. MDM Manager handles complexity of the update process in Reltio.
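Steps 4–5 amount to a routing decision on the mapped HCP's validation status; a sketch (the inactive-status set below is a hypothetical stand-in for the configurable list):

```python
# Hypothetical stand-in for the configurable list of inactive statuses.
INACTIVE_STATUSES = {"INACTIVE", "RETIRED"}

def route_event(hcp: dict) -> str:
    """Inactive records trigger deleteCrosswalk (removing GCP/GRV data
    from Reltio); everything else is created/replaced via POST /hcp."""
    if hcp.get("validationStatus") in INACTIVE_STATUSES:
        return "deleteCrosswalk"
    return "POST /hcp"
```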

Processing events from multiple sources and prioritization

As mentioned in previous sections, there are three different SQS queues that are populated with events by Event Hub. Each of them is processed by a separate Camel route, which allows one queue to be prioritized over the others. This can be accomplished by altering the consumer configuration in the application.yml file. The relevant section of that file is shown below.


\"\"


Queue eh-out-reltio-gcp-batch-update-dev has 15 consumers (and therefore 15 processing threads), while the two remaining queues have only 5 consumers each. This allows faster processing of GCP Batch events.
The same principle applies to the further stages of processing, which use Kafka endpoints. Again, there is a configuration section dedicated to each of the internal Kafka topics that allows tuning the pace of processing.
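A hypothetical sketch of what such a consumer section could look like (property names and structure are assumptions only; consult the actual application.yml):

```yaml
# Hypothetical sketch - property names are assumptions, not the real file.
sqs:
  queues:
    eh-out-reltio-gcp-batch-update-dev:
      concurrentConsumers: 15   # prioritised: 15 processing threads
    eh-out-reltio-gcp-update-dev:
      concurrentConsumers: 5
    eh-out-reltio-grv-update-dev:
      concurrentConsumers: 5
kafka:
  topics:
    dev-gw-internal-gcp-events-raw:
      concurrency: 5            # tunes the pace of later stages
```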


\"\"


" + }, + { + "title": "HUB UI User Guide", + "pageID": "302701919", + "pageLink": "/display/GMDM/HUB+UI+User+Guide", + "content": "

This page contains the complete user guide related to the HUB UI.

Please check the sub-pages to get details about the HUB UI and usage.

Start with Main Page - HUB Status - main page


A handful of information that may be helpful when you are using HUB UI:


If you want to add any new features to the HUB UI please send your suggestions to the HUB Team: DL-ATP_MDMHUB_SUPPORT@COMPANY.com


" + }, + { + "title": "HUB Admin", + "pageID": "302701923", + "pageLink": "/display/GMDM/HUB+Admin", + "content": "

All the subpages contain the user guide - how to use the hub admin tools.

To gain access to the selected operation please read - UI Connect Guide

" + }, + { + "title": "1. Kafka Offset", + "pageID": "302703128", + "pageLink": "/display/GMDM/1.+Kafka+Offset", + "content": "

Description

This tab is available to a user with the MODIFY_KAFKA_OFFSET management role.

Allows you to reset the offset for the selected topic and group.

Kafka Consumer

Please turn off your Kafka consumer before executing this operation; it is not possible to manage an ACTIVE consumer group.

Required parameters

Details

The offset parameter can take one of three values: earliest, latest, or a timestamp (e.g. 2022-12-15T08:15:02Z).
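A sketch of interpreting the offset parameter (the accepted forms are inferred from the request examples on the Kafka offset modification page):

```python
from datetime import datetime, timezone

def resolve_offset_request(offset: str):
    """Return 'earliest'/'latest' as-is; anything else is parsed as an
    ISO-8601 timestamp such as 2022-12-15T08:15:02Z."""
    if offset in ("earliest", "latest"):
        return offset
    return datetime.strptime(offset, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
```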

View

\"\"



" + }, + { + "title": "10. Jobs Manager", + "pageID": "337846274", + "pageLink": "/display/GMDM/10.+Jobs+Manager", + "content": "

Description

This page is available to users that scheduled a job.

Allows you to check the current status of an asynchronous operation.

Required parameters

Job Type – choose a job to check its status

Details

The page shows the statuses of jobs for each operation.

Click the Job Type and select the business operation.

In the table below all the jobs for all users in your AD group are displayed. You can track the jobs and download the reports here.

Click the\"\" Refresh view button to refresh the page

Click the \"\"icon to download the report.

View

\"\"

" + }, + { + "title": "2. Partials", + "pageID": "302703134", + "pageLink": "/display/GMDM/2.+Partials", + "content": "

Description

This tab is available to the user with the LIST_PARTIALS role to manage the precallback service.

It allows you to download a list of partials – events for which a need to change Reltio has been detected and whose sending to output topics has been suspended.

The operation allows you to specify the limit of returned records and to sort them by the time of their occurrence.

HUB ADMIN

Used only internally by MDM HUB ADMINS

Required parameters

N/A - by default, you will get all partial entities.

Details


View

\"\"

" + }, + { + "title": "3. HUB Reconciliation", + "pageID": "302703130", + "pageLink": "/display/GMDM/3.+HUB+Reconciliation", + "content": "

Description

This tab is available to the user with the reconciliation service management role - RECONCILE and RECONCILE_COMPLEX

The operation accepts a list of identifiers for which it is to be performed. It allows you to trigger a reconciliation task for a selected type of object:

Divided into 2 sections:

Simple JOBS:

Required parameters

N/A - by default, CHANGE events are generated and an entity is skipped when it is in REMOVED/INACTIVE/LOST_MERGE state. In that case, only CHANGE events are pushed.

Details

ParameterDefault valueDescription
forcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.
push lost mergefalseReconcile event with LOST_MERGE status
push inactivatedfalseReconcile event with INACTIVE status
push removedfalseReconcile event with REMOVE status

View

\"\"


Complex JOBS:

Required parameters

Details

Simple

ParameterDefault valueDescription
forcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.
Countries N/Alist of countries, e.g.: CA, MX
SourcesN/Acrosswalks names for which you want to generate the events.
Object TypeENTITYgenerates events from ENTITY or RELATION objects
Entity Typedepend on object Type

Can be for ENTITY: HCP/HCO/MCO/DCR

Can be for RELATION: input text in which you specify the relation, e.g.: OtherHCOToHCO

Batch limitN/A

limit the number of events - useful for testing purposes

Complex

ParameterDefault valueDescription
forcefalseSend an event to output topics even when a partial update is detected
Entity QueryN/A

PUT the MATCH query to get Mongo results and generate events. e.g.:

{

"status": "ACTIVE",

"sources": "ONEKEY",

"country": "gb"

}

Entities limitN/Alimit the number of events - useful for testing purposes
Relation QueryN/A

PUT the MATCH query to get Mongo results and generate events. e.g.:

{

"status": "ACTIVE",

"sources": "ONEKEY",

"country": "gb"

}

Relation limitN/Alimit the number of events - useful for testing purposes
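The MATCH queries above are plain equality filters over entity documents. A simplified sketch of how such a query selects documents (real MongoDB matching, e.g. of array fields such as sources, is richer than this):

```python
def matches(doc: dict, query: dict) -> bool:
    """Equality-only MATCH: every query field must equal the document's value."""
    return all(doc.get(field) == value for field, value in query.items())

query = {"status": "ACTIVE", "sources": "ONEKEY", "country": "gb"}
docs = [
    {"id": "E1", "status": "ACTIVE", "sources": "ONEKEY", "country": "gb"},
    {"id": "E2", "status": "INACTIVE", "sources": "ONEKEY", "country": "gb"},
]
selected = [d["id"] for d in docs if matches(d, query)]  # only E1 qualifies
```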

View

\"\"


" + }, + { + "title": "4. Kafka Republish Events", + "pageID": "302703132", + "pageLink": "/display/GMDM/4.+Kafka+Republish+Events", + "content": "

Description

This page is available to users with the publisher manager role -RESEND_KAFKA_EVENT and RESEND_KAFKA_EVENT_COMPLEX

Allows you to resend events to output topics. It can be used in two modes: simple and complex.

The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Simple mode

Required parameters

Details

In this mode, the user specifies values for the defined parameters:

ParameterDefault valueDescription
Select moderepublish CHANGE events

note:

  • when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.
  • when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.
  • The difference between these 2 modes is that one generates CHANGE events and the other CREATE events (depending on whether this is an IDL generation or not)
CountriestrueList of countries for which the task will be performed
SourcesfalseList of sources for which the task will be performed
Object typetrueObject type for which operation will be performed, available values: Entity, Relation
Reconciliation targettrueOutput Kafka topic name
limittrueLimit of generated events
modification time fromfalseEvents with a modification date greater than this will be generated
modification time tofalseEvents with a modification date less than this will be generated
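The status-to-event mapping described in the note above can be sketched as follows (a sketch only; the real publisher logic lives in the HUB microservices):

```python
# Entities that are not ACTIVE always map to their status-specific event,
# regardless of the republish mode.
STATUS_TO_EVENT = {
    "LOST_MERGE": "LOST_MERGED",
    "DELETED": "REMOVED",
    "INACTIVE": "INACTIVATED",
}

def republished_event_type(status: str, mode: str) -> str:
    """mode is 'CHANGE' or 'CREATE'; ACTIVE entities get the mode's event."""
    if status == "ACTIVE":
        return mode
    return STATUS_TO_EVENT[status]
```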

View

\"\"

Complex mode

Required parameters

Entities query or  Relation query

Details

In this mode, the user defines the Mongo query that will be used to generate events.


ParameterRequiredDescription
Select moderepublish CHANGE events

note:

  • when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.
  • when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.
  • The difference between these 2 modes is that one generates CHANGE events and the other CREATE events (depending on whether this is an IDL generation or not)
Entities querytrueResend entities Mongo query
Entities limitfalseResend entities limit
Relation querytrueResend relations Mongo query
Relations limittrueResend relations limit
Reconciliation targettrueOutput Kafka topic name

View

\"\"






" + }, + { + "title": "5. Reltio Reindex", + "pageID": "337846264", + "pageLink": "/display/GMDM/5.+Reltio+Reindex", + "content": "

Description

This page is available to users with the reltio reindex role - REINDEX_ENTITIES

Allows you to schedule Reltio Reindex JOB. It can be used in two modes: query and file.

The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Required parameters

Specify Countries in query mode, or a file with entity URIs in file mode.

Details

query

ParameterDescription
CountriesList of countries for which the task will be performed
SourcesList of sources for which the task will be performed
Entity typeObject type for which operation will be performed, available values: HCP/HCO/MCO/DCR
Batch limitAdd if you want to limit the reindex to a specific number - helpful for testing purposes

file

Input file

File format: CSV 

Encoding: UTF-8

Column headers: - N/A

Input file example

entities/E0pV5Xm
entities/1CsgdXN4
entities/2O5RmRi

View

\"\"


Reltio Reindex details:

HUB executes Reltio Reindex API with the following default parameters:

\"\"

ParameterAPI Parameter nameDefault ValueReltio detailed descriptionUI details
Entity type
entityType
N/AIf provided, the task restricts the reindexing scope to Entities of the specified type.The user can specify the EntityType in the search API and the URI list will be generated. There is no need to pass this to the Reltio API because we are using the generated URI list
Skip entities count
skipEntitiesCount
0If provided, sets the number of Entities which are skipped during reindexing.-
Entities limit
entitiesLimit
infinityIf provided, sets the maximum number of Entities that are reindexed-
Updated since
updatedSince
N/ATimestamp in Unix format. If this parameter is provided, then only entities with greater or equal timestamp are reindexed. This is a good way to limit the reindexing to newer records.-
Update entities
updateEntities
true 

If set to true, initiates update for Search, Match tables, History. If set to false, then no rematching, no history changes, only ES structures are updated.

If set to true (default), in addition to refreshing the ElasticSearch index, the task also updates history, match tables, and the analytics layer (RI). This ensures that all indexes and supporting structures are as up-to-date as possible. As explained above, however, triggering all these activities may decrease the overall performance level of the database system for business work, and overwhelm the event streaming channels. If set to false, the task updates ElasticSearch data only. It does not perform rematching, or update history or analytics. These other activities can be performed at different times to spread out the performance impact.

-
Check crosswalk consistency
checkCrosswalksConsistency
false

If true, this will start a task to check if all crosswalks are unique before reindexing data. Please note, if entitiesLimit or distributed parameters have any value other than default, this parameter will be unavailable

Specify true to reindex each Entity, whether it has changed or not. This operation ensures that each Entity in the database is processed. Reltio does not recommend this option: it decreases the performance of the reindex task dramatically, and may overload the server, which will interfere with all database operations.

-
URI list
entityUris
generated list of URIS from UI

One or more entity URIs (separated by a comma) that you would like to process. For example: entities/<id1>, entities/<id2>.


Reltio suggests using 50-100K URIs in one API request; this is a Reltio limitation. 
Our process splits the list into 100K chunks if required. 


Based on the input file size, one JOB from the HUB end may produce multiple Reltio tasks.

The UI generates the list of URIs from a Mongo query, or the reindex is run with the input files
Ignore streaming events
forceIgnoreInStreaming
false

If set to true, no streaming events will be generated until after the reindex job has completed.


-
Distributed
distributed
falseIf set to true, the task runs in distributed mode, which is a good way to take advantage of a networked or clustered computing environment to spread the performance demands of reindexing over several nodes. -
Job parts count
taskPartsCount

N/A due to distributed=false

Default value: 2

The number of tasks which are created for distributed reindexing. Each task reindexes its own subset of Entities. Each task may be executed on a different API node, so that all tasks can run in parallel. Recommended value: the number of API nodes which can execute the tasks. 


Note: This parameter is used only in distributed mode (distributed=true); otherwise, it is ignored.

-
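The splitting into 100K-URI requests mentioned for entityUris can be sketched as (chunk size follows the 50-100K guidance above):

```python
def chunk_uris(uris, chunk_size=100_000):
    """Split a URI list into Reltio-sized batches; one HUB job may
    therefore produce multiple Reltio reindex tasks."""
    return [uris[i:i + chunk_size] for i in range(0, len(uris), chunk_size)]
```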


More details in the Reltio docs:

https://docs.reltio.com/en/explore/get-going-with-apis-and-rocs-utilities/reltio-rest-apis/engage-apis/tasks-api/reindex-data-task

https://docs.reltio.com/en/explore/get-your-bearings-in-reltio/console/tenant-management-applications/tenant-management/jobs/creating-a-reindex-data-job



" + }, + { + "title": "6. Merge/Unmerge entities", + "pageID": "337846268", + "pageLink": "/pages/viewpage.action?pageId=337846268", + "content": "

Description

This page is available to users with the merge/unmerge role - MERGE_UNMERGE_ENTITIES

Allows you to schedule Merge/Unmerge JOB. It can be used in two modes: merge or unmerge.

The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Required parameters

file with profiles to be merged or unmerged in the selected format

Details

file

Input file

File format: CSV 

Encoding: UTF-8

more details here - Batch merge & unmerge


View

\"\"

" + }, + { + "title": "7. Update Identifiers", + "pageID": "337846270", + "pageLink": "/display/GMDM/7.+Update+Identifiers", + "content": "

Description

This page is available to users with the update identifiers role - UPDATE_IDENTIFIERS

Allows you to schedule update identifiers JOB.

The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Required parameters

file with profiles to be updated in the selected format

Details

file

Input file

File format: CSV 

Encoding: UTF-8

more details here - Batch update identifiers

View

\"\"

" + }, + { + "title": "8. Clear Cache", + "pageID": "337846272", + "pageLink": "/display/GMDM/8.+Clear+Cache", + "content": "

Description

This page is available to users with the ETL clear cache role - CLEAR_CACHE_BATCH

The cache is related to the Direct Channel ETL jobs:

Docs: ETL Batch Channel and ETL Batches

Allows you to clear the ETL checksum cache. It can be used in three modes: query, by_source, or file.

The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Query mode

Required parameters

Batch name  - specify a batch name for which you want to clear the cache

Object type - ENTITY or RELATION

Entity type - e.g. configuration/relationTypes/Employment or configuration/entityTypes/HCP

Details

ParameterDescription
Batch nameSpecify a batch on which the clear cache will be triggered
Object type ENTITY or RELATION
Entity type

If object type is ENTITY then e.g:

configuration/entityTypes/HCO

configuration/entityTypes/HCP

If object type is RELATION then e.g.:

configuration/relationTypes/ContactAffiliations

configuration/relationTypes/Employment

CountryAdd a country if required to limit the clear cache query 

View

\"\"


by_source mode

Required parameters

Batch name  - specify a batch name for which you want to clear the cache

Source - crosswalk type and value

Details

Specify a batch name and click add a source to specify new crosswalks that you want to remove from the cache.

View

\"\"


file mode

Required parameters

Batch name  - specify a batch name for which you want to clear cache

file with crosswalks to be cleared in ETL cache in the selected format for specified batch

Details

file

Input file

File format: CSV 

Encoding: UTF-8

more details here - Batch clear ETL data load cache

View

\"\"


" + }, + { + "title": "9. Restore Raw Data", + "pageID": "356650113", + "pageLink": "/display/GMDM/9.+Restore+Raw+Data", + "content": "

Description

This page is available to users with the restore data role - RESTORE

The raw data contains data sent to MDM HUB:

Docs: Restore raw data

Allows you to restore raw (source) data on the selected environment

The operation will trigger an asynchronous job with the selected parameters.

Restore entities

Required parameters

Source environment - restore data from another environment, e.g. from QA to the DEV environment; the default is the currently logged-in environment

Entity type  - restore data only for specified entity type: HCP, HCO, MCO

Optional parameters

Countries - restore data only for the specified entity country, e.g.: GB, IE, BR

Sources - restore data only for the specified entity source, e.g.: GRV, ONEKEY

Date Time - restore data created after the specified date time


View

\"\"

Restore relations


Required parameters

Source environment - restore data from another environment, e.g. from QA to the DEV environment; the default is the currently logged-in environment

Optional parameters

Countries - restore data only for the specified entity country, e.g.: GB, IE, BR

Sources - restore data only for the specified entity source, e.g.: GRV, ONEKEY

Relation types - restore data only for the specified relation type, e.g.: configuration/relationTypes/OtherHCOtoHCOAffiliations

Date Time - restore data created after the specified date time


View

\"\"

" + }, + { + "title": "HUB Status - main page", + "pageID": "333155175", + "pageLink": "/display/GMDM/HUB+Status+-+main+page", + "content": "

Description

The UI is divided into the following sections:

\"\"

  1. MENU
    1. Contains links to 
      1. Ingestion Services Configuration
      2. Ingestion Services Tester
      3. HUB Admin
  2. HEADER
    1. Shows the current tenant name, click to quickly change the tenant to a different one.
    2. Shows the logged-in user name. Click to log out. 
  3. FOOTER
    1. Link to User Guide
    2. Link to Connect Guide
    3. Link to the whole HUB documentation
    4. Link to the Get Help page
    5. Currently deployed version
      1. Click to get the details about the CHANGELOG
        1. on PROD - released version
        2. on NON-PROD- snapshot version - Changelog contains unreleased changes that will be deployed in the upcoming release to PROD.
  4. HUB Status dashboard is divided into the following sections:
    1. On this page you can check HUB processing status / kafka topics LAGs / API availability / Snowflake DataMart refresh. 
    2. API (related to the Direct Channel)
      1. \"\"
      2. API Availability  - status related to HUB API (all API exposed by HUB e.g. based on EMEA PROD - EMEA PROD Services )
      3. Reltio READ operations performance and latency - for example, GET Entity operations (every operation that gets data from Reltio)
      4. Reltio WRITE operations performance and latency - for example, POST/PATCH Entity operations (every operation that changes data in Reltio)
    3. Batches (related to the ETL Batch Channel)
      1. \"\"
      2. Currently running batches and duration of completed batches.
      3. Currently running batches may cause data load and impact event processing visible in the dashboard below (inbound and outbound)
    4. Event Processing 
      1. \"\"
      2. Shows information about events that we are processing to:
        1. Inbound - all updates made by HUB on profiles in Reltio
          1. shows the ETA based on the:
            1. ETL Batch Channel (loading and processing events into HUB from ETL)
            2. Direct Channel processing:
              1. loading ETL data to Reltio
              2. loading Rankings/Callbacks/HcoNames (all updates on profiles on Reltio)
                    
        2. Outbound - streaming channel processing (related to the Streaming channel)
          1. shows the ETA based on the:
            1. Streaming channel - all events processing starting from Reltio SQS queue, events currently processing by HUB Streaming channel microservices.
    5. DataMart (related to the Snowflake MDM Data Mart)
      1. \"\"
      2. The time when the last REGIONAL and GLOBAL Snowflake data marts were refreshed.
      3. Shows the number of events that are still being processed by HUB microservices and are not yet consumed by the Snowflake Connector.



" + }, + { + "title": "Ingestion Services Configuration", + "pageID": "302701936", + "pageLink": "/display/GMDM/Ingestion+Services+Configuration", + "content": "

Description

This page shows configuration related to the

Choose a filter to switch between different entity types and use input boxes to filter results.

\"\"


Available filters:

FilterDescription
Entity TypeHCP/HCO/MCO - choose an entity type that you want to review and click Search
CategoryPick to limit the result and review only selected rules
CountryType a country code to limit the number of rules related to the specific country
Source Type a source to limit the number of rules related to the specific source
QueryOpen text field - helps to limit the number of results when searching for specific attributes. Example case - enter "firstname" and click Search to get all rules that modify/use the FirstName attribute.

Audit field

Comparison type

Date

Use a combination of these 3 attributes to find rules created before or after a specific date, or to get rules modified after a specific date. 


Click on the:

\"\"


                                                                                 

" + }, + { + "title": "Ingestion Services Tester", + "pageID": "302701950", + "pageLink": "/display/GMDM/Ingestion+Services+Tester", + "content": "

Description

This site allows you to test quality service. The user can select the input entity using the 'upload' button, paste the content of the entity into the editor or drag it. After clicking the 'test' button, the entity will be sent to the quality service. After processing, the result will appear in the right window. The user can choose two modes of presenting the result - the whole entity or the difference. In the second mode, only changes made by quality service will be displayed. After clicking the 'validation result' button, a dialog box will be displayed with information on which rules were applied during the operation of the service for the selected entity.
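The "difference" mode can be thought of as a field-level diff between the input entity and the quality-service output. A minimal sketch (top-level fields only; the real UI diff is structural):

```python
def entity_diff(before: dict, after: dict) -> dict:
    """Report only the fields changed by the quality service."""
    changed = {}
    for key in set(before) | set(after):
        if before.get(key) != after.get(key):
            changed[key] = {"before": before.get(key), "after": after.get(key)}
    return changed
```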


Quality service tester editor

\"\"


Validation summary                                      

Here you can check which rules were "triggered" and check the rule in the Ingestion Services Configuration using the Rule name.

Search by text using attribute or "triggered" keyword to get all triggered rules. 

\"\"

                                           

" + }, + { + "title": "Incremantal batch", + "pageID": "164470033", + "pageLink": "/display/GMDM/Incremantal+batch", + "content": "

The diagram below presents the generic structure of the batch flow. Data sources have their own instances of the flow configured:

\"\"

The flow consists of the following stages: 


Generic Mapper

Generic Mapper is a component that converts source data into documents in the unified format required by Reltio API. The component is flexible enough to support incremental batches as well as full snapshots of data. Handling a new type of data source is a matter of (in most cases) creating a new configuration that consists of stage and metadata parts. 

The first one defines details of so called "stages", i.e.: HCO, HCP, etc. The latter contains all mapping rules defining how to transform source data into attribute path/value form. Once data are transformed into the mentioned form it is easy to store it, merge it or do any other operation (including Reltio document creation) in the same way for all types of sources. This simple idea makes Generic Mapper a very powerful tool that can be extended in many ways. 

\"Mapping

 A stage is a logical group of steps that as a whole process single type of Reltio document, i.e.: HCO entity.    

\"Stage

At the beginning of each stage the component reads source data, generates attribute changes (events), and stores them in an output file. It is worth noting that many data sources can be configured. Once the output file is produced, it is sorted. The above logic can be called phase 1 of a stage. Until now no database has been used. 

In phase 2 the sorted file is read and events are aggregated into groups in such a way that each element of a group refers to the same Reltio document. Next, all lookups are resolved against a database, merged with the previous version of the document's attributes, and persisted. Then the Reltio document (JSON) is created and sent to Kafka. The stage is finished when all acks from the gateway are collected. 

Under the hood each stage is a sequence of jobs: a job (e.g. the one for sorting a file) can be started only if its direct predecessor finished with success. Stages can be configured to run in parallel and to depend on each other. 
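The attribute path/value form produced in phase 1 can be sketched as a flattening of the source record (names and the path separator are illustrative, not the Generic Mapper's actual format):

```python
def to_attribute_events(doc_id: str, record: dict, prefix: str = ""):
    """Flatten a source record into (document id, attribute path, value)
    events - the unified form that is later sorted, aggregated per Reltio
    document, and merged with the previous version of its attributes."""
    events = []
    for key, value in record.items():
        path = f"{prefix}/{key}"
        if isinstance(value, dict):
            events.extend(to_attribute_events(doc_id, value, path))
        else:
            events.append((doc_id, path, value))
    return events
```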


Load reports 

At runtime Generic Mapper collects various types of data that give insight into DAG state and load statistics. The HTML report is written to disk each time a status of any job is changed. The report consists of three panels: Summary, Metrics and DAG. 

The summary panel contains details of all jobs within a DAG that was created for the current execution (load). The DAG panel shows relationships between jobs in the form of a graph. 

\"\"

The metrics panel presents details of a load. Each metric key is prefixed by a stage name.  

\"\"

" + }, + { + "title": "Kafka offset modification", + "pageID": "273695178", + "pageLink": "/display/GMDM/Kafka+offset+modification", + "content": "

Description

The REST interface exposed through the MDM Manager component is used by clients to modify Kafka offsets.

During the update, access to the groupId and the specific topic is verified.

Diagram 1 presents the flow and Kafka communication during offset modification.


The diagrams below present a sequence of steps in processing client calls.

Flow diagram


\"\"


Steps

Triggers

Trigger action

Component

Action

Default time

REST call | Manager: POST /kafka/offset | modify Kafka offset | API synchronous requests - realtime
Request | Response

{
    "groupId": "mdm_test_user_group",
    "topic": "amer-dev-in-guest-tests",
    "offset": "latest"
}

{
    "values": [
        {
            "topic": "amer-dev-in-guest-tests",
            "partition": 0,
            "offset": 2
        }
    ]
}

{
    "groupId": "mdm_test_user_group",
    "topic": "amer-dev-in-guest-tests",
    "offset": "earliest"
}
{
    "values": [
        {
            "topic": "amer-dev-in-guest-tests",
            "partition": 0,
            "offset": 0
        }
    ]
}
{
    "groupId": "mdm_test_user_group",
    "topic": "amer-dev-in-guest-tests",
    "offset": "2022-12-15T08:15:02Z"
}
{
    "values": [
        {
            "topic": "amer-dev-in-guest-tests",
            "partition": 0,
            "offset": 1
        }
    ]
}

{
    "groupId": "mdm_test_user_group",
    "topic": "amer-dev-in-guest-tests",
    "offset": "latest",
    "partition": 4
}

{
    "values": [
        {
            "topic": "amer-dev-in-guest-tests",
            "partition": 4,
            "offset": 2
        }
    ]
}

{
    "groupId": "mdm_test_user_group",
    "topic": "amer-dev-in-guest-tests",
    "offset": "2022-12-15T08:15:02Z",
    "shift": 5
}

{
    "values": [
        {
            "topic": "amer-dev-in-guest-tests",
            "partition": 0,
            "offset": 6
        }
    ]
}
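The offset-resolution semantics shown in the request/response pairs above can be sketched as follows (an in-memory simulation; the partition contents and helper names are assumptions, not the Manager's implementation):

```python
from bisect import bisect_left

# Simulated partition: (timestamp, offset) pairs for messages currently on
# the topic. Values mirror the examples above.
messages = [("2022-12-15T08:00:00Z", 0), ("2022-12-15T08:15:02Z", 1)]
log_end_offset = 2  # "latest" points past the last message

def resolve_offset(offset, shift=0):
    """Resolve an earliest/latest/timestamp offset spec, then apply a shift."""
    if offset == "earliest":
        base = messages[0][1] if messages else 0
    elif offset == "latest":
        base = log_end_offset
    else:
        # Timestamp: first message with a timestamp >= the requested one
        timestamps = [ts for ts, _ in messages]
        base = messages[bisect_left(timestamps, offset)][1]
    return base + shift

print(resolve_offset("latest"))                         # 2
print(resolve_offset("earliest"))                       # 0
print(resolve_offset("2022-12-15T08:15:02Z"))           # 1
print(resolve_offset("2022-12-15T08:15:02Z", shift=5))  # 6
```

The four calls reproduce the four example responses above for partition 0.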

Dependent components

Component

Usage

Manager | create/update Entities in MDM systems
API Gateway | proxy REST and secure access
" + }, + { + "title": "LOV read", + "pageID": "164469998", + "pageLink": "/display/GMDM/LOV+read", + "content": "


The flow is triggered by an API GET /lookup call. It retrieves LOV data from the HUB store.


\"\"


Process steps description:

  1. Client sends an HTTP request to the MDM Manager endpoint.
  2. Kong Gateway receives the request and handles authentication.
  3. If authentication succeeds, the request is forwarded to the MDM Manager component.
  4. MDM Manager checks the user's permissions to call the getEntity operation and the correctness of the request.
  5. MDM Manager checks the user profile configuration for the lookup operation to determine whether to return results based on MongoDB state, or call Reltio directly.
  6. Request parameters are used to dynamically generate a query. This query is executed in the findByCriteria method.
  7. Query results are returned to the client.
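Step 6 (dynamic query generation) might look roughly like this; the field names and the helper are assumptions for illustration, not the actual findByCriteria code:

```python
# Build a MongoDB-style criteria document from optional request parameters:
# only parameters the client actually supplied end up in the query.
def build_lookup_query(lookup_type=None, country=None, code=None):
    criteria = {}
    if lookup_type:
        criteria["type"] = lookup_type
    if country:
        criteria["country"] = country
    if code:
        criteria["code"] = code
    return criteria

print(build_lookup_query(lookup_type="Specialty", country="US"))
# {'type': 'Specialty', 'country': 'US'}
```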



" + }, + { + "title": "LOV update process (Nucleus)", + "pageID": "164469999", + "pageLink": "/pages/viewpage.action?pageId=164469999", + "content": "\n

Process steps description:

\n
  1. Nucleus Subscriber monitors the AWS S3 location where CCV files are uploaded.
  2. When a new file is found, it is downloaded and processed. A single CCV zip file contains multiple *.exp files, which contain different parts of a LOV – header, description, references to values from external systems.
  3. Each *.exp file is processed line by line, with a Dictionary change event generated for each line. These events are published to a Kafka topic from which the Event Publisher component receives them.
  4. After a CCV file is processed completely, it is moved to the archive subtree in the S3 bucket folder structure.
  5. When a Dictionary change event is received in Event Publisher, the current state of the LOV is first fetched from the Mongo database. New data from the event is then merged with that state and the result is saved back to Mongo.
\n\n\n

Additional remarks:

\n\n" + }, + { + "title": "LOV update processes (Reltio)", + "pageID": "164469992", + "pageLink": "/pages/viewpage.action?pageId=164469992", + "content": "\n

\"\" Figure 18. Updating LOVs from Reltio. LOV update processes are triggered by a timer at regular, configurable intervals. Their purpose is to synchronize dictionary values from Reltio. Below is the diagram outlining the whole process.\n
\nProcess steps description:

\n
  1. Synchronization processes are triggered at regular intervals.
  2. Reltio Subscriber calls the MDM Gateway lookups API to retrieve the first batch of LOV data.
  3. Fetched data is inserted into the Mongo database; existing records are updated.
\n\n\n

The second and third steps are repeated in a loop until no LOV data remains.

" + }, + { + "title": "MDM Admin Flows", + "pageID": "302683297", + "pageLink": "/display/GMDM/MDM+Admin+Flows", + "content": "" + }, + { + "title": "Kafka Offset", + "pageID": "302684674", + "pageLink": "/display/GMDM/Kafka+Offset", + "content": "

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Kafka/kafkaOffsetModification

API allows offset manipulation for a consumer group/topic pair. Offsets can be set to earliest/latest/timestamp, or adjusted (shifted) by a numeric value.

An important point to mention is that in many cases offsets do not correspond one-to-one to messages - shifting the offset on a topic back by 100 may result in receiving only 90 extra messages. This is due to compaction and retention - Kafka may mark an offset as removed, but it still remains for the sake of continuity.

Example 1

Environment is EMEA DEV. The user wants to consume the last 100 messages from his topic again. He is using topic "emea-dev-out-full-test-topic-1" and consumer group "emea-dev-consumergroup-1".

The user has disabled the consumer - Kafka does not allow offset manipulation while the topic/consumer group is in use.

He sent the request below:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\nBody:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "shiftBy": -100\n}
\n

Upon re-enabling the consumer, the last 100 events were re-consumed.

Example 2

User wants to consume all available messages from the topic again.

User has disabled the consumer and sent below request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\nBody:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "offset": "earliest"\n}
\n

Upon re-enabling the consumer, all events from the topic were available for consumption again.

" + }, + { + "title": "Partial List", + "pageID": "302683607", + "pageLink": "/display/GMDM/Partial+List", + "content": "

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Precallback%20Service/reconcilePartials_1

API calls the Precallback Service's internal API and returns a list of events stuck in partial state (more information here). The list can be limited and sorted. Partial age can be displayed in one of the formats below:

Example

User has noticed an alert being triggered for GBLUS DEV, informing about events in partial state. To investigate the situation, he sends the following request:

\n
GET https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/precallback/partials?absolute=true
\n

Response:

\n
{\n    "entities/1sgqoyCR": "2023-02-09T11:42:06.523Z",\n    "entities/1eUqpXVe": "2023-02-01T12:39:57.345Z",\n    "entities/2ZlDTE2U": "2023-02-09T11:40:30.950Z",\n    "entities/2J1YiLW9": "2023-02-09T11:41:45.092Z",\n    "entities/1KgPnkhY": "2023-02-01T12:39:58.594Z",\n    "entities/1YpLnUIR": "2023-02-01T12:40:06.661Z"\n}
\n

He realized that it is difficult to quickly tell the age of each partial based on the timestamp. He removed the absolute flag from the request:

\n
GET https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/precallback/partials
\n

Response:

\n
{\n    "entities/1sgqoyCR": "27:26:56.228",\n    "entities/1eUqpXVe": "218:29:05.406",\n    "entities/2ZlDTE2U": "27:28:31.801",\n    "entities/2J1YiLW9": "27:27:17.659",\n    "entities/1KgPnkhY": "218:29:04.157",\n    "entities/1YpLnUIR": "218:28:56.090"\n}
\n

Three partials have been stuck for more than 200 hours; the other three for over 27 hours.
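A minimal sketch of how the relative ages above can be derived from the absolute timestamps, assuming the format is total-hours:minutes:seconds.milliseconds (the service's exact formatting code is not shown here):

```python
from datetime import datetime, timezone

def partial_age(created_iso, now):
    """Format the elapsed time since created_iso as H:MM:SS.mmm."""
    created = datetime.fromisoformat(created_iso.replace("Z", "+00:00"))
    delta = now - created
    # Use the integer day/second/microsecond fields to avoid float rounding.
    total_ms = (delta.days * 86400 + delta.seconds) * 1000 + delta.microseconds // 1000
    hours, rest = divmod(total_ms, 3_600_000)
    minutes, rest = divmod(rest, 60_000)
    seconds, millis = divmod(rest, 1000)
    return f"{hours}:{minutes:02}:{seconds:02}.{millis:03}"

# Reference time chosen so the first example above works out exactly.
now = datetime(2023, 2, 10, 15, 9, 2, 751000, tzinfo=timezone.utc)
print(partial_age("2023-02-09T11:42:06.523Z", now))  # 27:26:56.228
```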

" + }, + { + "title": "Reconciliation", + "pageID": "302683312", + "pageLink": "/display/GMDM/Reconciliation", + "content": "

Entities

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileEntities

API accepts a JSON list of entity URIs. URIs not beginning with "entities/" are filtered out. For each URI it:

  1. Checks the entityType (HCP/HCO/MCO) in Mongo
  2. Checks the status (ACTIVE/LOST_MERGE/INACTIVE/DELETED) in Mongo
  3. If the entity is ACTIVE, generates a *_CHANGED event and sends it to ${env}-internal-reltio-events to be enriched by the Entity Enricher
  4. If the entity has a status other than ACTIVE:
    1. If the entity has status LOST_MERGE and the pushLostMerge parameter is true, generates a *_LOST_MERGE event.
    2. If the entity has status INACTIVE and the pushInactived parameter is true, generates a *_INACTIVATED event.
    3. If the entity has status DELETED and the pushRemoved parameter is true, generates a *_REMOVED event.
  5. An additional parameter, force, may be used. When set to true, the event will proceed to the Event Publisher even if rejected by Precallbacks.
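The per-URI decision logic above can be sketched as follows; the return messages follow the example responses on this page, but the function itself is an illustration, not the service code:

```python
# Decide, per URI, whether a reconciliation event is generated.
def reconcile(uri, status, push_lost_merge=False, push_inactivated=False,
              push_removed=False):
    if not uri.startswith("entities/"):
        return "false"  # wrong prefix: filtered out
    if status == "ACTIVE":
        return "true"  # *_CHANGED event generated
    if status == "LOST_MERGE" and push_lost_merge:
        return "true"  # *_LOST_MERGE event generated
    if status == "INACTIVE" and push_inactivated:
        return "true"  # *_INACTIVATED event generated
    if status == "DELETED" and push_removed:
        return "true"  # *_REMOVED event generated
    return f"false - Record with {status} status in cache"

print(reconcile("entities/108dNvgB", "ACTIVE"))      # true
print(reconcile("relations/101LIzcm", "ACTIVE"))     # false
print(reconcile("entities/10VLBsCl", "LOST_MERGE"))  # false - Record with LOST_MERGE status in cache
print(reconcile("entities/10VLBsCl", "LOST_MERGE", push_lost_merge=True))  # true
```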

Example

User wants to reconcile 4 entities, which have different data in Snowflake/Mongo than in Reltio:

Below request is sent (GBL DEV):

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/entities\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]
\n

Response:

\n
{\n    "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n    "entities/1065AHEA": "false - Record with DELETED status in cache",\n    "entities/10VLBsCl": "false - Record with LOST_MERGE status in cache",\n    "entities/108dNvgB": "true",\n    "relations/101LIzcm": "false"\n}
\n

Only one event was generated: HCP_CHANGED for entities/108dNvgB.

User decided that he also needs an HCP_LOST_MERGE event for entities/10VLBsCl. He sent the same request with the pushLostMerge flag:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/entities?pushLostMerge=true\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]
\n

Response:

\n
{\n    "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n    "entities/1065AHEA": "false - Record with DELETED status in cache",\n    "entities/10VLBsCl": "true",\n    "entities/108dNvgB": "true",\n    "relations/101LIzcm": "false"\n}
\n

This time, two events have been generated:

Relations

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileRelations

API works the same way as for Entities, but this time URIs not beginning with "relations/" are filtered out.

Example

User sent the same request as in previous example (GBL DEV):

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/relations\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]
\n

Response:

\n
{\n    "entities/10bH3nze": "false",\n    "entities/1065AHEA": "false",\n    "entities/10VLBsCl": "false",\n    "entities/108dNvgB": "false",\n    "relations/101LIzcm": "false - Record with DELETED status in cache"\n}
\n

The first 4 URIs have been filtered out due to the unexpected prefix. The event for relations/101LIzcm has not been generated because this relation has DELETED status in cache.

Same request has been sent with pushRemoved flag:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/relations?pushRemoved=true\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]
\n

Response:

\n
{\n    "entities/10bH3nze": "false",\n    "entities/1065AHEA": "false",\n    "entities/10VLBsCl": "false",\n    "entities/108dNvgB": "false",\n    "relations/101LIzcm": "true"\n}
\n

A single event has been generated: RELATIONSHIP_REMOVED for relations/101LIzcm.

Partials

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcilePartials

Partials Reconciliation API works the same way as Entities Reconciliation, but it automatically fetches the current list of entities stuck in partial state using the Partial List API.

Partials Reconciliation API also handles the push and force flags. Additionally, partials can be filtered by age using the partialAge parameter with one of the following values: NONE (default), MINUTE, HOUR, DAY.
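A sketch of how partialAge filtering could work, assuming the values map to simple elapsed-time thresholds (an illustration, not the actual implementation):

```python
from datetime import datetime, timedelta, timezone

# Assumed thresholds for each partialAge value.
THRESHOLDS = {
    "NONE": timedelta(0),
    "MINUTE": timedelta(minutes=1),
    "HOUR": timedelta(hours=1),
    "DAY": timedelta(days=1),
}

def filter_partials(partials, partial_age, now):
    """Keep only partials stuck for at least the requested threshold."""
    threshold = THRESHOLDS[partial_age]
    return {uri: ts for uri, ts in partials.items()
            if now - datetime.fromisoformat(ts.replace("Z", "+00:00")) >= threshold}

now = datetime(2023, 2, 10, 12, 0, 0, tzinfo=timezone.utc)
partials = {
    "entities/1yHHKEZ7": "2023-02-01T12:39:57.345Z",  # stuck for days
    "entities/2EHamZr3": "2023-02-10T11:30:00.000Z",  # stuck for 30 minutes
}
print(filter_partials(partials, "DAY", now))  # only entities/1yHHKEZ7 remains
```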

Example

User wants to reload entities stuck in partial state in GBL DEV. A Prometheus alert informs him that there are plenty, but he remembers that there is currently an ongoing data load, which may cause many temporary partials.

User decides to use the partialAge parameter with value DAY, to reload only the entities which have been stuck for a longer while and avoid generating unnecessary additional traffic.

He sends the following request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/partials?partialAge=DAY\nBody: -
\n

Flow fetches a full list of partials from Precallback Service API and filters out the ones stuck for less than a day. It then executes the Entities Reconciliation with this list. Response:

\n
{\n    "entities/1yHHKEZ7": "true",\n    "entities/2EHamZr3": "true",\n    "entities/2EyP0kYM": "true",\n    "entities/21QU96KG": "true",\n    "entities/2BmHQMCn": "true"\n}
\n

5 HCP/HCO_CHANGED events have been generated as a result.

" + }, + { + "title": "Resend Events", + "pageID": "302684685", + "pageLink": "/display/GMDM/Resend+Events", + "content": "

API triggers an Airflow DAG. The DAG:

  1. Runs a query on MongoDB and generates a list of entity/relation URIs.
  2. Using Event Publisher's /resendLastEvent API, it produces outbound events to the user-provided reconciliationTarget.

Resend - Simple

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/resendEvent

When using Simple API, user does not actually write the Mongo query - they instead fill in the blanks.

Required parameters are:

Optionally, objects can be filtered by:

Example

Environment is EMEA DEV. User wants to generate 300 entity events (HCP_CHANGED or HCO_CHANGED) for Poland, source CRMMI. His outbound topic is emea-dev-out-full-user-all.

He sends the request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend\nBody:\n{\n  "countries": [\n    "pl"\n  ],\n  "sources": [\n    "CRMMI"\n  ],\n  "objectType": "ENTITY",\n  "limit": 300,\n  "reconciliationTarget": "emea-dev-out-full-user-all"\n}
\n

Response:

\n
{\n  "dag_id": "reconciliation_system_emea_dev",\n  "dag_run_id": "manual__2023-02-13T14:26:22.283902+00:00",\n  "execution_date": "2023-02-13T14:26:22.283902+00:00",\n  "state": "queued"\n}
\n

A new Airflow DAG run was started. The dag_run_id field contains this run's unique ID. The request below can be sent to fetch the current status of this DAG run:

\n
GET https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/status/manual__2023-02-13T14:26:22.283902+00:00
\n

Response:

\n
{\n  "dag_id": "reconciliation_system_emea_dev",\n  "dag_run_id": "manual__2023-02-13T14:26:22.283902+00:00",\n  "execution_date": "2023-02-13T14:26:22.283902+00:00",\n  "state": "running"\n}
\n

After the DAG has finished, 300 HCP_CHANGED/HCO_CHANGED events will have been generated to the emea-dev-out-full-user-all topic.

Resend - Complex

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/resendEventComplex

For Complex API, user writes their own Mongo query.

Required parameters are:

Optionally, resulting objects can be limited (separate fields for each query).
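How Simple API parameters might translate into the Mongo query used by the Complex API can be sketched as follows; the mapping and the $in handling are assumptions for illustration:

```python
# Translate Simple API "fill in the blanks" parameters into a Mongo query
# document; a single value matches directly, multiple values use $in.
def simple_to_query(countries=None, sources=None):
    query = {}
    if countries:
        query["country"] = countries[0] if len(countries) == 1 else {"$in": countries}
    if sources:
        query["sources"] = sources[0] if len(sources) == 1 else {"$in": sources}
    return query

print(simple_to_query(countries=["pl"], sources=["CRMMI"]))
# {'country': 'pl', 'sources': 'CRMMI'}
```

The single-country, single-source case produces the same query shown in the Complex API example.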

Example

As in previous example, user wants to generate 300 events for Poland, source CRMMI. Output topic is emea-dev-out-full-user-all.

This time, he sends the following request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/complex\nBody:\n{\n  "entitiesQuery": "{ 'country': 'pl', 'sources': 'CRMMI' }",\n  "relationsQuery": null,\n  "reconciliationTarget": "emea-dev-out-full-user-all",\n  "limitEntities": 300,\n  "limitRelations": null\n}
\n

Response:

\n
{\n  "dag_id": "reconciliation_system_emea_dev",\n  "dag_run_id": "manual__2023-02-13T14:57:11.543256+00:00",\n  "execution_date": "2023-02-13T14:57:11.543256+00:00",\n  "state": "queued"\n}
\n

Resend - Status

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/getStatus

As described in the previous examples, this API returns the current status of a DAG run. The request URL parameter must be equal to dag_run_id. Possible statuses are:



" + }, + { + "title": "Internals", + "pageID": "164470109", + "pageLink": "/display/GMDM/Internals", + "content": "


" + }, + { + "title": "Archive", + "pageID": "333152415", + "pageLink": "/display/GMDM/Archive", + "content": "" + }, + { + "title": "APM performance tests", + "pageID": "333152417", + "pageLink": "/display/GMDM/APM+performance+tests", + "content": "

Performance tests were executed using the JMeter tool on the CI/CD server.

Test scenario:

Tests were performed by 4 parallel users in a loop for 60 min.

\"\"

Test results:

\"\"



" + }, + { + "title": "Client integration specifics", + "pageID": "492493127", + "pageLink": "/display/GMDM/Client+integration+specifics", + "content": "" + }, + { + "title": "Saudi Arabia integration with IQVIA", + "pageID": "492493129", + "pageLink": "/display/GMDM/Saudi+Arabia+integration+with+IQVIA", + "content": "

The design below was confirmed with Alain and Eleni during the 14.01.2025 meeting. The concept of this solution was earlier approved by AJ.

\"\"

Source: Lucid

" + }, + { + "title": "Components providers - AWS S3, networking, etc...", + "pageID": "273702388", + "pageLink": "/pages/viewpage.action?pageId=273702388", + "content": "
Tenant | Provider | Reltio | AWS accounts IDs | IAM users | IAM roles | S3 buckets | Network (subnets, VPCe) | Application ID
EMEA NPROD

PDCS - Kubernetes in IoD

COMPANY
  1. Airflow (S3) - 211782433747
  2. Snowflake (S3) - 211782433747
  3. Reltio (S3) -  211782433747
  4. AWS (PDCS) - 330470878083
  1. Airflow (S3)- arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3
  2. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3
  3. Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3

Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-emea-eks-worker-NodeInstanceRole-1OG6IFX6DO8B9

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - pfe-atp-eu-w1-nprod-mdmhub 
  2. Snowflake - pfe-atp-eu-w1-nprod-mdmhub
  3. Reltio - pfe-atp-eu-w1-nprod-mdmhub

VPC

  • vpc-0c55bf38e97950aa5

Subnets

SC3028977
EMEA PROD
  1. Airflow (S3) - 211782433747
  2. Snowflake (S3) - 211782433747
  3. Reltio (S3) -  211782433747
  4. AWS (PDCS) - 330470878083
  5. S3 backup bucket - 604526422050

  1. Airflow (S3) - arn:aws:iam::211782433747:user/SRVC-MDMCDI-PROD
  2. Snowflake (S3) - arn:aws:iam::211782433747:user/SRVC-MDMCDI-PROD
  3. Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3
Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-emea-eks-worker-n-NodeInstanceRole-11OT3ADBULAGC

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - pfe-atp-eu-w1-prod-mdmhub
  2. Snowflake - pfe-atp-eu-w1-prod-mdmhub
  3. Reltio - pfe-atp-eu-w1-prod-mdmhub
  4. Backups - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811

VPC

  • vpc-0c55bf38e97950aa5

Subnets

SC3211836
AMER NPROD | PDCS - Kubernetes in IoD | COMPANY
  1. Airflow (S3) - 555316523483
  2. Snowflake (S3)-  555316523483
  3. Reltio (S3) -  555316523483
  4. AWS (PDCS) - 330470878083
  1. Airflow (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFT
  2. Snowflake (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFT
  3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-amer-eks-worker-NodeInstanceRole-1X8MZ6QZQD5V7

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO


  1. Airflow - gblmdmhubnprodamrasp100762
  2. Snowflake - gblmdmhubnprodamrasp100762
  3. Reltio - gblmdmhubnprodamrasp100762

VPC

  • vpc-0aedf14e7c9f0c024

Subnets

  • subnet-0dec853f7c9e507dd (10.9.0.0/18)
  • subnet-07743203751be58b9 (10.9.64.0/18)
SC3028977
AMER PROD
  1. Airflow (S3) - 604526422050
  2. Snowflake (S3)- 604526422050
  3. Reltio (S3) -  555316523483
  4. AWS (PDCS) - 330470878083
  5. Backup bucket (S3) - 604526422050

  1. Airflow (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFT
  2. Snowflake (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFT
  3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-amer-eks-worker-n-NodeInstanceRole-1KA6LWUDBA3OI

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - gblmdmhubprodamrasp101478
  2. Snowflake - gblmdmhubprodamrasp101478
  3. Reltio - gblmdmhubprodamrasp101478
  4. Backups - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808

VPC

  • vpc-0aedf14e7c9f0c024

Subnets

  • subnet-0dec853f7c9e507dd (10.9.0.0/18)
  • subnet-07743203751be58b9 (10.9.64.0/18)
SC3211836
APAC NPROD | PDCS - Kubernetes in IoD | COMPANY
  1. Airflow (S3) - 555316523483
  2. Snowflake (S3) - 555316523483
  3. Reltio (S3) -  555316523483
  4. AWS (PDCS) - 330470878083

1.Airflow - (S3) - arn:aws:iam::555316523483:user/svc_atp_aps1_mdmetl_nprod_rw_s3

2. Snowflake (S3) - arn:aws:iam::555316523483:user/svc_atp_aps1_mdmetl_nprod_rw_s3

3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-apac-eks-worker-NodeInstanceRole-1053BVM6D7I2L

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - globalmdmnprodaspasp202202171347
  2. Snowflake - globalmdmnprodaspasp202202171347
  3. Reltio - globalmdmnprodaspasp202202171347

VPC

  • vpc-0d4b6d3f77ac3a877

Subnets

SC3028977
APAC PROD
  1. Airflow (S3) -
  2. Snowflake (S3) - 
  3. Reltio -  555316523483
  4. AWS (PDCS) - 330470878083
  5. S3 backup bucket 604526422050

1.Airflow - (S3) -  arn:aws:iam::604526422050:user/svc_atp_aps1_mdmetl_prod_rw_s3

2. Snowflake (S3) - arn:aws:iam::604526422050:user/svc_atp_aps1_mdmetl_prod_rw_s3

3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD


Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-apac-eks-worker-n-NodeInstanceRole-1NMGPUSYG7H8Q

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - globalmdmprodaspasp202202171415
  2. Snowflake - globalmdmprodaspasp202202171415
  3. Reltio - globalmdmprodaspasp202202171415
  4. Backups - pfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502

VPC

  • vpc-0d4b6d3f77ac3a877

Subnets

SC3211836
GBLUS NPROD | PDCS - Kubernetes in IoD | COMPANY
  1. Airflow (S3) - 555316523483
  2. Snowflake (S3) - 555316523483
  3. Reltio (S3) -  555316523483
  4. AWS (PDCS) - 330470878083
  1. Airflow (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFT
  2. Snowflake (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFT
  3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - gblmdmhubnprodamrasp100762
  2. Snowflake - gblmdmhubnprodamrasp100762
  3. Reltio - gblmdmhubnprodamrasp100762
Same as AMER NPROD | SC3028977
GBLUS PROD
  1. Airflow (S3) - 604526422050
  2. Snowflake - 604526422050
  3. Reltio (S3) -  
  4. AWS (PDCS) - 330470878083
  5. S3 backup bucket - 604526422050

  1. Airflow (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFT
  2. Snowflake (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFT
  3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - gblmdmhubprodamrasp101478
  2. Snowflake - gblmdmhubprodamrasp101478
  3. Reltio - gblmdmhubprodamrasp101478
  4. Backups - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808
Same as AMER PROD | SC3211836
GBL NPROD

PDCS - Kubernetes in IoD

IQVIA
  1. Airflow (S3) -
  2. Snowflake (S3) - 211782433747
  3. Reltio (S3) -  
  4. AWS (PDCS) - 330470878083

1.Airflow (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3

2. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3

3. Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3


Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - pfe-atp-eu-w1-nprod-mdmhub
  2. Snowflake - pfe-atp-eu-w1-nprod-mdmhub
  3. Reltio - pfe-atp-eu-w1-nprod-mdmhub
Same as EMEA NPROD | SC3028977
GBL PROD
  1. Airflow (S3) -
  2. Snowflake (S3) - 211782433747
  3. Reltio (S3) -  
  4. AWS (PDCS) - 330470878083
  5. S3 backup bucket - 604526422050

1.Airflow (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s3

2. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s3

3. Reltio (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s3 ???

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - pfe-baiaes-eu-w1-project
  2. Snowflake - pfe-baiaes-eu-w1-project
  3. Reltio - pfe-baiaes-eu-w1-project
  4. Backups - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811
Same as EMEA PROD | SC3211836
FLEX NPROD | CloudBroker - EC2 | IQVIA
  1. Airflow (S3) -
  2. Reltio (S3) - 


  1. Airflow - mdmnprodamrasp22124
  2. Reltio - mdmnprodamrasp22124


FLEX PROD
  1. Airflow (S3) - 
  2. Reltio (S3) - 


  1. Airflow - mdmprodamrasp42095
  2. Reltio - mdmprodamrasp42095


Proxy

Rapid - EC2 | N/A
  1. AWS EC2 - 432817204314





Monitoring

CloudBroker - EC2 | N/A
  1. AWS EC2 - 604526422050
  2. AWS S3 - 604526422050
  1. Thanos (S3) - arn:aws:iam::604526422050:user/SRVC-gblmdmhub
Node Instance Role: arn:aws:iam::604526422050:role/PFE-ATP-MDMHUB-MONITORING-BACKUP-ROLE-01
  1. Grafana Backup - pfe-atp-us-e1-prod-mdmhub-grafanaamrasp20240315101601
  2. Thanos - pfe-atp-us-e1-prod-mdmhub-monitoringamrasp20240208135314


Jenkins build

FLEX Airflow

CloudBroker - EC2 | N/A



VPC:

  • Jenkins vpc-12aa056a

" + }, + { + "title": "Configuration", + "pageID": "164470110", + "pageLink": "/display/GMDM/Configuration", + "content": "\n

All runtime configuration is stored in a GitHub repository and changes are tracked using Git history. Sensitive data is encrypted by Ansible Vault using the AES256 algorithm and decrypted only during automatic deployment managed by the Continuous Delivery process in Jenkins.

" + }, + { + "title": "●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1587199]", + "pageID": "164470111", + "pageLink": "/pages/viewpage.action?pageId=164470111", + "content": "\n

Configuration for all environments is placed in mdm-reltio-handler-env/inventory branch.
\nAvailable environments:

\n\n\n\n

In order to separate variables for each service, we created the following groups:

\n\n" + }, + { + "title": "Kafka", + "pageID": "164470104", + "pageLink": "/display/GMDM/Kafka", + "content": "\n

Kafka deployment procedures

\n\n\n\n

Kafka variables

\n

Production Kafka cluster requires the following variables:

\n\n" + }, + { + "title": "Kong", + "pageID": "164470105", + "pageLink": "/display/GMDM/Kong", + "content": "\n

Kong deployment procedures

\n\n\n\n

Kong variables

\n

Cassandra memory parameters are controlled by:

\n\n\n\n

Kong required variables:

\n\n\n\n

To manage kong api through deployment procedure these maps are needed:

\n\n" + }, + { + "title": "Mongo", + "pageID": "164470004", + "pageLink": "/display/GMDM/Mongo", + "content": "\n

Mongo deployment procedures

\n\n\n\n

Mongo variables

\n

Production mongo cluster requires the following variables declared in the /inventory/prod/group_vars/all/all.yml file:

\n\n\n\n

Development mongo instance requires the following variables declared in /inventory/dev/group_vars/all/all.yml file:

\n\n" + }, + { + "title": "Services - hub_gateway", + "pageID": "164470005", + "pageLink": "/display/GMDM/Services+-+hub_gateway", + "content": "\n

Services deployment procedures

\n

Hub deployment procedure:

\n\n\n\n


\nGateway deployment procedure:

\n\n\n\n

Services variables

\n

[gw-services] - this group contains variables for map channel and mdm manager in the following two maps:

\n

\n\n\n

[hub-services] - this group contains variables for hub api, reltio subscriber and event publisher in the following maps:

\n

\n\n\n

It is possible to redefine JVM_OPTS or any other environment variable using these maps:

\n\n" + }, + { + "title": "Data storage", + "pageID": "164470006", + "pageLink": "/display/GMDM/Data+storage", + "content": "\n

Publishing Hub, among other functions, serves as a data store, caching the latest state of each Entity fetched from Reltio MDM. This allows clients to take advantage of the increased performance and high availability provided by the MongoDB NoSQL database.

" + }, + { + "title": "Data structures", + "pageID": "164470007", + "pageLink": "/display/GMDM/Data+structures", + "content": "\n

\"\" Figure 21. Structure of Publishing HUB's databases. The following diagram shows the structure of DB collections used by Publishing Hub.\n
\nDetailed description:

\n\n\n\n

INSERT vs UPSERT

\n

To speed up database operations, Publishing Hub takes advantage of the MongoDB "upsert" flag of the db.collection.update() method. This allows the application to skip the potentially costly query checking whether the entity already exists in the database. Instead, the update operation is called right away, ceding the responsibility of checking for entity existence to Mongo's internal mechanisms.
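A minimal in-memory illustration of the upsert semantics described above - a single call either inserts or updates, with no prior existence check (this is not MongoDB itself, just the behaviour the text describes):

```python
# In-memory stand-in for a collection keyed by entity URI.
store = {}

def upsert(store, key, fields):
    """Insert the record if missing, otherwise update it in place."""
    record = store.setdefault(key, {"_id": key})  # no separate existence query
    record.update(fields)
    return record

upsert(store, "entities/123", {"status": "ACTIVE"})  # inserts
upsert(store, "entities/123", {"country": "pl"})     # updates in place
print(store["entities/123"])
# {'_id': 'entities/123', 'status': 'ACTIVE', 'country': 'pl'}
```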

" + }, + { + "title": "Indexes", + "pageID": "164470001", + "pageLink": "/display/GMDM/Indexes", + "content": "\n

All of the fields in database collections are indexed, except complex documents (i.e. "entity" in entityHistory, "value" in LookupValues). Queries that do not use indexes (for example, querying arbitrarily nested attributes of "entity") might suffer from poor performance.

" + }, + { + "title": "DoR, AC, DoD", + "pageID": "294674667", + "pageLink": "/display/GMDM/DoR%2C+AC%2C+DoD", + "content": "" + }, + { + "title": "DoD - template", + "pageID": "294674670", + "pageLink": "/display/GMDM/DoD+-+template", + "content": "

Requirements of task needed to be met before closing:

" + }, + { + "title": "DoR - template", + "pageID": "294674659", + "pageLink": "/display/GMDM/DoR+-+template", + "content": "

Requirements of task needed to be met before pushing to the Sprint:

" + }, + { + "title": "Exponential Back Off", + "pageID": "164469928", + "pageLink": "/display/GMDM/Exponential+Back+Off", + "content": "

A back-off mechanism that increases the back-off period for each retry attempt. Once the interval reaches the max interval, it is no longer increased. Retrying stops once the max elapsed time has been reached.
Example: the default initial interval is 2000 ms, the default multiplier is 1.5, and the default max interval is 30000 ms. For 10 attempts the sequence is as follows:

request  back off (ms)
1        2000
2        3000
3        4500
4        6750
5        10125
6        15187
7        22780
8        30000
9        30000
10       30000


Note that the default max elapsed time is Long.MAX_VALUE. Use setMaxElapsedTime(long) to limit the maximum length of time that an instance should accumulate before returning BackOffExecution.STOP.

The implementation is based on the spring-retry library.
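The sequence in the table above can be reproduced with a short sketch; this mirrors the quoted defaults (initial 2000 ms, multiplier 1.5, max 30000 ms) and uses integer truncation to match Java's long arithmetic, but it is an illustration, not the library's actual implementation.

```python
# Reproduces the back-off sequence from the table above, using the
# defaults quoted there: initial 2000 ms, multiplier 1.5, max 30000 ms.
# int() truncation mimics the long arithmetic used on the Java side.

def backoff_intervals(attempts, initial=2000, multiplier=1.5, max_interval=30000):
    intervals = []
    interval = initial
    for _ in range(attempts):
        intervals.append(interval)
        # grow the interval, but never past the configured maximum
        interval = min(max_interval, int(interval * multiplier))
    return intervals

print(backoff_intervals(10))
# → [2000, 3000, 4500, 6750, 10125, 15187, 22780, 30000, 30000, 30000]
```

Note how attempts 8-10 all wait 30000 ms: once the cap is hit, the interval stays constant until the max elapsed time stops retrying altogether.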


" + }, + { + "title": "HUB UI", + "pageID": "294675912", + "pageLink": "/display/GMDM/HUB+UI", + "content": "


DRAFT:

\"\"


TODO: 

Grafana dashboards through iframe - https://www.itpanther.com/embedding-grafana-in-iframe/

" + }, + { + "title": "Integration Tests", + "pageID": "302681782", + "pageLink": "/display/GMDM/Integration+Tests", + "content": "

Integration tests are divided into different categories, which are used for different environments.

Jenkins IT configuration: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/jenkins/k8s_int_test.groovy

" + }, + { + "title": "Common Integration Test", + "pageID": "302681798", + "pageLink": "/display/GMDM/Common+Integration+Test", + "content": "
Test classTest caseFlow
CommonGetEntityTests
testGetEntityByUri
  1. Create HCP
  2. Get HCP by URI and validate

testSearchEntity
  1. Create HCP
  2. Get entities using filter (get by country code, first name and last name)
  3. Validate if entity exists

testGetEntityByCrosswalk
  1. Create HCP
  2. Get entity by crosswalk and validate if it exists

testGetEntitiesByUris
  1. Create HCP
  2. Get entity by uris and validate if it exists

testGetEntityCountry
  1. Create HCP
  2. Get entity by country and validate if exists

testGetEntityCountryOv
  1. Create HCP
  2. Add new country
  3. Send update request
  4. Get HCP's Country and validate
  5. Make ignored = true and ov = false on all countries
  6. Send update request
  7. Get HCP's Country and validate
CreateHCPTestcreateHCPTest
  1. Create HCP
  2. Get entity and validate
CreateRelationTestcreateRelationTest
  1. Create HCP
  2. Create HCO
  3. Create Relation between HCP and HCO
  4. Get Relation and validate
DeleteCrosswalkTestdeleteCrosswalkTest
  1. Create HCO
  2. Delete crosswalk and validate status response
UpdateHCOTestupdateHCPTest
  1. Create HCO
  2. Get created HCO
  3. Update HCO's name
  4. Validate response status
  5. Get HCO and validate if it is updated
UpdateHCPUsingReltioContributorProviderupdateHCPUsingReltioContributorProviderTrueAndDataProviderFalse
  1. Create HCP
  2. Get created HCP and validate
  3. Update existing corosswalk and set contributorProvider to false
  4. Add new contributor provider crosswalk
  5. Update first name
  6. Send update HCP request
  7. Validate if it is updated
PublishingEventTesttest1_hcp
  1. Create HCP
  2. Wait for HCP_CREATED event
  3. Update HCP first name
  4. Wait for HCP_CHANGED event
  5. Get entity and validate

test2_hcp
  1. Create HCP
  2. Wait for HCP_CREATED event
  3. Update HCP's last name
  4. Wait for HCP_CHANGED event
  5. Delete crosswalk
  6. Wait for HCP_REMOVED event

test3_hco
  1. Create HCO
  2. Wait for HCO_CREATED event
  3. Update HCO's name
  4. Wait for HCO_CHANGED event
  5. Delete crosswalk
  6. Wait for HCO_REMOVED event
" + }, + { + "title": "Integration Test For Iqvia Model", + "pageID": "302681788", + "pageLink": "/display/GMDM/Integration+Test+For+Iqvia+Model", + "content": "
Test classTest caseFlow
CRUDHCOAsynctest
  1. Send HCORequest to Kafka topic
  2. Wait for created event and validate
  3. Update HCO's name and send HCORequest to Kafka topic
  4. Wait for updated event and validate
  5. Remove entities
CRUDHCOAsyncComplextest
  1. Create Source HCO
  2. Send HCORequest with Source HCO to Kafka Topic
  3. Wait for created event and validate
  4. Create Source Department HCO - set Source HCO as Main HCO
  5. Send HCORequest with Source Department HCO
  6. Wait for event and validate
  7. Remove entities
CRUDHCPAsynctest
  1. Send HCPRequest to Kafka topic
  2. Wait for created event and validate
  3. Update HCP's Last Name and send HCPRequest to Kafka topic
  4. Wait for updated event and validate
  5. Remove entities
CRUDPostBulkAsynctestHCO
  1. Send EntitiesUpdateRequest with multiple HCO entities to Kafka topic
  2. Wait for entities-create event with specific correlationId header
  3. Validate message payload and check if all entities are created
  4. Remove entities

testHCP
  1. Send EntitiesUpdateRequest with multiple HCP entities to Kafka topic
  2. Wait for entities-create event with specific correlationId header
  3. Validate message payload and check if all entities are created
  4. Remove entities

testHCPRejected
  1. Send EntitiesUpdateRequest with multiple incorrect HCP entities to Kafka topic
  2. Wait for event with specific correlationId header
  3. Check if all entities have ValidationError and status is failed
CreateRelationAsynctestCreate
  1. Create HCO
  2. Create HCP
  3. Send RelationRequest with Relation Activity between HCP and HCO to Kafka topic
  4. Wait for event with specific correlationId header and validate status

testCreateRelations
  1. Create HCO
  2. Create HCP_1
  3. Create HCP_2 and validate response
  4. Create HCP_3 and validate response
  5. Create HCP_4 and validate response
  6. Create Activity Relations between HCP_1 → HCO, HCP_2 → HCO, HCP_3 → HCO, HCP_4 → HCO
  7. Send RelationRequest event with all relations to Kafka topic
  8. Wait for event with specific correlationId header and validate status
  9. Remove entities

testCraeteWithAddressCopy
  1. Create HCO
  2. Create HCP
  3. Create Activity Relation between HCP and HCO
  4. Send RelationRequest event to Kafka topic with param copyAddressFromTarget = true
  5. Wait for event with specific correlationId header and validate status is created
  6. Get HCP and HCO
  7. Validate updated HCP - check if address exists and contains HcoName attribute
  8. Remove entities

testDeactivateRelation
  1. Create HCO
  2. Create HCP
  3. Create Activity Relation between HCP and HCO with PrimaryAffiliationIndicator = true
  4. Send RelationRequest event to Kafka topic
  5. Wait for event with specific correlationId header and validate status is created
  6. Update Relation - set delete date on now
  7. Send RelationRequest event to Kafka topic
  8. Wait for event with specific correlationId header and validate status is deleted
  9. Remove entities
HCOAsyncErrorsTestCasetest
  1. Send HCORequest to Kafka topic - create HCO with incorrect values
  2. Wait for event with specific correlationId header and validate status is failed
HCPAsyncErrorsTestCasetest
  1. Send HCPRequest to Kafka topic - create HCP without permissions
  2. Wait for event with specific correlationId header and validate status is failed
UpdateRelationAsynctest
  1. Create HCO and validate status created
  2. Create HCP with affiliatedHCO and validate status created
  3. Get HCP and check if Workplace relation exists
  4. Get existing Relation
  5. Patch Relation - update ActEmail.Email attribute and validate if status is updated
  6. Get Relation and validate if ActEmail list size is 1
  7. Add Country attribute to Relation
  8. Send RelationRequest event to Kafka topic with updated Relation
  9. Wait for event with specific correlationId header and validate status is updated
  10. Get Relation and check if ActEmail and Country exist
  11. Add AffiliationStatus attribute to Relation
  12. Send RelationRequest event to Kafka topic with updated Relation
  13. Wait for event with specific correlationId header and validate status is updated
  14. Get Relation and check if ActEmail, Country and AffiliationStatus exist
  15. Remove entities
BundlingTesttest
  1. Send multiple HCORequests to Kafka topic - create HCOs
  2. For each request wait for event with status created and collect HCO's uri
  3. Check if number of requests equals number of received events
  4. Send multiple HCPRequests to Kafka topic - create HCPs
  5. For each request wait for event with status created and collect HCP's uri
  6. Check if number of requests equals number of received events
  7. Send multiple RelationRequests to Kafka topic - create Relation
  8. For each request wait for event with status created and collect Relation's uri
  9. Check if number of requests equals number of received events
  10. Set delete date on now for every HCO
  11. Send multiple HCORequests to Kafka topic
  12. For each request wait for event with status deleted
  13. Set delete date on now for every HCP
  14. Send multiple HCPRequests to Kafka topic
  15. For each request wait for event with status deleted
DCRResponseTestcreateAndAcceptDCRThenTryToAcceptAgainTest
  1. Create Hospital HCO
  2. Create Department HCO
  3. Set Hospital HCO as Department's Main HCO
  4. Create HCP with Affiliated HCO as Department
  5. Check if DCR is created
  6. Accept DCR and check if response is OK
  7. Accept DCR again and check if response is BAD_REQUEST
  8. Remove entities

createAndPartialAcceptThenConfirmNoLoop
  1. Create Hospital HCO
  2. Create Department HCO
  3. Set Hospital HCO as Department's Main HCO
  4. Create HCP with Affiliated HCO as Department
  5. Check if DCR is created
  6. Partial accept DCR and check if response is OK
  7. Get HCP entity and check if ValidationStatus attribute is "partialValidated"
  8. Check if DCR is not created - confirms that DCR creation does not loop
  9. Remove entities

createAndRejectDCRThenTryToRejectAgainTest
  1. Create Hospital HCO
  2. Create Department HCO
  3. Set Hospital HCO as Department's Main HCO
  4. Create HCP with Affiliated HCO as Department
  5. Check if DCR is created
  6. Reject DCR and check if response is OK
  7. Reject DCR again and check if response is BAD_REQUEST
  8. Remove entities
DeriveHCPAddressesTestCasederivedHCPAddressesTest
  1. Create HCP and validate response
  2. Create HCO Department with 1 Address and validate response
  3. Create HCO Hospital with 2 Addresses and validate response
  4. Create "Activity" Relation HCP → HCO Department and validate response
  5. Create "Has Health Care Role" Relation HCP → HCO Hospital and validate response
  6. Get HCP and check if it contains the Hospital's Addresses
  7. Update HCO Hospital Address and validate response
  8. Get HCP and check if it contains the updated Hospital's Addresses
  9. Remove HCO Hospital Address and validate response
  10. Get HCP and check if it contains the Hospital's Addresses (without the removed one)
  11. Remove "Has Health Care Role" Relation HCP → HCO Hospital and validate response
  12. Get HCP and check if Addresses are removed
  13. Remove entities
EVRDCRUpdateHCPLUDTestCasetest
  1. Create Hospital HCO
  2. Create Department HCO
  3. Set Hospital HCO as Department's Main HCO
  4. Create HCP with Affiliated HCO as Department
  5. Get Change requests and check that DCR was created
  6. Update HCP
    1. ValidationStatus = notvalidated
    2. change existing GRV crosswalk - set DataProvider = true
    3. add DCR crosswalk - EVR set ContributorProvider = true
    4. add another EVR crosswalk set DataProvider = true
  7. Send update request and validate response
  8. Update HCP (partial update)
    1. ValidationStatus = validated
    2. Remove First and Last Name
    3. Remove crosswalks
  9. Send update request and validate response
  10. Get HCP and validate
  11. Check if the ValidationStatus & LUD (updateDate/singleAttributeUpdateDate) were refreshed
  12. Remove crosswalks
ExistingDepartmentAndHCPTestCasecreateHCP_HCPNotInPendingStatus_NoDCR
  1. Create Hospital HCO
  2. Create Department HCO with Hospital HCO as MainHCO
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = validated
  4. Get HCP and validate attributes
  5. Get Change requests and check if the list is empty
  6. Remove crosswalks

createHCP_HCPIsInPendingStatus_HCPDCRCreated
  1. Create Hospital HCO
  2. Create Department HCO with Hospital HCO as MainHCO
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = pending
  4. Get HCP and validate attributes
  5. Get Change requests and check if there is one NEW_HCP change request
  6. Remove crosswalks

createHCP_HCPHasTwoWorkplaces_HCPAndWorkplaceDCRCreated
  1. Create Hospital HCO
  2. Create Department1 HCO with Hospital HCO as MainHCO
  3. Create Department2 HCO with Hospital HCO as MainHCO
  4. Create HCP with affiliated HCO (Department1 HCO) and ValidationStatus = pending
  5. Get HCP and validate attributes
    1. has only one Workplace (Department1 HCO)
  6. Update HCP with affiliated HCO (Department2 HCO) and ValidationStatus = pending
  7. Get HCP and validate attributes
    1. has only one Workplace (Department2 HCO)
  8. Get Change requests and check if there is one NEW_HCP change request
  9. Remove crosswalks
NewHCODCRTestCasescreateHCP_DepartmentDoesNotExist_HCOL1DCR
  1. Create Hospital HCO
  2. Create Department HCO with Hospital HCO as MainHCO
  3. Create HCP with affiliated HCO (Department HCO)
  4. Get HCP and validate attributes
    1. Validate Workplace and MainWorkplace
  5. Get Change requests and check if the list is empty
  6. Remove crosswalks

createHCP_HospitalAndDepartmentDoesNotExist_HCOL1DCR
  1. Create Department HCO with Hospital HCO (not created yet) as MainHCO
  2. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = pending
  3. Get HCP and validate attributes
  4. Get HCO Department and validate attributes
  5. Get Change requests and check if there is one NEW_HCO_L2 change request
  6. Remove crosswalks
NewHCPDCRTestCasecreateHCPTest
  1. Create HCO Hospital
  2. Create HCO Department
  3. Create HCP with affiliated HCO (Department HCO)
  4. Get HCP and validate Workplace and MainWorkplace
  5. Remove crosswalks

createHCPPendingTest
  1. Create HCO Hospital
  2. Create HCO Department
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = pending
  4. Validate HCP response
  5. Validate if DCR is created
  6. Remove crosswalks

createHCPNotValidatedTest
  1. Create HCO Hospital
  2. Create HCO Department
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = notvalidated
  4. Validate HCP response
  5. Validate if DCR is created
  6. Remove crosswalks

createHCPNotValidatedMergedIntoNotValidatedTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCO Department
  4. Create HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = notvalidated
  5. Validate HCP response
  6. Validate if DCR is not created
  7. Remove crosswalks

createHCPPendingMergedIntoNotValidatedTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCO Department
  4. Create HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = pending
  5. Validate HCP response
  6. Validate if DCR is created
  7. Remove crosswalks

createHCPPendingMergedIntoNotValidatedWithAnotherGRVNotValidatedTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)
  4. Create HCO Department
  5. Create HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = pending
  6. Validate if DCR is created
  7. Remove crosswalks

createHCPNotValidatedMergedIntoNotValidatedWithAnotherGRVNotValidatedTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)
  4. Create HCO Department
  5. Create HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = notvalidated
  6. Validate if DCR is not created
  7. Remove crosswalks

createHCPPendingMergedIntoNotValidatedWithGRVAsUpdateTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)
  4. Create HCO Department
  5. Create HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = notvalidated
  6. Get HCP and validate crosswalk GRV count == 3
  7. Validate if DCR is not created
  8. Update HCP_3 set code = pending
  9. Validate if DCR is created
  10. Remove crosswalks
PfDataChangeRequestLiveCycleTesttest
  1. Create HCO Hospital
  2. Create HCO Department with parent HCO Hospital
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = pending
  4. Check if DCR exists
  5. Check if PfDataChangeRequest exists
  6. Accept DCR
  7. Check that HCP ValidationStatus == validated
  8. Check that PfDataChangeRequest is closed
  9. Remove crosswalks
ResponseInfoTestTest
  1. Create HCO Hospital
  2. Create HCO Department with parent HCO Hospital
  3. Create HCP_1 with affiliated HCO (Department HCO) and ValidationStatus = pending
  4. Create HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = pending
  5. Check that DCR_1 exists
  6. Check that DCR_2 exists
  7. Check that PfDataChangeRequest exists
  8. Respond for DCR_1 - update HCP with merged uris
    1. change First Name
    2. set ValidationStatus = validated
  9. Get HCP and check if ValidationStatus is validated
  10. Check if PfDataChangeRequest is closed and validate ResponseInfo
  11. Respond for DCR_2 - accept and validate message
  12. Check if PfDataChangeRequest is closed and validate ResponseInfo
  13. Check that DCR_2 does not exist
  14. Remove crosswalks
RevalidateNewHCPDCRTestCasetest
  1. Create Parent HCO and validate response
  2. Create Department HCO with Parent HCO and validate response
  3. Create HCP with affiliated HCO (Department HCO), ValidationStatus = pending and validate response
  4. Check that DCR exists
  5. Check that PfDataChangeRequest exists
  6. Respond to DCR - accept
  7. Check that HCP has ValidationStatus = validated
  8. Send revalidate event to Kafka topic
  9. Check that new DCR was created
  10. Check that the previous PfDataChangeRequest has ResponseStatus=accept
  11. Check that new PfDataChangeRequest exists
  12. Check that HCP has ValidationStatus = pending
  13. Remove crosswalks
StandarNonExistingDepartmentTestCasecreateNewHCPTest
  1. Create Hospital HCO
  2. Create HCP with a new affiliated HCO (Department HCO with Hospital HCO as MainHCO)
  3. Get HCP and validate attributes (Workplace and MainWorkplace)
UpdateHCPPhonestest
  1. Create HCP and validate response
  2. Update Phone and send patchHCP request
  3. Validate response status is OK
  4. Remove crosswalks
GetEntityTeststestGetEntityByUri
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get HCP by uri and validate attributes
  3. Remove crosswalks

testSearchEntity
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get entities using filter - HCP by country, first name and last name
  3. Validate if entity exists
  4. Remove crosswalks

testSearchEntityWithoutCountryFilter
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get by crosswalk HCO_1 and check if it exists
  3. Get by crosswalk HCO_2 and check if it exists
  4. Get entities using filter - HCO by country and (HCO_1 name or HCO_2 name)
  5. Validate that both HCOs exist
  6. Remove crosswalks

testGetEntityByCrosswalk
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get HCP by crosswalk
  3. Validate if HCP exists
  4. Remove crosswalks

testGetEntitiesByUris
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get HCP by uri
  3. Validate if HCP exists
  4. Remove crosswalks

testGetEntityCountry
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get HCP's country
  3. Validate response
  4. Remove crosswalks

testGetEntityCountryOv
  1. Create HCP with ValidationStatus = validated, affiliatedHcos (HCO_1, HCO_2) and Country = Brazil
  2. Update HCP
    1. update existing crosswalk - set ContributorProvider = true
    2. add new crosswalk as DataProvider
    3. set Country ignored = true
    4. update Country - set to China
  3. Get HCP's Country and validate
    1. check value == BR-Brazil
    2. check ov == true
  4. Update HCP - make ignored=true, ov=false on all countries
  5. Get HCP's Country and validate
    1. lookupCode == BR
  6. Remove crosswalks
MergeUnmergeHCPTestcreateHCP1andHCP2_checkMerge_checkUnmerge_API
  1. Create HCP_1 and validate response
  2. Create HCP_2 and validate response
  3. Merge HCP_1 with HCP_2
  4. Get HCP_1 after merge and validate attributes
  5. Get HCP_2 after merge and validate attributes
  6. Unmerge HCP_1 and HCP_2
  7. Get HCP_1 after unmerge and validate attributes
  8. Get HCP_2 after unmerge and validate attributes
  9. Unmerge HCP_1 and HCP_2 - validate if response code is BAD_REQUEST
  10. Merge HCP_1 and NOT_EXISTING_URI - validate if response code is NOT_FOUND
  11. Remove crosswalks
HCPMatcherTestCasetestPositiveMatch
  1. Create 2 identical HCP objects
  2. Check that objects match

testNegativeMatch
  1. Create 2 different HCP objects
  2. Check that objects do not match
GetEntitiesTesttestGetHCPs
  1. Get entities with filter: country = BR and entityType = HCP
  2. Validate response
    1. All entities are HCP
    2. At least one entity has Workplace

testGetHCOs
  1. Get entities with filter: country = BR and entityType = HCO
  2. Validate response
    1. All entities are HCO
GetEntityUSTestcreateHCPTest
  1. Create HCP and validate response
  2. Get HCP and check if it exists
  3. Remove crosswalks
" + }, + { + "title": "Integration Test For COMPANY Model", + "pageID": "302681792", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model", + "content": "
Test classTest caseFlow
AttributeSetterTestTestAttributeSetter
  1. Create HCP with TypeCode attribute
  2. Get entity and validate if has autofilled attributes
  3. Update TypeCode field: send "None" as attribute value
  4. Update HCP request
  5. Get entity and validate autofilled attributes by DQ rules
  6. Update TypeCode field
  7. Update HCP request
  8. Get entity and validate autofilled attributes by DQ rules
  9. Update TypeCode field
  10. Update HCP request
  11. Get entity and validate autofilled NON-HCP value
  12. Set HCP's crosswalk delete date
  13. Update and validate if delete date has been set
BatchControllerTestmanageBatchInstance_checkPermissionsWithLimitation
  1. Create batch instance
  2. Create batch stage
  3. Validate response code: 403 and message: Cannot access the processor which has been protected
  4. Get batch instance with incorrect name
  5. Validate response code: 403 and message: Batch 'testBatchNotAdded' is not allowed. 
  6. Update batch stage with existing stage name
  7. Update batch stage with limited user
  8. Validate response code: 403 and message: Stage '' is not allowed.
  9. Update batch stage with not authorized stage name
  10. Validate response code: 403 and message: Stage '' passed in Body is not allowed.

createBatchInstance
  1. Create batch instance and validate
  2. Complete stage 1 and start stage 2
  3. Validate stages
  4. Complete stage 2
  5. Start stage 3
  6. Validate all 3 stages
  7. Complete stage 3 and finish batch
  8. Get batch instance and validate
TestBatchBundlingErrorQueueTesttestBatchWorkflowTest
  1. Create batch instance
  2. Get errors and check that there are no errors
  3. Create batch stage: HCO_LOADING
  4. Create batch stage: HCP_LOADING
  5. Create batch stage: RELATION_LOADING
  6. Send entities to HCO_LOADING stage
  7. Finish HCO_LOADING stage
  8. Check sender job status - validate if all entities were sent to Reltio
  9. Check processing job status - validate if all entities were processed
  10. Send entities to HCP_LOADING stage
  11. Finish HCP_LOADING stage
  12. Check sender job status - validate if all entities were sent to Reltio
  13. Check processing job status - validate if all entities were processed
  14. Send relations to RELATION_LOADING stage
  15. Finish RELATION_LOADING stage
  16. Check sender job status - validate if all relations were sent to Reltio
  17. Check processing job status - validate if all relations were processed
  18. Get batch instance and validate completion status
  19. Validate expected errors
  20. Resubmit errors
  21. Validate expected errors
  22. Validate if all errors were resubmitted
TestBatchBundlingTesttestBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Create batch stage: HCP_LOADING
  4. Create batch stage: RELATION_LOADING
  5. Send entities to HCO_LOADING stage
  6. Finish HCO_LOADING stage
  7. Check sender job status - validate if all entities were sent to Reltio
  8. Check processing job status - validate if all entities were processed
  9. Send entities to HCP_LOADING stage
  10. Finish HCP_LOADING stage
  11. Check sender job status - validate if all entities were sent to Reltio
  12. Check processing job status - validate if all entities were processed
  13. Send relations to RELATION_LOADING stage
  14. Finish RELATION_LOADING stage
  15. Check sender job status - validate if all relations were sent to Reltio
  16. Check processing job status - validate if all relations were processed
  17. Get batch instance and validate completion status
  18. Get Relations by crosswalk and validate
TestBatchHCOBulkTesttestBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Finish HCO_LOADING stage
  5. Check sender job status - validate if all entities were sent to Reltio
  6. Check processing job status - validate if all entities were processed
  7. Get batch instance and validate completion status
  8. Get entities by crosswalk and validate
TestBatchHCOTesttestBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Finish HCO_LOADING stage
  5. Check sender job status - validate if all entities were sent to Reltio
  6. Check processing job status - validate if all entities were processed
  7. Get batch instance and validate completion status
  8. Get entities by crosswalk and validate created status

testBatchWorkflowTest_CheckFAILonLoadJob
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Update batch stage status: FAILED
  5. Get batch instance and validate

testBatchWorkflowTest_SendEntities_Update_and_MD5Skip
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Finish HCO_LOADING stage
  5. Get batch instance and validate completion status
  6. Get entities by crosswalk and validate create status
  7. Create batch instance
  8. Create batch stage: HCO_LOADING
  9. Send entities to HCO_LOADING stage (skip 2 entities - MD5 checksum changed)
  10. Finish HCO_LOADING stage
  11. Get batch instance and validate completion status
  12. Get entities by crosswalk and validate update status

testBatchWorkflowTest_SendEntities_Update_and_DeletesProcessing
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Finish HCO_LOADING stage
  5. Check sender job status - validate if all entities were sent to Reltio
  6. Check processing job status - validate if all entities were processed
  7. Check deleting job status - validate if all entities were sent
  8. Check deleting processing job - validate if all entities were processed
  9. Get batch instance and validate completion status
  10. Get entities by crosswalk and validate delete status
  11. -- second run
  12. Create batch instance
  13. Create batch stage: HCO_LOADING
  14. Send entities to HCO_LOADING stage (skip 2 entities - delete in post processing)
  15. Finish HCO_LOADING stage
  16. Check sender job status - validate if all entities were sent to Reltio
  17. Check processing job status - validate if all entities were processed
  18. Check deleting job status - validate if all entities were sent
  19. Check deleting processing job - validate if all entities were processed
  20. Get batch instance and validate completion status
  21. Get entities by crosswalk and validate delete status
  22. -- third run
  23. Create batch instance for checking activation
  24. Create batch stage: HCO_LOADING
  25. Send entities to HCO_LOADING stage
  26. Finish HCO_LOADING stage
  27. Check sender job status - validate if all entities were sent to Reltio
  28. Check processing job status - validate if all entities were processed
  29. Check deleting job status - validate if all entities were sent
  30. Check deleting processing job - validate if all entities were processed
  31. Get batch instance and validate completion status
  32. Get entities by crosswalk and validate delete status
TestBatchHCPErrorQueueTesttestBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCP_LOADING
  3. Get errors and check that there are no errors
  4. Send entities to HCP_LOADING stage
  5. Finish HCP_LOADING stage
  6. Check sender job status - validate if all entities were sent to Reltio
  7. Check processing job status - validate if all entities were processed
  8. Get errors and validate that the expected errors exist
  9. Resubmit errors
  10. Get errors and validate if all were resubmitted
TestBatchHCPPartialOverwriteTesttestBatchWorkflowTest
  1. Create HCP
  2. Create batch instance
  3. Create batch stage: HCP_LOADING
  4. Send entities to HCP_LOADING stage with updated last name
  5. Finish HCP_LOADING stage
  6. Check sender job status - validate if all entities are created in mongo
  7. Check processing job status - validate if all entities were processed
  8. Get batch instance and validate completion status
  9. Get entities by crosswalk and validate
TestBatchHCPSoftDependentTesttestBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCP_LOADING
  3. Check Sender job status - SOFT DEPENDENT 
  4. Send entities to HCP_LOADING stage
  5. Finish HCP_LOADING stage
  6. Check sender job status - validate if all entities are sent to Reltio
  7. Check processing job status - validate if all entities were processed
  8. Get batch instance and validate completion status
  9. Get entities by crosswalk and validate created status
TestBatchHCPTesttestBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCP_LOADING
  3. Send entities to HCP_LOADING stage
  4. Finish HCP_LOADING stage
  5. Check sender job status - validate if all entities are sent to Reltio
  6. Check processing job status - validate if all entities were processed
  7. Get batch instance and validate completion status
  8. Get entities by crosswalk and validate created status
TestBatchMergeTesttestBatchWorkflowTest
  1. Create 4 x HCP and validate response status
  2. Get entities and validate if are created
  3. Create batch instance
  4. Create batch stage: MERGE_ENTITIES_LOADING
  5. Send merge entities objects (Reltio, Onekey)
  6. Finish MERGE_ENTITIES_LOADING stage
  7. Check sender job status - validate if all tags are sent to Reltio
  8. Check processing job status - validate if all entities were processed
  9. Get batch instance and validate completion status
  10. Get entities and validate update status (check if tags are visible in Reltio)
  11. Create batch instance
  12. Create batch stage: MERGE_ENTITIES_LOADING
  13. Send unmerge entities objects (Reltio, Onekey)
  14. Finish MERGE_ENTITIES_LOADING stage
  15. Check sender job status - validate if all tags are sent to Reltio
  16. Check processing job status - validate if all entities were processed
  17. Get batch instance and validate completion status
TestBatchPatchHCPPartialOverwriteTest
  1. Create batch instance
  2. Create batch stage: HCP_LOADING
  3. Create HCP entity with crosswalk's delete date set on now
  4. Send entities to HCP_LOADING stage
  5. Finish HCP_LOADING stage
  6. Check sender job status - validate if all entities are sent to Reltio
  7. Check processing job status - validate if all entities were processed
  8. Get batch instance and validate completion status
  9. Get entities by crosswalk and validate created status
  10. Create batch instance
  11. Create batch stage: HCP_LOADING
  12. Send entities PATCH to HCP_LOADING stage with empty crosswalk's delete date and missing first and last name
  13. Finish HCP_LOADING stage
  14. Check sender job status - validate if all entities are sent to Reltio
  15. Check processing job status - validate if all entities were processed
  16. Get batch instance and validate completion status
  17. Get entities by crosswalk and validate if they are updated
TestBatchRelationTest | testBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Create batch stage: HCP_LOADING
  4. Create batch stage: RELATION_LOADING
  5. Send entities to HCO_LOADING stage
  6. Finish HCO_LOADING stage
  7. Check sender job status - validate if all entities were sent to Reltio
  8. Check processing job status - validate if all entities were processed
  9. Send entities to HCP_LOADING stage
  10. Finish HCP_LOADING stage
  11. Check sender job status - validate if all entities were sent to Reltio
  12. Check processing job status - validate if all entities were processed
  13. Send relations to RELATION_LOADING stage
  14. Finish RELATION_LOADING stage
  15. Check sender job status - validate if all relations were sent to Reltio
  16. Check processing job status - validate if all relations were processed
  17. Get batch instance and validate completion status
TestBatchTAGSTest | testBatchWorkflowTest
  1. Create HCP
  2. Get HCP and check that there are no tags
  3. Create batch instance
  4. Create batch stage: TAGS_LOADING
  5. Send request: Append entity tags objects
  6. Finish TAGS_LOADING stage
  7. Check sender job status - validate if all entities were sent to Reltio
  8. Check processing job status - validate if all entities were processed
  9. Get batch instance and validate completion status
  10. Create batch instance
  11. Create batch stage: TAGS_LOADING - DELETE
  12. Send request: Delete entity tags objects
  13. Check sender job status - validate if all entities were sent to Reltio
  14. Check processing job status - validate if all entities were processed
  15. Get batch instance and validate update status
  16. Get entity and check if tags are removed from Reltio
COMPANYGlobalCustomerIdSearchOnLostMergeEntitiesTest | test
  1. Create first HCP and validate response status
  2. Create second HCP and validate response status
  3. Create third HCP and validate response status
  4. Merge HCP2 with HCP3 and validate response status
  5. Merge HCP2 with HCP1 and validate response status
  6. Get entities: filter by COMPANYGlobalCustomerID and HCP1Uri
  7. Validate if exists
  8. Get entities: filter by COMPANYGlobalCustomerID and HCP2Uri
  9. Validate if exists
  10. Get entities: filter by COMPANYGlobalCustomerID and HCP3Uri
  11. Validate if exists
COMPANYGlobalCustomerIdTest | test
  1. Create HCP_1 with RX_AUDIT crosswalk
  2. Wait for HCP_CREATED event
  3. Create HCP_2 with GRV crosswalk
  4. Wait for HCP_CREATED event
  5. Merge both HCP's with RX_AUDIT being winner
  6. Wait for HCP_MERGE, HCP_LOST_MERGE and HCP_CHANGED events
  7. Get entities by uri and validate. Check if merge succeeded and resulting profile has winner COMPANYId.
  8. Update HCP_1: set delete date on RX_AUDIT crosswalk
  9. Check if entity's COMPANYID has not changed after softDeleting the crosswalk
  10. Get HCP_1 and validate COMPANYGlobalCustomerID after soft deleting crosswalk
  11. Remove HCP_1 by crosswalk
  12. Remove HCP_2 by crosswalk

testWithDeleteDate
  1. Create HCP_1 with crosswalk delete date
  2. Wait for HCP_CREATED event
  3. Create HCP_2
  4. Wait for HCP_CREATED event
  5. Merge both HCP's
  6. Wait for HCP_MERGE, HCP_LOST_MERGE and HCP_CHANGED events
  7. Check if merge succeeded and resulting profile has winner COMPANYId.
  8. Remove HCP_1 by crosswalk
  9. Remove HCP_2 by crosswalk
RelationEventChecksumTest | test
  1. Create HCP and validate status
  2. Get HCP and validate if exists
  3. Create HCO and validate status
  4. Create Employment Relation between HCP and HCO - validate response status
  5. Wait for RELATIONSHIP_CREATED event and validate
  6. Find Relation by id and keep checksum
  7. Update Relation title attribute and validate response
  8. Wait for RELATIONSHIP_CHANGED event
  9. Validate if checksum has changed
  10. Delete HCO crosswalk and validate
  11. Delete HCP crosswalk and validate
  12. Delete Relation crosswalk and validate
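The checksum comparison in steps 6-9 boils down to hashing a canonical form of the relation before and after the attribute update. A minimal sketch, assuming the checksum covers the entity's JSON payload (the exact fields included by the HUB are not specified here):

```python
import hashlib
import json

def entity_checksum(entity: dict) -> str:
    """Checksum over the canonical JSON form of an entity/relation.
    Which fields the real HUB checksum covers is an assumption."""
    canonical = json.dumps(entity, sort_keys=True, separators=(",", ":"))
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

relation = {"type": "Employment", "attributes": {"Title": "Nurse"}}
before = entity_checksum(relation)
relation["attributes"]["Title"] = "Doctor"   # step 7: update Relation title attribute
after = entity_checksum(relation)
assert before != after                        # step 9: checksum has changed
```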
CreateChangeRequestTest | createChangeRequestTest
  1. Create Change Request
  2. Create HCP
  3. Get HCP and validate
  4. Update HCP's First Name with dcrId from Change Request
  5. Init Change Request and validate response is not null
  6. Delete Change Request
  7. Delete HCP's crosswalk
AttributesEnricherNoCachedTest | testCreateFailedRelationNoCache
  1. Create HCO
  2. Create HCP
  3. Create Relation with missing attributes - validate response status is failed
  4. Search Relation in mongo and check that it does not exist
AttributesEnricherTest | testCreate
  1. Create HCP and validate
  2. Create HCP and validate
  3. Create Relation and validate
  4. Get HCP and validate if ProviderAffiliations attribute exists
  5. Update HCP's Last Name
  6. Get HCP and validate if ProviderAffiliations attribute exists
  7. Check that Last Name is updated
  8. Remove HCP, HCO and Relation by crosswalk
AttributesEnricherWithDeleteDateOnRelationTest | testCreateAndUpdateRelationWithDeleteDate
  1. Create HCP and validate
  2. Create HCP and validate
  3. Create Relation and validate
  4. Get HCP and validate if ProviderAffiliations attribute exists
  5. Update HCP's Last Name
  6. Get HCP and validate if ProviderAffiliations attribute exists
  7. Check if Last Name is updated
  8. Set Relation's crosswalk delete date on now and update
  9. Update HCP's Last Name
  10. Get HCP and validate that ProviderAffiliations attribute does not exist
  11. Check that Last Name is updated
  12. Send update Relation request and check status is deleted
AttributesEnricherWithMultipleEndObjects | testCreateWithMultipleEndObjects
  1. Create HCO_1
  2. Create HCO_2
  3. Create HCP
  4. Create Relation between HCP and HCO_1
  5. Create Relation between HCP and HCO_2
  6. Get HCP and validate if ProviderAffiliations attribute exists
  7. Update HCP's Last Name
  8. Get HCP and validate that ProviderAffiliations attribute exists
  9. Remove all entities
UpdateEntityAttributeTest | shouldUpdateIdentifier
  1. Create HCP and validate
  2. Update HCP's attribute: insert identifier and validate
  3. Update HCP's attribute: update identifier and validate
  4. Update HCP's attribute: merge identifier and validate
  5. Update HCP's attribute: replace identifier and validate
  6. Update HCP's attribute: delete identifier and validate
  7. Remove all entities by crosswalk
CreateEntityTest | createAndUpdateEntityTest
  1. Create DCR entity
  2. Get entity and validate
  3. Update DCR ID attribute
  4. Validate updated entity
  5. Get matches entities and validate that response is not null
  6. Remove entity
CreateHCPWithoutCOMPANYAddressId | createHCPTest
  1. Create HCP
  2. Get HCP and validate fields
  3. Get generatedId from Mongo cache collection keyIdRegistry
  4. Validate if created HCP's address has COMPANYAddressID
  5. Check if COMPANYAddressID equals generatedId
  6. Remove entity
GetMatchesTest | createHCPTest
  1. Create HCP_1
  2. Create HCP_2 with similar attributes and values
  3. Get matches for HCP_1
  4. Check if matches size >= 0
TranslateLookupsTest | translateLookupTest
  1. Send get translate lookups request: Type=AddressStatus, canonicalCode=A, sourceName=ONEKEY
  2. Assert response is not null
DelayRankActivationTest | test
  1. Create HCO_A
  2. CREATE HCO_B1
  3. CREATE HCO_B2
  4. CREATE HCO_B3
  5. CREATE RELATION B1 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)
  6. CREATE RELATION B2 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)
  7. CREATE RELATION B3 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)
  8. Check UPDATE ATTRIBUTE events:
    1. UPDATE RANK event exists with Rank = 3 for B1.A
    2. UPDATE RANK event exists with Rank = 2 for B2.A
  9. Check PUBLISHED events:
    1. B3 - RELATIONSHIP_CREATED event exists with Rank = 1
    2. B1 - RELATIONSHIP_CHANGED event exists with Rank = 3
    3. B2 - RELATIONSHIP_CHANGED event exists with Rank = 2
  10. Check order of events:
    1. B1 - RELATIONSHIP_CHANGED and B2 - RELATIONSHIP_CHANGED are after UPDATE events
  11. CREATE HCO_B4
  12. CREATE RELATION B4 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: GRV)
  13. Check UPDATE ATTRIBUTE events:
    1. UPDATE RANK event exists with Rank = 4 for B4.A
  14. Check PUBLISHED events:
    1. B4 - RELATIONSHIP_CHANGED event exists with Rank = 4
  15. Check order of events:
    1. B4 - RELATIONSHIP_CHANGED is after UPDATE events
  16. CREATE HCO_B5
  17. CREATE RELATION B5 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.FPA, source: ONEKEY)
  18. Check UPDATE ATTRIBUTE events:
    1. UPDATE RANK event exists with Rank = 4 for B1.A
    2. UPDATE RANK event exists with Rank = 3 for B2.A
    3. UPDATE RANK event exists with Rank = 2 for B3.A
    4. UPDATE RANK event exists with Rank = 5 for B4.A
  19. Check PUBLISHED events:
    1. B1 - RELATIONSHIP_CHANGED event exists with Rank = 4
    2. B2 - RELATIONSHIP_CHANGED event exists with Rank = 3
    3. B3 - RELATIONSHIP_CHANGED event exists with Rank = 2
    4. B4 - RELATIONSHIP_CHANGED event exists with Rank = 5
    5. B5 - RELATIONSHIP_CREATED event exists with Rank = 1
  20. Check order of events:
    1. All published RELATIONSHIP_CHANGED are after UPDATE_RANK events
  21. Set deleteDate on B1.A
  22. Check UPDATE ATTRIBUTE events:
    1. UPDATE RANK event exists with Rank = 4 for B4.A
  23. Check PUBLISHED events:
    1. B4 - RELATIONSHIP_CHANGED event exists with Rank = 4
  24. Check order of events:
    1. Published RELATIONSHIP_CHANGED is after UPDATE_RANK event
  25. Get B2.A relation and check Rank = 3
  26. Get B3.A relation and check Rank = 2
  27. Get B4.A relation and check Rank = 4
  28. Get B5.A relation and check Rank = 1
  29. Clear data
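One plausible reading of the ranking behavior exercised above, sketched below: a new ONEKEY relation takes Rank 1 and shifts existing ranks up, while other sources (e.g. GRV) are appended at the end, and each shifted relation produces an UPDATE_RANK event before the RELATIONSHIP_CHANGED events are published. This is an inference from the test steps, not the documented algorithm:

```python
def rerank(relations, new_rel):
    """Insert a relation, recompute ranks, and collect UPDATE_RANK events
    for every relation whose rank changed. The insertion rule (ONEKEY at
    the head, other sources appended) is inferred from the test flow."""
    if new_rel["source"] == "ONEKEY":
        relations.insert(0, new_rel)
    else:
        relations.append(new_rel)
    events = []
    for rank, rel in enumerate(relations, start=1):
        if rel.get("rank") != rank:
            rel["rank"] = rank
            events.append(("UPDATE_RANK", rel["name"], rank))
    return events

rels = []
rerank(rels, {"name": "B1.A", "source": "ONEKEY"})
rerank(rels, {"name": "B2.A", "source": "ONEKEY"})
events = rerank(rels, {"name": "B3.A", "source": "ONEKEY"})
# Matches step 8: B3.A ends at Rank 1, B2.A shifts to 2, B1.A shifts to 3
```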
RawDataTest | shouldRestoreHCP
  1. Create HCP entity
  2. Delete HCP by crosswalk
  3. Search entity by name - expected not found
  4. Restore HCP entity
  5. Search entity by name
  6. Clear data
shouldRestoreHCO
  1. Create HCO entity
  2. Delete HCO by crosswalk
  3. Search entity by name - expected not found
  4. Restore HCO entity
  5. Search entity by name
  6. Clear data
shouldRestoreRelation
  1. Create HCP entity
  2. Create HCO entity
  3. Create relation from HCP to HCO
  4. Delete relation by crosswalk
  5. Get relation by crosswalk - expected not found
  6. Restore relation
  7. Get relation by crosswalk
  8. Clear data
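The three restore flows above follow the same pattern: the deleted entity's raw payload is retained so it can be recreated later. A minimal sketch, assuming a raw-payload archive alongside the live store (store names are illustrative):

```python
# Sketch of the delete/restore cycle from RawDataTest.
live, raw_archive = {}, {}

def create(entity_id, payload):
    live[entity_id] = payload
    raw_archive[entity_id] = dict(payload)   # raw copy retained for restore

def delete_by_crosswalk(entity_id):
    live.pop(entity_id, None)                # entity disappears from search

def restore(entity_id):
    live[entity_id] = dict(raw_archive[entity_id])

create("HCP-1", {"name": "Jane"})
delete_by_crosswalk("HCP-1")
assert "HCP-1" not in live                   # step 3: search -> not found
restore("HCP-1")
assert live["HCP-1"]["name"] == "Jane"       # step 5: searchable again
```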
TestBatchUpdateAttributesTest
testBatchWorkFlowTest
  1. Create 2 x HCP and validate response status
  2. Get entities and validate if they are created
  3. Test Insert Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if inserted identifiers are visible in Reltio)
  4. Test Update Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if updated identifiers are visible in Reltio)
  5. Test Merge Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if merged identifiers are visible in Reltio)
  6. Test Replace Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if replaced identifiers are visible in Reltio)
  7. Test Delete Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if deleted identifiers are visible in Reltio)
  8. Remove all entities by crosswalk and all batch instances by id
" + }, + { + "title": "Integration Test For COMPANY Model China", + "pageID": "302681804", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+China", + "content": "
Test class | Test case | Flow
ChinaComplexEventCase | shouldCreateHCPAndConnectWithAffiliatedHCOByName
  1. Create HCO (AffiliatedHCO) and validate response
  2. Get entities with filter by HCO's Name and entityType
  3. Validate if exists
  4. Create HCP (V2Complex method)
    1. with not existing MainHCO
    2. with affiliatedHCO and existing HCO's Name
  5. Get HCP and validate
    1. Check if affiliatedHCO Uri equals created HCO uri (Workplace)
  6. Remove entities

shouldCreateHCPAndMainHCO
  1. Create HCO (AffiliatedHCO) and validate response
  2. Create HCP (V2Complex method)
    1. with AffiliatedHCO - set uri from previously created HCO
    2. with MainHCO without uri
  3. Get HCP and validate
    1. Check if affiliatedHCO Uri equals created HCO uri (Workplace)
    2. Validate Workplace attributes
  4. Remove entities

shouldCreateHCPAndAffiliatedHCO
  1. Create HCO (MainHCO) and validate response
  2. Create HCP (V2Complex method)
    1. with AffiliatedHCO without uri (not existing HCO)
    2. with MainHCO - set objectURI from previously created Main HCO
  3. Get HCP and validate
    1. Check if MainHCO Uri equals created HCO uri (MainWorkplace)
    2. Validate MainWorkplace attributes
  4. Remove entities

shouldCreateHCPAndConnectWithAffiliations
  1. Create HCO (MainHCO) and validate response
  2. Create HCO (AffiliatedHCO) and validate response
  3. Create HCP (V2Complex method)
    1. with AffiliatedHCO - set uri from previously created Affiliated HCO
    2. with MainHCO - set objectURI from previously created Main HCO
  4. Get HCP and validate
    1. Check if affiliatedHCO Uri equals created HCO uri (Workplace)
    2. Check if MainHCO Uri equals created HCO uri (MainWorkplace)
    3. Validate Workplace and MainWorkplace attributes
  5. Remove entities

shouldCreateHCPAndAffiliations
  1. Create HCP (V2Complex method)
    1. without AffiliatedHCO uri
    2. without MainHCO objectURI
  2. Get HCP and validate
    1. Check if Workplace is created and has correct attributes
    2. Check if MainWorkplace is created and has correct attributes
    3. Validate Workplace and MainWorkplace attributes
  3. Remove entities
ChinaSimpleEventCase | shouldPublishCreateHCPInIqiviaModel
  1. Create HCP in COMPANYModel (V2Simple method)
  2. Validate response
  3. Get HCP entity and validate attributes
  4. Wait for Kafka output event
  5. Validate event
    1. Validate attributes and check if event is in IqiviaModel
  6. Remove entities
ChinaMergeEntityTest
  1. Create HCP_1 (V2Complex method) and validate response
  2. Create HCP_2 (V2Complex method) and validate response
  3. Merge entities HCP_1 and HCP_2
  4. Get HCP by HCP_1 uri and check if exists
  5. Wait for Kafka event on merge response topic
  6. Validate Kafka event
  7. Remove entities
ChinaWorkplaceValidationEntityTest | shouldValidateMainHCO
  1. Create HCP (V2Complex method)
    1. with 2 affiliatedHCO which do not exist
    2. with 1 MainHCO which does not exist
  2. Get HCP entity and check if exist
  3. Wait for Kafka event on response topic
  4. Validate Kafka event
    1. Validate MainWorkplace (1 exists)
    2. Validate Workplaces (2 exists)
    3. Validate MainHCO (1 exists)
    4. Assert MainWorkplace equals MainHCO
  5. Remove entities
" + }, + { + "title": "Integration Test For COMPANY Model DCR2Service", + "pageID": "302681794", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+DCR2Service", + "content": "
Test class | Test case | Flow
DCR2ServiceTest | shouldCreateHCPTest
  1. Create HCO and validate response
  2. Create DCR request (hcp-create)
  3. Send Apply Change request
  4. Get DCR status and validate
  5. Validate created entity
  6. Remove entities

shouldUpdateHCPChangePrimarySpecialtyTest
  1. Create HCP
  2. Create DCR request: update HCP Primary Speciality
  3. Validate DCR response
  4. Apply Change request
  5. Get DCR status and validate
  6. Get HCP and validate
  7. Get DCR and validate
  8. Remove all entities

shouldCreateHCOTest
  1. Create DCR Request (hco-create) and validate response
  2. Apply Change request
  3. Get DCR status and validate
  4. Get HCO and validate
  5. Get DCR and validate
  6. Remove all entities

shouldUpdateHCPChangePrimaryAffiliationTest
  1. Create HCO_1 and validate response
  2. Create HCO_2 and validate response
  3. Create HCP with affiliations and validate response
  4. Get HCO_1 and save COMPANYGlobalCustomerId
  5. Get HCP and save COMPANYGlobalCustomerId
  6. Get entities - search by HCO_1's COMPANYGlobalCustomerId and check if exists
  7. Get entities - search by HCP's COMPANYGlobalCustomerId and check if exists
  8. Create DCR Request and validate response: update HCP primary affiliation
  9. Apply Change request
  10. Get DCR status and validate
  11. Get HCP and validate
  12. Get DCR and validate
  13. Remove all entities

shouldUpdateHCPIgnoreRelation
  1. Create HCO_1 and validate response
  2. Create HCO_2 and validate response
  3. Create HCP with affiliations and validate response
  4. Get HCO_1 and save COMPANYGlobalCustomerId
  5. Get HCP and save COMPANYGlobalCustomerId
  6. Get entities - search by HCO_1's COMPANYGlobalCustomerId and check if exists
  7. Get entities - search by HCP's COMPANYGlobalCustomerId and check if exists
  8. Create DCR Request and validate response: ignore affiliation
  9. Apply Change request
  10. Get DCR status and validate
  11. Wait for RELATIONSHIP_CHANGED event
  12. Wait for RELATIONSHIP_INACTIVATED event
  13. Get HCP and validate
  14. Get DCR and validate
  15. Remove all entities

shouldUpdateHCPAddPrimaryAffiliationTest
  1. Create HCO and validate response
  2. Create HCP and validate response
  3. Create DCR Request: HCP update added new primary affiliation
  4. Validate DCR response
  5. Apply Change request
  6. Get DCR status and validate
  7. Get HCP and validate
  8. Get DCR and validate
  9. Remove all entities

shouldUpdateHCOAddAffiliationTest
  1. Create HCO_1 and validate
  2. Create HCO_2 and validate
  3. Create DCR Request: update HCO add other affiliation (OtherHCOtoHCOAffiliations)
  4. Validate DCR response
  5. Apply Change request
  6. Get DCR status and validate
  7. Get HCO's connections (OtherHCOtoHCOAffiliations) and validate
  8. Get DCR and validate
  9. Remove all entities

shouldInactivateHCP
  1. Create HCP and validate response
  2. Create DCR Request: Inactivate HCP
  3. Validate DCR response
  4. Apply Change request
  5. Get DCR status and validate
  6. Get HCP and validate
  7. Get DCR and validate
  8. Remove all entities

shouldUpdateHCPAddPrivateAddress
  1. Create HCP and validate response
  2. Create DCR Request: update HCP - add private address
  3. Validate DCR response
  4. Apply Change request
  5. Get DCR status and validate
  6. Get HCP and validate
  7. Get DCR and validate
  8. Remove all entities

shouldUpdateHCPAddAffiliationToNewHCO
  1. Create HCO and validate response
  2. Create HCP and validate response
  3. Create DCR Request: update HCP - add affiliation to new HCO
  4. Validate DCR response
  5. Apply Change request
  6. Get DCR status and validate
  7. Get HCP and validate
  8. Get HCO entity by crosswalk and save uri
  9. Get DCR and validate
  10. Remove all entities

shouldReturnValidationError
  1. Create DCR request with unknown entityUri
  2. Validate DCR response and check if REQUEST_FAILED

shouldCreateHCPOneKey
  1. Create HCP and validate response
  2. Create DCR Request: create OneKey HCP
  3. Validate DCR response
  4. Get DCR status and validate
  5. Get HCP and validate
  6. Get DCR and validate
  7. Remove all entities

shouldCreateHCPOneKeySpecialityMapping
  1. Create HCP and validate response
  2. Create DCR Request: create OneKey HCP with speciality value
  3. Validate DCR response
  4. Get DCR status and validate
  5. Get HCP and validate
  6. Get DCR and validate
  7. Remove all entities

shouldCreateHCPOneKeyRedirectToReltio
  1. Create HCP and validate response
  2. Create DCR Request: create OneKey HCP with speciality value "not found key"
  3. Validate DCR response
  4. Apply Change Request
  5. Get DCR status and validate
  6. Get HCP and validate
  7. Get DCR and validate
  8. Remove all entities

shouldCreateHCOOneKey
  1. Create HCO and validate response
  2. Create DCR Request: create OneKey HCO
  3. Validate DCR response
  4. Get DCR status and validate
  5. Get HCO and validate
  6. Get DCR and validate
  7. Remove all entities

shouldReturnMissingDataException
  1. Create DCR Request with missing data
  2. Validate DCR response: status = REQUEST_REJECTED and response has correct message

shouldReturnForbiddenAccessException
  1. Create DCR Request with forbidden access data
  2. Validate DCR response: status = REQUEST_FAILED and response has correct message

shouldReturnInternalServerError
  1. Create DCR Request with internal server error data
  2. Validate DCR response: status = REQUEST_FAILED and response has correct message
" + }, + { + "title": "Integration Test For COMPANY Model Region AMER", + "pageID": "302681796", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+AMER", + "content": "
Test class | Test case | Flow
MicroBrickTest | shouldCalculateMicroBricks
  1. Create HCP and validate response
  2. Wait for event on ChangeLog topic with specified country
  3. Get HCP entity and validate MicroBrick
  4. Update HCP with new zip codes and validate response
  5. Wait for event on ChangeLog topic with specified country
  6. Get HCP entity and validate MicroBrick
  7. Delete entities
ValidateHCPTest | validateHCPTest
  1. Create HCP and validate response status
  2. Create validation request with valid params
  3. Assert if response is ok and validation status is "Valid"

validateHCPTestNotValid
  1. Create HCP and validate response status
  2. Create validation request with not valid params
  3. Assert if response is ok and validation status is "NotValid"

validateHCPLookupTest
  1. Create HCP with "Speciality" attribute and validate response status
  2. Create lookup validation request with "Speciality" attribute
  3. Assert if response is ok and validation status is "Valid"
" + }, + { + "title": "Integration Test For COMPANY Model Region EMEA", + "pageID": "347655258", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+EMEA", + "content": "
Test class | Test case | Flow
AutofillTypeCodeTest | shouldProcessNonPrescriber
  1. Create HCP entity
  2. Validate type code value is Non-Prescriber on output topic
  3. Inactivate HCP entity
  4. Validate type code value is Non-Prescriber on history inactive topic
  5. Delete entity
shouldProcessPrescriber
  1. Create HCP entity
  2. Validate type code value is Prescriber on output topic
  3. Inactivate HCP entity
  4. Validate type code value is Prescriber on history inactive topic
  5. Delete entity
shouldProcessMerge
  1. Create first HCP entity
  2. Validate type code is Prescriber on output topic
  3. Create second HCP entity
  4. Validate type code is Non-Prescriber on output topic
  5. Merge entities
  6. Validate type code is Prescriber on output topic
  7. Inactivate first entity
  8. Validate type code is Non-Prescriber
  9. Delete second entity crosswalk
  10. Validate entity has end date on output topic
  11. Validate type code value is Prescriber on output topic
  12. Delete entity
shouldNotUpdateTypeCode
  1. Create HCP entity with correct type code value
  2. Validate there is no type code value provided by HUB technical source on output topic
  3. Delete entity
shouldProcessLookupErrors
  1. Create HCP entity with invalid sub type code and speciality values
  2. Validate type code value is concatenation of sub type code and speciality values on output topic
  3. Inactivate HCP entity
  4. Validate type code value is concatenation of sub type code and speciality values on history inactive topic
  5. Delete entity
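Taken together, the AutofillTypeCodeTest cases imply a derivation rule: the type code is looked up from sub type code and speciality, a value already provided by the source is left untouched, and a lookup miss falls back to concatenating the raw values. The lookup table below is invented for illustration; the real mapping lives in the HUB's lookup configuration:

```python
# Sketch of the type-code autofill behavior inferred from these tests.
# PRESCRIBER_LOOKUP is a hypothetical stand-in for the real lookup table.
PRESCRIBER_LOOKUP = {
    ("MD", "Cardiology"): "Prescriber",
    ("NURSE", "General"): "Non-Prescriber",
}

def autofill_type_code(sub_type_code, speciality, provided=None):
    if provided:                 # shouldNotUpdateTypeCode: keep the source value
        return provided
    try:
        return PRESCRIBER_LOOKUP[(sub_type_code, speciality)]
    except KeyError:             # shouldProcessLookupErrors: concatenation fallback
        return f"{sub_type_code}{speciality}"

assert autofill_type_code("MD", "Cardiology") == "Prescriber"
assert autofill_type_code("XX", "YY") == "XXYY"
assert autofill_type_code("MD", "Cardiology", provided="Prescriber") == "Prescriber"
```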
" + }, + { + "title": "Integration Test For COMPANY Model Region US", + "pageID": "302681784", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+US", + "content": "
Test class | Test case | Flow
CRUDMCOAsync | test
  1. Send MCORequest to Kafka topic
  2. Wait for created event
  3. Validate created MCO
  4. Update MCO's name
  5. Send MCORequest to Kafka topic
  6. Wait for updated event
  7. Validate updated entity
  8. Delete all entities
TestBatchMCOTest | testBatchWorkflowTest
  1. Create batch instance: testBatch
  2. Create MCO_LOADING stage
  3. Send MCO entities to MCO_LOADING stage
  4. Finish MCO_LOADING stage
  5. Check sender job status - get batch instance and validate if all entities are created
  6. Check processing job status - get batch instance and validate if all entities are processed
  7. Get batch instance and check batch completion status
  8. Get entities by crosswalk and check if all are created
  9. Remove all entities

testBatchWorkflowTest_SendEntities_Update_and_MD5Skip
  1. Create batch instance: testBatch
  2. Create MCO_LOADING stage
  3. Send MCO entities to MCO_LOADING stage
  4. Finish MCO_LOADING stage
  5. Check sender job status - get batch instance and validate if all entities are created
  6. Check processing job status - get batch instance and validate if all entities are processed
  7. Get batch instance and check batch completion status
  8. Get entities by crosswalk and check if all are created
  9. Create batch instance: testBatch
  10. Create MCO_LOADING stage
  11. Send MCO entities to MCO_LOADING stage (2 entities skipped based on MD5 checksum)
  12. Finish MCO_LOADING stage
  13. Check sender job status - get batch instance and validate if all entities are created
  14. Check processing job status - get batch instance and validate if all entities are processed
  15. Get batch instance and check batch completion status
  16. Get entities by crosswalk and check if all are created
  17. Remove all entities
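The MD5 skip in the test above can be sketched as a checksum cache keyed by crosswalk: an entity is resent only when the checksum of its payload differs from the one recorded for it. The cache shape is an assumption for illustration:

```python
import hashlib
import json

def should_send(entity, checksum_cache):
    """MD5-based skip check: resend an entity only when its payload
    checksum differs from the one recorded for its crosswalk."""
    key = entity["crosswalk"]
    digest = hashlib.md5(json.dumps(entity, sort_keys=True).encode()).hexdigest()
    if checksum_cache.get(key) == digest:
        return False          # unchanged -> skip
    checksum_cache[key] = digest
    return True

cache = {}
e1 = {"crosswalk": "MCO-1", "name": "Plan A"}
assert should_send(e1, cache) is True    # first load: sent
assert should_send(e1, cache) is False   # identical reload: skipped
e1["name"] = "Plan B"
assert should_send(e1, cache) is True    # changed payload: sent again
```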
MCOBundlingTest | test
  1. Send multiple MCORequest to kafka topic
  2. Wait for created event for every MCORequest
  3. Check if number of received events equals number of sent requests
  4. Set crosswalk's delete date on now for every request
  5. Send all updated MCORequests to Kafka topic
  6. Wait for deleted event for every MCORequest
EntityEventChecksumTest | test
  1. Create HCP
  2. Wait for HCP_CREATED event
  3. Get created HCP by uri and check if exists
  4. Find created HCP by id in mongo and save "checksum"
  5. Update HCP's attribute and send request
  6. Wait for HCP_CHANGED event
  7. Find created HCP by id in mongo and save checksum
  8. Check if old checksum differs from current checksum
  9. Remove HCP
  10. Wait for HCP_REMOVED event
EntityEventsTest | test
  1. Create MCO
  2. Wait for ENTITY_CREATED event
  3. Update MCO
  4. Wait for ENTITY_CHANGED event
  5. Remove MCO
  6. Wait for ENTITY_REMOVED event
HCPEventsMergeTest | test
  1. Create HCP_1 and validate response
  2. Wait for HCP_CREATED event
  3. Get HCP_1 and validate attributes
  4. Create HCP_2 and validate response
  5. Get HCP_2 and validate attributes
  6. Merge HCP_1 and HCP_2
  7. Wait for HCP_MERGED event
  8. Get HCP_2 and validate attributes
  9. Delete HCP_1 crosswalk
  10. Wait for HCP_CHANGED event and validate HCP_URI
  11. Delete HCP_1 and HCP_2 crosswalks
  12. Wait for HCP_REMOVED event
  13. Delete HCP_2 crosswalk
HCPEventsNotTrimmedMergeTest | test
  1. Create HCP_1 and validate response
  2. Wait for HCP_CREATED event
  3. Get HCP_1 and validate attributes
  4. Create HCP_2 and validate response
  5. Get HCP_2 and validate attributes
  6. Merge HCP_1 and HCP_2
  7. Wait for HCP_MERGED event and validate attributes
  8. Get HCP_2 and validate attributes
  9. Delete HCP_1 crosswalk
  10. Wait for HCP_CHANGED event and validate HCP_URI
  11. Delete HCP_1 and HCP_2 crosswalks
  12. Wait for HCP_REMOVED event
  13. Delete HCP_2 crosswalk
MCOEventsTest | test
  1. Create MCO and validate response
  2. Wait for MCO_CREATED event and validate uris
  3. Update MCO's name and validate response
  4. Wait for MCO_CHANGED event and validate uris
  5. Delete MCO's crosswalk and validate response status
  6. Wait for MCO_REMOVED event and validate uris
  7. Remove entities
PotentialMatchLinkCleanerTest
  1. Create HCO: Start FLEX
  2. Get HCO and validate
  3. Create HCO: End ONEKEY
  4. Get HCO and validate
  5. Get matches by Start FLEX HCO entityId
  6. Validate matches
  7. Get not matches by Start FLEX HCO entityId
  8. Validate - not match does not exist
  9. Get Start FLEX HCO from mongo entityMatchesHistory collection
  10. Validate matches from mongo
  11. Create DerivedAffiliation - relation between FLEX and HCO
  12. Get matches by Start FLEX HCO entityId
  13. Check that there are no matches
  14. Get not matches by Start FLEX HCO entityId
  15. Validate not matches response
  16. Remove all entities
UpdateMCOTest | test1_createMCOTest
  1. Create MCO and validate response
  2. Get MCO by uri and validate
  3. Remove entities

test2_updateMCOTest
  1. Create MCO and validate response
  2. Update MCO's name
  3. Get MCO by uri and validate
  4. Remove entities

test3_createMCOBatchTest
  1. Create multiple MCOs using postBatchMCO
  2. Validate response
  3. Remove entities
UpdateUsageFlagsTest | test1_updateUsageFlags
  1. Create HCP and validate response
  2. Get entities using filter (Country & Uri) and validate if HCP exists
  3. Get entities using filter (Uri) and validate if HCP exists
  4. Update usage flags and validate response
  5. Get entity and validate updated usage flags

test2_updateUsageFlags
  1. Create HCO and validate response
  2. Get entities using filter (Country & Uri) and validate if HCO exists
  3. Get entities using filter (Uri) and validate if HCO exists
  4. Update usage flags and validate response
  5. Get entity and validate updated usage flags

test3_updateUsageFlags
  1. Create HCO with 2 addresses (COMPANYAddressId=3001 and 3002) and validate response
  2. Get entities using filter (Country & Uri) and validate if HCO exists
  3. Get entities using filter (Uri) and validate if HCO exists
  4. Update usage flags (COMPANYAddressId = 3002, action=set) and validate response
  5. Update usage flags (COMPANYAddressId = 3001, action=set) and validate response
  6. Get entity and validate updated usage flags
  7. Remove usage flag and validate response
  8. Get entity and validate updated usage flags
  9. Clear usage flag and validate response
  10. Get entity and validate updated usage flags
" + }, + { + "title": "MDM Factory", + "pageID": "164470002", + "pageLink": "/display/GMDM/MDM+Factory", + "content": "\n

The MDM Client Factory was implemented in the MDM manager to select a specific MDM Client (Reltio/Nucleus) based on a client selector configuration. The factory allows multiple MDM Clients to be registered at runtime and chosen based on country. To register the factory, the following example configuration needs to be defined:

  1. clientDecisionTable

Based on this configuration, a specific request will be processed by Reltio or Nucleus. Each selector has to define a default view for a specific client. For example, 'ReltioAllSelector' has a definition of a default and a PforceRx view, which correspond to two factory clients with different user names for Reltio.
\n\"\"

  2. mdmFactoryConfig

This map contains the MDM Factory Clients. Each client has a unique name and a configuration with the URL, username, ●●●●●●●●●●●● other specific values defined for the client. This unique name is used in the decision table to choose a factory client based on the country in the request.
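The selection flow described above can be sketched as follows. This is only an illustration of how the two configuration maps work together; the real MDM manager is a Java microservice, and the sample client names, countries, and settings below are assumptions, not the actual configuration.

```python
# mdmFactoryConfig: unique client name -> client-specific settings
# (values here are illustrative placeholders, not real endpoints)
MDM_FACTORY_CONFIG = {
    "ReltioEMEA": {"url": "https://reltio.example.com/emea", "username": "svc_emea"},
    "NucleusUS": {"url": "https://nucleus.example.com/us", "username": "svc_us"},
}

# clientDecisionTable: country in the request -> factory client name
CLIENT_DECISION_TABLE = {"DE": "ReltioEMEA", "FR": "ReltioEMEA", "US": "NucleusUS"}


def select_mdm_client(country: str) -> dict:
    """Resolve the MDM client configuration for the country in the request."""
    client_name = CLIENT_DECISION_TABLE[country]
    return MDM_FACTORY_CONFIG[client_name]
```

The decision table only stores the unique client name; all connection details live in the factory configuration, so several countries can share one registered client.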
\n \"\"

" + }, + { + "title": "Mulesoft integration", + "pageID": "447577227", + "pageLink": "/display/GMDM/Mulesoft+integration", + "content": "

Description

The Mulesoft platform is an integration portal used to integrate clients from inside and outside of the COMPANY network with the MDM Hub. 

Mule integration

API Endpoints

MuleSoft API Catalog:

\"\"

Requests routing on Mule side

API Country Mapping

Tenant
Dev
Test (QA)
Stage
Prod
US
US
US
US
US
EMEA
UK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,
BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,
CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,
SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,
CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,
YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,
YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,
LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,
CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,ME
UK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,
BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,
CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,
SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,
CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,
YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,
YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,
LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,
CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,ME
UK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,
BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,
CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,
SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,
CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,
YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,
YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,
LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,
CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,ME
UK,GB,IE,AE,AO,BF,BH,BI,BJ,BW,CD,CF,
CG,CI,CM,CV,DJ,DZ,EG,ET,GA,GH,GM,GN,
GQ,GW,IQ,IR,JO,KE,KW,LB,LR,LS,LY,MA,
MG,ML,MR,MU,MW,NA,NG,OM,QA,RW,SA,SD,
SL,SN,SY,SZ,TD,TG,TN,TZ,UG,YE,ZA,ZM,
ZW,FR,DE,IT,ES,AD,BL,GF,GP,MC,MF,MQ,
NC,PF,PM,RE,TF,WF,YT,SM,VA,TR,AT,BE,
LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,
CY,PL,RO,SK,IL
AMER
CA,BR,AR,UY,MX,CL,CO,PE,BO,EC
CA,BR,AR,UY,MX,CL,CO,PE,BO,EC
CA,BR,AR,UY,MX,CL,CO,PE,BO,EC
CA,BR,AR,UY,MX
APAC

AU,NZ,IN,KR,JP,HK,ID,MY,PK,PH,SG,TW,TH,

VN,MO,BN,BD,NP,LK,MN

AU,NZ,IN,KR,JP,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN,NP,LK,MN
KR,JP,AU,NZ,IN,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN,NP,LK,MN
KR,JP,AU,NZ,IN,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN
EXUS 
(IQVIA)
Everything else
Everything else
Everything else
Everything else

API URLs

MuleSoft MDM HCP Reltio API URLs

Environment | Cloud API | Ground API
Dev | https://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-dev | http://mule4api-comm-amer-dev.COMPANY.com/mdm-hcp-reltio-v1/
Test | https://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-tst/ | http://mule4api-comm-amer-tst.COMPANY.com/mdm-hcp-reltio-v1
Stage | https://muleapic-amer-stg.COMPANY.com/mdm-hcp-reltio-dlb-v1-stg | http://mule4api-comm-amer-stg.COMPANY.com/mdm-hcp-reltio-v1
Prod | https://muleapic-amer.COMPANY.com/mdm-hcp-reltio-dlb-v1 | http://mule4api-comm-amer.COMPANY.com/mdm-hcp-reltio-v1

Integrations

Integrations can be found under the URL below:

MDM - AIS Application Integration Solutions Mule - Confluence

Mule documentation reference

Solution Profiles/MDM 

https://confluence.COMPANY.com/display/AAISM/MDM

MDM HCP Reltio API

https://confluence.COMPANY.com/display/AAISM/MDM+HCP+Reltio+API

MDM Tenant URL Configuration


https://confluence.COMPANY.com/display/AAISM/MDM+Tenant+URL+Configuration

Using OAuth2 for API Authentication

Describes how to use OAuth2

How to use an API

Describes how to request access to the API and how to use it

Consumer On-boarding

Describes the consumer onboarding process


" + }, + { + "title": "Multi view", + "pageID": "164470089", + "pageLink": "/display/GMDM/Multi+view", + "content": "\n

During a getEntity or getRelation operation, "ViewAdapterService" is activated. This feature consists of two steps:

  1. Adapt

Based on the following map, each entity will be checked before being returned:
\n\"\"
\nThis means that for PforceRx view, only entities with source CRMMI will be returned. Otherwise getEntity or getRelation operations will return "404" EntityNotFound exception.
\nWhen the entity can be returned successfully, the next step starts:

  2. Filter

Each entity is filtered based on attribute Uris list provided in crosswalks.attribute list.
\nThe process takes each attribute from the entity and checks whether it exists in the restricted attribute list for the specific source crosswalk. When the attribute is not on that restricted list, it is removed from the entity. This way we receive an entity for the specific view containing only the attributes permitted for the specific source.
\nThe MDM publishing HUB has an additional configuration for the multi view process. When an entity with a specific country matches the configuration, the getEntity operation is invoked with country and view name parameters. Then the MDM gateway Factory is activated, and the entity is returned from a specific Reltio instance and saved in a mongo collection suffixed with the view name.
\n \"\"
\nFor this configuration, entities from the BR country will be saved in the entityHistory and entityHistory_PforceRx mongo collections. In the view collection, entities are adapted and filtered by the View Adapter Service.
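The two ViewAdapterService steps can be sketched in a simplified form. The function name, the allowed-source map, and the allowed-attribute map below are illustrative assumptions; the real service is part of the Java MDM manager and is driven by configuration like the map shown above.

```python
# Adapt step: sources allowed per view (this page mentions PforceRx -> CRMMI)
VIEW_SOURCES = {"PforceRx": {"CRMMI"}}
# Filter step: attributes allowed per view (purely illustrative values)
VIEW_ATTRIBUTES = {"PforceRx": {"FirstName", "LastName", "Country"}}


class EntityNotFound(Exception):
    """Maps to the "404" EntityNotFound response mentioned above."""


def adapt_and_filter(entity: dict, view: str) -> dict:
    # 1. Adapt: the entity may only be returned when one of its
    #    crosswalk sources is allowed for the requested view.
    sources = {c["type"] for c in entity["crosswalks"]}
    if not sources & VIEW_SOURCES.get(view, set()):
        raise EntityNotFound(entity["uri"])
    # 2. Filter: attributes not permitted for the view are removed.
    allowed = VIEW_ATTRIBUTES.get(view, set())
    entity["attributes"] = {
        name: values for name, values in entity["attributes"].items() if name in allowed
    }
    return entity
```

An entity whose crosswalks carry no allowed source is rejected in the adapt step; one that passes comes back with only the view's permitted attributes.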

" + }, + { + "title": "Playbook", + "pageID": "218437749", + "pageLink": "/display/GMDM/Playbook", + "content": "

The document describes how to request access to different sources. 

" + }, + { + "title": "Issues list", + "pageID": "218441145", + "pageLink": "/display/GMDM/Issues+list", + "content": "" + }, + { + "title": "Add a user to a new group.", + "pageID": "218438493", + "pageLink": "/pages/viewpage.action?pageId=218438493", + "content": "
  1. To create a request you need to use this link: https://requestmanager1.COMPANY.com/Group/
  2. Then choose as follow:
  3. \"\"
  4. Then search for a group and click 'Request access':
  5. \"\"
  6. As the last step, you need to choose the 'View Cart' button and submit your request. 
" + }, + { + "title": "Snowflake new schema/group/role creation", + "pageID": "218437752", + "pageLink": "/pages/viewpage.action?pageId=218437752", + "content": "
  1. Connect with: https://digitalondemand.COMPANY.com/
  2. Click 'Get Support' button.

\"\"

3. Then click that one:

\"\"

4. And as a next step:

\"\"

5. Now you are on the ticket creation page. The most important thing is to put the proper queue name in the detailed description field. For example, the queue name for Snowflake issues looks like this: gbl-atp-commercial snowflake domain admin. I recommend placing it on the first line, followed by the request text.

\"\"

6. Here is a typical request for a new schema:


gbl-atp-commercial snowflake domain admin
Hello,\nI'd like to ask to create a new schema and new roles on Snowflake side.\nNew schema name: PTE_SL\nEnvironments: DEV, QA, STG, PROD, details below:\nDEV\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name:COMM_GBL_MDM_DMART_DEV_DB\nQA\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_QA_DB\nSTG\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name:COMM_GBL_MDM_DMART_STG_DB\nPROD\t\nSnowflake instance: https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_PROD_DB\n\nAdd new roles with names (one for each environment): COMM_GBL_MDM_DMART_[Dev/QA/STG/Prod]_PTE_ROLE\nwith read-only acces on Customer_SL & PTE_SL\nand\nadd a roles with full acces to new schema with names (one for each environment) COMM_GBL_MDM_DMART_[Dev/QA/STG/Prod]_DEVOPS_ROLE - like in customer_sl schema


7. If you are also requesting a new role - like in the example above - you need to request that this role be added to AD. In this case you need to provide primary and secondary owner details for all groups to be created.
You can send the primary and secondary owner data, or write that the ownership should be set like in another existing role.

8. Ticket example: https://digitalondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=RF3490743

" + }, + { + "title": "AWS ELB NLB configuration request", + "pageID": "218440089", + "pageLink": "/display/GMDM/AWS+ELB+NLB+configuration+request", + "content": "
  1. To create a ticket use this link: http://btondemand.COMPANY.com/
  2. If you want to know all the specific steps, please follow this link: Snowflake new schema/group/role creation
  3. Remember to add a proper queue name!
  4. In the request, please attach the full list of general information:
    1. VPC
    2. ELB Type
    3. Health Checks
    4. Allowed incoming traffic from
  5. Then please add the specific NLB/ELB information FOR EACH NLB/ELB you are requesting - even if the information is the same and obvious:
    1. Listener
    2. Target Group 
    3. No of ELB
    4. Type
    5. Environment
    6. ELB Health Check
    7. Target Group additional information: e.g.: 1 target group with 3 servers:port
    8. Where to add a Listener: e.g.: Listener to be added in ELB #Listener Name
    9. Security Group information
    10. Additional information: e.g.: IP ●●●●●●●●●●●● mdm-event-handler (Prod) should be able to access this ELB
  6. Ticket example: http://btod.COMPANY.com/My-Tickets/Ticket-Details?ticket=IM40983303
  7. E.g. request text:


VPC: Public\nELB Type: Network Load Balancer\nHealth Checks: Passive\nAllowed incoming traffic from:\n●●●●●●●●●●●● mdm-event-handler (Prod)\n\n1. API\nListener:\napi-emea-prod-gbl-mdm-hub-ext.COMPANY.com:8443\n\nTarget Group:\neuw1z2pl116.COMPANY.com:8443\neuw1z1pl117.COMPANY.com:8443\neuw1z2pl118.COMPANY.com:8443\n\n2. KAFKA\n\n2.1\nListener:\nkafka-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl116.COMPANY.com:9095\neuw1z1pl117.COMPANY.com:9095\neuw1z2pl118.COMPANY.com:9095\n\n2.2\nListener:\nkafka-b1-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl116.COMPANY.com:9095\n\n2.3\nListener:\nkafka-b2-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z1pl117.COMPANY.com:9095\n\n2.4\nListener:\nkafka-b3-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl118.COMPANY.com:9095\n\nGBL-BTI-EXT HOSTING AWS CLOUD
" + }, + { + "title": "To open a traffic between hosts", + "pageID": "218441143", + "pageLink": "/display/GMDM/To+open+a+traffic+between+hosts", + "content": "
  1. To create a ticket use this link: http://btondemand.COMPANY.com/
  2. If you want to know all the specific steps, please follow this link: Snowflake new schema/group/role creation
  3. Remember to add a proper queue name!
  4. In the request, please attach the full list of general information:
    1. Source
      1. IP range
      2. IP range
      3. ..
      4. ..
    2. Targets - remember to add each target's instances
      1. Target1
        1. Name
        2. Cname
        3. Address
        4. Port
      2. Target2
        1. ..
        2. ..
        3. ..
      3. ..
  5. Example ticket: http://btod.COMPANY.com/My-Tickets/Ticket-Details?ticket=IM41240161
  6. Example request text:


Source:\n1. IP range: ●●●●●●●●●●●●●\n2. IP range: ●●●●●●●●●●●●●\n\nTarget1:\nLoadBalancer:\ngbl-mdm-hub-us-prod.COMPANY.com  canonical name = internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com.\nName:   internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com\nAddress: ●●●●●●●●●●●●●●\nName:   internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com\nAddress: ●●●●●●●●●●●●●●\nTarget port: 443\n\nTarget2:\nhosts:\namraelp00007848.COMPANY.com(●●●●●●●●●●●●●●)\namraelp00007849.COMPANY.com(●●●●●●●●●●●●●)\namraelp00007871.COMPANY.com(●●●●●●●●●●●●●●)\ntarget port: 8443
" + }, + { + "title": "Support information with queue and DL names", + "pageID": "218438484", + "pageLink": "/display/GMDM/Support+information+with+queue+and+DL+names", + "content": "

There are a few places where you can send your request:

  1. https://digitalondemand.COMPANY.com/getsupport
  2. https://requestmanager.COMPANY.com/

Caution! 

When we are adding a new client to our architecture, it is a MUST to get a support queue from them.

Support queues

System/component/area name | Dedicated queue | Support DL | Additional notes
Rapid, Digital Labs, GCP etc
GBL-EPS-CLOUD OPS FULL SUPPORT
EPS-CloudOps@COMPANY.com | AWS Global, EMEA environments
IOD AWS Team
GBL-BTI-IOD AWS FULL SUPPORT
EPS-CloudOps@COMPANY.com (same as EPS, not a mistake) | Rotating AWS keys, AWS GBL US, AWS FLEX US
IOD
GBL-BTI-IOD FULL OS SUPPORT (VMC)

VMware Cloud
FLEX Team
GBL-F&BO-MAST AMM SUPPORT
DL-CBK-MAST@COMPANY.com | Data, file transfer issues in US FLEX environments
SAP Interface Team (FLEX)
GBL-SS SAP SALES ORDER MGMT

Queries regarding SAP FLEX input files
SAP Master Date Team (FLEX)
Dianna.OConnell@COMPANY.com | Queries regarding data in SAP FLEX
Network Team
GBL-NETWORK DDI

All domain and DNS changes
Firewall Team
GBL-NETWORK ECS
GBL-NETWORK-SCS@COMPANY.com | "Big" firewall changes
Snowflake
GBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMIN


MDM Hub - non-prod
GBL-ADL-ATP GLOBAL MDM - HUB DEVOPS
DL-ATP_MDMHUB_SUPPORT@COMPANY.com
MDM Hub - prod
GBL-ADL-ATP GLOBAL MDM - HUB DEVOPS
DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com
PDKS
GBL-BAP-Kubernetes Service L2
PDCSOps@COMPANY.com | PDKS Kubernetes cluster, i.e. new MDM Hub Amer NPROD
Go to http://containers.COMPANY.com/ "PDKS Get Help" for details.
PDKS Engineering Team
GBL-BTI-SYSTEMS ENGINEERING BTCS
DL-PDCS-ADMIN@COMPANY.com | PDKS Kubernetes - for environment provisioning/modification issues with CloudBrokerage/IOD
AMER/APAC/EMEA/GBLUS Reltio - COMPANY
GBL-ADL-ATP GLOBAL MDM - RELTIO
DL-ADL-ATP-GLOBAL_MDM_RELTIO@COMPANY.com | Team responsible for Reltio and ETL batch loads.
GBL/USFLEX Reltio - IQVIA
GBL-MDM APP SUPPORT

COMPANY-MDM-Support@iqvia.com

DL-Global-MDM-Support@COMPANY.com


Reltio consulting
N/A

Sumit Singh - reltio consulting (NO support)

sumit.singh@reltio.com
Sumit.Singh@COMPANY.com

This is not a support contact; we can use it for technical-level issues (API implementation, etc.) 
Reltio UI with data access
use request manager: https://requestmanager.COMPANY.com/

Reltio Commercial MDM - GBLUS

Reltio Customer MDM - GBL

Ping Federate
DL-CIT-PXEDOperations@COMPANY.com | Ping Federate/OAuth2 support
MAPP Navigator
GBL-FBO-MAPP NAVIGATOR HYPERCARE
DL-BTAMS-MAPP-Navigator@COMPANY.com (rarely respond) | MAPP Nav issues
Harmony Bitbucket
GBL-CBT-GBI HARMONY SERVICES

DL-GBI-Harmony-Support@COMPANY.com

Confluence page:
ATP Harmony Service SD

Confluence, Jira
GBL-DA-DEVSECOPS TOOLS SUPPORT
DL-SESRM-ATLASSIAN-SUPPORT <DL-SESRM-ATLASSIAN-SUPPORT@COMPANY.com>
Artifactory
GBL-SESRM-ARTIFACTORY SUPPORT
DL-SESRM-ARTIFACTORY-SUPPORT@COMPANY.com

Mule integration team support
DL-AIS Mule Integration Support 
DL-AIS-Mule-Integration-Support@COMPANY.com | Used to integrate with mule proxy 
VOD DCR
Laurie.Koudstaal@COMPANY.com | POC if Veeva did not send an input file for the VOD DCR process for 24 hours

Example: there is a description of how to use https://digitalondemand.COMPANY.com/ to request a ticket assigned to one of the groups above: Snowflake new schema/group/role creation

" + }, + { + "title": "Global Clients", + "pageID": "310963401", + "pageLink": "/display/GMDM/Global+Clients", + "content": "


Client | Contact
CICR | Probably Amish
ADTS | DL-BTAMS-ENGAGE-PLUS@COMPANY.com
EASI |
ENGAGE |
ESAMPLES | Somya.Jain@COMPANY.com; Vijay.Bablani@COMPANY.com; Lori.Reynolds@COMPANY.com
GANT | Gangadhar.Nadpolla@COMPANY.com
GRACE | Cory.Arthus@COMPANY.com
GRV | vikas.verma@COMPANY.com; Luther Chris <chris.luther@COMPANY.com>; Matej.Dolanc@COMPANY.com
JO | Shweta.Kulkarni@COMPANY.com
MAP | DL-BT-Production-Engineering@COMPANY.com; Matej.Dolanc@COMPANY.com
MAPP | DL-BTAMS-MAPP-Navigator@COMPANY.com; Rajesh.K.Chengalpathy@COMPANY.com
MEDIC | DL-F&BO-MEDIC@COMPANY.com
MULE | DL-AIS-Mule-Integration-Support@COMPANY.com; Amish.Adhvaryu@COMPANY.com
ODS | DL-GBI-PFORCERX_ODS_Support@COMPANY.com
ONEMED | Marsha.Wirtel@COMPANY.com; AnveshVedula.Chalapati@COMPANY.com
PFORCEOL | Christopher.Fani@COMPANY.com
VEEVA_FIELD |
PFORCERX | NagaJayakiran.Nagumothu@COMPANY.com; dl-pforcerx-support@COMPANY.com
PTRS | Sagar.Bodala@COMPANY.com; bhushan.shanbhag@COMPANY.com
JAPAN DWH | DL-GDM-ServiceOps-Commercial_APAC@COMPANY.com; DL-ATP-SERVICEOPS-JPN-DATALAKE@COMPANY.com
CHINA | Chen, Yong <Yong.Chen@COMPANY.com>; QianRu.Zhou@COMPANY.com
KOL_ONEVIEW | DL-SFA-INF_Support_PforceOL@COMPANY.com; Solanki, Hardik (US - Mumbai) <hsolanki@COMPANY.com>; Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>
NEXUS | SriVeerendra.Chode@COMPANY.com; DL-Acc-GBICC-Team@COMPANY.com
IMPROMPTU | Probably Amish
CDW | Narayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>; Balan, Sakthi <Sakthi.Balan@COMPANY.com>; Raman, Krishnan <Krishnan.Raman@COMPANY.com>
ICUE | Brahma, Bagmita <Bagmita.Brahma2@COMPANY.com>; Solanki, Hardik <Hardik.Solanki@COMPANY.com>; Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>
EVENTHUB |


SNOWFLAKE
Client | Contact
C360 | DL-C360_Support@COMPANY.com
PT&E | DL-PTE-Batch-Team@COMPANY.com; Drabold, Erich <Erich.Drabold@COMPANY.com>
DQ_OPS | markus.henriksson@COMPANY.com; dl-atp-dq-ops@COMPANY.com


accenture | DL-Acc-GBICC-Team@COMPANY.com


Big bosses | Pratap.Deshmukh@COMPANY.com; Mikhail.Komarov@COMPANY.com; Rafael.Aviles@COMPANY.com
" + }, + { + "title": "How to login to Service Manager", + "pageID": "218448126", + "pageLink": "/display/GMDM/How+to+login+to+Service+Manager", + "content": "

How to add a user to Service Manager tool

  1. Choose link: https://smweb.COMPANY.com/SCAccountRequest.aspx#/search
  2. Find yourself
    \"\"
  3. Click "Next >>"
  4. Choose the proper role: Service desk analyst - and click "Needs training"
    \"\"
  5. When you have completed the training, you need to choose the groups to which you want to be added:
    1. GBL-ADL-ATP GLOBAL MDM - HUB DEVOPS
  6. You do it here:
    \"\"
  7. Please remember when you click “Add selected group to cart” there is a second approval step – click: “SUBMIT”.
  8. When permissions are granted, you can explore Service Manager possibilities here: https://sma.COMPANY.com/sm/index.do
" + }, + { + "title": "How to Escalate btondemand Ticket Priority", + "pageID": "218448925", + "pageLink": "/display/GMDM/How+to+Escalate+btondemand+Ticket+Priority", + "content": "

Below is a copy of: AWS Rapid Support → How to Escalate Ticket Priority

How to Escalate Ticket Priority

Tickets will be opened as low priority by default and response time will align to the restoration and resolution times listed in the SLA below. If your request's priority needs to be changed, follow these instructions:

  1. Use the Chat function at BT On Demand (or call the Service Desk at 1-877-733-4357)
    1. Select Get Support
    2. Select "Click here to continue without selecting a ticket option."
    3. Select Chat
  2. Provide the existing ticket number you already opened
  3. Ask that ticket Priority be raised to Medium, High or Critical based on the issue and utilize one of the following key phrases to help set priority:
    1. Issue is Effecting Production Application
    2. Product Quality is being impacted
    3. Batch is unable to proceed
    4. Life safety or physical security is impacted
    5. Development work stopped awaiting resolution
" + }, + { + "title": "How to get AWS Account ID", + "pageID": "218453784", + "pageLink": "/display/GMDM/How+to+get+AWS+Account+ID", + "content": "

MDM Hub components are deployed in different AWS Accounts. In a ticket support process, you might be asked about the AWS Account ID of the host, load balancer, or other resources. You can get it quickly in at least two ways described below.

Using AWS Console

In AWS Console: http://awsprodv2.COMPANY.com/ (How to access AWS Console) you can find the Account ID in any resource's Amazon Resource Name (ARN).

\"\"

Using curl

SSH to a host and run this curl command, same for all AWS accounts:

[ec2-user@euw1z2pl116 ~]$ curl http://169.254.169.254/latest/dynamic/instance-identity/document
{
"accountId" : "432817204314",
"architecture" : "x86_64",
"availabilityZone" : "eu-west-1b",
"billingProducts" : null,
"devpayProductCodes" : null,
"marketplaceProductCodes" : null,
"imageId" : "ami-05c4f918537788bab",
"instanceId" : "i-030e29a6e5aa27e38",
"instanceType" : "r5.2xlarge",
"kernelId" : null,
"pendingTime" : "2021-12-21T06:07:12Z",
"privateIp" : "10.90.98.178",
"ramdiskId" : null,
"region" : "eu-west-1",
"version" : "2017-09-30"
}

" + }, + { + "title": "How to push Docker image to artifactory.COMPANY.com", + "pageID": "218458682", + "pageLink": "/display/GMDM/How+to+push+Docker+image+to+artifactory.COMPANY.com", + "content": "

I am using the AKHQ image as an example.

Login to artifactory.COMPANY.com

  1. Log in with COMPANY credentials: https://artifactory.COMPANY.com/artifactory/
  2. Generate Identity Token: https://artifactory.COMPANY.com/ui/admin/artifactory/user_profile
  3. Use COMPANY username and generated Identity Token in "docker login artifactory.COMPANY.com"
marek@CF-19CHU8:~$ docker login artifactory.COMPANY.com
Authenticating with existing credentials...
Login Succeeded

Pull, tag, and push

marek@CF-19CHU8:~$ docker pull tchiotludo/akhq:0.14.1
0.14.1: Pulling from tchiotludo/akhq
...
Digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9c
Status: Downloaded newer image for tchiotludo/akhq:0.14.1
docker.io/tchiotludo/akhq:0.14.1
marek@CF-19CHU8:~$ docker tag tchiotludo/akhq:0.14.1 artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1
marek@CF-19CHU8:~$ docker push artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1
The push refers to repository [artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq]
0.14.1: digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9c size: 1577


And that's all, you can now use this image from artifactory.COMPANY.com!
" + }, + { + "title": "Emergency contact list", + "pageID": "218459579", + "pageLink": "/display/GMDM/Emergency+contact+list", + "content": "

In case of emergency, please inform the people from the list attached to each environment.

EMEA:

Varganin, A.J. <Andrew.J.Varganin@COMPANY.com>; Trivedi, Nishith <Nishith.Trivedi@COMPANY.com>; Austin, John <John.Austin@COMPANY.com>; Simon, Veronica <Veronica.Simon@COMPANY.com>; Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>; Kothandaraman, Sathyanarayanan <Sathyanarayanan.Kothandaraman@COMPANY.com>; Dolanc, Matej <Matej.Dolanc@COMPANY.com>; Kunchithapatham, Bhavanya <Bhavanya.Kunchithapatham@COMPANY.com>; Bhowmick, Aditya <Aditya.Bhowmick@COMPANY.com>

GBL:

TO-DO


GBL US:

TO-DO


EMEA:

TO-DO


AMER:

TO-DO

" + }, + { + "title": "How to handle issues reported to DL", + "pageID": "294665000", + "pageLink": "/display/GMDM/How+to+handle+issues+reported+to+DL", + "content": "
  1. Create a ticket in Jira
    1. Name: "DL: {{ email title }}"
    2. Epic: BAU
    3. Fix Version(s): BAU
  2. Use the template below:
    \"\"MDM Hub Issue Response Template.oft
  3. Replace all the red placeholders. Fill in the table where you can, based on the original email.
  4. Respond to the email, requesting additional details if any of the table rows could not be filled in.
  5. Update the ticket:
    1. Copy/Paste the filled table
    2. Adjust the priority based on the "Business impact details" row

\"\"


" + }, + { + "title": "Sample estimation for jira tickets", + "pageID": "415215566", + "pageLink": "/display/GMDM/Sample+estimation+for+jira+tickets", + "content": "

1

https://jira.COMPANY.com/browse/MR-8591(Disable keycloak by default)
https://jira.COMPANY.com/browse/MR-8544(Investigate server git hooks in BitBucket)
https://jira.COMPANY.com/browse/MR-8508(Lack of changelog when build from master)
https://jira.COMPANY.com/browse/MR-8506(pvc-autoresizer deployment on PRODs)
https://jira.COMPANY.com/browse/MR-8502(Dashboards adjustments)

2

https://jira.COMPANY.com/browse/MR-8649 (Move kong-mdm-external-oauth-plugin to mdm-utils repo)
https://jira.COMPANY.com/browse/MR-8585 (Alert about not ready ScaledObject)
https://jira.COMPANY.com/browse/MR-8539 (Reduce number of stored Cadvisor metrics and labels)
https://jira.COMPANY.com/browse/MR-8531 (Old monitoring host decomissioning)
https://jira.COMPANY.com/browse/MR-8375 (Quality Gateway: deploy publisher changes to PRODs)
https://jira.COMPANY.com/browse/MR-8359 (Write article to describe Airflow upgrade procedure)
https://jira.COMPANY.com/browse/MR-8166 (Fluentd - improve deployment time and downtime)
https://jira.COMPANY.com/browse/MR-8128 (Turn on compression in reconciliation service)

3

https://jira.COMPANY.com/browse/MR-8543 (POC: Create local git hook with secrets verification)
https://jira.COMPANY.com/browse/MR-8503 (Replace hardcoded rate intervals)
https://jira.COMPANY.com/browse/MR-8370 (Investigate and plan fix for different version of monitoring CRDs)
https://jira.COMPANY.com/browse/MR-8245 (Fluentbit: deploy NPRODs)
https://jira.COMPANY.com/browse/MR-7926 (Move jenkins agents containers definition to inbound-services repo)

5

https://jira.COMPANY.com/browse/MR-8334 (Implement integration with Grafana)
https://jira.COMPANY.com/browse/MR-7720 (Logstash - configuration creation and deployment)
https://jira.COMPANY.com/browse/MR-7417 (Grafana dashboards backup process)
https://jira.COMPANY.com/browse/MR-7075 (POC: Store transaction logs for 6 months)

8

https://jira.COMPANY.com/browse/MR-8258 (Implement integration with Kibana)
https://jira.COMPANY.com/browse/MR-6285 (Prepare Kafka upgrade plan to version 3.3.2)
https://jira.COMPANY.com/browse/MR-5981 (Process analysis)
https://jira.COMPANY.com/browse/MR-5694 (Implement Reltio mock)
https://jira.COMPANY.com/browse/MR-5835 (Mongo backup process: implement backup process)


" + }, + { + "title": "FAQ - Frequently Asked Questions", + "pageID": "415217275", + "pageLink": "/display/GMDM/FAQ+-+Frequently+Asked+Questions", + "content": "" + }, + { + "title": "API", + "pageID": "415217277", + "pageLink": "/display/GMDM/API", + "content": "

Is there MDM Hub API documentation?

Of course - it is available for each component:

What is the difference between /api-emea-prod and /api-gw-emea-prod API endpoints?

These endpoints lead to different API Components:

Both of these Components' APIs can be used in a similar way. The main difference is:

What is the difference between /api-emea-prod and /ext-api-emea-prod API endpoints?

These endpoints use different Authentication methods:

It is recommended that all the API Users use OAuth2 and /ext-api-emea-prod endpoint, leaving Key Auth for support and debugging purposes.

When should I use a GET Entity operation, and when should I use a SEARCH Entity operation?

There are two main ways of fetching an HCP/HCO JSON using HUB API:

The two requests below correspond to each other:

Although both are quick, Hub recommends only using the first one to find an entity by URI:

What is the difference between POST and PATCH /hcp, /hco, /entities operations?

The key difference is:

POST should be used if we are sending the full JSON - crosswalk + all attributes.

PATCH should be used if we are only sending incremental changes to a pre-existing profile.
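As a sketch of the difference, a PATCH body can carry only the changed attributes plus the identifying crosswalk, while a POST body must carry the full profile (as in the complete examples elsewhere on this wiki). The attribute chosen and the crosswalk value below are illustrative placeholders; the exact payload contract should be checked against the component's API documentation.

```json
{
  "hcp": {
    "type": "configuration/entityTypes/HCP",
    "attributes": {
      "FirstName": [ { "value": "John" } ]
    },
    "crosswalks": [
      { "type": "configuration/sources/MAPP", "value": "EXAMPLE-123" }
    ]
  }
}
```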



" + }, + { + "title": "Merging Into Existing Entities", + "pageID": "462075948", + "pageLink": "/display/GMDM/Merging+Into+Existing+Entities", + "content": "

Can I post a profile and merge it into one already existing in MDM?

Yes, there are 3 ways you can do that:

Merge-On-The-Fly - Details

Merge-on-the-fly is a Reltio mechanism using matchGroups configuration. MatchGroups contain lists of requirements that two entities must satisfy in order to be merged. There are two types of matchGroups: "suspect" and "automatic". Suspect groups merely display as potential matches in the Reltio UI, but automatic groups trigger automatic merges of the objects.

Example of an HCP automatic matchGroup from Reltio's configuration (EMEA PROD):

\n
                {\n                    "uri": "configuration/entityTypes/HCP/matchGroups/ExctONEKEYID",\n                    "label": "(iii) Auto Rule - Exact Source Unique Identifier(ReferBack ID)",\n                    "type": "automatic",\n                    "useOvOnly": "true",\n                    "rule": {\n                        "and": {\n                            "exact": [\n                                "configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID",\n                                "configuration/entityTypes/HCP/attributes/Country"\n                            ],\n                            "in": [\n                                {\n                                    "values": [\n                                        "OneKey ID"\n                                    ],\n                                    "uri": "configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type"\n                                },\n                                {\n                                    "values": [\n                                        "ONEKEY"\n                                    ],\n                                    "uri": "configuration/entityTypes/HCP/attributes/OriginalSourceName"\n                                },\n                                {\n                                    "values": [\n                                        "Yes"\n                                    ],\n                                    "uri": "configuration/entityTypes/HCP/attributes/Identifiers/attributes/Trust"\n                                }\n                            ]\n                        }\n                    },\n                    "scoreStandalone": 100,\n                    "scoreIncremental": 0\n                
\n

The above example merges two entities having the same Country attribute and the same Identifier of type "OneKey ID". The Identifier must have the Trust flag set and the OriginalSourceName must be "ONEKEY".


When posting a record to MDM, matchGroups are evaluated. If an automatic matchGroup is matched, Reltio will perform a Merge-On-The-Fly, adding the posted crosswalk to an existing profile.

Contributor Merge - Details

When posting an object to Reltio, we can use its Crosswalk contributorProvider/dataProvider mechanism to bind the posted crosswalk to an existing one.

If we know that a crosswalk already exists in MDM, we can add it to the crosswalks array with the flags contributorProvider=true and dataProvider=false. A crosswalk marked this way serves as an indicator of the object to bind to.

The other crosswalk must have the flags set the other way around: contributorProvider=false and dataProvider=true. This is the crosswalk that will de facto provide the attributes and be considered by the Hub's ingestion rules.


Example - we are sending data with a MAPP crosswalk and binding that crosswalk to the existing ONEKEY crosswalk:

\n
{\n    "hcp": {\n        "type": "configuration/entityTypes/HCP",\n        "attributes": {\n            "FirstName": [\n                {\n                    "value": "John"\n                }\n            ],\n            "LastName": [\n                {\n                    "value": "Doe"\n                }\n            ],\n            "Country": [\n                {\n                    "value": "ES"\n                }\n            ]\n        },\n        "crosswalks": [\n            {\n                "type": "configuration/sources/MAPP",\n                "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n                "contributorProvider": false,\n                "dataProvider": true\n            },\n            {\n                "type": "configuration/sources/ONEKEY",\n                "value": "WESR04566503",\n                "contributorProvider": true,\n                "dataProvider": false\n            }\n        ]\n    }\n}
\n


Every MDM record also has a crosswalk of type "Reltio" whose value is equal to the Reltio ID. We can use that to bind our record to the entity:

\n
{\n    "hcp": {\n        "type": "configuration/entityTypes/HCP",\n        "attributes": {\n            "FirstName": [\n                {\n                    "value": "John"\n                }\n            ],\n            "LastName": [\n                {\n                    "value": "Doe"\n                }\n            ],\n            "Country": [\n                {\n                    "value": "ES"\n                }\n            ]\n        },\n        "crosswalks": [\n            {\n                "type": "configuration/sources/MAPP",\n                "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n                "contributorProvider": false,\n                "dataProvider": true\n            },\n            {\n                "type": "configuration/sources/Reltio",\n                "value": "00TnuTu",\n                "contributorProvider": true,\n                "dataProvider": false\n            }\n        ]\n    }\n}
\n


This approach has a downside: crosswalks are bound, so they cannot be unmerged later on.

Manual Merge - Details

The last approach is to simply create a record in Reltio and immediately merge it with another one.


Let's use the previous example. First, we are simply posting the MAPP data:

\n
{\n    "hcp": {\n        "type": "configuration/entityTypes/HCP",\n        "attributes": {\n            "FirstName": [\n                {\n                    "value": "John"\n                }\n            ],\n            "LastName": [\n                {\n                    "value": "Doe"\n                }\n            ],\n            "Country": [\n                {\n                    "value": "ES"\n                }\n            ]\n        },\n        "crosswalks": [\n            {\n                "type": "configuration/sources/MAPP",\n                "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147"\n            }\n        ]\n    }\n}
\n


Response:

\n
{\n    "uri": "entities/0zu5sHM",\n    "status": "created",\n    "errorCode": null,\n    "errorMessage": null,\n    "COMPANYGlobalCustomerID": "04-131155084",\n    "crosswalk": {\n        "type": "configuration/sources/MAPP",\n        "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n        "updateDate": 1728043082037,\n        "deleteDate": ""\n    }\n}
\n


We can now use the URI from the response to merge the new record into the existing one:

\n
POST /entities/0zu5sHM/_merge?uri=00TnuTu
\n
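The two requests above can be combined into a short script. The sketch below is illustrative only: the base URL and authorization header are placeholders, the payload shape follows the examples above, and only the entity URI from the create response plus the `_merge?uri=` endpoint come from this page.

```python
import json
import urllib.request

# Placeholders - the real tenant URL and token come from the environment configuration
BASE_URL = "https://reltio.example.com/reltio/api/<tenant>"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def merge_path(new_uri: str, target_id: str) -> str:
    """Build the merge endpoint path, mirroring POST /entities/<id>/_merge?uri=<target>."""
    return f"/{new_uri}/_merge?uri={target_id}"

def post_json(path: str, payload=None) -> dict:
    """POST a JSON payload to the tenant and return the parsed JSON response."""
    data = json.dumps(payload).encode() if payload is not None else b""
    req = urllib.request.Request(BASE_URL + path, data=data, headers=HEADERS, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def create_and_merge(entity: dict, target_id: str) -> dict:
    """Create a record (first request above), then merge it into the target entity."""
    created = post_json("/entities", entity)          # response carries "uri", e.g. "entities/0zu5sHM"
    return post_json(merge_path(created["uri"], target_id))
```

Only `merge_path` is pure; the HTTP helpers are a sketch of the flow, not the Hub's actual client code.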


" + }, + { + "title": "Quality rules", + "pageID": "164470090", + "pageLink": "/display/GMDM/Quality+rules", + "content": "

The quality engine is responsible for preprocessing an Entity when a specific precondition is met. The engine is started in the following cases:

When the validationOn parameter is set to true, the first step of HCP/HCO request processing is quality engine validation. The MDM Manager configuration should contain the following quality rules:

These properties accept a list of YAML files. Each file has to be added to the environment repository under /config_files/<env_name>/mdm_manager/config/.*quality-rules.yaml. Then each of these files has to be referenced by these variables in the inventory: /<env_name>/group_vars/gw-services/mdm_manager.yml.
For HCP request processing, files are loaded in the following order:

  1. hcpQualityRulesConfigs
  2. hcpAffiliatedHCOsQualityRulesConfigs


For HCO request processing, files are loaded only from the following configuration:

  1. hcoQualityRulesConfigs


It is good practice to split the files into common logic and country-specific logic. For example, HCP Quality Rules file names should have the following structure:


A quality rules YAML file is a set of rules that will be applied to the Entity. Each rule should have the following YAML structure:
\"\"

preconditions

\"\"

\"\"


check

\"\"

\"\"

\"\"
action
When the precondition and check evaluate successfully, a specific action can be invoked on the entity attributes.

\"\"

\"\"


\"\"

\"\"

\"\"

\"\"


\"\"

action:
type: autofillSourceName
attribute: Addresses
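Assembled from the fragments above, a complete rule might look like the following sketch. Only the action block is taken from the example above; the precondition and check fields are illustrative assumptions about the rule schema, not the actual configuration syntax.

```yaml
# Hypothetical quality rule sketch - field names under preconditions/check are assumed
- name: autofill-address-source-name
  preconditions:
    attribute: Country
    values: [ "ES" ]          # apply the rule only to selected countries (assumed syntax)
  check:
    attribute: Addresses
    condition: notEmpty       # assumed condition name
  action:
    type: autofillSourceName
    attribute: Addresses
```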


The logic of the quality engine rule check is as follows:


Quality rules DOC: 

\"\"



" + }, + { + "title": "Relation replacer", + "pageID": "164470095", + "pageLink": "/display/GMDM/Relation+replacer", + "content": "


After the getRelation operation is invoked, the "Relation Replacer" feature can be activated on the returned relation entity object. When an entity is merged, Reltio sometimes does not replace the objectUri with the new, updated value. This process detects such a situation and replaces the objectUri with the correct URI taken from the crosswalks.
The relation replacer process operates under the following conditions:

  1. The relation replacer checks the EndObject and StartObject sections.
  2. When the objectUri is different from the entity id in the crosswalks section, the objectUri is replaced with the entity id from the crosswalks.
  3. When the crosswalks list contains multiple entries pointing to different entity URIs, the relation replacer process ends with the following warning: "Object has more than one possible uri to replace" – it is not possible to decide which entity should be pointed to as StartObject or EndObject after the merge.
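The replacement rules above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the Hub's actual code; it assumes each crosswalk entry exposes the entity URI under a "uri" field.

```python
def replace_object_uri(section: dict) -> dict:
    """Apply the relation-replacer rules to a StartObject/EndObject section.

    If the objectUri no longer matches the entity URI found in the crosswalks
    (which can happen after a Reltio merge), replace it with the crosswalk URI.
    Field names ("objectUri", "crosswalks", "uri") are assumptions about the
    relation JSON shape described above.
    """
    crosswalk_uris = {cw["uri"] for cw in section.get("crosswalks", []) if "uri" in cw}
    if section.get("objectUri") in crosswalk_uris:
        return section  # already consistent, nothing to do
    if len(crosswalk_uris) > 1:
        # ambiguous: cannot decide which entity to point to after the merge
        raise ValueError("Object has more than one possible uri to replace")
    if crosswalk_uris:
        section["objectUri"] = next(iter(crosswalk_uris))
    return section
```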
" + }, + { + "title": "SMTP server", + "pageID": "387170360", + "pageLink": "/display/GMDM/SMTP+server", + "content": "

Access to SMTP server is granted for each region separately:


AMER

Destination Host: amersmtp.COMPANY.com

Destination SMTP Port: 25

Authentication: NONE


EMEA

Destination Host: emeasmtp.COMPANY.com

Destination SMTP Port: 25

Authentication: NONE


APAC

Destination Host: apacsmtp.COMPANY.com

Destination SMTP Port: 25

Authentication: NONE


To request access to the SMTP server, fill in the SMTP relay registration form on the http://ecmi.COMPANY.com portal.
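Once access is granted, a quick connectivity check can be done with Python's standard library. The hostname and port below are the EMEA relay settings from the table above; the sender and recipient addresses are placeholders.

```python
import smtplib
from email.message import EmailMessage

def build_alert(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Assemble a plain-text message for the relay."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_relay(msg: EmailMessage, host: str = "emeasmtp.COMPANY.com", port: int = 25) -> None:
    """Send through the regional relay - no authentication, per the settings above."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)

# Usage (addresses are placeholders):
#   send_via_relay(build_alert("hub-alerts@COMPANY.com", "team@COMPANY.com",
#                              "MDM HUB test", "Relay connectivity check"))
```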


" + }, + { + "title": "Airflow", + "pageID": "218432163", + "pageLink": "/display/GMDM/Airflow", + "content": "" + }, + { + "title": "Overview", + "pageID": "218432165", + "pageLink": "/display/GMDM/Overview", + "content": "

Configuration

Airflow is deployed on the Kubernetes cluster using the official Airflow Helm chart:

The main Airflow chart adjustments (creating PVCs, k8s jobs, etc.) are located in the components repository.

Environment-specific configuration is located in the cluster configuration repository.

Deployment

Local deployment

Airflow can be easily deployed on a local Kubernetes cluster for testing purposes. All you have to do is:

If the deployment is performed on a Windows machine, please make sure that the install.sh, encrypt.sh, decrypt.sh and .config files have Unix line endings; otherwise deployment errors will occur.

  1. Edit the .config file to enable the Airflow deployment (and any other component you want). To enable a component, it needs to have a value greater than 0 assigned:

    \n
    enable_airflow=1
    \n
  2. Run the ./install.sh script located in the main helm directory

    \n
    ./install.sh
    \n

Environment deployment

Environment deployment should be performed with great care.

If the deployment is performed on a Windows machine, please make sure that the install.sh, encrypt.sh, decrypt.sh and .config files have Unix line endings; otherwise deployment errors will occur.


Environment deployment can be performed after connecting the local machine to the remote Kubernetes cluster.

  1. Prepare the Airflow configuration in the cluster env repository.
  2. Adjust the .config file to update Airflow (and any other service you want)

    \n
    enable_airflow=1
    \n
  3. Run the ./install.sh script to update the Kubernetes cluster
  4. Check if all Airflow pods are working correctly

Helm chart configuration

The available configuration options are described in the values.yaml file in the Airflow GitHub repository.

Helm chart adjustments

In addition to the base Airflow Kubernetes resources, the following are created:

Definitions: helm templates

Dags deployment

DAGs are deployed using the Ansible playbook install_mdmgw_airflow_services_k8s.yml

The playbook uses the kubectl command to work with the Airflow pods.

You can run this playbook locally:

  1. To modify the list of DAGs that should be deployed during the playbook run, adjust the airflow_components list:
    e.g.

    \n
    airflow_components:\n  - lookup_values_export_to_s3
    \n
  2. Run the playbook (adjusting the environment as needed)
    e.g.

    \n
    ansible-playbook install_mdmgw_airflow_services.yml -i inventory/emea_dev/inventory
    \n

Or with jenkins job:

https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/

" + }, + { + "title": "Airflow DAGs", + "pageID": "164470169", + "pageLink": "/display/GMDM/Airflow+DAGs", + "content": "" + }, + { + "title": "●●●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1589274]", + "pageID": "310943460", + "pageLink": "/pages/viewpage.action?pageId=310943460", + "content": "

\"\"

Description

DAG used to prepare data from the FLEX (US) tenant to be loaded into the GBLUS tenant.

The S3 Kafka connector on the FLEX environment uploads files every day to an S3 bucket as multiple small files. This DAG concatenates those files into one. The ETL team downloads the concatenated file from the S3 bucket and uploads it into the GBLUS tenant via the batch service.

Example

https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=concat_s3_files_gblus_prod

" + }, + { + "title": "active_hcp_ids_report", + "pageID": "310939877", + "pageLink": "/display/GMDM/active_hcp_ids_report", + "content": "

\"\"

Description

Generates a report of active HCPs from the defined countries.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=active_hcp_ids_report_emea_prod

Steps

" + }, + { + "title": "China reports", + "pageID": "310939879", + "pageLink": "/display/GMDM/China+reports", + "content": "

Description

A set of DAGs that produce the China reports on the GBL environment, which are later sent via email:

Individual reports are generated by executing the defined queries on Mongo, and the extracts are published to S3. The main DAGs then download the exports from S3 and send an email with all the reports.


Main dag example:

\"\"

Report generating dag example:

\"\"

Dags list

Dags executed every day:

china_generate_reports_gbl_prod - main dag that triggers the rest

china_affiliation_status_report_gbl_prod

china_dcr_statistics_report_gbl_prod

china_hcp_by_source_report_gbl_prod

china_import_and_gen_dcr_statistics_report_gbl_prod

china_import_and_gen_merge_report_gbl_prod

china_merge_report_gbl_prod


Dags executed weekly:

china_monthly_generate_reports_gbl_prod - main dag that triggers the rest

china_monthly_hcp_by_channel_report_gbl_prod

china_monthly_hcp_by_city_type_report_gbl_prod

china_monthly_hcp_by_department_report_gbl_prod

china_monthly_hcp_by_gender_report_gbl_prod

china_monthly_hcp_by_hospital_class_report_gbl_prod

china_monthly_hcp_by_province_report_gbl_prod

china_monthly_hcp_by_source_report_gbl_prod

china_monthly_hcp_by_SubTypeCode_report_gbl_prod

china_total_entities_report_gbl_prod



" + }, + { + "title": "clear_batch_service_cache", + "pageID": "333156979", + "pageLink": "/display/GMDM/clear_batch_service_cache", + "content": "

\"\"

Description

This DAG is used to clear the batch-service cache (the Mongo batchEntityProcessStatus collection). It deletes all records specified in a CSV file for the specified batchName.

To clear the cache, the batch-service batchController/{batch_name}/_clearCache endpoint is used.

This DAG is used by the mdmhub hub-ui.

Input parameters:

\n
{\n  "fileName": "inputFile.csv",\n  "batchName": "testBatchTAGS"\n}
\n

Main steps

\n
{'removedRecords': 1}\n
\n
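Invoking the same endpoint outside Airflow can be sketched as below. The batch-service base URL is a placeholder; the endpoint path and the input fields (fileName, batchName) come from the parameters shown above, while sending them as the request body is an assumption.

```python
import json
import urllib.request

def clear_cache_url(base_url: str, batch_name: str) -> str:
    """Build the clear-cache endpoint path for a given batch name."""
    return f"{base_url}/batchController/{batch_name}/_clearCache"

def clear_batch_cache(base_url: str, batch_name: str, file_name: str) -> dict:
    """POST the input parameters and return the response, e.g. {'removedRecords': 1}.

    Passing fileName/batchName in the JSON body is an assumption based on the
    input-parameters example above.
    """
    payload = json.dumps({"fileName": file_name, "batchName": batch_name}).encode()
    req = urllib.request.Request(
        clear_cache_url(base_url, batch_name),
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```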


Example

https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com/graph?dag_id=clear_batch_service_cache_amer_dev&root=

" + }, + { + "title": "distribute_nucleus_extract", + "pageID": "310939886", + "pageLink": "/display/GMDM/distribute_nucleus_extract", + "content": "

DEPRECATED

Description

Distributes extracts that are sent by Nucleus to an S3 directory into multiple per-country directories, which are later used by the inc_batch_* DAGs.

Input and output directories are configured in dags configuration file:

\"\"

Dag:

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=distribute_nucleus_extract_gbl_prod&root=

" + }, + { + "title": "export_merges_from_reltio_to_s3", + "pageID": "310939888", + "pageLink": "/display/GMDM/export_merges_from_reltio_to_s3", + "content": "

\"\"

Description

DAG used to schedule the Reltio merges export, adjust the file format and then upload the file to the S3 Snowflake directory.

Steps:

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=export_merges_from_reltio_to_s3_full_emea_prod

" + }, + { + "title": "get_rx_audit_files", + "pageID": "310943418", + "pageLink": "/display/GMDM/get_rx_audit_files", + "content": "

\"\"

Description

Download rx_audit files from:

The files are then uploaded to the defined S3 directory that is later used by the inc_batch_rx_audit DAG.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=inc_batch_rx_audit_gbl_prod

Useful links

RX_AUDIT

" + }, + { + "title": "historical_inactive", + "pageID": "310943421", + "pageLink": "/display/GMDM/historical_inactive", + "content": "

\"\"

Description

DAG used to implement the history inactive process

Steps:

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=historical_inactive_emea_prod

Reference

Snowflake: History Inactive

" + }, + { + "title": "hldcr_reconciliation", + "pageID": "310943423", + "pageLink": "/display/GMDM/hldcr_reconciliation", + "content": "

\"\"

Description

The HL DCR flow occasionally blocked some VRs' statuses from being sent to PforceRx in an outbound file, because the Hub had not received the event from Reltio informing about the Change Request resolution. The exact event expected is CHANGE_REQUEST_CHANGED.

To prevent the above, the HLDCR Reconciliation process runs regularly, performing the following steps:

  1. Query MongoDB store (Collection DCRRequests) for VRs in CREATED status. Export result as list.
  2. For each VR from the list, generate a CHANGE_REQUEST_CHANGED event and post it to Kafka.
  3. Further processing is as usual - DCR Service enriches the event with current changeRequest state. If the changeRequest has been resolved, it updates the status in MongoDB.
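Step 2 - generating a synthetic CHANGE_REQUEST_CHANGED event for each stuck VR - can be sketched as follows. The event shape is an illustrative assumption; only the event type and the CREATED-status filter come from the description above.

```python
def build_reconciliation_events(dcr_requests: list[dict]) -> list[dict]:
    """For every VR still in CREATED status, emit a synthetic
    CHANGE_REQUEST_CHANGED event to be posted to Kafka (step 2 above).
    The 'status'/'changeRequestUri' field names are assumed."""
    return [
        {"type": "CHANGE_REQUEST_CHANGED", "changeRequestUri": vr["changeRequestUri"]}
        for vr in dcr_requests
        if vr.get("status") == "CREATED"
    ]
```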

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hldcr_reconciliation_gbl_prod

" + }, + { + "title": "HUB Reconciliation process", + "pageID": "164470182", + "pageLink": "/display/GMDM/HUB+Reconciliation+process", + "content": "

The reconciliation process was created to keep HUB in sync with Reltio. Reltio sometimes does not generate events; these events are therefore never consumed by HUB from the SQS queue, and the HUB platform falls out of sync with Reltio data. External Clients then do not receive the required changes, which causes multiple systems to become inconsistent. This process was designed to solve that problem.

The fully automated reconciliation process generates these missing events. The events are sent to the inbound Kafka topic; the HUB platform processes them, updates the Mongo collection and routes the events to the external Clients' topics.

Airflow

The following diagram presents the reconciliation process steps:

\"\"

This directed acyclic diagram presents the steps that are taken to compare Reltio and HUB and produce the missing events. This diagram is divided into the following sections:

  1. Initialization and Reltio Data preparation - in this section the process invokes the Reltio export, and upload full export to mongo.
    1. clean_dirs_before_init, init_dirs, timestamp – these 3 tasks are responsible for preparing the directory structure required in the further steps and for capturing the timestamp required by the reconciliation process. Reltio and HUB data change over time, and the export is made at a specific point in time. We need to ensure that only entities changed before the Reltio export are compared. This guarantees that only correct events are generated and consistent data is compared.
    2. entities_export – the task invokes the Reltio Export API and triggers the export job in Reltio
    3. sensor_s3_reltio_file – this task is an S3 bucket sensor. Because the Reltio export job is an asynchronous task running in the background, the file sensor checks the S3 location ‘hub_reconciliation/<ENV>/RELTIO/inbound/’ and waits for export. When the success criteria are met, the process exits with success. The timeout for this job is set to 24 hours, the poke interval is set to 10 minutes.
    4. download_reltio_s3_file, unzip_reltio_export, mongo_import_json_array, generate_mongo_indexes – these 4 tasks are invoked after successful export generation. The zip is downloaded and extracted to a JSON file, which is then uploaded to a Mongo collection. The generate_mongo_indexes task creates indexes in the newly uploaded collection to optimize performance.
    5. archive_flex_s3_file_name – after a successful Mongo import, the Reltio export is archived for future reference.
  2. HUB validation - Reltio ↔ HUB comparison - the main comparison and events generation logic is invoked in this SUB DAG. The details are described in the section below
  3. Events generation  - after data comparison, generated events are sent to selected Kafka topic.
    1. Then standard events processing begins. The details are described in HUB documentation.
      1. Please check the following documents to find more details: 
        1. Entity change events processing (Reltio)
        2. Event filtering and routing rules
        3. Processing events on client side


HUB validation - Reltio ↔ HUB comparison

\"\"

This directed acyclic diagram (SUB DAG) presents the steps that are taken to compare HUB and Reltio data in both directions. Because Reltio data is already uploaded and HUB (“entityHistory”) collection is always available we can immediately start the comparison process. 

  1. mongo_find_reltio_hub_differnces - this process compares Reltio data to HUB data.
    1. A Mongo aggregation pipeline matches the entities from the Reltio export to HUB profiles located in the Mongo collection by entity URI (ID). All attributes from Reltio are compared to the HUB profile attributes - when a difference is found, it means that the profile is out of sync and a new event should be generated.
      1. Based on these changes the HCP_CHANGED or HCO_CHANGED events are generated.
      2. When the profile is missing on the HUB side, the HCP_CREATED or HCO_CREATED events are generated.
  2. mongo_find_hub_reltio_differnces - this process compares HUB entities to Reltio data. The process is designed to find only entities missing in Reltio; based on these changes the HCP_REMOVED or HCO_REMOVED events are generated.
    1. A Mongo aggregation pipeline matches the entities from the HUB Mongo collection to Reltio profiles by entity URI (ID). All HUB profiles that are not present in the Reltio export data are marked as missing for future reference.
  3. mongo_generate_hub_events_differences - this task is related to the automated reconciliation process. The full process is described in this paragraph.
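The two comparison directions can be illustrated with a simplified sketch: given the Reltio export and the HUB collection as dicts keyed by entity URI, one direction yields CREATED/CHANGED events and the other yields REMOVED events. Event names follow the HCP/HCO examples above; the attribute comparison is reduced to plain equality, whereas the real pipeline is a Mongo aggregation.

```python
def diff_reltio_vs_hub(reltio: dict, hub: dict, entity_type: str = "HCP") -> list[dict]:
    """Reltio -> HUB direction: profiles missing in HUB yield *_CREATED events,
    profiles with differing attributes yield *_CHANGED (simplified equality check)."""
    events = []
    for uri, profile in reltio.items():
        if uri not in hub:
            events.append({"type": f"{entity_type}_CREATED", "uri": uri})
        elif profile != hub[uri]:
            events.append({"type": f"{entity_type}_CHANGED", "uri": uri})
    return events

def diff_hub_vs_reltio(reltio: dict, hub: dict, entity_type: str = "HCP") -> list[dict]:
    """HUB -> Reltio direction: profiles missing in the export yield *_REMOVED events."""
    return [{"type": f"{entity_type}_REMOVED", "uri": uri} for uri in hub if uri not in reltio]
```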


Configuration and scheduling

The process can be started in Airflow on demand. 

The configuration for this process is stored in the MDM Environment configuration repository. 

The following section is responsible for the HUB Reconciliation process activation on the selected environment:

\n
active_dags:\n  gbl_dev:\n    - hub_reconciliation.py
\n


The file is available in "inventory/scheduler/group_vars/all/all.yml".
To activate the reconciliation process on a new environment, the environment should be added to the "active_dags" map.
Then "ansible-playbook install_airflow_dags.yml" needs to be invoked. After this, the new process is ready for use in Airflow.

Reconciliation process 


To synchronize Reltio with HUB, and therefore synchronize profiles in Reltio with external Clients, a fully automated process is started after the full HUB<->Reltio comparison: this is the "mongo_generate_hub_events_differences" task.

The automated reconciliation process generates events. These events are sent to the inbound Kafka topic; the HUB platform processes them, updates the Mongo collection and routes the events to the flex topic.

The following diagram presents the reconciliation steps:

\"\"

  1. Automated reconciliation process generates events:

The following events are generated during this process:

2. Next, Event Publisher receives events from the internal Kafka topic and calls MDM Gateway API to retrieve the latest state of Entity from Reltio. Entity data in JSON is added to the event to form a full event. For REMOVED events, where Entity data is by definition not available in Reltio at the time of the event, Event Publisher fetches the cached Entity data from Mongo database instead.

3. Event Publisher extracts the metadata from Entity (type, country of origin, source system).

4. Entity data is stored in the MongoDB database, for later use

5. For every Reltio event, there are two Publishing Hub events created: one in Simple mode and one in Event Sourcing (full) mode. Based on the metadata, and Routing Rules provided as a part of application configuration, the list of the target destinations for those events is created. The event is sent to all matched destinations to the target topic (<env>-out-full-<client>) when the event type is full or (<env>-out-simple-<client>) when the event type is simple. 




" + }, + { + "title": "HUB Reconciliation Process V2", + "pageID": "164470184", + "pageLink": "/display/GMDM/HUB+Reconciliation+Process+V2", + "content": "

\"\"


  1. The Hub reconciliation process starts by downloading the reconciliation.properties file with the following information:
    1. reconciliationType - reconciliation type - possible values: FULL_RECONCILIATION or PARTIAL_RECONCILIATION (since the last run)
    2. eventType - event type - used when generating events for Kafka - possible values: FULL or CROSSWALK_ONLY
    3. reconcileEntities - if set to true, entities will be reconciled
    4. reconcileRelations - if set to true, relations will be reconciled
    5. reconcileMergeTree - if set to true, the merge tree will be reconciled
  2. The process sets the Hub reconciliation properties
  3. If reconcileEntities is set to true, the entity reconciliation process is started
    1. <entities_get_last_timestamp> The process gets the timestamp of the last entities export
    2. <entities_export> The entities export is triggered from Reltio - this step is done by a Groovy script
    3. <entities_export_sensor> The process checks whether the export is finished by verifying that the SUCCESS file with manifest.json exists in the S3 folder /us/<env>/inboud/hub/hub_reconciliation/entities/inbound/entities_export_<timestamp>
    4. <entities_set_last_timestamp> In this step the process sets the timestamp for the future reconciliation of entities - it is stored in Airflow variables
    5. <entities_generate_hub_reconciliation_events> this step is responsible for checking which entities have changed and generating events for them
      1. first, the export file is fetched from the S3 folder /us/<env>/inboud/hub/hub_reconciliation/entities/inbound/entities_export_<timestamp>
      2. the file is unzipped in a bash script
      3. for the unzipped file there are two options
        1. if useChecksum is enabled, the calculateChecksum Groovy script is executed, which calculates a checksum for the exported entities and generates a ReconciliationEvent with the checksum only
        2. if useChecksum is disabled, a ReconciliationEvent is generated with the whole entity
      4. in the last step the generated events are sent to the specified Kafka topics
      5. events from the topic are processed by the reconciliation service
      6. the reconciliation service checks, based on the checksum/object changes, whether a PublisherEvent should be generated
        1. it compares the checksum from the ReconciliationEvent, if present, with the one stored in the entityHistory table
        2. if the checksum is absent, it compares the entity objects from the ReconciliationEvent with the ones stored in Mongo's entityHistory table - objects on both sides are normalized before the comparison
        3. it compares SimpleCrosswalkOnlyEntity objects if the CROSSWALK_ONLY reconciliation event type is chosen
    6. <entities_export_archive> - moves the export folder on S3 from the inbound to the archive folder

\"\"
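The useChecksum path boils down to hashing a normalized form of each exported object so both sides can be compared cheaply. A minimal sketch follows; the exact normalization (sorted keys, no insignificant whitespace) and the hash algorithm are assumptions - the Hub's Groovy script may normalize differently.

```python
import hashlib
import json

def entity_checksum(entity: dict) -> str:
    """Compute a stable checksum for an entity: serialize with sorted keys and
    no insignificant whitespace, then hash. The same normalization must be
    applied to the stored (entityHistory) side before comparing."""
    normalized = json.dumps(entity, sort_keys=True, separators=(",", ":"))
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def needs_publisher_event(exported: dict, stored_checksum: str) -> bool:
    """A PublisherEvent is generated only when the checksums differ."""
    return entity_checksum(exported) != stored_checksum
```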

4. If reconcileRelations is set to true, the relation reconciliation process is started

  1. <relations_get_last_timestamp> The process gets the timestamp of the last relations export
  2. <relations_export> The relations export is triggered from Reltio - this step is done by a Groovy script
  3. <relations_export_sensor> The process checks whether the export is finished by verifying that the SUCCESS file with manifest.json exists in the S3 folder /us/<env>/inboud/hub/hub_reconciliation/relations/inbound/relations_export_<timestamp>
  4. <relations_set_last_timestamp> In this step the process sets the timestamp for the future reconciliation of relations - it is stored in Airflow variables
  5. <relations_generate_hub_reconciliation_events> this step is responsible for checking which relations have changed and generating events for them
    1. first, the export file is fetched from the S3 folder /us/<env>/inboud/hub/hub_reconciliation/relations/inbound/relations_export_<timestamp>
    2. the file is unzipped in a bash script
    3. for the unzipped file there are two options
      1. if useChecksum is enabled, the calculateChecksum Groovy script is executed, which calculates a checksum for the exported relations and generates a ReconciliationEvent with the checksum only
      2. if useChecksum is disabled, a ReconciliationEvent is generated with the whole relation
    4. in the last step the generated events are sent to the specified Kafka topic
    5. events from the topic are processed by the reconciliation service
    6. the reconciliation service checks, based on the checksum/object changes, whether a PublisherEvent should be generated
      1. it compares the checksum from the ReconciliationEvent, if present, with the one stored in Mongo's entityRelation table
      2. if the checksum is absent, it compares the relation objects from the ReconciliationEvent with the ones stored in Mongo's entityRelation table - objects on both sides are normalized before the comparison
      3. it compares SimpleCrosswalkOnlyRelation objects if the CROSSWALK_ONLY reconciliation event type is chosen
  6. <relations_export_archive> - moves the export folder on S3 from the inbound to the archive folder

\"\"

5. If reconcileMergeTree is set to true, the merge tree reconciliation process is started

  1. <merge_tree_get_last_timestamp> The process gets the timestamp of the last merge tree export
  2. <merge_tree_export> The merge tree export is triggered from Reltio - this step is done by a Groovy script
  3. <merge_tree_export_sensor> The process checks whether the export is finished by verifying that the SUCCESS file with manifest.json exists in the S3 folder /us/<env>/inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_<timestamp>
  4. <merge_tree_set_last_timestamp> In this step the process sets the timestamp for the future reconciliation of the merge tree - it is stored in Airflow variables
  5. <merge_tree_generate_hub_reconciliation_events> this step is responsible for checking which merge tree objects have changed and generating events for them
    1. first, the export file is fetched from the S3 folder /us/<env>/inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_<timestamp>
    2. the file is unzipped in a bash script
    3. for the unzipped file there are two options
      1. if useChecksum is enabled, the calculateChecksum Groovy script is executed, which creates a ReconciliationMergeEvent with the URI of the main (winner) object and the list of loser URIs
      2. if useChecksum is disabled, a ReconciliationEvent is generated with the whole merge tree object
    4. in the last step the generated events are sent to the specified Kafka topic
    5. events from the topic are processed by the reconciliation service
    6. the reconciliation service sends merge and lost_merger PublisherEvents for the winner and every loser
  6. <merge_tree_export_archive> - moves the export folder on S3 from the inbound to the archive folder









" + }, + { + "title": "import_merges_from_reltio", + "pageID": "310943426", + "pageLink": "/display/GMDM/import_merges_from_reltio", + "content": "

\"\"

Description

Schedules the Reltio merges export and imports it into Mongo.

This DAG is scheduled by china_import_and_gen_merge_report, and the data imported into Mongo is used by china_merge_report to generate the China report files.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=import_merges_from_reltio_gbl_prod&root=&num_runs=25&base_date=2023-04-06T00%3A05%3A20Z

" + }, + { + "title": "import_pfdcr_from_reltio", + "pageID": "310943428", + "pageLink": "/display/GMDM/import_pfdcr_from_reltio", + "content": "

\"\"

Description

Schedules the Reltio entities export, downloads it from S3, makes small changes to the export and imports it into Mongo.

This DAG is scheduled by china_import_and_gen_dcr_statistics_report, and the data imported into Mongo is used by china_dcr_statistics_report to generate the China report files.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=import_pfdcr_from_reltio_gbl_prod

" + }, + { + "title": "inc_batch", + "pageID": "310943432", + "pageLink": "/display/GMDM/inc_batch", + "content": "

\"\"

Description

Process used to load IDL files stored on S3 into Reltio. This DAG is based on the mdmhub inc_batch_channel component.

Steps

  1. Create a batch instance in Mongo using the batch-service /batchController endpoint
  2. Download IDL files from the S3 directory
  3. Extract compressed archives
  4. Preprocess the files (e.g. dos2unix)
  5. Run the inc_batch_channel component
  6. Archive input files and reports

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=inc_batch_sap_gbl_prod

" + }, + { + "title": "Initial events generation process", + "pageID": "164470083", + "pageLink": "/display/GMDM/Initial+events+generation+process", + "content": "

Newly connected clients don't have knowledge about entities that were created in MDM before they connected. For this reason the initial event loading process was designed. The process loads events about already existing entities to the client's kafka topic. Thanks to this the new client is synced with MDM.

Airflow

The process was implemented as Airflow's DAG:

\"\"

Process steps:

  1. prepareWorkingDir - prepares the directory structure required by the process,

  2. getLastTimestamp - gets the time marker of the last process execution. This marker is used to determine which events have already been sent by a previous run. If the process runs for the first time, the marker is always 0,

  3. getTimestamp - gets the current time marker,

  4. generatesEvents - generates the events file based on the current Mongo state. Data used to prepare event messages is selected by the condition entity.lastModificationDate > lastTimestamp,

  5. divEventsByEventKind - divides the events file by event kind: simple or full,

  6. loadFullEvents* - a group of steps that populates full events to specific topics. The number of these steps equals the number of topics specified in the configuration,

  7. loadSimpleEvents* - similar to the above, these steps populate simple events to specific topics. The number of these steps equals the number of topics specified in the configuration,

  8. setLastTimestamp - saves the current time marker. It will be used as the last-time marker in the next process execution.
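The selection and division logic of steps 4 and 5 can be sketched as plain Python. The entity and event field names are taken from the description above; everything else is an illustrative assumption:

```python
def select_entities(entities, last_timestamp):
    """Step 4: pick entities modified after the previous run's marker,
    i.e. entity.lastModificationDate > lastTimestamp."""
    return [e for e in entities if e["lastModificationDate"] > last_timestamp]

def split_by_kind(events):
    """Step 5: divide generated events into 'simple' and 'full' kinds,
    each destined for its own set of target topics."""
    simple = [e for e in events if e["eventKind"] == "simple"]
    full = [e for e in events if e["eventKind"] == "full"]
    return simple, full
```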


Configuration and scheduling

The process can be started on demand.

The Process's configuration is stored in the MDM Environment configuration repository.

To enable the process on specific environment:

  1. It should be named according to the template "generate_events_for_[client name]" and added to the "airflow_components" list defined in the "inventory/[env name]/group_vars/gw-airflow-services/all.yml" file,
  2. Create configuration file in "inventory/[env name]/group_vars/gw-airflow-services/generate_events_for_[client name].yml" with content as below:
  3. The process configuration
    ---
    generate_events_for_test_name: "generate_events_for_test" # Process name. It has to be the same as in the "airflow_components" list available in all.yml
    generate_events_for_test_base_dir: "{{ install_base_dir }}/{{ generate_events_for_test_name }}"
    generate_events_for_test:
      dag: # Airflow's DAG configuration section
        template: "generate_events.py" # do not change
        variables:
          DOCKER_URL: "tcp://euw1z1dl039.COMPANY.com:2376" # do not change
          dataDir: "{{ generate_events_for_test_base_dir }}/data" # do not change
          configDir: "{{ generate_events_for_test_base_dir }}/config" # do not change
          logDir: "{{ generate_events_for_test_base_dir }}/log" # do not change
          tmpDir: "{{ generate_events_for_test_base_dir }}/tmp" # do not change
          user:
            id: "7000" # do not change
            name: "mdm" # do not change
            groupId: "1002" # do not change
            groupName: "docker" # do not change
          mongo: # mongo configuration properties
            host: "localhost"
            port: "27017"
            user: "mdm_gw"
            password: "{{ secret_generate_events_for_test.dag.variables.mongo.password }}" # password is taken from the secret.yml file
            authDB: "reltio"
          kafka: # kafka configuration properties
            username: "hub"
            password: "{{ secret_generate_events_for_test.dag.variables.kafka.password }}" # password is taken from the secret.yml file
            servers: "10.192.71.136:9094"
            properties:
              "security.protocol": SASL_SSL
              "sasl.mechanism": PLAIN
              "ssl.truststore.location": /opt/kafka_utils/config/kafka_truststore.jks
              "ssl.truststore.password": "{{ secret_generate_events_for_test.dag.variables.kafka.properties.sslTruststorePassword }}" # password is taken from the secret.yml file
              "ssl.endpoint.identification.algorithm": ""
          countries: # Events will be generated only for the below countries
            - CR
            - BR
          targetTopics: # Target topics list. It is an array of pairs (topic name, event kind). Only the simple and full event kinds are allowed.
            - topic: dev-out-simple-int_test
              eventKind: simple
            - topic: dev-out-full-int_test
              eventKind: full
    ...
  4. then the playbook install_mdmgw_services.yml needs to be invoked to update runtime configuration.


" + }, + { + "title": "lookup_values_export_to_s3", + "pageID": "310943435", + "pageLink": "/display/GMDM/lookup_values_export_to_s3", + "content": "

\"\"

Description

Process used to extract lookup values from mongo and upload them to s3. The file from s3 is then pulled into snowflake.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=lookup_values_export_to_s3_gbl_prod


" + }, + { + "title": "MAPP IDL Export process", + "pageID": "164470173", + "pageLink": "/display/GMDM/MAPP+IDL+Export+process", + "content": "

\"\"

Description

Process used to generate an excel file with an entities export. The export is based on two mongo collections: lookupValues and entityHistory. The excel files are then uploaded into an s3 directory.

The excel files are used in the MAPP Review process on the gbl_prod environment.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=mapp_idl_excel_template_gbl_prod

" + }, + { + "title": "mapp_update_idl_export_config", + "pageID": "310943437", + "pageLink": "/display/GMDM/mapp_update_idl_export_config", + "content": "

Description

Process used to update the configuration of the mapp_idl_excel_template dags stored in mongo.

The configuration is stored in the mappExportConfig collection and consists of configuration information and the crosswalks order for each country.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=mapp_update_idl_export_config_gbl_prod

" + }, + { + "title": "merge_unmerge_entities", + "pageID": "310943439", + "pageLink": "/display/GMDM/merge_unmerge_entities", + "content": "

\"\"

\"\"


Description

This dag implements the batch merge & unmerge process. It downloads a file from s3 with a list of documents to merge or unmerge and then processes them. The batch-service is used to process the documents. After the documents are processed, a report is generated and transferred to an s3 directory.

Flow

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=merge_unmerge_entities_emea_prod

" + }, + { + "title": "micro_bricks_reload", + "pageID": "310943463", + "pageLink": "/display/GMDM/micro_bricks_reload", + "content": "

\"\"

Description

The dag extracts data from a snowflake table that contains microbricks exceptions. The data is then committed to a git repository from where it will be pulled by consul and loaded into the mdmhub components.

If the microbricks mapping file has changed since the last dag run, we wait for the mapping reload and copy events from the {{ env_name }}-internal-microbricks-changelog-events topic into {{ env_name }}-internal-microbricks-changelog-reload-events

Example

https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=micro_bricks_reload_amer_prod

" + }, + { + "title": "move_ods_", + "pageID": "310943441", + "pageLink": "/pages/viewpage.action?pageId=310943441", + "content": "

\"\"

Description

The dag copies files from external source s3 buckets and uploads them to the desired location in our internal s3 bucket. This data is later used in the inc_batch_* dags.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=move_ods_eu_export_gbl_prod

" + }, + { + "title": "rdm_errors_report", + "pageID": "310943445", + "pageLink": "/display/GMDM/rdm_errors_report", + "content": "

DEPRECATED

\"\"

Description

This dag generates a report with all rdm errors from the ErrorLogs collection and publishes it to an s3 bucket.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=rdm_errors_report_gbl_prod

" + }, + { + "title": "reconcile_entities", + "pageID": "337846202", + "pageLink": "/display/GMDM/reconcile_entities", + "content": "

\"\"


Details:

Process that exports data from mongo based on a query and either generates a https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileEntities request for each package, or generates a flat file from the exported entities and pushes it to the Kafka reltio-events topic.
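Grouping the exported entities into per-request packages can be sketched as follows. The package size is a hypothetical parameter for illustration; the source does not state the actual size used:

```python
def packages(entity_ids, package_size):
    """Group exported entity ids into fixed-size packages; one
    reconcileEntities request (or one flat-file batch) is then
    built per package."""
    for i in range(0, len(entity_ids), package_size):
        yield entity_ids[i:i + package_size]
```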

Steps:


Example

https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/tree?dag_id=reconcile_entities_emea_dev&root=

" + }, + { + "title": "reconciliation_ptrs", + "pageID": "310943447", + "pageLink": "/display/GMDM/reconciliation_ptrs", + "content": "

\"\"

DEPRECATED

Details

Process that reconciles events for the ptrs source.

Logic: Reconciliation process

Steps:

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=reconciliation_ptrs_emea_prod

" + }, + { + "title": "reconciliation_snowflake", + "pageID": "310943449", + "pageLink": "/display/GMDM/reconciliation_snowflake", + "content": "

\"\"

Details

Process that reconciles events for the snowflake topic.

Logic: Reconciliation process

Steps:

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=reconciliation_ptrs_emea_prod

" + }, + { + "title": "Kubernetes", + "pageID": "218693740", + "pageLink": "/display/GMDM/Kubernetes", + "content": "" + }, + { + "title": "Platform Overview", + "pageID": "218452673", + "pageLink": "/display/GMDM/Platform+Overview", + "content": "

In the latest physical architecture, MDM HUB services are deployed in Kubernetes clusters managed by the COMPANY Digital Kubernetes Service (PDKS).

There are non-prod and prod clusters for each region: AMER, EMEA, APAC.

Architecture

The picture below presents the layout of HUB services in Kubernetes cluster managed by PDKS  


\"PDCS

\"Global

Nodes

There are two groups of nodes:

Storage

Portworx storage appliance is used to manage persistence volumes required by stateful components.

Configuration:

Operators

MDM HUB uses K8s operators to manage applications like:

Application Name | Operator (with link) | Version
MongoDB | MongoDB Community operator | 0.6.2
Kafka | Strimzi | 0.27.x
ElasticSearch | Elasticsearch operator | 1.9.0
Prometheus | Prometheus operator | 8.7.3

Monitoring

Clusters are monitored by a local Prometheus service integrated with the central Prometheus and Grafana services.

For details go to the monitoring section.

Logging 

All logs from HUB components are sent to the Elastic service and can be explored via the Kibana UI.

For details go to the Kibana dashboard section.

Backend components

Name | Version
MongoDB | 4.2.6
Kafka | 2.8.1
ElasticSearch | 7.13.1
Prometheus | 2.15.2

Scaling 

TO BE 

Implementation

Kubernetes objects are implemented using Helm, the package manager for Kubernetes. There are several modules that, connected together, make up the MDMHUB application:

  1. operators - delivers a set of operators used to manage backend components of MDMHUB: Mongo operator, Kafka operator, Elasticsearch operator, Kong operator and Prometheus operator,
  2. consul - delivers consul server instance, user management tools and git2consul - the tool used to synchronize consul key-value registry with a git repository,
  3. airflow - deploys an instance of Airflow server,
  4. eck - using Elasticsearch operator creates EFK stack - Kibana, Elasticsearch and Fluentd,
  5. kafka - installs Kafka server,
  6. kafka-resources - installs Kafka topics, Kafka connector instances, managed users and ACLs,
  7. kong - using Kong operators installs a Kong server,
  8. kong-resources - delivers basic Kong configuration: users, plugins etc,
  9. mongo - installs mongo server instance, configures users and their permissions,
  10. monitoring - installs the Prometheus server and exporters used to monitor resources, components and endpoints,
  11. migration - a set of tools supporting migration from the old (ec2 based) environments to the new Kubernetes infrastructure,
  12. mdmhub - delivers the MDMHUB components, their configuration and dependencies.

All of the above modules are stored in the application source code as part of the helm module.

Configuration

The runtime configuration is stored in the mdm-hub-cluster-env repository. The configuration has the following structure:

[region]/ - MDMHUB region, e.g.: emea, amer, apac

    nprod|prod/ -  cluster class. nprod or prod values are possible,

        namespaces/ - logical spaces where MDMHUB components are deployed

            monitoring/ - configuration of prometheus stack

                service-monitors/

                values.yaml - namespace level variables

            [region]-dev/ - specific configuration for dev env eg.: kafka topics, hub components configuration

                config_files/ - MDMHUB components configuration files

                    all|mdm-manager|batch-service|.../

                values.yaml - variables specific for dev env.

                kafka-topics.yaml - kafka topic configuration

            [region]-qa/ - specific configuration for qa env

                config_files/

                    all|mdm-manager|batch-service|.../

            [region]-stage/ - specific configuration for stage env

                config_files/

                    all|mdm-manager|batch-service|.../

                values.yaml

                kafka-topics.yaml

            [region]-prod/ - specific configuration for prod env

                config_files/

                    all|mdm-manager|batch-service|.../

                values.yaml

                kafka-topics.yaml

            [region]-backend/ - backend services configuration: EFK stack, Kafka, Mongo etc.

                eck-config/ #eck specific files

                values.yaml

            kong/ - configuration of Kong proxy

                values.yaml

            airflow/ - configuration of Airflow scheduler

                values.yaml

        users/ #users configuration

            mdm_test_user.yaml

            callback_service_user.yaml

            ...

        values.yaml #cluster level variables

        secrets.yaml #cluster level sensitive data

    values.yaml #region level variables

values.yaml #values common for all environments and clusters

install.sh #implementation of deployment procedure


Application is deployed by install.sh script. The script does this in the following steps:

  1. Decrypt sensitive data: passwords, certificates, tokens, etc.,
  2. Prepare the order of values and secrets precedence (the last listed variables override all other variables):
    1. common values for all environments,
    2. region values,
    3. cluster variables,
    4. users values,
    5. namespace values.
  3. Download helm package,
  4. Do some package customization if required,
  5. Install helm package to the selected cluster.
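The precedence of values and secrets in step 2 behaves like a layered dictionary merge, where later layers override earlier ones. A minimal sketch of that behaviour (the layer names follow the list above; the keys are illustrative):

```python
def effective_values(*layers: dict) -> dict:
    """Merge value layers in precedence order: common -> region ->
    cluster -> users -> namespace. The last listed layer wins for
    any key defined in more than one layer."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged
```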


Deployment

Build

Job: mdm-hub-inbound-services/feature/kubernates

Deploy

All Kubernetes deployment jobs

AMER:

Deploy backend: Kong, Kafka, mongoDB, EFK, Consul, Airflow, Prometheus

Deploy MDM HUB


Administration

Administration tasks and standard operating procedures were described here.

" + }, + { + "title": "Migration guide", + "pageID": "218452659", + "pageLink": "/display/GMDM/Migration+guide", + "content": "

Phase 0

  1. Validate configuration:
    1. validate if all configuration was moved correctly - compare application.yml files, check the topic name prefix (on the k8s env the prefix has 2 parts), check the Reltio configuration, etc.,
    2. Check if reading events from sqs is disabled on k8s - reltio-subscriber,
    3. Check if reading events from the MAP sqs is disabled on k8s - map-channel,
    4. Check if event-publisher is configured to publish events to old kafka server - all client topics (*-out-*) without snowflake.
  2. Check if network traffic is opened:
    1. from old servers to new REST api endpoint,
    2. from k8s cluster to old kafka,
    3. from k8s cluster to old REST API endpoint,
  3. Make a mongo dump of data collections from mongo - remember start date and time:
    1. find mongo-migration-* pod and run shell on it.
    2. cd /opt/mongo_utils/data
      mkdir data
      cd data
      nohup dumpData.sh <source database schema> &
    3. start date is shown in the first line of log file:
      head -1 nohup.out #example output → [Mon Jul  4 12:09:32 UTC 2022] Dumping all collections without: entityHistory, entityMatchesHistory, entityRelations and LookupValues from source database mongo
    4. validate the output of dump tool by:
      cd /opt/mongo_utils/data/data && tail -f nohup.out
  4. Restore dumped collections in the new mongo instance:
    cd /opt/mongo_utils/data/data
    mv nohup.out nohup.out.dump
    nohup mongorestore.sh dump/ <target database schema> <source database schema> &
    tail -f nohup.out #validate the output
  5. Validate the target database and check that only the entityHistory, entityMatchesHistory, entityRelations and LookupValues collections were copied from the source. If there are more collections than mentioned, you can delete them.
  6. Create a new consumer group ${new_env}-event-publisher for the sync-event-publisher component on the topic ${old_env}-internal-reltio-proc-events located on the old Kafka instance. Set the offset to the start date and time of the mongo dump - do this with the command line client because Akhq has a problem with this action,
  7. Configure and run sync-event-publisher - it is responsible for synchronizing mongo DB with the old environment. The component has to be connected to the old Kafka and Manager, and the routing rules list has to be empty,

Phase 1 (External clients are still connected to the old endpoints of the rest services and kafka):

  1. Check whether anything is still waiting for processing on the kafka topics and whether there are active batches in the batch service,
  2. If there is data on the kafka topics, stop the subscriber and wait until all data in the enricher, callback and publisher is processed. Check it by monitoring the input topics of these components,
  3. Wait until all data is processed by the snowflake connector,
  4. Disable Jenkins jobs,
  5. Stop outbound (mdmhub) components,
  6. Stop inbound (mdmgw) components,
  7. Disable all Airflow's DAGs assigned to the migrated environment,
  8. Turn off the snowflake connector at the old environment,
  9. Turn off sync-event-publisher on k8s environment,
  10. Run the Mongo Migration Tool to copy the mongo databases - copy only the cache collections; the data collections were synced earlier (mongodump + sync-event-publisher). Before starting, check the collections in the old mongo instance. You can delete all temporary collections (lookup_values_export_to_s3_*, reconciliation_*, etc.).
    #dumping
    cd /opt/mongo_utils/data
    mkdir non_data
    cd non_data
    nohup dumpNonData.sh <source database schema> &
    tail -f nohup.out #validate the output

    #restoring
    nohup mongorestore.sh dump/ <target database schema> <source database schema> &
    tail -f nohup.out #validate the output
  11. Enable reltio subscriber on K8s - check SQS credentials and turn on SQS route,
  12. Enable processing events on MAP sqs queues - if map-channel exists on migrated environment,
  13. Reconfigure Kong:
    1. forward all incoming traffic to the new instance of MDMHUB
    2. include rules for API paths from: \n MR-3140\n -\n Getting issue details...\n STATUS\n
    3. Delete all plugins oauth and key-auth plugins https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-plugin
    4. it might be required to remove routes, when ansible playbook will throw a duplication error https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-route
  14. Start Snowflake connector located at k8s cluster, 
  15. Turn on components (without sync-event-publisher) on k8s environment,
  16. Change the api url and secret (manager apikey) in the snowflake deployment configuration (Ansible),
  17. Change the api key in dependent api routers.
  18. Install Kibana dashboards,
  19. Add mappings to Monstache,
  20. Add transaction topics to fluentd.


Phase 2 (Environment runs in K8s):

  1. Run the Kibana Migration Tool to copy indexes - after the migration,
  2. Run Kafka Mirror Maker to copy all data from old output topics to new ones.

Phase 2 (All external clients confirmed that they switched their applications to new endpoints):

  1. Wait until all clients have switched to the new endpoints,

Phase 3 (All environments are migrated to kubernetes):

  1. Stop old mongo instance,
  2. Stop fluentd and kibana,
  3. Stop Kafka Mirror Maker
  4. Stop kafka and kong at old environment,
  5. Decommission old environment hosts.


To remember after migration

  1. Review CPU requests on k8s https://pdcs-som1d.COMPANY.com/c/c-57wsz/monitoring - Resource management for components - done
  2. MongoDB on k8s has only 1 instance
  3. Kong API delete plugin - https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-plugin
  4. K8s add consul-server service to ingress - consul ui already exposes API https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1/kv/

  5. Consul UI redirect doesn't work due to consul being stubborn about using /ui path. Decision: skip this, send client new consul address 
  6. Fix issue with MDMHUB manage and batch-service oauth user being duplicated in mappings - done
  7. Verify if mdm hub components are using external api address and switch to internal k8s service address - checked, confirmed nothing is using external addresses
  8. Check if Portworx requires setting affinity rules to be running only on 3 nodes
  9. akhq - disable default k8s token automount - done
" + }, + { + "title": "PDKS Cluster tests", + "pageID": "228917568", + "pageLink": "/display/GMDM/PDKS+Cluster+tests", + "content": "

Assumptions

Addresses used in tests

  1. API: https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-amer-dev/actuator/health/
  2. Kafka
  3. Consul

K8s resources


Each MDM Hub app is deployed in 1 replica, so there is no redundancy.

Failover tests

Expected results

No downtimes of API and all services exposed to clients.

Scenario

One EKS node down

Force node drain with the timeout and grace period set to a low 10 seconds.

Results

One EKS node down

Conclusions

The test was partially successful.

To remove risk of services unavailability

To reduce time of services unavailability

Scale tests

Expected results

EKS node scaling up and down should be automatic based on cluster capacity. 

Scenarios

Scale pods up to exceed the capacity of the static ASG, then scale down.

Results

Scale up and down test was carried out while doing failover tests. 

When 1 of the 3 static nodes became unavailable, the ASG scaled up the number of dynamic instances, first to 1 and then to 2. After the static node was operational again, the ASG scaled the dynamic nodes down to 0.

Conclusions

" + }, + { + "title": "Portworx - storage administration guide", + "pageID": "218458438", + "pageLink": "/display/GMDM/Portworx+-+storage+administration+guide", + "content": "

Outdated

Portworx is no longer used in MDM Hub Kubernetes clusters

Portworx, what is it?

A commercial product, validated storage solution and a standard for PDKS Kubernetes clusters. It uses AWS EBS volumes, adds replication and provides a k8s storage class as a result. It can then be used just like any other k8s storage by defining a PVC.

What problem does it solve?

\"\"

How to:

use Portworx storage

Configure Persistent Volume Claim to use one of Portworx Storage Classes configured on K8s.

2 classes are available

\"\"

extend volumes

In Helm, just change the PVC requested size and deploy the changes to a cluster with a Jenkins job. No other action should be required.

Example change: MR-3124 change persistent volumes claims

check status, statistics and alerts

TBD

One of the tools should provide volume status and statistics:

Responsibilities

Who is responsible for what is described in the table below. 

In short: if any change in Portworx setup is required, create a support ticket to a queue found on Support information with queues names page.

\"\"

Additional documentation

  1. PDCS Kubernetes Storage Management Platform Standards (If link doesn't work, go to http://containers.COMPANY.com/ search in "PDKS Docs" section for "WTST-0299 PDCS Kubernetes Storage Management Platform Standards")
  2. Kubernetes Portworx storage class documentation
  3. Portworx on Kubernetes docs




" + }, + { + "title": "Resource management for components", + "pageID": "218444330", + "pageLink": "/display/GMDM/Resource+management+for+components", + "content": "


Outdated

MDM Hub components resources are managed automatically by the Vertical Pod Autoscaler - table below is no longer applicable

K8s resource requests vs limits 

Quotes on how to understand Kubernetes resource limits

requests is a guarantee, limits is an obligation

Galo Navarro


When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.

How Pods with resource requests are scheduled
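The scheduling rule in the quote above (capacity is checked against the sum of requests, not actual usage) can be sketched as a simple capacity check. All names and numbers here are illustrative:

```python
def can_schedule(node_capacity_m: int, scheduled_requests_m: list, new_request_m: int) -> bool:
    """The scheduler admits a Pod only if the sum of the CPU *requests*
    (not the actual usage) of the already-scheduled containers plus the
    new Pod's request fits within the node's capacity."""
    return sum(scheduled_requests_m) + new_request_m <= node_capacity_m
```

Note that this is why a node with very low actual CPU usage can still refuse new Pods: the reserved requests, not the live utilization, are what the scheduler sums.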

MDM Hub resource configuration per component

IMPORTANT: this table is outdated. The current CPU and memory configuration is in the mdm-hub-cluster-env git repository.


Component | CPU Request [m] | CPU Limit [m] | Memory Request [Mi] | Memory Limit [Mi]
mdm-callback-service | 200 | 4000 | 1600 | 2560
mdm-hub-reltio-subscriber | 200 | 1000 | 400 | 640
mdm-hub-event-publisher | 200 | 2000 | 800 | 1280
mdm-hub-entity-enricher | 200 | 2000 | 800 | 1280
mdm-api-router | 200 | 4000 | 800 | 1280
mdm-manager | 200 | 4000 | 1000 | 2000
mdm-reconciliation-service | 200 | 4000 | 1600 | 2560
mdm-batch-service | 200 | 2000 | 800 | 1280
Kafka | 500 | 4000 | 10000 (Xmx 3GB) | 20000
Zookeeper | 200 | 1000 | 256 | 512
akhq | 100 | 500 | 256 | 512
kafka-connect | 500 | 2000 | 1000 | 2000
MongoDB | 500 | 4000 | 20000 | 32000
MongoDB agent | 200 | 400 | 200 | 500
Elasticsearch | 500 | 2000 | 8000 | 20000
Kibana | 100 | 2000 | 1024 | 1536
Airflow - scheduler | 200 | 700 | 512 | 2048
Airflow - webserver | 200 | 700 | 256 | 1024
Airflow - postgresql | 250 | - | 256 | -
Airflow - statsd | 200 | 500 | 256 | 512
Consul | 100 | 500 | 256 | 512
git2consul | 100 | 500 | 256 | 512
Kong | 100 | 2000 | 512 | 2048
Prometheus | 200 | 1000 | 1536 | 3072
Legend
requires tuning
proposal
deployed

Useful links

Links helpful when talking about k8s resource management:

" + }, + { + "title": "Standards and rules", + "pageID": "218435163", + "pageLink": "/display/GMDM/Standards+and+rules", + "content": "

K8s Limit definition

The limit size for CPU has to be defined in "m" (millicpu), RAM in "Mi" (mebibytes) and storage in "Gi" (gibibytes). More details about resource limits can be found at https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

GB vs GiB: What’s the Difference Between Gigabytes and Gibibytes?

At its most basic level, one GB is defined as 1000³ (1,000,000,000) bytes and one GiB as 1024³ (1,073,741,824) bytes. That means one GB equals 0.93 GiB. 

Source: https://massive.io/blog/gb-vs-gib-whats-the-difference/
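The conversion above is simple arithmetic and can be verified directly:

```python
GB = 1000 ** 3   # one gigabyte  = 1,000,000,000 bytes
GiB = 1024 ** 3  # one gibibyte  = 1,073,741,824 bytes

# One GB is about 0.93 GiB, so a "10 GB" request expressed
# in Gi is roughly 9.31 Gi.
ratio = GB / GiB
```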


To check current resource configuration, check: Resource management for components

Docker

To protect our images against changes to remote images coming from remote registries such as https://hub.docker.com/, before using a remote image as a base image in the implementation you have to publish it in our private registry http://artifactory.COMPANY.com/mdmhub-docker-dev.

Kafka objects naming standards

Kafka topics

Name template: <$envName>-<$topicType>-<$name>

Topic Types: 

Consumer Groups

Name template: <$envName>-<$componentName>-[$processName]
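The two naming templates above can be sketched as small helpers. The example component and process names are hypothetical; only the template shapes come from this page:

```python
def topic_name(env_name: str, topic_type: str, name: str) -> str:
    """Kafka topic name: <$envName>-<$topicType>-<$name>."""
    return f"{env_name}-{topic_type}-{name}"

def consumer_group(env_name: str, component_name: str, process_name: str = "") -> str:
    """Consumer group name: <$envName>-<$componentName>-[$processName];
    the process part is optional."""
    base = f"{env_name}-{component_name}"
    return f"{base}-{process_name}" if process_name else base
```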


Standardized environment names

Standardized component names

" + }, + { + "title": "Technical details", + "pageID": "218440550", + "pageLink": "/display/GMDM/Technical+details", + "content": "

Network

Subnet name | Subnet mask | Region
subnet-07743203751be58b9 | 10.9.64.0/18 | amer
subnet-0dec853f7c9e507dd | 10.9.0.0/18 | amer
subnet-018f9a3c441b24c2b | ●●●●●●●●●●●●●●● | apac
subnet-06e1183e436d67f29 | 10.116.176.0/20 | apac
subnet-0e485098a41ac03ca | 10.90.144.0/20 | emea
subnet-067425933ced0e77f | 10.90.128.0/20 | emea

(The Details column contained a per-row screenshot.)

" + }, + { + "title": "SOPs", + "pageID": "228923665", + "pageLink": "/display/GMDM/SOPs", + "content": "

Standard operation procedures are available here.

" + }, + { + "title": "Downstream system migration guide", + "pageID": "218452663", + "pageLink": "/display/GMDM/Downstream+system+migration+guide", + "content": "

This chapter describes steps that you have to take if you want to switch your application to new MDM HUB instance.

Direct channel (Rest services)

If you use the direct channel to communicate with MDM HUB, the only thing you need to do is change the API endpoint addresses. The authentication mechanism, based on OAuth served by Ping Federate, stays unchanged. Please remember that network traffic between your services and MDMHUB probably has to be opened before your application is switched to the new HUB endpoints.

The following table presents the old endpoints and their substitutes in the new environment. Everyone who wants to connect to MDMHUB has to use the new endpoints.

Environment | Old endpoint | New endpoint | Affected clients | Description
GBLUS DEV/QA/STAGE | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1 | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1 | ETL | Consul
GBLUS DEV | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-dev | CDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULE | Manager API
GBLUS DEV | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-batch-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-dev | ETL | Batch API
GBLUS QA | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qa | CDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULE | Manager API
GBLUS QA | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qa | ETL | Batch API
GBLUS STAGE | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/stage-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-stage | CDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULE | Manager API
GBLUS STAGE | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/stage-batch-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-stage | ETL | Batch API
GBLUS PROD | https://gbl-mdm-hub-us-prod.COMPANY.com/v1 | https://consul-amer-prod-gbl-mdm-hub.COMPANY.com/v1 | ETL | Consul
GBLUS PROD | https://gbl-mdm-hub-us-prod.COMPANY.com/prod-ext | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-prod | CDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULE | Manager API
GBLUS PROD | https://gbl-mdm-hub-us-prod.COMPANY.com/prod-batch-ext | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-prod | ETL | Batch API
EMEA DEV/QA/STAGE | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/v1 | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/v1 | ETL | Consul
EMEA DEV | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-dev | MULE, GRV, PforceRx, JO | Router API
EMEA DEV | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-ext/gw | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-dev | | Manager API
EMEA DEV | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-batch-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-dev | ETL | Batch API
EMEA QA | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-qa | MULE, GRV, PforceRx, JO | Router API
EMEA QA | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-ext/gw | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-qa | | Manager API
EMEA QA | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-batch-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-qa | ETL | Batch API
EMEA STAGE | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-stage | MULE, GRV, PforceRx, JO | Router API
EMEA STAGE | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-ext/gw | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-stage | | Manager API
EMEA STAGE | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-batch-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-stage | ETL | Batch API
EMEA PROD | https://gbl-mdm-hub-emea-prod.COMPANY.com:8443/v1 | https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/v1 | ETL | Consul
EMEA PROD | https://gbl-mdm-hub-emea-prod.COMPANY.com:8443/prod-ext | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-emea-prod | MULE, GRV, PforceRx | Router API
EMEA PROD | https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/prod-ext/gw | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prod | | Manager API
EMEA PROD | https://gbl-mdm-hub-emea-prod.COMPANY.com:8443/prod-batch-ext | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-prod | | Batch API
GBL DEV | https://mdm-reltio-proxy.COMPANY.com:8443/dev-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-dev | MULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD | Manager API
GBL QA (MAPP) | https://mdm-reltio-proxy.COMPANY.com:8443/mapp-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-qa | MULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD | Manager API
GBL STAGE | https://mdm-reltio-proxy.COMPANY.com:8443/stage-ext | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-stage | MULE, GRV, JO, KOL_ONEVIEW, MEDIC, ONEMED, PTRS, VEEVA_FIELD | Manager API
GBL PROD | https://mdm-gateway.COMPANY.com/prod-ext | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-prod | MULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD | Manager API
GBL PRODhttps://mdm-gateway-int.COMPANY.com/gw-apihttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-prodCHINAManager API
EXTERNAL GBL DEVhttps://mdm-reltio-proxy.COMPANY.com:8443/dev-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-devMAP, GANT, MAPPManager API
EXTERNAL GBL QA (MAPP)https://mdm-reltio-proxy.COMPANY.com:8443/mapp-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-qaMAP, GANT, MAPPManager API
EXTERNAL GBL STAGEhttps://mdm-reltio-proxy.COMPANY.com:8443/stage-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-stageMAP, GANT, MAPPManager API
EXTERNAL GBL PRODhttps://mdm-gateway.COMPANY.com/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-prodMAP, GANT, MAPPManager API
EXTERNAL EMEA DEVhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/dev-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-devMAP, GANT, MAPPRouter API
EXTERNAL EMEA QAhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/qa-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-qaMAP, GANT, MAPPRouter API
EXTERNAL EMEA STAGEhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/stage-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-stageMAP, GANT, MAPPRouter API
EXTERNAL EMEA PRODhttps://api-emea-prod-gbl-mdm-hub-ext.COMPANY.com:8443/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-prodMAP, GANT, MAPPRouter API

Streaming channel (Kafka)

Switching to a new environment requires a configuration change on your side:

  1. Change the Kafka broker address.
  2. Change the JAAS configuration - in the new architecture, we decided to change the JAAS authentication mechanism to SCRAM. To be sure that you are using the right authentication, you have to change a few parameters in the Kafka connection:
    a. The JAAS login config file, whose path is specified in the "java.security.auth.login.config" Java property, should look like below:

KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="<user>"
  ●●●●●●●●●●●●●●●●●●●>";
};

    b. Change the value of the "sasl.mechanism" property to "SCRAM-SHA-512".
    c. If you configure the JAAS login using the "sasl.jaas.config" property, you have to change its value to "org.apache.kafka.common.security.scram.ScramLoginModule required username="<user>" ●●●●●●●●●●●●●●●●●●●>";"

You should receive the new credentials (username and password) in the email about changing Kafka endpoints. Otherwise, to get the proper username and ●●●●●●●●●●●●●●●, contact our support team.
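As an illustration, the SCRAM settings above can be collected into a standard Kafka client property map. This is a minimal sketch only - the broker address and credentials are placeholders, not actual MDMHUB values:

```python
def scram_client_config(bootstrap_servers, username, password):
    """Build a Kafka client configuration dict for SASL/SCRAM-SHA-512.

    Keys follow the standard Kafka client property names; the values
    here are illustrative placeholders.
    """
    jaas = (
        'org.apache.kafka.common.security.scram.ScramLoginModule required '
        f'username="{username}" password="{password}";'
    )
    return {
        "bootstrap.servers": bootstrap_servers,
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "SCRAM-SHA-512",
        "sasl.jaas.config": jaas,
    }

# Placeholder values - substitute the broker address and credentials
# you received for your environment.
cfg = scram_client_config("broker.example.com:9094", "svc_user", "changeit")
```

The same property names apply whether the client is configured through a properties file or programmatically.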


The following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect to MDMHUB has to use the new endpoints.

Environment | Old endpoint | New endpoint | Affected clients | Description
GBLUS DEV/QA/STAGE | amraelp00007335.COMPANY.com:9094 | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 | ENGAGE, KOL_ONEVIEW, GRV, ICUE, MULE | Kafka
GBLUS PROD | amraelp00007848.COMPANY.com:9094, amraelp00007849.COMPANY.com:9094, amraelp00007871.COMPANY.com:9094 | kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 | ENGAGE, KOL_ONEVIEW, GRV, ICUE, MULE | Kafka
EMEA DEV/QA/STAGE | euw1z2dl112.COMPANY.com:9094, mdm-reltio-proxy.COMPANY.com:9094 (external) | kafka-b1-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094 | MAP (external), PforceRx, MULE | Kafka
EMEA PROD | euw1z2pl116.COMPANY.com:9094, euw1z1pl117.COMPANY.com:9094, euw1z2pl118.COMPANY.com:9094, kafka-b1-emea-prod-gbl-mdm-hub.COMPANY.com:9094, kafka-b2-emea-prod-gbl-mdm-hub.COMPANY.com:9094, kafka-b3-emea-prod-gbl-mdm-hub.COMPANY.com:9094, kafka-b1-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095, kafka-b2-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095, kafka-b3-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095 (external) | kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094 | MAP (external), PforceRx, MULE | Kafka
GBL DEV/QA/STAGE | euw1z1dl037.COMPANY.com:9094, mdm-reltio-proxy.COMPANY.com:9094 (external) | kafka-b1-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094 | MAP (external), China, KOL_ONEVIEW, PTRS, PTE, ENGAGE, MAPP | Kafka
GBL PROD | euw1z1pl017.COMPANY.com:9094, euw1z1pl021.COMPANY.com:9094, euw1z1pl022.COMPANY.com:9094, mdm-broker-p1.COMPANY.com:9094, mdm-broker-p2.COMPANY.com:9094, mdm-broker-p3.COMPANY.com:9094 (external) | kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094 | MAP (external), China, KOL_ONEVIEW, PTRS, ENGAGE, MAPP | Kafka
EXTERNAL GBL DEV/QA/STAGE | | | |

Data Mart (Snowflake)

There are no changes required if you use Snowflake to get MDMHUB data.

" + }, + { + "title": "MDM HUB Log Management", + "pageID": "164470115", + "pageLink": "/display/GMDM/MDM+HUB+Log+Management", + "content": "

MDM HUB has a built-in log management solution that allows tracing data going through the system (incoming and outgoing events).

It improves:

The solution is based on EFK stack:

The solution is presented in the picture below: 



\"\"

" + }, + { + "title": "EFK Environments", + "pageID": "164470092", + "pageLink": "/display/GMDM/EFK+Environments", + "content": "


" + }, + { + "title": "Elastic Cloud on Kubernetes in MDM HUB", + "pageID": "284787486", + "pageLink": "/display/GMDM/Elastic+Cloud+on+Kubernetes+in+MDM+HUB", + "content": "

Overview

<graphic0>

After migrating from on-premise solutions to the Kubernetes platform, we started using Elastic Cloud on Kubernetes (ECK).

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html

With ECK we can streamline critical operations, such as:

  1. Setting up hot-warm-cold architectures.
  2. Providing lifecycle policies for logs and transactions, and snapshots of obsolete or less frequently used data.
  3. Creating dashboards visualising data from MDM HUB core processes.

Logs, transactions and mongo collections

We split all the data entering the Elastic Stack cluster into the following categories:

1. MDM HUB services logs

To forward MDM HUB service logs we use Fluent Bit, which runs as a sidecar/agent container inside the MDM HUB service pod.

The sidecar agents send data directly to a backend service on the Kubernetes cluster.

\"\"

2. Backend logs and transactions

To forward backend logs and transactions we use Fluentd as a forwarder and aggregator - a lightweight pod instance deployed on the edge.

If Elasticsearch is unavailable, a secondary output on S3 storage ensures no data coming from the services is lost.

\"\"

3. MongoDB collections

In this scenario we use Monstache, a sync daemon written in Go that continuously indexes MongoDB collections into Elasticsearch.

We use it to mirror Reltio data gathered in MongoDB collections into Elasticsearch, both as a backup and as a source for Kibana dashboard visualisations.

\"\"


Data streams

MDM HUB service and backend logs and transactions are managed by the data streams mechanism.
A data stream lets us store append-only time-series data (logs/transactions) across multiple indices while exposing a single named resource for requests.

https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html
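For illustration, a data stream is backed by a composable index template that declares a `data_stream` section. A minimal sketch - the template and pattern names here are illustrative, not the actual MDM HUB templates:

```python
# Sketch of a composable index template that backs a data stream.
# The pattern name "mdmhub-logs-*" is an illustrative placeholder.
logs_template = {
    "index_patterns": ["mdmhub-logs-*"],
    "data_stream": {},  # marks matching names as data streams
    "priority": 200,
    "template": {
        "settings": {"number_of_shards": 1},
        "mappings": {
            "properties": {
                "@timestamp": {"type": "date"},  # required for data streams
                "message": {"type": "text"},
            }
        },
    },
}
```

Such a template would typically be registered via the `_index_template` API; writes then target the data stream name, and Elasticsearch manages the backing indices.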

Index lifecycle policies and snapshots management

Index templates, index lifecycle policies and snapshots for index management are entirely covered by the built-in Elasticsearch mechanisms.

Description of the index lifecycle divided into phases:

  1. Index rollover - logs and transactions are stored in hot tiers
  2. Index rollover - logs and transactions are moved to the delete phase
  3. Snapshot - logs and transactions deleted from Elasticsearch are snapshotted to an S3 bucket
  4. Snapshot - logs and transactions are deleted from the S3 bucket; the index is no longer available

All snapshotted indices may be restored and recreated on Elasticsearch anytime.

Maximum sizes and ages for index rollovers and snapshots are listed in the following tables:

Non PROD environments

type | index rollover hot phase | index rollover delete phase | snapshot phase
MDM HUB logs | age: 7d, size: 100gb | age: 30d | age: 180d
Backend logs | age: 7d, size: 100gb | age: 30d | age: 180d
Kafka transactions | age: 7d, size: 25gb | age: 30d | age: 180d

PROD environments

type | index rollover hot phase | index rollover delete phase | snapshot phase
MDM HUB logs | age: 7d, size: 100gb | age: 90d | age: 365d
Backend logs | age: 7d, size: 100gb | age: 90d | age: 365d
Kafka transactions | age: 7d, size: 25gb | age: 180d | age: 365d
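The PROD "Kafka transactions" row could be expressed as an Elasticsearch ILM policy along these lines. This is a sketch - the policy structure follows the standard ILM API, but the exact deployed policies are not shown in this document:

```python
# Sketch of an ILM policy matching the PROD "Kafka transactions" row:
# hot rollover at age 7d or size 25gb, delete phase entered at 180d.
kafka_transactions_policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_age": "7d", "max_size": "25gb"}
                }
            },
            "delete": {
                "min_age": "180d",
                "actions": {"delete": {}},
            },
        }
    }
}
```

Such a policy would be uploaded via the `_ilm/policy/<name>` API; the S3 snapshots described above are handled separately by snapshot lifecycle management (SLM).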

Additionally, we execute a full snapshot policy on a daily basis. It is responsible for incrementally storing all the Elasticsearch indexes on S3 buckets as a backup. 

Snapshots locations

environment | S3 bucket | path
EMEA NPROD | pfe-atp-eu-w1-nprod-mdmhub | emea/archive/elastic/full
EMEA PROD | pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811 | emea/archive/elastic/full
AMER NPROD | gblmdmhubnprodamrasp100762 | amer/archive/elastic/full
AMER PROD | pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808 | amer/archive/elastic/full
APAC NPROD | globalmdmnprodaspasp202202171347 | apac/archive/elastic/full
APAC PROD | pfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502 | apac/archive/elastic/full


MongoDB collection data is stored in Elasticsearch permanently; it is not covered by the index lifecycle processes.

Kibana dashboards

Kibana Dashboard Overview


" + }, + { + "title": "Kibana Dashboards", + "pageID": "164470093", + "pageLink": "/display/GMDM/Kibana+Dashboards", + "content": "


" + }, + { + "title": "Tracing areas", + "pageID": "164470094", + "pageLink": "/display/GMDM/Tracing+areas", + "content": "

Log data are generated in the following actions:



\"\"

" + }, + { + "title": "MDM HUB Monitoring", + "pageID": "164470106", + "pageLink": "/display/GMDM/MDM+HUB+Monitoring", + "content": "" + }, + { + "title": "AKHQ", + "pageID": "164470020", + "pageLink": "/display/GMDM/AKHQ", + "content": "

AKHQ (https://github.com/tchiotludo/akhq) is a tool for browsing, changing and monitoring Kafka instances.


https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com/

https://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/

https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/

https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/

https://akhq-apac-prod-gbl-mdm-hub.COMPANY.com/

" + }, + { + "title": "Grafana & Kibana", + "pageID": "228933027", + "pageLink": "/pages/viewpage.action?pageId=228933027", + "content": "

KIBANA

US PROD https://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibana

User: kibana_dashboard_view


US NONPROD https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana

User: kibana_dashboard_view

=====

GBL PROD https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com

GBL NONPROD https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com

=====

EMEA PROD https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com

EMEA NONPROD https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com

=====

GBLUS PROD https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com

GBLUS NONPROD https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com

=====

AMER PROD https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com

AMER NONPROD https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com

=====

APAC PROD https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com

APAC NONPROD https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com


GRAFANA

https://grafana-mdm-monitoring.COMPANY.com


KeePass

Download this file: Kibana-k8s.kdbx

The KeePass password is sent in a separate email to improve the security of credential delivery.

To get access, download KeePass version 2.50 (https://keepass.info/download.html) and log in with the password you received.

After you open the file you will see a screen like this:

\"\"

Then click the entry you are interested in, and a window like this appears:

\"\"

Here you have the user name and the proper link; clicking the three dots (the red square in the screenshot) reveals the password.

" + }, + { + "title": "Grafana Dashboard Overview", + "pageID": "164470208", + "pageLink": "/display/GMDM/Grafana+Dashboard+Overview", + "content": "

MDM HUB's Grafana is deployed on the MONITORING host and is available under the following URL:

https://grafana-mdm-monitoring.COMPANY.com


All the dashboards are built using Prometheus's metrics.

" + }, + { + "title": "Alerts Monitoring PROD&NON_PROD", + "pageID": "163917772", + "pageLink": "/pages/viewpage.action?pageId=163917772", + "content": "

PROD: https://mdm-monitoring.COMPANY.com/grafana/d/5h4gLmemz/alerts-monitoring-prod

NON PROD: https://mdm-monitoring.COMPANY.com/grafana/d/COVgYieiz/alerts-monitoring-non_prod


\"\"


The dashboard contains firing alerts and the statuses of the last Airflow DAG runs for GBL (left side) and US FLEX (right side):

a., e. number of alerts firing

b., f. turn red when one or more DAG jobs have failed

c., g. alerts currently firing

d., h. table containing all the DAGs and their run count for each of the statuses

" + }, + { + "title": "AWS SQS", + "pageID": "163917788", + "pageLink": "/display/GMDM/AWS+SQS", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/CI4RLieik/aws-sqs


The dashboard describes the SQS queue used in Reltio→MDM HUB communication.


\"\"


The dashboard is divided into the following sections:

a. Approximate number of messages - how many messages are currently waiting in the queue

b. Approximate number of messages delayed - how many delayed messages are waiting to become available in the queue

c. Approximate number of messages invisible - how many in-flight messages have been received by a consumer but not yet deleted or timed out

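These three gauges correspond to the standard SQS queue attributes. As a sketch, summarizing them from a `GetQueueAttributes`-style response might look like this (the attribute names are the real SQS ones; the sample values are fabricated):

```python
def summarize_sqs_attributes(attrs):
    """Summarize the three dashboard gauges from an SQS attributes dict
    (as returned by GetQueueAttributes, where values are strings)."""
    return {
        "waiting": int(attrs["ApproximateNumberOfMessages"]),
        "delayed": int(attrs["ApproximateNumberOfMessagesDelayed"]),
        "in_flight": int(attrs["ApproximateNumberOfMessagesNotVisible"]),
    }

# Fabricated sample response for illustration only.
sample = {
    "ApproximateNumberOfMessages": "42",
    "ApproximateNumberOfMessagesDelayed": "3",
    "ApproximateNumberOfMessagesNotVisible": "7",
}
summary = summarize_sqs_attributes(sample)
```

In practice the attributes dict would come from the SQS API (e.g. boto3's `get_queue_attributes`) rather than being hard-coded.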
" + }, + { + "title": "Docker Monitoring", + "pageID": "163917797", + "pageLink": "/display/GMDM/Docker+Monitoring", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring


This dashboard describes the Docker containers running on hosts in each environment. Switch the currently viewed environment/host using the variables at the top of the dashboard ("env", "host").


\"\"


The dashboard is divided into the following sections:

a. Running containers - how many containers are currently running on this host

b. Total Memory Usage

c. Total CPU Usage

d. CPU Usage - over time CPU use per container

e. Memory Usage - over time Memory use per container

f. Network Rx - received bytes per container over time

g. Network Tx - transmitted bytes per container over time

" + }, + { + "title": "Host Statistics", + "pageID": "163917801", + "pageLink": "/display/GMDM/Host+Statistics", + "content": "
\n
\n
\n
\n

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics

Dashboard template source: https://grafana.com/grafana/dashboards/1860


This dashboard describes various statistics related to host resource usage. It uses metrics from the node_exporter. You can change the currently viewed environment and host using the variables at the top of the dashboard.


\n
\n
\n
\n
\n
\n

Basic CPU / Mem / Disk Gauge

\"\"


a. CPU Busy

b. Used RAM Memory

c. Used SWAP - hard disk memory used for swapping

d. Used Root FS

e. CPU System Load (1m avg)

f. CPU System Load (5m avg)


\n
\n
\n
\n
\n
\n

Basic CPU / Mem / Disk Info

\"\"


a. CPU Cores

b. Total RAM

c. Total SWAP

d. Total RootFS

e. System Load (1m avg)

f. Uptime - time since last restart


\n
\n
\n
\n
\n
\n

Basic CPU / Mem Graph

\"\"

a. CPU Basic - CPU state %

b. Memory Basic - memory (SWAP + RAM) use


\n
\n
\n
\n
\n
\n

Basic Net / Disk Info

\"\"

a. Network Traffic Basic - network traffic in bytes per interface

b. Disk Space Used Basic - disk usage per mount


\n
\n
\n
\n
\n
\n

CPU Memory Net Disk

\"\"
a. CPU - percentage use per status/operation

b. Memory Stack - use per status/operation

c. Network Traffic - detailed network traffic in bytes per interface. Negative values correspond to transmitted bytes, positive to received.

d. Disk Space Used - disk usage per mount

\"\"

e. Disk IOps - disk operations per partition. Negative values correspond to write operations, positive - read operations.

f. I/O Usage Read / Write - bytes read(positive)/written(negative) per partition

g. I/O Usage Times - time of I/O operations in seconds per partition


\n
\n
\n
\n
\n
\n

Etc.

As the dashboard template is a publicly available project, the panels/graphs are sufficiently described and do not require further explanation.

\n
\n
\n
" + }, + { + "title": "HUB Batch Performance", + "pageID": "163917855", + "pageLink": "/display/GMDM/HUB+Batch+Performance", + "content": "
\n\n
\n
\n
\n

\"\"

a. Batch loading rate

b. Batch loading latency

c. Batch sending rate

d. Batch sending latency

e. Batch processing rate - batch processing in ops/s

f. Batch processing latency - batch processing time in seconds

\"\"

g. Batch loading max gauge - max loading time in seconds

h. Batch sending max gauge - max sending time in seconds

i. Batch processing max gauge - max processing in seconds

\n
\n
\n
" + }, + { + "title": "HUB Overview Dashboard", + "pageID": "163917867", + "pageLink": "/display/GMDM/HUB+Overview+Dashboard", + "content": "
\n
\n
\n
\n

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/OfVgLm6ik/hub-overview

This dashboard contains information about Kafka topics/consumer groups in HUB - downstream from Reltio.


\n
\n
\n
\n
\n
\n

\"\"

a. Lag by Consumer Group - lag on each INBOUND consumer group

b. Message consume per minute - messages consumed by each INBOUND consumer group

c. Message in per minute - inbound messages count by each INBOUND topic

d. Lag by Consumer Group - lag on each OUTBOUND consumer group

e. Message consume per minute - messages consumed by each OUTBOUND consumer group

f. Message in per minute - inbound messages count by each OUTBOUND topic

g. Lag by Consumer Group - lag on each INTERNAL BATCH consumer group

h. Message consume per minute - messages consumed by each INTERNAL BATCH consumer group

i. Message in per minute - inbound messages count by each INTERNAL BATCH topic

\n
\n
\n
" + }, + { + "title": "HUB Performance", + "pageID": "163917830", + "pageLink": "/display/GMDM/HUB+Performance", + "content": "
\n\n
\n
\n
\n

API Performance

\"\"

a. Read Rate - API Read operations in 5/10/15min rate

b. Read Latency - API Read operations latency in seconds for 50/75/99th percentile of requests. Consists of Reltio response time, processing time and total time

c. Write Rate - API Write operations in 5/10/15min rate

d. Write Latency - API Write operations latency in seconds for 50/75/99th percentile of requests per each API operation


\n
\n
\n
\n
\n
\n

Publishing Performance

\"\"

a. Event Preprocessing Total Rate - Publisher's preprocessed events 5/10/15min rate divided for entity/relation events

b. Event Preprocessing Total Latency - preprocessing time in seconds for 50/75/99th percentile of events


\n
\n
\n
\n
\n
\n

Subscribing Performance

\"\"

a. MDM Events Subscribing Rate - Subscriber's events rate

b. MDM Events Subscribing Latency - Subscriber's event processing (passing downstream) latency

\n
\n
\n
" + }, + { + "title": "JMX Overview", + "pageID": "163917876", + "pageLink": "/display/GMDM/JMX+Overview", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview

This dashboard organizes and displays data extracted from each component by a JMX exporter - related to this component's resource usage. You can switch currently viewed environment/component/node using variables on the top of the dashboard.


\"\"

a. Memory

b. Total RAM

c. Used SWAP

d. Total SWAP

e. CPU System Load(1m avg)

f. CPU System Load(5m avg)

g. CPU Cores

h. CPU Usage

i. Memory Heap/NonHeap

j. Memory Pool Used

k. Threads used

l. Class loading

m. Open File Descriptors

n. GC time / 1 min. rate - Garbage Collector time rate/min

o. GC count - Garbage Collector operations count

" + }, + { + "title": "Kafka Overview", + "pageID": "163917904", + "pageLink": "/display/GMDM/Kafka+Overview", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/YNIRYmeik/kafka-overview

This dashboard describes Kafka's per node resource usage.


\"\"

a. CPU Usage

b. JVM Memory Used

c. Time spent in GC

d. Messages in Per Topic

e. Bytes in Per Topic

f. Bytes Out Per Topic

" + }, + { + "title": "Kafka Overview - Total", + "pageID": "163917913", + "pageLink": "/display/GMDM/Kafka+Overview+-+Total", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/W6OysZ5Zz/kafka-overview-total

This dashboard describes Kafka's total (all node summary) resource usage per environment.


\"\"

a. CPU Usage

b. JVM Memory Used

c. Time spent in GC

d. Messages rate

e. Bytes in Rate

f. Bytes Out Rate

" + }, + { + "title": "Kafka Topics Overview", + "pageID": "163917920", + "pageLink": "/display/GMDM/Kafka+Topics+Overview", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview

This dashboard describes Kafka topics and consumer groups in each environment.


\"\"

a. Topics purge ETA in hours - approximate time it should take for each consumer group to process all the events on their topic

b. Lag by Consumer Group

c. Message in per minute - per topic

d. Message consume per minute - per consumer group

e. Message in per second - per topic

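The purge ETA panel is essentially the current lag divided by the observed consume rate. A hedged sketch of that calculation (variable names are illustrative, not the dashboard's actual query):

```python
def purge_eta_hours(consumer_lag, consumed_per_minute):
    """Approximate hours needed for a consumer group to drain its topic:
    current lag divided by the observed consume rate."""
    if consumed_per_minute <= 0:
        return float("inf")  # no progress - the ETA is unbounded
    return consumer_lag / (consumed_per_minute * 60)

# 7200 pending messages at 60 msg/min -> 2.0 hours
eta = purge_eta_hours(consumer_lag=7200, consumed_per_minute=60)
```

Note that the estimate assumes no new messages arrive while the group catches up, which is why the panel is labelled "approximate".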
" + }, + { + "title": "Kong Dashboard", + "pageID": "163917927", + "pageLink": "/display/GMDM/Kong+Dashboard", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong

This dashboard describes the Kong component statistics.


\"\"

a. Total requests per second

b. DB reachability

c. Requests per service

d. Requests by HTTP status code

e. Total Bandwidth

\"\"

f. Egress per service (All) - traffic exiting the MDM network in bytes

g. Ingress per service (All) - traffic entering the MDM network in bytes

h. Kong Proxy Latency across all services - divided on 90/95/99 percentile

i. Kong Proxy Latency per service (All) - divided on 90/95/99 percentile

j. Request Time across all services - divided on 90/95/99 percentile

k. Request Time per service (All) - divided on 90/95/99 percentile

l. Upstream Time across all services - divided on 90/95/99 percentile

m. Upstream Time per service (All) - divided on 90/95/99 percentile

\"\"

o. Nginx connection state

p. Total Connections

q. Handled Connections

r. Accepted Connections

" + }, + { + "title": "MongoDB", + "pageID": "163917945", + "pageLink": "/display/GMDM/MongoDB", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb


\"\"

a. Query Operations

b. Document Operations

c. Document Query Executor

d. Member Health

e. Member State

f. Replica Query Operations

g. Uptime

h. Available Connections

i. Open Connections

j. Oplog Size

k. Memory

l. Network I/O

\"\"

m. Oplog Lag

n. Disk I/O Utilization

o. Disk Reads Completed

p. Disk Writes Completed

" + }, + { + "title": "Snowflake Tasks", + "pageID": "163917954", + "pageLink": "/display/GMDM/Snowflake+Tasks", + "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/358IxM_Mz/snowflake-tasks

This dashboard describes tasks running on each Snowflake instance.

Please keep in mind that the metrics supporting this dashboard are scraped rarely (every 8h on nprod, every 2h on prod), so check the Time since last scrape gauge when reviewing the results.


\"\"

a. Time since last scrape - time since the metrics were last scraped - it marks dashboard freshness

b. Last Task Runs - table contains:

c. Processing time - visualizes how the processing time of each task was changing over time

" + }, + { + "title": "Kibana Dashboard Overview", + "pageID": "164469839", + "pageLink": "/display/GMDM/Kibana+Dashboard+Overview", + "content": "" + }, + { + "title": "API Calls Dashboard", + "pageID": "164469837", + "pageLink": "/display/GMDM/API+Calls+Dashboard", + "content": "

The dashboard contains summary of MDM Gateway API calls in the chosen time range.

Use it to:


\"\"


The dashboard is divided into the following sections:

a. Total requests count - how many requests have been logged in this time range (or passed the filter if that's the case)

b. Controls - allows user to filter requests based on username and operation

c. Requests by operation - how many requests have been sent per each operation

d. Average response time - how long the response time was on average per each action

e. Request per client - how many requests have been sent per each client

f. Response status - how many requests have resulted with each status

g. Top 10 processing times - summary of 10 requests that have been processed the longest in this time range. Contains transaction ID, related entity URI, operation type and duration in ms.

\"\"

h. Logs - summary of all the logged requests

" + }, + { + "title": "Batch Loads Dashboard", + "pageID": "164469855", + "pageLink": "/display/GMDM/Batch+Loads+Dashboard", + "content": "

The dashboard contains information about files processed by the Batch Channel component.

Use this dashboard to:


\"\"


The dashboard is divided into the following sections:

a. File by type - summary of how many files of each type were delivered in this time range.

b. File load status count - visualisation of how many entities were extracted from each file type and what was the result of their processing.

c. File load count - visualisation of loaded files in this time range. Use it to verify that the files have been delivered on schedule.

d. File load summary - summary of the processing of each loaded file. 

e. Response status load summary - summary of processing result for each file type.

" + }, + { + "title": "HL DCR Dashboard", + "pageID": "164469753", + "pageLink": "/display/GMDM/HL+DCR+Dashboard", + "content": "

This dashboard contains information related to the HL DCR flow (DCR Service).

Use it to:


\"\"


The dashboard is divided into the following sections:

a. DCR Status - summary of how many DCRs have each of the statuses

b. Reltio DCR Stats - summary of how many DCRs that have been processed and sent to Reltio have each of the statuses

c. DCRRequestProcessing report - list of DCR reports generated in this time range


\"\"


d. DCR Current state - list of DCRs and their current statuses

" + }, + { + "title": "HUB Events Dashboard", + "pageID": "164469849", + "pageLink": "/display/GMDM/HUB+Events+Dashboard", + "content": "

The dashboard contains information about the Publisher component - events sent to clients or internal components (e.g. the Callback Service).

Use it to:


\"\"


The dashboard is divided into the following sections:

a. Count - how many events have been processed by the Publisher in this time range

b. Event count - visualisation of how many events have been processed over time

c. Simple events in time - visualisation of how many simple events have been processed (published) over time per each outbound topic

d. Skipped events in time - visualisation of how many events have been skipped (filtered) for each reason over time


\"\"


e. Full events in time - visualisation of how many full events have been published over time per each topic

f. Processing time - visualisation of how long the processing of entities/relations events took

g. Events by country - summary of how many events were related to each country

h. Event types - summary of how many events were of each type


\"\"


i. Full events by Topics - visualisation of how many full events of each type were published on each of the topics

j. Simple events by Topics - visualisation of how many simple events of each type were published on each of the topics

k. Publisher Logs - list containing all the useful information extracted from the Publisher logs for each event. Use it to track issues related to Publisher's event processing.

" + }, + { + "title": "HUB Store Dashboard", + "pageID": "164469853", + "pageLink": "/display/GMDM/HUB+Store+Dashboard", + "content": "

Summary of all entities in MDM in this environment. Contains summary information about entity counts, countries and sources. 


\"\"


The dashboard is divided into the following sections:

a. Entities count - how many entities are currently in MDM

b. Entities modification count - how many entity modifications (create/update/delete) were there over time

c. Status - summary of how many entities have each of the statuses

d. Type - summary of how many entities are HCO (Health Care Organization) or HCP (Health Care Professional)

e. MDM - summary of how many MDM entities are in Reltio/Nucleus

f. Entities country - visualisation of country to entity count

g. Entities source - visualisation of source to entity count


\"\"


h. Entities by country source type - visualisation of how many entities there are from each country with each source

i. World Map - visualisation of how many entities there are from each country


\"\"


j. Source/Country Heat Map - another visualisation of Country-Source distribution

" + }, + { + "title": "MDM Events Dashboard", + "pageID": "164469851", + "pageLink": "/display/GMDM/MDM+Events+Dashboard", + "content": "

This dashboard contains information extracted from the Subscriber component.

Use it to:


\"\"


The dashboard is divided into the following sections:

a. Total events count - how many events have been received and published to an internal topic in this time range

b. Event types - visualisation of how many events processed were of each type

c. Event count - visualisation of how many events were processed over time

d. Event destinations - visualisation of how many events have been passed to each of internal topics over time

e. Average consume time - visualisation of how long it took to process/pass received events over time

f. Subscriber Logs - list containing all the useful information extracted from the Subscriber logs. Use it to track potential issues

" + }, + { + "title": "Profile Updates Dashboard", + "pageID": "164469751", + "pageLink": "/display/GMDM/Profile+Updates+Dashboard", + "content": "

This dashboard contains information about HCO/HCP profile updates via MDM Gateway.

Use it to:

Note that the Gateway is used not only by external vendors, but also by HUB's own components (Callback Service).


\"\"


The dashboard is divided into the following sections:

a. Count - how many profile updates have been logged in this time period

b. Updates by status - how many updates have each of the statuses

c. Updates count - visualisation of how many updates were received by the Gateway over time

d. Updates by country source status - visualisation of how many updates were there for each country, from each source and with each status


\"\"


e. Updates by source - summary of how many profile updates were there from each source

f. Updates by country source status - another visualisation of how many updates were there for each country, source, status

g. World Map - visualisation of how many updates were there on profiles from each of the countries


\"\"


h. Gateway Logs - list containing all the useful information extracted from the Gateway components' logs. Use it to track issues related to the MDM Gateway

" + }, + { + "title": "Reconciliation metrics Dashboard", + "pageID": "310964632", + "pageLink": "/display/GMDM/Reconciliation+metrics+Dashboard", + "content": "

The Reconciliation Metrics Dashboard shows reasons why the MDM object (entity or relation) was reconciled.

Use it to:

Currently, the dashboard can show the following reasons:


\"\"

 The dashboard consists of a few diagrams:

  1. {ENV NAME} Reconciliation reasons - shows the most frequent reasons for reconciliation,
  2. Number by country - total number of reconciliation reasons broken down by country,
  3. Number by types - total number of reconciliation reasons grouped by MDM object type,
  4. Reason list - reconciliation reasons with the number of their occurrences,
  5. {ENV NAME} Reconciliation metrics - detail view that shows data generated by the Reconciliation Metrics flow, with detailed information about what exactly changed on a specific MDM object.
" + }, + { + "title": "Prometheus Alerts", + "pageID": "164470107", + "pageLink": "/display/GMDM/Prometheus+Alerts", + "content": "

Dashboards

There are two dashboards available for a problems overview: 

Karma

Grafana - Alerts Monitoring Dashboard

Alerts

ENV | Name | Alert | Cause (Expression) | Time | Severity | Action to be taken
ALL | MDM | high_load | > 30 load1 | 30m | warning | Detect why load is increasing. Decrease the number of threads on components or turn off some of them.
ALL | MDM | high_load | > 30 load1 | 2h | critical | Detect why load is increasing. Decrease the number of threads on components or turn off some of them.
ALL | MDM | memory_usage | > 90% used | 1h | critical | Detect the component which is causing high memory usage and restart it.
ALL | MDM | disk_usage | < 10% free | 2m | high | Remove or archive old component logs.
ALL | MDM | disk_usage | < 5% free | 2m | critical | Remove or archive old component logs.
ALL | MDM | kong_processor_usage | > 120% CPU used by container | 10m | high | Check the Kong container.
ALL | MDM | cpu_usage | > 90% CPU used | 1h | critical | Detect the cause of high CPU use and take appropriate measures.
ALL | MDM | snowflake_task_not_successful_nprod | Last Snowflake task run has state other than "SUCCEEDED" | 1m | high | Investigate whether the task failed or was skipped, and what caused it. The metric value returned by the alert corresponds to the task state: 0 - FAILED, 1 - SUCCEEDED, 2 - SCHEDULED, 3 - SKIPPED.
ALL | MDM | snowflake_task_not_successful_prod | Last Snowflake task run has state other than "SUCCEEDED" | 1m | high | Investigate whether the task failed or was skipped, and what caused it. The metric value returned by the alert corresponds to the task state: 0 - FAILED, 1 - SUCCEEDED, 2 - SCHEDULED, 3 - SKIPPED.
ALL | MDM | snowflake_task_not_started_24h | Snowflake task has not started in the last 24h (+ 8h scrape time) | 1m | high | Investigate why the task was not scheduled/did not start.
ALL | MDM | reltio_response_time | Reltio response time to entities/get requests is >= 3 sec for 99th percentile | 20m | high | Notify the Reltio Team.
NON PROD | MDM | service_down | up{env!~".*_prod"} == 0 | 20m | warning | Detect the not working component and start it.
NON PROD | MDM | kafka_streams_client_state | kafka streams client state != 2 | 1m | high | Check and restart the Callback Service.
NON PROD | Kong | kong_database_down | Kong DB unreachable | 20m | warning | Check the Kong DB component.
NON PROD | Kong | kong_http_500_status_rate | HTTP 500 > 10% | 5m | warning | Check Gateway components' logs.
NON PROD | Kong | kong_http_502_status_rate | HTTP 502 > 10% | 5m | warning | Check Kong's port availability.
NON PROD | Kong | kong_http_503_status_rate | HTTP 503 > 10% | 5m | warning | Check the Kong component.
NON PROD | Kong | kong_http_504_status_rate | HTTP 504 > 10% | 5m | warning | Check Reltio response rates. Check Gateway components for issues.
NON PROD | Kong | kong_http_401_status_rate | HTTP 401 > 30% | 20m | warning | Check Kong logs. Notify the authorities in case of suspected break-in attempts.
GBL NON PROD | Kafka | internal_reltio_events_lag_dev | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_relations_events_lag_dev | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_events_lag_stage | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_relations_events_lag_stage | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_events_lag_qa | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_relations_events_lag_qa | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | kafka_jvm_heap_memory_increasing | > 1000MB memory use predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
GBL NON PROD | Kafka | fluentd_dev_kafka_consumer_group_members | 0 EFK consumer group members | 30m | high | Check Fluentd logs. Restart Fluentd.
GBLUS NON PROD | Kafka | internal_reltio_events_lag_gblus_dev | > 500 000 | 40m | info | Check why lag is increasing. Restart the Event Publisher.
GBLUS NON PROD | Kafka | internal_reltio_events_lag_gblus_qa | > 500 000 | 40m | info | Check why lag is increasing. Restart the Event Publisher.
GBLUS NON PROD | Kafka | internal_reltio_events_lag_gblus_stage | > 500 000 | 40m | info | Check why lag is increasing. Restart the Event Publisher.
GBLUS NON PROD | Kafka | kafka_jvm_heap_memory_increasing | > 3100MB memory use predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
GBLUS NON PROD | Kafka | fluentd_gblus_dev_kafka_consumer_group_members | 0 EFK consumer group members | 30m | high | Check Fluentd logs. Restart Fluentd.
GBL PROD | MDM | service_down | count(up{env=~"gbl_prod"} == 0) by (env,component) == 1 | 5m | high | Detect the not working component and start it.
GBL PROD | MDM | service_down | count(up{env=~"gbl_prod"} == 0) by (env,component) > 1 | 5m | critical | Detect the not working component and start it.
GBL PROD | MDM | service_down_kafka_connect | 0 Kafka Connect Exporters up in the environment | 5m | critical | Check and start the Kafka Connect Exporter.
GBL PROD | MDM | service_down | One or more Kafka Connect instances down | 5m | critical | Check and start the Kafka Connect.
GBL PROD | MDM | dcr_stuck_on_prepared_status | DCR has been PREPARED for 1h | 1h | high | DCR has not been processed downstream. Notify IQVIA.
GBL PROD | MDM | dcr_processing_failure | DCR processing failed in the last 24 hours |  |  | Check DCR Service, Wrapper logs.
GBL PROD | Cron Jobs | mongo_automated_script_not_started | Mongo Cron Job has not started | 1h | high | Check the MongoDB.
GBL PROD | Kong | kong_database_down | Kong DB unreachable | 20m | warning | Check the Kong DB component.
GBL PROD | Kong | kong_http_500_status_rate | HTTP 500 > 10% | 5m | warning | Check Gateway components' logs.
GBL PROD | Kong | kong_http_502_status_rate | HTTP 502 > 10% | 5m | warning | Check Kong's port availability.
GBL PROD | Kong | kong_http_503_status_rate | HTTP 503 > 10% | 5m | warning | Check the Kong component.
GBL PROD | Kong | kong_http_504_status_rate | HTTP 504 > 10% | 5m | warning | Check Reltio response rates. Check Gateway components for issues.
GBL PROD | Kong | kong_http_401_status_rate | HTTP 401 > 30% | 10m | warning | Check Kong logs. Notify the authorities in case of suspected break-in attempts.
GBL PROD | Kafka | internal_reltio_events_lag_prod | > 1 000 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL PROD | Kafka | internal_reltio_relations_events_lag_prod | > 1 000 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL PROD | Kafka | prod-out-full-snowflake-all_no_consumers | prod-out-full-snowflake-all has lag and has not been consumed for 2 hours | 1m | high | Check and restart the Kafka Connect Snowflake component.
GBL PROD | Kafka | internal_gw_gcp_events_deg_lag_prod | > 50 000 | 30m | info | Check the Map Channel component.
GBL PROD | Kafka | internal_gw_gcp_events_raw_lag_prod | > 50 000 | 30m | info | Check the Map Channel component.
GBL PROD | Kafka | internal_gw_grv_events_deg_lag_prod | > 50 000 | 30m | info | Check the Map Channel component.
GBL PROD | Kafka | internal_gw_grv_events_deg_lag_prod | > 50 000 | 30m | info | Check the Map Channel component.
GBL PROD | Kafka | forwarder_mapp_prod_kafka_consumer_group_members | forwarder_mapp_prod consumer group has 0 members | 30m | critical | Check the MAPP Events Forwarder.
GBL PROD | Kafka | igate_prod_kafka_consumer_group_members | igate_prod consumer group members have decreased (still > 20) | 15m | info | Check the Gateway components.
GBL PROD | Kafka | igate_prod_kafka_consumer_group_members | igate_prod consumer group members have decreased (still > 10) | 15m | high | Check the Gateway components.
GBL PROD | Kafka | igate_prod_kafka_consumer_group_members | igate_prod consumer group has 0 members | 15m | critical | Check the Gateway components.
GBL PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group members have decreased (still > 100) | 15m | info | Check the Hub components.
GBL PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group members have decreased (still > 50) | 15m | info | Check the Hub components.
GBL PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group has 0 members | 15m | info | Check the Hub components.
GBL PROD | Kafka | kafka_jvm_heap_memory_increasing | > 2100MB memory use on node 1 predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
GBL PROD | Kafka | kafka_jvm_heap_memory_increasing | > 2000MB memory use on nodes 2&3 predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
GBL PROD | Kafka | fluentd_prod_kafka_consumer_group_members | Fluentd consumer group has 0 members | 30m | high | Check and restart Fluentd.
US PROD | MDM | service_down | Batch Channel is not running | 5m | critical | Start the Batch Channel.
US PROD | MDM | service_down | 1 component is not running | 5m | high | Detect the not working component and start it.
US PROD | MDM | service_down | >1 component is not running | 5m | critical | Detect the not working components and start them.
US PROD | Cron Jobs | archiver_not_started | Archiver has not started in 24 hours | 1h | high | Check the Archiver.
US PROD | Kafka | internal_reltio_events_lag_us_prod | > 500 000 | 5m | high | Check why lag is increasing. Restart the Event Publisher.
US PROD | Kafka | internal_reltio_events_lag_us_prod | > 1 000 000 | 5m | critical | Check why lag is increasing. Restart the Event Publisher.
US PROD | Kafka | hin_kafka_consumer_lag_us_prod | > 1000 | 15m | critical | Check why lag is increasing. Restart the Batch Channel.
US PROD | Kafka | flex_kafka_consumer_lag_us_prod | > 1000 | 15m | critical | Check why lag is increasing. Restart the Batch Channel.
US PROD | Kafka | sap_kafka_consumer_lag_us_prod | > 1000 | 15m | critical | Check why lag is increasing. Restart the Batch Channel.
US PROD | Kafka | dea_kafka_consumer_lag_us_prod | > 1000 | 15m | critical | Check why lag is increasing. Restart the Batch Channel.
US PROD | Kafka | igate_prod_hco_create_kafka_consumer_group_members | >= 30 < 40 and lag > 1000 | 15m | info | Check why the number of consumers is decreasing. Restart the Batch Channel.
US PROD | Kafka | igate_prod_hco_create_kafka_consumer_group_members | >= 10 < 30 and lag > 1000 | 15m | high | Check why the number of consumers is decreasing. Restart the Batch Channel.
US PROD | Kafka | igate_prod_hco_create_kafka_consumer_group_members | == 0 and lag > 1000 | 15m | critical | Check why the number of consumers is decreasing. Restart the Batch Channel.
US PROD | Kafka | hub_prod_kafka_consumer_group_members | >= 30 < 45 and lag > 1000 | 15m | info | Check why the number of consumers is decreasing. Restart the Event Publisher.
US PROD | Kafka | hub_prod_kafka_consumer_group_members | >= 10 < 30 and lag > 1000 | 15m | high | Check why the number of consumers is decreasing. Restart the Event Publisher.
US PROD | Kafka | hub_prod_kafka_consumer_group_members | == 0 and lag > 1000 | 15m | critical | Check why the number of consumers is decreasing. Restart the Event Publisher.
US PROD | Kafka | fluentd_prod_kafka_consumer_group_members | EFK consumer group has 0 members | 30m | high | Check and restart Fluentd.
US PROD | Kafka | flex_prod_kafka_consumer_group_members | FLEX Kafka Connector has 0 consumers | 10m | critical | Notify the FLEX Team.
GBLUS PROD | MDM | service_down | count(up{env=~"gblus_prod"} == 0) by (env,component) == 1 | 5m | high | Detect the not working component and start it.
GBLUS PROD | MDM | service_down | count(up{env=~"gblus_prod"} == 0) by (env,component) > 1 | 5m | critical | Detect the not working component and start it.
GBLUS PROD | Kong | kong_database_down | Kong DB unreachable | 20m | warning | Check the Kong DB component.
GBLUS PROD | Kong | kong_http_500_status_rate | HTTP 500 > 10% | 5m | warning | Check Gateway components' logs.
GBLUS PROD | Kong | kong_http_502_status_rate | HTTP 502 > 10% | 5m | warning | Check Kong's port availability.
GBLUS PROD | Kong | kong_http_503_status_rate | HTTP 503 > 10% | 5m | warning | Check the Kong component.
GBLUS PROD | Kong | kong_http_504_status_rate | HTTP 504 > 10% | 5m | warning | Check Reltio response rates. Check Gateway components for issues.
GBLUS PROD | Kong | kong_http_401_status_rate | HTTP 401 > 30% | 10m | warning | Check Kong logs. Notify the authorities in case of suspected break-in attempts.
GBLUS PROD | Kafka | internal_reltio_events_lag_prod | > 1 000 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBLUS PROD | Kafka | igate_async_prod_kafka_consumer_group_members | igate_async_prod consumer group members have decreased (still > 20) | 15m | info | Check the Gateway components.
GBLUS PROD | Kafka | igate_async_prod_kafka_consumer_group_members | igate_async_prod consumer group members have decreased (still > 10) | 15m | high | Check the Gateway components.
GBLUS PROD | Kafka | igate_async_prod_kafka_consumer_group_members | igate_async_prod consumer group has 0 members | 15m | critical | Check the Gateway components.
GBLUS PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group members have decreased (still > 20) | 15m | info | Check the Hub components.
GBLUS PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group members have decreased (still > 10) | 15m | high | Check the Hub components.
GBLUS PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group has 0 members | 15m | critical | Check the Hub components.
GBLUS PROD | Kafka | batch_service_prod_kafka_consumer_group_members | batch_service_prod consumer group has 0 members | 15m | critical | Check the Batch Service component.
GBLUS PROD | Kafka | batch_service_prod_ack_kafka_consumer_group_members | batch_service_prod_ack consumer group has 0 members | 15m | critical | Check the Batch Service component.
GBLUS PROD | Kafka | fluentd_gblus_prod_kafka_consumer_group_members | EFK consumer group has 0 members | 30m | high | Check Fluentd. Restart if necessary.
GBLUS PROD | Kafka | kafka_jvm_heap_memory_increasing | > 3100MB memory use predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
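The table's columns map directly onto Prometheus alerting rule fields (expr, for, severity label). A minimal sketch of what the first high_load row could look like as a rule - the metric name node_load1 and the label/annotation values are assumptions; the real expressions live in the Prometheus configuration:

```yaml
# Illustrative only - not the deployed rule definition.
groups:
  - name: mdm
    rules:
      - alert: high_load
        expr: node_load1 > 30          # "Cause (Expression)" column
        for: 30m                       # "Time" column
        labels:
          severity: warning            # "Severity" column
        annotations:
          summary: "load1 above 30 for 30 minutes on {{ $labels.instance }}"
```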
" + }, + { + "title": "Security", + "pageID": "164470097", + "pageLink": "/display/GMDM/Security", + "content": "\n

The following security mechanisms are implemented in the solution:

\n\n" + }, + { + "title": "Authentication", + "pageID": "164470075", + "pageLink": "/display/GMDM/Authentication", + "content": "\n

API Authentication

\n

API authentication is provided by KONG. Two methods are supported:

\n\n\n\n

The OAuth2 method is recommended, especially for cloud services. The gateway uses the Client Credentials grant type of OAuth2. The method is supported by the KONG OAuth2 plugin. Client secrets are managed by Kong and stored in the Cassandra configuration database.
\nAPI key authentication is deprecated and its use should be avoided for new services. Keys are unique, randomly generated 32-character strings managed by the Kong Gateway – please see the Kong Gateway documentation for details.
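The Client Credentials flow described above can be sketched as follows - a minimal helper that builds the token request; the endpoint URL and credentials are placeholders, not the real gateway values:

```python
from urllib.parse import urlencode

def build_client_credentials_request(token_url, client_id, client_secret):
    """Build the POST body and headers for an OAuth2 Client Credentials
    token request (the grant type variant used by the gateway)."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return token_url, headers, body

# Hypothetical endpoint and credentials -- POST `body` to `url` with any
# HTTP client, then send the returned access_token as
# "Authorization: Bearer <token>" on subsequent API calls.
url, headers, body = build_client_credentials_request(
    "https://gateway.example.com/oauth2/token", "my-client", "my-secret")
```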

" + }, + { + "title": "Authorization", + "pageID": "164470078", + "pageLink": "/display/GMDM/Authorization", + "content": "\n

Rest APIs

\n

Access to exposed services is controlled with the following algorithm:

\n" + }, + { + "title": "KONG external OAuth2 plugin", + "pageID": "164470072", + "pageLink": "/display/GMDM/KONG+external+OAuth2+plugin", + "content": "\n

To integrate with the Ping Federate token validation process, an external KONG plugin was implemented. Source code and instructions for installing and configuring a local environment were published on GitHub.
\nCheck the https://github.com/COMPANY/mdm-gateway/tree/kong/mdm-external-oauth-plugin README file for more information.
\nThe role of the plugin:
\nValidate access tokens sent by developers using a third-party OAuth 2.0 Authorization Server (RFC 7662). The plugin's flow, and the request and response from Ping Federate, have to be compatible with the RFC 7662 specification. For more information about this specification, see https://tools.ietf.org/html/rfc7662 . The plugin assumes that the Consumer already has an access token that will be validated against a third-party OAuth 2.0 server – Ping Federate.
\nFlow of the plugin:

\n
  1. Client invokes a Gateway API providing a token generated from the PING API
  2. KONG plugin introspects this token:
      1. if the token is active, the plugin will fill the X-Consumer-Username header
      2. if the token is not active, access to the specific URI will be forbidden
\n\n\n
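The introspection decision above can be sketched as a small function - RFC 7662 guarantees an "active" boolean in the response and makes "username" optional; the dict shape is illustrative, not the plugin's actual Lua code:

```python
def handle_introspection_response(introspection):
    """Decide what the gateway should do with an RFC 7662 introspection
    response: "active" is the required boolean field, "username" is optional."""
    if introspection.get("active"):
        # Token valid: forward the request with the consumer header filled in.
        return {"allowed": True,
                "headers": {"X-Consumer-Username": introspection.get("username", "")}}
    # Token invalid or expired: access to the URI is forbidden.
    return {"allowed": False, "status": 403}

ok = handle_introspection_response({"active": True, "username": "client-app"})
denied = handle_introspection_response({"active": False})
```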


\nExample External Plugin configuration:
\n \"\"\n
\nTo define an mdm-external-oauth plugin, the following parameters have to be defined:

\n\n\n\n

KAFKA authentication

\n

Kafka access is protected using SASL framework. Clients are required to specify user and ●●●●●●●●●●● the configuration. Credentials are sent over TLS transport.
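A minimal sketch of the client-side configuration this implies - the property names are the standard Kafka client settings for SASL/PLAIN over TLS; the paths and credentials are placeholders:

```
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<user>" \
  password="<password>";
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=<truststore-password>
```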

" + }, + { + "title": "Transport", + "pageID": "164470076", + "pageLink": "/display/GMDM/Transport", + "content": "\n

Communication between the KONG API Gateway and external systems is secured by setting up an encrypted connection with the following specifications:

\n\n\n\n


" + }, + { + "title": "User management", + "pageID": "164470079", + "pageLink": "/display/GMDM/User+management", + "content": "\n

User accounts are managed by the respective components of the Gateway and Hub.

\n

API Users

\n

These are managed by the Kong Gateway and stored in the Cassandra database. There are two ways of adding a new user to the Kong configuration:

\n
    \n\t
  1. Using configuration repository and Ansible
  2. \n
\n\n\n

The Ansible tool, which is used to deploy MDM Integration Services, has a plugin that supports Kong user management. User configuration is kept in YAML configuration files (passwords encrypted using built-in AES-256 encryption). Adding a new user requires adding the following section to the appropriate configuration file:
\n \"\"

\n
    \n\t
  1. Directly, using Kong REST API
  2. \n
\n\n\n

This method requires access to the COMPANY VPN and to the machine that hosts the MDM Integration Services, since the REST endpoints are bound only to "localhost" and are not exposed to the outside world. The URL of the endpoint is:
\n \"\" It can be accessed via the cURL command-line tool. To list all the users that are currently defined, use the following command:
\n \"\"
\nTo create a new user:
\n \"\" To set an API Key for the user:
\n \"\" A new API key will be automatically generated by Kong and returned in response.
\nTo create OAuth2 credentials use the following call instead:
\n \"\" client_id and client_secret are login credentials, redirect_uri should point to HUB API endpoint. Please see Kong Gateway documentation for details.\n

\n

KAFKA users

\n

Kafka users are managed by brokers. The authentication method used is Java Authentication and Authorization Service (JAAS) with the PlainLogin module. User configuration is stored in the kafka_server_jaas.conf file, present on each broker. The file has the following structure:
\n \"\"
\nProperties "username" and "password" define the credentials used to secure inter-broker communication. Properties in the format "user_<username>" are the actual user definitions. So, adding a new user named "bob" would require adding the following property to the kafka_server_jaas.conf file:\n
\n \"\"\n
\nCAUTION! Since the JAAS configuration file is only read on Kafka broker startup, adding a new user requires a restart of all brokers. In a multi-broker environment this can be achieved by restarting one broker at a time, which should be transparent to end users, given Kafka's fault-tolerance capabilities. This limitation might be overcome in future versions by using an external user store or a custom login module instead of PlainLoginModule. The process of adding this entry and distributing the kafka_server_jaas.conf file is automated with Ansible: usernames and ●●●●●●●●●●●● kept in a YAML configuration file, encrypted using Ansible Vault (with AES encryption).
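As a sketch of the structure described above (all names and passwords below are placeholders), kafka_server_jaas.conf with the extra "bob" entry would look roughly like:

```
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="<inter-broker-password>"
  user_admin="<inter-broker-password>"
  user_bob="<bobs-password>";
};
```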

\n

MongoDB users

\n

MongoDB is used only internally, by Publishing Hub modules, and is not exposed to external users; therefore there is no need to create accounts for them. For operational purposes, some administration/technical accounts may be created using the standard Mongo command-line tools, as described in the MongoDB documentation.

" + }, + { + "title": "SOP HUB", + "pageID": "164470101", + "pageLink": "/display/GMDM/SOP+HUB", + "content": "


" + }, + { + "title": "Hub Configuration", + "pageID": "302705379", + "pageLink": "/display/GMDM/Hub+Configuration", + "content": "" + }, + { + "title": "APM:", + "pageID": "302703254", + "pageLink": "/pages/viewpage.action?pageId=302703254", + "content": "" + }, + { + "title": "Setup APM integration in Kibana", + "pageID": "302703256", + "pageLink": "/display/GMDM/Setup+APM+integration+in+Kibana", + "content": "
  1. To set up APM integration in Kibana you need to deploy the fleet server first. To do so, enable it in the mdm-hub-cluster-env repository (e.g. in emea/nprod/namespaces/emea-backend/values.yaml)
    \"\"
  2. After deploying it, open the Kibana UI and go to Fleet.
    \"\"
    Verify if fleet-server is properly configured:
    \"\"
  3. Go to Observability - APM
    \"\"
  4. Click Add the APM Integration
    \"\"
  5. Click Add Elastic APM
    \"\"
  6. Change host to 0.0.0.0:8200
    \"\"
    In section 2 choose Existing hosts and choose desired agent-policy(Fleet server on ECK policy)
    \"\"
    \"\"
    Save changes
    \"\"
  7. After configuring your service to connect to apm-server it should be visible in Observability.APM
    \"\"


" + }, + { + "title": "Consul:", + "pageID": "302705585", + "pageLink": "/pages/viewpage.action?pageId=302705585", + "content": "" + }, + { + "title": "Updating Dictionary", + "pageID": "164470212", + "pageLink": "/display/GMDM/Updating+Dictionary", + "content": "

To update a dictionary from an Excel file:

  1. Convert the Excel file to CSV format
  2. Change the EOL to Unix
  3. Put the file in the appropriate path in the mdm-config-registry repository in config-ext
  4. Check Updating ETL Dictionaries in Consul page for appropriate Consul UI URL (You need to have a security token set in ACL section)
" + }, + { + "title": "Updating ETL Dictionaries in Consul", + "pageID": "164470102", + "pageLink": "/display/GMDM/Updating+ETL+Dictionaries+in+Consul", + "content": "

The configuration repository has dedicated directories that store dictionaries used by the ETL engine when loading data with the batch service. The content of these directories is published in Consul. The table shows the directory name and the Consul key under which the data is posted:

To update Consul values you have to:

  1. Make changes in the desired directory and push them to the master git branch,
  2. git2consul will synchronize the git repo to Consul 

Please be advised that a proper SecretId token is required to access the key/value path you desire. This is especially important for the AMER/GBLUS directories. 
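A published dictionary can be verified through Consul's standard KV HTTP API, passing the ACL SecretId in the X-Consul-Token header - a minimal sketch; the host and key below are hypothetical:

```python
def build_consul_kv_request(base_url, key, secret_id):
    """Build a Consul KV read: GET /v1/kv/<key>, with the ACL SecretId
    passed in the X-Consul-Token header (values come back base64-encoded)."""
    return f"{base_url}/v1/kv/{key}", {"X-Consul-Token": secret_id}

url, headers = build_consul_kv_request(
    "http://consul.example.com:8500",        # hypothetical Consul host
    "config-ext/dictionaries/example.csv",   # hypothetical key synced by git2consul
    "<SecretId>")
```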

" + }, + { + "title": "Environment Setup:", + "pageID": "164470244", + "pageLink": "/pages/viewpage.action?pageId=164470244", + "content": "" + }, + { + "title": "Configuration (amer k8s)", + "pageID": "228917406", + "pageLink": "/pages/viewpage.action?pageId=228917406", + "content": "

Configuration steps:

  1. Configure mongo permissions for the users mdm_batch_service, mdmhub, and mdmgw. Add permissions to the database schema related to the new environment:

---
users:
  mdm_batch_service:
    mongo:
      databases:
        reltio_amer-dev:
          roles:
            - "readWrite"
        reltio_[tenant-env]:
          roles:
            - "readWrite"
2. Add a directory with environment configuration files in amer/nprod/namespaces/. You can simply copy the existing amer-dev configuration.

3. Change file [tenant-env]/values.yaml:

4. Change file [tenant-env]/kafka-topics.yaml by changing the prefix of topic names.

5. Add a Kafka Connect instance for the newly added environment - add the configuration section to the kafkaConnect property located in amer/nprod/namespaces/amer-backend/values.yaml
5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key.passphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key

6. Configure Consul (amer/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):

7. Modify components configuration:

\"\"

8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change fluentd.kafka.topics list.

9. Monitoring

a) Add additional service monitor to amer/nprod/namespaces/monitoring/service-monitors.yaml configuration file:

- namespace: [tenant-env]
  name: sm-[tenant-env]-services
  selector:
    matchLabels:
      prometheus: [tenant-env]-services
  endpoints:
    - port: prometheus
      interval: 30s
      scrapeTimeout: 30s
    - port: prometheus-fluent-bit
      path: "/api/v1/metrics/prometheus"
      interval: 30s
      scrapeTimeout: 30s

b) Add Snowflake database details to amer/nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:

jdbcExporters:
  amer-dev:
    db:
      url: "jdbc:snowflake://amerdev01.us-east-1.privatelink.snowflakecomputing.com/?db=COMM_AMER_MDM_DMART_DEV_DB&role=COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE&warehouse=COMM_MDM_DMART_WH"
      username: "[ USERNAME ]"

Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yaml

jdbcExporters:
  amer-dev:
    db:
      password: "[ ●●●●●●●●●●●


10. Run Jenkins job responsible for deploying backend services - to apply mongo and fluentd changes.

11. Connect to the MongoDB server and create schema reltio_[tenant-env].

11.1 Create collections and indexes in the newly added schema:
 Intellishell

db.createCollection("entityHistory") 
db.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});
db.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});
db.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});
db.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});
db.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});
db.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});
db.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});
db.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});
db.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});
db.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});
db.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});
db.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});

db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});


db.createCollection("entityRelations")
db.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});
db.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});
db.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});
db.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});
db.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});
db.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});
db.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});
db.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});
db.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   
db.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   
db.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   
db.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});
 
db.createCollection("LookupValues")
db.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});
db.LookupValues.createIndex({countries: 1},  {background: true, name:  "idx_countries"});
db.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});
db.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});
db.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});
db.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});

db.createCollection("ErrorLogs")
db.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});
db.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});
db.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});
db.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});

db.createCollection("batchEntityProcessStatus")
db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});
db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});
db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});
db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});

db.createCollection("batchInstance")

db.createCollection("relationCache")
db.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});

db.createCollection("DCRRequests")
db.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});
db.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});
db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});

db.createCollection("entityMatchesHistory")
db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});


db.createCollection("DCRRegistry")

db.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});

db.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});
db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});

db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});


db.createCollection("sequenceCounters")

db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) // NOTE: replace [sequence start number] with the region's start value from the table below

Region | Seq start number
emea   | 5000000000
amer   | 6000000000
apac   | 7000000000
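The placeholder substitution above can be sketched as a small shell helper; the function name and the `region` variable are illustrative, and the start values come from the table above.

```shell
# Illustrative helper: print the COMPANYAddressIDSeq start value for a region
# (values taken from the table above; the function name is an assumption).
seq_start_for_region() {
  case "$1" in
    emea) echo 5000000000 ;;
    amer) echo 6000000000 ;;
    apac) echo 7000000000 ;;
    *) echo "unknown region: $1" >&2; return 1 ;;
  esac
}

# Emit the insertOne statement with the placeholder already substituted:
region=amer
echo "db.sequenceCounters.insertOne({_id: \"COMPANYAddressIDSeq\", sequence: NumberLong($(seq_start_for_region "$region"))})"
```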

12. Run Jenkins job to deploy kafka resources and mdmhub components for the new environment.

13. Create paths on S3 bucket required by Snowflake and Airflow's DAGs.

14. Configure Kibana:

15. Configure basic Airflow DAGs (ansible directory):

16. Deploy DAGs (NOTE: check that your kubectl is configured to communicate with the cluster you want to change):

ansible-playbook install_mdmgw_airflow_services_k8s.yml -i inventory/[tenant-env]/inventory

17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. 


Verification points

Check Reltio's configuration - get reltio tenant configuration:

  1. Check if you are able to execute Reltio's operations using credentials of the service user,

  2. Check if streaming processing is enabled - streamingConfig.messaging.destinations.enabled = true, streamingConfig.streamingEnabled = true, streamingConfig.streamingAPIEnabled = true,

  3. Check if Cassandra export is configured - exportConfig.smartExport.secondaryDsEnabled = false.
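As a sketch, the tenant configuration can be fetched with a single GET and the three flags above checked in the response. The host, tenant id, and token placeholders are assumptions about your environment; the command is only echoed here, not executed.

```shell
# Sketch (placeholders are assumptions): compose the request that fetches the
# Reltio tenant configuration, then list the JSON paths checked above.
RELTIO_URL="https://{{ reltio_host }}/reltio/api/{{ tenant_id }}"
cmd="curl -s -H 'Authorization: Bearer {{ access_token }}' ${RELTIO_URL}/configuration"
echo "$cmd"
# In the response, verify:
#   .streamingConfig.messaging.destinations.enabled == true
#   .streamingConfig.streamingEnabled == true, .streamingConfig.streamingAPIEnabled == true
#   .exportConfig.smartExport.secondaryDsEnabled == false
```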


Check Kafka:

  1. Check if you are able to connect to the kafka server using a command-line client running from your local machine.
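A minimal sketch of that check, assuming a TLS listener: the property names are standard Kafka client settings, while the truststore path, password placeholder, broker address, and port are assumptions for your environment. The kafka-topics command is only echoed here.

```shell
# Sketch: minimal client config for the TLS listener (truststore location and
# password are placeholders).
cat > client-ssl.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=./kafka.truststore.jks
ssl.truststore.password={{ truststore_password }}
EOF

# Listing topics is enough to prove DNS, the TLS listener and authentication
# work end to end (broker address and port are placeholders):
echo "kafka-topics.sh --bootstrap-server kafka-{{ tenant-env }}-gbl-mdm-hub.COMPANY.com:9094 --command-config client-ssl.properties --list"
```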


Check Mongo:

  1. Users mdmgw, mdmhub and mdm_batch_service - permissions for the newly added database (readWrite),
  2. Indexes,

  3. Verify that the correct start value is set for the sequence COMPANYAddressIDSeq - collection sequenceCounters, _id = COMPANYAddressIDSeq.
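The index check can be sketched as one mongosh call per collection; the connection string, credentials, and the collection subset shown are placeholders/examples, and the commands are only echoed here.

```shell
# Sketch: emit one mongosh index listing per collection (connection string and
# credentials are placeholders; extend the list to cover all collections).
for coll in entityHistory entityRelations LookupValues ErrorLogs; do
  cmd="mongosh 'mongodb://mdmhub:{{ password }}@{{ mongo_host }}/reltio_{{ tenant-env }}' --eval 'db.${coll}.getIndexes()'"
  echo "$cmd"
done
```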


Check MDMHUB API:

  1. Check mdm-manager API with apikey authentication by executing one of the read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible when there is no HCP data in Reltio,

  2. Run the same operation using oAuth2 authentication - remember that the manager url is different,
  3. Check mdm-manager API with apikey authentication by executing write operation:

    curl --location --request POST '{{ manager_url }}/hcp' \\
    --header 'apikey: {{ api_key }}' \\
    --header 'Content-Type: application/json' \\
    --data-raw '{
      "hcp" : {
        "type" : "configuration/entityTypes/HCP",
        "attributes" : {
          "Country" : [ {
            "value" : "{{ country }}"
          } ],
          "FirstName" : [ {
            "value" : "Verification Test MDMHUB"
          } ],
          "LastName" : [ {
            "value" : "Verification Test MDMHUB"
          } ]
        },
        "crosswalks" : [ {
          "type" : "configuration/sources/{{ source }}",
          "value" : "verification_test_mdmhub"
        } ]
      }
    }'

    Replace all placeholders in the above request with the correct values for the configured environment. The response should return HTTP code 200 and the URI of the created object. After verification, delete the created object by running: curl --location --request DELETE '{{ manager_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'
  4. Run the same operations using oAuth2 authentication - remember that the mdm manager url is different,
  5. Verify api-router API with apikey authentication using the search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible when there is no HCP data in Reltio,

  6. Check api-router API with apikey authentication by executing write operation:

    curl --location --request POST '{{ api_router_url }}/hcp' \\
    --header 'apikey: {{ api_key }}' \\
    --header 'Content-Type: application/json' \\
    --data-raw '{
      "hcp" : {
        "type" : "configuration/entityTypes/HCP",
        "attributes" : {
          "Country" : [ {
            "value" : "{{ country }}"
          } ],
          "FirstName" : [ {
            "value" : "Verification Test MDMHUB"
          } ],
          "LastName" : [ {
            "value" : "Verification Test MDMHUB"
          } ]
        },
        "crosswalks" : [ {
          "type" : "configuration/sources/{{ source }}",
          "value" : "verification_test_mdmhub"
        } ]
      }
    }'

    Replace all placeholders in the above request with the correct values for the configured environment. The response should return HTTP code 200 and the URI of the created object. After verification, delete the created object by running: curl --location --request DELETE '{{ api_router_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'
  7. Run the same operations using oAuth2 authentication - remember that the api router url is different,
  8. Check batch service API with apikey authentication by executing the following operation: GET {{ batch_service_url }}/batchController/NA/instances/NA. The request should return HTTP code 403 and the body:

    {
        "code": "403",
        "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."
    }

    The request doesn't create any batch.

  9. Run the same operation using oAuth2 authentication - remember that the batch service url is different,
  10. Verify component logs: mdm-manager, api-router and batch-service. Focus on errors and Kafka-related records - rebalancing, authorization problems, topic existence warnings, etc.


MDMHUB streaming services:

  1. Check logs of the reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. Verify that there are no errors or Kafka warnings related to rebalancing, authorization problems, topic existence, etc.,

  2. Verify that the lookup refresh process is working properly - check the existence of the mongo collection LookupValues. It should have data,


Airflow:

  1. Check if DAGs are enabled and have a defined schedule,
  2. Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.

  3. Wait for them to finish and validate the results.
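The DAG runs above can be triggered from the standard Airflow CLI; the `dev_amer` env suffix below is a placeholder example, and the commands are only assembled and printed here.

```shell
# Sketch: trigger the four verification DAGs via "airflow dags trigger";
# the env suffix is a placeholder example.
env=dev_amer
cmds=""
for dag in export_merges_from_reltio_to_s3_full hub_reconciliation_v2 \
           lookup_values_export_to_s3 reconciliation_snowflake; do
  cmds="${cmds}airflow dags trigger ${dag}_${env}
"
done
printf '%s' "$cmds"
```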


Snowflake:

  1. Check snowflake connector logs,

  2. Check if the tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist in the LANDING schema and have data,

  3. Verify if mdm-hub-snowflake-dm package is deployed,
  4. What else?


Monitoring:

  1. Check grafana dashboards:
    1. HUB Performance,
    2. Kafka Topics Overview,
    3. Host Statistics,
    4. JMX Overview,
    5. Kong,
    6. MongoDB.
  2. Check Kibana index patterns:
    1. {{env}}-internal-batch-efk-transactions*,
    2. {{env}}-internal-gw-efk-transactions*,
    3. {{env}}-internal-publisher-efk-transactions*,
    4. {{env}}-internal-subscriber-efk-transactions*,
    5. {{env}}-mdmhub,
  3. Check Kibana dashboards:
    1. {{env}} API calls,
    2. {{env}} Batch Instances,
    3. {{env}} Batch loads,
    4. {{env}} Error Logs Overview,
    5. {{env}} Error Logs RDM,
    6. {{env}} HUB Store
    7. {{env}} HUB events,

    8. {{env}} MDM Events,
    9. {{env}} Profile Updates,
  4. Check alerts - How?



" + }, + { + "title": "Configuration (amer prod k8s)", + "pageID": "234691394", + "pageLink": "/pages/viewpage.action?pageId=234691394", + "content": "

Configuration steps:

  1. Copy the mdm-hub-cluster-env/amer/nprod directory into the mdm-hub-cluster-env/amer/prod directory.
  2. Replace ...
  3. Certificates
    1. Generate private-keys, CSRs and request Kong certificate (kong/config_files/certs).

      \n
      marek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-amer-prod-gbl-mdm-hub.COMPANY.com.key -out api-amer-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n.....+++++\n.....................................................+++++\nwriting new private key to 'api-amer-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []: api-amer-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588632">●●●●●●●●●●●●</a>\nAn optional company name []:
      \n
    2. Generate private-keys, CSRs and request Kafka certificate (apac-backend/secret.yaml)

      \n
      marek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-amer-prod-gbl-mdm-hub.COMPANY.com.key -out kafka-amer-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..........................+++++\n.....+++++\nwriting new private key to 'kafka-amer-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:kafka-amer-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588633">●●●●●●●●●●●●</a>\nAn optional company name []:
      \n




BELOW IS AMER NPROD COPY WE USE AS A REFERENCE


Configuration steps:

  1. Configure mongo permissions for users mdm_batch_service, mdmhub, and mdmgw. Add permissions to database schema related to new environment:

---
users:
  mdm_batch_service:
    mongo:
      databases:
        reltio_amer-dev:
          roles:
            - "readWrite"
        reltio_[tenant-env]:
          roles:
            - "readWrite"

2. Add directory with environment configuration files in amer/nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.

3. Change file [tenant-env]/values.yaml:

4. Change file [tenant-env]/kafka-topics.yaml by changing the prefix of topic names.

5. Add kafka connect instance for newly added environment - add the configuration section to kafkaConnect property located in amer/nprod/namespaces/amer-backend/values.yaml
5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key.passphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key

6. Configure Consul (amer/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):

7. Modify components configuration:

8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change fluentd.kafka.topics list.

9. Monitoring

a) Add additional service monitor to amer/nprod/namespaces/monitoring/service-monitors.yaml configuration file:

- namespace: [tenant-env]
  name: sm-[tenant-env]-services
  selector:
    matchLabels:
      prometheus: [tenant-env]-services
  endpoints:
    - port: prometheus
      interval: 30s
      scrapeTimeout: 30s
    - port: prometheus-fluent-bit
      path: "/api/v1/metrics/prometheus"
      interval: 30s
      scrapeTimeout: 30s

b) Add Snowflake database details to amer/nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:

jdbcExporters:
  amer-dev:
    db:
      url: "jdbc:snowflake://amerdev01.us-east-1.privatelink.snowflakecomputing.com/?db=COMM_AMER_MDM_DMART_DEV_DB&role=COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE&warehouse=COMM_MDM_DMART_WH"
      username: "[ USERNAME ]"

Add ●●●●●●●●●●● to amer/nprod/namespaces/monitoring/secrets.yaml:

jdbcExporters:
  amer-dev:
    db:
      password: "[ ●●●●●●●●●●● ]"


10. Run Jenkins job responsible for deploying backend services - to apply mongo and fluentd changes.

11. Connect to mongodb server and create scheme reltio_[tenant-env].

11.1 Create collections and indexes in the newly added schemas:
 Intellishell

db.createCollection("entityHistory") 
db.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});
db.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});
db.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});
db.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});
db.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});
db.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});
db.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});
db.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});
db.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});
db.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});
db.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});
db.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});

db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});


db.createCollection("entityRelations")
db.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});
db.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});
db.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});
db.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});
db.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});
db.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});
db.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});
db.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});
db.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   
db.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   
db.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   
db.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});
 
db.createCollection("LookupValues")
db.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});
db.LookupValues.createIndex({countries: 1},  {background: true, name:  "idx_countries"});
db.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});
db.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});
db.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});
db.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});

db.createCollection("ErrorLogs")
db.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});
db.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});
db.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});
db.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});

db.createCollection("batchEntityProcessStatus")
db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});
db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});
db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});
db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});

db.createCollection("batchInstance")

db.createCollection("relationCache")
db.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});

db.createCollection("DCRRequests")
db.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});
db.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});
db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});

db.createCollection("entityMatchesHistory")
db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});


db.createCollection("DCRRegistry")

db.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});

db.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});
db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});

db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});


db.createCollection("sequenceCounters")

db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) // NOTE: replace [sequence start number] with the region's start value from the table below

Region | Seq start number
emea   | 5000000000
amer   | 6000000000
apac   | 7000000000

12. Run Jenkins job to deploy kafka resources and mdmhub components for the new environment.

13. Create paths on S3 bucket required by Snowflake and Airflow's DAGs.

14. Configure Kibana:

15. Configure basic Airflow DAGs (ansible directory):

16. Deploy DAGs (NOTE: check that your kubectl is configured to communicate with the cluster you want to change):

ansible-playbook install_mdmgw_airflow_services_k8s.yml -i inventory/[tenant-env]/inventory

17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. 


Verification points

Check Reltio's configuration - get reltio tenant configuration:

  1. Check if you are able to execute Reltio's operations using credentials of the service user,

  2. Check if streaming processing is enabled - streamingConfig.messaging.destinations.enabled = true, streamingConfig.streamingEnabled = true, streamingConfig.streamingAPIEnabled = true,

  3. Check if Cassandra export is configured - exportConfig.smartExport.secondaryDsEnabled = false.


Check Mongo:

  1. Users mdmgw, mdmhub and mdm_batch_service - permissions for the newly added database (readWrite),
  2. Indexes,

  3. Verify that the correct start value is set for the sequence COMPANYAddressIDSeq - collection sequenceCounters, _id = COMPANYAddressIDSeq.


Check MDMHUB API:

  1. Check mdm-manager API with apikey authentication by executing one of the read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible when there is no HCP data in Reltio,

  2. Run the same operation using oAuth2 authentication - remember that the manager url is different,
  3. Verify api-router API with apikey authentication using the search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible when there is no HCP data in Reltio,

  4. Run the same operation using oAuth2 authentication - remember that the api router url is different,
  5. Check batch service API with apikey authentication by executing the following operation: GET {{ batch_service_url }}/batchController/NA/instances/NA. The request should return HTTP code 403 and the body:

    {
        "code": "403",
        "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."
    }

    The request doesn't create any batch.

  6. Run the same operation using oAuth2 authentication - remember that the batch service url is different,
  7. Verify component logs: mdm-manager, api-router and batch-service. Focus on errors and Kafka-related records - rebalancing, authorization problems, topic existence warnings, etc.


MDMHUB streaming services:

  1. Check logs of the reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. Verify that there are no errors or Kafka warnings related to rebalancing, authorization problems, topic existence, etc.,

  2. Verify that the lookup refresh process is working properly - check the existence of the mongo collection LookupValues. It should have data,


Airflow:

  1. Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.

  2. Wait for them to finish and validate the results.


Snowflake:

  1. Check snowflake connector logs,

  2. Check if the tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist in the LANDING schema and have data,

  3. Verify if mdm-hub-snowflake-dm package is deployed,
  4. What else?


Monitoring:

  1. Check grafana dashboards:
    1. HUB Performance,
    2. Kafka Topics Overview,
    3. Host Statistics,
    4. JMX Overview,
    5. Kong,
    6. MongoDB.
  2. Check Kibana index patterns:
    1. {{env}}-internal-batch-efk-transactions*,
    2. {{env}}-internal-gw-efk-transactions*,
    3. {{env}}-internal-publisher-efk-transactions*,
    4. {{env}}-internal-subscriber-efk-transactions*,
    5. {{env}}-mdmhub,
  3. Check Kibana dashboards:
    1. {{env}} API calls,
    2. {{env}} Batch Instances,
    3. {{env}} Batch loads,
    4. {{env}} Error Logs Overview,
    5. {{env}} Error Logs RDM,
    6. {{env}} HUB Store
    7. {{env}} HUB events,

    8. {{env}} MDM Events,
    9. {{env}} Profile Updates,
  4. Check alerts - How?



" + }, + { + "title": "Configuration (apac k8s)", + "pageID": "228933487", + "pageLink": "/pages/viewpage.action?pageId=228933487", + "content": "

Installation of a new APAC non-prod cluster based on the AMER non-prod configuration.


  1. Copy mdm-hub-cluster-env/amer directory into mdm-hub-cluster-env/apac directory.

  2. Change dir names from "amer" to "apac".

  3. Replace everything in files in apac directory: "amer"→"apac".

  4. Certificates

    1. Generate private-keys, CSRs and request Kong certificate (kong/config_files/certs).

      \n
      anuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-apac-nprod-gbl-mdm-hub.COMPANY.com.key -out api-apac-nprod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..................+++++\n.........................+++++\nwriting new private key to 'api-apac-nprod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:api-apac-nprod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588584">●●●●●●●●●●●●</a>\nAn optional company name []:
      \n

      SAN:
      DNS Name=api-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=www.api-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kibana-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=prometheus-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=grafana-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=elastic-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=consul-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=akhq-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=airflow-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=mongo-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=mdm-log-management-apac-nonprod.COMPANY.com
      DNS Name=gbl-mdm-hub-apac-nprod.COMPANY.com

      Place private-key and signed certificate in kong/config_files/certs. Git-ignore them and encrypt them into .encrypt files.

    2. Generate private-keys, CSRs and request Kafka certificate (apac-backend/secret.yaml)

      \n
      anuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key -out kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n................................................................+++++\n.......................................+++++\nwriting new private key to 'kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:kafka-apac-nprod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588586">●●●●●●●●●●●●</a>\nAn optional company name []:
      \n

      SAN:
      DNS Name=kafka-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b2-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b3-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b4-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b5-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b6-apac-nprod-gbl-mdm-hub.COMPANY.com

      After receiving the certificate, encode it with base64 and paste into apac-backend/secrets.yaml:
        -> secrets.mdm-kafka-external-listener-cert.listener.key
        -> secrets.mdm-kafka-external-listener-cert.listener.crt 
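The base64 step can be done with coreutils; `-w 0` keeps the output on a single line, which is what the YAML scalar in secrets.yaml expects. It is demonstrated on a stand-in file here because the real certificate is environment-specific.

```shell
# Demonstrated on a stand-in file; replace listener.crt with the real
# kafka-apac-nprod-gbl-mdm-hub.COMPANY.com key/crt files.
printf 'dummy-cert-content' > listener.crt
base64 -w 0 listener.crt > listener.crt.b64   # -w 0: single-line output
base64 -d listener.crt.b64                    # round-trip sanity check
```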

  5.  (*) Since this is a new environment, remove everything under "migration" key in apac-backend/values.yaml.

  6. Replace all user_passwords in apac/nprod/secrets.yaml. For each ●●●●●●●●●●●●●●●●● a new, 32-char one and globally replace it in all apac configs.

  7. Go through apac-dev/config_files one by one and adjust settings such as: Reltio, SQS etc.

  8. (*) Change Kafka topic and consumer group names to fit naming standards. This is a one-time activity and does not need to be repeated if subsequent environments are built based on the APAC config.
  9. Export amer-nprod CRDs into yaml file and import it in apac-nprod:

    \n
    $ kubectx atp-mdmhub-nprod-amer\n$ kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml\n$ kubectx atp-mdmhub-nprod-apac\n$ kubectl apply -f ~/crd-definitions-amer.yaml
    \n
  10. Create config dirs for git2consul (mdm-hub-env-config):

    \n
    $ git checkout config/dev_amer\n$ git pull\n$ git branch config/dev_apac\n$ git checkout config/dev_apac\n$ git push origin config/dev_apac
    \n

    Repeat for qa and stage.
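The per-environment branch creation from step 10 can be sketched as a loop; branch names follow the config/&lt;env&gt;_apac pattern shown above, creating each apac branch from its amer counterpart, and the git commands are only assembled and printed here.

```shell
# Sketch: create config/<env>_apac from config/<env>_amer for all three
# environments (commands only echoed, not executed).
out=""
for env in dev qa stage; do
  out="${out}git branch config/${env}_apac config/${env}_amer
git push origin config/${env}_apac
"
done
printf '%s' "$out"
```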

  11. Install operators:

    \n
    $ ./install.sh -l operators -r apac -c nprod -e apac-dev -v 3.9.4
    \n
  12. Install backend:

    \n
    $ ./install.sh -l backend -r apac -c nprod -e apac-dev -v 3.9.4
    \n
  13. Log into mongodb (use port-forwarding if there is no connection through kong: run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to mongo on localhost:27017). Run the script below:

    \n
    db.createCollection("entityHistory") \ndb.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});\ndb.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});\ndb.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});\ndb.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});\ndb.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});\ndb.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});\ndb.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});\ndb.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});\ndb.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});\ndb.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});\ndb.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});\ndb.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});\n\ndb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});\n\ndb.createCollection("entityRelations")\ndb.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});\ndb.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});\ndb.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});\ndb.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});\ndb.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});\ndb.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  
"idx_lastModificationDate"});\ndb.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});\ndb.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});\ndb.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   \ndb.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   \ndb.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   \ndb.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});\n \ndb.createCollection("LookupValues")\ndb.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});\ndb.LookupValues.createIndex({countries: 1},  {background: true, name:  "idx_countries"});\ndb.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});\ndb.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});\ndb.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});\ndb.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});\n\ndb.createCollection("ErrorLogs")\ndb.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});\ndb.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});\ndb.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});\ndb.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});\n\ndb.createCollection("batchEntityProcessStatus")\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: 
-1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});\n\ndb.createCollection("batchInstance")\n\ndb.createCollection("relationCache")\ndb.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});\n\ndb.createCollection("DCRRequests")\ndb.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});\ndb.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});\ndb.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.createCollection("entityMatchesHistory")\ndb.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});\n\ndb.createCollection("DCRRegistry")\ndb.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});\n\ndb.createCollection("sequenceCounters")\ndb.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong(7000000000)}) // NOTE: 7000000000 is APAC-specific
    \n
  14. Log into Kibana. Export dashboards/indices from AMER and import them in APAC.
  15. Install mdmhub:

    \n
    $ ./install.sh -l mdmhub -r apac -c nprod -e apac-dev -v 3.9.4
    \n
  16. Tickets:
    1. DNS names ticket:

      Ticket queue: GBL-NETWORK DDI

      Title: Add domains to DNS


      Description:

      Hi Team,\n\nPlease add below domains:\n\napi-apac-nprod-gbl-mdm-hub.COMPANY.com\nkibana-apac-nprod-gbl-mdm-hub.COMPANY.com\nprometheus-apac-nprod-gbl-mdm-hub.COMPANY.com\ngrafana-apac-nprod-gbl-mdm-hub.COMPANY.com\nelastic-apac-nprod-gbl-mdm-hub.COMPANY.com\nconsul-apac-nprod-gbl-mdm-hub.COMPANY.com\nakhq-apac-nprod-gbl-mdm-hub.COMPANY.com\nairflow-apac-nprod-gbl-mdm-hub.COMPANY.com\nmongo-apac-nprod-gbl-mdm-hub.COMPANY.com\nmdm-log-management-apac-nonprod.COMPANY.com\ngbl-mdm-hub-apac-nprod.COMPANY.com\n\nas CNAMEs of our ELB:\na81322116787943bf80a29940dbc2891-00e7418d9be731b0.elb.ap-southeast-1.amazonaws.com

      Also, please add one CNAME for each one of below ELBs:\n\nCNAME: kafka-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a7ba438d7068b4a799d29d3d408b0932-1e39235cdff6d511.elb.ap-southeast-1.amazonaws.com\n\nCNAME: kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a72bbc64327cb4ee4b35ae5abeefbb26-4c392c106b29b6e5.elb.us-east-1.amazonaws.com\n\nCNAME: kafka-b2-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a7fdb6117b2184096915aed31732110b-91c5ac7fb0968710.elb.us-east-1.amazonaws.com\n\nCNAME: kafka-b3-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a99220323cc684bcaa5e29c198777e13-ddf5ddbf36fe3025.elb.us-east-1.amazonaws.com

      Best Regards,
      Piotr
      MDM Hub
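The domain list in the ticket above follows one naming pattern, so it can be generated rather than typed by hand. A minimal sketch, assuming the `<service>-apac-nprod-gbl-mdm-hub.COMPANY.com` pattern holds for all routed services (the last two names in the ticket do not follow it and are listed explicitly):

```shell
# Sketch: emit the eleven domains requested in the DNS ticket above.
dns_ticket_domains() {
  local suffix="apac-nprod-gbl-mdm-hub.COMPANY.com"
  local svc
  for svc in api kibana prometheus grafana elastic consul akhq airflow mongo; do
    echo "${svc}-${suffix}"
  done
  # These two do not follow the per-service pattern.
  echo "mdm-log-management-apac-nonprod.COMPANY.com"
  echo "gbl-mdm-hub-apac-nprod.COMPANY.com"
}
dns_ticket_domains
```

Paste the output into the ticket body and cross-check it against the SAN list on the certificates.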
    2. Firewall whitelisting

      Ticket queue: GBL-NETWORK ECS

      Title: Firewall exceptions for new BoldMoves PDKS cluster


      Description:

      Hi Team,\n\nPlease open all traffic listed in attached Excel sheet.\nIn case this is not the queue where I should request Firewall changes, kindly point me in the right direction.\n\nBest Regards,\nPiotr\nMDM Hub

      Attached excel:

      Source | Source IP | Destination | Destination IP | Port

      MDM Hub monitoring (euw1z1pl046.COMPANY.com)

      CI/CD server (sonar-gbicomcloud.COMPANY.com)

      10.90.98.0/24 | pdcs-apa1p.COMPANY.com | - | 443

      MDM Hub monitoring (euw1z1pl046.COMPANY.com)

      CI/CD server (sonar-gbicomcloud.COMPANY.com)

      EMEA NPROD MDM Hub

      10.90.98.0/24 | APAC NPROD - PDKS cluster

      ●●●●●●●●●●●●●●●

      ●●●●●●●●●●●●●●●

      443

      9094

      Global NPROD MDM Hub | 10.90.96.0/24 | APAC NPROD - PDKS cluster

      ●●●●●●●●●●●●●●●

      ●●●●●●●●●●●●●●●

      443
      APAC NPROD - PDKS cluster

      ●●●●●●●●●●●●●●●

      ●●●●●●●●●●●●●●●

      Global NPROD MDM Hub | 10.90.96.0/24 | 8443
      APAC NPROD - PDKS cluster

      ●●●●●●●●●●●●●●●

      ●●●●●●●●●●●●●●●

      EMEA NPROD MDM Hub | 10.90.98.0/24 | 8443
  17. Integration tests:
    In mdm-hub-env-config prepare inventory/kube_dev_apac (copy kube_dev_amer and adjust variables)
    run "prepare_int_tests" playbook:

    \n
    $ ansible-playbook prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"
    \n


    in mdm-hub-inbound-services, confirm that the test resources (citrus properties) for mdm-integration-tests have been replaced, then run two Gradle tasks:
    -mdm-gateway/mdm-integration-tests/Tasks/verification/commonIntegrationTests
    -mdm-gateway/mdm-integration-tests/Tasks/verification/integrationTestsForCOMPANYModel


" + }, + { + "title": "Configuration (apac prod k8s)", + "pageID": "234699630", + "pageLink": "/pages/viewpage.action?pageId=234699630", + "content": "

Installation of a new APAC prod cluster based on the AMER prod configuration.


  1. Copy mdm-hub-cluster-env/amer/prod directory into mdm-hub-cluster-env/apac directory.

  2. Change dir names from "amer" to "apac" - apac-backend, apac-prod

  3. Replace everything in files in apac directory: "amer"→"apac".
    \"\"

  4. Certificates

    1. Generate private-keys, CSRs and request Kong certificate (kong/config_files/certs).

      \n
      anuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-apac-prod-gbl-mdm-hub.COMPANY.com.key -out api-apac-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..................+++++\n.........................+++++\nwriting new private key to 'api-apac-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:api-apac-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588665">●●●●●●●●●●●●</a>\nAn optional company name []:
      \n

      SAN:
      DNS Name=api-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=www.api-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kibana-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=prometheus-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=grafana-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=elastic-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=consul-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=akhq-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=airflow-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=mongo-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=mdm-log-management-apac-noprod.COMPANY.com
      DNS Name=gbl-mdm-hub-apac-prod.COMPANY.com

      Place private-key and signed certificate in kong/config_files/certs. Git-ignore them and encrypt them into .encrypt files.
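The interactive prompts in the transcript above can be avoided. A sketch of the same CSR request done non-interactively with `-subj` and `-addext` (requires OpenSSL 1.1.1 or newer; the SAN list here is abbreviated to two names for illustration — use the full list above):

```shell
# Non-interactive CSR sketch; subject fields match the prompt answers above.
CN="api-apac-prod-gbl-mdm-hub.COMPANY.com"
openssl req -nodes -newkey rsa:2048 -sha256 \
  -keyout "${CN}.key" -out "${CN}.csr" \
  -subj "/O=COMPANY Incorporated/CN=${CN}/emailAddress=DL-ATP_MDMHUB_SUPPORT@COMPANY.com" \
  -addext "subjectAltName=DNS:${CN},DNS:kibana-apac-prod-gbl-mdm-hub.COMPANY.com"
```

Verify the SANs landed in the request with `openssl req -in "${CN}.csr" -noout -text` before submitting it for signing.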

    2. Generate private-keys, CSRs and request Kafka certificate (apac-backend/secrets.yaml)

      \n
      anuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-apac-prod-gbl-mdm-hub.COMPANY.com.key -out kafka-apac-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n................................................................+++++\n.......................................+++++\nwriting new private key to 'kafka-apac-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:kafka-apac-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588666">●●●●●●●●●●●●</a>\nAn optional company name []:
      \n

      SAN:
      DNS Name=kafka-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b1-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b2-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b3-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b4-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b5-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b6-apac-prod-gbl-mdm-hub.COMPANY.com

      After receiving the certificate, encode it with base64 and paste into apac-backend/secrets.yaml:
        -> secrets.mdm-kafka-external-listener-cert.listener.key
        -> secrets.mdm-kafka-external-listener-cert.listener.crt 

      Raise a ticket via Request Manager
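To produce the values pasted into apac-backend/secrets.yaml, encode the key and certificate as single-line base64. A sketch with placeholder file content (`-w0` disables line wrapping on GNU base64 so the value fits on one YAML line):

```shell
# Sketch: single-line base64 for the secrets.yaml values (placeholder content).
keyfile="$(mktemp)"
printf 'placeholder key material\n' > "$keyfile"   # stand-in for the real .key file
b64_key="$(base64 -w0 < "$keyfile")"
echo "listener.key: ${b64_key}"
rm -f "$keyfile"
```

Repeat for the signed .crt and paste both values under secrets.mdm-kafka-external-listener-cert.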
  5.  (*) Since this is a new environment, remove everything under "migration" key in apac-backend/values.yaml.

  6. Replace all user passwords in apac/prod/secrets.yaml: for each ●●●●●●●●●●●●●●●●● generate a new, 40-character one and replace it globally in all apac configs.

  7. Go through apac-dev/config_files one by one and adjust settings such as Reltio, SQS, etc.

  8. (*) Change Kafka topic and consumer-group names to fit the naming standards. This is a one-time activity and does not need to be repeated if subsequent environments are built from the APAC config.
  9. Export the amer-prod CRDs into a YAML file and import them into apac-prod:

    \n
    $ kubectx atp-mdmhub-prod-amer\n$ kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml\n$ kubectx atp-mdmhub-prod-apac\n$ kubectl apply -f ~/crd-definitions-amer.yaml
    \n
  10. Create config dirs for git2consul (mdm-hub-env-config):

    \n
    $ git checkout config/dev_amer\n$ git pull\n$ git branch config/dev_apac\n$ git checkout config/dev_apac\n$ git push origin config/dev_apac
    \n

    Repeat for qa and stage.
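The same branch creation repeats for dev, qa and stage. A sketch that echoes the full command sequence for all three (drop the echo prefix to run it; `checkout -b` is the one-step equivalent of the branch/checkout pair above):

```shell
# Sketch: print the git2consul branch-creation commands per environment.
git2consul_branch_cmds() {
  local env
  for env in dev qa stage; do
    echo "git checkout config/${env}_amer"
    echo "git pull"
    echo "git checkout -b config/${env}_apac"
    echo "git push origin config/${env}_apac"
  done
}
git2consul_branch_cmds
```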

  11. Install operators:

    \n
    $ ./install.sh -l operators -r apac -c prod -e apac-dev -v 3.9.4
    \n
  12. Install backend:

    \n
    $ ./install.sh -l backend -r apac -c prod -e apac-dev -v 3.9.4
    \n
  13. 1. Log into MongoDB. Use port forwarding if there is no connection through Kong: run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to Mongo on localhost:27017. Alternatively,
    retrieve the IP address of the Kong service's ELB and add it to the Windows hosts file as a DNS name (example: ●●●●●●●●●●●● mongo-apac-prod-gbl-mdm-hub.COMPANY.com), then connect to Mongo on mongo-apac-prod-gbl-mdm-hub.COMPANY.com:27017.

    2. Run the script below:

    \n
    db.createCollection("entityHistory") \ndb.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});\ndb.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});\ndb.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});\ndb.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});\ndb.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});\ndb.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});\ndb.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});\ndb.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});\ndb.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});\ndb.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});\ndb.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});\ndb.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});\n\ndb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});\n\ndb.createCollection("entityRelations")\ndb.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});\ndb.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});\ndb.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});\ndb.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});\ndb.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});\ndb.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  
"idx_lastModificationDate"});\ndb.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});\ndb.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});\ndb.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   \ndb.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   \ndb.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   \ndb.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});\n \ndb.createCollection("LookupValues")\ndb.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});\ndb.LookupValues.createIndex({countries: 1},  {background: true, name:  "idx_countries"});\ndb.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});\ndb.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});\ndb.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});\ndb.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});\n\ndb.createCollection("ErrorLogs")\ndb.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});\ndb.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});\ndb.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});\ndb.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});\n\ndb.createCollection("batchEntityProcessStatus")\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: 
-1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});\n\ndb.createCollection("batchInstance")\n\ndb.createCollection("relationCache")\ndb.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});\n\ndb.createCollection("DCRRequests")\ndb.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});\ndb.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});\ndb.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.createCollection("entityMatchesHistory")\ndb.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});\n\ndb.createCollection("DCRRegistry")\ndb.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});\n\ndb.createCollection("sequenceCounters")\ndb.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong(7000000000)}) // NOTE: 7000000000 is APAC-specific
    \n

    Region | Seq start number
    amer | 6000000000
    apac | 7000000000
    emea | 5000000000
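The region table above can be captured in a small helper so the insertOne value is never picked by hand. A sketch, with region names and values exactly as in the table:

```shell
# Sketch: region-specific starting value for the COMPANYAddressIDSeq counter.
seq_start() {
  case "$1" in
    amer) echo 6000000000 ;;
    apac) echo 7000000000 ;;
    emea) echo 5000000000 ;;
    *) echo "unknown region: $1" >&2; return 1 ;;
  esac
}
# Print the insertOne statement for the region being installed.
echo "db.sequenceCounters.insertOne({_id: \"COMPANYAddressIDSeq\", sequence: NumberLong($(seq_start apac))})"
```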
  14. Log into Kibana. Export dashboards/indices from AMER and import them in APAC.
    Use the following playbook:
    - change values in the ansible repository: inventory/jenkins/group_vars/all/all.yml → #CHNG
    - run the playbook: ansible-playbook install_kibana_objects.yml -i inventory/jenkins/inventory --vault-password-file=../vault -v
  15. Install mdmhub:

    \n
    $ ./install.sh -l mdmhub -r apac -c prod -e apac-dev -v 3.9.4
    \n
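Steps 11, 12 and 15 call install.sh with identical flags except the layer, so the three invocations can be driven by one loop. A sketch that echoes the commands for review (remove the echo to execute):

```shell
# Sketch: emit the three install.sh calls (operators, backend, mdmhub) in order.
install_layer_cmds() {
  local region="$1" cluster="$2" env="$3" version="$4" layer
  for layer in operators backend mdmhub; do
    echo "./install.sh -l ${layer} -r ${region} -c ${cluster} -e ${env} -v ${version}"
  done
}
install_layer_cmds apac prod apac-dev 3.9.4
```

The layer order matters: operators must exist before the backend, and the backend (Mongo, Kafka) before mdmhub.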
  16. Tickets:
    1. DNS names ticket:

    2. Firewall whitelisting

      Ticket queue: GBL-NETWORK ECS

      Title: Firewall exceptions for new BoldMoves PDKS cluster


      Description:

      Hi Team,\n\nPlease open all traffic listed in attached Excel sheet.\nIn case this is not the queue where I should request Firewall changes, kindly point me in the right direction.\n\nBest Regards,\nPiotr\nMDM Hub

      Attached excel:

      Source | Source IP | Destination | Destination IP | Port

      MDM Hub monitoring (euw1z1pl046.COMPANY.com)

      CI/CD server (sonar-gbicomcloud.COMPANY.com)

      10.90.98.0/24 | pdcs-apa1p.COMPANY.com | - | 443

      MDM Hub monitoring (euw1z1pl046.COMPANY.com)

      CI/CD server (sonar-gbicomcloud.COMPANY.com)

      EMEA prod MDM Hub

      10.90.98.0/24 | APAC prod - PDKS cluster

      ●●●●●●●●●●●●●●●

      ●●●●●●●●●●●●●●●

      443

      9094

      Global prod MDM Hub | 10.90.96.0/24 | APAC prod - PDKS cluster

      ●●●●●●●●●●●●●●●

      ●●●●●●●●●●●●●●●

      443
      APAC prod - PDKS cluster

      ●●●●●●●●●●●●●●●

      ●●●●●●●●●●●●●●●

      Global prod MDM Hub | 10.90.96.0/24 | 8443
      APAC prod - PDKS cluster

      ●●●●●●●●●●●●●●●

      ●●●●●●●●●●●●●●●

      EMEA prod MDM Hub | 10.90.98.0/24 | 8443
  17. Integration tests:
    In mdm-hub-env-config prepare inventory/kube_dev_apac (copy kube_dev_amer and adjust variables)
    run "prepare_int_tests" playbook:

    \n
    $ ansible-playbook prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"
    \n


    in mdm-hub-inbound-services, confirm that the test resources (citrus properties) for mdm-integration-tests have been replaced, then run two Gradle tasks:
    -mdm-gateway/mdm-integration-tests/Tasks/verification/commonIntegrationTests
    -mdm-gateway/mdm-integration-tests/Tasks/verification/integrationTestsForCOMPANYModel
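The two verification tasks can also be run from the command line. A sketch assuming the Gradle project path mirrors the task tree shown above (adjust to the actual repo layout):

```shell
# Sketch: print the CLI form of the two verification tasks (project path assumed).
gradle_verification_cmds() {
  local task
  for task in commonIntegrationTests integrationTestsForCOMPANYModel; do
    echo "./gradlew :mdm-gateway:mdm-integration-tests:${task}"
  done
}
gradle_verification_cmds
```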


" + }, + { + "title": "Configuration (emea)", + "pageID": "218444982", + "pageLink": "/pages/viewpage.action?pageId=218444982", + "content": "


Setup Mongo Indexes and Collections:

EntityHistory


db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});


DCR Service 2 Indexes:

\n
db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});\n\ndb.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});
\n



" + }, + { + "title": "Configuration (gblus prod)", + "pageID": "164470081", + "pageLink": "/pages/viewpage.action?pageId=164470081", + "content": "

Config file: gblmdm-hub-us-spec_v05.xlsx

AWS Resources

Resource Name
Resource Type
Specification
AWS Region
AWS Availability Zone
Depends on
Description
Components
HUB
GW
Interface
GBL MDM US HUB Prod Data Svr1 - amraelp00007844 | EC2 | r5.2xlarge | us-east-1b

EBS APP DATA MDM PROD SVR1
EBS DOCKER DATA MDM PROD SVR1

- Mongo - data redundancy and high availability
   primary, secondary, and tertiary need to be hosted on separate servers and zones - high availability if one zone is offline
- Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 750GB - /app/ - docker applications local storage
OS: Red Hat Enterprise Linux Server release 7.4
mongo
EFK
-DATA
GBL MDM US HUB Prod Data Svr2 - amraelp00007870 | EC2 | r5.2xlarge | us-east-1e | EBS APP DATA MDM PROD SVR2
EBS DOCKER DATA MDM PROD SVR2
- Mongo - data redundancy and high availability
   primary, secondary, and tertiary need to be hosted on separate servers and zones - high availability if one zone is offline
- Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 750GB - /app/ - docker applications local storage
OS: Red Hat Enterprise Linux Server release 7.4
mongo
EFK
-DATA
GBL MDM US HUB Prod Data Svr3 - amraelp00007847 | EC2 | r5.2xlarge | us-east-1b | EBS APP DATA MDM PROD SVR3
EBS DOCKER DATA MDM PROD SVR3
- Mongo - data redundancy and high availability
   primary, secondary, and tertiary need to be hosted on separate servers and zones - high availability if one zone is offline
- Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 750GB - /app/ - docker applications local storage
OS: Red Hat Enterprise Linux Server release 7.4
mongo
EFK
-DATA
GBL MDM US HUB Prod Svc Svr1 - amraelp00007848 | EC2 | r5.2xlarge | us-east-1b | EBS APP SVC MDM PROD SVR1
EBS DOCKER SVC MDM PROD SVR1
- Kafka and zookeeper
- Kong and Cassandra
    Cassandra replication factor set to 3 – Kong proxy high availability
    Load balancer for Kong API
- Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 450GB - /app/ - docker applications local storage
OS: Red Hat Enterprise Linux Server release 7.4
Kafka
Zookeeper
Kong
Cassandra

HUB

GW

inbound

outbound

GBL MDM US HUB Prod Svc Svr2 - amraelp00007849 | EC2 | r5.2xlarge | us-east-1b | EBS APP SVC MDM PROD SVR2
EBS DOCKER SVC MDM PROD SVR2
- Kafka and zookeeper
- Kong and Cassandra
    Cassandra replication factor set to 3 – Kong proxy high availability
    Load balancer for Kong API
- Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 450GB - /app/ - docker applications local storage
OS: Red Hat Enterprise Linux Server release 7.4
Kafka
Zookeeper
Kong
Cassandra

HUB

GW

inbound

outbound

GBL MDM US HUB Prod Svc Svr3 - amraelp00007871 | EC2 | r5.2xlarge | us-east-1e | EBS APP SVC MDM PROD SVR3
EBS DOCKER SVC MDM PROD SVR3
- Kafka and zookeeper
- Kong and Cassandra
    Cassandra replication factor set to 3 – Kong proxy high availability
    Load balancer for Kong API
- Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 450GB - /app/ - docker applications local storage
OS: Red Hat Enterprise Linux Server release 7.4
Kafka
Zookeeper
Kong
Cassandra

HUB

GW

inbound

outbound

EBS APP DATA MDM Prod Svr1 | EBS | 750 GB XFS | us-east-1b
mount to /app on GBL MDM US HUB Prod Data Svr1 - amraelp00007844


EBS APP DATA MDM Prod Svr2 | EBS | 750 GB XFS | us-east-1e
mount to /app on GBL MDM US HUB Prod Data Svr2 - amraelp00007870


EBS APP DATA MDM Prod Svr3 | EBS | 750 GB XFS | us-east-1b
mount to /app on GBL MDM US HUB Prod Data Svr3 - amraelp00007847


EBS DOCKER DATA MDM Prod Svr1 | EBS | 50 GB XFS | us-east-1b
mount to docker devicemapper on GBL MDM US HUB Prod Data Svr1 - amraelp00007844


EBS DOCKER DATA MDM Prod Svr2 | EBS | 50 GB XFS | us-east-1e
mount to docker devicemapper on GBL MDM US HUB Prod Data Svr2 - amraelp00007870


EBS DOCKER DATA MDM Prod Svr3 | EBS | 50 GB XFS | us-east-1b
mount to docker devicemapper on GBL MDM US HUB Prod Data Svr3 - amraelp00007847


EBS APP SVC MDM Prod Svr1 | EBS | 450 GB XFS | us-east-1b
mount to /app on GBL MDM US HUB Prod Svc Svr1 - amraelp00007848


EBS APP SVC MDM Prod Svr2 | EBS | 450 GB XFS | us-east-1b
mount to /app on GBL MDM US HUB Prod Svc Svr2 - amraelp00007849


EBS APP SVC MDM Prod Svr3 | EBS | 450 GB XFS | us-east-1e
mount to /app on GBL MDM US HUB Prod Svc Svr3 - amraelp00007871


EBS DOCKER SVC MDM Prod Svr1 | EBS | 50 GB XFS | us-east-1b
mount to docker devicemapper on GBL MDM US HUB Prod Svc Svr1 - amraelp00007848


EBS DOCKER SVC MDM Prod Svr2 | EBS | 50 GB XFS | us-east-1b
mount to docker devicemapper on GBL MDM US HUB Prod Svc Svr2 - amraelp00007849


EBS DOCKER SVC MDM Prod Svr3 | EBS | 50 GB XFS | us-east-1e
mount to docker devicemapper on GBL MDM US HUB Prod Svc Svr3 - amraelp00007871


GBLMDMHUB US S3 Bucket
gblmdmhubprodamrasp101478
S3
us-east-1





Load Balancer | ELB | ELB


GBL MDM US HUB Prod Svc Svr1
GBL MDM US HUB Prod Svc Svr2
GBL MDM US HUB Prod Svc Svr3

MAP 443 - 8443 (only HTTPS) - ssl offloading on KONG
Domain: gbl-mdm-hub-us-prod.COMPANY.com


NAME:  PFE-CLB-ATP-MDMHUB-US-PROD-001

DNS Name : internal-PFE-CLB-ATP-MDMHUB-US-PROD-001-146249044.us-east-1.elb.amazonaws.com




SSL cert for domain gbl-mdm-hub-us-prod.COMPANY.com | Certificate | Domain: gbl-mdm-hub-us-prod.COMPANY.com






DNS Record | DNS | Address: gbl-mdm-hub-us-prod.COMPANY.com -> Load Balancer







Roles

Name
Type
Privileges
Member of
Description
Requests ID | Provided access
UNIX-universal-awscbsdev-mdmhub-us-prod-computers-U | Unix Computer ROLE | Access to hosts:
GBL MDM US HUB Prod Data Svr1
GBL MDM US HUB Prod Data Svr2
GBL MDM US HUB Prod Data Svr3
GBL MDM US HUB Prod Svc Svr1
GBL MDM US HUB Prod Svc Svr2
GBL MDM US HUB Prod Svc Svr3

Computer role including all MDM servers | -
UNIX-GBLMDMHUB-US-PROD-ADMIN | User Role | - dzdo root
- access to docker
- access to docker-engine (systemctl) – restart, stop, start docker engine
UNIX-GBLMDMHUB-US-PROD-U | Admin role to manage all resources on servers | -

KUCR - 20200519090759337

WARECP - 20200519083956229

GENDEL - 20200519094636480

MORAWM03 - 20200519084328245

PIASEM - 20200519095309490

UNIX-GBLMDMHUB-US-PROD-HUBROLE | User Role | - Read only for logs
- dzdo docker ps * - list docker container
- dzdo docker logs * - check docker container logs
- Read access to /app/* - check  docker container logs
UNIX-GBLMDMHUB-US-PROD-U | Role without root access: read-only access to logs and docker status checks; used by monitoring | -
UNIX-GBLMDMHUB-US-PROD-SEROLE | User Role
- dzdo docker * 
UNIX-GBLMDMHUB-US-PROD-U | Service role - used to run microservices from the Jenkins CD pipeline | -

Service Account - GBL32452299i

mdmuspr mdmhubuspr - 20200519095543524

UNIX-GBLMDMHUB-US-PROD-U | User Role | - Read only for logs
- Read access to /app/* - check  docker container logs
UNIX-GBLMDMHUB-US-PROD-U  
-


Ports - Security Group 

PFE-SG-GBLMDMHUB-US-APP-PROD-001

 

Port | Application | Whitelisted
8443 | Kong (API proxy) | ALL from COMPANY VPN
7000 | Cassandra (Kong DB) - inter-node communication | ALL from COMPANY VPN
7001 | Cassandra (Kong DB) - inter-node communication | ALL from COMPANY VPN
9042 | Cassandra (Kong DB) - client port | ALL from COMPANY VPN
9094 | Kafka - SASL_SSL protocol | ALL from COMPANY VPN
9093 | Kafka - SSL protocol | ALL from COMPANY VPN
9092 | Kafka - inter-broker communication | ALL from COMPANY VPN
2181 | Zookeeper | ALL from COMPANY VPN
2888 | Zookeeper - intercommunication | ALL from COMPANY VPN
3888 | Zookeeper - intercommunication | ALL from COMPANY VPN
27017 | Mongo | ALL from COMPANY VPN
9999 | HawtIO - administration console | ALL from COMPANY VPN
9200 | Elasticsearch | ALL from COMPANY VPN
9300 | Elasticsearch TCP - cluster communication port | ALL from COMPANY VPN
5601 | Kibana | ALL from COMPANY VPN
9100 - 9125 | Prometheus exporters | ALL from COMPANY VPN
9542 | Kong exporter | ALL from COMPANY VPN
2376 | Docker encrypted communication with the daemon | ALL from COMPANY VPN

Documentation

Service Account ( Jenkins / server access )
http://btondemand.COMPANY.com/solution/160303162657677

NSA - UNIX
- user access to Servers:
http://btondemand.COMPANY.com/solution/131014104610578


Instructions


How to add user access to UNIX-GBLMDMHUB-US-PROD-ADMIN


How to add/create new Service Account with access to UNIX-GBLMDMHUB-US-PROD-SEROLE


Service Account Name | UNIX group name | details | BTOnDemand | Lessons Learned
mdmuspr | mdmhubuspr | Service Account Name has to contain max 8 characters | GBL32452299i




How to open ports / create new Security Group - PFE-SG-GBLMDMHUB-US-APP-PROD-001

http://btondemand.COMPANY.com/solution/120906165824277

To create a new security group:

Create the server Security Group and open ports via the SC queue: GBL-BTI-IOD AWS FULL SUPPORT

Log in to http://btondemand.COMPANY.com/ and go to Get Support.

Search for queue: GBL-BTI-IOD AWS FULL SUPPORT

Submit Request to this queue:

Request

Hi Team,
Could you please create a new security group and assign it to these servers.

GBL MDM US HUB Prod Data Svr1 - amraelp00007844.COMPANY.com
GBL MDM US HUB Prod Data Svr2 - amraelp00007870.COMPANY.com
GBL MDM US HUB Prod Data Svr3 - amraelp00007847.COMPANY.com
GBL MDM US HUB Prod Svc Svr1 - amraelp00007848.COMPANY.com
GBL MDM US HUB Prod Svc Svr2 - amraelp00007849.COMPANY.com
GBL MDM US HUB Prod Svc Svr3 - amraelp00007871.COMPANY.com


Please add the following owners:
Primary: VARGAA08
Secondary: TIRUMS05
(please let me know if approval is required)


New Security group Requested: PFE-SG-GBLMDMHUB-US-APP-PROD-001

Please Open the following ports:


Port Application Whitelisted

8443 Kong (API proxy) ALL from COMPANY VPN
7000 Cassandra (Kong DB) - inter-node communication ALL from COMPANY VPN
7001 Cassandra (Kong DB) - inter-node communication ALL from COMPANY VPN
9042 Cassandra (Kong DB) - client port ALL from COMPANY VPN
9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN
9093 Kafka - SSL protocol ALL from COMPANY VPN
9092 KAFKA - Inter-broker communication ALL from COMPANY VPN
2181 Zookeeper ALL from COMPANY VPN
2888 Zookeeper - intercommunication ALL from COMPANY VPN
3888 Zookeeper - intercommunication ALL from COMPANY VPN
27017 Mongo ALL from COMPANY VPN
9999 HawtIO - administration console ALL from COMPANY VPN
9200 Elasticsearch ALL from COMPANY VPN
9300 Elasticsearch TCP - cluster communication port ALL from COMPANY VPN
5601 Kibana ALL from COMPANY VPN
9100 - 9125 Prometheus exporters ALL from COMPANY VPN
9542 Kong exporter ALL from COMPANY VPN
2376 Docker encrypted communication with the daemon ALL from COMPANY VPN


Apply this group to the following servers:
amraelp00007844
amraelp00007870
amraelp00007847
amraelp00007848
amraelp00007849
amraelp00007871

Regards,
Mikolaj


This will create a new Security Group

http://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32141041i

Then these security groups have to be assigned to the servers through the IOD portal by the server owner.

To open new ports:

log in to http://btondemand.COMPANY.com/ go to Get Support 

Search for queue: GBL-BTI-IOD AWS FULL SUPPORT

Submit Request to this queue:

Request

Hi,
Could you please modify the below security group and open the following port.

PROD security group:
Security group: PFE-SG-GBLMDMHUB-US-APP-PROD-001
Port: 2376
(this port is related to Docker for encrypted communication with the daemon)

The host related to this:
amraelp00007844
amraelp00007870
amraelp00007847
amraelp00007848
amraelp00007849
amraelp00007871

Regards,
Mikolaj


Certificates Configuration

Kafka 

GO TO:How to Generate JKS Keystore and Truststore

keytool -genkeypair -alias kafka.gbl-mdm-hub-us-prod.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=mdm_gbl_us_hub, C=US"
keytool -certreq -alias kafka.gbl-mdm-hub-us-prod.COMPANY.com -file kafka.gbl-mdm-hub-us-prod.COMPANY.com.csr -keystore server.keystore.jks

SAN:

gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007848.COMPANY.com
●●●●●●●●●●●●●●
amraelp00007849.COMPANY.com
●●●●●●●●●●●●●
amraelp00007871.COMPANY.com
●●●●●●●●●●●●●●


Create guest_user for KAFKA - "CN=kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US":

GO TO: How to Generate JKS Keystore and Truststore

keytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_user.keystore.jks -dname "CN=kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US"
keytool -certreq -alias guest_user -file kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com.csr -keystore guest_user.keystore.jks

Kong

openssl req -nodes -newkey rsa:2048 -sha256 -keyout gbl-mdm-hub-us-prod.key -out gbl-mdm-hub-us-prod.csr

Subject Alternative Names

gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007848.COMPANY.com
●●●●●●●●●●●●●●
amraelp00007849.COMPANY.com
●●●●●●●●●●●●●
amraelp00007871.COMPANY.com
●●●●●●●●●●●●●●
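Note that the `openssl req` command above, as written, prompts interactively for the subject and does not embed the Subject Alternative Names listed here. One way to supply both non-interactively is a request config file; the sketch below is illustrative (the file name `san.cnf` and the DN fields are assumptions, not the recorded production command):

```ini
# san.cnf - illustrative OpenSSL request config embedding the SANs listed above
[req]
distinguished_name = dn
req_extensions     = v3_req
prompt             = no

[dn]
CN = gbl-mdm-hub-us-prod.COMPANY.com
O  = COMPANY
C  = US

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = gbl-mdm-hub-us-prod.COMPANY.com
DNS.2 = amraelp00007848.COMPANY.com
DNS.3 = amraelp00007849.COMPANY.com
DNS.4 = amraelp00007871.COMPANY.com
```

It would then be referenced as `openssl req -nodes -newkey rsa:2048 -sha256 -config san.cnf -keyout gbl-mdm-hub-us-prod.key -out gbl-mdm-hub-us-prod.csr`; whether the CA honors CSR-embedded SANs or takes them from the request form depends on the internal CA process.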


EFK

PROD_GBL_US

openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-log-management-gbl-us-prod.key -out mdm-log-management-gbl-us-prod.csr
mdm-log-management-gbl-us-prod.COMPANY.com

Subject Alternative Names
mdm-log-management-gbl-us-prod.COMPANY.com
gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007844.COMPANY.com
●●●●●●●●●●●●●●
amraelp00007870.COMPANY.com
●●●●●●●●●●●●●●
amraelp00007847.COMPANY.com
●●●●●●●●●●●●●


esnode1
openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode1-gbl-us-prod.key -out mdm-esnode1-gbl-us-prod.csr
mdm-esnode1-gbl-us-prod.COMPANY.com - Elasticsearch esnode1

Subject Alternative Names
mdm-esnode1-gbl-us-prod.COMPANY.com
gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007844.COMPANY.com
●●●●●●●●●●●●●●

esnode2
openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode2-gbl-us-prod.key -out mdm-esnode2-gbl-us-prod.csr
mdm-esnode2-gbl-us-prod.COMPANY.com - Elasticsearch esnode2

Subject Alternative Names
mdm-esnode2-gbl-us-prod.COMPANY.com
gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007870.COMPANY.com
●●●●●●●●●●●●●●

esnode3
openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode3-gbl-us-prod.key -out mdm-esnode3-gbl-us-prod.csr
mdm-esnode3-gbl-us-prod.COMPANY.com - Elasticsearch esnode3

Subject Alternative Names
mdm-esnode3-gbl-us-prod.COMPANY.com
gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007847.COMPANY.com
●●●●●●●●●●●●●


Domain Configuration:

Example request: GBL30514754i - "Register domains mdm-log-management*"


  1. log in to http://btondemand.COMPANY.com/getsupport
  2. What can we help you with? - Search for "Network Team Ticket"
  3. Select the most relevant topic - "DNS Request"
  4. Submit a ticket to this queue.
  5. Ticket Details: - GBL32508266i

Request

Hi,
Could you please register the following domains:

ADD the below DNS entry:
========================
mdm-log-management-gbl-us-prod.COMPANY.com              Alias Record to                             amraelp00007847.COMPANY.com[●●●●●●●●●●●●●]


Kind regards,
Mikolaj

Request DNS

Hi,
Could you please register the following domains:

ADD the below DNS entry for the ELB: PFE-CLB-ATP-MDMHUB-US-PROD-001:

========================
gbl-mdm-hub-us-prod.COMPANY.com              Alias Record to                             DNS Name : internal-PFE-CLB-ATP-MDMHUB-US-PROD-001-146249044.us-east-1.elb.amazonaws.com


Referenced ELB creation ticket: GBL32561307i


Kind regards,
Mikolaj




Environment Installation


DISC:

server1 amraelp00007844
    APP DISC: nvme1n1
   DOCKER DISC: nvme2n1

server2 amraelp00007870
   APP DISC: nvme2n1
   DOCKER DISC: nvme1n1

server3 amraelp00007847
   APP DISC: nvme2n1
   DOCKER DISC: nvme1n1

server4 amraelp00007848
   APP1 DISC: nvme2n1
   APP2 DISC: nvme3n1
   DOCKER DISC: nvme1n1

server5 amraelp00007849
   APP1 DISC: nvme2n1
   APP2 DISC: nvme3n1
   DOCKER DISC: nvme1n1

server6 amraelp00007871
   APP1 DISC: nvme2n1
   APP2 DISC: nvme3n1
   DOCKER DISC: nvme1n1

Pre:

umount /var/lib/docker
lvremove /dev/datavg/varlibdocker
vgreduce datavg /dev/nvme1n1
vi /etc/fstab
RM - /dev/mapper/datavg-varlibdocker /var/lib/docker ext4 defaults 1 2


rmdir /var/lib/docker
mkdir /app/docker
ln -s /app/docker /var/lib/docker
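The three commands above relocate Docker's storage by making /var/lib/docker a symlink into the larger /app volume. A self-contained sketch of the same pattern, using a scratch directory instead of the real paths:

```shell
# Demonstrates the symlink relocation pattern on scratch paths
# (production uses /app/docker as the target and /var/lib/docker as the link).
root=$(mktemp -d)
mkdir -p "$root/app/docker"              # new storage location on the big volume
ln -s "$root/app/docker" "$root/docker"  # stand-in for /var/lib/docker
readlink "$root/docker"                  # resolves to the relocated directory
```

On the real host, Docker keeps writing to /var/lib/docker but the data lands under /app/docker; this must be done while the docker service is stopped.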


Start the docker service after the prepare_env_airflow_certs playbook run is completed.
Clear the content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" so that the daemon.json file is used.
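For reference, a minimal daemon.json of the kind that /etc/sysconfig/docker-storage is being cleared in favor of might look like the following; these values are illustrative assumptions, not the recorded production configuration:

```json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" }
}
```

The file lives at /etc/docker/daemon.json; any storage option left in docker-storage would conflict with the same option given there.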


Ansible:

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007844.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007870.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007847.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007848.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007849.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007871.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●



Docker Version:

amraelp00007844:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007870:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007847:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007848:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007849:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007871:root:[05:00 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1


Configure Registry Login (registry-gbicomcloud.COMPANY.com):

ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-file

Registry (manual config):
  Copy certs: /etc/docker/certs.d/registry-gbicomcloud.COMPANY.com/ from (mdm-reltio-handler-env\\ssl_certs\\registry)
  docker login registry-gbicomcloud.COMPANY.com (login on service account too)
  user/pass: mdm/**** (check mdm-reltio-handler-env\\group_vars\\all\\secret.yml)




Playbooks installation order:

Install node_exporter (run as a user with root access - systemctl node_exporter installation):
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file

Install Kafka
ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file

Install Kafka TOPICS:
ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --limit kafka1 --vault-password-file=~/vault-password-file

Install Mongo
ansible-playbook install_hub_mongo_rs_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file


Install Kong
ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file


Update KONG Config
ansible-playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file
Verification:
openssl s_client -connect amraelp00007848.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer
openssl s_client -connect amraelp00007849.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer
openssl s_client -connect amraelp00007871.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer


Install EFK
ansible-playbook install_efk_stack.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file

Install Prometheus services:
mongo_exporter:
ansible-playbook install_prometheus_mongo_exporter.yml -i inventory/prod_gblus/inventory --limit mongo3_exporter --vault-password-file=~/vault-password-file
cadvisor:
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file
sqs_exporter:
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file

Install Consul
ansible-playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file
# After this operation, get the SecretID from the consul container. On the container execute the following command:

$ consul acl bootstrap

and copy the returned SecretID as mgmt_token to the consul secrets.yml.

After the install consul step, run the update consul playbook with the proper mgmt_token (secret.yml) in every execution for each node.
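The bootstrap command prints the token on a `SecretID:` line, so the copy step can be scripted. The output below is a hypothetical sample with a placeholder UUID, not a real token:

```shell
# Hypothetical `consul acl bootstrap` output captured for illustration;
# the UUIDs are made-up placeholders.
out='AccessorID:  00000000-0000-0000-0000-000000000001
SecretID:    00000000-0000-0000-0000-000000000002
Description: Bootstrap Token (Global Management)'
# Extract the SecretID to paste as mgmt_token into secrets.yml:
mgmt_token=$(printf '%s\n' "$out" | awk '/^SecretID:/ {print $2}')
echo "$mgmt_token"    # prints 00000000-0000-0000-0000-000000000002
```

In practice `out=$(docker exec <consul-container> consul acl bootstrap)` would replace the literal; bootstrap can only be run once per cluster, so the token must be stored immediately.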

Update Consul
ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v
ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul2 --vault-password-file=~/vault-password-file -v
ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul3 --vault-password-file=~/vault-password-file -v

Setup Mongo Indexes and Collections:

Create Collections and Indexes
entityHistory
    db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
    db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
    db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});
    db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});
    db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
    db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
    db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
    db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
    db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
    db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});
    db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});
    db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});

entityRelations
    db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});
    db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});
    db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});
    db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});
    db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
    db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
    db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});
    db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});
    db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
    db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
    db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
    db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});

LookupValues
    db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});
    db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});
    db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});
    db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});
    db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});
    db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});

ErrorLogs
    db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});
    db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});
    db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});
    db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});

batchEntityProcessStatus
    db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});

batchInstance
    - create collection

relationCache
    db.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});

DCRRequests
    db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});
    db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});
    db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});

entityMatchesHistory
    db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});



Connect ENV with Prometheus:

Prometheus config
node_exporter
   - targets:
      - "amraelp00007844.COMPANY.com:9100"
      - "amraelp00007870.COMPANY.com:9100"
      - "amraelp00007847.COMPANY.com:9100"
      - "amraelp00007848.COMPANY.com:9100"
      - "amraelp00007849.COMPANY.com:9100"
      - "amraelp00007871.COMPANY.com:9100"
     labels:
        env: gblus_prod
        component: node

kafka
   - targets:
      - "amraelp00007848.COMPANY.com:9101"
     labels:
        env: gblus_prod
        node: 1
        component: kafka
   - targets:
      - "amraelp00007849.COMPANY.com:9101"
     labels:
        env: gblus_prod
        node: 2
        component: kafka
   - targets:
      - "amraelp00007871.COMPANY.com:9101"
     labels:
        env: gblus_prod
        node: 3
        component: kafka

kafka_exporter
   - targets:
      - "amraelp00007848.COMPANY.com:9102"
     labels:
        trade: gblus
        node: 1
        component: kafka
        env: gblus_prod
   - targets:
      - "amraelp00007849.COMPANY.com:9102"
     labels:
        trade: gblus
        node: 2
        component: kafka
        env: gblus_prod
   - targets:
      - "amraelp00007871.COMPANY.com:9102"
     labels:
        trade: gblus
        node: 3
        component: kafka
        env: gblus_prod

Components:

    jmx_manager
       - targets:
          - "amraelp00007848.COMPANY.com:9104"
         labels:
            env: gblus_prod
            node: 1
            component: manager
       - targets:
          - "amraelp00007849.COMPANY.com:9104"
         labels:
            env: gblus_prod
            node: 2
            component: manager
       - targets:
          - "amraelp00007871.COMPANY.com:9104"
         labels:
            env: gblus_prod
            node: 3
            component: manager

    jmx_event_publisher
       - targets:
          - "amraelp00007848.COMPANY.com:9106"
         labels:
            env: gblus_prod
            node: 1
            component: publisher
       - targets:
          - "amraelp00007849.COMPANY.com:9106"
         labels:
            env: gblus_prod
            node: 2
            component: publisher
       - targets:
          - "amraelp00007871.COMPANY.com:9106"
         labels:
            env: gblus_prod
            node: 3
            component: publisher

    jmx_reltio_subscriber
       - targets:
          - "amraelp00007848.COMPANY.com:9105"
         labels:
            env: gblus_prod
            node: 1
            component: subscriber
       - targets:
          - "amraelp00007849.COMPANY.com:9105"
         labels:
            env: gblus_prod
            node: 2
            component: subscriber
       - targets:
          - "amraelp00007871.COMPANY.com:9105"
         labels:
            env: gblus_prod
            node: 3
            component: subscriber

    jmx_batch_service
       - targets:
          - "amraelp00007848.COMPANY.com:9107"
         labels:
            env: gblus_prod
            node: 1
            component: batch_service
       - targets:
          - "amraelp00007849.COMPANY.com:9107"
         labels:
            env: gblus_prod
            node: 2
            component: batch_service
       - targets:
          - "amraelp00007871.COMPANY.com:9107"
         labels:
            env: gblus_prod
            node: 3
            component: batch_service

    batch_service_actuator
       - targets:
          - "amraelp00007848.COMPANY.com:9116"
         labels:
            env: gblus_prod
            node: 1
            component: batch_service
       - targets:
          - "amraelp00007849.COMPANY.com:9116"
         labels:
            env: gblus_prod
            node: 2
            component: batch_service
       - targets:
          - "amraelp00007871.COMPANY.com:9116"
         labels:
            env: gblus_prod
            node: 3
            component: batch_service

sqs_exporter
   - targets:
      - "amraelp00007871.COMPANY.com:9122"
     labels:
        env: gblus_prod
        component: sqs_exporter

cadvisor
   - targets:
      - "amraelp00007844.COMPANY.com:9103"
     labels:
        env: gblus_prod
        node: 1
        component: cadvisor_exporter
   - targets:
      - "amraelp00007870.COMPANY.com:9103"
     labels:
        env: gblus_prod
        node: 2
        component: cadvisor_exporter
   - targets:
      - "amraelp00007847.COMPANY.com:9103"
     labels:
        env: gblus_prod
        node: 3
        component: cadvisor_exporter
   - targets:
      - "amraelp00007848.COMPANY.com:9103"
     labels:
        env: gblus_prod
        node: 4
        component: cadvisor_exporter
   - targets:
      - "amraelp00007849.COMPANY.com:9103"
     labels:
        env: gblus_prod
        node: 5
        component: cadvisor_exporter
   - targets:
      - "amraelp00007871.COMPANY.com:9103"
     labels:
        env: gblus_prod
        node: 6
        component: cadvisor_exporter

mongodb_exporter
   - targets:
      - "amraelp00007847.COMPANY.com:9120"
     labels:
        env: gblus_prod
        component: mongodb_exporter

kong_exporter
   - targets:
      - "amraelp00007848.COMPANY.com:9542"
     labels:
        env: gblus_prod
        node: 1
        component: kong_exporter
   - targets:
      - "amraelp00007849.COMPANY.com:9542"
     labels:
        env: gblus_prod
        node: 2
        component: kong_exporter
   - targets:
      - "amraelp00007871.COMPANY.com:9542"
     labels:
        env: gblus_prod
        node: 3
        component: kong_exporter







" + }, + { + "title": "Configuration (gblus)", + "pageID": "164470073", + "pageLink": "/pages/viewpage.action?pageId=164470073", + "content": "

Config file: gblmdm-hub-us-spec_v04.xlsx

AWS Resources

Resource Name | Resource Type | Specification | AWS Region | AWS Availability Zone | Depends on | Description | Components: HUB / GW / Interface
GBL MDM US HUB nProd Svr1 amraelp00007334

PFE-AWS-MULTI-AZ-DEV-us-east-1

EC2 | r5.2xlarge | us-east-1b

EBS APP DATA MDM NPROD SVR1


EBS DOCKER DATA MDM NPROD SVR1

- Mongo -  no data redundancy for nProd

- Disks:
    Mount 50G - docker installation directory
    Mount 1000GB - /app/ - docker applications local storage


OS: Red Hat Enterprise Linux Server release 7.3 (Maipo)

mongo
EFK
HUBoutbound

GBL MDM US HUB nProd Svr2 amraelp00007335

PFE-AWS-MULTI-AZ-DEV-us-east-1

EC2 | r5.2xlarge | us-east-1b

EBS APP DATA MDM NPROD SVR2


EBS DOCKER DATA MDM NPROD SVR2

- Kafka and zookeeper
- Kong and Cassandra
- Disks:
    Mount 50G - docker installation directory
    Mount 500GB - /app/ - docker applications local storage


OS: Red Hat Enterprise Linux Server release 7.3 (Maipo)

Kafka
Zookeeper
Kong
Cassandra
GWinbound
EBS APP DATA MDM nProd Svr1 | EBS | 1000 GB XFS | us-east-1b
mount to /app on amraelp00007334


EBS APP DATA MDM nProd Svr2 | EBS | 500 GB XFS | us-east-1b
mount to /app on amraelp00007335


EBS DOCKER DATA MDM nProd Svr1 | EBS | 50 GB XFS | us-east-1b
mount to docker devicemapper on amraelp00007334


EBS DOCKER DATA MDM nProd Svr2 | EBS | 50 GB XFS | us-east-1b
mount to docker devicemapper on amraelp00007335


GBLMDMHUB US S3 Bucket
gblmdmhubnprodamrasp100762
S3
us-east-1





SSL cert for domain gbl-mdm-hub-us-nprod.COMPANY.com | Certificate | Domain: gbl-mdm-hub-us-nprod.COMPANY.com






DNS Record | DNS | Address: gbl-mdm-hub-us-nprod.COMPANY.com







Roles

Name
Type
Privileges
Member of
Description
Requests ID | Provided access
UNIX-IoD-global-mdmhub-us-nprod-computers-U | Unix Computer ROLE | Access to hosts:
GBL MDM US HUB nProd Svr1
GBL MDM US HUB nProd Svr2

Computer role including all MDM servers

UNIX-GBLMDMHUB-US-NPROD-ADMIN-U | User Role | - dzdo root
- access to docker
- access to docker-engine (systemctl) – restart, stop, start docker engine
UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U | Admin role to manage all resources on servers | NSA-UNIX: 20200303065003900

KUCR - GBL32099554i

WARECP - 

GENDEL - GBL32134727i

MORAWM03 - GBL32097468i

UNIX-GBLMDMHUB-US-NPROD-HUBROLE-U | User Role | - Read only for logs
- dzdo docker ps * - list docker container
- dzdo docker logs * - check docker container logs
- Read access to /app/* - check  docker container logs
UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U | role without root access, read only for logs and docker status checks; it will be used by monitoring | NSA-UNIX: 20200303065731900
UNIX-GBLMDMHUB-US-NPROD-SEROLE-U | User Role | - dzdo docker *
UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U | service role - it will be used to run microservices from the Jenkins CD pipeline | NSA-UNIX: 20200303070216948

Service Account - GBL32099918i

mdmusnpr

UNIX-GBLMDMHUB-US-NPROD-READONLY | User Role | - Read only for logs
- Read access to /app/* - check  docker container logs
UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U
NSA-UNIX: 20200303070544951


Ports - Security Group 

PFE-SG-GBLMDMHUB-US-APP-NPROD-001

 

Port         Application                                         Whitelisted
8443         Kong (API proxy)                                    ALL from COMPANY VPN
9094         Kafka - SASL_SSL protocol                           ALL from COMPANY VPN
9093         Kafka - SSL protocol                                ALL from COMPANY VPN
2181         Zookeeper                                           ALL from COMPANY VPN
27017        Mongo                                               ALL from COMPANY VPN
9999         HawtIO - administration console                     ALL from COMPANY VPN
9200         Elasticsearch                                       ALL from COMPANY VPN
5601         Kibana                                              ALL from COMPANY VPN
9100 - 9125  Prometheus exporters                                ALL from COMPANY VPN
9542         Kong exporter                                       ALL from COMPANY VPN
2376         Docker encrypted communication with the daemon      ALL from COMPANY VPN

Open ports between Jenkins and Airflow

Request to Przemek.Puchajda@COMPANY.com and Mateusz.Szewczyk@COMPANY.com - this is required to open ports across the WBS<>IOD blocked traffic (these requests take some time to complete, so submit them at the beginning).

  1. A connection is required from euw1z1dl039.COMPANY.com (●●●●●●●●●●●●●):
     - to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 2376. This connection is between Airflow and the Docker host to run gblus DAGs.
     - to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 22. This connection is between Airflow and the Docker host to run gblus DAGs.

  2. A connection is required from the Jenkins instance (gbinexuscd01 - ●●●●●●●●●●●●●):
     - to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 22. This connection is between Jenkins and the target host, required for code deployment purposes.


Documentation

Service Account ( Jenkins / server access )
http://btondemand.COMPANY.com/solution/160303162657677

NSA - UNIX
- user access to Servers:
http://btondemand.COMPANY.com/solution/131014104610578


Instructions


How to add user access to UNIX-GBLMDMHUB-US-NPROD-ADMIN-U


How to add/create new Service Account with access to UNIX-GBLMDMHUB-US-NPROD-SEROLE-U


Service Account Name: mdmusnpr
UNIX group name: mdmhubusnpr
Details: Service Account Name has to contain max 8 characters
BTOnDemand: GBL32099918i




How to open ports / create new Security Group - PFE-SG-GBLMDMHUB-US-APP-NPROD-001

http://btondemand.COMPANY.com/solution/120906165824277

To create a new security group:

Create server Security Group and Open Ports on  SC queue Name: GBL-BTI-IOD AWS FULL SUPPORT

log in to http://btondemand.COMPANY.com/ go to Get Support 

Search for queue: GBL-BTI-IOD AWS FULL SUPPORT

Submit Request to this queue:

Request

Hi Team,
Could you please create a new security group and assign it to two servers.

GBL MDM US HUB nProd Svr1 (amraelp00007334) - PFE-AWS-MULTI-AZ-DEV-us-east-1
and
GBL MDM US HUB nProd Svr2 (amraelp00007335) - PFE-AWS-MULTI-AZ-DEV-us-east-1


Please add the following owners:
Primary: VARGAA08
Secondary: TIRUMS05
(please let me know if approval is required)


New Security group Requested: PFE-SG-GBLMDMHUB-US-APP-NPROD-001

Please Open the following ports:
Port  Application Whitelisted
8443 Kong (API proxy) ALL from COMPANY VPN
9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN
9093 Kafka - SSL protocol ALL from COMPANY VPN
2181 Zookeeper ALL from COMPANY VPN
27017 Mongo ALL from COMPANY VPN
9999 HawtIO - administration console ALL from COMPANY VPN
9200 Elasticsearch ALL from COMPANY VPN
5601 Kibana ALL from COMPANY VPN
9100 - 9125 Prometheus exporters ALL from COMPANY VPN


Apply this group to the following servers:
amraelp00007334
amraelp00007335

Regards,
Mikolaj


This will create a new Security Group

http://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32141041i

Then these security groups have to be assigned to the servers through the IOD portal by the server owner.

To open new ports:

log in to http://btondemand.COMPANY.com/ go to Get Support 

Search for queue: GBL-BTI-IOD AWS FULL SUPPORT

Submit Request to this queue:

Request

Hi,
Could you please modify the below security group and open the following port.

NONPROD security group:
Security group: PFE-SG-GBLMDMHUB-US-APP-NPROD-001
Port: 2376
(this port is related to Docker for encrypted communication with the daemon)

The hosts related to this:
amraelp00007334
amraelp00007335

Regards,
Mikolaj


Certificates Configuration

Kafka - GBL32139266i  

GO TO: How to Generate JKS Keystore and Truststore

keytool -genkeypair -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=mdm_gbl_us_hub, C=US"
keytool -certreq -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -file kafka.gbl-mdm-hub-us-nprod.COMPANY.com.csr -keystore server.keystore.jks

SAN:

gbl-mdm-hub-us-nprod.COMPANY.com
amraelp00007334.COMPANY.com
●●●●●●●●●●●●
amraelp00007335.COMPANY.com
●●●●●●●●●●●●


Create guest_user for KAFKA - "CN=kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US":

GO TO: How to Generate JKS Keystore and Truststore

keytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_user.keystore.jks -dname "CN=kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US"
keytool -certreq -alias guest_user -file kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com.csr -keystore guest_user.keystore.jks

Kong - GBL32144418i

openssl req -nodes -newkey rsa:2048 -sha256 -keyout gbl-mdm-hub-us-nprod.key -out gbl-mdm-hub-us-nprod.csr

Subject Alternative Names

gbl-mdm-hub-us-nprod.COMPANY.com
amraelp00007334.COMPANY.com
●●●●●●●●●●●●
amraelp00007335.COMPANY.com
●●●●●●●●●●●●
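The recorded `openssl req` command above relies on the SAN list being added during signing. As a sketch (not the recorded command), the SANs can instead be embedded directly in the CSR with `-addext` (OpenSSL 1.1.1+), which makes them easy to verify before submitting the request:

```shell
# Generate a CSR that already carries the Subject Alternative Names listed above.
openssl req -nodes -newkey rsa:2048 -sha256 \
  -keyout example.key -out example.csr \
  -subj "/CN=gbl-mdm-hub-us-nprod.COMPANY.com/O=COMPANY/C=US" \
  -addext "subjectAltName=DNS:gbl-mdm-hub-us-nprod.COMPANY.com,DNS:amraelp00007334.COMPANY.com,DNS:amraelp00007335.COMPANY.com"

# Confirm the SAN list is present in the CSR before submitting it for signing.
openssl req -in example.csr -noout -text | grep -A1 "Subject Alternative Name"
```

Whether the signing CA honors CSR-embedded SANs depends on its policy; the verification step costs nothing either way.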


EFK - GBL32139762i  , GBL32144243i

openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-log-management-gbl-us-nonprod.key -out mdm-log-management-gbl-us-nonprod.csr
mdm-log-management-gbl-us-nonprod.COMPANY.com

Subject Alternative Names
mdm-log-management-gbl-us-nonprod.COMPANY.com
gbl-mdm-hub-us-nprod.COMPANY.com
amraelp00007334.COMPANY.com
●●●●●●●●●●●●
amraelp00007335.COMPANY.com
●●●●●●●●●●●●


openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode1-gbl-us-nonprod.key -out mdm-esnode1-gbl-us-nonprod.csr
mdm-esnode1-gbl-us-nonprod.COMPANY.com - Elasticsearch

Subject Alternative Names
mdm-esnode1-gbl-us-nonprod.COMPANY.com
gbl-mdm-hub-us-nprod.COMPANY.com
amraelp00007334.COMPANY.com
●●●●●●●●●●●●
amraelp00007335.COMPANY.com
●●●●●●●●●●●●


Domain Configuration:

Example request: GBL30514754i - "Register domains mdm-log-management*"


  1. log in to http://btondemand.COMPANY.com/getsupport
  2. What can we help you with? - Search for "Network Team Ticket"
  3. Select the most relevant topic - "DNS Request"
  4. Submit a ticket to this queue.
  5. Ticket Details:

Request

Hi,
Could you please register the following domains:

ADD the below DNS entry:
========================
mdm-log-management-gbl-us-nonprod.COMPANY.com              Alias Record to                             amraelp00007334.COMPANY.com[●●●●●●●●●●●●]
gbl-mdm-hub-us-nprod.COMPANY.com                                        Alias Record to                             amraelp00007335.COMPANY.com[●●●●●●●●●●●●]


Kind regards,
Mikolaj





Environment Installation


Pre:

rmdir /var/lib/docker
ln -s /app/docker /var/lib/docker

umount /var/lib/docker
lvremove /dev/datavg/varlibdocker
vgreduce datavg /dev/nvme1n1

Clear the content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" so that the daemon.json file is used


Ansible:

ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file

ansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file


ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file

ansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file


ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file

ansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file


copy daemon_docker_tls_overlay.json.j2 to /etc/docker/daemon.json

FIX using - https://stackoverflow.com/questions/44052054/unable-to-start-docker-after-configuring-hosts-in-daemon-json

$ sudo cp /lib/systemd/system/docker.service /etc/systemd/system/
$ sudo sed -i 's/\ -H\ fd:\/\///g' /etc/systemd/system/docker.service
$ sudo systemctl daemon-reload
$ sudo service docker restart


Docker Version:

amraelp00007334:root:[10:10 AM]:/app> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007335:root:[10:04 AM]:/app> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

[root@amraelp00008810 docker]# docker --version
Docker version 19.03.13-ce, build 4484c46


Configure Registry Login (registry-gbicomcloud.COMPANY.com):

ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●●● root access
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●● service account
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file

Registry (manual config):
  Copy certs: /etc/docker/certs.d/registry-gbicomcloud.COMPANY.com/ from (mdm-reltio-handler-env\ssl_certs\registry)
  docker login registry-gbicomcloud.COMPANY.com (log in on the service account too)
  user/pass: mdm/**** (check mdm-reltio-handler-env\group_vars\all\secret.yml)


Playbooks installation order:

Install node_exporter:
    ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
    ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file

Install Kafka
  ansible-playbook install_hub_broker.yml -i inventory/dev_gblus/inventory --limit broker --vault-password-file=~/vault-password-file

Install Mongo
  ansible-playbook install_hub_db.yml -i inventory/dev_gblus/inventory --limit mongo --vault-password-file=~/vault-password-file

Install Kong
  ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file

Update KONG Config (IT NEEDS TO BE UPDATED ON EACH ENV (DEV, QA, STAGE)!!)
  ansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file
  Verification:
    openssl s_client -connect amraelp00007335.COMPANY.com:8443 -servername gbl-mdm-hub-us-nprod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer

Install EFK
  ansible-playbook install_efk_stack.yml -i inventory/dev_gblus/inventory --limit efk --vault-password-file=~/vault-password-file

Install Fluentd Forwarder (without this, docker logging may not work and docker commands will be blocked)
  ansible-playbook install_fluentd_forwarder.yml -i inventory/dev_gblus/inventory --limit docker-services --vault-password-file=~/vault-password-file

Install Prometheus services:
  mongo_exporter:
    ansible-playbook install_prometheus_mongo_exporter.yml -i inventory/dev_gblus/inventory --limit mongo_exporter1 --vault-password-file=~/vault-password-file
  cadvisor:
    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file
  sqs_exporter:
    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
    ansible-playbook install_prometheus_stack.yml -i inventory/stage_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
    ansible-playbook install_prometheus_stack.yml -i inventory/qa_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file

Install Consul 
ansible-playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file
# After this operation, get the SecretID from the consul container. On the container, execute the following command:

$ consul acl bootstrap

and copy it as mgmt_token to consul secrets.yml
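`consul acl bootstrap` prints an AccessorID/SecretID pair; this sketch only shows extracting the SecretID from sample output (the UUIDs below are made up, and consul itself is not invoked here):

```shell
# Sample output in the shape `consul acl bootstrap` produces; the real
# command must be run on the consul container as described above.
sample='AccessorID:   b3a2d1c0-0000-1111-2222-333344445555
SecretID:     4411829c-0b43-1111-2222-333344445555'

# Pull out the SecretID value - this is what goes into secrets.yml as mgmt_token.
mgmt_token=$(printf '%s\n' "$sample" | awk '/^SecretID/ {print $2}')
echo "$mgmt_token"   # 4411829c-0b43-1111-2222-333344445555
```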

After the Consul install step, run the update playbook.
Update Consul
ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v


Setup Mongo Indexes and Collections:

Create Collections and Indexes
    entityHistory
        db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
        db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
        db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});
        db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});
        db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
        db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
        db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
        db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
        db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
        db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});
        db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});
        db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});

    entityRelations
        db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});
        db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});
        db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});
        db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});
        db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
        db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
        db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});
        db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});
        db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
        db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
        db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
        db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});

    LookupValues
        db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});
        db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});
        db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});
        db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});
        db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});
        db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});

    ErrorLogs
        db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});
        db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});
        db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});
        db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});

    batchEntityProcessStatus
        db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});

    batchInstance
        - create collection

    relationCache
        db.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});

    DCRRequests
        db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});
        db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});
        db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});

    entityMatchesHistory
        db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});



Connect ENV with Prometheus:

Update config -  ansible-playbook install_prometheus_configuration.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file

Prometheus config
node_exporter
  - targets:
      - "amraelp00007334.COMPANY.com:9100"
      - "amraelp00007335.COMPANY.com:9100"
    labels:
      env: gblus_dev
      component: node

kafka
  - targets:
      - "amraelp00007335.COMPANY.com:9101"
    labels:
      env: gblus_dev
      node: 1
      component: kafka

kafka_exporter
  - targets:
      - "amraelp00007335.COMPANY.com:9102"
    labels:
      trade: gblus
      node: 1
      component: kafka
      env: gblus_dev

Components:

  jmx_manager
    - targets:
        - "amraelp00007335.COMPANY.com:9104"
      labels:
        env: gblus_dev
        node: 1
        component: manager
    - targets:
        - "amraelp00007335.COMPANY.com:9108"
      labels:
        env: gblus_qa
        node: 1
        component: manager
    - targets:
        - "amraelp00007335.COMPANY.com:9112"
      labels:
        env: gblus_stage
        node: 1
        component: manager

  jmx_event_publisher
    - targets:
        - "amraelp00007334.COMPANY.com:9106"
      labels:
        env: gblus_dev
        node: 1
        component: publisher
    - targets:
        - "amraelp00007334.COMPANY.com:9110"
      labels:
        env: gblus_qa
        node: 1
        component: publisher
    - targets:
        - "amraelp00007334.COMPANY.com:9104"
      labels:
        env: gblus_stage
        node: 1
        component: publisher

  jmx_reltio_subscriber
    - targets:
        - "amraelp00007334.COMPANY.com:9105"
      labels:
        env: gblus_dev
        node: 1
        component: subscriber
    - targets:
        - "amraelp00007334.COMPANY.com:9109"
      labels:
        env: gblus_qa
        node: 1
        component: subscriber
    - targets:
        - "amraelp00007334.COMPANY.com:9113"
      labels:
        env: gblus_stage
        node: 1
        component: subscriber

  jmx_batch_service
    - targets:
        - "amraelp00007335.COMPANY.com:9107"
      labels:
        env: gblus_dev
        node: 1
        component: batch_service
    - targets:
        - "amraelp00007335.COMPANY.com:9111"
      labels:
        env: gblus_qa
        node: 1
        component: batch_service
    - targets:
        - "amraelp00007335.COMPANY.com:9115"
      labels:
        env: gblus_stage
        node: 1
        component: batch_service

sqs_exporter
  - targets:
      - "amraelp00007334.COMPANY.com:9122"
    labels:
      env: gblus_dev
      component: sqs_exporter
  - targets:
      - "amraelp00007334.COMPANY.com:9123"
    labels:
      env: gblus_qa
      component: sqs_exporter
  - targets:
      - "amraelp00007334.COMPANY.com:9124"
    labels:
      env: gblus_stage
      component: sqs_exporter

cadvisor
  - targets:
      - "amraelp00007334.COMPANY.com:9103"
    labels:
      env: gblus_dev
      node: 1
      component: cadvisor_exporter
  - targets:
      - "amraelp00007335.COMPANY.com:9103"
    labels:
      env: gblus_dev
      node: 2
      component: cadvisor_exporter

mongodb_exporter
  - targets:
      - "amraelp00007334.COMPANY.com:9120"
    labels:
      env: gblus_dev
      component: mongodb_exporter

kong_exporter
  - targets:
      - "amraelp00007335.COMPANY.com:9542"
    labels:
      env: gblus_dev
      component: kong_exporter









" + }, + { + "title": "Getting access to PDKS Rancher and Kubernetes clusters", + "pageID": "259433725", + "pageLink": "/display/GMDM/Getting+access+to+PDKS+Rancher+and+Kubernetes+clusters", + "content": "
  1. Go to https://requestmanager.COMPANY.com/#/
  2. Search nsa-unix and select first link (NSA-UNIX)
  3. You will see the form for requesting access, which should be filled in as in the example below:


Do you need to be added to any Role Groups? YES

Do you need privileged access to specific Servers in a Role Group? NO

Please provide the Server Location: Not applicable

NIS Domain: Other 

Add to Role Group(s) UNIX-GBLMDMHUB-US-PROD-ADMIN-U or UNIX-GBLMDMHUB-US-NPROD-ADMIN-U (depends on an environment)

Please provide information about Account Privileges: Add Privileges  

Please choose the Type of Privilege to Add: 

Please provide the UNIX Group Name:  UNIX-GBLMDMHUB-US-PROD-COMPUTERS-U or UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U


Please provide a brief Business Justification:

For prod:

atp-mdmhub-prod-amer
atp-mdmhub-prod-emea
atp-mdmhub-prod-apac

PDKS EKS clusters regarding project BoldMove.


For nprod:

atp-mdmhub-nprod-amer
atp-mdmhub-nprod-emea
atp-mdmhub-nprod-apac

PDKS EKS clusters regarding project BoldMove.


Comments or Special Instructions:  

I am creating this request to have access to the Global MDM HUB prod clusters.


\"\"\"\"



" + }, + { + "title": "UI:", + "pageID": "308256633", + "pageLink": "/pages/viewpage.action?pageId=308256633", + "content": "" + }, + { + "title": "Add new role and add users to the UI", + "pageID": "308256635", + "pageLink": "/display/GMDM/Add+new+role+and+add+users+to+the+UI", + "content": "

MDM HUB UI roles standards:

Here is the role standard that has to be used to grant specific users access to the UI:

Environments


             NON-PROD           PROD
             DEV   QA   STAGE   PROD
GBL           *     *     *      *
EMEA          *     *     *      *
AMER          *     *     *      *
APAC          *     *     *      *
GBLUS         *     *     *      *
ALL           *     *     *      *

Use the 'ALL' keyword in combination with 'NON-PROD' and 'PROD'; this approach produces only 2 roles for the system.

Role Schema:

<prefix>_<tenant>_<system name>_<application>_<environment>_<system>_<suffix>

<prefix> - COMM
<tenant> - ALL or GBL/AMER/EMEA etc. (recommendation is ALL)
<system name> - MDMHUB
<application> - UI
<environment> - PROD / NON-PROD, or specific based on the table above
<system> - HUB_ADMIN / PTRS etc. Important: the <system> name has to be in sync with the HUB configuration users in e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users
<suffix> - ROLE
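The schema above is just string concatenation with underscores; a trivial shell sketch (field values illustrative) makes the composition explicit:

```shell
# Compose a role name from the schema fields defined above.
prefix=COMM; tenant=ALL; system_name=MDMHUB; application=UI
environment=NON-PROD; system=PTRS; suffix=ROLE
role="${prefix}_${tenant}_${system_name}_${application}_${environment}_${system}_${suffix}"
echo "$role"   # COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE
```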


example roles:

HUB ADMIN → COMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLE - HUB UI group for hub-admin users - access to all clusters, and non-prod environments.

HUB ADMIN → COMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE - HUB UI group for hub-admin users - access to all clusters, and prod environments.

PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.

PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and prod environments.

The system is the user name used in HUB. All users related to the specific system can have access to the specific role.


For example, if someone from the PTRS system wants to have access to the UI, here is how to process such a request:


  1. Add user to existing UI role
    1. Go to https://requestmanager1.COMPANY.com/Group/Default.aspx
    2. search a group:
    3. \"\"
    4. If a role is found in search results you can check current members or request a new member
    5. add a new user:
    6. \"\"
    7. save
    8. go to Cart https://requestmanager1.COMPANY.com/group/Review.aspx
    9. and submit the request.
  2. If the role does not exist:
    1. First, create a new role:
      1. click Create a NEW Security Group
      2. https://requestmanager1.COMPANY.com/group/Create.aspx?type=sec
      3. \"\"
      4. region -EMEA
      5. name - the name of a group 
      6. primary owner - AJ
      7. secondary owner  - Mikołaj Morawski
      8. Description - e.g. HUB UI group for hub-admin users - access to all clusters, and prod environments.
      9. now you can add users to this group
    2. Second, configure roles and access to the user in HUB:
      1. Important: <system> name has to be in sync with HUB configuration users in http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users 
      2. Users can have access to the following roles and APIs:
        1. https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html
      1. Add roles and topics to the user:
        1. .e.g: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users/ptrs.yaml
          1. Put "kafka" section with specific kafka topics:
          2. Add the mdm_admin section with specific roles and access to topics:
            1. e.g. 
            2.     mdm_admin:
                    reconciliationTargets:
                      - emea-dev-out-full-ptrs-eu
                      - emea-dev-out-full-ptrs-global2
                      - emea-qa-out-full-ptrs-eu
                      - emea-qa-out-full-ptrs-global2
                      - emea-stag-out-full-ptrs-eu
                      - emea-stag-out-full-ptrs-global2
                      - gbl-dev-out-full-ptrs
                      - gbl-dev-out-full-ptrs-eu
                      - gbl-dev-out-full-ptrs-porind
                      - gbl-qa-out-full-ptrs-eu
                      - gbl-stage-out-full-ptrs
                      - gbl-stage-out-full-ptrs-eu
                      - gbl-stage-out-full-ptrs-porind
                    sources:
                      - ALL
                    countries:
                      - ALL
                    roles: &roles
                      - MODIFY_KAFKA_OFFSET
                      - RESEND_KAFKA_EVENT
                    kafka: *kafka
          3. REMEMBER TO ADD the mdm_auth section; this enables UI access.
            1. Without this section the UI will not show HUB Admin tools! 
            2. mdm_auth:
              roles: *roles
          4. The mdm_auth section and the roles listed there mean the user will only see 2 pages in the UI - in this case, the pages for MODIFY_KAFKA_OFFSET and RESEND_KAFKA_EVENT.
      2. When the roles and users are configured on the HUB end go to the first step and add selected users to the selected roles.
      3. From this point on, any new (e.g. PTRS) user can be added to COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE and will be able to log in to the UI, see the pages, and use the API through the UI.
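Assembled from the snippets above, a complete users file (e.g. ptrs.yaml) might look like the sketch below; the topic names are illustrative and must match the real entries in the mdm-hub-cluster-env repo:

```yaml
# Minimal sketch of a users file - align topic names with the actual repo.
kafka: &kafka
  - emea-dev-out-full-ptrs-eu

mdm_admin:
  reconciliationTargets:
    - emea-dev-out-full-ptrs-eu
  sources:
    - ALL
  countries:
    - ALL
  roles: &roles
    - MODIFY_KAFKA_OFFSET
    - RESEND_KAFKA_EVENT
  kafka: *kafka

# Without this section the UI will not show the HUB Admin tools.
mdm_auth:
  roles: *roles
```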




" + }, + { + "title": "Current users and roles", + "pageID": "347636361", + "pageLink": "/display/GMDM/Current+users+and+roles", + "content": "
Environment | Client | Cluster | Role | COMPANY Users | HUB internal user
NON-PROD | MDMHUB | ALL | COMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLE

ALL HUB Team Members 

+

Andrew.J.Varganin@COMPANY.com

Nishith.Trivedi@COMPANY.com


e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/users/hub_admin.yaml
PROD | MDMHUB | ALL | COMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE

ALL HUB Team Members

+

Andrew.J.Varganin@COMPANY.com

Nishith.Trivedi@COMPANY.com

e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/users/hub_admin.yaml
NON-PROD | MDMETL | ALL | COMM_ALL_MDMHUB_UI_NON-PROD_MDMETL_ADMIN_ROLE

Anurag.Choudhary@COMPANY.com
Shikha@COMPANY.com
Raghav.Gupta@COMPANY.com
Khushboo.Bharti@COMPANY.com
Manisha.Kansal@COMPANY.com
Ajit.Tiwari@COMPANY.com
Sayak.Acharya@COMPANY.com
Jeevitha.R@COMPANY.com
Priya.Suthar@COMPANY.com
Joymalya.Bhattacharya@COMPANY.com
Chinthamani.Kalebu@COMPANY.com
Arindam.Roy2@COMPANY.com
NarendraSingh.Chouhan@COMPANY.com
Adrita.Sarkar@COMPANY.com
Manish.Panda@COMPANY.com
Meghana.Das@COMPANY.com
Hanae.Laroussi@COMPANY.com
Somil.Sethi@COMPANY.com
Shivani.Jha@COMPANY.com
Pradnya.Raikar@COMPANY.com
KOMAL.MANTRI@COMPANY.com
Absar.Ahsan@COMPANY.com
Asmita.Datta@COMPANY.com

e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/users/mdmetl_admin.yaml
PROD | MDMETL | ALL | COMM_ALL_MDMHUB_UI_PROD_MDMETL_ADMIN_ROLE

Anurag.Choudhary@COMPANY.com
Shikha@COMPANY.com
Raghav.Gupta@COMPANY.com
Khushboo.Bharti@COMPANY.com
Manisha.Kansal@COMPANY.com
Ajit.Tiwari@COMPANY.com
Sayak.Acharya@COMPANY.com
Jeevitha.R@COMPANY.com
Priya.Suthar@COMPANY.com
Joymalya.Bhattacharya@COMPANY.com
Chinthamani.Kalebu@COMPANY.com
Arindam.Roy2@COMPANY.com
NarendraSingh.Chouhan@COMPANY.com
Manish.Panda@COMPANY.com
Meghana.Das@COMPANY.com
Hanae.Laroussi@COMPANY.com
Somil.Sethi@COMPANY.com
Shivani.Jha@COMPANY.com
Pradnya.Raikar@COMPANY.com
KOMAL.MANTRI@COMPANY.com
Asmita.Datta@COMPANY.com

e.g. https://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/users/mdmetl_admin.yaml
NON-PROD | PTRS | ALL | COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE
sagar.bodala@COMPANY.com
Aishwarya.Shrivastava@COMPANY.com
Tanika.Das@COMPANY.com
Rishabh.Singh@COMPANY.com
Bhushan.Shanbhag@COMPANY.com
Hasibul.Mallik@COMPANY.com
AbhinavMishra.Mishra@COMPANY.com
Asmita.Mishra@COMPANY.com
Prema.NayagiGS@COMPANY.com
e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users/ptrs.yaml
PROD | PTRS | ALL | COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE
sagar.bodala@COMPANY.com
Aishwarya.Shrivastava@COMPANY.com
Tanika.Das@COMPANY.com
Rishabh.Singh@COMPANY.com
Bhushan.Shanbhag@COMPANY.com
Hasibul.Mallik@COMPANY.com
AbhinavMishra.Mishra@COMPANY.com
Asmita.Mishra@COMPANY.com
Prema.NayagiGS@COMPANY.com

e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/users/ptrs.yaml


NON-PROD | COMPANY | ALL | COMM_ALL_MDMHUB_UI_NON-PROD_COMPANY_ROLE
navaneel.ghosh@COMPANY.com

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1707/diff#amer/nprod/users/COMPANY.yml

PROD | COMPANY | ALL | COMM_ALL_MDMHUB_UI_PROD_COMPANY_ROLE | navaneel.ghosh@COMPANY.com

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1707/diff#amer/nprod/users/COMPANY.yml

" + }, + { + "title": "SSO and roles", + "pageID": "322564881", + "pageLink": "/display/GMDM/SSO+and+roles", + "content": "

To log in to the UI dashboard you have to be in the COMPANY network. SSO authorization is performed via SAML, using COMPANY PingFederate.


Auth flow

\"\"


SSO login


\"\"


SAML login role

After successful authentication with SAML we receive the user's roles from Active Directory (Group Manager - distribution list).

Then we decode the roles using the following regexp:

COMM_(?<tenant>[A-Z]+)_MDMHUB_UI_(?<environment>NON-PROD|PROD)_(?<system>.+)_ROLE

When a role matches the environment and tenant, we resolve the user's roles by searching for the system in the user configuration.
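The decoding step above can be sketched in shell; a minimal sketch of the same regexp (bash has no named groups, so numbered capture groups stand in for tenant, environment and system):

```shell
# Decode a SAML/AD role name into tenant, environment and system,
# mirroring the COMM_..._ROLE pattern documented above.
decode_role() {
  local role="$1"
  if [[ "$role" =~ ^COMM_([A-Z]+)_MDMHUB_UI_(NON-PROD|PROD)_(.+)_ROLE$ ]]; then
    echo "tenant=${BASH_REMATCH[1]} environment=${BASH_REMATCH[2]} system=${BASH_REMATCH[3]}"
  else
    echo "no match" >&2
    return 1
  fi
}

decode_role "COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE"
# prints: tenant=ALL environment=NON-PROD system=PTRS
```

Roles that do not carry the `_UI_` segment (e.g. the Kibana or Grafana groups below) intentionally do not match this pattern.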


Backend AD groups

Service / NPROD Group / PROD Group

Kibana:
- NPROD: COMM_ALL_MDMHUB_KIBANA_NON-PROD_ADMIN_ROLE, COMM_ALL_MDMHUB_KIBANA_NON-PROD_VIEWER_ROLE
- PROD: COMM_ALL_MDMHUB_KIBANA_PROD_ADMIN_ROLE, COMM_ALL_MDMHUB_KIBANA_PROD_VIEWER_ROLE

Grafana:
- PROD: COMM_ALL_MDMHUB_GRAFANA_PROD_ADMIN_ROLE, COMM_ALL_MDMHUB_GRAFANA_PROD_VIEWER_ROLE

Akhq:
- NPROD: COMM_ALL_MDMHUB_KAFKA_NON-PROD_ADMIN_ROLE, COMM_ALL_MDMHUB_KAFKA_NON-PROD_VIEWER_ROLE
- PROD: COMM_ALL_MDMHUB_KAFKA_PROD_ADMIN_ROLE, COMM_ALL_MDMHUB_KAFKA_PROD_VIEWER_ROLE

Monitoring (this group aggregates users responsible for monitoring of MDMHUB):
- NPROD: COMM_ALL_MDMHUB_ALL_NON-PROD_MON_ROLE
- PROD: COMM_ALL_MDMHUB_ALL_PROD_MON_ROLE

Airflow:
- NPROD: COMM_ALL_MDMHUB_AIRFLOW_NON-PROD_ADMIN_ROLE, COMM_ALL_MDMHUB_AIRFLOW_NON-PROD_VIEWER_ROLE
- PROD: COMM_ALL_MDMHUB_AIRFLOW_PROD_ADMIN_ROLE, COMM_ALL_MDMHUB_AIRFLOW_PROD_VIEWER_ROLE












" + }, + { + "title": "UI Connect Guide", + "pageID": "322540727", + "pageLink": "/display/GMDM/UI+Connect+Guide", + "content": "

Log in to UI and switch Tenants

  1. To log in to UI please use the following link: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-dev
  2. Log in to UI using your COMPANY credentials:
    1. \"\"
  3. There is no need to know each UI address, you can easily switch between Tenants using the following link (available on the TOP RIGHT corner in UI near the USERNAME):
    1. \"\"


What pages are available with the default VIEW role

By default you are logged in with the VIEW role; the following pages are available:

\"\"

  1. HUB Status

    1. You can use the HUB Dashboard main page that contains HUB platform status: Event processing details, Snowflake refresh time, started batches and ETA to load data to Reltio or get Events from Reltio.
  2. Ingestion Services Configuration

    1. This page contains the documentation related to the Data Quality checks, Source Match Categorization, Cleansing & Formatting, Auto-Fills, and Minimum Viable Profile Checks.
    2. You can choose a filter to switch between different entity types and use input boxes to filter results.
    3. You can use the 'Category' filter to include the operations that you are interested in
    4. You can use the 'Query' filter and put any text to find what you are looking for (e.g. 'prefix' to find rules with prefix word)
    5. You can use the 'Date' filter to find rules created or updated after a specific time - now using this filter you can easily find the rules added after data reload and reload data one more time to reflect changes. 
    6. This page also contains documentation related to duplicate identifiers and noise lists.
    7. You can choose a filter to switch between different entity types and use input boxes to filter results
  3. Ingestion Services Tester

    1. This page contains the JSON tester: paste the input JSON and click the 'Test' button to check the output JSON with all rules applied.
    2. Click 'Difference' to see only the changed sections.
    3. Click 'Validation result' to see the rules that were executed.

More details here: HUB UI User Guide

What operations are available in the UI

As a user, you can request access to the technical operations in HUB. The details on how to access more operations are described in the section below.

Here you will get to know the different UI operations and what can be done using these operations:

HUB Admin allows you to:

\"\"


  1. Kafka Offset

    1. Technical operation
    2. On this page user can modify Kafka offset on specific consumer group
    3. A system/user with access to this page will be allowed to maintain the consumer group offset and change it to:
      1. latest
      2. earliest
      3. specific date time
      4. shift by a specific number of events.
  2. HUB Reconciliation

    1. Technical operation
    2. Used internally by HUB Team.
    3. This operation allows us to mimic Reltio events generation - this operation generates the events to the input HUB topic so that we can reprocess the events.
    4. You can use this page to generate events by:
      1. providing an input array with entity/relation URIs
      2. or
      3. providing the query and selecting the source/market that you want to reprocess.
  3. Kafka Republish Events

    1. Technical operation
    2. This operation can be used to generate events for your Kafka topic
    3. Use case - you are consuming data from HUB and you want to test something on non-prod environments and consume events for a specific market one more time. You want to receive 1000 events for France market for your testing.
    4. You can use this page to generate events for the target topic:
      1. Specify the Countries/Sources/Limits/Dates and Target Reconciliation topic - as a result, you will receive the events.
  4. Reltio Reindex

    1. Technical operation
    2. This operation executes the Reltio Reindexing operation
    3. You can use this page to generate events by:
      1. providing the query and selecting the source/market that you want to reprocess,
      2. or
      3. providing the input file with entity/relation URIs that will be sent to the Reltio API.
  5. Merge/Unmerge Entities

    1. Business operation
    2. This operation consumes the input file and executes the merge/unmerge operations in Reltio
    3. More details about the file and process are described here: Batch merge & unmerge
  6. Update Identifiers

    1. Business operation
    2. This operation consumes the input file and executes the update identifiers operation in Reltio
    3. More details about the file and process are described here: Batch update identifiers
  7. Clear Cache

    1. Business operation
    2. Clear ETL Batch Cache
    3. More details about the file and process are described here: Batch clear ETL data load cache
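The offset maintenance the Kafka Offset page performs (item 1 above) corresponds to what Kafka's stock `kafka-consumer-groups.sh` CLI does; a minimal sketch that only assembles the equivalent reset command (the broker address, group and topic names are placeholders, not actual HUB values):

```shell
# Build (but do not run) a kafka-consumer-groups.sh reset command.
# $1 = consumer group, $2 = topic, $3 = reset spec, e.g. --to-earliest,
# --to-latest, "--to-datetime <ISO8601>" or "--shift-by <N>".
build_offset_reset() {
  echo "kafka-consumer-groups.sh --bootstrap-server <broker:9092>" \
       "--group $1 --topic $2 --reset-offsets $3 --execute"
}

build_offset_reset my-consumer-group dev-internal-topic --to-earliest
```

In practice the UI performs these changes for you; the sketch is only to show which kind of offset movement each option maps to.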

How to request additional access to new operations

Please send the following email to the HUB DL: DL-ATP_MDMHUB_SUPPORT@COMPANY.com

Subject:

HUB UI - Access request for <user-name/system-name>

Body:

Please provide the access / update the existing access for <user-name/system-name> to HUB Admin operations.

ID

Details

Comments:

1

Action needed


Add user to the HUB UI

Edit user in the HUB UI (please provide the existing group name)

<any other>

2

Tenant


GBL, EMEA, AMER, GBLUS, APAC/ALL

Tenant - more details in Environments

By default please select ALL Tenants; if you need access only to a specific one, select it here.

3

Environments


 PROD / NON-PROD  or specific: DEV/QA/STAGE/PROD

By default please select both PROD and NON-PROD environments; if you need access only to a specific one, select it here.

4

Permissions range


Choose the operation:

Kafka Offset

HUB Reconciliation

Kafka Republish Events

Reltio Reindex

Merge/Unmerge Entities

Update Identifiers

Clear Cache

5

COMPANY Team


ETL/COMPANY or DSR or Change Management etc.

6

Business justification


Needs access to execute merge unmerge operation in EMEA/AMER/APAC PROD Reltio

7

Point of contact


If you are from the system please provide the DL email and system details.

8

Sources


<optional - list of sources to which the user should have access>

required in Events/Reindex/Reconciliation operations

9

Countries


<optional - list of countries to which the user should have access>

required in Events/Reindex/Reconciliation operations

The request will be processed after Andrew.J.Varganin@COMPANY.com approval. 


In the response, you will receive the Group Name. Please use this for future reference.

e.g. the PTRS system roles used to manage UI operations:

   PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.

   PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and prod environments.

HUB Team will use the following SOP to add you to a selected role: Add a new role and add users to the UI

Get Help

In case of any questions, the GetHelp page or full HUB documentation is available here (UI page footer):

\"\"

GetHelp

Welcome to the Global MDM Home!




" + }, + { + "title": "Users:", + "pageID": "302705550", + "pageLink": "/pages/viewpage.action?pageId=302705550", + "content": "" + }, + { + "title": "Add Direct API User to HUB", + "pageID": "273694347", + "pageLink": "/display/GMDM/Add+Direct+API+User+to+HUB", + "content": "

To add a new user to the MDM HUB direct API a few steps must be done. This document describes what activities must be fulfilled and who is responsible for them.

Create PingFederate user - client's responsibility 

If the client's authentication method is oauth2 then a PingFederate user must be created.

To add a user you must have a Ping Federate user created: How to Request PingFederate (PXED) External OAuth 2.0 Account 

Caution: If the authentication method is key auth then the HUB Team generates the key and sends it in a secure way to the client.


Send a request to MDM HUB that contains all necessary data - client's responsibility 

Send a request to create a new user with direct API access to HUB Team: dl-atp_mdmhub_support@COMPANY.com

The request must contain the following:



1

Action needed

2

PingFederate username

3

Countries

4

Tenant

5

Environments

6

Permissions range

7

Sources

8

Business justification

9

Point of contact

10

Gateway

Description

  1. Action needed – this is a place where you decide if you want to create a new user or modify the existing one.
  2. PingFederate username – you need to create a user on the PingFederate side. Its username is crucial to authenticate on the HUB side. If you do not have a PingFederate user please check: https://confluence.COMPANY.com/display/GMDM/How+to+request+PingFederate+%28PXED%29+external+OAuth+2.0+account
  3. Countries - list of countries that access to will be granted
  4. Tenant – a tenant or list of tenants where the user will be created. Note that if you connect from the open internet, only EMEA is possible. If you have a local application aligned to a Reltio region, it is recommended to request a local tenant. If you have a global solution you can call EMEA and your requests will be routed by HUB.
  5. Environments – list of environment instances – DEV/QA/STG/PROD
  6. Permissions range – do you need to write or read/write? To which entities do you need access? HCO/HCP/MCO
  7. Sources – to which sources do you need to have access?
  8. Business justification – please describe
    1. Why do you have a connection with HUB?
    2. Why the user must be created/modified?
    3. What’s the project name?
    4. Who’s the project manager?
  9. Point of contact – please add a DL group name - in case of any issues connected with that user
  10. Which API you want to call: EMEA, AMER, APAC,etc

Prepare new user on MDM HUB side - HUB Team Responsibility 

  1. Store clients' request in dedicated confluence space: Clients
  2. In the COMPANY tenants, there is a need to connect the new user with API Router directly.
  3. Change API router configuration, and add a new user with:
    1. user PingFederate name or when the user uses key auth add API key to secrets.yaml
    2. sources
    3. countries
    4. roles
  4. Change Manager configuration, add
    1. sources
    2. countries
  5. Change DCR service configuration - if applicable
    1. dcrServiceConfig-  initTrackingDetailsStatus, initTrackingDetail, dcrType
    2. roles - CREATE_DCR, GET_DCR
  6. You need to check how the request will be routed. If there is a need to make a routing configuration, follow these steps:
    1. change API Router configuration by adding new countries to proper tenants
    2. change Manager configuration in destinated tenant by adding
      1. sources
      2. countries


" + }, + { + "title": "Add External User to MDM Hub", + "pageID": "164470196", + "pageLink": "/display/GMDM/Add+External+User+to+MDM+Hub", + "content": "

Kong configuration

  1. First, you need to have the user logins from Ping Federate for every env
  2. Go to the folder inventory/{{ kong_env }}/group_vars/kong_v1 in the repository mdm-hub-env-config
    Find the PLUGINS section in the file kong_{{ env }}.yml and then the rule named mdm-external-oauth
    1. in this section find "users_map"
    2. add there new entry with following rule:

      \n
      - "<user_name_from_ping_federate>:<user_name_in_mdm_hub>"
      \n
    3. change False to True in create_or_update setting for this rule

      \n
      create_or_update: True
      \n

      Repeat these steps (a-c) for every environment {{ env }} you want to apply changes to (e.g., dev, qa, stage)

      {{ kong_env }} - environment on which kong instance is deployed

      {{ env }} - environment on which MDM Hub instance is deployed

      kong_env: env
      dev: dev, mapp, stage
      prod: prod
      dev_gblus: dev_gblus, qa_gblus, stage_gblus
      prod_gblus: prod_gblus
      dev_us: dev_us
      prod_us: prod_us
  3. Go to folder inventory/{{ env }}/group_vars/gw-services

    In the file gw_users.yml add a section with the new user after the last added user, and specify the roles and sources needed for this user. E.g.,

    User configuration
    \n
    - name: "<user_name_in_mdm_hub>"
      description: "<Some description>"
      defaultClient: "ReltioAll"
      getEntityUsesMongoCache: yes
      lookupsUseMongoCache: yes
      roles:
        - <specify_only_roles_that_are_required_for_this_user>
      countries:
        - US
      sources:
        - <specify_only_sources_needed_by_this_user>
    \n

    Repeat this step for every environment {{ env }} you want to apply changes to (e.g., dev, qa, stage)

  4. After the configuration changes you need to update Kong using the following command
    1. for nonprod gblus envs

      GBLUS NPROD - kong update
      \n
      ansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/ansible.secret
      \n
    2. for prod gblus env

      GBLUS PROD - kong update
      \n
      ansible-playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/ansible.secret
      \n
    3. for nprod gbl envs

      GBL NPROD - kong update
      \n
      ansible-playbook update_kong_api_v1.yml -i inventory/dev/inventory --vault-password-file=~/ansible.secret
      \n
    4. for prod gbl env

      GBL PROD - kong update
      \n
      ansible-playbook update_kong_api_v1.yml -i inventory/prod/inventory --vault-password-file=~/ansible.secret
      \n
    5. for nprod US env

      US NPROD - kong update
      \n
      ansible-playbook update_kong_api_v1.yml -i inventory/dev_us/inventory --vault-password-file=~/ansible.secret
      \n
    6. for prod US

      US PROD - kong update
      \n
      ansible-playbook update_kong_api_v1.yml -i inventory/prod_us/inventory --vault-password-file=~/ansible.secret
      \n

      Troubleshooting

      In case of a problem with deploying, you need to set create_or_update to True also for the route and manager service.

      Ansible secret

      To use this script you need to have an ansible.secret file created in your home directory, or adjust --vault-password-file if needed.
      Another option is to change --vault-password-file to --ask-vault-pass and provide the ansible vault password at runtime.

  5. Before committing changes, find all occurrences where you set create_or_update to True and change it back to:

    \n
    create_or_update: False
    \n

    Then commit changes

  6. Redeploy gateway services on all modified envs. Before deploying please verify that no batch is currently running
    Jenkins job to deploy gateway services:
    https://jenkins-gbicomcloud.COMPANY.com/job/mdm-gateway/
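Resetting the create_or_update flags before committing (step 5) is easy to miss; a quick pre-commit check, assuming it is run from the repository root (the inventory/ path is illustrative):

```shell
# List any config files that still have the create_or_update flag enabled.
# Prints the offending file paths; prints nothing when the tree is clean.
find_enabled_flags() {
  grep -rl "create_or_update: True" "$1" 2>/dev/null
}

find_enabled_flags inventory/ || echo "clean - safe to commit"
```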



" + }, + { + "title": "Add new Batch to HUB", + "pageID": "310944945", + "pageLink": "/display/GMDM/Add+new+Batch+to+HUB", + "content": "

To add a new batch to MDM HUB a few steps must be done. This document describes what activities must be fulfilled and who is responsible for them.

Check source and country configuration

The first step is to check if DQ rules and SMC are configured for the new source. 

Repository: mdm-config-registry; Path: \\config-hub\\<env_tenant>\\mdm-manager\\quality-service\\quality-rules\\

If not, you have to immediately send an email to the person that requested the new batch. This condition is usually handled as a separate task, as a prerequisite to adding the batch configuration.

"This is a new source. You have to send DQ and SMC requirements for the new source to A.J. and Eleni. Based on them a new HUB requirement deck will be prepared. When we receive it the task can be planned. Until that time the task is blocked." 

The same exercise applies when we get requirements for a new country.

Authorization and authentication

Clients use the mdmetl batch service user to populate data to Reltio. There are no changes needed.

Send a request to MDM HUB that contains all necessary data - client's responsibility 

Send a request to create a new batch to HUB Team: dl-atp_mdmhub_support@COMPANY.com

The request must contain the following:



- subject area: list of stages HCP/HCO/Affiliations
- data source
- countries list
- source name
- batch name
- file type: full/incremental
- frequency
- business justification
- single point of contact on client side

Prepare new batch on MDM HUB side - HUB Team Responsibility 

Repository: mdm-hub-cluster-env

Changes on manager level

The configuration in mdmetl.yaml must be extended with:

Path: \\<tenant>\\<env>\\users\\mdmetl.yaml

  1. New sources
  2. New countries
  3. Add new batch with stages to batch_service, example:
batch_service:
  defaultClient: "ReltioAll"
  description: "MDMETL Informatica IICS User - BATCH loader"
  batches:
    "ONEKEY":             # <- new batch name
      - "HCPLoading"      # <- new stage
      - "HCOLoading"      # <- new stage
      - "RelationLoading" # <- new stage

In the MDM manager config, if the batch includes RelationLoading stage then add to the refAttributesEnricher configuration 

relationType: ProviderAffiliations
relationType: ContactAffiliations
relationType: ACOAffiliations
  1. New sources
  2. New countries

Changes in batch-service level

Based on the stages being added, the batch-service configuration must be changed.

Path: \\<tenant>\\<env>\\namespaces\\<namespace>\\config_files\\batch-service\\config\\application.yml

  1. Add configuration in BatchWorkflows, example:
- batchName: "PFORCERX_ODS"
  batchDescription: "PFORCERX_ODS - HCO, HCP, Relation entities loading"
  stages:
    - stageName: "HCOLoading"
    - stageName: "HCOSending"
      softDependentStages: [ "HCOLoading" ]
      processingJobName: "SendingJob"
    - stageName: "HCOProcessing"
      dependentStages: [ "HCOSending" ]
      processingJobName: "ProcessingJob"
    # --------------------------------
    - stageName: "HCPLoading"
    - stageName: "HCPSending"
      softDependentStages: [ "HCPLoading" ]
      processingJobName: "SendingJob"
    - stageName: "HCPProcessing"
      dependentStages: [ "HCPSending" ]
      processingJobName: "ProcessingJob"
    # ------------------
    - stageName: "RelationLoading"
    - stageName: "RelationSending"
      dependentStages: [ "HCOProcessing", "HCPProcessing" ]
      softDependentStages: [ "RelationLoading" ]
      processingJobName: "SendingJob"
    - stageName: "RelationProcessing"
      dependentStages: [ "RelationSending" ]
      processingJobName: "ProcessingJob"

If the batch is a full load then two additional stages must be configured; their purpose is to allow deleting profiles:

- stageName: "EntitiesUnseenDeletion"
  dependentStages: [ "HCOProcessing" ]
  processingJobName: "DeletingJob"
- stageName: "HCODeletesProcessing"
  dependentStages: [ "EntitiesUnseenDeletion" ]
  processingJobName: "ProcessingJob"


2. Add configuration to bulkConfiguration, example:
"PFORCERX_ODS":
  HCOLoading:
    bulkLimit: 25
    destination:
      topic: "${env}-internal-batch-pforcerx-ods-hco"
    maxInFlightRequest: 5
  HCPLoading:
    bulkLimit: 25
    destination:
      topic: "${env}-internal-batch-pforcerx-ods-hcp"
    maxInFlightRequest: 5
  RelationLoading:
    bulkLimit: 25
    destination:
      topic: "${env}-internal-batch-pforcerx-ods-rel"
    maxInFlightRequest: 5

All new dedicated topics must be configured. Add the configuration in kafka-topics.yml, example:
emea-prod-internal-batch-pulse-kam-hco:
  partitions: 6
  replicas: 3
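For reference, a kafka-topics.yml entry like the one above maps onto a stock `kafka-topics.sh` creation call; a sketch that only assembles the command (the broker address is a placeholder - in HUB the topics are actually created via the Kafka manager config and Jenkins, as described below):

```shell
# Assemble the kafka-topics.sh command matching a topics.yml entry:
# $1 = topic name, $2 = partitions, $3 = replicas.
build_topic_create() {
  echo "kafka-topics.sh --bootstrap-server <broker:9092> --create" \
       "--topic $1 --partitions $2 --replication-factor $3"
}

build_topic_create emea-prod-internal-batch-pulse-kam-hco 6 3
```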

3. Add configuration in sendingJob, example:
PFORCERX_ODS:
  HCOSending:
    source:
      topic: "${env}-internal-batch-pforcerx-ods-hco"
    maxInFlightRequest: 5
    bulkSending: false
    bulkPacketSize: 10
    reltioRequestTopic: "${env}-internal-async-all-mdmetl-user"
    reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"
  HCPSending:
    source:
      topic: "${env}-internal-batch-pforcerx-ods-hcp"
    maxInFlightRequest: 5
    bulkSending: false
    bulkPacketSize: 10
    reltioRequestTopic: "${env}-internal-async-all-mdmetl-user"
    reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"
  RelationSending:
    source:
      topic: "${env}-internal-batch-pforcerx-ods-rel"
    maxInFlightRequest: 5
    bulkSending: false
    bulkPacketSize: 10
    reltioRequestTopic: "${env}-internal-async-all-mdmetl-user"
    reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"

4. If the batch is a full load then deletingJob must be configured, for example:

PULSE_KAM:
  EntitiesUnseenDeletion:
    maxDeletesLimit: 10000
    queryBatchSize: 10
    reltioRequestTopic: "${env}-internal-async-all-mdmetl-user"
    reltioResponseTopic: "${env}-internal-async-all-mdmetl-user-ack"



" + }, + { + "title": "How to Request PingFederate (PXED) External OAuth 2.0 Account", + "pageID": "263491721", + "pageLink": "/display/GMDM/How+to+Request+PingFederate+%28PXED%29+External+OAuth+2.0+Account", + "content": "

This instruction describes the steps a Client should follow to create a PingFederate account. Per security requirements, HUB should only know the UserName created by the PXED Team. HUB does not request external accounts; passwords and all other details are shared only with the Client. The Client shares the user name with HUB, and only after the user name is configured will the Client gain access to HUB resources. 


Contact Persons:


Details required to fulfill the PXED request are in this doc:

\"\"


User Name standard: <SYSTEM_NAME>-MDM_client


Steps:

  1. Go to https://requestmanager.COMPANY.com/#/
  2. In Search For Application type: PXED
  3. \"\"
  4.  Pick - Application enablement with enterprise authentication services (PXED, LDAP and/or SSO)
  5. Fulfill the request and send.
  6. Wait for the user name and password
  7. After confirmation share the Client Id with HUB and wait for the grant of access. Do not share the password. 



EXAMPLE: 

For reference, an example request sent for the PFORCEOL user:


Request Ticket

GBL32702829i

Ticket ID

Name

Varganin, Andrew Joseph

Requested user name

AD Username

VARGAA08

Requested user Id

User Domain

AMER

Region (AMER/EMEA/APAC/US...)

Request ID

20200717112252425

request ID

Hosting location

External

Hosting location of the Client services: (External or  Internal COMPANY Network)

VCAS Reference number

V...

VCAS Reference number

Data Feed

No, API/Services

flow - requests send to HUB API then - API/Services

Application access methods

Web Browser

Type of access for the Client application - (Intranet/Web Browser e.t.c) 

Application User base

COMPANY colleagues

Contractors

Application User base

Application access devices

Laptop/Desktop

Tablets (iPad/Android/Windows)

Application access devices

Application Access Locations

Internet

Location (External - Internet / Internal - Intranet)

Application Name

<EXAMPLE: PFORCEOL (BIOPHARMA)>

Requested application name that requires new account

CMDB ID (Production Deployment)

SC....

CMDB ID (Production Deployment)

IPRM Solution profile number

....

IPRM Solution profile number

Number of users for the application

...

Number of users for the application

Concurrent Users

....

Concurrent Users

Comments

Application-to-Application Integration using NSA (Non-Standard Service Account.)  

PTRS will use REST APIs to authenticate to and access COMPANY Global MDM Services.
This application will access MDM API Services (MDM_client) and will need OAuth2 account (KOL-MDM_client) for access to those APIs/Services

full description of requested account and integration

Application Scope

All Users

Application Scope

Referenced tickets (only for example / reference purposes):

https://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32702829i

https://requestmanager.COMPANY.com/#/request/20201208091510997

" + }, + { + "title": "Hub Operations", + "pageID": "302705582", + "pageLink": "/display/GMDM/Hub+Operations", + "content": "" + }, + { + "title": "Airflow:", + "pageID": "164470119", + "pageLink": "/pages/viewpage.action?pageId=164470119", + "content": "" + }, + { + "title": "Checking that Process Ends Correctly", + "pageID": "164470118", + "pageLink": "/display/GMDM/Checking+that+Process+Ends+Correctly", + "content": "

To check that the process ended without any issues you need to log in to Prometheus and check the Alerts Monitoring PROD dashboard. You have to check the rows in the GBL PROD Airflow DAG's Status panel. If you can see red rows (like on the below screenshot) it means that some issues occurred:

\"\"

Details of issues are available in the Airflow.

" + }, + { + "title": "Common Problems", + "pageID": "164470117", + "pageLink": "/display/GMDM/Common+Problems", + "content": "

Failed task getEarliestUploadedFile

While reviewing a failed DAG you notice that the task getEarliestUploadedFile is in a failed state. In the task's logs you can see a line like this:

[2020-03-19 18:44:07,082] {{docker_operator.py:252}} INFO - Unable to find the earliest uploaded file. S3 directory is empty?

The issue occurs because getEarliestUploadedFile was not able to download the export file. In this case you need to check the S3 location and verify that the correct export file was uploaded to the valid location.


" + }, + { + "title": "Deploy Airflow Components", + "pageID": "164470010", + "pageLink": "/display/GMDM/Deploy+Airflow+Components", + "content": "

The deployment procedure is implemented as an ansible playbook. The source code is stored in the MDM Environment configuration repository. The runnable file is available under the path: https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/install_mdmgw_airflow_services.yml and can be run with the command: 

ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventory  

The deployment has the following steps: 

  1. Creating directory structure on execution host, 
  2. Templating configuration files and transferring those to config location, 
  3. Creating DAG, variable and connections in Apache Airflow, 
  4. Restarting Airflow instance to apply configuration changes. 

After successful deployment the dag and configuration changes should be available to trigger in Airflow UI. 

" + }, + { + "title": "Deploying DAGs", + "pageID": "164469947", + "pageLink": "/display/GMDM/Deploying+DAGs", + "content": "

To deploy a newly created DAG or configuration changes you have to run the deployment procedure implemented as the ansible playbook install_mdmgw_airflow_services.yml:

ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventory

If you have access to Jenkins you can also use the Jenkins jobs: https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/. Each environment has its own deploy job. Once you choose the right job you have to:

1. Click the "Build Now" button: \"\"

2. After a few seconds the stage icon "Choose dags to deploy" will become active and wait for you to choose the DAG to deploy:

\"\"

\"\"

3. Choose the DAG you want to deploy and approve your decision.


After this, the job will deploy all your changes to the Airflow server.




" + }, + { + "title": "Error Grabbing Grapes - hub_reconciliation_v2", + "pageID": "218438556", + "pageLink": "/display/GMDM/Error+Grabbing+Grapes+-+hub_reconciliation_v2", + "content": "

In the hub_reconciliation_v2 airflow dag, during the entities_generate_hub_reconciliation_events stage, a grape error might occur:

\n
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during conversion: Error grabbing Grapes
(...)
\n

Cause:

That could be caused by connectivity/configuration issues.

Workaround:

For this dag the dependencies are mounted in the container. The mounted directory is located on the airflow server under the path:

/app/airflow/{{ env_name }}/hub_reconciliation_v2/tmp/.groovy/grapes/
To solve this problem copy the libs from a working dag, e.g. hub_reconciliation_v2_gblus_prod:

\n
amraelp00007847.COMPANY.com/app/airflow/gblus_prod/hub_reconciliation_v2/tmp/.groovy/grapes
\n
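The workaround above amounts to a recursive copy between the two grape cache directories; a minimal sketch (the env names below are the examples from this page; adjust the paths for your server):

```shell
# Recursively copy the grape cache from a working dag's mounted
# directory to the broken one (creates the target directory if missing).
copy_grapes() {
  local src="$1" dst="$2"
  mkdir -p "$dst" && cp -r "$src/." "$dst/"
}

# Example (run on the airflow server; the target env name is hypothetical):
#   copy_grapes /app/airflow/gblus_prod/hub_reconciliation_v2/tmp/.groovy/grapes \
#               /app/airflow/dev/hub_reconciliation_v2/tmp/.groovy/grapes
```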
" + }, + { + "title": "Batches (Batch Service):", + "pageID": "302705680", + "pageLink": "/pages/viewpage.action?pageId=302705680", + "content": "" + }, + { + "title": "Adding a New Batch", + "pageID": "164469956", + "pageLink": "/display/GMDM/Adding+a+New+Batch", + "content": "

1. Add batch to batch_service.yml in the following sections

- add batch info to the batchWorkflows section, basing it on one already defined
- add bulk configuration
- add to sendingJob
- add to deletingJob if needed

2. Add source and user for batch to batch_service_users.yml

- add the appropriate source and batch for the user mdmetl_nprod

3. Add user to:

- for appropriate source, country and roles

4. Add topic to bundle section in manager/config/application.yml 

5. Add kafka topics

We use kafka manager to add new topics which can be found under directory /inventory/<env>/group_vars/kafka/manager/topics.yml

First set create_or_update to True; after the topics are created change it back to False

6. Create topics and redeploy services by using Jenkins

https://jenkins-gbicomcloud.COMPANY.com/job/mdm-gateway/

7. Redeploy gateway on other envs (qa, stage, prod) only if there is no batch running - check it in mongo on the batchInstance collection using the following query: {"status" : "STARTED"}

8. Ask if the new source should be added to DQ rules

" + }, + { + "title": "Cache Address ID Clear (Remove Duplicates) Process", + "pageID": "163917838", + "pageLink": "/display/GMDM/Cache+Address+ID+Clear+%28Remove+Duplicates%29+Process", + "content": "

This process is similar to the Cache Address ID Update Process, so the user should load the file into mongo and process it with the following steps: 

  1. Download the files that were indicated by the user and apply them on a specific environment (sometimes only STAGE and sometimes all envs)
    1. For example - 3 files - /us/prod/inbound/cdw/one-time-feeds/other/
    2. \"\"
  2. Merge these files into one file - Duplicate_Address_Ids_<date>.txt
  3. Proceed with the script.sh based on the Cache Address ID Update Process
  4. Generated Extract load to the removeIdsFromkeyIdRegistry collection
    1. mongoimport --host=localhost:27017 --username=admin --password=zuMMQvMl7vlkZ9XhXGRZWoqM8ux9d08f7BIpoHb --authenticationDatabase=admin --db=reltio_stage --collection=removeIdsFromkeyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_Duplicate_Address_Ids_16042021.txt --mode=insert
  5. CLEAR keyIdRegistry
    1. docker exec -it mongo_mongo_1 bash
    2. cd /data/configdb
    3. NPROD - nohup mongo duplicate_address_ids_clear.js &

    4. PROD   - nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <passw> --authenticationDatabase reltio_prod duplicate_address_ids_clear.js &

    5. FOR REFERENCE SCRIPT:
      1. \n
        CLEAR keyIdRegistry\n    db = db.getSiblingDB('reltio_dev')\n    db.auth("mdm_hub", "<pass>")\n    \n    db = db.getSiblingDB('reltio_prod')\n    db.auth("mdm_hub", "<pass>")\n\n\n\n    print("START")\n    var start = new Date().getTime();\n\n\n    var cursor = db.getCollection("removeIdsFromkeyIdRegistry").aggregate(    \n        [\n            \n        ], \n        { \n            "allowDiskUse" : false\n        }\n    )\n        \n    cursor.forEach(function (doc){\n        db.getCollection("keyIdRegistry").remove({"_id": doc._id});\n    });\n\n    var end = new Date().getTime();\n    var duration = end - start;\n    print("duration: " + duration + " ms")\n    print("END")\n\n\n    nohup mongo duplicate_address_ids_clear.js &\n\n    nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass>--authenticationDatabase reltio_prod duplicate_address_ids_clear.js &
        \n
  6. CLEAR batchEntityProcessStatus checksums
    1. docker exec -it mongo_mongo_1 bash
    2. cd /data/configdb
    3. NPROD - nohup mongo unset_checsum_duplicate_address_ids_clear.js &
    4. PROD   - nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass> --authenticationDatabase reltio_prod unset_checsum_duplicate_address_ids_clear.js &
    5. FOR REFERENCE SCRIPT
      1. \n
        CLEAR batchEntityProcessStatus\n\n    db = db.getSiblingDB('reltio_dev')\n    db.auth("mdm_hub", "<pass>")\n    \n    db = db.getSiblingDB('reltio_prod')\n    db.auth("mdm_hub", "<pass>")\n\n\n    print("START")\n    var start = new Date().getTime();\n    var cursor = db.getCollection("removeIdsFromkeyIdRegistry").aggregate(    \n        [\n        ], \n        { \n            "allowDiskUse" : false\n        }\n    )\n        \n    cursor.forEach(function (doc){\n        var key = doc.key \n        var arrVars = key.split("/");\n        \n        var type = "configuration/sources/"+arrVars[0]\n        var value = arrVars[3];\n       \n        print(type + " " + value)\n        \n        var result = db.getCollection("batchEntityProcessStatus").update(\n        { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },\n        { $set: { "checksum": "" } },\n        { multi: true}\n        )\n        \n        printjson(result);\n         \n    });\n    \n    var end = new Date().getTime();\n    var duration = end - start;\n    print("duration: " + duration + " ms")\n    print("END")\n\n    nohup mongo unset_checsum_duplicate_address_ids_clear.js &\n    \n    nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass>--authenticationDatabase reltio_prod unset_checsum_duplicate_address_ids_clear.js &
        \n
  7. Verify nohup output
  8. Check a few rows and verify that they no longer exist in the KeyIdRegistry collection
  9. Check a few profiles and verify that the checksum was cleared in the BatchEntityProcessStatus collection


  1. ISSUE - for the ONEKEY profiles there is a difference between the generated cache and the corresponding profile.
  2. ISSUE - for the GRV profiles there is a difference between the generated cache and the corresponding profile - check the crosswalk values in COMPANY_ADDRESS_ID_EXTRACT_PAC_files - should be e.g. 00002b9b-f327-456c-959c-fd5b04ed04b8
  3. ISSUE - for the ENGAGE 1.0 profiles there is a difference between the generated cache and the corresponding profile - check the crosswalk values in COMPANY_ADDRESS_ID_EXTRACT_ENG_ files - should be e.g. 00002b9b-f327-456c-959c-fd5b04ed04b8

Please check the following example:

CUST_SYSTEM,CUST_TYPE,SRC_ADDR_ID,SRC_CUST_ID,SRC_CUST_ID_TYPE,PFZ_ADDR_ID,PFZ_CUST_ID,SRC_SYS,MDM_SRC_SYS,EXTRACT_DT
PROBLEM : HCPM,HCP,0000407429,8091473,HCE,38357661,1374316,HCPS,HCPS,2021-04-15
OK            : HCPM,HCP,a012K000022cqBoQAI,0012K00001lCEyYQAW,HCP,109525669,178336284,VVA,VVA,2021-04-15

For VVA the crosswalk is equal to 001A000001VgOEVIA3, so it is easy to match with the ICUE profile and clear the cache.

For ONEKEY the generated row is:

COMPANYAddressIDSeq|ONEKEY/HCP/HCE/8091473/0000407429,ONEKEY/HCP/HCE/8091473/0000407429,COMPANYAddressIDSeq,38357661,com.COMPANY.mdm.generator.db.KeyIdRegistry

8091473 is not a crosswalk, so to remove the checksum there is a need to find the profile in Reltio - its crosswalk is WUSM01113231 - and clear the cache in the BatchEntityProcessStatus collection.

In my example there was only one crosswalk, so it was easy to find this profile. For multiple profiles a solution is needed. (I think we need to ask CDW to provide the ONEKEY file with an additional crosswalk column, so we will be able to match the crosswalk with the key and clear the checksum.)


    Solution: once we receive the ONEKEY KeyIdRegistry update file, ask the COMPANY Team to generate crosswalk ids - a simple CSV file


  1. The file received from CDW does not contain crosswalk ids, only COMPANYAddressIds - example input - https://gblmdmhubprodamrasp101478.s3.amazonaws.com/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txt
  2. Ask the DT Team for the CSV file and download it
  3. Load the file to TMP collection in Mongo e.g. - AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511
  4. Execute the following:
    1. \n
      CLEAR batchEntityProcessStatus based on crosswalks ID list \n\n    db = db.getSiblingDB('reltio_dev')\n    db.auth("mdm_hub", "<pass>")\n    \n    db = db.getSiblingDB('reltio_prod')\n    db.auth("mdm_hub", "<pass>")\n\n\n    print("START")\n    var start = new Date().getTime();\n    var cursor = db.getCollection("AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511").aggregate(    \n        [\n        ], \n        { \n            "allowDiskUse" : false\n        }\n    )\n        \n    cursor.forEach(function (doc){\n        \n        var type = "configuration/sources/ONEKEY";\n        var value = doc.COMPANYcustid_individualeid;\n       \n        print(type + " " + value)\n        \n        var result = db.getCollection("batchEntityProcessStatus").update(\n        { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },\n        { $set: { "checksum": "" } },\n        { multi: true}\n        )\n        \n        printjson(result);\n         \n    });\n    \n    var end = new Date().getTime();\n    var duration = end - start;\n    print("duration: " + duration + " ms")\n    print("END")
      \n




" + }, + { + "title": "Changelog of removed duplicates", + "pageID": "172294537", + "pageLink": "/display/GMDM/Changelog+of+removed+duplicates", + "content": "

01.02.2021 - DROP keys
         Duplicate_Address_Ids.txt
         nohup ./script.sh inbound/Duplicate_Address_Ids.txt > EXTRACT_Duplicate_Address_Ids.txt &


19.04.2021 - DROP keys STAGE GBLUS
         Duplicate_Address_Ids_16042021.txt - 11 380 - 1 ONEKEY, ICUE, CENTRIS
         nohup ./script.sh inbound/Duplicate_Address_Ids_16042021.txt > EXTRACT_Duplicate_Address_Ids_16042021.txt &


17.05.2021 - DROP STAGE GBLUS
         Duplicate_Address_Ids_17052021.txt - 25121 - 1 ONEKEY
         nohup ./script.sh inbound/Duplicate_Address_Ids_17052021.txt > EXTRACT_Duplicate_Address_Ids_17052021.txt


25.06.2021 - DROP STAGE GBLUS
         Duplicate_Address_Ids_25062021.txt - 71509, 2 ONEKEY
         nohup ./script.sh inbound/Duplicate_Address_Ids_25062021.txt > EXTRACT_Duplicate_Address_Ids_25062021.txt &


12.07.2021 - DROP PROD GBLUS
         Duplicate_Address_Ids_12072021.txt - 4550 Duplicate_Address_Ids_12072021.txt - us/prod/inbound/cdw/one-time-feeds/Address-DeDup/FileSet-3/
         nohup ./script.sh inbound/Duplicate_Address_Ids_12072021.txt > EXTRACT_Duplicate_Address_Ids_12072021.txt &


" + }, + { + "title": "Cache Address ID Update Process", + "pageID": "164469955", + "pageLink": "/display/GMDM/Cache+Address+ID+Update+Process", + "content": "

1. Log in using S3 Browser to the production bucket gblmdmhubprodamrasp101478, go to dir /us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ and check the last update dates

2. Log in to server amraelp00007334.COMPANY.com via ssh as the mdmusnpr service user

3. Sync files from S3 using the command below

docker run -u 27519996:24670575 -e "AWS_ACCESS_KEY_ID=<access_key>" -e "AWS_SECRET_ACCESS_KEY=<secret_access_key>" -e "AWS_DEFAULT_REGION=us-east-1" -v /app/mdmusnpr/AddressID/inbound:/src:z mesosphere/aws-cli s3 sync s3://gblmdmhubprodamrasp101478/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ /src

4. After syncing, check the new files with the two commands below, replacing <new_file_name> with the name of the updated file. Check in the script file that SRC_SYS and MDM_SRC_SYS exist; if not, something is wrong and the script probably needs to be updated - ask the person who requested the address id update

cut -d',' -f8 <new_file_name> | sort | uniq
cut -d',' -f9 <new_file_name> | sort | uniq
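As a sketch of what these two commands check, they print the distinct values of columns 8 (SRC_SYS) and 9 (MDM_SRC_SYS); for the example row quoted elsewhere on this page both yield HCPS:

```shell
# Sample row in the CDW feed format (taken from the example on this page).
printf 'HCPM,HCP,0000407429,8091473,HCE,38357661,1374316,HCPS,HCPS,2021-04-15\n' > sample.txt

cut -d',' -f8 sample.txt | sort | uniq   # distinct SRC_SYS values
cut -d',' -f9 sample.txt | sort | uniq   # distinct MDM_SRC_SYS values
```

On a real feed file each command should print only the expected source system names; anything else means the script needs updating.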

5. Remove old extracts from /app/mdmusnpr/AddressID

rm EXTRACT_<new_file_name>

6. Run script which will prepare data for mongo

nohup ./script.sh inbound/<new_file_name> > EXTRACT_<new_file_name> &

Wait until the background processing finishes. Check after some time using the command below:
ps ax | grep script
If the process is marked as done you can continue with the next file; if there are no more files you can proceed to the next step.

7. Log in to the server amraelp00007334.COMPANY.com using your own user and switch to root

8. Go to /app/mongo/config and remove old extracts

rm EXTRACT_<new_file_name>

9. Go to /app/mdmusnpr/AddressID and copy new extracts to mongo

cp EXTRACT_<new_file_name> /app/mongo/config/

10. Run mongo shell

docker exec -it mongo_mongo_1 bash
cd /data/configdb

11. Execute the following command for each non-prod env and for every new extract file

<db_name> - reltio_dev, reltio_qa, reltio_stage

mongoimport --host=localhost:27017 --username=admin --password=<db_password> --authenticationDatabase=admin --db=<db_name> --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_<new_file_name> --mode=upsert

Write the number of updated records into the changelog - it should be equal across all envs.

12. If needed and requested, update production using the following command

mongoimport --host=mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 --username=admin --password=<prod_db_password> --authenticationDatabase=admin --db=reltio_prod --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_<new_file_name> --mode=upsert

13. Verify that the number of entries in the input file matches the number of updated records in mongo
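One way to sketch this verification (assuming the extract has no header row) is to count the lines of the extract and compare the count with the "document(s) imported successfully" number reported by mongoimport:

```shell
# Tiny stand-in extract file for illustration; in practice use EXTRACT_<new_file_name>.
printf '1,a\n2,b\n3,c\n' > EXTRACT_sample.txt

# The line count should equal mongoimport's reported number of imported documents.
wc -l < EXTRACT_sample.txt   # 3 lines in this sample
```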

14. Update changelog

15. Respond to email that update is done

16. A force merge will be generated - there will be an email about this.

17. Download the force merge delta from S3 using S3 Browser and rename it to merge_<date>_1.csv

bucket: gblmdmhubprodamrasp101478

path: us/prod/inbound/HcpmForceMerge/ForceMergeDelta

18. Upload file merge_<date>_1.csv to

bucket: gblmdmhubprodamrasp101478

path: us/prod/inbound/hub/merge_unmerge_entities/input/

19. Trigger dag 

https://mdm-monitoring.COMPANY.com/airflow/tree?dag_id=merge_unmerge_entities_gblus_prod_gblus

20. After the dag is finished, log in using S3 Browser

bucket: gblmdmhubprodamrasp101478

path: us/prod/inbound/hub/merge_unmerge_entities/output/<most_recent_date>_<most_recent_time>
so for date 17/5/2021 and time 12:11:39 the path looks like this: 
         us/prod/inbound/hub/merge_unmerge_entities/output/20210517_121139

and download the result file, check it for failed merges and send it in response to the email about the force merge




" + }, + { + "title": "Changelog of updated", + "pageID": "164469954", + "pageLink": "/display/GMDM/Changelog+of+updated", + "content": "

20.11.2020 - Loading NEW files:

GRV & ENGAGE 1.0
nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt &
IQVIA_RX
nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt &
IQVIA_MCO & MILLIMAN & MMIT
nohup ./script.sh inbound/COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt > EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt &

09.12.2020 - Loading new file: -> 460927

14.12.2020 - Loading new file: PAC_ENG -> 820 document, CAPP-> 464583 document

16.12.2020 - Loading MILLIMAN_MCO: 10504 document

22.12.2020 - Loading CPMRTE: 15686 document, CAPP: 1287, PAC_ENG: 1340, VVA: 11927070, IMS: 343, HCO i SAP problem, CENTRIS: 41496, hcps00: 4215

29.12.2020 - Loading PAC_ENG: 1260, CAPP: 1414

04.01.2021 - Loading PAC_ENG: 330, CAPP: 338

08.01.2021 - Loading HCPS00: 3214

11.01.2021 - Loading PAC_ENG: 496, CAPP: 512

18.01.2021 - Loading PAC_ENG: 616, CAPP: 795

25.01.2021 - Loading PAC_ENG: 1009, CAPP: 939

01.02.2021 - Loading PAC_ENG: 884, CAPP: 1106

08.02.2021 - Loading PAC_ENG: 576, CAPP: 394

15.02.2021 - Loading PAC_ENG: 690, CAPP: 696

17.02.2021 - Loading VVA: 12048364

22.02.2021 - Loading PAC_ENG: 724, CAPP: 757

01.03.2021 - Loading PAC_ENG: 906, CAPP: 969

26.04.2021 - Loading PAC_ENG: 738, CAPP: 795

11.05.2021 - Loading PAC_ENG: 589, CAPP: 626

17.05.2021 - Loading PAC_ENG: 489, CAPP: 613

17.05.2021 - Loading - us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txt

                     Updated: 1171703 - customers updated - cleared cache in batchEntityProcessStatus collection for reload

                     Updated: 1513734 - document(s) imported successfully in KeyIdRegistry

18.05.2021 - STAGE only
      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt - 43771 document(s) imported successfully
      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt - 10076 document(s) imported successfully


19.05.2021 -  Load 15 files to PROD and clear cache. Load these files to DEV, QA and STAGE
      2972 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt &
      19124366 May 19 07:11 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt &
      3154666 May 17 11:41 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt &
      221969 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt &
      214430 May 17 11:41 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt &
      163142 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt &
      73236 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt &
      6399709 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt &
      60175 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt &
      318915 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_ENG_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_ENG_20210511.txt &
      13528 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt &
      1360570 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt &
      8135990 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt &
      14583373 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt &
      283564 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt &


24.05.2021 - Loading PAC_ENG: Dev:1283, QA: 1283, Stage: 1509, Prod: 1283

                                         CAPP: Dev: 1873, QA: 1392, Stage: 1873, Prod: 1873


1/6/2021 - Loading PAC_ENG: 379, CAPP: 433


9/6/2021 - Loading PAC_ENG: 38, CAPP: 47


14/6/2021 - Loading PAC_ENG: 83, CAPP: 102

16/6/2021 - Loading COMPANY_ACCT: Prod: 236 

28/06/2021 - Loading PAC_ENG: Dev:182, QA: 182, Stage: 182, Prod: 646, CAPP: Dev: 215, QA: 215, Stage: 215, Prod: 215



02.07.2021
    Load 11 Files to PROD and clear cache. Load these files to DEV QA and STAGE
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt &


5/7/2021 - Loading PAC_ENG: 39 , CAPP: 44


16.07.2021
    Load 1 VVA File to PROD and clear cache. Load this file to DEV QA and STAGE
    nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt &

20.07.2021
    Load 1 VVA File to PROD and clear cache. Load this file to DEV QA and STAGE
    nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt &


GBLUS/Fletcher PROD GO-LIVE COMPANYAddressID sequence - PROD (MAX)139510034 + 5000000 = 144510034







" + }, + { + "title": "Manual Cache Clear", + "pageID": "164470086", + "pageLink": "/display/GMDM/Manual+Cache+Clear", + "content": "
  1. Open Studio 3T and connect to the appropriate Mongo DB
  2. Open IntelliShell
  3. Run the following query for the appropriate source - replace <source> with the right name


\n
db.getCollection("batchEntityProcessStatus").updateMany({"sourceId.type":"configuration/sources/<source>"}, {$set: {"checksum" : ""}})
\n
" + }, + { + "title": "Data Quality", + "pageID": "492471763", + "pageLink": "/display/GMDM/Data+Quality", + "content": "" + }, + { + "title": "Quality Rules Deployment Process", + "pageID": "492471766", + "pageLink": "/display/GMDM/Quality+Rules+Deployment+Process", + "content": "

Resource changing

This process covers modifying the data quality configuration resources that are stored in Consul and loaded at runtime by the mdm-manager, mdm-onekey-dcr-service and precallback-service components. They are located in mdm-config-registry/config-hub.

When modifying the data quality rules configuration at mdm-config-registry/config-hub/<env_name>/mdm-manager/quality-service/quality-rules, the following rules apply:

  1. Each YAML file should be formatted in accordance with yamllint rules (See Yamllint validation rules)
  2. The attributes createdDate/modifiedDate were deleted from the rules configuration files. They will be automatically set for each rule during the deployment process. (See Deployment of changes)
  3. Adding more than one rule with the same value of name attribute is not allowed.

PR validation

Every PR to the mdm-config-registry repository is validated for correctness of YAML syntax (See Yamllint validation rules). Upon PR creation a job is triggered that checks the format of YAML files using yamllint. The job succeeds only when all YAML files in the repository pass the yamllint test.

PRs that did not pass validation should not be merged to master.

Deployment of changes

All changes in mdm-config-registry/config-hub should be deployed to Consul using Jenkins jobs. A separate job exists for deploying the changes done on each environment, e.g. the job deploy_config_amer_nprod_amer-dev is used to deploy all changes done on the AMER DEV environment (all changes under path mdm-config-registry/config-hub/dev_amer). Jobs allow deploying configuration from the master branch or from PRs to the mdm-config-registry repo.

The deployment job flow can be described by the following diagram:


\"\"


Steps

  1. Clean workspace - wipes workspace of all the files left from previous job run.
  2. Checkout mdm-config-registry - this repository contains files with data quality configuration and yamllint rules
  3. Checkout mdm-hub-cluster-env - this repository contains script for assigning createdDate / modifiedDate attributes to quality rules and ansible job for running this script and uploading files to consul.
  4. Validate yaml files - runs yamllint validation for every YAML file at mdm-config-registry/config-hub/<env_name> (See Yamllint validation rules)
  5. Get previous quality rules registry files - downloads the quality rules registry file produced after the previous successful run of the job. The file stores information about modification dates and checksums of quality rules; the decision whether modification dates should be updated is made based on checksum changes. The registry file is a csv with the following headers:
    1. ID - ID for each quality rule in form of <file_name>:<rule_name>
    2. CREATED_DATE - stores createdDate attribute value for each rule
    3. MODIFIED_DATE - stores modifiedDate attribute value for each rule
    4. CHECKSUM - stores checksum counted for each rule
  6. Update Quality Rules files - runs ansible job responsible for:
    1. Running script QualityRuleDatesManager.groovy - responsible for adjusting createdDate / modifiedDate for quality rules based on checksum changes and creating new quality rules registry file.
    2. Updating changed quality rules files in Consul kv store.
  7. Archive quality rules registry file - save new registry file in job artifacts.
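For illustration, the registry file described in step 5 might look like the sketch below; the rule/file names, date format and checksum value are assumptions, only the four headers come from this page.

```
ID,CREATED_DATE,MODIFIED_DATE,CHECKSUM
rules-hcp.yml:hcp-specialty-check,2023-01-10T12:00:00Z,2023-02-01T09:30:00Z,<checksum>
```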


Algorithm of updating modification dates

The following algorithm is implemented in the QualityRuleDatesManager.groovy script. Its main goal is to update createdDate/modifiedDate when a new quality rule has been added or its definition has changed.

\"\"
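The decision the diagram describes can be sketched as a small shell function; the function and message texts below are illustrative, not taken from QualityRuleDatesManager.groovy.

```shell
# Sketch of the checksum-based date-update decision (names are illustrative).
decide() {
  old_checksum="$1"; new_checksum="$2"
  if [ -z "$old_checksum" ]; then
    # rule not present in the previous registry file
    echo "NEW rule: set createdDate and modifiedDate"
  elif [ "$old_checksum" != "$new_checksum" ]; then
    # rule definition changed
    echo "CHANGED rule: keep createdDate, update modifiedDate"
  else
    echo "UNCHANGED rule: keep both dates"
  fi
}

decide ""    "abc123"   # new rule
decide "abc" "def"      # checksum changed
decide "abc" "abc"      # checksum unchanged
```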

Yamllint validation rules

TODO

" + }, + { + "title": "DCRs:", + "pageID": "259432965", + "pageLink": "/pages/viewpage.action?pageId=259432965", + "content": "" + }, + { + "title": "DCR Service 2:", + "pageID": "302705607", + "pageLink": "/pages/viewpage.action?pageId=302705607", + "content": "" + }, + { + "title": "Reject pending VOD DCR - transfer to Data Stewards", + "pageID": "415993922", + "pageLink": "/display/GMDM/Reject+pending+VOD+DCR+-+transfer+to+Data+Stewards", + "content": "

Description

There's a DCR request which was sent to Veeva OpenData (VOD) by HUB, however it hasn't been processed - we didn't receive information whether it should be ACCEPTED or REJECTED. This causes a couple of things:

Goal

We want to simulate a REJECT response from VOD which will make the DCR return to Reltio for further processing by Data Stewards. This may be realized in a couple of ways:

Procedure #1

Step 1 - Adjust below event template

JSON event to populate
\n
{\n  "eventType": "CHANGE_REJECTED",\n  "eventTime": 1712573721000,\n  "countryCode": "SG",\n  "dcrId": "a51f229331b14800846503600c787083",\n  "vrDetails": {\n    "vrStatus": "CLOSED",\n    "vrStatusDetail": "REJECTED",\n    "veevaComment": "MDM HUB: Simulated reject response to close DCR.",\n    "veevaHCPIds": [],\n    "veevaHCOIds": []\n  }\n}
\n

Step 2 - Populate event to topic $env-internal-veeva-dcr-change-events-in (for APAC-STAGE: apac-stage-internal-veeva-dcr-change-events-in). 

\"\"

After a couple of minutes two things should be in effect:

Step 3 - update MongoDB DCRRegistryVeeva collection

Document update
\n
{\n    $set : {\n        "status.name" : "REJECTED",\n        "status.changeDate" : "2024-04-07T17:42:37.882195Z"\n    }\n}
\n


\"\"


Step 4 - check Reltio DCR

Check if DCR status has changed to "DS Action Required" and DCR Tracing details has been updated with simulated Veeva Reject response. 

\"\"

" + }, + { + "title": "Close VOD DCR - override any status", + "pageID": "492489948", + "pageLink": "/display/GMDM/Close+VOD+DCR+-+override+any+status", + "content": "

This SOP is almost identical to the one in Override VOD Accept to VOD Reject for VOD DCR, with small updates:

In Step 1, please also update target = VOD to target = Reltio

" + }, + { + "title": "Override VOD Accept to VOD Reject for VOD DCR", + "pageID": "490649621", + "pageLink": "/display/GMDM/Override+VOD+Accept+to+VOD+Reject+for+VOD+DCR", + "content": "

Description

There's a DCR request which was sent to Veeva OpenData (VOD) and mistakenly ACCEPTED, however the business requires such a DCR to be REJECTED and redirected to DSR for processing via the Reltio Inbox.

Goal

We want to:

Procedure

Step 0 - Assume that VOD_NOT_FOUND

  1. Set retryCounter to 9999
  2. Wait for 12h

Step 1 - Adjust DCR document in MongoDB in DCRRegistry collection (Studio3T)

  1. Remove incorrect DCR Tracking entries for your DCR (trackingDetails section) - usually nested attribute 3 and 4 in this section
  2. Set retryCounter to 0
  3. Set status.name to "SENT_TO_VEEVA"

Step 2 - update MongoDB DCRRegistryVeeva collection

Document update
\n
{\n    $set : {\n        "status.name" : "REJECTED",\n        "status.changeDate" : "2024-04-07T17:42:37.882195Z"\n    }\n}
\n

Step 3 - Adjust below event template

JSON event to populate
\n
{\n  "eventType": "CHANGE_REJECTED",\n  "eventTime": 1712573721000,\n  "countryCode": "SG",\n  "dcrId": "a51f229331b14800846503600c787083",\n  "vrDetails": {\n    "vrStatus": "CLOSED",\n    "vrStatusDetail": "REJECTED",\n    "veevaComment": "MDM HUB: Simulated reject response to close DCR.",\n    "veevaHCPIds": [],\n    "veevaHCOIds": []\n  }\n}
\n


Step 4 - Populate event to topic $env-internal-veeva-dcr-change-events-in (for APAC-STAGE: apac-stage-internal-veeva-dcr-change-events-in). 

\"\"

After a couple of minutes (it depends on the traceVR schedule - it may take up to 6h on PROD) two things should be in effect:

\"\"


Step 6 - check Reltio DCR

Check if DCR status has changed to "DS Action Required" and DCR Tracing details has been updated with simulated Veeva Reject response. 

" + }, + { + "title": "DCR escalation to Veeva Open Data (VOD)", + "pageID": "430348063", + "pageLink": "/pages/viewpage.action?pageId=430348063", + "content": "

Integration fail

It occasionally happens that DCR response files from Veeva are not being delivered to the S3 bucket which is used for ingestion by HUB. VOD provides CSV/ZIP files every day, even when there's no actual payload related to DCRs - in that case the files contain only CSV headers. This disruption may be caused by two things: 

Either way, we need to pinpoint which of the two is causing the problem.

Troubleshooting 

It's usually good to check when the last synchronization took place.

GMFT issue

If there is more than one file (usually this dir should be empty) in the outbound directory /globalmdmprodaspasp202202171415/apac/prod/outbound/vod/APAC/DCR_request, it means that the GMFT job is not pushing files from S3 to SFTP. Files that are properly processed by the GMFT job are copied to the Veeva SFTP and additionally moved to /globalmdmprodaspasp202202171415/apac/prod/archive/vod/APAC/DCR_request.

Veeva Open Data issue

Once you are sure it's not a GMFT issue, check the archive directory for the latest DCR response file: 

If the latest file is older than 24h → there's an issue on the VOD side. 


Who to contact?




" + }, + { + "title": "DCR rejects from IQVIA due to missing RDM codes", + "pageID": "475927691", + "pageLink": "/display/GMDM/DCR+rejects+from+IQVIA+due+to+missing+RDM+codes", + "content": "

Description

Sometimes our clients receive the error message below when trying to send DCRs to OneKey. 

This request was not accepted by the IQVIA due to missing RDM code mapping and was redirected to Reltio Inbox. The reason is: 'Target lookup code not found for attribute: HCPSpecialty, country: CA, source value: SP.ONCM.'. This means that there is no equivalent of this code in IQVIA code mapping. Please contact MDM Hub DL-ATP_MDMHUB_SUPPORT@COMPANY.com asking to add this code and click "SendTo3Party" in Reltio after Hub's confirmation.

Why

This is caused when PforceRx tries to send a DCR with changes on an attribute with lookup values. On the HUB end we try to remap canonical codes from Reltio/RDM to source mapping values which are specific to OneKey and understood by them. 

Usually we are dealing with the situation that for each canonical code there is a proper source code mapping. Please refer to the screen below (Mongo collection LookupValues). 

\"\"


However, when there is no such mapping, like in the case below (no ONEKEY entry in sourceMappings), then we're dealing with the problem above

\"\"
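As a purely hypothetical illustration of the situation in the two screenshots above, a LookupValues document might carry a per-target sourceMappings section; the field names below are assumptions based on this page's description, not the actual schema.

```json
{
  "canonicalCode": "SP.ONCM",
  "country": "CA",
  "sourceMappings": {
    "ONEKEY": "<onekey_target_code>"
  }
}
```

When the ONEKEY key is missing from sourceMappings, the remapping fails and the DCR is redirected to the Reltio Inbox with the error message above.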


For more information about canonical code mapping and the flow to get target code sent to OneKey or VOD, please refer to → Veeva: create DCR method (storeVR), section "Mapping Reltio canonical codes → Veeva source codes"

How

We should contact the people responsible for RDM code mappings (MDM COMPANY team) to find out the correct sourceMapping value for this specific canonical code and country. They will then contact AJ to add it to RDM (usually every week).
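The check the Hub performs on the LookupValues collection can be illustrated with a small sketch. The field names (sourceMappings, source, code) are approximated from the screenshots above, not an exact schema:

```python
def find_target_code(lookup_doc: dict, source: str = "ONEKEY"):
    """Return the source-system code for a canonical lookup document,
    or None when no mapping exists (the case that triggers the DCR reject)."""
    for mapping in lookup_doc.get("sourceMappings", []):
        if mapping.get("source") == source:
            return mapping.get("code")
    return None
```

When this returns None for a canonical code, the DCR is redirected to the Reltio Inbox with the error message shown above.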

" + }, + { + "title": "Defaults", + "pageID": "284795409", + "pageLink": "/display/GMDM/Defaults", + "content": "

DCR defaults map the source codes of the Reltio system to the codes in the OneKey or VOD (Veeva Open Data) system. 

Defaults occur for specific attribute types: HCPSpecialities, HCOSpecialities, HCPTypeCode, HCOTypeCode, HCPTitle, HCOFacilityType. 

The values are configured in the Consul system. To configure the values:

  1.  Sort the source (.xlsx) file:


    \"\"
  2. Divide the file into separate sheets for each attribute.
  3. Save the sheets in separate csv format files - columns separated by semicolons.
  4. Paste the contents of the files into the appropriate files in the consul configuration repository - mdm-config-registry:

    \"\"
      - each environment has its own folder in the configuration repository
      - files must have the header: Country;CanonicalCode;Default
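Steps 2-3 can be sketched in Python. This is an illustrative helper assuming the rows were already exported from the .xlsx file, not the actual tooling:

```python
import csv
from collections import defaultdict
from pathlib import Path

def write_defaults_files(rows, out_dir="."):
    """rows: (attribute, country, canonical_code, default_code) tuples.
    Writes one semicolon-separated CSV per attribute, with the required header."""
    by_attr = defaultdict(list)
    for attr, country, canonical, default in rows:
        by_attr[attr].append([country, canonical, default])
    for attr, entries in by_attr.items():
        with open(Path(out_dir) / f"{attr}.csv", "w", newline="") as f:
            writer = csv.writer(f, delimiter=";")
            writer.writerow(["Country", "CanonicalCode", "Default"])
            writer.writerows(sorted(entries))
```

The resulting files can then be pasted into the appropriate files in the mdm-config-registry repository as described in step 4.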


For more information about canonical code mapping and the flow to get target code sent to OneKey or VOD, please refer to → Veeva: create DCR method (storeVR), section "Mapping Reltio canonical codes → Veeva source codes"


" + }, + { + "title": "Go-Live Readiness", + "pageID": "273696220", + "pageLink": "/display/GMDM/Go-Live+Readiness", + "content": "

Procedure:\"\"


" + }, + { + "title": "OneKey Crosswalk is Missing and IQVIA Returned Wrong ID in TraceVR Response", + "pageID": "259432967", + "pageLink": "/display/GMDM/OneKey+Crosswalk+is+Missing+and+IQVIA+Returned+Wrong+ID+in+TraceVR+Response", + "content": "


This SOP describes how to FIX the case when a DCR is in OK_NOT_FOUND status and IQVIA changed the individualID from the wrong one to the correct one (due to human error).


Example Case based on EMEA PROD:


New Case (2023-03-21)

ONEKEY responded with ACCEPTED with ONEKEY ID but OneKey VR Trace response contains: "requestStatus": "VAS_FOUND_BUT_INVALID".

DCR2 Service checks every 12h whether OneKey has already provided the data to Reltio. We must manually close this DCR.

Steps:

In amer-prod-internal-onekey-dcr-change-events-in topic find the latest event for ID ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●.

Change from:

\n
{\n\t"eventType": "DCR_CHANGED",\n\t"eventTime": 1677801600678,\n\t"eventPublishingTime": 1677801600678,\n\t"countryCode": "CA",\n\t"dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",\n\t"targetChangeRequest": {\n\t\t"vrStatus": "CLOSED",\n\t\t"vrStatusDetail": "ACCEPTED",\n\t\t"oneKeyComment": "ONEKEY response comment: Already Exists-Data Privacy\\nONEKEY HCP ID: WCAP00028176\\nONEKEY HCO ID: WCAH00052991",\n\t\t"individualEidValidated": "WCAP00028176",\n\t\t"workplaceEidValidated": "WCAH00052991",\n\t\t"vrTraceRequest": "{\\"isoCod2\\":\\"CA\\",\\"validation.clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\"}",\n\t\t"vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WCA\\",\\"cisHostNum\\":\\"7853\\",\\"userEid\\":\\"07853\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\",\\"cegedimRequestEid\\":\\"9d02f7547dbc4e659a9d230c91f96279\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2023-02-27T23:53:44Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2023-02-27T23:53:40Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2023-02-27T23:54:23Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2023-02-27T23:55:47Z\\",\\"trace5CegedimDboResponseDate\\":\\"2023-03-02T21:23:36Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":null,\\"responseComment\\":\\"Already Exists-Data 
Privacy\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WCAP00028176\\",\\"workplaceEidSource\\":\\"WCAH00052991\\",\\"workplaceEidValidated\\":\\"WCAH00052991\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WCAP0002817602\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WCA00000006206\\",\\"countryEid\\":\\"CA\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND_BUT_INVALID\\",\\"updateDate\\":\\"2023-03-02T21:37:16Z\\"}]}"\n\t}\n}
\n

To:

\n
{\n\t"eventType": "DCR_CHANGED",\n\t"eventTime": 1677801600678,\n\t"eventPublishingTime": 1677801600678,\n\t"countryCode": "CA",\n\t"dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",\n\t"targetChangeRequest": {\n\t\t"vrStatus": "CLOSED",\n\t\t"vrStatusDetail": "REJECTED",\n\t\t"oneKeyComment": "ONEKEY response comment: Already Exists-Data Privacy\\nONEKEY HCP ID: WCAP00028176\\nONEKEY HCO ID: WCAH00052991",\n\t\t"individualEidValidated": "WCAP00028176",\n\t\t"workplaceEidValidated": "WCAH00052991",\n\t\t"vrTraceRequest": "{\\"isoCod2\\":\\"CA\\",\\"validation.clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\"}",\n\t\t"vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WCA\\",\\"cisHostNum\\":\\"7853\\",\\"userEid\\":\\"07853\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\",\\"cegedimRequestEid\\":\\"9d02f7547dbc4e659a9d230c91f96279\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2023-02-27T23:53:44Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2023-02-27T23:53:40Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2023-02-27T23:54:23Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2023-02-27T23:55:47Z\\",\\"trace5CegedimDboResponseDate\\":\\"2023-03-02T21:23:36Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":null,\\"responseComment\\":\\"Already Exists-Data 
Privacy\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WCAP00028176\\",\\"workplaceEidSource\\":\\"WCAH00052991\\",\\"workplaceEidValidated\\":\\"WCAH00052991\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WCAP0002817602\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WCA00000006206\\",\\"countryEid\\":\\"CA\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND_BUT_INVALID\\",\\"updateDate\\":\\"2023-03-02T21:37:16Z\\"}]}"\n\t}\n}
\n

and post back to the topic. DCR will be closed in 24h.
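The edit above is a single field change before re-posting the event. A minimal sketch of that transformation (the helper name is illustrative):

```python
import json

def reject_dcr_event(raw_event: str) -> str:
    """Flip a DCR_CHANGED event to CLOSED/REJECTED before re-posting it
    to the onekey-dcr-change-events topic."""
    event = json.loads(raw_event)
    event["targetChangeRequest"]["vrStatus"] = "CLOSED"
    event["targetChangeRequest"]["vrStatusDetail"] = "REJECTED"
    return json.dumps(event)
```

All other fields (oneKeyComment, vrTraceRequest, vrTraceResponse, ...) stay exactly as in the original event.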


New Case (2024-03-19)


We need to force close/reject a couple of DCRs which cannot close themselves. They were sent to OneKey, but for some reason OneKey does not recognize them. IQVIA has not generated the TraceVR response, so we need to simulate it. To break the TraceVR process for these DCRs we need to manually change the Mongo status to REJECTED. If we keep SENT, we will keep asking IQVIA forever. TODO: describe this in the SOP.


Change from:
\"\"

To:


\"\"


\n
    "vrStatus": "CLOSED",\n    "vrStatusDetail": "REJECTED", 
\n



\n
 {\n  "eventType": "DCR_CHANGED",\n  "eventTime": <current_time>,\n  "eventPublishingTime": <current_time>,\n  "countryCode": "<country>",\n  "dcrId": "<dcr_id>",\n  "targetChangeRequest": {\n    "vrStatus": "CLOSED",\n    "vrStatusDetail": "REJECTED",\n    "oneKeyComment": "HUB manual update due to MR-<ticket_number>",\n    "individualEidValidated": null,\n    "workplaceEidValidated": null,\n    "vrTraceRequest": "{\\"isoCod2\\":\\"<country>\\",\\"validation.clientRequestId\\":\\"<dcr_id>\\"}",\n    "vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"W<country>\\",\\"cisHostNum\\":\\"4605\\",\\"userEid\\":\\"HUB\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"<dcr_id>\\",\\"cegedimRequestEid\\":\\"\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2024-02-27T09:29:34Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2024-02-27T09:29:34Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2024-02-27T09:32:22Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2024-02-27T09:29:48Z\\",\\"trace5CegedimDboResponseDate\\":\\"2024-03-04T14:51:54Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"\\",\\"responseComment\\":\\"HUB manual update due to MR-<ticket_number>\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":null,\\"workplaceEidSource\\":null,\\"workplaceEidValidated\\":null,\\"activityEidSource\\":null,\\"activityEidValidated\\":null,\\"addressEidSource\\":null,\\"addressEidValidated\\":null,\\"countryEid\\":\\"<country>\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_NOT_FOUND\\",\\"updateDate\\":\\"2024-03-04T16:06:29Z\\"}]}"\n  }\n}
\n
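Filling the template above can be scripted. A hedged sketch (the helper is illustrative; vrTraceResponse is omitted here for brevity and must still be copied from the full template when posting):

```python
import json
import time

def build_manual_reject(country: str, dcr_id: str, ticket: str) -> dict:
    """Fill the manual-rejection template for one DCR."""
    now = int(time.time() * 1000)  # epoch millis, as in eventTime/eventPublishingTime
    return {
        "eventType": "DCR_CHANGED",
        "eventTime": now,
        "eventPublishingTime": now,
        "countryCode": country,
        "dcrId": dcr_id,
        "targetChangeRequest": {
            "vrStatus": "CLOSED",
            "vrStatusDetail": "REJECTED",
            "oneKeyComment": f"HUB manual update due to MR-{ticket}",
            "individualEidValidated": None,
            "workplaceEidValidated": None,
            "vrTraceRequest": json.dumps(
                {"isoCod2": country, "validation.clientRequestId": dcr_id}
            ),
            # vrTraceResponse: copy from the full template above
        },
    }
```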
" + }, + { + "title": "CHANGELOG", + "pageID": "411338079", + "pageLink": "/display/GMDM/CHANGELOG", + "content": "

List of DCRs:

\"\"Re COMPANY RE IM44066249 VR missing FR.msg

" + }, + { + "title": "Update DCRs with missing comments", + "pageID": "425495306", + "pageLink": "/display/GMDM/Update+DCRs+with+missing+comments", + "content": "

Description

Due to a temporary problem with our calls to the Reltio workflow API, we had multiple DCRs with missing workflow comments. The symptoms of this error were: no changeRequestComment field in the DCRRegistry mongo collection and no content in the Comment field in Reltio when viewing the DCR by entityUrl.
We have created a solution that finds the deficient DCRs and updates their comments in the database and in Reltio.

Goal

We want to find all deficient DCRs in a given environment and update their comments in DCRRegistry and Reltio.
This can be accomplished by following the procedure described below.

Procedure

Step 1 - Configure the solution

Go to tools/dcr-update-workflow-comments module in mdm-hub-inbound-services repository.

Prepare env configuration.
Provide mongo.dbName and manager.url in application.yaml file.
Create a file named application-secrets.yaml. Copy the content from application-secretsExample.yaml file and replace mock values with real ones appropriate to a given environment.

Prepare solution configuration. 
Provide desired mode (find/repair) and DCR endTime time limits for deficient DCRs search in application.yaml.
Here is an example of update-comments configuration.

application.yaml
\n
update-comments:\n  mode: find\n  starting: 2024-04-01T10:00:00Z\n  ending: 2024-05-15T10:00:00Z
\n

Step 2 - Find deficient DCRs

Run the application using ApplicationServiceRunner.java in find mode with Spring profile: secrets.

\"\"

As a result, a dcrs.csv file will appear in the resources directory. It contains the list of DCRs to be updated in the next step. Those are DCRs that ended within the configured time limits, have no changeRequestComment field in DCRRegistry, and have a non-empty processInstanceId (that value is needed to retrieve workflow comments from Reltio). The list can be viewed and edited if a specific DCR update needs to be omitted.

\"\"

Step 3 - Repair the DCRs

Change the update-comments.mode configuration to repair. Run the application exactly the same as in Step 2.
As a result, a report.txt file will be created in the resources directory. It will contain a log for every DCR with its update status. If an update fails, it will contain the reason. 

In case of failed updates, the application can be run again after making the needed adjustments to dcrs.csv.

" + }, + { + "title": "GBLUS DCRs:", + "pageID": "310966586", + "pageLink": "/pages/viewpage.action?pageId=310966586", + "content": "" + }, + { + "title": "ICUE VRs manual load from file", + "pageID": "310966588", + "pageLink": "/display/GMDM/ICUE+VRs+manual+load+from+file", + "content": "

This SOP describes the manual load of selected ICUE DCRS to the GBLUS environment.

Scope and issue description:

On GBLUS PROD, VRs (DCRs) are sent to IQVIA (ONEKEY) for validation using events. The process responsible for this is described on this page (OK DCR flows (GBLUS)). IQVIA receives the data based on singleton profiles. 

The current flow enables only GRV and ENGAGE. ICUE was disabled from the flow and requires manual work to load its data to IQVIA, due to the high number of ICUE standalone profiles created by this system in January/February 2023. 

More details related to the ICUE issue are here:

\"\"ODP_ US IQVIA DRC_VR Request for 2023.msg\"\"DCR_Counts_GBLUS_PROD.xlsx

Steps to add ICUE in the IQVIA validation process:


  1. Check if there are no loads on environment GBLUS PROD:
    1. Check reltio-* topics and confirm there is no huge number of events per minute and no LAG on the topics:
    2. \"\"
  2. Pick the input file from a client and after approval from Monica.Mulloy@COMPANY.com proceed with changes:
    1. example email and input file:
    2. First batch_ Leftover ICUE VRs (27th Feb-31st March).msg
  3. Generate the events for the VR topic
    1. - id: onekey_vr_dcrs_manual
      destination: "${env}-internal-onekeyvr-in"
    2. Reconciliation target ONEKEY_DCRS_MANUAL
    3. use the resendLastEvent operation in the publisher (generate CHANGES events)
  4. After all events are pushed to topic verify on akhq if generated events are available on desired topic
  5. Wait for events aggregation window closure(24h).
  6. Check if VRs are visible in the DCRRequests mongo collection. createTime should be within the last 24h.

    \n
    { "entity.uri" : "entities/<entity_uri>" }
    \n
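Step 6's check can be expressed as a DCRRequests filter combining the entity URI with a createTime window. A sketch (field names as shown above):

```python
from datetime import datetime, timedelta, timezone

def recent_vr_filter(entity_uri: str) -> dict:
    """Illustrative DCRRequests filter: the given entity plus a createTime
    within the last 24 hours (the aggregation-window closure period)."""
    since = datetime.now(timezone.utc) - timedelta(hours=24)
    return {"entity.uri": f"entities/{entity_uri}", "createTime": {"$gte": since}}
```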


" + }, + { + "title": "HL DCR:", + "pageID": "302705613", + "pageLink": "/pages/viewpage.action?pageId=302705613", + "content": "" + }, + { + "title": "How do we answer to requests about DCRs?", + "pageID": "416002490", + "pageLink": "/pages/viewpage.action?pageId=416002490", + "content": "" + }, + { + "title": "EFK:", + "pageID": "284806852", + "pageLink": "/pages/viewpage.action?pageId=284806852", + "content": "" + }, + { + "title": "FLEX Environments - Elasticsearch Shard Limit", + "pageID": "513736765", + "pageLink": "/display/GMDM/FLEX+Environments+-+Elasticsearch+Shard+Limit", + "content": "

Alert

Sometimes the alert below gets triggered:

\"\"


This means that Elasticsearch has allocated more than 80% of the allowed number of shards (by default, a maximum of 1000 per data node).

Further Debugging

We can also check the shard count directly on the EFK cluster:

  1. Log into Kibana and choose "Dev Tools" from the panel on the left:

    \"\"

  2. Use one of below API calls:

    To fetch current cluster status and number of active/unassigned shards (# of active shards + # of unassigned shards = # of allocated shards):
    GET _cluster/health
    \"\"

    To check the current assigned shards limit:
    GET
    \"\"


Solution: Removing Old Shards/Indices

This is the preferred solution. Old indices can be removed through Kibana.


  1. Log into Kibana and choose "Management" from the panel on the left:

    \"\"

  2. Choose "Index Management":

    \"\"

  3. Find and mark indices that can be removed. In my case, I searched for indices containing "2023" in their names:

    \"\"

  4. Click "Manage Indices" and "Delete Indices". Confirm:

    \"\"

Solution: Increasing the Limit

This is not the preferred solution, as it is not advised to go beyond the default limit of 1000 shards per node - it can lead to worse performance/stability of the Elasticsearch cluster.

TODO: extend this section when we need to increase the limit somewhere, use this article: https://www.elastic.co/guide/en/elasticsearch/reference/7.4/misc-cluster.html



" + }, + { + "title": "Kibana: How to Restore Data from Snapshots", + "pageID": "284806856", + "pageLink": "/display/GMDM/Kibana%3A+How+to+Restore+Data+from+Snapshots", + "content": "

NOTE: The restore time depends on the amount of data being restored. Before starting the restoration, make sure the Elasticsearch cluster has enough free storage to hold the restored data.

To restore data from a snapshot, use the "Snapshot and Restore" page in Kibana. It is one of the pages available in the "Stack Management" section:

\"\"


\"\"


Select the snapshot which contains data you are interested in and click the Restore button:

\"\"


In the presented wizard please set up the following options:

Disable the option "All data streams and indices" and provide index patterns that match the index or data stream you want to restore:

\"\"


It is important to enable the option "Rename data streams and indices" and set "Capture pattern" to "(.+)" and "Replacement pattern" to "$1-restored-<idx>", where idx is 1, 2, 3, ..., n. This is required when restoring more than one snapshot of the same data stream; otherwise the restore operation would overwrite the current Elasticsearch objects and the data would be lost:

\"\"

The rest of the options on this page have to be disabled:

\"\"

Click the "Next" button to move to "Index settings" page. Leave all options disabled and go to the next page.

On the page "Review restore details" you can see the summary of the restore process settings. Validate them and click the "Restore snapshot" button to start restoring.

You can track the restoration progress in "Restore Status" section:

\"\"


When data is no longer needed, it should be deleted:

\"\"







" + }, + { + "title": "External proxy", + "pageID": "379322691", + "pageLink": "/display/GMDM/External+proxy", + "content": "" + }, + { + "title": "No downtime Kong restart/upgrade", + "pageID": "379322693", + "pageLink": "/pages/viewpage.action?pageId=379322693", + "content": "


This SOP describes how to perform a "no downtime" restart. 

Resources

http://awsprodv2.COMPANY.com/ - AWS console

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_kong.yml - ansible playbook 

SOP

Remove one node instance from target groups (AWS console)

  1. Access AWS console http://awsprodv2.COMPANY.com/. Log in using COMPANY SSO
  2. Choose Account: prod-dlp-wbs-rapid (432817204314). Role: WBS-EUW1-GBICC-ALLENV-RO-SSO
    \"\"
  3. Change region to Europe(Ireland - eu-west-1)
  4. Got to EC2 → Load Balancing → Target Groups
    \"\"

    \"\"
  5. Search for target group

    \n
    -prod-gbl-mdm
    \n

    There should be 4 target groups visible. 1 for mdmhub api and 3 for Kafka

    \"\"
  6. Remove first instance (EUW1Z2DL113) from all 4 target groups.

    Perform below steps for all target groups

    To do so, open each target group, select the desired instance and choose 'Deregister'. The instance should now have 'Health status': 'Draining'.
    Then do the same operation for the other target groups.

    Do not remove two instances from a target group at the same time, as it will cause API unavailability.
    Also make sure to remove the same instance from all target groups.


    \"\"
    \"\"

Wait for Instance to be removed from target group

  1. Wait for target groups to be adjusted. Deregistered instance should eventually be removed from target group
    \"\"

Additionally you can check kong logs directly

First instance: 


\n
ssh ec2-user@euw1z2dl113.COMPANY.com\ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to the external API
\n


Second instance: 

\n
ssh ec2-user@euw1z2dl114.COMPANY.com\ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to the external API
\n
Some internal requests may still be visible, e.g. metrics.

Perform restart of Kong on removed instance (Ansible playbook)

Execute ansible playbook inside mdm-hub-cluster-env repository inside 'ansible' directory

For the first instance:

\n
ansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_01
\n

For the second instance:

\n
ansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_02
\n

Make sure that kong_01 is the same instance you've removed from the target group (check the ansible inventory).

\"\"

Re-add the removed instance

Perform these steps for all target groups


  1. Select target group
    \"\"
    Choose 'Register targets'
  2. Filter instances to find the previously removed instance. Select it and choose 'Include as pending below'. Make sure the correct port is chosen
    \"\"
  3. Verify below request and select 'Register pending targets'
    \"\"
    Instance should be in 'Initial' state in target group
    \"\"

Wait for instance to be properly added to target group

Wait for all instances to have 'Healthy' status instead of 'Initial'. Make sure everything works as expected (check Kong logs)
\"\"

Perform steps 1-5 for second Kong instance

Second instance: euw1z2dl114.COMPANY.com

Second Kong host(ansible inventory): kong_02

" + }, + { + "title": "Full Environment Refresh - Reltio Clone", + "pageID": "386803861", + "pageLink": "/display/GMDM/Full+Environment+Refresh+-+Reltio+Clone", + "content": "" + }, + { + "title": "Full Environment Refresh", + "pageID": "386803864", + "pageLink": "/display/GMDM/Full+Environment+Refresh", + "content": "

Introduction

The steps below record the work done in January 2024 for the Reltio data clone between GBLUS PROD → STAGE and APAC PROD → STAGE.

Environment refresh consists of:

  1. disabling MDM Hub components
  2. full cleanup of existing STAGE data: Kafka and MongoDB
  3. identifying and copying cache collections from PROD to STAGE MongoDB
  4. re-enabling MDM Hub components
  5. running the Hub Reconciliation DAG


Disabling Services, Kafka Cleanup

  1. Comment out the EFK topics in fluentd configuration:

    \n
    mdm-hub-cluster-env\\apac\\nprod\\namespaces\\apac-backend\\values.yaml
    \n

    \"\"

  2. Deploy apac-backend through Jenkins, to apply the fluentd changes:
    https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_backend_apac_nprod/
    (fluentd pods in the apac-backend namespace should recreate)

  3. Block the apac-stage mdmhub deployment job in Jenkins:
    https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/

  4. Notify the monitoring/support team that the environment is disabled (in case alerts are triggered or users inquire via email)
  5. Use Kubernetes & Helm command line tools to uninstall the mdmhub components and Kafka topics:
    1. use kubectx/kubectl to switch context to apac-nprod cluster:

      \"\"

    2. use helm to uninstall below two releases from the apac-nprod cluster (you can confirm release names by using the "$ helm list -A" command):

      \n
      $ helm uninstall mdmhub -n apac-stage\n$ helm uninstall kafka-resources-apac-stage -n apac-backend
      \n

      \"\"

    3. confirm there are no pods in the apac-stage namespace:
      \"\"

    4. list remaining Kafka topics (kubernetes kafkatopic resources) with "apac-stage" prefix:
      \"\"
      manually remove all the remaining "apac-stage" prefixed topics. Note that it is expected that some topics remain - some of them have been created by Kafka Streams, for example.

MongoDB Cleanup

  1. Log into the APAC NPROD MongoDB through Studio 3T.
  2. Clear all the collections in the apac-stage database.
    \"\"
    Exceptions:
    • "batchInstance" collection
    • "quartz-" prefixed collections
    • "shedLock" collection

  3. Wait until MongoDB cleans all these collections (could take a few hours):
    \"\"

  4. Log into the APAC PROD MongoDB through Studio 3T. You want to have both connections in the same session.
  5. Copy below collections from APAC PROD (Ctrl+C):
    • keyIdRegistry
    • relationCache
    • sequenceCounters

  6. Right click APAC NPROD database "apac-stage" and choose "Paste Collections"
    \"\"

  7. Dialog will appear - use below options for each collection:
    • Collections Copy Mode: Append to existing target collection
    • Documents Copy Mode: Overwrite documents with same _id
    • Copy indices from the source collection: uncheck
      \"\"

  8. Wait until all the collections are copied.
    \"\"

Snowflake Cleanup

  1. Cleanup the base tables:

    \n
    TRUNCATE TABLE CUSTOMER.ENTITIES;\nTRUNCATE TABLE CUSTOMER.RELATIONS;\nTRUNCATE TABLE CUSTOMER.LOV_DATA;\nTRUNCATE TABLE CUSTOMER.MATCHES;\nTRUNCATE TABLE CUSTOMER.MERGES;\nTRUNCATE TABLE CUSTOMER.HIST_INACTIVE_ENTITIES;
    \n
  2. Run the full materialization jobs:

    \n
    CALL CUSTOMER.MATERIALIZE_FULL_ALL('M', 'CUSTOMER');\nCALL CUSTOMER.HI_MATERIALIZE_FULL_ALL('CUSTOMER');
    \n
  3. Check for any tables that haven't been cleaned properly:

    \n
    SELECT *\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;
    \n
  4. Run the materialization for those tables specifically, or run the TRUNCATE statements prepared by the query below:

    \n
    SELECT 'TRUNCATE TABLE ' || TABLE_SCHEMA || '.' || TABLE_NAME || ';'\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;
    \n
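The generated statements follow a simple pattern; the same can be sketched outside Snowflake from (schema, table) pairs (an illustrative helper, mirroring the SQL above):

```python
def truncate_statements(tables):
    """Build one TRUNCATE statement per (schema, table) pair,
    matching the output of the INFORMATION_SCHEMA query above."""
    return [f"TRUNCATE TABLE {schema}.{table};" for schema, table in tables]
```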

Re-Enabling Hub

  1. Get a confirmation that the Reltio data cloning process has finished.
  2. Re-enable the mdmhub apac-stage deployment job and perform a deployment of an adequate version.
  3. Uncomment previously commented (look: Disabling The Services, Kafka Cleanup, 1.) EFK transaction topic list, deploy apac-backend. Fluentd pods in the apac-backend namespace should recreate.
  4. Wait for both deployments to finish (should be performed one after another).
  5. Test the MDM Hub API - try sending a couple of GET requests to fetch some entities that exist in Reltio. Confirm that the result is correct and the requests are visible in Kibana (dashboard APAC-STAGE API Calls):
    \"\"

  6. (2025-05-19 Piotr: we no longer need to do this - Matches Enricher now deploys with minimum 1 pod in every environment) Run below command in your local Kafka client environment.

    \n
    kafka-console-consumer.sh --bootstrap-server kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 --group apac-stage-matches-enricher --topic apac-stage-internal-reltio-matches-events --consumer.config client.sasl.properties
    \n

    This needs to be done to create the consumer group, so that Keda can scale the deployment in the future.

Running The Hub Reconciliation

  1. After confirming that Hub is up and working correctly, navigate to APAC NPROD Airflow:
    https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/home

  2. Trigger the hub_reconciliation_v2_apac_stage DAG:
    \"\"
    \"\"

  3. To minimize the chances of overfilling the Kafka storage, set retention of reconciliation metrics topics to an hour:
    1. Navigate to APAC NPROD AKHQ:
      https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/

    2. Find below topics and navigate to their "Configs" tabs:
    3. For each topic, find the config "retention.ms" (do not mistake it with "delete.retention.ms", which is responsible for compaction) and set it to 3600000. Apply changes.
      \"\"

  4. Monitor the DAG, event processing and Kafka/Elasticsearch storage.
  5. After the DAG finishes, disable reconciliation jobs (if reconciliations start uncontrollably before the data is fully restored, it will unnecessarily increase the workload):
    1. Manually disable the hub_reconciliation_v2_apac_stage DAG: https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/dags/hub_reconciliation_v2_apac_stage/grid
    2. Manually disable the reconciliation_snowflake_apac_stage DAG: https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/dags/reconciliation_snowflake_apac_stage/grid
  6. After all reconciliation events are processed, the environment is ready to use. Compare entity/relation counts between Reltio-MongoDB-Snowflake to confirm that everything went well.
  7. Re-enable reconciliation jobs from 5.





" + }, + { + "title": "Full Environment Refresh - Legacy (Docker Environments)", + "pageID": "164470082", + "pageLink": "/pages/viewpage.action?pageId=164470082", + "content": "

Steps to take when a Hub environment needs to be cleaned up or refreshed.

1. Preparation

$ ./consumer_groups_sasl.sh --describe --group <group_name> | sort

For every consumer group in this environment. This will list currently connected consumers.

If there are external consumers connected they will prevent deletion of topics they're connected to. Contact people responsible for those consumers to disconnect them.



2. Stop GW/Hub components: subscriber, publisher, manager, batch_channel


$ docker stop <container name>


3. Double-check that consumer groups (internal and external) have been disconnected


4. Delete all topics:

a) Preparation:

b) Deleting the topics:

          (...) continue for all topics

5. Check whether topics are deleted on disk and using $ ./topics.sh --list 

6. Recreate the topics by launching the Ansible playbook with parameter create_or_update: True set for desired topics in topics.yml

\"\"

7. Cleanup MongoDB:


8. After confirming everything is ready (in case of environment refresh there has to be a notification from Reltio that it's ready) restart GW and Hub components

9. Check component logs to confirm they started up and connected correctly.


" + }, + { + "title": "Hub Application:", + "pageID": "302706338", + "pageLink": "/pages/viewpage.action?pageId=302706338", + "content": "" + }, + { + "title": "Batch Channel: Importing MAPP's Extract", + "pageID": "164470063", + "pageLink": "/display/GMDM/Batch+Channel%3A+Importing+MAPP%27s+Extract", + "content": "

To import MAPP's extract you have to:

  1. Have original extract (eg. original.csv) which was uploaded to Teams channel,
  2. Open it in Excel and save as "CSV (Comma delimited) (*.csv)",
  3. Run dos2unix tool on the file.
  4. Do steps from 2 and 3 on extract file (eg. changes.csv) received form MAPP's team,
  5. Compare the original file to the file with changes and select only the lines which were changed in the second file: ( head -1 changes.csv && diff original.csv changes.csv | grep '^>' | sed 's/^> //' ) > result.csv
  6. Divide the result file into smaller ones by running the splitFile.sh script: ./splitFile.sh result.csv. The script will generate a set of files whose names end with _{idx}.{extension}, e.g.: result_00.csv, result_01.csv, result_02.csv etc.
  7. Upload the result set of files to s3 location: s3://pfe-baiaes-eu-w1-project/mdm/inbound/mapp/. This action will trigger batch-channel component, which will start loading changes to MDM.
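The shell pipeline in step 5 can be approximated in Python. This is a set-difference sketch: unlike diff it ignores line order, which is fine for CSV rows:

```python
def changed_lines_with_header(original_lines, changed_lines):
    """Python equivalent of:
    ( head -1 changes.csv && diff original.csv changes.csv
      | grep '^>' | sed 's/^> //' ) > result.csv
    i.e. the header plus every data line present in the changed
    file but not in the original."""
    original = set(original_lines[1:])  # skip the header
    return [changed_lines[0]] + [
        line for line in changed_lines[1:] if line not in original
    ]
```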


\"\"splitFile.sh

" + }, + { + "title": "Callback Service: How to Find Events Stuck in Partial State", + "pageID": "273681936", + "pageLink": "/display/GMDM/Callback+Service%3A+How+to+Find+Events+Stuck+in+Partial+State", + "content": "

What is partial state?

When an event gets processed by Callback Service and any change is done at the precallback stage, the event is not sent further to Event Publisher. It is expected that within a few seconds another event will arrive, signaling the change done by the precallback logic; this one gets passed to the Publisher and to downstream clients/Snowflake, provided that the precallback detects no further need for a change.

Sometimes the second event never arrives; this is what we call a partial state. It means that the update event will not reach Snowflake and the downstream clients. The PartialCounter functionality of Callback Service was implemented to monitor such behaviour.

How to identify that an event is stuck in partial state?

PartialCounter counts events which have not been passed down to Event Publisher (identified by Reltio URI) and exports this count as a Prometheus (Actuator) metric. The Prometheus alert "callback_service_partial_stuck_24h" notifies us that an event has been stuck for more than 24 hours.
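The PartialCounter idea can be sketched as a map from Reltio URI to held-back events, cleared when the follow-up event passes through. This is a simplified model for illustration, not the actual implementation:

```python
class PartialCounter:
    """Track URIs whose event was held back at the precallback stage and
    has not yet been followed by a pass-through event."""

    def __init__(self):
        self._pending = {}

    def held_back(self, uri):
        # Precallback changed something; the event was not forwarded.
        self._pending[uri] = self._pending.get(uri, 0) + 1

    def passed_through(self, uri):
        # A follow-up event reached Event Publisher; the URI is no longer partial.
        self._pending.pop(uri, None)

    def stuck(self):
        # URIs currently in partial state (exported as the metric).
        return sorted(self._pending)
```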

How to find events stuck in partial state?

Use below command to fetch the list of currently stuck events as JSON array (example for emea-dev). You will have to authorize using mdm_test_user or mdm_admin:

\n
# curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/precallback/partials
\n
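Once fetched, the list can be summarized with jq. This is a minimal sketch assuming the endpoint returns a JSON array of objects carrying the entity URI in an entityUri field - the field name is an assumption, so check the Swagger documentation for the actual response schema:

```shell
# sketch: count stuck events and list their URIs with jq
# assumption: response is a JSON array with an "entityUri" field per element
partials='[{"entityUri":"entities/abc"},{"entityUri":"entities/def"}]'
# in practice: partials=$(curl -s https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/precallback/partials)
echo "$partials" | jq 'length'           # how many events are stuck
echo "$partials" | jq -r '.[].entityUri' # which entities to reconcile
```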

\"\"


More details can be found in Swagger Documentation: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/

What to do?

Events identified as stuck in partial state should be reconciled.

" + }, + { + "title": "Integration Test - how to run tests locally from your computer to target environment", + "pageID": "337839648", + "pageLink": "/display/GMDM/Integration+Test+-+how+to+run+tests+locally+from+your+computer+to+target+environment", + "content": "

Steps:

  1. First, choose the environment in the Jenkins integration tests directory:
  2. https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/
  3. This example is based on APAC DEV:
  4. go to https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/
  5. choose the latest RUN and click Workspace on the left
  6. \"\"
  7. Click on /home/jenkins workspace link
  8. \"\"
  9. Go to /code/mdm-integretion-tests/src/test/resources/ 
  10. \"\"
  11. Download 3 files
    1. citrus-application.properties
    2. kafka_jaas.conf
    3. kafka_truststore.jks
  12. Edit citrus-application.properties:
    1. change the local K8s URLs to the real URLs and the local PATH. Leave the other variables as is.
    2. If needed, use the KeePass file that contains all URLs: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/credentials.kdbx


Example code that is adjusted to APAC DEV

API URLs + local PATH to certs

This is just an example from APAC DEV that contains the C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\ path - replace it with your own code location.

\n
citrus.spring.java.config=com.COMPANY.mdm.tests.config.SpringConfiguration\n\njava.security.auth.login.config=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_jaas.conf\n\nreltio.oauth.url=https://auth.reltio.com/\nreltio.oauth.basic=secret\nreltio.url=https://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgS\nreltio.username=svc-pfe-mdmhub\nreltio.password=secret\nreltio.apiKey=secret\nreltio.apiSecret=secret\n\nmongo.dbUrl=mongodb://admin:secret@mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017/reltio_apac-dev?authMechanism=SCRAM-SHA-256&authSource=admin\nmongo.url=mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017\nmongo.dbName=reltio_apac-dev\nmongo.username=mdmgw\nmongo.password=secret\n\ngateway.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-dev\ngateway.username=mdm_test_user\ngateway.apiKey=secret\n\nbatchService.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-apac-dev\nbatchService.username=mdm_test_user\nbatchService.apiKey=secret\nbatchService.limitedUsername=mdm_test_user_limited\nbatchService.limitedApiKey=secret\n\nmapchannel.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/dev-map-api\nmapchannel.username=mdm_test_user\nmapchannel.apiKey=secret\n\napiRouter.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-apac-dev\napiRouter.dcrReltioUserApiKey=secret\napiRouter.dcrOneKeyUserApiKey=secret\napiRouter.intTestUserApiKey=secret\napiRouter.dcrReltioUser=mdm_dcr2_test_reltio_user\napiRouter.dcrOneKeyUser=mdm_dcr2_test_onekey_user\napiRouter.intTestUser=mdm_test_user\n\nadminService.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-dev\nadminService.intTestUserApiKey=secret\nadminService.intTestUser=mdm_test_user\n\ndeg.url=https://hcp-gateway-dev.eu.cloudhub.io/v1\ndeg.oAuth2Service=https://hcp-gateway-dev.eu.cloudhub.io/\ndeg.apiKey=secret\ndeg.apiSecret=secret\n\nkafka.brokers=kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094\nka
fka.group=int_test_dev\nkafka.topic=apac-dev-out-simple-all-int-tests-all\nkafka.security.protocol=SASL_SSL\nkafka.sasl.mechanism=SCRAM-SHA-512\nkafka.ssl.truststore.location=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_truststore.jks\nkafka.ssl.truststore.password=secret\nkafka.receive.timeout=60000\nkafka.purgeEndpoints.timeout=100000\n...\n...\n...
\n



  1. Now go to your local code checkout - mdm-hub-inbound-services\\mdm-integretion-tests
  2. Copy 3 files to the mdm-integretion-tests/src/test/resources
  3. \"\"
  4. Select the test and click RUN
  5. \"\"
  6. END - the result: you are now running the Jenkins integration tests from your local computer against the target DEV environment.
  7. Now you can check the logs locally and repeat.







" + }, + { + "title": "Manager: Reload Entity - Fix COMPANYAddressID Using Reload Action", + "pageID": "229180577", + "pageLink": "/display/GMDM/Manager%3A+Reload+Entity+-+Fix+COMPANYAddressID+Using+Reload+Action", + "content": "
  1. Before starting, check which DQ rules have the -reload action on the list. Currently these are SourceMatchCategory and COMPANYAddressId
    1. check here - - example dq rule
    2. update with -reload operation to reload more DQ rules
  2. Generate events using the script :
    1.  script
    2. or
    3. script - fix SourceMatchCategory without ONEKEY
    4. the script gets all ACTIVE entities with Addresses
      1. that have a missing COMPANYAddressId
      2. whose COMPANYAddressID is lower than the correct value for each env: emea 5000000000, amer 6000000000, apac 7000000000
    5. The script generates events, for example:
      1. entities/lwBrc9K|{"targetEntity":{"entityURI":"entities/lwBrc9K","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}
        entities/1350l3D6|{"targetEntity":{"entityURI":"entities/1350l3D6","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}
        entities/1350kZNI|{"targetEntity":{"entityURI":"entities/1350kZNI","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}
        entities/cPSKBB9|{"targetEntity":{"entityURI":"entities/cPSKBB9","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}
  3. Make a fix for COMPANYAddressID values that are lower than the correct value for each env
    1. Go to the keyIdRegistry Mongo collection
    2. find all entries that have a generatedId lower than: emea 5000000000, amer 6000000000, apac 7000000000
    3. increase the generatedId by adding the correct value for the given environment using the script - script
  4. Get the file and push it to the <env>-internal-async-all-reload-entity topic
    1. ./start_sasl_producer.sh <env>-internal-async-all-reload-entity
    2. or using the input file  
    3. ./start_sasl_producer.sh <env>-internal-async-all-reload-entity < reload_dev_emea_pack_entities.txt (a file that contains the JSONs generated by the Mongo script, one per line)
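The reload events shown in step 2 can also be assembled with a short shell loop. This is a sketch assuming a plain input file uris.txt with one entities/<id> per line (the file names are assumptions; the source and uriMask values match the example events above):

```shell
# sketch: build reload events ("<key>|<json>", one per line) from a list of entity URIs
printf 'entities/lwBrc9K\nentities/1350l3D6\n' > uris.txt   # demo input; in practice your own list
while read -r uri; do
  printf '%s|{"targetEntity":{"entityURI":"%s","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}\n' "$uri" "$uri"
done < uris.txt > reload_events.txt
```

The resulting reload_events.txt can then be fed to ./start_sasl_producer.sh as in step 4.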



How to run a script on docker:

example for emea DEV:

go to - svc-mdmnpr@euw1z2dl111
docker exec -it mongo_mongo_1 bash
cd /data/configdb
create the script - touch reload_entities_fix_COMPANYaddressid_hub.js
edit the header:
db = db.getSiblingDB("<DB>")
db.auth("mdm_hub", "<PASS>")
RUN: nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_dev reload_entities_fix_COMPANYaddressid_hub.js &

OR
nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_dev reload_entities_fix_sourcematch_hub_DEV.js > smc_DEV_FIX.out 2>&1 &
nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_qa reload_entities_fix_sourcematch_hub_QA.js > smc_QA_FIX.out 2>&1 &
nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_stage reload_entities_fix_sourcematch_hub_STAGE.js > smc_STAGE_FIX.out 2>&1 &

" + }, + { + "title": "Manager: Resubmitting Failed Records", + "pageID": "164470200", + "pageLink": "/display/GMDM/Manager%3A+Resubmitting+Failed+Records", + "content": "

There is a new API in Manager for getting/resubmitting/removing failed records from batches.

1. Get failed records - it takes a list of FieldFilter objects and returns a list of errors based on the provided criteria

a. Request

i. List of FieldFilter objects

ii. Example:

[
        {
            "field" : "HubAsyncBatchServiceBatchName",
            "operation" : "Equals",
            "value" : "testBatchBundle"
        }
    
]

b. Response

i. List of Error objects

ii. Example:

[

    {
        "id" : "5fa93377e720a55f0bb68c99",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "0+3j45V7S1K1GT2i6c3Mqw",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:b09b6085-28dc-451d-85b6-fe3ce2079446\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    },
    {
        "id" : "5fa93378e720a55f0bb68ca6",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "0+3j45V7S1K1GT2i6c3Mqw",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:25bfc672-9ba1-44a5-b3c1-d657de701d76\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    },
    {
        "id" : "5fa93377e720a55f0bb68c9a",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "0+3j45V7S1K1GT2i6c3Mqw",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:60067d46-07a6-4902-b9e8-1bf2acbc8a6e\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    },
    {
        "id" : "5fa93377e720a55f0bb68c9b",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "0+3j45V7S1K1GT2i6c3Mqw",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:e8d05d96-7aa3-4059-895e-ce20550d7ead\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    },
    {
        "id" : "5fa96ba300061d51e822854a",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "iN2LB3TiT3+Sd5dYemDGHg",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:973411ec-33d4-477e-a6ae-aca5a0875abb\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    }

]


2. Resubmit failed records - it takes a list of FieldFilter objects and returns the list of errors that were resubmitted - if a record was resubmitted correctly, its resubmitted flag is set to true

a.  Request

i. List of FieldFilter objects

b. Response

i. List of Error objects

3. Remove failed records - it takes a list of FieldFilter objects containing the criteria for removing error objects and returns the list of errors that were deleted - if a record was deleted correctly, its deleted flag is set to true

a.  Request

i. List of FieldFilter objects

b. Response

i. List of Error objects
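A FieldFilter request body like the one in the example above can be assembled with jq rather than string concatenation, which keeps quoting safe. A minimal sketch (only the body is shown, since the endpoint paths are not listed on this page):

```shell
# sketch: build the FieldFilter request body for a given batch name with jq
batch="testBatchBundle"
body=$(jq -n --arg v "$batch" \
  '[{"field":"HubAsyncBatchServiceBatchName","operation":"Equals","value":$v}]')
echo "$body"
```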

" + }, + { + "title": "Issues diagnosis", + "pageID": "438905271", + "pageLink": "/display/GMDM/Issues+diagnosis", + "content": "" + }, + { + "title": "API issues", + "pageID": "438905273", + "pageLink": "/display/GMDM/API+issues", + "content": "

Symptoms


Confirmation

To confirm whether a problem with the API is really occurring, you have to invoke some operation that is exposed via the HTTP interface. To do this you can use Postman or another tool that can run HTTP requests. Below you can find a few examples that describe how to check the API in the components that expose it:


Finding the reason

The diagram below presents the HTTP request processing flow with the components engaged:

\"\"


" + }, + { + "title": "Kafka:", + "pageID": "164470059", + "pageLink": "/pages/viewpage.action?pageId=164470059", + "content": "" + }, + { + "title": "Client Configuration", + "pageID": "243862610", + "pageLink": "/display/GMDM/Client+Configuration", + "content": "


      1. Installation

To install Kafka, binary version 2.8.1 should be downloaded and installed from

https://kafka.apache.org/downloads


      2. The email from the MDMHUB Team

In the email received from the MDMHUB support team you can find connection parameters such as the server address, topic name, and group name, plus the following files:


      3. Example command to test client and configuration

To connect to Kafka using the command line client, save the delivered files on your disk and run the following commands:

export KAFKA_OPTS=-Djava.security.auth.login.config={ ●●●●●●●●●●●● Kafka_client_jaas.conf }

kafka-console-consumer.sh --bootstrap-server { kafka server } --group { group } --topic { topic_name } --consumer.config { consumer config file eg. client.sasl.properties}


For example for amer dev:

●●●●●●●●●●● in provided file: kafka_client_jaas.conf

Kafka server: kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094

Group: dev-mule

Topic: dev-out-full-pforcerx-grv-all

Consumer config is in provided file: client.sasl.properties

export KAFKA_OPTS=-Djava.security.auth.login.config=kafka_client_jaas.conf

kafka-console-consumer.sh --bootstrap-server kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 --group dev-mule --topic dev-out-full-pforcerx-grv-all --consumer.config client.sasl.properties
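For reference, a client.sasl.properties for this setup typically looks like the sketch below. The values are assumptions based on the connection parameters used elsewhere in this documentation (SASL_SSL with SCRAM-SHA-512 and a JKS truststore) - the authoritative file is the one delivered by the MDMHUB team:

```properties
# sketch of client.sasl.properties - use the file delivered by the MDMHUB team
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
ssl.truststore.location=kafka_truststore.jks
ssl.truststore.password=<provided by the MDMHUB team>
```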

" + }, + { + "title": "Client Configuration in k8s", + "pageID": "284806978", + "pageLink": "/display/GMDM/Client+Configuration+in+k8s", + "content": "

Each of the k8s clusters has a kafka-client pod installed. To find this pod you have to list all pods deployed in the *-backend namespace and select the pod whose name starts with kafka-client:

\n
kubectl get pods --namespace emea-backend  | grep kafka-client
\n


To run commands on this pod you have to note its name and use it in the "kubectl exec" command:

Using kubectl exec with kafka client
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- <command>
\n


As a <command> you can use any of the standard Kafka client scripts, e.g. kafka-consumer-groups.sh, or one of the wrapper scripts which simplify the configuration of the standard scripts (broker and authentication configuration). These are the following scripts:


The kafka-client pod also has another kafka tool named kcat. To use this tool you have to run commands on the kafka-kcat container using the wrapper script kcat.sh:

Running kcat.sh on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -c kafka-kcat -- kcat.sh
\n



NOTE: Remember that all wrapper scripts work with admin permissions.


Examples

Describe the current offsets of a group

Describe group dev_grv_pforcerx on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- consumer_groups.sh --describe --group dev_grv_pforcerx
\n


Reset the offset of a group to earliest

Reset offset to earliest for group group1 and topic gbl-dev-internal-gw-efk-transactions on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- reset_offsets.sh --group group1 --to-earliest gbl-dev-internal-gw-efk-transactions
\n


Consume events from the beginning of a topic. It will produce output where each line has the following format: <message key>|<message body>

Read topic gbl-dev-internal-gw-efk-transactions from beginning on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- start_consumer.sh gbl-dev-internal-gw-efk-transactions --from-beginning
\n


Send messages defined in a text file to a kafka topic. Each message in the file has to have the following format: <message key>|<message body>

Send all messages from file file_with_messages.csv to topic gbl-dev-internal-gw-efk-transactions
\n
kubectl exec -i --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- start_producer.sh gbl-dev-internal-gw-efk-transactions < file_with_messages.csv
\n


Delete consumer group on topic

Delete consumer group test on topic gbl-dev-internal-gw-efk-transactions emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- consumer_groups.sh --delete-offsets --group test gbl-dev-internal-gw-efk-transactions
\n


List topics and their partitions using kcat

List topics info on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -c kafka-kcat -- kcat.sh -L
\n



" + }, + { + "title": "How to Add a New Consumer Group", + "pageID": "164470080", + "pageLink": "/display/GMDM/How+to+Add+a+New+Consumer+Group", + "content": "

These instructions demonstrate how to add an additional consumer group to an existing topic.


  1. Open the file "topics.yml" located under mdm-reltio-handler-env\\inventory\\<environment_name>\\group_vars\\kafka and find the topic to be updated. In this example a new consumer group "flex_dev_prj2" was added to topic "dev-out-full-flex-all".

\"\"

   2. Make sure the parameter "create_or_update" is set to True for the desired topic:

\"\"

   3.  Additionally, double-check that the parameter "install_only_topics" in the "all.yml" file is set to True:

\"\"

    4. Save the files after making the changes. Run ansible to update the configuration using the following command:  ansible-playbook install_hub_broker.yml -i inventory/<environment_name>/inventory --limit broker1 --vault-password-file=~/vault-password-file

\"\"

   5. Double-check ansible output to make sure changes have been implemented correctly.

   6. Change the "create_or_update" parameter in "topics.yml" back to False.

   7. Save the file and upload the new configuration to git. 






" + }, + { + "title": "How to Generate JKS Keystore and Truststore", + "pageID": "164470062", + "pageLink": "/display/GMDM/How+to+Generate+JKS+Keystore+and+Truststore", + "content": "

This instruction is based on the current GBL PROD Kafka keystore.jks and truststore.jks generation.


  1. Create a certificate pair using keytool genkeypair command 
    1. keytool -genkeypair -alias kafka.mdm-gateway.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.mdm-gateway.COMPANY.com, O=COMPANY, L=mdm_hub, C=US"  
    2. set the security password, set the same ●●●●●●●●●●●● the key passphrase
  2. Now create a certificate signing request ( csr ) which has to be passed on to our external / third party CA ( Certificate Authority ).
    1. keytool -certreq -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.csr -keystore server.keystore.jks 
  3. Send the csr file through the Request Manager:
    1. Log in to the BT On Demand
    2. Go to Request Manager.
    3. Click "Continue"
    4. Search for " Digital Certificates"
    5. Select the " Digital Certificates" Application and click "Continue"
    6. Click "Checkout"
    7. Select "COMPANY SSL Certificate - Internal Only" and fill:
      1. Copy CSR file
      2. fill SAN e.g from the GBL PROD Kafka: 

      3. fill email address

    8. select "No" for additional SSL Cert request, 
    9. Continue
    10. Send the CSR request.
  4. When you receive the signed certificate verify the certificate
    1. Check the Subject: CN and O should be filled just like in the  1.a.
    2. Check the SAN: there should be the list of hosts from 3.g.ii.
  5. If the certificate is correct CONTINUE:
  6. Now we need to import these certificates into server.keystore.jks keystore. Import the intermediate certificate first --> then the root certificate --> and then the signed cert.
    1. keytool -importcert -alias inter -file PBACA-G2.cer -keystore server.keystore.jks
    2. keytool -importcert -alias root -file RootCA-G2.cer -keystore server.keystore.jks
    3. keytool -importcert -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.cer -keystore server.keystore.jks
  7. After importing all three certificates you should see the "Certificate reply was installed in keystore" message.
  8. Now list the keystore and check if all the certificates are imported successfully.
    1. keytool -list -keystore server.keystore.jks
    2. Your keystore contains 3 entries
    3. For debugging start with "-v" parameter
  9. Let's create a truststore now. Set the security ●●●●●●●●●● different than the keystore
    1. keytool -import -file PBACA-G2.cer -alias inter -keystore server.truststore.jks
    2. keytool -import -file RootCA-G2.cer -alias root -keystore server.truststore.jks




COMPANY Certificates:

\"\"PBACA-G2.cer \"\"RootCA-G2.cer


" + }, + { + "title": "Reset Consumergroup Offset", + "pageID": "243862614", + "pageLink": "/display/GMDM/Reset+Consumergroup+Offset", + "content": "

To reset an offset on a Kafka topic you need to have the command line client configured. The tool that performs this action is kafka-consumer-groups.sh. You have to specify a few parameters which determine where you want to reset the offset:

and specify the offset value by providing one of the following parameters:

1. --shift-by

Reset offsets by shifting the current offset by the provided number, which can be negative or positive:

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --shift-by { number from formula } --topic { topic } --execute


2. --to-datetime

Switch which can be used to reset the offset to a datetime. The date should be in the format 'YYYY-MM-DDTHH:mm:SS.sss'

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --to-datetime 2022-02-02T00:00:00.000Z --topic { topic } --execute


3. --to-earliest

Switch which can be used to reset the offsets to the earliest (oldest) offset which is available in the topic.

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --to-earliest --topic { topic } --execute


4. --to-latest

Switch which can be used to reset the offsets to the latest (the most recent) offset which is available in the topic.

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --to-latest --topic { topic } --execute


Example

Let's assume that you want your consumer to have 10000 messages to read and the topic has 10 partitions. The first step is moving the current offset to the latest to make sure that there are no messages left to read on the topic:

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --to-latest --topic { topic } --execute

Then calculate the shift needed to achieve the requested lag using the following formula:

-1 * desired_lag / number_of_partitions

In our example the result will be: -1 * 10000 / 10 = -1000. Use this value in the command below:

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --shift-by -1000 --topic { topic } --execute
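The formula can be scripted so the shift value is computed rather than typed by hand. A minimal sketch (variable names are assumptions; note that the integer division assumes the desired lag divides evenly across partitions):

```shell
# sketch: compute the --shift-by value from a desired lag and partition count
desired_lag=10000
partitions=10
shift_by=$(( -1 * desired_lag / partitions ))
echo "$shift_by"
```

The printed value (-1000 for this example) is then passed to --shift-by.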



" + }, + { + "title": "Kong gateway", + "pageID": "462065054", + "pageLink": "/display/GMDM/Kong+gateway", + "content": "" + }, + { + "title": "Kong gateway migration", + "pageID": "462065057", + "pageLink": "/display/GMDM/Kong+gateway+migration", + "content": "

Installation procedure

  1. Deploy crds

    \n
    # Download package with crds to current directory\ntar -xzf crds_to_deploy.tar.gz\ncd crds_to_deploy/\nbase=$(pwd)
    \n


    1. Backup olds crds

      \n
      # Switch to proper k8s context\nkubectx atp-mdmhub-nprod-apac\n\n# Get all crds from cluster and saves them into file ${crd_name}_${env}.yaml\n# Args:\n# $1 = env\ncd $base\nmkdir old_apac_nprod\ncd old_apac_nprod\nget_crds.sh apac_nprod\n\n
      \n


    2. create new crds

      \n
      cd $base/new/splitted/\n# create new crds\nfor i in $(ls); do echo $i; kubectl create -f $i ; done\n# apply new crds\nfor i in $(ls); do echo $i; kubectl apply -f $i ; done\n# replace crds that were not properly installed \nfor i in   kic-crds.yaml01 kic-crds.yaml03 kic-crds.yaml05 kic-crds.yaml07 kic-crds.yaml10 kic-crds.yaml11; do echo $i ; kubectl replace -f $i; done
      \n


    3. Apply new version of gatewayconfigrations 

      \n
      cd $base/new\nkubectl replace -f gatewayconfiguration-new.yaml
      \n


    4. Apply old version of kongingress

      \n
      cd $base/old\nkubectl replace -f kongingresses.configuration.konghq.com.yaml
      \n


      # Performing tests is advised to check if everything is working
  2. Deploy operators with a version that includes kong-gateway-operator (4.32.0 or newer)
    # Performing tests is advised to check if everything is working
  3. Merge configuration
    http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1967/overview

  4. Deploy backend (4.33.0-project-boldmove-SNAPSHOT or newer)
    # Performing tests is advised to check if everything is working

  5. Deploy mdmhub components (4.33.0-project-boldmove-SNAPSHOT or newer)
    # Performing tests is advised to check if everything is working

Tests

  1. Checking all ingresses
    \n
    # Change /etc/hosts if dns's are not yet changed. To obtain all hosts that should be modified in /etc/hosts: \n# Switch to correct k8s context\n# k get ingresses -o custom-columns=host0:.spec.rules[0].host -A | tail -n +2 | sort | uniq | tr '\\n' ' '\n# To get dataplane svc: \n# k get svc -n kong -l gateway-operator.konghq.com/dataplane-service-type=ingress\nendpoints=$(kubectl get  ingress -A  -o custom-columns="NAME:.metadata.name,HOST:.spec.rules[0].host,PATH:.spec.rules[0].http.paths[0].path" | tail -n +2 | awk '{print "https://"$2":443"$3}')\nwhile IFS= read -r line; do echo -e "\\n\\n---- $line ----"; curl -k $line; done <<< $endpoints
    \n
  2. Checking plugins 
    \n
    export apikey="xxxxxxxxx"\nexport reltio_authorization="yyyyyyyyy"\nexport consul_token="zzzzzzzzzzz"\n\n\nkey-auth:\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev -H "apikey: $apikey"\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/2c9cf5a5 -H 'apikey: $apikey'\n\nmdm-external-oauth:\n  curl --location --request POST 'https://devfederate.COMPANY.com/as/token.oauth2?grant_type=client_credentials' --header 'Content-Type: application/x-www-form-urlencoded' --header 'Origin: http://10.192.71.136:8000' --header "Authorization: Basic $reltio_authorization" | jq .access_token\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-dev/entities/2c9cf5a5 --header 'Authorization: Bearer access_token_from_previous_command'\n\ncorrelation-id:\n  curl -v https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/2c9cf5a5 -H "apikey: $apikey" 2>&1 | grep hub-correlation-id  \n\nbackend-auth:\n  kibana-backend-auth:\n   # Web browser \n    https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/\n\nsession:\n   # Web browser \n   # Open debugger console in web browser and check if kong cookies are set\n\npre-function:\n  k logs -n emea-backend -l app=consul -f --tail=0\n  k exec -n airflow airflow-scheduler-0 -- curl -k http://http-mdmhub-kong-kong-proxy.kong.svc.cluster.local:80/v1/kv/dev?token=$consul_token\n\nopentelemetry:\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/testtest -H "apikey: $apikey"\n  +\n  # Web browser\n  https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/apm/services/kong/overview?comparisonEnabled=true&environment=ENVIRONMENT_ALL&kuery=&latencyAggregationType=avg&offset=1d&rangeFrom=now-15h&rangeTo=now&serviceGroup=&transactionType=request\n\nprometheus:\n  k exec -it dataplane-kong-knkcn-bjrc7-75bb85fc4c-2msfv -- /bin/bash\n  curl localhost:8100/metrics\n\n
    \n
  3. Check logs
    1. Gateway operator
    2. Kong operator
    3. Old kong pod - proxy and ingress controller
    4. New kong dataplane
    5. New kong controlPlane
  4. Status of new kong objects: 
    1. Dataplane
    2. Controlplane
    3. Gateway
      \n
      k get Gateway,dataplane,controlplane -n kong
      \n
  5. Check services in old and new kong 
    1. Old kong
      \n
      services=$(k exec -n kong mdmhub-kong-kong-f548788cd-27ltl -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .
      \n
    2. New kong
      \n
       services=$(k exec -n kong dataplane-kong-knkcn-bjrc7-5c9f596ff9-t94lf -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .
      \n



Reference

Kong operator configuration

https://github.com/Kong/kong-operator/blob/main/deploy/crds/charts_v1alpha1_kong_cr.yaml

Kong gateway operator crd's reference

https://docs.konghq.com/gateway-operator/latest/reference/custom-resources/#dataplanedeploymentoptions

\"\"get_crds.sh\"\"crds_to_deploy.tar.gz

" + }, + { + "title": "MongoDB:", + "pageID": "164470061", + "pageLink": "/pages/viewpage.action?pageId=164470061", + "content": "" + }, + { + "title": "Mongo-SOP-001: Mongo Scripts", + "pageID": "164470056", + "pageLink": "/display/GMDM/Mongo-SOP-001%3A+Mongo+Scripts", + "content": "
\n
hub_errors\n db.hub_errors.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.hub_errors.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.hub_errors.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.hub_errors.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_errors\n db.gateway_errors.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.gateway_errors.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.gateway_errors.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.gateway_errors.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_transactions\n db.gateway_transactions.createIndex({transactionTS: -1}, {background: true, name: "idx_transactionTS_-1"});\n db.gateway_transactions.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n db.gateway_transactions.createIndex({requestId: -1}, {background: true, name: "idx_requestId_-1"});\n db.gateway_transactions.createIndex({username: -1}, {background: true, name: "idx_username_-1"});\n\n\nentityHistory\n db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n 
db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n\n\nentityRelations\n db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityRelations.createIndex({entityType: -1}, {background: true, name: "idx_relationType"});\n db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n\n\n\n\n\n
\n
\n
var start = new Date().getTime();\n\nvar result = db.getCollection("entityRelations").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t        "status" : "ACTIVE"\n\t\t\t}\n\t\t},\n\n//\t\t// Stage 2\n//\t\t{\n//\t\t\t$limit: 1000\n//\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$lookup: // Equality Match\n\t\t\t{\n\t\t\t    from: "entityHistory",\n\t\t\t    localField: "relation.endObject.objectURI",\n\t\t\t    foreignField: "_id",\n\t\t\t    as: "matched_entity"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t        "$or" : [\n\t\t\t            {\n\t\t\t                "matched_entity.status" : "INACTIVE"\n\t\t\t            }, \n\t\t\t            {\n\t\t\t                "matched_entity.status" : "LOST_MERGE"\n\t\t\t            },\n\t\t\t            {\n\t\t\t                "matched_entity.status" : "DELETED"\n\t\t\t            }            \n\t\t\t        ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t\t\t\t  _id:"$matched_entity.status", \n\t\t\t\t\t\t  count:{$sum:1}, \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n    \t\nprintjson(result._batch)    \t\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")
\n
\n
print("START")\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityHistory").aggregate(\n   // Pipeline\n   [\n      // Stage 1\n      {\n         $match: {\n                 "status" : "LOST_MERGE",\n                 "$and" : [\n                     {\n                         "$or" : [\n                             {\n                                 "mdmSource" : "RELTIO"\n                             },\n                             {\n                                 "mdmSource" : {\n                                     "$exists" : false\n                                 }\n                             }\n                         ]\n                     }\n                 ]\n         }\n      },\n\n      // Stage 2\n      {\n         $graphLookup: {\n             "from" : "entityHistory",\n             "startWith" : "$_id",\n             "connectFromField" : "parentEntityId",\n             "connectToField" : "_id",\n             "as" : "master",\n             "maxDepth" : 10.0,\n             "depthField" : "depthField"\n         }\n      },\n\n      // Stage 3\n      {\n         $unwind: {\n             "path" : "$master",\n             "includeArrayIndex" : "arrayIndex",\n             "preserveNullAndEmptyArrays" : false\n         }\n      },\n\n      // Stage 4\n      {\n         $match: {\n             "master.status" : {\n                 "$ne" : "LOST_MERGE"\n             }\n         }\n      },\n\n      // Stage 5\n      {\n         $redact: {\n             "$cond" : {\n                 "if" : {\n                     "$ne" : [\n                         "$master._id",\n                         "$parentEntityId"\n                     ]\n                 },\n                 "then" : "$$KEEP",\n                 "else" : "$$PRUNE"\n             }\n         }\n      },\n\n   ]\n\n   // Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\nresult.forEach(function(obj) {\n    var id = obj._id;\n    var masterId = 
obj.master._id;\n\n   if( masterId !== undefined){\n\n     print( id + " " + " " + obj.parentEntityId +" replaced to "+ masterId);\n     var currentTime = new Date().getTime();\n\n      var result = db.entityHistory.update( {"_id":id}, {$set: { "parentEntityId":masterId, "forceModificationDate": NumberLong(currentTime) } });\n      printjson(result);\n   }\n\n});\n\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\n\n\n
\n
\n
db = db.getSiblingDB('reltio')\nvar file = cat('crosswalks.txt');  // read the  crosswalks file\nvar crosswalk_ids = file.split('\\n'); // create an array of crosswalks\nfor (var i = 0, l = crosswalk_ids.length; i < l; i++){ // for every crosswalk search it in the entityHistory\n    print("ID crosswalk: " + crosswalk_ids[i])\n    var result =  db.entityHistory.find({\n         status: { $eq: "ACTIVE" },\n         "entity.crosswalks.value": crosswalk_ids[i]\n    }).projection({id:1, country:1})\n    printjson(result.toArray());\n}
\n
\n
db.getCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/HCP" , mdmSource: "RELTIO",         "lastModificationDate" : {\n\t\t\t            "$gte" : NumberLong(1529966574477)\n\t\t\t        } }\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$project: { _id: 0, "entity.crosswalks": 1,"entity.uri":2, "entity.updatedTime":3 }\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: "$entity.crosswalks"\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {_id:"$entity.crosswalks.value", count:{$sum:1}, entities:{$push: {uri:"$entity.uri", modificationTime:"$entity.updatedTime"}}}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$match: { count: { $gte: 2 } }\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$redact: {\n\t\t\t    "$cond" : {\n\t\t\t        "if" : {\n\t\t\t            "$ne" : [\n\t\t\t                "$entity.crosswalks.0.value", \n\t\t\t                "$entity.crosswalks.1.value"\n\t\t\t            ]\n\t\t\t        }, \n\t\t\t        "then" : "$$KEEP", \n\t\t\t        "else" : "$$PRUNE"\n\t\t\t    }\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n
\n
\n
print("START")\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityHistory").aggregate(\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t        "status" : "LOST_MERGE", \n\t\t\t        "entityType" : {\n\t\t\t            "$exists" : false\n\t\t\t        },        \n\t\t\t        "$and" : [\n\t\t\t            {\n\t\t\t                "$or" : [\n\t\t\t                    {\n\t\t\t                        "mdmSource" : "RELTIO"\n\t\t\t                    }, \n\t\t\t                    {\n\t\t\t                        "mdmSource" : {\n\t\t\t                            "$exists" : false\n\t\t\t                        }\n\t\t\t                    }\n\t\t\t                ]\n\t\t\t            }\n\t\t\t        ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$graphLookup: {\n\t\t\t    "from" : "entityHistory", \n\t\t\t    "startWith" : "$_id", \n\t\t\t    "connectFromField" : "parentEntityId", \n\t\t\t    "connectToField" : "_id", \n\t\t\t    "as" : "master", \n\t\t\t    "maxDepth" : 10.0, \n\t\t\t    "depthField" : "depthField"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t    "path" : "$master", \n\t\t\t    "includeArrayIndex" : "arrayIndex", \n\t\t\t    "preserveNullAndEmptyArrays" : false\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t    "master.status" : {\n\t\t\t        "$ne" : "LOST_MERGE"\n\t\t\t    }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$redact: {\n\t\t\t    "$cond" : {\n\t\t\t        "if" : {\n\t\t\t            "$eq" : [\n\t\t\t                "$master._id", \n\t\t\t                "$parentEntityId"\n\t\t\t            ]\n\t\t\t        }, \n\t\t\t        "then" : "$$KEEP", \n\t\t\t        "else" : "$$PRUNE"\n\t\t\t    }\n\t\t\t}\n\t\t}\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n);\n\n\t\nresult.forEach(function(obj) {\n    var id = obj._id;\n\n    var masterEntityType = obj.master.entityType;\n\t\n\tif( masterEntityType !== 
undefined){\n      if(obj.entityType == undefined){\n\t    print("entityType is " + obj.entityType + " for " + id +", changing to "+ masterEntityType);\n\t    var currentTime = new Date().getTime();\n\t\n        var result = db.entityHistory.update( {"_id":id}, {$set: { "entityType":masterEntityType, "lastModificationDate": NumberLong(currentTime) } });\n        printjson(result);\n      }\n\t}\n\n});\n    \t\n    \t\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")
\n
\n
db.getCollection("gateway_transactions").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t    "$and" : [\n\t\t\t        {\n\t\t\t        "transactionTS" : {\n\t\t\t            "$gte" : NumberLong(1551974500000)\n\t\t\t        }, \n\t\t\t        "username" : "dea_batch"\n\t\t\t        }\n\t\t\t    ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$group: {\n\t\t\t  _id:"$requestId", \n\t\t\t  count:  {  $sum:1  },\n\t\t\t  transactions: { $push : "$$ROOT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t    path : "$transactions",\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$addFields: {\n\t\t\t    \n\t\t\t    "statusNumber": { \n\t\t\t        $cond: { \n\t\t\t            if: { \n\t\t\t                $eq: ["$transactions.status", "failed"] \n\t\t\t            }, \n\t\t\t            then: 0, \n\t\t\t            else: 1 \n\t\t\t        }\n\t\t\t      } \n\t\t\t       \n\t\t\t  \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$sort: {\n\t\t\t "transactions.requestId": 1, \n\t\t\t "statusNumber": -1,\n\t\t\t "transactions.transactionTS": -1 \n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$group: {\n\t\t\t      _id:"$_id", \n\t\t\t      transaction: { "$first": "$$CURRENT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 7\n\t\t{\n\t\t\t$addFields: {\n\t\t\t     "transaction.transactions.count": "$transaction.count" \n\t\t\t}\n\t\t},\n\n\t\t// Stage 8\n\t\t{\n\t\t\t$replaceRoot: {\n\t\t\t    newRoot: "$transaction.transactions"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 9\n\t\t{\n\t\t\t$addFields: {\n\t\t\t    "file_raw_line": "$metadata.file_raw_line",\n\t\t\t    "filename": "$metadata.filename"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 10\n\t\t{\n\t\t\t$project: {\n\t\t\t    requestId : 1,\n\t\t\t    count: 2,\n\t\t\t    "filename": 3,\n\t\t\t    uri: "$mdmUri",\n\t\t\t    country: 5,\n\t\t\t    source: 6,\n\t\t\t    crosswalkId: 7,\n\t\t\t    status: 8,\n\t\t\t    timestamp: "$transactionTS",\n\t\t\t    //"file_raw_line": 
10,\n\t\t\t\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n
\n


Export Config for Studio3T - format:

<ExportSettings>
<VERSION>1</VERSION>
<exportSource>CURRENT_QUERY_RESULT</exportSource>
<skipValue>0</skipValue>
<limitValue>0</limitValue>
<exportFormat>CSV</exportFormat>
<exportOptions>
<VERSION>2</VERSION>
<emptyFieldImportStrategy>MAKE_NULL</emptyFieldImportStrategy>
<delimiter> </delimiter>
<encapsulator>&quot;</encapsulator>
<isEscapeControlChars>false</isEscapeControlChars>
<exportNullFieldsAsEmptyStrings>true</exportNullFieldsAsEmptyStrings>
<isAddColHeaders>true</isAddColHeaders>
<selectedFields>
<string>_id</string>
<string>count</string>
<string>country</string>
<string>crosswalkId</string>
<string>filename</string>
<string>requestId</string>
<string>source</string>
<string>status</string>
<string>timestamp</string>
<string>uri</string>
</selectedFields>
<noArrays>false</noArrays>
<noNestedFields>false</noNestedFields>
<noHeader>false</noHeader>
<skipLines>0</skipLines>
<parseError>false</parseError>
<trimLeadingSpaces>false</trimLeadingSpaces>
<trimTrailingSpaces>false</trimTrailingSpaces>
<isUnixLF>false</isUnixLF>
<csvPreset>Excel</csvPreset>
</exportOptions>
<selectedFields>
<string>_id</string>
<string>count</string>
<string>country</string>
<string>crosswalkId</string>
<string>filename</string>
<string>requestId</string>
<string>source</string>
<string>status</string>
<string>timestamp</string>
<string>uri</string>
</selectedFields>
<exportTargetType>FILE</exportTargetType>
<exportPath>D:\\docs\\FLEX\\REPORT_transaction_log\\10_10_2018\\load_report.csv</exportPath>
<noCursorTimeout>true</noCursorTimeout>
</ExportSettings>



\n
 db.entityHistory.aggregate([\n {$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/HCP" } },\n {$project: { _id: 1, "country":1 } },\n {$group : {_id:"$country", count:{$sum:1},}},\n {$match: { count: { $gte: 2 } } },\n],{ allowDiskUse: true } )
\n
\n
//https://stackoverflow.com/questions/43778747/check-if-a-field-exists-in-all-the-elements-of-an-array-in-mongodb-and-return-th?rq=1\n\n// find entities where ALL crosswalk array objects have delete date set (not: $exists false)\ndb.entityHistory.find({\n    entityType: "configuration/entityTypes/HCP",\n    country: "br",\n    status: "ACTIVE",\n    "entity.crosswalks": { $not: { $elemMatch: { deleteDate: {$exists:false} } } }\n})\n\n// find entities where ANY OF crosswalk array objects have delete date set\ndb.entityHistory.find({\n    entityType: "configuration/entityTypes/HCP",\n    country: "br",\n    status: "ACTIVE",\n    "entity.crosswalks": {   $elemMatch: { deleteDate: {$exists:true} }  }\n})
\n
\n
db.getCollection("entityHistory").update(\n    { \n        "status" : "LOST_MERGE", \n        "entity" : {\n            "$exists" : true\n        }\n    },\n    { \n        $set: { "lastModificationDate": NumberLong(1551433013000) }, \n        $unset: {entity:""}\n    },\n    { multi: true }\n)\n\n\n
\n
\n
// Stages that have been excluded from the aggregation pipeline query\n__3tsoftwarelabs_disabled_aggregation_stages = [\n\n\t{\n\t\t// Stage 2 - excluded\n\t\tstage: 2,  source: {\n\t\t\t$limit: 1000\n\t\t}\n\t},\n]\n\ndb.getCollection("hub_errors").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t        "exceptionClass" : "com.COMPANY.publishinghub.processing.RDMMissingEventForwardedException",\n\t\t\t         "status" : "NEW"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$project: { \n\t\t\t      "entityId":"$exchangeInHeaders.kafka[dot]KEY",\n\t\t\t      "attributeName": "$exceptionDetails.attributeName",\n\t\t\t      "attributeValue":  "$exceptionDetails.attributeValue", \n\t\t\t      "errorCode":  "$exceptionDetails.errorCode"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {\n\t\t\t   _id: { entityId:"$entityId", attributeValue:  "$attributeValue",attributeName:"$attributeName"}, // can be grouped on multiple properties \n\t\t\t   dups: { "$addToSet": "$_id" }, \n\t\t\t   count: { "$sum": 1 } \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t   //_id: { attributeValue:  "$_id.attributeValue",attributeName:"$_id.attributeName"}, // can be grouped on multiple properties \n\t\t\t   _id: { attributeName:"$_id.attributeName"}, // can be grouped on multiple properties \n\t\t\t    entities: { "$addToSet": "$_id.entityId" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$project: {\n\t\t\t    _id: 1,\n\t\t\t    sample_entities: { $slice: [ "$entities", 10 ] }, \n\t\t\t    affected_entities_count: { $size: "$entities" } \n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n
\n
\n
// GET\ndb.entityHistory.find({})\n// GET random 20 entities\ndb.entityHistory.aggregate( \n    [ \n        { $match : { status : "ACTIVE" } },\n        { \n            $sample: {size: 20} \n        },  \n        {\n          $project: {_id:1}\n        },\n\n] )\n    \n// entity get by ID\ndb.entityHistory.find({\n"_id":"entities/rOATtJD"\n})\n\n\ndb.entityHistory_PforceRx.find({\n        _id: "entities/Tq4c32l"\n})\n\n// Specialities exists\ndb.entityHistory.find({\n    "entity.attributes.Specialities": {\n          $exists: true\n    }\n}).limit(20)\n\n// Specialities size > 4\ndb.entityHistory.find({\n    "entity.attributes.Specialities": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.entity.attributes.Specialities.length > 6"}, \n        {$where: "this.sources.length >= 2"},\n    ]\n\n})\n.limit(10)\n// only project ID\n.projection({id:1})\n\n\n// Address size > 4\ndb.entityHistory.find({\n    "entity.attributes.Address": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.entity.attributes.Address.length > 4"}, \n        {$where: "this.sources.length > 2"},\n    ]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.value.Status.lookupCode": {\n            $exists: true,\n            $eq: "ACTV"\n        },\n    }, {\n        "entity.attributes.Address.value.Status": 1\n    })\n    .limit(10)\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n    "entity.attributes.Address": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.entity.attributes.Address.length >= 4"}, \n        {$where: "this.sources.length >= 4"},\n    ]\n\n})\n.limit(2)\n//.projection({id:1})\n// only project ID\n\n\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        
"entity.attributes.Address.value.BestRecord": {\n            $exists: true\n        }\n})\n.limit(2)\n// only project ID\n//.projection({id:1})\n\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.value.ValidationStatus": {\n            $exists: true\n        },\n        "entityType":"configuration/entityTypes/HCO",\n        $and: [{\n            $where: "this.entity.attributes.Address.length > 4"\n        \n        }]\n    })\n    .limit(1)\n// only project ID\n//.projection({id:1})\n\n\n\n//SOURCE NAME\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        lastModificationDate: {\n            $gt: 1534850405000\n        }\n    })\n    .limit(10)\n// only project\n\n\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.objectURI": {\n            $exists: false\n        },\n    }).limit(10)\n// only project\n\n\n// Phone exists\ndb.entityHistory.find({\n    "entity.attributes.Phone": {\n          $exists: true\n    }\n})   .limit(1)\n\n//Specialities exists\ndb.entityHistory.find({\n    "entity.attributes.Specialities": {\n        $exists: true\n    },\n    country: "mx"\n}).limit(10)\n    \n// Speclaity Code\ndb.entityHistory.find({\n   "entity.attributes.Specialities": {\n        $exists: true\n    },\n    "entity.attributes.Specialities.value.Specialty.lookupCode": "WMX.TE",\n    country: "mx"\n}).limit(1)\n    \n// entity.attributes. 
Identifiers License exists\ndb.entityHistory.find({\n    "entity.attributes.Identifiers": {\n        $exists: true\n    },\n    country: "mx"\n}).limit(1)\n    \n    \n// Name of organization is empty\ndb.entityHistory.find({\n    entityType: "configuration/entityTypes/HCO",\n    "entity.attributes.Name": {\n        $exists: false\n    },\n    // "parentEntityId": {\n    //     $exists: false\n    // },\n    country: "mx"\n}).limit(10)\n\n\n\n\n// RELACJE\n// GET\ndb.entityRelations.find({})\n\n// entity get by ID startObjectID\ndb.entityRelations.find({\n        startObjectId: "entities/14tDdkhy"\n})\n\ndb.entityRelations.find({\n        endObjectId: "entities/14tDdkhy"\n})\n\n\ndb.entityRelations.find({\n        _id: "relations/RJx9ZkM"\n})\n\ndb.entityRelations.find({\n   "relation.attributes.ActPhone": {\n       $exists: true\n   }\n}).limit(1)\n\n\n\n// Address size > 4\ndb.entityRelations.find({\n    "relation.attributes.Phone": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/HasAddress",\n     //$and: [\n//        {$where: "this.relation.attributes.Address.length > 3"}, \n        //{$where: "this.sources.length >= 2"},\n    //]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n\n\n// \ndb.entityRelations.find({\n    "relation.crosswalks": {\n        $exists: true\n    },\n    "relation.crosswalks.deleteDate": {\n        $exists: true\n    }\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\ndb.entityRelations.find({\n    "relation.startObject": {\n        $exists: true\n    },\n    "relation.startObject.objectURI": {\n        $exists: false\n    }\n\n})\n.limit(1)\n\n\n\n// merge finder\ndb.entityRelations.find({\n    "relation.startObject": {\n        $exists: true\n    },\n    "relation.endObject": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.relation.startObject.crosswalks.length > 2"}, \n        {$where: "this.sources.length >= 1"},\n    ]\n\n})\n.limit(10)\n// 
only project ID\n//.projection({id:1})\n\n\n// merge finder\ndb.entityRelations.find({\n        "relation.startObject": {\n            $exists: true\n        },\n        "relation.endObject": {\n            $exists: true\n        },\n        //"relation.startObject.crosswalks.0.uri": mb.regex.startsWith("relation.startObject.objectURI")\n         "relation.startObject.crosswalks.0.uri": /^relation.startObject.objectURI.*$/i\n})\n.limit(2)\n\n\n\n\n\n// Phone - HasAddress\ndb.entityRelations.find({\n    "relation.attributes.Phone": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/HasAddress",\n})\n.limit(10)\n\n// ActPhone - Activity\ndb.entityRelations.find({\n    "relation.attributes.ActPhone": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/Activity",\n})\n\n\n// Identifiers - HasAddress\ndb.entityRelations.find({\n    "relation.attributes.Identifiers": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/HasAddress",\n})\n.limit(10)\n\n\n// Identifiers - Activity\ndb.entityRelations.find({\n    "relation.attributes.ActIdentifiers": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/Activity",\n})\n\n\n\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        }\n    })\n// only project\n\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.uri": {\n            $exists: false\n        },\n        "entity.attributes.Address.refRelation.objectURI": {\n            $exists: true\n        },\n    })\n// only project\n\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.uri": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.objectURI": {\n            
$exists: false\n        }\n    })\n// only project\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.uri": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.objectURI": {\n            $exists: true\n        },\n    })\n\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        lastModificationDate: {\n            $gt: 1534850405000\n        }\n    })\n    .limit(10)\n// only project\n\ndb.entityHistory.find({})\n// GET random 20 entities\n\n    \n// entity get by ID\ndb.entityHistory.find({\n        _id: "entities/Nzn07bq"\n})\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n    "entity.attributes.Address": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.entity.attributes.Address.length >= 4"}, \n        {$where: "this.sources.length >= 4"},\n    ]\n\n})\n.limit(2)\n\n\n\n
\n
\n
db.getCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {   \t\n\t\t\t     mdmSource: "RELTIO"        \n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$limit: 1000\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$addFields: {\n\t\t\t   "crosswalksSize":  { $size: { "$ifNull": [ "$entity.crosswalks", [] ] } }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$project: {\n\t\t\t    _id: 1,\n\t\t\t    crosswalksSize:1 \n\t\t\t    \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n
\n
\n
// COPY THIS SECTION 
\n



" + }, + { + "title": "Mongo-SOP-002: Running mongo scripts remotely on k8s cluster", + "pageID": "284809016", + "pageLink": "/display/GMDM/Mongo-SOP-002%3A+Running+mongo+scripts+remotely+on+k8s+cluster", + "content": "

Get the tool:

  1. Go to file http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/mongo/src/scripts/run_mongo_remote/run_mongo_remote.sh?at=refs%2Fheads%2Fproject%2Fboldmove in inbound-services repository.
  2. Download the file to your computer.

The tool requires kubernetes (kubectl) installed and WSL (tested on WSL2) to work correctly.

Usage guide:

Available commands:

Shows the general help message for the tool:

\"\"

Execute to run a script remotely on the pod agent on the k8s cluster. The script is copied from the given path on your local machine to the pod and then run there. To get details about accepted arguments run ./run_mongo_remote.sh exec --help

\"\"

Execute to download script results from the pod agent and save them at the given path on your local machine. To get details about accepted arguments run ./run_mongo_remote.sh get --help

\"\"

Example flow:

  1. Save the mongo script you want to run in a file, e.g. example_script.js (the script file has to have a .js or .mongo extension for the tool to run correctly)
  2. Run ./run_mongo_remote.sh exec example_script.js emea_dev to run your script on the emea_dev environment
  3. Upon completion, the path where the script results were saved on the pod agent is returned (e.g. /pod/path/result.txt)
  4. Run ./run_mongo_remote.sh get /pod/path/result.txt local/machine/path/example_script_result.txt emea_dev to save the script results on your local machine.
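Since the tool only accepts script files with a .js or .mongo extension, it can help to validate the filename before calling it. The helper below is a hypothetical pre-flight check, not part of run_mongo_remote.sh:

```shell
#!/bin/bash
# Hypothetical pre-flight helper (not part of run_mongo_remote.sh):
# checks that the script file has one of the extensions the tool accepts.
check_ext() {
  case "$1" in
    *.js|*.mongo) echo "ok" ;;
    *) echo "error: script must have a .js or .mongo extension" >&2; return 1 ;;
  esac
}

check_ext example_script.js   # prints "ok"
```

Run this before `./run_mongo_remote.sh exec` to fail fast on a wrong filename instead of debugging the remote run.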

Editing the tool

The tool was written using bashly - a bash framework for developing CLI applications.

The tool source is available HERE. Edit the source files and regenerate the single output script following the guides available on the bashly site.

DO NOT EDIT the run_mongo_remote.sh file MANUALLY (manual edits may result in the script not working correctly).

" + }, + { + "title": "Notifications:", + "pageID": "430347505", + "pageLink": "/pages/viewpage.action?pageId=430347505", + "content": "" + }, + { + "title": "Sending notification", + "pageID": "430347508", + "pageLink": "/display/GMDM/Sending+notification", + "content": "

We send notifications to our clients in the case of the following events:

  1. Unplanned outage - MDMHUB is not available to our clients - the REST API, Kafka, or Snowflake does not work properly and clients are not able to connect.
    Currently, you have to send a notification in the case of the following events:
    1. kong_http_500_status_prod

    2. kong_http_502_status_prod
    3. kong_http_503_status_prod
    4. kong3_http_500_status_prod
    5. kong3_http_502_status_prod
    6. kong3_http_503_status_prod
    7. kafka_missing_all_brokers_prod
  2. Planned outage - a maintenance window during which we have to perform maintenance tasks that cause temporary problems with access to MDMHUB endpoints,
  3. Update configuration - some MDMHUB endpoints are changed, e.g. the REST API URL address, Kafka address, etc.

We always send a notification in the case of an unplanned outage to inform our clients and let them know that somebody on our side is working on the issue. Planned outages and configuration updates are always planned activities that are confirmed with release management and scheduled for a specific time range.

Notification Layout

  1. Send notifications using your COMPANY email account.
  2. As CC, always set our DLs: DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com, DL-ATP_MDMHUB_SUPPORT@COMPANY.com
  3. Add our clients as BCC according to the table below:

\"\"


{"name":"MDM_Hub_notification_recipients.xlsx","type":"xlsx","pageID":"430347508"}


\"\"


On the above screen we can see a few placeholders.

Notification templates

Below you can find notification templates that you can download, fill in, and send to our clients:

  1. Generic template: notification.msg
  2. Kafka issues: kafka.msg
  3. API issues: api.msg




" + }, + { + "title": "COMPANYGlobalCustomerID:", + "pageID": "302706348", + "pageLink": "/pages/viewpage.action?pageId=302706348", + "content": "" + }, + { + "title": "Fix \"\" or null IDs - Fix Duplicates", + "pageID": "250675882", + "pageLink": "/pages/viewpage.action?pageId=250675882", + "content": "

The following SOP describes how to fix "" or null COMPANYGlobalCustomerID values in Mongo and regenerate events in Snowflake.

The SOP also contains the step to fix duplicated values and regenerate events.


Steps:

  1.  Check empty or null: 
    1. \n
      \t    db = db.getSiblingDB("reltio_amer-prod");\n\t\tdb.getCollection("entityHistory").find(\n\t\t\t{\n\t\t\t\t"$or" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : ""\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : {\n\t\t\t\t\t\t\t"$exists" : false\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t"status" : {\n\t\t\t\t\t"$ne" : "DELETED"\n\t\t\t\t}\n\t\t\t}\n\t\t);
      \n
    2. Mark all ids for further event regeneration. 
  2. Run the script on Studio 3T or K8s mongo
    1. Script - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/docker/mongo_utils/scripts/COMPANYglobalcustomerids_fix_empty_null_script.js
    2. Run on K8s:
      1. log in to the correct cluster on the backend namespace
      2. copy the script - kubectl cp ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.js
      3. run - nohup mongo --host mongo/localhost:27017 -u admin -p <pass> --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &
      4. download the result - kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.out
      5. Using the output, find all "TODO" lines and regenerate the correct events
  3. Check duplicates:
    1. \n
      \t\t\t\t// Pipeline\n\t\t\t[\n\t\t\t\t// Stage 1\n\t\t\t\t{\n\t\t\t\t\t$group: {\n\t\t\t\t\t_id: {COMPANYID: "$COMPANYID"},\n\t\t\t\t\tuniqueIds: {$addToSet: "$_id"},\n\t\t\t\t\tcount: {$sum: 1}\n\t\t\t\t\t}\n\t\t\t\t},\n\n\t\t\t\t// Stage 2\n\t\t\t\t{\n\t\t\t\t\t$match: { \n\t\t\t\t\tcount: {"$gt": 1}\n\t\t\t\t\t}\n\t\t\t\t},   \n\t\t\t],\n\n\t\t\t// Options\n\t\t\t{\n\t\t\t\tallowDiskUse: true\n\t\t\t}\n\n\t\t\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/
      \n
    2. If there are duplicates, run the script on Studio 3T or K8s mongo
      1. Script - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/docker/mongo_utils/scripts/COMPANYglobalcustomerids_fix_duplicates_script.js
      2. Run on K8s:
        1. log in to the correct cluster on the backend namespace
        2. copy the script - kubectl cp ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.js
        3. run - nohup mongo --host mongo/localhost:27017 -u admin -p <pass> --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &
        4. download the result - kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.out
        5. Using the output, find all "TODO" lines and regenerate the correct events
  4. Reload events    
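The "find all TODO lines" step above can be scripted. The sketch below fabricates a sample reload_DEV.out, and the `TODO <entityId>,<status>` line format is an assumption — adjust the grep/cut to the actual output of the fix script:

```shell
#!/bin/bash
# Sketch with a fabricated sample of reload_DEV.out; the "TODO ..." line
# format is an assumption -- adjust to the real fix-script output.
cat > reload_DEV.out <<'EOF'
INFO processed entities/aaa
TODO entities/xVIK0nh,LOST_MERGE
TODO entities/uP4eLws,ACTIVE
EOF

# Keep only the TODO lines and strip the marker, producing an
# entityId,status CSV in the shape the event-generation scripts expect.
grep '^TODO ' reload_DEV.out | cut -d' ' -f2 > entities_to_reload.csv
cat entities_to_reload.csv
```

The resulting CSV can be fed directly into the event-reload step described in the "Events RUN" section.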


Events RUN

You can use the following 2 scripts:

\n
#!/bin/bash\n\nfile=$1\nevent_type=$2\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}"  | jq --arg eventTimeArg `date +%s%3N` --arg eventType ${event_type} -r '.[] | . +"|{\\"eventType\\": \\"\\($eventType)\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", \\"entitiesURIs\\": [\\"" + (.|tostring) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n
\n

This script's input is a file with entityId values separated by new lines

Example:

entities/xVIK0nh
entities/uP4eLws
entities/iiKryQO
entities/ZYjRCFN
entities/13n4v93A


Example execution:

./script.sh dev_reload_empty_ids.csv HCP_CHANGED >> EMEA_DEV_events.txt


OR


\n
#!/bin/bash\n\nfile=$1\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}"  | jq --arg eventTimeArg `date +%s%3N` -r '.[] | (. | tostring | split(",") | .[0] | tostring ) +"|{\\"eventType\\": \\""+ ( . | tostring | split(",") | if .[1] == "LOST_MERGE" then "HCP_LOST_MERGE" else "HCP_CHANGED" end ) + "\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", \\"entitiesURIs\\": [\\"" + (. | tostring | split(",") | .[0] | tostring ) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n
\n

This script's input is a file with entityId,status pairs separated by newlines.

Example:

entities/10BBdiHR,LOST_MERGE
entities/10BBdv4D,LOST_MERGE
entities/10BBe7qz,LOST_MERGE
entities/10BBgKFF,INACTIVE
entities/10BBgOVV,ACTIVE


Example execution:

./script_2_columns.sh dev_reload_lost_merges.csv >> EMEA_DEV_events.txt


Push the generated file to the Kafka topic using the Kafka producer:

./start_sasl_producer.sh prod-internal-reltio-events < EMEA_PROD_events.txt


Snowflake Check

\n
-- COMPANY COMPANY_GLOBAL_CUSTOMER_ID checks - null/empty\nSELECT count(*) FROM ENTITIES  WHERE COMPANY_GLOBAL_CUSTOMER_ID  IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID  = '' \nSELECT * FROM ENTITIES  WHERE COMPANY_GLOBAL_CUSTOMER_ID  IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID  = '' \n\n-- duplicates\nSELECT COMPANY_GLOBAL_CUSTOMER_ID \nFROM ENTITIES \nWHERE COMPANY_GLOBAL_CUSTOMER_ID  IS NOT NULL OR COMPANY_GLOBAL_CUSTOMER_ID  != '' \nGROUP BY COMPANY_GLOBAL_CUSTOMER_ID HAVING COUNT(*) >1\n\n
\n









" + }, + { + "title": "Initialization Process", + "pageID": "218694652", + "pageLink": "/display/GMDM/Initialization+Process", + "content": "

The process will sync COMPANYGlobalCustomerID attributes to MongoDB (EntityHistory and COMPANYIDRegistry) and then refresh Snowflake with this data.

The process is divided into the following steps:

  1. Create an index in Mongo
    1. db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});
  2. Configure entity-enricher so it has the ov:false option for COMPANYGlobalCustomerID
    1. bundle.nonOvAttributesToInclude:
      - COMPANYCustID
      - COMPANYGlobalCustomerID
  3. Deploy the hub components with callback enabled -COMPANYGlobalCustomerIDCallback (3.9.1 version)
  4. RUN hub_reconciliation_v2 - first run the HUB Reconciliation -> this will enrich all Mongo data with COMPANYGlobalCustomerID with ov:true and ov:false values
    1. based on EMEA this is here - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev&root=
    2. doc - HUB Reconciliation Process V2
    3. check if the configuration contains the following - nonOvAttrToInclude: "COMPANYCustID,COMPANYGlobalCustomerID"
    4. check S3 directory structure and reconciliation.properties file in emea/<env>/inbound/hub/hub_reconciliation/ 
      1. http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev
      2. http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_qa
      3. http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_stage
  5. RUN hub_COMPANYglobacustomerid_initial_sync_<ENV> DAG
    1. It contains 2 steps:
      1. COMPANYglobacustomerid_active_inactive_reconciliation 
        1. a Groovy script that checks the HUB entityHistory ACTIVE/INACTIVE/DELETED entities; for all of them it gets the ov:true COMPANYGlobalCustomerId and enriches Mongo and the cache
      2. COMPANYglobacustomerid_lost_merge_reconciliation  
        1. a Groovy script that checks LOST_MERGE entities. It does a full merge_tree export from Reltio. Based on the merge_tree it adds the 
  6. RUN snowflake_reconciliation - full snowflake reconciliation by generating the full file with empty checksums
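As a quick manual check after step 1, the index can be listed from the mongo shell (a convenience snippet, not part of the automated flow):

```javascript
// Run in the mongo shell against the HUB database:
db.entityHistory.getIndexes().forEach(function (i) { print(i.name); });
// the output should include "idx_COMPANYGlobalCustomerID"
```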





" + }, + { + "title": "Remove Duplicates and Regenerate Events", + "pageID": "272368703", + "pageLink": "/display/GMDM/Remove+Duplicates+and+Regenerate+Events", + "content": "

This SOP describes the workaround to fix the COMPANYGlobalCustomerID duplicated values.


Case:

There are 2 entities with the same COMPANYGlobalCustomerID.

Example:

    1Qbu0jBQ - Jun 14, 2022 @ 18:10:44.963    ID-mdmhub-reltio-subscriber-dynamic-866b588c7-w9crm-1655205289718-0-157609    ENTITY_CREATED    entities/1Qbu0jBQ    RELTIO    success    entities/1Qbu0jBQ    
    3Ot2Cfw  - Aug 11, 2022 @ 18:53:31.433    ID-mdmhub-reltio-subscriber-dynamic-79cd788b59-gtzm6-1659525443436-0-1693016    ENTITY_CREATED    entities/3Ot2Cfw    RELTIO    success    entities/3Ot2Cfw


3Ot2Cfw  is a WINNER

1Qbu0jBQ  is a LOSER. 


Rule: if there are duplicates, always pick the LOST_MERGED entity and update only the loser with a different value. Do not change an active entity:

Steps:

  1. Go to the winner in Reltio and check the other (OV:FALSE) COMPANYGlobalCustomerIDs
  2. Pick the new value from the list:
  3. Check that there are no duplicates in Mongo, and search for the new value in the cache. If it already exists, pick a different one.
  4. Update Mongo Cache:
    1. \"\"
  5. Regenerate event:
    1. if the loser entity is now active in Reltio but not active in Mongo, regenerate a CREATED event:
      1. entities/1Qbu0jBQ|{  "eventType" : "HCP_CREATED",  "eventTime" : "1666090581000",  "entityModificationTime" : "1666090581000",  "entitiesURIs" : [ "entities/1Qbu0jBQ" ],  "mdmSource" : "RELTIO",  "viewName" : "default" }
    2. if the loser entity is not present in Reltio (because it is a loser), regenerate a LOST_MERGE event:
      1. entities/1Q7XLreu|{"eventType":"HCO_LOST_MERGE","eventTime":1666018656000,"entityModificationTime":1666018656000,"entitiesURIs":["entities/1Q7XLreu"],"mdmSource":"RELTIO","viewName":"default"}
  6. Example PUSH to PROD:
    1. \"\"
  7. Check Mongo; the updated entity's COMPANYGlobalCustomerID should have changed
  8. Check Reltio
  9. Check Snowflake
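Step 3 (verifying the candidate value is unused) can be done with a single query in the mongo shell; this is a sketch, with the field name assumed from the initialization process above:

```javascript
// 0 means the candidate COMPANYGlobalCustomerID is free to assign to the loser
db.entityHistory.countDocuments({ COMPANYGlobalCustomerID: "<candidate value>" });
```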
" + }, + { + "title": "Project FLEX (US):", + "pageID": "302705645", + "pageLink": "/pages/viewpage.action?pageId=302705645", + "content": "" + }, + { + "title": "Batch Loads - Client-Sourced", + "pageID": "164470098", + "pageLink": "/display/GMDM/Batch+Loads+-+Client-Sourced", + "content": "


  1. Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibana
    1. use the dedicated "kibana_gbiccs_user" 
  2. Go to the Dashboards Tab - "PROD Batch loads"
    1. \"\"
  3. Change the Time range
    1. \"\"
    2. Choose 24 hours to check whether a new file was loaded within the last 24 hours.
  4. The Dashboard is divided into the following sections:
    1. File by type - this visualization presents how many files of each type were loaded during the selected time range
    2. File load count - this visualization presents when a specific file was loaded
    3. File load summary - in this table you can verify the detailed information about each file load
    4. \"\"
  5. Check if files are loaded with the following agenda:
    1. SAP - incremental loads - max 4 files per day, min 2 files per day
      1.  Agenda: 

        when             hours
        Monday-Friday    1. 01:20 CET time
                         2. 13:20 CET time
                         3. 17:20 CET time
                         4. 21:20 CET time
        Saturday         1. 01:20 CET time
        Sunday           none
    2. HIN - incremental loads - 2 files per day. WKCE.*.txt and WKHH.*.txt
      1. Agenda:

        when              hours
        Tuesday-Saturday  1. estimates: 12PM - 1PM CET time
    3. DEA - full load -  1 file per week FF_DEA_IN_.*.txt
      1. Agenda:

        when     hours
        Tuesday  1. estimates: 10AM - 12PM CET time
    4. 340B - incremental load - 4 files per month. 340B_FLEX_TO_RELTIO_*.txt
      1. Agenda:

        Files uploaded on 3rd, 10th, 24th and the last day of the month at ~12:30 PM CET time. If the upload day is on the weekend, the file will be loaded on the next workday.

  6. Check that the DEA file limit was not exceeded. 
    1. Check the "Suspended Entities" attribute. If this parameter is greater than 0, it means that DEA post-processing was not invoked. The current DEA post-processing limit is 22 000. To increase the limit, send the notification (7.d); after agreement, do (8.)
  7. Take action if the input files are not delivered on schedule:

    1. SAP 
      1. To:  santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com
      2. CC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com;BalaSubramanyam.Thirumurthy@COMPANY.com
    2. HIN
      1. To: santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com
      2. CC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com; BalaSubramanyam.Thirumurthy@COMPANY.com
    3. DEA
      1. To: santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com
      2. CC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com; BalaSubramanyam.Thirumurthy@COMPANY.com
    4. DEA - limit notification
      1. To: santosh.dube@COMPANY.com;tj.struckus@COMPANY.com;Melissa.Manseau@COMPANY.com;BalaSubramanyam.Thirumurthy@COMPANY.com
      2. CC: przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com
  8. Take action if the DEA limit was exceeded. 
    1. Login to each PROD host
    2. cd to "/app/mdmgw/batch_channel/config/"
    3. Edit "application.yml" on each host:
    4. Change poller.inputFormats.DEA.deleteDateLimit from 22 000 to the new value.
    5. Restart Components: 
      1. Execute https://jenkins-gbicomcloud.COMPANY.com:8443/job/mdm_manage_playbooks/job/Microservices/job/manage_microservices__prod_us/
        1. component: mdmgw_batch-channel_1
        2. node: all_nodes
        3. command: restart
    6. Load the latest DEA file (the MD5 checksum skips all entities, so only the post-processing step will be executed) 
    7. Change and commit new limit to GIT: https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod_us/group_vars/gw-services/batch_channel.yml 
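The edited fragment of application.yml looks roughly like this; the nesting is inferred from the property path poller.inputFormats.DEA.deleteDateLimit and may differ slightly in the actual file:

```yaml
poller:
  inputFormats:
    DEA:
      deleteDateLimit: 22000   # raise after agreement, then restart the component
```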


Example Emails:

  1. DEA limit exceeded: 
    1. DEA load check

      Hi Team,

      We just received the DEA file; the current DEA post-processing process is set to a 22 000 limitation. The DEA load resulted in xxxx profiles to be updated in post-processing. Should I change the limit and re-process profiles?

      Regards,

  2. HIN File missing
    1. HIN PROD file missing

      Hi,

       Today we expected to receive new HIN files. I checked, and the HIN files are missing on the S3 bucket. Last week we received files at <time> CET time.

      Here is the screenshot that presents files that we received last week:

      <screen from S3 bucket>

      Could you please verify this?

      Regards,







" + }, + { + "title": "Batch Loads - Update Addresses", + "pageID": "164469820", + "pageLink": "/display/GMDM/Batch+Loads+-+Update+Addresses", + "content": "
  1. Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibana
    1. use the dedicated "kibana_gbiccs_user" 
  2. Go to the Dashboards Tab - "PROD Batch loads"
    1. \"\"
  3. Change the Time range
    1. \"\"
    2. Choose 24 hours to check whether a new file was loaded within the last 24 hours.
  4. The Dashboard is divided into the following sections:
    1. File by type - this visualization presents how many files of each type were loaded during the selected time range
    2. File load count - this visualization presents when a specific file was loaded
    3. File load summary - in this table you can verify the detailed information about each file load
    4. File load status count - the user name ("integration_batch_user") that executes the API and "status" - the number of requests that ended with that status. For more details go to PROD Api Calls
    5. Response status load summary - the number of requests that ended with a specific status. For more details go to PROD Api Calls
    6. \"\"
  5. The result report name and the details saved in Kibana contain the correlation ID. 
    1. example Report name: DEV_update_profiles_integration_testing_ID-5e1b4bdf7525-1574860947734-0-819_REPORT.csv 
    2. example correlation ID: ID-5e1b4bdf7525-1574860947734-0-819
  6. For more details go to PROD Api Calls
  7. Search by the correlation ID related to the latest Addresses update file load. 
  8. \"\"
  9. The following screenshot presents how many operations were invoked during the Addresses update.
    1. In this example, the input file contains 3 Customers.
    2. During the process, 3 Search API calls and 3 Attribute Updates API calls were invoked with success. 


DOC

Please read the following Technical Design document related to the Addresses updating process. This document contains a detailed description of the process and all inbound and outbound interface types.




S3 report and distribution


The report is uploaded to the S3 location: 

PROD location: mdmprodamrasp42095/PROD/archive/ADDRESSES/

The report is published in the AWS S3 bucket.

The file name format is as follows: “<name>_<correlation_id>.csv”

Where <name> is the input file name.

Where <correlation_id> is the number of the batch related to the whole addresses update process. Using the correlation number, the operator can find all updates sent to Reltio and easily verify the status of the batch.



Download the file and publish it to the SharePoint location. 

Send the notification to the designated mailing group. 


SharePoint upload location:


Mailing group:

    To: Melissa.Manseau@COMPANY.com,santosh.dube@COMPANY.com,Deanna.Max@COMPANY.com,Laura.Faddah@COMPANY.com,Xin.Sun@COMPANY.com,crystal.sawyer@COMPANY.com 

    CC:przemyslaw.warecki@COMPANY.com,mikolaj.morawski@COMPANY.com


Email template:



FLEX Addresses updating process - Report - <generation_date>

Hi, 

 Please be informed that the Addresses updating process report is available for verification.

Report:

 → <SharePoint URL>

Regards,

Mikolaj 














" + }, + { + "title": "Batch Loads - Update Identifiers", + "pageID": "164470070", + "pageLink": "/display/GMDM/Batch+Loads+-+Update+Identifiers", + "content": "
  1. Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibana
    1. use the dedicated "kibana_gbiccs_user" 
  2. Go to the Dashboards Tab - "PROD Batch loads"
    1. \"\"
  3. Change the Time range
    1. \"\"
    2. Choose 24 hours to check whether a new file was loaded within the last 24 hours.
  4. The Dashboard is divided into the following sections:
    1. File by type - this visualization presents how many files of each type were loaded during the selected time range
    2. File load count - this visualization presents when a specific file was loaded
    3. File load summary - in this table you can verify the detailed information about each file load
    4. File load status count - the user name ("identifiers_batch_user") that executes the API and "status" - the number of requests that ended with that status. For more details go to PROD Api Calls
    5. Response status load summary - the number of requests that ended with a specific status. For more details go to PROD Api Calls
    6. \"\"
  5. The result report name and the details saved in Kibana contain the correlation ID. 
    1. example Report name: DEV_update_profiles_integration_testing_ID-5e1b4bdf7525-1574860947734-0-819_REPORT.csv 
    2. example correlation ID: ID-5e1b4bdf7525-1574860947734-0-819
  6. For more details go to PROD Api Calls
  7. Search by the correlation ID related to the latest Identifiers file load. 
  8. \"\"
  9. The following screenshot presents how many operations were invoked during the Identifiers update.
    1. In this example, the input file contains 3 Customers.
    2. During the process, 3 Search API calls and 3 Attribute Updates API calls were invoked with success. 


DOC

Please read the following Technical Design document related to the Identifiers updating process. This document contains a detailed description of the process and all inbound and outbound interface types.


\"\"



S3 report and distribution


The report is uploaded to the S3 location: 

PROD location: mdmprodamrasp42095/PROD/archive/IDENTIFIERS/

The report is published in the AWS S3 bucket.

The file name format is as follows: “<name>_<correlation_id>.csv”

Where <name> is the input file name.

Where <correlation_id> is the number of the batch related to the whole identifiers update process. Using the correlation number, the operator can find all updates sent to Reltio and easily verify the status of the batch.



Download the file and publish it to the SharePoint location. 

Send the notification to the designated mailing group. 


SharePoint upload location:


Mailing group:

    To: Melissa.Manseau@COMPANY.com,santosh.dube@COMPANY.com,Deanna.Max@COMPANY.com,Laura.Faddah@COMPANY.com,Xin.Sun@COMPANY.com,crystal.sawyer@COMPANY.com 

    CC:przemyslaw.warecki@COMPANY.com,mikolaj.morawski@COMPANY.com


Email template:



FLEX Identifiers updating process - Report - <generation_date>

Hi, 

 Please be informed that the Identifiers updating process report is available for verification.

Report:

 → <SharePoint URL>

Regards,

Mikolaj 














" + }, + { + "title": "FLEX QC", + "pageID": "164470057", + "pageLink": "/display/GMDM/FLEX+QC", + "content": "


Agenda

The following table presents the scheduled agenda of the process:

when           hours
Each Saturday  13:00 (UTC time)


The process has to be verified on Monday morning CET time. After successful verification the report has to be sent to the designated mailing group.

Prometheus Dashboard

There is a requirement to monitor the process after each run and send the generated comparison report. 

The overview Monitoring Prometheus dashboard is available here:

https://mdm-monitoring.COMPANY.com/grafana/d/COVgYieiz/alerts-monitoring?orgId=1&refresh=10s&var-region=us

\"\"

When the dashboard shows GREEN on the "US PROD Airflow DAG's Status" panel, the process ended with success.

When the dashboard shows RED on the "US PROD Airflow DAG's Status" panel, the process ended with failure. The details are available in Airflow.


Airflow

  1. Log in to Airflow platform: https://cicd-gbl-mdm-hub.COMPANY.com/airflow/tree?dag_id=flex_validate_us_prod 
    1. you can use admin user
    2. Login page
    3. \"\"
  2. Go to the "flex_validate_us_prod" Job
    1. \"\"
    2. To check the details of a specific Task, click on the Task and then in the pop-up window click "View Logs" 
    3. *_validation_tasks - these tasks are "Sub DAGs". To verify the internal tasks, click on the SUB DAG, then in the pop-up window click "Zoom into SUB DAG". 
  3. After log verification it is possible to re-run the process from the last failure point. To do this, follow these steps:
    1. Click on the Task. In the pop-up window choose "Clear" 
    2. \"\"
    3. Clearing deletes the previous state of the task instance, allowing it to get re-triggered by the scheduler or a backfill command. It means that all future tasks are cleaned and started one more time.



DOC

Please read the following Technical Design document related to the FLEX Quality Check process. This document contains a detailed description of the Airflow process and all inbound and outbound interface types.

\"\"



S3 report and distribution

The comparison report is uploaded to the S3 location: 

PROD location: mdmprodamrasp42095/verify/PROD/report/

The file name format is as follows: “comparison_report_full_<date>.csv”
Where <date> is YYYYMMDDTHHMMSS (20191001T072509)
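The <date> suffix matches what a standard UTC date call produces (a convenience sketch for cross-checking report names):

```shell
# Prints a timestamp in the same YYYYMMDDTHHMMSS form, e.g. 20191001T072509
date -u +%Y%m%dT%H%M%S
```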


Download the file and publish it to the SharePoint location. 

Send the notification to the designated mailing group. 


Report preprocessing and XLSX creation:

  1. Open comparison_report_full_<date>.csv with Notepad++
  2. Because Excel removes leading "000" characters, the replacement needs to be done using Search mode: Regular expression. 
    1. Replace all

      \n
      ;"0(.*?)";
      \n

      to 

      \n
      ;="0\\1";
      \n

      \"\"

  3. Check the CSV for multi-line comments (NotesText attribute). They might disturb the CSV format.
     
    1. Replace all

      \n
      ([^"])\\n
      \n

      to 

      \n
      "\\1"
      \n

      (remove the quote marks - cannot escape backslash in Confluence)

    2. Fix the header row (add the removed \\n)

  4. Save file
  5. Open the CSV file by double-clicking it, which opens the file in Excel.
    1. \"\"
    2. Click on the left top corner to mark all columns and rows
    3. \"\"
    4. double click on the line between column "A" and "B" to adjust column width.
    5. \"\"
    6. Apply the "Filter" option on the Header.
    7. \"\"
    8. Verify the result. Each row needs to start with a source name. Check the source column. Check that each NotesText attribute is on one row and the format is correct.
    9. When the format is correct the source column should contain only the following values:
    10. \"\"
  6. Save the file in XLSX format
    1. Click "File" → Save as. Choose "Save as type" = "Excel Workbook (*.xlsx)
    2. \"\"
  7. Upload both the CSV and XLSX files to the SharePoint location:
    1. \"\"
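The leading-zero replacement from step 2 can also be scripted with sed instead of Notepad++; a sketch shown on a single sample field (the multi-line NotesText fix from step 3 still needs manual review):

```shell
# Same substitution as the Notepad++ regex ;"0(.*?)"; -> ;="0\1";
printf '%s\n' 'SRC;"00123";X' | sed -E 's/;"0([^"]*)";/;="0\1";/g'
# prints: SRC;="00123";X
```

To apply it to the real report, run the same sed with -i.bak on comparison_report_full_<date>.csv so a backup copy is kept.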


8. As recently requested, I have deleted rows with “attributes.Name.value” error and with the CXkfvVy entity.



SharePoint upload location:


Mailing group:

    To: Manseau, Melissa <Melissa.Manseau@COMPANY.com>; Dube, Santosh R <santosh.dube@COMPANY.com>;  Faddah, Laura Jordan <Laura.Faddah@COMPANY.com>; Sun, Ivy <Xin.Sun@COMPANY.com>; Antoine, Melissa <melissa.antoine@COMPANY.com>; DL-CBK-MAST <DL-CBK-MAST@COMPANY.com>

    CC: Warecki, Przemyslaw <Przemyslaw.Warecki@COMPANY.com>; Morawski, Mikolaj <Mikolaj.Morawski@COMPANY.com>; Anuskiewicz, Piotr <Piotr.Anuskiewicz@COMPANY.com>


Email template:

<generation_date> - each report is generated during the weekend. For example, when the report generation was executed between 01/04/2020-01/05/2020 (a weekend), the generation_date should be that same range.

The date format should be consistent with US notation. (MM/dd/yyyy)  e.g. 01/04/2020-01/05/2020

<SharePoint URL> - the URL in the email needs to be formatted because of the spaces in the path. 


FLEX QC result - Report - <generation_date>

Hi,


Please be informed that the new QC report is available for verification.


Report:

 → <SharePoint URL>


Best Regards,

Karol



Contact: BalaSubramanyam.Thirumurthy@COMPANY.com,santosh.dube@COMPANY.com when FLEX/HIN/DEA file is missing.

Contact: Venkata.Mandala@COMPANY.com Chakrapani.Kruthiventi@COMPANY.com,santosh.dube@COMPANY.com when SAP file is missing.

Contact: santosh.dube@COMPANY.com,Venkata.Mandala@COMPANY.com,Jayant.Srivastava@COMPANY.com,DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com - With GIS FILE transfer problem (missing files)


14/02/2023

Hi Karol,

You can remove me from this distribution going forward.

Thanks,

Deanna K. Max


27/02/2023

Hi Karol,

I’ve moved to a new role and no longer need to be apart of this distribution. Can you please remove me?

Regards,

Crystal Sawyer 









" + }, + { + "title": "Generate events to prod-out-full-gblus-flex-all*.json file", + "pageID": "333156205", + "pageLink": "/display/GMDM/Generate+events+to+prod-out-full-gblus-flex-all*.json+file", + "content": "
  1. Go to gblmdmhubprodamrasp101478/us/prod/inbound/oneview-cov/prod-out-full-gblus-flex-all (concat_s3_files_gblus_prod input directory)
  2. Copy files for the desired period of time to your local workspace
  3. Download the attached script and modify the events variable
    \"\"
  4. Execute the attached script in the directory with the downloaded files. It will find the latest event for every element in the events list and store them in agregated_events.json
  5. Arrange with the person requesting event generation that they stop the process for 24h. When they stop the process, you can add the found events to a file in the gblmdmhubprodamrasp101478/us/prod/inbound/oneview-cov/inbound S3 directory
  6. After the file is modified, they can start the ingestion process and verify that the events were properly generated

\"\"findEvents.sh
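The core of such an aggregation can be sketched with jq; this is a hypothetical illustration of the idea (event field names taken from the event examples elsewhere on this page), not the attached findEvents.sh itself:

```shell
# sample.json stands in for the downloaded prod-out-full-gblus-flex-all files
printf '%s\n' \
  '{"eventType":"HCP_CHANGED","eventTime":"100","entitiesURIs":["entities/xVIK0nh"]}' \
  '{"eventType":"HCP_CHANGED","eventTime":"200","entitiesURIs":["entities/xVIK0nh"]}' > sample.json

events="entities/xVIK0nh"                 # the 'events' variable to modify
for e in $events; do
  # keep only the newest (highest eventTime) event per entity URI
  jq -c --arg uri "$e" 'select(.entitiesURIs[]? == $uri)' sample.json \
    | jq -sc 'sort_by(.eventTime | tonumber) | last | select(. != null)'
done
```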

" + }, + { + "title": "Re-Loading SAP/HIN/DEA Files After Batch Channel Stopped", + "pageID": "164470077", + "pageLink": "/pages/viewpage.action?pageId=164470077", + "content": "


These are the steps to be taken to correctly process SAP/HIN/DEA files after the mdmgw_batch_channel Docker container is stopped on PROD and has to be restarted:


  1. Create an emergency RFC for this action
  2. Change configuration of the batch_channel component on PROD1 (amraelp00006207) under /app/mdmgw/batch_channel/config/application.yml:


change relativePathPattern: DEA/.* to relativePathPattern: DEA_LOAD/.*
change relativePathPattern: HIN/.* to relativePathPattern: HIN_LOAD/.*
change relativePathPattern: SAP/.* to relativePathPattern: SAP_LOAD/.*


This is required because GIS publishes files to */DEA/HIN/SAP automatically and we don't want to consume them during the fix.
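The substitution itself can be scripted with sed; a sketch demonstrated on a single line (apply the same expression with sed -i.bak to /app/mdmgw/batch_channel/config/application.yml for each of DEA/HIN/SAP, and reverse it in step 6):

```shell
# Rewrites the DEA pattern to the temporary DEA_LOAD form
printf 'relativePathPattern: DEA/.*\n' \
  | sed 's#relativePathPattern: DEA/#relativePathPattern: DEA_LOAD/#'
# prints: relativePathPattern: DEA_LOAD/.*
```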


     3. Empty all /inbound/* directories by moving all files from:

/inbound/SAP to /archive/SAP_tmp

/inbound/DEA to /archive/DEA_tmp

/inbound/HIN to /archive/HIN_tmp


4. After inbound directories are empty start batch_channel component on PROD1 (amraelp00006207). Process files in FIFO order by moving them in order from:

/archive/SAP_tmp to /inbound/SAP_LOAD

/archive/DEA_tmp to /inbound/DEA_LOAD

/archive/HIN_tmp to /inbound/HIN_LOAD
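FIFO here means oldest file first, which `ls -tr` provides. A sketch using throwaway directories (substitute the real /archive/*_tmp source and /inbound/*_LOAD destination):

```shell
src=$(mktemp -d)   # stands in for /archive/SAP_tmp
dst=$(mktemp -d)   # stands in for /inbound/SAP_LOAD
touch "$src/older"; sleep 1; touch "$src/newer"

# ls -tr sorts by modification time, oldest first, preserving arrival order
for f in $(ls -tr "$src"); do
  mv "$src/$f" "$dst/$f"
done
ls "$dst"
```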


5. After these files are processed, stop batch_channel on PROD1 (amraelp00006207).


6. Restore configuration on PROD1 under /app/mdmgw/batch_channel/config/application.yml:


relativePathPattern: DEA_LOAD/.* to relativePathPattern: DEA/.* 
relativePathPattern: HIN_LOAD/.* to relativePathPattern: HIN/.* 
relativePathPattern: SAP_LOAD/.* to relativePathPattern: SAP/.* 


7. Start batch_channel on PROD1, PROD2 and PROD3, waiting 1 minute before starting each subsequent node.

8. Check if nodes started and clustered correctly:


9. Move previously processed files from /archive/*_LOAD to /archive/*

" + }, + { + "title": "S3 keys replacement", + "pageID": "379129646", + "pageLink": "/display/GMDM/S3+keys+replacement", + "content": "

PROD ( amraelp00006207, amraelp00006208, amraelp00006209):

Remember that the replacement has to be done on all three instances!

  1. Replace keys for batch channel and do recreate containers. 

/app/mdmgw/batch_channel/config/application.yml

  2. Replace keys for reltio subscriber and do recreate containers

/app/mdmhub/reltio_subscriber/config/application.yml

  3. Replace keys for archiver and do not recreate containers

/app/archiver/config/archiver.env

  4. Replace keys for airflow dags

https://cicd-gbl-mdm-hub.COMPANY.com/airflow/home


NPROD (DEV / TEST - amraelp00005781): 


  1. Replace keys for batch channel and recreate containers. 

/app/mdmgw/dev-mdm-srv/batch_channel/config/application.yml

/app/mdmgw/test-mdm-srv/batch_channel/config/application.yml




After manual replacement in the components:

Replace keys in the repository:

Use replace_aws_keys.sh to find and replace keys in the repository. 

Deploy changes! MDM Hub Deploy Jobs and MDM Gateway Deploy Jobs





" + }, + { + "title": "Project Highlander:", + "pageID": "302705635", + "pageLink": "/pages/viewpage.action?pageId=302705635", + "content": "" + }, + { + "title": "Highlander IDL Quality Check", + "pageID": "164470068", + "pageLink": "/display/GMDM/Highlander+IDL+Quality+Check", + "content": "

It is required to check HCO and HCP counts at selected checkpoints of the C8 flow and document them.

Checkpoints

Document

Please create the document using the template.

Procedures

Retrieving counts from  Reltio

Call the following API

To get HCP counts

\n
GET https://{{url}}/reltio/api/{{tenantID}}/entities/_facets?facet=type,attributes.Country&options=searchByOv&max=2000&filter=equals(type,'HCP') and in(attributes.Country,"AI,AN,AG,AR,AW,BS,BB,BZ,BM,BO,BR,CL,CO,CR,CW,DO,EC,GT,GY,HN,JM,KY,LC,MX,NI,PA,PY,PE,PN,SV,SX,TT,UY,VG,VE")
\n


To get HCO counts:

\n
GET https://{{url}}/reltio/api/{{tenantID}}/entities/_facets?facet=type,attributes.Country&options=searchByOv&max=2000&filter=equals(type,'HCO') and in(attributes.Country,"AI,AN,AG,AR,AW,BS,BB,BZ,BM,BO,BR,CL,CO,CR,CW,DO,EC,GT,GY,HN,JM,KY,LC,MX,NI,PA,PY,PE,PN,SV,SX,TT,UY,VG,VE")
\n
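A hypothetical way to drive these calls from the command line; the host, tenant and token are placeholders, and curl's -G/--data-urlencode takes care of URL-encoding the filter:

```shell
# Compose the facet request (echoed here; uncomment the curl line to execute)
url="reltio.example.com"; tenantID="TENANT"
filter="equals(type,'HCO') and in(attributes.Country,\"AR,BR,MX\")"
request="https://${url}/reltio/api/${tenantID}/entities/_facets?facet=type,attributes.Country&options=searchByOv&max=2000"
echo "$request"
# curl -sG -H "Authorization: Bearer <token>" "$request" --data-urlencode "filter=${filter}"
```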


Retrieving counts from HUB (global)


Query
\n
db.getCollection("entityHistory").aggregate(\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t     "$and" : [\n\t\t\t        {"status" : "ACTIVE"}, \n\t\t\t        {"country" : {\n\t\t\t            "$in" : [\n\t\t\t                "ai", \n\t\t\t                "an", \n\t\t\t                "ag", \n\t\t\t                "ar", \n\t\t\t                "aw", \n\t\t\t                "bs", \n\t\t\t                "bb", \n\t\t\t                "bz", \n\t\t\t                "bm", \n\t\t\t                "bo", \n\t\t\t                "br", \n\t\t\t                "cl", \n\t\t\t                "co", \n\t\t\t                "cr", \n\t\t\t                "cw", \n\t\t\t                "do", \n\t\t\t                "ec", \n\t\t\t                "gt", \n\t\t\t                "gy", \n\t\t\t                "hn", \n\t\t\t                "jm", \n\t\t\t                "ky", \n\t\t\t                "lc", \n\t\t\t                "mx", \n\t\t\t                "ni", \n\t\t\t                "pa", \n\t\t\t                "py", \n\t\t\t                "pe", \n\t\t\t                "pn", \n\t\t\t                "sv", \n\t\t\t                "sx", \n\t\t\t                "tt", \n\t\t\t                "uy", \n\t\t\t                "vg", \n\t\t\t                "ve"\n\t\t\t            ]\n\t\t\t        }}\n\t\t\t        ]      \n\t\t\t}\n\t\t},\n\t\t// Stage 2\n\t\t{\n\t\t\t$group: {\n\t\t\t_id: {entityType: "$entityType", country: "$country" }, count: { $sum: 1 }\n\t\t\t}\n\t\t},\n\n\t]\n);\n\n
\n

Retrieving counts from HUB (C8 filters)


Query
\n
db.getCollection("entityHistory").aggregate(\n    // Pipeline\n    [\n        // Stage 1\n        {\n            $match: {\n                 "$and" : [\n                    {"status" : "ACTIVE"},\n                    {"country" : {\n                        "$in" : [\n                            "ai", \n                            "an", \n                            "ag", \n                            "ar", \n                            "aw",\n                            "bs",\n                            "bb",\n                            "bz",\n                            "bm",\n                            "bo",\n                            "br",\n                            "cl",\n                            "co",\n                            "cr",\n                            "cw",\n                            "do",\n                            "ec",\n                            "gt",\n                            "gy",\n                            "hn",\n                            "jm",\n                            "ky",\n                            "lc",\n                            "mx",\n                            "ni",\n                            "pa",\n                            "py",\n                            "pe",\n                            "pn",\n                            "sv",\n                            "sx",\n                            "tt",\n                            "uy",\n                            "vg",\n                            "ve"\n                        ]\n                    }},\n                    {\n                            "entity.crosswalks" : {\n                                "$elemMatch" : {\n                                    "type" : {\n                                        "$in" : [\n                                            "configuration/sources/OK",\n                                            "configuration/sources/CRMMI",\n                                            "configuration/sources/Reltio"    
                                        \n                                        ]\n                                    },\n                                    "deleteDate" : {\n                                        "$exists" : false\n                                    }\n                                }\n                            }\n                        }\n                    ]     \n            }\n        },\n \n        // Stage 2\n        {\n            $addFields: {\n                "market":    \n                 {"$switch": {\n                   branches: [\n                  { case:  {"$in" : [ "$country", ["ag","ai","aw","bb","bs","cr","do","gt","hn","jm","lc","ni","pa","sv","tt","vg","cw","sx" ]]}, then: "ac" },\n                  { case:  {"$in" : [ "$country", ["uy" ]]}, then: "ar" }\n                   ],\n               default: "$country"\n            }  \n                 }\n            }\n        },\n \n        // Stage 3\n        {\n            $group: {\n            _id: {entityType: "$entityType", market: "$market" }, count: { $sum: 1 }\n            }\n        },\n \n    ]\n);\n\n\n
\n



" + }, + { + "title": "RawData:", + "pageID": "347666020", + "pageLink": "/pages/viewpage.action?pageId=347666020", + "content": "" + }, + { + "title": "Restore raw entity data", + "pageID": "347666025", + "pageLink": "/display/GMDM/Restore+raw+entity+data", + "content": "

The following SOP describes how to restore raw entity data.



Steps:


  1. Login to UI
  2. Go to HUB Admin →  Restore Raw Data → Restore entities
  3. Fill in the filters
        a) Source environment - restore data from another environment (e.g. restore QA data on DEV); the default value restores data from the currently logged-in environment
        b) Entity type - restore data only for selected entity types - requires at least one selected
        c) Countries - restore data only for selected countries
        d) Sources - restore data only for selected sources
        e) Restore entities created after - only entities created after this date will be restored
      
  4. Click the execute button
  5. Validate the results in the API Calls Kibana dashboard



\"\"

" + }, + { + "title": "Restore raw relation data", + "pageID": "347666056", + "pageLink": "/display/GMDM/Restore+raw+relation+data", + "content": "



Steps:


  1. Login to UI
  2. Go to HUB Admin →  Restore Raw Data → Restore relations
  3. Fill in the filters
        a) Source environment - restore data from another environment (e.g. restore QA data on DEV); the default value restores data from the currently logged-in environment
        b) Countries - restore data only for selected countries
        c) Sources - restore data only for selected sources
        d) Relation types - restore data only for selected relation type
        e) Restore relations created after - only relations created after this date will be restored
      
  4. Click the execute button
  5. Validate the results in the API Calls Kibana dashboard



\"\"

" + }, + { + "title": "Reconciliation:", + "pageID": "164470071", + "pageLink": "/pages/viewpage.action?pageId=164470071", + "content": "" + }, + { + "title": "How to Start the Reconciliation Process", + "pageID": "164470058", + "pageLink": "/display/GMDM/How+to+Start+the+Reconciliation+Process", + "content": "

This procedure describes the reconciliation process between Reltio and Mongo. The result of this process is the Entities and Relations events generated on the HUB internal Kafka topics.

       0. Check that the entityHistory and entityRelations collections contain the following indexes:

entityHistory
 db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});
db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});
db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
entityRelations
 db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});
db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});
db.entityRelations.createIndex({entityType: -1}, {background: true, name: "idx_relationType"});
db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});
db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});
db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});
db.getCollection("entityRelations").createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_asc"});


  1. Export Reltio Data
    1. TODO
  2. Import the Reltio Data to Mongo:
    1. Check the following required variables in the mdm-reltio-handler-env/inventory/prod/group_vars/mongo/all.yml

      GBL PROD Example:


      mongo_install_dir: /app/mongo

      hub_db_reltio_user: "mdm_hub"

      hub_db_reltio_●●●●●●●●●●●●● secret_hub_db_reltio_●●●●●●●●●●●●


      hub_db_admin_user: admin

      hub_db_admin_●●●●●●●●●●●●● secret_hub_db_admin_●●●●●●●●●●●●


      hub_db_name: reltio


      #COMPENSATION EVENTS VARIABLES:

      MONGO_URL: "10.12.199.141:27017"


      reltio_entities_export_url_name: "https://reltio-data-exports.s3.amazonaws.com/entities/pfe_mdm_api/2019/25-Feb-2019/fw2ztf8k3jpdffl_14-21_entities_bbf5.zip..."
      reltio_entities_export_file_name: "fw2ztf8k3jpdffl_14-21_entities_bbf5" # THE SAME AS FILE NAME FROM URL
      reltio_entities_export_date_timestamp_ms: "1551052800000" # RELTIO EXPORT DATE
      reltio_entities_export_LAST_date_timestamp_ms: "1548288000000" # RELTIO LAST EXPORT DATE. Do not set when you want to do the reconciliation on all entities


      reltio_relations_export_url_name: "https://reltio-data-exports.s3.amazonaws.com/relations/pfe_mdm_api/2019/25-Feb-2019/fw2ztf8k3jpdffl_14-21_relations_afa6.zip..."
      reltio_relations_export_file_name: "fw2ztf8k3jpdffl_14-21_relations_afa6" # THE SAME AS FILE NAME FROM URL
      reltio_relations_export_date_timestamp_ms: "1551052800000" # RELTIO EXPORT DATE
      reltio_relations_export_LAST_date_timestamp_ms: "1548806400000" # RELTIO LAST EXPORT DATE. Do not set when you want to do the reconciliation on all relations


      KAFKA_BOOTSTRAP_SERVERS: "10.192.70.189:9094,10.192.70.156:9094,10.192.70.159:9094"

      kafka_import_events_user: "hub_prod"
      kafka_import_events_●●●●●●●●●●●●● secret_kafka_import_events_●●●●●●●●●●●●
      kafka_import_events_truststore_●●●●●●●●●●●●● secret_kafka_import_events_truststore_●●●●●●●●●●●●

      internal_reltio_events_topic: "prod-internal-reltio-events"
      internal_reltio_relations_topic: "prod-internal-reltio-relations-events"

      reconciliate_entities: True # set to False when you want to do the reconciliation only for relations
      reconciliate_relations: True # set to False when you want to do the reconciliation only for entities


      For US PROD Set additional parameters:

      external_user_id: 25084803
      external_group_id: 20796763
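The *_date_timestamp_ms values above are epoch timestamps in milliseconds. As a minimal sketch, assuming the export dates are taken at UTC midnight, they can be computed like this (25-Feb-2019 matches the 1551052800000 example above):

```python
from datetime import datetime, timezone

def export_date_to_ms(year, month, day):
    """Return UTC midnight of the given export date as epoch milliseconds."""
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp() * 1000)

# 25-Feb-2019, the export date used in the example configuration above
print(export_date_to_ms(2019, 2, 25))  # 1551052800000
```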


    2. For new files, set only the reltio_entities_export_.* or reltio_relations_export_.* variables, according to the export date/time and file name.

    3. Check PRIMARY

      Check which Mongo instance is PRIMARY. If the first instance is primary, execute the ansible playbooks with the --limit mongo1 parameter. Otherwise, change the --limit attribute to the other node.

    4. Execute: ansible-playbook extract_reltio_data.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secret

    5. Check logs 

    6. Execute: docker logs --tail 1000 mongo_mongoimport_<date> -f
    7. Wait until the container stops, then go to the next step.
  3. Create indexes on imported collections:
    1.  db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({uri: -1}, {background: true, name: "idx_uri"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({type: -1}, {background: true, name: "idx_type"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({createdTime: -1}, {background: true, name: "idx_createdTime"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({updatedTime: -1}, {background: true, name: "idx_updatedTime"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({"attributes.Country.lookupCode": -1}, {background: true, name: "idx_country"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({"crosswalks.value": -1}, {background: true, name: "idx_crosswalks"});
       db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({uri: -1}, {background: true, name: "idx_uri"});
      db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({updatedTime: -1}, {background: true, name: "idx_updatedTime"});
      db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({"crosswalks.value": -1}, {background: true, name: "idx_crosswalks"});
    2. Wait until the indexes are built

    3. Execute: docker logs --tail 1000 mongo_mongo_1 -f
  4. Based on the imported Reltio data generate missing events:
    1. Execute: ansible-playbook generate_compensation_events.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secret
    2. Wait until the docker containers stop. ETA: 1h - 1h 30min
    3. Check docker logs
    4. Verify the .*_compensation_result collections. 
    5. Check the number of Events for each type for entities: 

      HCP_CREATED | HCO_CREATED
      HCP_CHANGED | HCO_CHANGED
      HCP_MERGED | HCO_MERGED | HCP_LOST_MERGE | HCO_LOST_MERGE
      HCP_REMOVED | HCO_REMOVED


    6. Check the number of Events for each type for relations:

      RELATIONSHIP_CREATED
      RELATIONSHIP_CHANGED
      RELATIONSHIP_MERGED
      RELATIONSHIP_LOST_MERGE
      RELATIONSHIP_REMOVED

    7. Check that the counts do not contain anomalies. Investigate any problem that exists. 
    8. Check the logs in the /app/mongo/compensation_events/scripts_entities/.*.out. Check if the logs contain "REPORT AN ERROR TO Reltio" - analyse the problem and report the issue to Reltio. 

    9. Check the logs in the /app/mongo/compensation_events/scripts_relations/.*.out. Check if the logs contain "REPORT AN ERROR TO Reltio" - analyse the problem and report the issue to Reltio. 
  5. When all the events are correct generate events to Kafka internal topic: 
    1. Execute: ansible-playbook generate_compensation_events_kafka.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secret
  6. Verify the internal kafka topics and docker logs. 
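The event-count verification above (steps 4.5-4.7) can be sketched as a simple tally over the generated events. The event structure and the "a type with zero occurrences is an anomaly" rule below are illustrative assumptions, not the actual compensation-script output format:

```python
from collections import Counter

# Expected entity event types, as listed in step 4.5 above
ENTITY_EVENT_TYPES = [
    "HCP_CREATED", "HCO_CREATED",
    "HCP_CHANGED", "HCO_CHANGED",
    "HCP_MERGED", "HCO_MERGED", "HCP_LOST_MERGE", "HCO_LOST_MERGE",
    "HCP_REMOVED", "HCO_REMOVED",
]

def count_events(events):
    """Tally events by type and report expected types that never occurred."""
    counts = Counter(e["type"] for e in events)
    missing = [t for t in ENTITY_EVENT_TYPES if counts[t] == 0]
    return counts, missing

# Illustrative sample of compensation events
sample = [{"type": "HCP_CREATED"}, {"type": "HCP_CREATED"}, {"type": "HCO_CHANGED"}]
counts, missing = count_events(sample)
print(counts["HCP_CREATED"])  # 2
```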


" + }, + { + "title": "Hub Reconciliation Monitoring", + "pageID": "273707408", + "pageLink": "/display/GMDM/Hub+Reconciliation+Monitoring", + "content": "

Check Reconciliation dashboard

Check the reconciliation dashboard for every environment every Monday. Ensure that the set timespan corresponds with the time of the last reconciliation (Friday to Sunday):

Urls

EMEA PROD Reconciliation dashboard

GBL PROD Reconciliation dashboard

AMER PROD Reconciliation dashboard

GBLUS PROD Reconciliation dashboard

APAC PROD Reconciliation dashboard
\"\"

START - the number of entities/relations/mergeTree that the reconciliation started for

END - the number of entities/relations/mergeTree that were fully processed (the calculated checksum and the checksum from the Reltio export differ)

REJECTED - the number of entities/relations/mergeTree that were rejected (the calculated checksum and the checksum from the Reltio export are the same)

Issues

  1. ENTITIES/RELATION/MERGETREE START/REJECTED/END == 0
    Check reconciliation topics if there were produced and consumed events during last weekend
    Check airflow dags
  2. ENTITIES/RELATION/MERGETREE END > 50k
    Check HUB EVENTS dashboard
    Check snowflake

Check HUB EVENTS dashboard

The HUB events dashboard describes events that were processed by the event publisher and sent to the output topics (clients/Snowflake).

Urls

EMEA PROD: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/emea-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))
GBL PROD: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/gbl-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))
AMER PROD: https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/amer-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))
GBLUS PROD: https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/gblus-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))
APAC PROD: https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/apac-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))

Applied filter in the Kibana dashboard

metadata.HUB_RECONCILIATION: true


\"\"

Applying the above filter, we receive all reconciliation events that were processed by our streaming channel. Now we need to analyze two cases:

  1. comment field == 'No change in data detected (Entity MD5 checksum did not change), ignoring.'
    \"\"
    Although these events' checksums differed during the reconciliation calculation, after recalculating the checksum in entity-enricher the events were found to be the same. In that case we should check the Reltio export
  2. comment field != 'No change in data detected (Entity MD5 checksum did not change), ignoring.'
    \"\"
    This situation means that those events really differ and needed to be reconciled. For these entities/relations we send an update event to the Snowflake topic. That is the standard process, but the number of such events shouldn't be too big. If it exceeds 50k, we should analyse what has changed in Snowflake (Check snowflake) and check that everything is appropriate.

Please check events for 5 HCPs, 5 HCOs and 5 relations from different time periods, e.g. the first hour of reconciliation, the middle of reconciliation and the last hour of reconciliation.
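The "Entity MD5 checksum did not change" decision can be illustrated by hashing a canonical JSON serialization of the entity. The canonicalization below (sorted keys, compact separators) is an assumption for illustration, not necessarily what entity-enricher does internally:

```python
import hashlib
import json

def entity_md5(entity):
    """MD5 of a canonical (sorted-keys, compact) JSON serialization."""
    canonical = json.dumps(entity, sort_keys=True, separators=(",", ":"))
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

a = {"uri": "entities/GOyJxoA", "attributes": {"Country": "mx"}}
b = {"attributes": {"Country": "mx"}, "uri": "entities/GOyJxoA"}  # same data, different key order
print(entity_md5(a) == entity_md5(b))  # True -> "no change detected"
```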

Check reltio export

We should download the Reltio export used during the reconciliation from the S3 bucket. We can check the archive path in the hub_reconciliation_v2_* dags configuration:
E.g.

For AMER PROD: gblmdmhubprodamrasp101478/amer/prod/inbound/hub/hub_reconciliation/entities/archive/

Check snowflake

We should compare the last event to the previous one and see if there are any problems. We can use similar query:

\n
select * from landing.HUB_KAFKA_DATA where record_metadata:key='entities/GOyJxoA' ORDER BY record_metadata:CreateTime desc limit 10;
\n

\"\"

If there is only one record in the Snowflake HUB_KAFKA_DATA table, the retention time has passed and we do not have any data to compare to. In this case we can check the object in Reltio. Unfortunately Reltio doesn't keep all changes (e.g. RDM changes), so checking in Reltio doesn't always provide an explanation.

Check object in reltio

Unfortunately Reltio doesn't keep all changes (e.g. RDM changes), so checking in Reltio doesn't always explain what has changed. This solution should be used as a last resort.


To compare objects in Reltio we need to perform Reltio API requests with the time parameter.

The time parameter allows you to get the object in the state it was in at the selected time.

Steps:

  1. Find object in Reltio UI
    \"\"
  2. Find last update date
    \"\"
  3. Perform Reltio api request without time parameter

    \n
    curl --location --request GET 'https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw?options=ovOnly' \\\n--header 'Authorization: Bearer 357b69a4-4709-43b8-95df-06ef9839599f'
    \n
  4. Perform Reltio api request with time parameter

    \n
    curl --location --request GET 'https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw?options=ovOnly&time=1663064886000' \\\n--header 'Authorization: Bearer 357b69a4-4709-43b8-95df-06ef9839599f'
    \n
  5. Compare results
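The two requests above differ only in the time query parameter (an epoch timestamp in milliseconds). A sketch constructing the two URLs, reusing the tenant and entity IDs from the example requests (the bearer token is sent separately as a header and is omitted here):

```python
from urllib.parse import urlencode

# Tenant and entity IDs taken from the example requests above
BASE = "https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw"

def reltio_entity_url(time_ms=None):
    """URL for the ovOnly view, optionally at a historical point in time."""
    params = {"options": "ovOnly"}
    if time_ms is not None:
        params["time"] = time_ms
    return f"{BASE}?{urlencode(params)}"

print(reltio_entity_url())                       # current state
print(reltio_entity_url(time_ms=1663064886000))  # state at the selected time
```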

Check reconciliations topics

Check if new events showed up on the reconciliation topic during the last dag run and if those events were consumed:
EMEA PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=emea_prod&var-kube_env=emea_prod&var-topic=emea-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
AMER PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=amer_prod&var-kube_env=amer_prod&var-topic=amer-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
GBL PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=gbl_prod&var-kube_env=gbl_prod&var-topic=gbl-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
APAC PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=apac_prod&var-kube_env=apac_prod&var-topic=apac-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
GBLUS PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=gblus_prod&var-kube_env=gblus_prod&var-topic=gblus-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
\"\"

If there were no events generated during the last weekend, please check the airflow dags.

If events were generated but not processed, please check the mdmhub reconciliation service configuration.

Check airflow dags

If there is any issue, please verify the corresponding airflow dags. None of the subsequent stages should be failed:

https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_amer_prod
https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_gblus_prod
https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_emea_prod
https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_gbl_prod
https://airflow-apac-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_apac_prod

Report:

Every reconciliation check should be finished with a short report posted on the Teams chat:

Env | Entities END | Relation END | Merges END | Summary (OK/NOK) | Comment
EMEA PROD




GBL PROD




AMER PROD




GBLUS PROD




APAC PROD






" + }, + { + "title": "Verifying Reconciliation Results", + "pageID": "164470187", + "pageLink": "/display/GMDM/Verifying+Reconciliation+Results", + "content": "
  1. Run reconciliation dag in airflow for given entities, relations, merge-tree
    1. GBLUS DEV - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_dev
    2. GBLUS QA - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_qa
    3. GBLUS STAGE - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_stage
  2. After the reconciliation is finished, go to Kibana to verify (https://mdm-log-management-gbl-us-nonprod.COMPANY.com:5601/app/kibana#)
  3. Go to the Discover dashboard and choose the appropriate filter from the dropdown list: docker.<env>
    1. \"\"
    2. switch to Lucene
    3. choose the correct time range
    4. choose the correct index docker.<env>
  4. Add following custom filters  
    1. the tag depends on the environment; it can be one of:
      1. docker.dev.mdm-hub-reconciliation-service
      2. docker.qa.mdm-hub-reconciliation-service
      3. docker.stage.mdm-hub-reconciliation-service
      4. docker.prod.mdm-hub-reconciliation-service
    2. data.logger_name, choose if you want to check reconciliation type:
      1. com.COMPANY.mdm.reconciliation.stream.ReconciliationMergeLogic for mergeTree 
      2. com.COMPANY.mdm.reconciliation.stream.ReconciliationLogic - for entities/relations
        1. To check only entities in the search box write entities  to select only one object type (using LUCENE type)
        2. To check only relations in the search box write relation  to select only one object type (using LUCENE type)
    3. data.message is START - to check the number of entities/relations/mergeTree that the reconciliation started for
    4. data.message is END - to check the number of entities/relations/mergeTree that were fully processed
    5. data.message is REJECTED - to check the number of entities/relations/mergeTree that were rejected
    6. choose the appropriate time of reconciliation processing
  5. Differences verification between export and mongo
    1. find URI of the object to verify in kibana
      1. check the Event Publisher dashboard for this uri; if the Reconciliation process detected it as a difference (END) but the Publisher dashboard shows the comment "No change in data detected (Entity MD5 checksum did not change), ignoring.", it means something is wrong and you can compare the Reltio export entity with the Mongo entity.
    2. download export from S3 (us/<env>/inbound/hub/hub_reconciliation/<object_type>/archive)
      1. find the JSON in the part_ files - "zgrep "entities/<id>" part-00*"
      2. save the JSON to the file that will be passed to the calculateChecksum.groovy script - file format:
      3. [
        json,
        json
        ]

    3. process exported object using calculateChecksum.groovy from docker and save the object
      1. Modify the script:
        • add  EntityKt filteredEntity = EntityFilter.filter to the reconciliation event output so you can check the whole JSON in the output file
        • change to the outfile.append(uri + "|" + newLine + "\\n")
        • check the file for reference and use this calculateChecksum.groovy
      2. Script RUN:
        1. Run with the following parameters: D:\\docs\\EMEA\\Reconciliation_PROCESS\\entities\\part_01020222.txt entities FULL COMPANYCustID 1 https://api-emea-prod-gbl-mdm-hub.COMPANY.com:8443/prod/gw bhW
          1. path
          2. entities/relations/merge_tree
          3. FULL - to get full JSON compare MD5
          4. this is from the DAG config - hub_reconciliation_v2.yml.params.nonOvAttrToInclude
          5. manager URL
          6. manager API KEY
        2. \"\"
        3. Output file is in the - D:\\opt\\kafka_utils\\data
    4. export object with the same uri from mongo db using simple json format
      1. \"\"
    5. compare those two exports using a compare tool, but reformat the JSONs first
      1. Use the IntelliJ compare-two-JSON-files function
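Before reaching for a visual compare tool, a quick top-level field diff of the two exports can be scripted. This flat-key diff is a simplified sketch (nested structures are compared as whole values), and the sample objects are illustrative:

```python
def diff_json(left, right):
    """Return {key: (left_value, right_value)} for every top-level difference."""
    keys = set(left) | set(right)
    return {k: (left.get(k), right.get(k)) for k in keys if left.get(k) != right.get(k)}

# Illustrative: one object from the Reltio export, one from MongoDB
export_obj = {"uri": "entities/GOyJxoA", "status": "ACTIVE", "country": "mx"}
mongo_obj  = {"uri": "entities/GOyJxoA", "status": "ACTIVE", "country": "br"}
print(diff_json(export_obj, mongo_obj))  # {'country': ('mx', 'br')}
```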
" + }, + { + "title": "Snowflake:", + "pageID": "337856693", + "pageLink": "/pages/viewpage.action?pageId=337856693", + "content": "" + }, + { + "title": "How to fix issue in Reltio Parser with lookup typos", + "pageID": "337858475", + "pageLink": "/display/GMDM/How+to+fix+issue+in+Reltio+Parser+with+lookup+typos", + "content": "

This procedure shows how to manage typos in lookup codes that can resolve to the same alias in Snowflake, producing errors in the Reltio Configuration Parser.

  1. Go to ReltioConfigurations  collection in MongoDB
  2. Find configurations with typo that you want to fix (one by one or with filters)
  3. Using Edit Document option, open each affected configuration and find attribute with wrong lookupCode
  4. Fix typos and save changes


Example with screenshots

In this example we fix a whitespace character that was added at the end of the "DCRType" lookup code on APAC DEV. We go to this environment:

\"\"

Find our configurations:

\"\"

Check them for possible typo:

\"\"

Fix it in each affected configuration and save. This ensures that the next parsing run will be successful.
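Spotting affected configurations can also be scripted by checking each lookupCode for stray leading/trailing whitespace. The document shape below is an illustrative assumption (the real ReltioConfigurations documents may nest attributes differently), and in practice the fix itself is applied through the MongoDB UI as described above:

```python
def find_lookup_typos(configurations):
    """Yield (config_id, bad_code) for every lookupCode with stray whitespace."""
    for cfg in configurations:
        for attr in cfg.get("attributes", []):
            code = attr.get("lookupCode", "")
            if code != code.strip():
                yield cfg["_id"], code

# Illustrative configuration with a trailing space in "DCRType "
configs = [{"_id": "apac-dev-1", "attributes": [{"lookupCode": "DCRType "}]}]
print(list(find_lookup_typos(configs)))  # [('apac-dev-1', 'DCRType ')]
```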

" + }, + { + "title": "SSL Certificates:", + "pageID": "218453496", + "pageLink": "/pages/viewpage.action?pageId=218453496", + "content": "" + }, + { + "title": "Generating a CSR", + "pageID": "218454469", + "pageLink": "/display/GMDM/Generating+a+CSR", + "content": "

Go to the configuration repository (mdm-hub-env-config).

Find the expiring certificate.

Kong

For KONG / KAFKA FLEX PROD: mdm-hub-env-config/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.key

Certificate should be in ssl_certs/{{ env }}/certs/{{ url }}.pem

For example: ssl_certs/prod/certs/mdm-gateway.COMPANY.com.pem


We will generate the new certificate from the existing private key. The private key is in the same directory as the certificate, with a .key extension.

Copy it to some temporary directory and decrypt:

\n
anuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ ls -l\ntotal 32\n-rwxrwxrwx 1 anuskp anuskp  7353 Nov 12 11:59 mdm-gateway.COMPANY.com.key\n-rwxrwxrwx 1 anuskp anuskp 24459 Jan 28 15:05 mdm-gateway.COMPANY.com.pem\nanuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ cp mdm-gateway.COMPANY.com.key ~/temp\nanuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ cd ~/temp\nanuskp@CF-341562:~/temp$ ansible-vault decrypt ./mdm-gateway.COMPANY.com.key --vault-password-file=~/ap\nDecryption successful
\n


Contents of this file are confidential. Do not share it with anyone outside of your Team.


Generate a CSR from the private key:

CSR Value Guidelines

During the last certificate request we received the following CSR guidelines:

Common Name: needs to be the FQDN

Organizational Unit: No specific requirement -  optional attribute.

Organization: COMPANY, Inc                NOT  COMPANY [OR]  COMPANY Inc  [OR] COMPANY Inc.

Locality: City or Location must be spelled correctly. No abbreviations allowed

State: Must use full name of State or Province, no abbreviations allowed

Country: US (Always use 2 char. Country code)

Key Size: at least 2048 is recommended.


\n
anuskp@CF-341562:~/temp$ openssl req -new -key mdm-gateway.COMPANY.com.key -out mdm-gateway.COMPANY.com.csr\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:US\nState or Province Name (full name) [Some-State]:Connecticut\nLocality Name (eg, city) []:Groton\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY, Inc\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:mdm-gateway-int.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge password []:\nAn optional company name []:\nanuskp@CF-341562:~/temp$ ls -l\ntotal 16\n-rw-r--r-- 1 anuskp anuskp 1098 Feb 10 15:58 mdm-gateway.COMPANY.com.csr\n-rw------- 1 anuskp anuskp 1734 Feb 10 15:52 mdm-gateway.COMPANY.com.key
\n


All information provided should match the existing certificate exactly. Email should be set to the support DL:

\"\"
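The prompts above can also be answered non-interactively with -subj, which makes it easier to keep the values exactly in line with the CSR guidelines; a sketch with a throwaway key (the hostname is a placeholder):

```shell
# Throwaway key only for illustration - in practice reuse the decrypted
# existing private key as described above.
openssl genrsa -out demo.key 2048

# Subject fields follow the CSR guidelines; replace CN with the real FQDN.
openssl req -new -key demo.key -out demo.csr \
  -subj "/C=US/ST=Connecticut/L=Groton/O=COMPANY, Inc/CN=mdm-gateway-int.COMPANY.com"

# Sanity-check the request before submitting it.
openssl req -in demo.csr -noout -verify -subject
```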


Kafka - existing guide

Keystores/Truststores should be in ssl_certs/{{ env }}/ssl/server.keystore.jks

For example: ssl_certs/prod/ssl/server.keystore.jks


Go to some temporary directory and generate new Keystore:

\n
anuskp@CF-341562:~/temp$ keytool -genkeypair -alias kafka.mdm-gateway.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN = kafka.mdm-gateway.COMPANY.com, O = COMPANY"\nEnter keystore <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=2031523">●●●●●●●●●●●●●●●●●●</a> new password:\nEnter key password for <kafka.mdm-gateway.COMPANY.com>\n        (RETURN if same as keystore password):\n\nWarning:\nThe JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".
\n


The key password should be the same as the keystore password. After the certificate has been switched, remember to save the new keystore password in inventory/{{ env }}/group_vars/kafka/secret.yml.

In the -dname param, insert the same parameters as in the existing certificate.

Generate CSR from the keystore:

\n
anuskp@CF-341562:~/temp$ keytool -certreq -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.csr -keystore server.keystore.jks\nEnter keystore <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=2031525">●●●●●●●●●●●●●●●●●●●</a>\nThe JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".\nanuskp@CF-341562:~/temp$ ls -l\ntotal 8\n-rw-r--r-- 1 anuskp anuskp 1027 Feb 10 16:11 kafka.mdm-gateway.COMPANY.com.csr\n-rw-r--r-- 1 anuskp anuskp 2161 Feb 10 16:07 server.keystore.jks
\n


EFK

Every Elasticsearch node may have its own certificate.

There is only one certificate for Kibana.


Generating CSRs from existing .key files is exactly the same as for Kong. Remember to set the parameters ("O", "L", "CN") exactly the same as in the existing certificate.
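To copy the parameters exactly, it helps to read them off the existing certificate first; a sketch (the self-signed certificate below is only a stand-in for the real .pem):

```shell
# Stand-in certificate; in practice point openssl at the existing .pem file.
openssl req -x509 -newkey rsa:2048 -keyout old.key -out old.pem -days 1 -nodes \
  -subj "/C=US/ST=Connecticut/L=Groton/O=COMPANY, Inc/CN=example.COMPANY.com"

# Print the subject fields ("O", "L", "CN") to reuse in the new CSR, plus expiry.
openssl x509 -in old.pem -noout -subject -enddate
```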




" + }, + { + "title": "Requesting a new certificate", + "pageID": "218454527", + "pageLink": "/display/GMDM/Requesting+a+new+certificate", + "content": "

Go to https://requestmanager.COMPANY.com/. Search for Digital Certificates and click the first (and only) result found:

\"\"


COMPANY-issued certificates

Check the COMPANY SSL Certificate - Internal Only checkbox.

\"\"


Entrust-issued certificates

Check the Entrust External SSL certificate checkbox and click the first link:

\"\"


You will be redirected to the Entrust portal. Check if renewing an existing certificate works. If it doesn't, follow the steps below:



Wait for the email with new certificate from Entrust.




" + }, + { + "title": "Rotating EFK certificates", + "pageID": "218454407", + "pageLink": "/display/GMDM/Rotating+EFK+certificates", + "content": "
  1. Elasticsearch

    1. Single instance (non-prod clusters)

      Go to Elasticsearch config directory on host. For example:

      /app/efk/elasticsearch/config - US DEV (amraelp00005781.COMPANY.com)
      /apps/efk/elasticsearch/config - GBL DEV (euw1z1dl039.COMPANY.com)

      \n
      [mdm@euw1z1dl039 config]$ ls -l\ntotal 48\n-rw-rw-r-- 1 mdm   7000  1445 Feb 22  2019 admin-ca.pem\n-rw------- 1 mdm docker  1708 Jul 27  2020 elasticsearch-admin-key.pem\n-rw------- 1 mdm docker  1765 Jul 27  2020 elasticsearch-admin.pem\n-rw-rw---- 1 mdm docker   199 Mar 30  2020 elasticsearch.keystore\n-rw------- 1 mdm docker  1013 Jul 27  2020 elasticsearch.yml\n-rw------- 1 mdm docker  1704 Jul 27  2020 esnode-key.pem\n-rw------- 1 mdm docker  1801 Feb  9 05:00 esnode.pem\n-rw------- 1 mdm docker  3320 Mar 30  2020 jvm.options\n-rw------- 1 mdm docker 10899 Mar 30  2020 log4j2.properties\n-rw------- 1 mdm docker  1972 Jul 27  2020 root-ca.pem
      \n


      Check the elasticsearch.yml config file. By default, esnode.pem should contain the certificate and esnode-key.pem should contain the private key.
      If you have generated the new CSR based on the existing private key, you only need to update the esnode.pem file:

      \n
      [mdm@euw1z1dl039 config]$ vi esnode.pem
      \n


      Remove all file contents and copy-paste the new certificate. Save the changes.
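Before restarting, it is worth confirming that the pasted file still parses as a valid, unexpired certificate; a sketch (the generated file stands in for the edited esnode.pem):

```shell
# Stand-in for the freshly edited esnode.pem.
openssl req -x509 -newkey rsa:2048 -keyout demo-key.pem -out esnode.pem -days 1 -nodes \
  -subj "/CN=esnode.example"

# A bad paste fails to parse here; -checkend 0 also fails if already expired.
openssl x509 -in esnode.pem -noout -dates
openssl x509 -in esnode.pem -noout -checkend 0 && echo "certificate still valid"
```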

      Now restart the container and make sure it's working and not throwing errors in the logs:

      \n
      [mdm@euw1z1dl039 config]$ docker restart elasticsearch\nelasticsearch\n[mdm@euw1z1dl039 config]$ docker logs --tail 100 -f elasticsearch
      \n


      Log into Kibana and check that dashboards are correctly displaying data.


    2. Clustered (production clusters)

      On every Elasticsearch node go to the Elasticsearch config directory and replace esnode.pem certificate file, as shown in 1a.

      Once done, restart all Elasticsearch instances. Check logs. All instances should throw the following error in logs:

      \n
      [2022-02-10T10:53:19,770][ERROR][c.f.s.a.BackendRegistry ] [prod-gbl-data-2] Not yet initialized (you may need to run sgadmin)\n[2022-02-10T10:53:19,798][ERROR][c.f.s.a.BackendRegistry ] [prod-gbl-data-2] Not yet initialized (you may need to run sgadmin)
      \n


      Now, run the following command on all hosts in Elasticsearch cluster:

      \n
      docker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h {{ elasticsearch_cluster_network_host }} -cn {{ elasticsearch_cluster_name }} -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem  -key ../../../config/elasticsearch-admin-key.pem"
      \n


      where:

      {{ elasticsearch_cluster_network_host }} - instance's name in cluster, check in host_vars, for example (in configuration repository): mdm-hub-env-config/inventory/prod/host_vars/efk1/all.yml
      {{ elasticsearch_cluster_name }} - cluster name, is the same for all nodes, check in group_vars, for example: mdm-hub-env-config/inventory/prod/group_vars/efk-services/all.yml

      So, on example of GLOBAL PROD (2 clusters):

      Run the following on PROD4 (euw1z1pl025.COMPANY.com):

      \n
      [mdm@euw1z1pl025 config]$ docker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h 'euw1z1pl025.COMPANY.com' -cn 'elasticsearch-prod-gbl-cluster' -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem  -key ../../../config/elasticsearch-admin-key.pem"\nSearch Guard Admin v7\nWill connect to euw1z1pl025.COMPANY.com:9300 ... done\nConnected as CN=elasticsearch-admin.COMPANY.com,O=COMPANY\nElasticsearch Version: 7.6.2\nSearch Guard Version: 7.6.2-41.0.0\nContacting elasticsearch cluster 'elasticsearch-prod-gbl-cluster' and wait for YELLOW clusterstate ...\nClustername: elasticsearch-prod-gbl-cluster\nClusterstate: YELLOW\nNumber of nodes: 2\nNumber of data nodes: 2\nsearchguard index already exists, so we do not need to create one.\nINFO: searchguard index state is YELLOW, it seems you miss some replicas\nPopulate config from /usr/share/elasticsearch/plugins/search-guard-7/sgconfig\n../sgconfig/sg_action_groups.yml OK\n../sgconfig/sg_internal_users.yml OK\n../sgconfig/sg_roles.yml OK\n../sgconfig/sg_roles_mapping.yml OK\n../sgconfig/sg_config.yml OK\n../sgconfig/sg_tenants.yml OK\nWill update '_doc/config' with ../sgconfig/sg_config.yml\n   SUCC: Configuration for 'config' created or updated\nWill update '_doc/roles' with ../sgconfig/sg_roles.yml\n   SUCC: Configuration for 'roles' created or updated\nWill update '_doc/rolesmapping' with ../sgconfig/sg_roles_mapping.yml\n   SUCC: Configuration for 'rolesmapping' created or updated\nWill update '_doc/internalusers' with ../sgconfig/sg_internal_users.yml\n   SUCC: Configuration for 'internalusers' created or updated\nWill update '_doc/actiongroups' with ../sgconfig/sg_action_groups.yml\n   SUCC: Configuration for 'actiongroups' created or updated\nWill update '_doc/tenants' with ../sgconfig/sg_tenants.yml\n   SUCC: Configuration for 'tenants' created or 
updated\nDone with success
      \n



      Run the following on PROD5 (euw1z2pl024.COMPANY.com):

      \n
      [mdm@euw1z2pl024 config]$ docker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h 'euw1z2pl024.COMPANY.com' -cn 'elasticsearch-prod-gbl-cluster' -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem  -key ../../../config/elasticsearch-admin-key.pem"\nSearch Guard Admin v7\nWill connect to euw1z2pl024.COMPANY.com:9300 ... done\nConnected as CN=elasticsearch-admin.COMPANY.com,O=COMPANY\nElasticsearch Version: 7.6.2\nSearch Guard Version: 7.6.2-41.0.0\nContacting elasticsearch cluster 'elasticsearch-prod-gbl-cluster' and wait for YELLOW clusterstate ...\nClustername: elasticsearch-prod-gbl-cluster\nClusterstate: YELLOW\nNumber of nodes: 2\nNumber of data nodes: 2\nsearchguard index already exists, so we do not need to create one.\nINFO: searchguard index state is YELLOW, it seems you miss some replicas\nPopulate config from /usr/share/elasticsearch/plugins/search-guard-7/sgconfig\n../sgconfig/sg_action_groups.yml OK\n../sgconfig/sg_internal_users.yml OK\n../sgconfig/sg_roles.yml OK\n../sgconfig/sg_roles_mapping.yml OK\n../sgconfig/sg_config.yml OK\n../sgconfig/sg_tenants.yml OK\nWill update '_doc/config' with ../sgconfig/sg_config.yml\n   SUCC: Configuration for 'config' created or updated\nWill update '_doc/roles' with ../sgconfig/sg_roles.yml\n   SUCC: Configuration for 'roles' created or updated\nWill update '_doc/rolesmapping' with ../sgconfig/sg_roles_mapping.yml\n   SUCC: Configuration for 'rolesmapping' created or updated\nWill update '_doc/internalusers' with ../sgconfig/sg_internal_users.yml\n   SUCC: Configuration for 'internalusers' created or updated\nWill update '_doc/actiongroups' with ../sgconfig/sg_action_groups.yml\n   SUCC: Configuration for 'actiongroups' created or updated\nWill update '_doc/tenants' with ../sgconfig/sg_tenants.yml\n   SUCC: Configuration for 'tenants' created or 
updated\nDone with success
      \n


      Check the logs. There should be no new errors. Check Kibana - whether you can login and view data in dashboards.

  2. Kibana

    Go to Kibana config directory on host. For example:

    /app/efk/kibana/config

    \n
    [root@amraelp00005781 config]# ls -l\ntotal 12\n-rw-r--r-- 1 mdmihnpr mdmihub 1964 Jul 10  2020 kibana.crt\n-rw-r--r-- 1 mdmihnpr mdmihub 1704 Jul 10  2020 kibana.key\n-rw-rwxr-- 1 mdmihnpr mdmihub  536 Jul  5  2020 kibana.yml
    \n


    Modify the kibana.crt file. Remove its contents and copy-paste new certificate.

    \n
    [root@amraelp00005781 config]# vi kibana.crt
    \n


    Do the same for kibana.key, unless you have generated the CSR based on the existing private key.

    Restart the Kibana container and check logs:

    \n
    [root@amraelp00005781 config]# docker restart kibana\nkibana\n[root@amraelp00005781 config]# docker logs --tail 100 -f kibana
    \n


    Wait for Kibana to come back up and make sure there are no errors in logs and you can login to web app and view data in dashboards.

REMEMBER TO PUSH NEW CERTIFICATES TO CONFIGURATION REPO



" + }, + { + "title": "Rotating FLEX Kafka certificates", + "pageID": "387161356", + "pageLink": "/display/GMDM/Rotating+FLEX+Kafka+certificates", + "content": "

The Kafka FLEX certificate is the same as the Kong FLEX certificate.

1. Email to Santosh

If there is a need to rotate the Kafka certificate on the FLEX environment, approval from the business is required.

To: santosh.dube@COMPANY.com

Cc: dl-atp_mdmhub_support@COMPANY.com

Hi Santosh,

We created the RFC ticket in our Jira - <Link to the ticket>

The FLEX PROD Kafka certificate is expiring; we need to go through the deployment procedure and replace the certificate on our Kafka.

We prepared the following deployment procedure – '<doc>’ – added to attachment.


Could you please approve this request so that we can trigger this deployment and replace the certificates?


Let me know in case of any questions.

Regards,

\"\"



Change the certificate:


2. Check if CA cert has changed

!IMPORTANT! If the intermediate certificate changed, the FLEX team must be contacted to replace it. 


To: DL-CBK-MAST@COMPANY.com anisha.sahu@COMPANY.com santosh.dube@COMPANY.com

Dear FLEX team,

We are providing a new client.truststore.jks file which should be replaced on your side. The change was forced by a change in the policy of providing new certificates and by server retirement. Because the new certificate is signed by a different intermediate CA, the client truststore needs to be changed.

Please treat this as a high priority as the certificate will expire in 2 days.

Kind regards,


Remember to attach new client.truststore.jks file!

It is not required to create an additional email thread with the client if only the certificate needs to be changed. 


3. Rotate certificate

3.1 create keystore


Create a new keystore with the new key pair. The private key should be in the repository under mdm-hub-env-config/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.key, and the certificate should be requested.

Tools → import Key Pair → 

\"\"


→ PKCS #8 → 

\"\"

→ and then choose the private key and certificates from the directories in the repo.


Passwords can be found under mdm-hub-env-config/inventory/prod_us/host_vars/kafka1/secret.yml
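The same key-pair import can be done from the command line instead of KeyStore Explorer; a sketch that bundles a key and certificate into a PKCS#12 keystore (file names and the password are placeholders, and Kafka has to be configured with ssl.keystore.type=PKCS12 if this format is used instead of JKS):

```shell
# Stand-in key + certificate; in practice use the decrypted private key from
# the repo and the certificate chain received for the request.
openssl req -x509 -newkey rsa:2048 -keyout tls.key -out tls.crt -days 1 -nodes \
  -subj "/CN=mdm-ihub-us-trade-prod.COMPANY.com"

# Bundle them into a keystore; take the real password from secret.yml.
openssl pkcs12 -export -in tls.crt -inkey tls.key \
  -name kafka.mdm-ihub-us-trade-prod.COMPANY.com \
  -out server.keystore.p12 -passout pass:changeit

# Confirm the keystore opens with the expected password.
openssl pkcs12 -in server.keystore.p12 -passin pass:changeit -nokeys
```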

3.2 Rotate certificates on machines

Once done, log into host and go to /app/kafka/ssl.

Back existing server.keystore.jks up:

\n
$ cp server.keystore.jks server.keystore.jks-backup
\n

And upload the modified server.keystore.jks.


Restart Kafka container and wait for it to come back up:

\n
$ docker restart kafka_kafka_1
\n


Replace the keystore and restart Kafka container on each node.

Wait for Kafka to come up and become fully operational before restarting the next node. After the certificate has been successfully rotated, push the modified keystore to the mdm-hub-env-config repository. The CER and CSR files are no longer needed and can be disposed of.



Provide the evidence in the email thread:


After the replacement, an evidence file should be sent:

\"\"


" + }, + { + "title": "Rotating FLEX Kong certificates", + "pageID": "387161359", + "pageLink": "/display/GMDM/Rotating+FLEX+Kong+certificates", + "content": "

The Kafka certificate is the same as the Kong certificate.


Rotating FLEX Kong certificate.

If there is a need to rotate the Kong certificate on the FLEX environment, approval from the business is required.

To: santosh.dube@COMPANY.com

Cc: dl-atp_mdmhub_support@COMPANY.com

Hi Santosh,

We created the RFC ticket in our Jira - <Link to the ticket>

The FLEX PROD Kong certificate is expiring; we need to go through the deployment procedure and replace the certificate on our Kong API gateway.

We prepared the following deployment procedure – '<doc>’ – added to attachment.


Could you please approve this request so that we can trigger this deployment and replace the certificates?


Let me know in case of any questions.

Regards,
\"\"




Change the certificate:

!IMPORTANT! If the intermediate certificate changed, the FLEX team must be contacted to replace it. 


To: DL-CBK-MAST@COMPANY.com anisha.sahu@COMPANY.com santosh.dube@COMPANY.com

Dear FLEX team,

We are providing a new client.truststore.jks file which should be replaced on your side. The change was forced by a change in the policy of providing new certificates and by server retirement. Because the new certificate is signed by a different intermediate CA, the client truststore needs to be changed.

Please treat this as a high priority as the certificate will expire in 2 days.

Kind regards,


Remember to attach new client.truststore.jks file!

It is not required to create an additional email thread with the client if only the certificate needs to be changed. 

  1. You should receive three certificates from COMPANY/Entrust: Server Certificate and Intermediate (PBACA G2) or Intermediate and Root. Open the Server Certificate in the text editor:

    \"\"

    \"\"


    Copy all received certificates into a chain in the following sequence:

    1. Server Certificate
    2. Intermediate
    3. Root:

    \"\"

  2. Go to the main directory, on a machine with a command line and Ansible installed
  3. Make sure you are on the master branch and have the newest changes fetched
    git checkout master
    git pull
  4. Comment out all sections in mdm-hub-env-config\\inventory\\prod_us\\group_vars\\kong\\all.yml except “kong_certificates”

    \"\"


  5. Comment out all sections in mdm-hub-env-config\\roles\\update_kong_api\\tasks\\main.yml except the “Add Certificates” part

    \"\"


  6. Execute ansible playbook
    (Limit it to only one Kong host in the cluster)
    $ ansible-playbook update_kong_api.yml -i inventory/prod_us/inventory --vault-password-file=/home/karol/password --limit kong1
  7. Verify that the server is responding with the correct certificate
    openssl s_client -connect mdm-ihub-us-trade-prod.COMPANY.com:443 </dev/null
    openssl s_client -connect amraelp00006207.COMPANY.com:8443 </dev/null
    openssl s_client -connect amraelp00006208.COMPANY.com:8443 </dev/null
    openssl s_client -connect amraelp00006209.COMPANY.com:8443 </dev/null
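The per-host checks above can be wrapped in a small loop that prints just the issuer and expiry of each presented certificate; a sketch that requires network access to the nodes (hostnames are the ones listed above):

```shell
# Print issuer and expiry of the certificate presented by each Kong node.
for host in amraelp00006207 amraelp00006208 amraelp00006209; do
  echo "== ${host} =="
  openssl s_client -connect "${host}.COMPANY.com:8443" </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -enddate
done
```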




Provide the evidence in the email thread:


After the replacement, an evidence file should be sent:

\"\"


" + }, + { + "title": "Rotating Kafka certificates", + "pageID": "229180645", + "pageLink": "/display/GMDM/Rotating+Kafka+certificates", + "content": "

After receiving the signed SSL certificate, place it in the same mdm-hub-env-config repo directory as the existing Kafka keystore. For example:
ssl_certs/prod/ssl/server.keystore.jks - for Global PROD


Add the certificate to keystore, using the command:

\n
$ keytool -importcert -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.cer -keystore server.keystore.jks
\n

Important: use the same alias as the existing certificate in this keystore, so that it is overwritten


Once done, log into host and go to /app/kafka/ssl.

Back existing server.keystore.jks up:

\n
$ cp server.keystore.jks server.keystore.jks-backup
\n

And upload the modified server.keystore.jks.


Restart Kafka container and wait for it to come back up:

\n
$ docker restart kafka_kafka_1
\n


If there are multiple Kafka instances (Production), replace the keystore and restart the Kafka container on each node. Wait for Kafka to come up and become fully operational before restarting the next node. You can check node availability using, for example, AKHQ.

After the certificate has been successfully rotated, push the modified keystore to the mdm-hub-env-config repository. The CER and CSR files are no longer needed and can be disposed of.

" + }, + { + "title": "Rotating Kong certificate", + "pageID": "218453498", + "pageLink": "/display/GMDM/Rotating+Kong+certificate", + "content": "

You should receive three certificates from COMPANY/Entrust: Server Certificate and Intermediate (PBACA G2) or Intermediate and Root. Open the Server Certificate in the text editor:

\"\"

\"\"


Copy all received certificates into a chain in the following sequence:

  1. Server Certificate
  2. Intermediate
  3. Root:

\"\"
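A quick sanity check of the assembled chain: count the certificate blocks and confirm the server certificate comes first, since openssl x509 reads only the first one. A sketch with stand-in certificates:

```shell
# Build a stand-in chain: "server" certificate followed by an "intermediate".
openssl req -x509 -newkey rsa:2048 -keyout s.key -out server.crt -days 1 -nodes \
  -subj "/CN=mdm-gateway.COMPANY.com"
openssl req -x509 -newkey rsa:2048 -keyout i.key -out intermediate.crt -days 1 -nodes \
  -subj "/CN=Demo Intermediate"
cat server.crt intermediate.crt > chain.pem

# One BEGIN CERTIFICATE block per certificate in the chain.
grep -c 'BEGIN CERTIFICATE' chain.pem

# x509 parses only the FIRST certificate - its subject must be the server's.
openssl x509 -in chain.pem -noout -subject
```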


Save the file as {hostname}.pem - for example mdm-gateway.COMPANY.com.pem and switch it in configuration repository:


Go to appropriate Kong group_vars:


Make sure all "create_or_update" flags are set to "False":

\"\"


Go down to #CERTIFICATES and switch the "create_or_update" flag. The path to the .pem file should not have changed - if you chose a different filename, adjust it here:

\"\"


Run the update_kong_api_v1.yml playbook. Limit it to only one Kong host in the cluster. After it has finished, switch the "create_or_update" flag back to "False" and push new certificate to the repository.

$ ansible-playbook update_kong_api_v1.yml -i inventory/prod/inventory --vault-password-file=~/ap --limit kong_v1_01


Check all SNIs on all Kong instances using s_client:


$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl017.COMPANY.com:8443
$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl021.COMPANY.com:8443
$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl022.COMPANY.com:8443
$ openssl s_client -servername mdm-gateway.COMPANY.com -connect euw1z1pl017.COMPANY.com:8443
...



" + }, + { + "title": "Hub upgrade procedures and calendar", + "pageID": "401611801", + "pageLink": "/display/GMDM/Hub+upgrade+procedures+and+calendar", + "content": "

Backend components upgrade policy

  1. Major upgrade once a year
  2. Patch upgrades every quarter

Upgrade table

Component | current version | latest upgrade date | newest patch release | planned patch upgrade date | newest stable release | planned major upgrade date | Notes
Prometheus | 2.53.4 (monitoring host) | 2025-04-10 | - | - | 2.53.4 | -

\n MR-10396\n -\n Getting issue details...\n STATUS\n

kube-prometheus-stack | 61.7.2 | 2025-05 | - | - | 70.1.0 | -

\n MR-9578\n -\n Getting issue details...\n STATUS\n

Airflow | 2.7.2 | 2023-11 | 2.7.3 | - | 2.10.5 | 2025 Q2

\n MR-10437\n -\n Getting issue details...\n STATUS\n

Monstache | 6.7.21 | 2025-05 | - | - | 6.7.21 | -

\n MR-10437\n -\n Getting issue details...\n STATUS\n

Kong Gateway | 3.4.2 | 2024-09 | - | - | 3.9.0 | 2025 Q3
Kong Ingress Controller | 3.2.0 | 2024-09 | 3.2.4 | - | 3.4.4 | 2025 Q3
Kong external proxy | 3.3.1 | 2023-10 | - | - | 3.9.0 | 2025 Q3
OpenJDK - AdoptOpenJDK | 11.0.14.1_1 | 2022(?) | 11.0.27_6 | 2025 Q2 | Temurin 17.0.15+6-LTS | 2025 Q3
Jenkins | 2.462.3 | 2024-10 | - | - | 2.504.1 | 2025 Q3 | All versions newer than 2.462.3 require Java 17
Consul | 1.16.2 | 2023-11 | 1.16.6 | - | 1.21.0 | 2025 Q2

\n MR-10437\n -\n Getting issue details...\n STATUS\n

Elasticsearch | 8.11.4 | 2024-02 | - | - | 9.0.1 | 2025 Q4
Fluentd | 1.16.5 | 2024-05 | 1.16.8 | - | 1.18 | 2025 Q4 | Replace with Fluent Bit instead?
Fluent Bit | 2.2.3 | 2025-02 | - | - | 4.0.1 | 2025 Q4
Apache Kafka | 3.7.0 | 2024-07 | 3.7.2 | 2025 Q2 | 4.0.0 | 2026 Q1
AKHQ | 0.23.0 | 2024-08 | - | - | 0.25.1 | 2026 Q1
MongoDB | 6.0.21 | 2025-04 | - | - | - | 2026 Q2

\n MR-10399\n -\n Getting issue details...\n STATUS\n

" + }, + { + "title": "Airflow upgrade procedure", + "pageID": "401611840", + "pageLink": "/display/GMDM/Airflow+upgrade+procedure", + "content": "


Introduction

Airflow used by MDM HUB is maintained by Apache: https://airflow.apache.org/

To deploy airflow we are using official airflow helm chart: https://github.com/airflow-helm/charts



Prerequisite

  1. Verify changelog for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.
    https://airflow.apache.org/docs/apache-airflow/stable/release_notes.html
  2. Ensure base images are mirrored to COMPANY artifactory.


Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade Steps

Airflow version upgrade

  1. Apply changes in mdm-hub-inbound-services:
    1. Change the airflow airflowVersion and defaultAirflowTag tag to the updated version in:
      1. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/airflow/src/main/helm/values.yaml
    2. Change airflow docker base image version in:
      1. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/airflow/docker/Dockerfile 
    3. Apply other changes to helm chart if necessary (Prerequisite step 1)
  2. Apply configuration changes in mdm-hub-cluster-env:
    1. Apply needed changes to configuration if necessary (Prerequisite step 1)
      1. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/airflow/values.yaml
  3. Build and deploy changes with new configuration.
  4. Verify if the component is working properly:
    1. Check if component started
    2. Go to the Airflow main page and verify if everything is working as expected (no log in issues, no errors, can see dags etc.)
    3. Check component logs for errors
  5. Check if all dags are working properly
    1. For dags with periodic schedule - wait for them to be triggered 
    2. For dags executed from UI  - execute all of them with test data 

Airflow helm template upgrade

  1. Deploy the current airflow version on a local environment from mdm-hub-inbound-services
  2. Get current airflow helm manifest and save it to airflow_manifest_1.yaml

    \n
    helm get manifest -n airflow airflow > airflow_manifest_1.yaml
    \n
  3. Pull the new airflow chart version from the chart repository and replace it in the airflow/charts directory. Copy the old chart version to a temporary directory outside the repository for comparison

    \n
    helm pull apache-airflow/airflow --version "1.13.0"\nmv airflow-1.13.0.tgz ${repo_dir}/mdm-hub-inbound-services/helm/airflow/src/main/helm/charts/airflow-1.13.0.tgz
    \n
  4. Extract the old helm chart and check the MODIFICATION_LIST file for modifications applied to the helm chart. Apply the needed changes to the new airflow chart.

    \n
    tar -xzf airflow-1.10.0_modified.tgz\ncat airflow/MODIFICATION_LIST
    \n
  5. Perform a helm upgrade with the new helm chart version. Verify that airflow is working as expected
  6. Get current airflow manifest and save it to airflow_manifest_2.yaml

    \n
    helm get manifest -n airflow airflow > airflow_manifest_2.yaml
    \n
  7. Compare the generated manifests and verify whether there are breaking changes
  8. Fix all issues
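The manifest comparison in step 7 can be a plain unified diff; a minimal sketch with stand-in manifests:

```shell
# Stand-ins for the two saved manifests from the steps above.
printf 'image: airflow:2.7.2\nreplicas: 1\n'  > airflow_manifest_1.yaml
printf 'image: airflow:2.10.5\nreplicas: 1\n' > airflow_manifest_2.yaml

# diff exits non-zero when the files differ, so don't let that abort the shell.
diff -u airflow_manifest_1.yaml airflow_manifest_2.yaml > manifest.diff || true
cat manifest.diff
```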




Past upgrades

Upgrade Airflow x → y

Description:


Procedure:


Reference tickets:


Reference PR's:
http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/pull-requests/1283/overview



" + }, + { + "title": "AKHQ upgrade procedure", + "pageID": "401611810", + "pageLink": "/display/GMDM/AKHQ+upgrade+procedure", + "content": "



Introduction

AKHQ used in MDM HUB is maintained by tchiotludo/akhq.




Prerequisite

  1. Verify changelog for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.
  2. Ensure base images are mirrored to COMPANY artifactory.




Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade Steps

  1. Apply changes in mdm-hub-inbound-services:
    1. Change akhq image tag to updated version in:
      1. mdm-hub-inbound-services/helm/kafka/chart/src/main/helm/templates/akhq/akhq.yaml
      2. mdm-hub-inbound-services/helm/kafka/chart/src/main/helm/values.yaml
    2. Apply other changes to helm chart if necessary (Prerequisite step 1)
  2. Apply configuration changes in mdm-hub-cluster-env:
    1. Change akhq image tag to updated version in mdm-hub-cluster-env/amer/sandbox/namespaces/amer-backend/values.yaml (example for SBX)
    2. Apply other changes to configuration if necessary (Prerequisite step 1)
  3. Build and deploy changes with new configuration.
  4. Verify if the component is working properly:
    1. Check if component started
    2. Go to the AKHQ dashboard and verify if everything is working as expected (no log in issues, no errors, can see topics, consumergroups etc.)
    3. Check component logs for errors




Past upgrades

Upgrade AKHQ 0.14.1 → 0.24.0 (0.23.0)

Description:

This update required an upgrade to version 0.24.0. After checking the changes between the previous version and the target version, it became obvious that additional changes to the helm chart were required.

Errors were detected during upgrade verification for which no fix was found in version 0.24.0. As a result, the version was changed to 0.23.0, where the issue did not occur.

Procedure:

  1. Pushed base image to COMPANY artifactory: artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.24.0
  2. Applied inbound-services changes:
    1. changed image tag to 0.24.0 in:
      1. akhq.yaml
      2. values.yaml
    2. Applied necessary changes to akhq-cm.yaml (based of changelog requirements):
      1. added micronaut configuration
      2. moved topic-data property under ui-options property
      3. adjusted security configuration
  3. Changed image tag to 0.24.0 in cluster-env values.yaml
  4. Built the inbound-services changes and deployed them with the new configuration on the SBX environment.
  5. Verified if component is working:
    1. component started
    2. an error was present after logging in
    3. there was an exception thrown in logs:
      java.lang.NullPointerException: null\nat org.akhq.repositories.AvroWireFormatConverter.convertValueToWireFormat(AvroWireFormatConverter.java:39)\n\tat org.akhq.repositories.RecordRepository.newRecord(RecordRepository.java:454)\n\tat org.akhq.repositories.RecordRepository.lambda$getLastRecord$3(RecordRepository.java:109)\n\tat java.base/java.lang.Iterable.forEach(Unknown Source)\n\tat org.akhq.repositories.RecordRepository.getLastRecord(RecordRepository.java:107)\n\tat org.akhq.controllers.TopicController.lastRecord(TopicController.java:224)\n\tat org.akhq.controllers.$TopicController$Definition$Exec.dispatch(Unknown Source)\n\tat io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:351)\n\tat io.micronaut.context.DefaultBeanContext$4.invoke(DefaultBeanContext.java:583)\n\tat io.micronaut.web.router.AbstractRouteMatch.execute(AbstractRouteMatch.java:303)\n\tat io.micronaut.web.router.RouteMatch.execute(RouteMatch.java:111)\n\tat io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:103)\n\tat io.micronaut.http.server.RouteExecutor.lambda$executeRoute$14(RouteExecutor.java:656)\n\tat reactor.core.publisher.FluxDeferContextual.subscribe(FluxDeferContextual.java:49)\n\tat reactor.core.publisher.InternalFluxOperator.subscribe(InternalFluxOperator.java:62)\n\tat reactor.core.publisher.FluxSubscribeOn$SubscribeOnSubscriber.run(FluxSubscribeOn.java:194)\n\tat io.micronaut.reactive.reactor.instrument.ReactorInstrumentation.lambda$null$0(ReactorInstrumentation.java:62)\n\tat reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)\n\tat reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)\n\tat io.micrometer.core.instrument.composite.CompositeTimer.recordCallable(CompositeTimer.java:68)\n\tat io.micrometer.core.instrument.Timer.lambda$wrap$1(Timer.java:171)\n\tat 
io.micronaut.scheduling.instrument.InvocationInstrumenterWrappedCallable.call(InvocationInstrumenterWrappedCallable.java:53)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source) \n
  6. Found no fix / workaround for this in 0.24.0 version, decided to change version to 0.23.0
  7. Applied inbound-services changes:
    1. changed image tag to 0.23.0 in:
      1. akhq.yaml
      2. values.yaml
  8. Changed image tag to 0.23.0 in cluster-env values.yaml
  9. Built inbound-services changes and deployed them with the new configuration on the SBX environment.
  10. Verified if component is working:
    1. component started
    2. no errors present on dashboard, everything is as expected
    3. no errors in logs

Reference tickets:

[MR-6778] Prepare AKHQ upgrade plan to version 0.24.0

Reference PR's:

[MR-6778] AKHQ upgraded to 0.23.0

[MR-6778] SANDBOX: AKHQ version change to 0.23.0

" + }, + { + "title": "Consul upgrade procedure", + "pageID": "401611813", + "pageLink": "/display/GMDM/Consul+upgrade+procedure", + "content": "

Introduction

Consul used in MDM is installed using the official Consul Helm chart provided by HashiCorp.


Prerequisite

Before upgrade verify checklist:



Generic procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Upgrade Consul Helm chart
  2. Upgrade Consul Docker images
  3. Update this confluence page


Past upgrades

Upgrade 1.10.2 → 1.16.2

Description

This was the only Consul upgrade so far.

Procedure

  1. Upgrade Consul Helm chart
    1. Add the HashiCorp Helm repo and find the newest Consul chart and app version

      \n
      helm repo add hashicorp https://helm.releases.hashicorp.com\nhelm search repo hashicorp/consul
      \n
    2. In helm/consul/src/main/helm/Chart.yaml uncomment repository and change version number
    3. Update dependencies

      \n
      cd helm/consul/src/main/helm\nhelm dependency update
      \n
    4. Comment repository line back in Chart.yaml
    5. Commit only the updated charts/consul-*.tgz and Chart.yaml files
  2. Upgrade Consul Docker image
    1. Pull official images from Docker Hub
      1. https://hub.docker.com/r/hashicorp/consul/tags
      2. https://hub.docker.com/r/hashicorp/consul-k8s-control-plane/tags
    2. Tag images with artifactory.COMPANY.com/mdmhub-docker-dev/ prefix
    3. Push images to Artifactory
  3. Update cluster-env configuration (backend namespace)
    1. Change Docker image tags to the ones uploaded in the previous step
  4. Deploy updated backend
  5. Ensure cluster is in a running state
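The Chart.yaml edit in step 1 can be sketched as follows; the dependency name and exact field layout are assumptions about the repo, not copied from it:

```yaml
# helm/consul/src/main/helm/Chart.yaml (illustrative sketch)
dependencies:
  - name: consul
    version: "1.2.0"   # chart version found via `helm search repo hashicorp/consul`
    # the repository line is commented back out after `helm dependency update` (step 1d)
    repository: "https://helm.releases.hashicorp.com"
```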

Reference tickets

Reference PRs


" + }, + { + "title": "Elastic stack upgrade", + "pageID": "401611843", + "pageLink": "/display/GMDM/Elastic+stack+upgrade", + "content": "

Introduction:

ECK stack used in MDM is installed using the official ECK stack installation procedures provided by Elasticsearch B.V.


Prerequisite

Before upgrade verify checklist:



Generic procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade Elastic stack steps:

  1. Upgrade Elasticsearch docker image
  2. Upgrade Elasticsearch plugins and dependencies
  3. Upgrade Kibana docker image
  4. Upgrade Logstash docker image
  5. Upgrade Logstash drivers and dependencies
  6. Upgrade FleetServer docker image
  7. Upgrade APM jar agents
  8. Update this confluence page

Past upgrades

ECK operator installation

Uninstall olm ECK operator 

  1. Scale down the number of olm-operator pods to 0
  2. Delete eck olm Subscription with orphan propagation
    kubectl delete subscription my-elastic-cloud-eck --cascade=orphan\n
  3. Delete all eck olm InstallPlans with orphan propagation
    kubectl delete installplans install-* --cascade=orphan\n
  4. Delete all "eck" ClusterServiceVersions with orphan propagation
    for ns in $(kubectl get namespaces -o name | cut -c 11-);\ndo\necho $ns;\nkubectl delete csv elastic-cloud-eck.v2.10.0 -n $ns --cascade=orphan;\ndone\n
  5. Scale down elastic-operator to 0
  6. Delete eck operator objects:
    1. ConfigMaps
      for cm in $(kubectl get cm | awk '{if ($1 ~ "elastic-") print $1}');\ndo\n  echo $cm;\n  kubectl delete cm $cm --cascade=orphan;\ndone\n
    2. ServiceAccount
      kubectl delete sa elastic-operator --cascade=orphan\n
    3. Elastic operator cert
      kubectl delete ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● --cascade=orphan\n
    4. ClusterRole - everything with "elastic" in name besides elastic-agent
      for cr in $(kubectl get clusterrole | grep -v elastic-agent | awk '{if ($1 ~ "elastic") print $1}')\ndo\n  echo $cr;\n  kubectl delete clusterrole $cr --cascade=orphan;\ndone\n
    5. Service
      kubectl delete service elastic-operator-service --cascade=orphan\n
    6. Deployment eck-operator
      kubectl delete deployment eck-operator

Install eck-operator standalone

  1. Adjust labels and annotations of CRDs
    for CRD in $(kubectl get crds --no-headers -o custom-columns=NAME:.metadata.name | grep k8s.elastic.co); do\n  echo "changing $CRD";   \n  kubectl annotate crd "$CRD" meta.helm.sh/release-name="operators";\n  kubectl annotate crd "$CRD" meta.helm.sh/release-namespace="operators";\n  kubectl label crd "$CRD" app.kubernetes.io/managed-by=Helm;\ndone\n
  2. Install eck-operator without OLM by deploying operators version 4.1.19-project-boldmove-SNAPSHOT or newer

Upgrade ECK stack

Procedure:

  1. Upgrade Elastic stack docker images
    1. Pull the newest available docker images of all Elastic stack components (besides the APM agent) from Docker Hub and push them to artifactory
    2. Download the newest APM agent jar from the maven repo and push it to the artifactory maven gallery
    3. Change the version tags of all Elastic stack components in the inbound-services repo
  2. Repeat steps 3 - 5 in the following order:
    1. Elasticsearch - wait until all nodes are updated (shards relocation lasts long)
    2. Kibana
    3. Logstash and FleetServer
  3. Update cluster-env configuration (backend namespaces)
    1. Change Docker image tag
  4. Deploy updated backend with Jenkins job
  5. Ensure backend component is working fine
  6. Deploy mdmhub to update APM agents
  7. Ensure mdmhub components are working fine
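The version/image bump from step 1 typically lands in the ECK custom resources; a minimal sketch, where the resource name, version, and mirrored image path are illustrative assumptions:

```yaml
# Sketch of an ECK Elasticsearch resource version bump (names/values illustrative)
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.13.4   # illustrative target version
  # when using mirrored images (assumed path):
  image: artifactory.COMPANY.com/mdmhub-docker-dev/elasticsearch/elasticsearch:8.13.4
```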

Reference tickets: 


" + }, + { + "title": "Fluent Bit (Fluentbit) upgrade procedure", + "pageID": "401611834", + "pageLink": "/display/GMDM/Fluent+Bit+%28Fluentbit%29+upgrade+procedure", + "content": "

Introduction:

Fluent Bit used in MDM is installed using the official Fluent Bit installation procedure provided by the Cloud Native Computing Foundation.


Prerequisite

Before upgrade verify checklist:



Generic procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Upgrade Fluentbit Docker images
  2. Update this confluence page

Past upgrades

Upgrade 1.8.11 → 2.2.2

Description:

This was the only Fluentbit upgrade so far.

Procedure:

  1. Upgrade Fluentbit docker image
    1. Pull the newest available fluentbit-debug and fluentbit docker images from Docker Hub and push them to artifactory.
    2. Change the version tags of mdmhub fluentbit and kubevents fluentbit in the inbound-services repo.
  2. Update cluster-env configuration (envs and backend namespaces)
    1. Change Docker image tags to the ones uploaded in the previous step
  3. Deploy updated backend for kubevents and mdmhub for components logs with Jenkins jobs
  4. Ensure kubevents and mdmhub logs are being stored in Elasticsearch, check Kibanas.

Reference tickets: 

Reference PRs:


" + }, + { + "title": "Fluentd upgrade procedure", + "pageID": "401611830", + "pageLink": "/display/GMDM/Fluentd+upgrade+procedure", + "content": "

Introduction:

Fluentd used in MDM is installed using official Fluentd installation procedures provided by Cloud Native Computing Foundation.


Prerequisite

Before upgrade verify checklist:



Generic procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Upgrade Fluentd Docker images
  2. Upgrade Fluentd plugins and dependencies
  3. Update this confluence page

Past upgrades

Upgrade fluentd-kubernetes-daemonset - v1.12-debian-elasticsearch7-1 → v1.16.2-debian-elasticsearch7-1.1

Procedure:

  1. Change the docker image base to the newest version in the env-config repo (e.g. "fluentd-kubernetes-daemonset:v1.16.2-debian-elasticsearch7-1.1")
  2. Build the image with the docker build job: https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_manage_playbooks/job/Docker/job/build_Dockerfile/
  3. Update the cluster-env repo configuration with the new image tag for fluentd (e.g. 981)
  4. Test on SBX
  5. After checking the fluentd output logs, the following actions needed to be taken:
    1. upgrading of the following plugins and dependencies:
      1. "ruby-kafka", "~> 1.5"
      2. "fluent-plugin-kafka", "0.19.2"
    2. defining new mappings in "backend" and "others" datastreams:
        "properties": {\n    "kubernetes.labels.app": {\n      "dynamic": true,\n      "type": "object",\n      "enabled": false\n    }\n
    3. execute ansible playbook with index template update 
    4. rollover "backend" and "others" datastreams after mappings change
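The plugin pins from step 5a correspond to Gemfile entries along these lines; the versions are taken from the steps above, while the surrounding Gemfile layout is an assumption:

```ruby
# Gemfile fragment for the fluentd image build (sketch)
gem "ruby-kafka", "~> 1.5"
gem "fluent-plugin-kafka", "0.19.2"
```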

Reference tickets: 


" + }, + { + "title": "Kafka clients upgrade procedure", + "pageID": "401611855", + "pageLink": "/display/GMDM/Kafka+clients+upgrade+procedure", + "content": "

Introduction

There are two tools that we need to take into consideration when upgrading Kafka clients; both are managed by Confluent Inc.:



Prerequisite

Before proceeding with upgrade verify checklist:



Generic procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade Steps

cp-kcat:

  1. Change image tag in mdm-hub-inbound-services/helm/kafka/kcat/docker/Dockerfile.
  2. Build and deploy changes.
  3. Verify if container is working correctly.
  4. Verify if all wrapper scripts included in mdm-hub-inbound-services/helm/kafka/kcat/docker/bin are running correctly.

cp-kafka:

  1. Change image tag in mdm-hub-inbound-services/helm/kafka/kafka-client/docker/Dockerfile.
  2. Build and deploy changes.
  3. Verify if container is working correctly.
  4. Verify if all wrapper scripts included in mdm-hub-inbound-services/helm/kafka/kafka-client/docker/bin are running correctly.
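The image tag change in step 1 is a one-line Dockerfile edit; the FROM path below mirrors the Artifactory prefix used in the past upgrade documented further down and is otherwise illustrative (the rest of the Dockerfile is omitted):

```dockerfile
# mdm-hub-inbound-services/helm/kafka/kafka-client/docker/Dockerfile (sketch)
FROM artifactory.COMPANY.com/mdmhub-docker-dev/confluentinc/cp-kafka:7.5.2
```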



Past upgrades

Upgrade cp-kcat 7.3.0 → 7.5.2 and cp-kafka 6.1.0 → 7.5.2

Description:

This update required updating both cp-kcat and cp-kafka to version 7.5.2 to eliminate the CVE-2023-4911 vulnerability.

Procedure:

  1. Pushed base images for updated components to COMPANY artifactory:
    1. confluentinc/cp-kcat:7.5.2 →  artifactory.COMPANY.com/mdmhub-docker-dev/mdmtools/confluentinc/cp-kcat:7.5.2
    2. confluentinc/cp-kafka:7.5.2 → artifactory.COMPANY.com/mdmhub-docker-dev/confluentinc/cp-kafka:7.5.2
  2. Changed images versions in Dockerfiles:
    1. cp-kcat 7.3.0 → 7.5.2
    2. cp-kafka 6.1.0 → 7.5.2
  3. Built changes and deployed on SBX environment.
  4. Verified that both containers started successfully.
  5. Executed into each container and tested if all wrapper scripts present at /opt/app/bin are running and returning expected results.
  6. Deployed changes to other environments.

Reference tickets:

Reference PR's:

" + }, + { + "title": "Kafka upgrade procedure", + "pageID": "401611803", + "pageLink": "/display/GMDM/Kafka+upgrade+procedure", + "content": "

Introduction

Kafka used in MDM is installed, configured and upgraded using Strimzi Kafka Operator


Prerequisite

Before upgrade verify checklist:

  1. There must be no critical errors for the environment Alerts Monitoring
  2. Kafka Cluster Overview must show 0 for
    1. Under-Replicated Partitions
    2. Under-Min-ISR Partitions
    3. Offline Partitions
    4. Unclean Leader Election
    5. Preferred Replica Imbalance >0 is not a blocker, but a high number may indicate an issue with Kafka performance.



Generic procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Verify if Strimzi Kafka Operator supports Kafka version you want to install (Supported versions - https://strimzi.io/downloads/)
    1. if not, upgrade Strimzi chart first
  2. Change Kafka version in environment configuration
  3. Update this confluence page


Past upgrades

Upgrade 3.6.1 → 3.7.0 and ZK to KRaft migration

Description

This upgrade was part of the MR-8004 Epic.

Procedure

  1. Upgrade the Strimzi operator to a version supporting Kafka 3.7.0
    1. Add the Strimzi Helm repo and find the newest Strimzi chart and app version

      \n
      helm repo add strimzi https://strimzi.io/charts\nhelm search repo strimzi/strimzi-kafka-operator
      \n
    2. In helm/operators/src/main/helm/Chart.yaml uncomment Strimzi repository and change version number
    3. Update dependencies

      \n
      cd helm/operators/src/main/helm\nhelm dependency update
      \n
    4. Comment repository line back in Chart.yaml
    5. Commit only the updated charts/strimzi-kafka-operator-helm-*.tgz and Chart.yaml files
  2. Upgrade default Kafka to 3.7.0 in mdm-hub-inbound-services
  3. Upgrade Kafka per environment
    1. Deploy updated operators with the new Strimzi
    2. Update cluster-env configuration (backend namespace)
    3. Deploy updated backend
    4. Ensure cluster is in a running state

Reference tickets

Reference PRs

Upgrade 3.5.1 → 3.6.1

Description

This upgrade was part of the MR-8004 Epic.

Procedure

  1. Upgrade Strimzi operator to the version supporting Kafka 3.6.1
    1. Add the Strimzi Helm repo and find the newest Strimzi chart and app version

      \n
      helm repo add strimzi https://strimzi.io/charts\nhelm search repo strimzi/strimzi-kafka-operator
      \n
    2. In helm/operators/src/main/helm/Chart.yaml uncomment Strimzi repository and change version number
    3. Update dependencies

      \n
      cd helm/operators/src/main/helm\nhelm dependency update
      \n
    4. Comment repository line back in Chart.yaml
    5. Commit only the updated charts/strimzi-kafka-operator-helm-*.tgz and Chart.yaml files
  2. Upgrade default Kafka to 3.6.1 in mdm-hub-inbound-services
    1. change Kafka config and wait for the operator to apply changes:
      1. remove inter.broker.protocol.version: "3.5"
      2. remove log.message.format.version: "3.5"
      3. set kafka.version: 3.6.1
  3. Upgrade Kafka per environment
    1. Deploy updated operators with the new Strimzi
    2. Update cluster-env configuration (backend namespace)
    3. Deploy updated backend
    4. Ensure cluster is in a running state
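The config change in step 2a maps onto the Strimzi Kafka custom resource roughly as follows; the resource name and the extra config key are illustrative assumptions, not taken from the repo:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: mdm-kafka   # illustrative name
spec:
  kafka:
    version: 3.6.1   # was 3.5.1
    config:
      # removed during the upgrade:
      # inter.broker.protocol.version: "3.5"
      # log.message.format.version: "3.5"
      default.replication.factor: 3   # other settings stay as-is (illustrative)
```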

Reference tickets

Reference PRs

" + }, + { + "title": "Kong upgrade procedure", + "pageID": "401611825", + "pageLink": "/display/GMDM/Kong+upgrade+procedure", + "content": "

Introduction

Kong used in MDM HUB is maintained by Kong/kong.



Prerequisite

  1. Verify the changelog for changes that could alter behaviour/usage in the new version and plan configuration adjustments to make it work correctly.
  2. Ensure base images are mirrored to COMPANY artifactory.


Generic Procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade Steps

  1. Change image tag to updated version in mdm-hub-env-config/docker/kong3/Dockerfile
  2. Build and push docker image based on updated Dockerfile.
  3. Change the tag of the kong image in mdm-inbound-services/helm/kong/src/main/helm/values.yaml to the one that was built in Step 2.
  4. Change the tag of the kong image in mdm-cluster-env/helm/amer/sandbox/namespaces/kong/values.yaml to the one that was built in Step 2.
  5. Build changes from Step 3 and deploy with configuration added in Step 4.
  6. Verify update:
    1. Check if component started.
    2. Check if API requests are accepted and return correct responses
    3. Check if kong-mdm-external-oauth-plugin works properly (try OAuth authorization and then some API calls to verify it)
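Steps 3-4 amount to bumping one image tag in each values.yaml; a sketch, where the key names and repository path are assumptions about the chart's layout (the tag 951 is the build number from the 3.4.2 upgrade documented below):

```yaml
# mdm-cluster-env/helm/amer/sandbox/namespaces/kong/values.yaml (sketch)
kong:
  image:
    repository: artifactory.COMPANY.com/mdmhub-docker-dev/kong3   # assumed path
    tag: "951"   # build number produced in Step 2
```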




Past upgrades

Upgrade Kong 3.2.2 → 3.4.2

Description:

This update required upgrading Kong to version 3.4.2 to fix the CVE-2023-4911 vulnerability on NPROD and PROD.

Procedure:

  1. Changed image tag to 3.4.2 in mdm-hub-env-config/docker/kong3/Dockerfile
  2. Built and pushed docker image to artifactory.
  3. Changed the tag of the kong image in mdm-inbound-services/helm/kong/src/main/helm/values.yaml to the one that was built in Step 2 (951).
  4. Changed the tag of the kong image in mdm-cluster-env/helm/{tenant}/{nprod|prod}/namespaces/kong/values.yaml to the one that was built in Step 2 (951).
  5. Built changes from Step 3 and deployed them with the configuration added in Step 4.
  6. Verified update:
    1. Component started.
    2. API requests were accepted and returned correct responses
    3. kong-mdm-external-oauth-plugin worked properly (checked OAuth and some API requests)

Reference Tickets:

[MR-7599] Update kong to 3.4.2

Reference PR's:

[MR-7599] Updated kong to 3.4.2

[MR-7599] Updated kong to 3.4.2



" + }, + { + "title": "Mongo upgrade procedure", + "pageID": "401611849", + "pageLink": "/display/GMDM/Mongo+upgrade+procedure", + "content": "

Introduction:

Mongo used in MDM is managed by mongodb-kubernetes-operator. When updating Mongo, all components must be considered at the same time.

The Mongo operator brings additional images to orchestrate and manage the Mongo cluster.


Prerequisite

Before migration verify checklist:



Generic procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Verify if the MongoDB Kubernetes operator documentation provides specific instructions for the planned upgrade
  2. Upgrade Mongo Operator
    1. Update cluster-env configuration (operators namespace)
    2. Deploy new Operator
    3. Ensure the cluster is in a running state
  3. Upgrade Mongo
    1. Update cluster-env configuration (backend namespace)
    2. Deploy updated backend
      NOTE: steps a and b can be executed multiple times (first we upgrade the mongo images, then we update the featureCompatibilityVersion parameter)
    3. Ensure the cluster is in a running state
  4. Update this confluence page


Past upgrades

Upgrade 4.2.6 → 6.0.9

Description:

This upgrade required multiple intermediate upgrades without upgrading the Mongo Kubernetes Operator.

Procedure:

    1. Upgrade image 4.2.6 → 4.4.24 by updating cluster-env configuration (backend namespace)
    2. Deploy updated backend
    3. Ensure the cluster is in a running state
    4. Upgrade featureCompatibilityVersion to 4.4 by updating cluster-env configuration (backend namespace)
    5. Deploy updated backend
    6. Ensure the cluster is in a running state
    7. Upgrade image 4.4.24 → 5.0.20 by updating cluster-env configuration (backend namespace)
    8. Deploy updated backend
    9. Ensure the cluster is in a running state
    10. Upgrade featureCompatibilityVersion to 5.0 by updating cluster-env configuration (backend namespace)
    11. Deploy updated backend
    12. Ensure the cluster is in a running state
    13. Upgrade image 5.0.20 → 6.0.9 by updating cluster-env configuration (backend namespace)
    14. Deploy updated backend
    15. Ensure the cluster is in a running state
    16. Upgrade featureCompatibilityVersion to 6.0 by updating cluster-env configuration (backend namespace)
    17. Deploy updated backend
    18. Ensure the cluster is in a running state
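Each image/featureCompatibilityVersion pair in the steps above corresponds to two fields on the MongoDBCommunity resource; a sketch for the final hop, where the resource name and everything outside the two version fields are illustrative:

```yaml
# Sketch of the MongoDBCommunity resource for the final 6.0.9 hop
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mdm-mongodb   # illustrative name
spec:
  version: "6.0.9"                    # image upgrade (step 13)
  featureCompatibilityVersion: "6.0"  # applied in a second deploy (step 16)
```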

Reference tickets: 

Reference PRs:

Upgrade Operator 0.7.3 → 0.8.2 

Description:

This upgrade was required to enable the mongo horizon feature. The previous version of the operator was unstable and sometimes failed to complete reconciliation of the mongo cluster.
Mongo itself was not updated in this upgrade.

Procedure:

    1. Update cluster-env configuration (operators namespace)
    2. Deploy new Operator
    3. Ensure the cluster is in a running state

Reference tickets: 

Reference PRs:

Upgrade 6.0.9 → 6.0.11

Description:

This upgrade required only upgrading the mongo image. At the time there was no newer version of the mongodb Kubernetes operator.

Procedure:

    1. Update cluster-env configuration (backend namespace)
    2. Deploy updated backend
    3. Ensure the cluster is in a running state

Reference tickets: 

Reference PRs:

Upgrade 6.0.11 → 6.0.21

Description

This was a planned periodic upgrade. During this upgrade, the Kubernetes mongo operator was also upgraded from 0.8.2 to 0.12.0.

To perform this upgrade, a change was needed in the MongoDBCommunity Helm template. We were using the users configuration in the wrong way: the uniqueness constraint on the scramCredentialsSecretName field was violated.

Procedure:

Reference tickets

Reference PRs



MongoDBCommunity
" + }, + { + "title": "Monstache upgrade procedure", + "pageID": "401611821", + "pageLink": "/display/GMDM/Monstache+upgrade+procedure", + "content": "

Introduction:

Monstache used in MDM is installed using the official Monstache installation procedure provided by Ryan Wynn.


Prerequisite

Before upgrade verify checklist:



Generic procedure

The procedure assumes that the upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Upgrade Monstache Docker images
  2. Update this confluence page

Past upgrades

Upgrade 6.7.0 → 6.7.17

Description:

This was the only Monstache upgrade so far.

Procedure:

  1. Upgrade Monstache docker image
    1. Pull the newest available monstache docker image from Docker Hub and push it to artifactory.
    2. Change the monstache version tag in the inbound-services repo.
  2. Update cluster-env configuration (envs and backend namespaces)
    1. Change Docker image tags to the ones uploaded in the previous step
  3. Deploy updated backend with Jenkins job
  4. Ensure monstache is working fine; check the logs in the monstache Pod logs dir.

Reference tickets: 


Upgrade 6.7.17 → 6.7.21

Description:

Upgrade Monstache docker image to version 6.7.21

Procedure:

  1. Upgrade Monstache docker image
    1. Pull the newest available monstache docker image from Docker Hub and push it to artifactory.
    2. Change the monstache version tag in the inbound-services repo.
  2. Update cluster-env configuration (envs and backend namespaces)
    1. Change Docker image tags to the ones uploaded in the previous step
  3. Deploy updated backend with Jenkins job
  4. Ensure monstache is working fine; check the logs in the monstache Pod logs dir. PASSED

Reference tickets: 



" + }, + { + "title": "Prometheus upgrade procedure", + "pageID": "521705242", + "pageLink": "/display/GMDM/Prometheus+upgrade+procedure", + "content": "

Monitoring host

Introduction

Official Prometheus site: https://prometheus.io/

To deploy Prometheus we use official docker image: https://hub.docker.com/r/prom/prometheus/

Prerequisites

  1. Verify the CHANGELOG for changes that could alter behaviour/usage in the new version and plan configuration adjustments to make it work correctly.
  2. Verify that other monitoring components are in versions compatible with the version to which Prometheus is upgraded. List of components to check:
    1. Thanos
    2. Telegraf
    3. SQS Exporter
    4. S3 Exporter
    5. Node Exporter
    6. Karma
    7. Grafana
    8. DNS Exporter
    9. cAdvisor
    10. Blackbox Exporter
    11. Alertmanager
  3. Ensure base images are mirrored to COMPANY artifactory.

Generic Procedure

Upgrade steps

  1. Apply configuration changes in mdm-hub-cluster-env:
    1. Change prometheus image tag to updated version in mdm-hub-cluster-env/ansible/roles/install_monitoring_prometheus/defaults/main.yml
    2. Apply other changes to configuration if necessary (Prerequisites step 1)
    3. Upgrade dependant monitoring components if necessary (Prerequisites step 2)
  2. Install monitoring stack using ansible-playbook:
    ansible-playbook install_monitoring_stack.yml -i inventory/monitoring/inventory --vault-password-file=$VAULT_PASSWORD_FILE\n
  3. Verify installation:
    1.  Check if monitoring components are up and running
    2. Check logs
    3. Check metrics and dashboards
  4. Fix all issues
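The tag change in step 1a is a variable edit in the role defaults; a sketch, where the variable names are assumptions and not the repo's actual keys (the version values come from the past upgrade documented below):

```yaml
# ansible/roles/install_monitoring_prometheus/defaults/main.yml (sketch)
prometheus_image_tag: v2.53.4
thanos_image_tag: v0.37.2   # must stay compatible with the Prometheus version
```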

Past Upgrades

Upgrade monitoring host Prometheus v2.30.3 → v2.53.4

Description:

This upgrade was a big jump in Prometheus version; therefore Thanos also had to be updated from main-2023-11-03-7e879c6 to v0.37.2 to maintain compatibility between those components. Some additional configuration adjustments had to be made on the Thanos side during this upgrade.

Procedure:

  1. Checked prerequisites
    1. Verified that no breaking changes were made in Prometheus that would require configuration adjustments on our side.
    2. Verified that alongside Prometheus, Thanos had to be updated to v0.37.2 to keep compatibility
    3. Pushed Prometheus v2.53.4 and Thanos v0.37.2 to COMPANY artifactory.
  2. Changed Prometheus tag to v2.53.4 and Thanos tag to v0.37.2 in mdm-hub-cluster-env/ansible/roles/install_monitoring_prometheus/defaults/main.yml
  3. Installed monitoring stack using ansible-playbook
  4. Verified installation - noticed issues with Thanos Query, which couldn't connect to Thanos Sidecar and Thanos Store
  5. Made adjustments in Thanos configuration to fix those issues (See reference PR)
  6. Installed monitoring stack using ansible-playbook again
  7. Verified installation - all components, dashboards and metrics were working correctly
  8. Upgrade finished successfully

Reference Tickets:

Reference PR's:


K8s cluster

Introduction

To deploy Prometheus on k8s clusters we use the following chart: kube-prometheus-stack.

It contains definition of Prometheus and related crd's.


Prerequisites

Check which chart version ships the Prometheus version to which you want to upgrade. Verify the Prometheus CHANGELOG and the kube-prometheus-stack chart templates and default values for changes that could alter behaviour/usage in the new version, and plan configuration adjustments to make it work correctly.


Generic Procedure

Upgrade Steps

  1. Download and unpack kube-prometheus-stack-<new_version>
  2. Replace CRD's:
    cd kube-prometheus-stack/charts/crds/crds\nkubectl -n monitoring replace -f "*.yaml"
  3. Create and build PR with helm chart upgrade
    1. update version in mdm-hub-inbound-services/helm/monitoring/src/main/helm/Chart.yaml
    2. update package version replacing charts/kube-prometheus-stack-<old_version>.tgz with charts/kube-prometheus-stack-<new_version>.tgz
  4. Deploy PR to SBX cluster
  5. Verify installation and merge the PR
    1. Get the number of metrics and alerts from Prometheus and compare them with the number before upgrade
    2. Verify if Grafana dashboards are working correctly
  6. Proceed to NPROD/PROD deployments (Verify installation after each of them)
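The chart bump in step 3 touches the dependency pin in Chart.yaml; a sketch, where the field layout is an assumption about the repo (the version is the one from the past upgrade documented below):

```yaml
# mdm-hub-inbound-services/helm/monitoring/src/main/helm/Chart.yaml (sketch)
dependencies:
  - name: kube-prometheus-stack
    version: "61.7.2"
    repository: "https://prometheus-community.github.io/helm-charts"
```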


Past Upgrades

Upgrade K8s cluster Prometheus v2.39.1 → v2.53.1

Description:

To perform this upgrade it was necessary to upgrade the used helm chart (kube-prometheus-stack) from v41.7.4 (containing Prometheus v2.39.1) to v61.7.2 (containing Prometheus v2.53.1)

Procedure:

  1. Checked prerequisites
    1. Verified that no breaking changes were made in Prometheus that would require configuration adjustments on our side.
    2. Verified that kube-prometheus-stack v61.7.2 contained Prometheus v2.53.1
  2. Downloaded and unpacked kube-prometheus-stack-61.7.2.tgz
  3. Replaced CRD's
  4. Created PR with upgraded chart version and replaced old package with kube-prometheus-stack-61.7.2.tgz (See reference PR)
  5. Deployed changes to SBX from PR
  6. Verified Installation (SBX)
    1. No lost metrics
    2. All alerts correct
    3. Grafana dashboards working correctly
  7. Merged PR

Reference Tickets:

Reference PR's:

" + }, + { + "title": "Infrastructure", + "pageID": "302705566", + "pageLink": "/display/GMDM/Infrastructure", + "content": "" + }, + { + "title": "How to access AWS Console", + "pageID": "310939854", + "pageLink": "/display/GMDM/How+to+access+AWS+Console", + "content": "

Add new user access to AWS Account

Request access to the correct Security Group in the Request Manager

https://requestmanager1.COMPANY.com/Group/Default.aspx

E.g., for accessing the 432817204314 Account using the WBS-EUW1-GBICC-ALLENV-RO-SSO role, use the

WBS-EUW1-GBICC-ALLENV-RO-SSO_432817204314_PFE-AWS-PROD Security Group

AWS Console

Always use this AWS Console address: http://awsprodv2.COMPANY.com/ and select the Account you want to use there

\"\"

" + }, + { + "title": "How to login to hosts with SSH", + "pageID": "310940209", + "pageLink": "/display/GMDM/How+to+login+to+hosts+with+SSH", + "content": "
  1. Generate an SSH key pair - private and public
  2. Copy the public key to the ~/.ssh/authorized_keys file on the host and account you want to use
  3. Use the ssh command to log in, e.g. ssh ec2-user@euw1z2dl115.COMPANY.com
  4. List the content of the ~/.ssh/authorized_keys file to check which keys are used
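Steps 1-2 can be sketched as below; the key path and comment are examples, and the authorized_keys file is shown as a local stand-in (on a real host this is ~/.ssh/authorized_keys for the target account, typically populated with ssh-copy-id):

```shell
# 1. Generate an SSH key pair (example path; empty passphrase for brevity)
ssh-keygen -t ed25519 -f /tmp/mdm_demo_key -N "" -C "mdm-hub-access"

# 2. Append the public key to the target account's authorized_keys
#    (local stand-in file used here for illustration)
mkdir -p /tmp/demo_ssh
cat /tmp/mdm_demo_key.pub >> /tmp/demo_ssh/authorized_keys
chmod 600 /tmp/demo_ssh/authorized_keys
```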
" + }, + { + "title": "How to restart the EC2 instance", + "pageID": "310940306", + "pageLink": "/display/GMDM/How+to+restart+the+EC2+instance", + "content": "
  1. Login to AWS Console (How to access AWS Console)

  2. Select EC2 Service from the search box
  3. In the navigation pane, choose Instances.

  4. Select the instance and choose Instance state, Reboot instance.
    Alternatively, select the instance and choose Actions, Manage instance state. In the screen that opens, choose Reboot, and then Change state.

  5. Choose Reboot when prompted for confirmation
    \"\"

More: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html

" + }, + { + "title": "HUB-UI: Timeout issue after authorization", + "pageID": "337840086", + "pageLink": "/display/GMDM/HUB-UI%3A+Timeout+issue+after+authorization", + "content": "

Issue description:

When accessing the HUB-UI site, a timeout may occur after successfully authorizing via SSO.

Solution:

Check if you have valid COMPANY certificates installed in your browser. You can do that by clicking the padlock icon in the browser address bar and checking if the connection is safe:

\"\"

If not, you have to install certificates:

  1. Install RootCA-G2.cer:
    1. Double-click on certificate
    2. Choose Install Certificate
    3. Local Machine
    4. Choose "Place all certificates in the following store" and select store: "Trusted Root Certification Authorities"
    5. Click Finish to complete the installation process
  2. Install PBACA-G2.cer:
    1. Double-click on certificate
    2. Choose Install Certificate
    3. Local Machine
    4. Choose "Automatically select the certificate store based on type of certificate"
    5. Click Finish to complete the installation process
  3. Reboot computer
  4. Verify by accessing HUB-UI
" + }, + { + "title": "Key Auth Not Working on Hosts - Fix", + "pageID": "172294447", + "pageLink": "/display/GMDM/Key+Auth+Not+Working+on+Hosts+-+Fix", + "content": "

In case you are unable to use SSH authentication via RSA key, the cause might be a wrong SELinux context on the /home/{user}/.ssh directory.

Check /var/log/secure:

\"\"

The "maximum authentication attempts exceeded" error might indicate that his is the case.

Check the /home/{user}/.ssh directory with the "-Z" option:

$ ls -laZ /home/{user}/.ssh

\"\"

The screenshot above shows an example of a wrong context. Fix it by:

$ chcon -R system_u:object_r:usr_t:s0 /home/{user}/.ssh


Verify the context has changed:

\"\"


" + }, + { + "title": "Kubernetes Operations", + "pageID": "228923667", + "pageLink": "/display/GMDM/Kubernetes+Operations", + "content": "" + }, + { + "title": "Kubernetes upgrades", + "pageID": "337842009", + "pageLink": "/display/GMDM/Kubernetes+upgrades", + "content": "

Introduction

Kubernetes clusters provided by PDKS are upgraded quarterly. To make sure it doesn't break MDM Hub, we've established the process described in this article.

K8s upgrade process in the PDKS platform

\"\"

Verify MDM Hub's compatibility with the new K8s version

\"\"

kube-no-trouble

Upgrades are done 1 version up, i.e. 1.23 → 1.24, so we need to make sure we're not using any APIs removed in the upgraded version.

To find all objects using deprecated APIs, run kube-no-trouble:

\"\"

If there are "Deprecated APIs" listed for the next K8s version, MDM Hub's team must provide upgrades.

In the example, an upgrade from 1.23 to 1.24 doesn't require any work.
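A hedged sketch of the scan invocation (assumes the kubent binary is on PATH against the current kubeconfig; flag names may differ between kubent versions) - the helper only composes the command:

```shell
# Compose a kubent scan command for a given target K8s version.
# --exit-error makes the scan fail the pipeline when deprecated APIs are found.
kubent_cmd() {
  echo "kubent --target-version $1 --exit-error"
}
kubent_cmd 1.24.0
```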

Upgrade sandbox/non-prod/prod clusters

\"\"

PDKS does a rolling upgrade of all nodes, starting with Control Plane, then dynamic (or "flex") nodes, and then the static nodes.

Assist and verify

\"\"

MDM Hub's team support during prod upgrades

MDM Hub's team presence and assistance are required during prod upgrades. During the agreed upgrade window, one designated person must actively monitor the upgrade process and react if issues are found.

" + }, + { + "title": "MongoDB backup and restore", + "pageID": "322548514", + "pageLink": "/display/GMDM/MongoDB+backup+and+restore", + "content": "

Introduction

Percona Backup for MongoDB

We are using Percona Backup for MongoDB (PBM) - an open-source and distributed solution for consistent backups and restore of production MongoDB clusters. 

\"\"

PBM functions used in MDM Hub are marked in green.

How are backups done in MDM Hub?

Architecture

The solution was built in 4 parts

Code

Configuration

General rules 

Details

Config is stored per environment in the mdm-hub-cluster-env project at the {env}/prod/namespaces/{env}-backend/values.yaml path, under the mongo.pbm key.

\"\"

Where are backups stored?

All backups are stored in separate S3 buckets.

Backup

How to do a manual full backup?

Run a pbm backup --wait command in a mongodb-pbm-client pod

\"\"

How to do an incremental backup?

You don't have to do anything: incremental backups run automatically. If you need a fresh point-in-time backup, wait up to 10 minutes for the next scheduled one.

Restore

How to restore DB when it's empty - Disaster Recovery (DR) scenario

Percona configuration is stored in the database itself. If the database is completely removed (EKS cluster, PVCs, or all data from DB), the Percona agent won't be able to restore the DB from backup.

You need at least an empty MongoDB and PBM configuration restored.

  1. Deploy MDM Hub Backend Using Jenkins Job
    1. An empty database will be created
    2. Percona will be configured
    3. pbm-agent pod will be created
  2. Choose between preferred restore ways
    1. full backup
    2. incremental backup

How to restore DB from a full backup

  1. Shut down all MongoDB clients - MDM Hub components
  2. Disable PITR
    $ pbm config --set pitr.enabled=false
  3. Run pbm list to get a named list of backups
    \"\"
  4. Run pbm restore <backup_name>
  5. Run pbm status to check the current restore status
  6. After a successful restore, enable PITR back
    $ pbm config --set pitr.enabled=true
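The full-restore steps above can be sketched as one script. This only composes the pbm command sequence (the backup name is a placeholder taken from `pbm list` output; remember step 1, shutting down MongoDB clients, still has to happen first):

```shell
# Compose the pbm full-restore command sequence for a named backup.
full_restore_cmds() {
  local backup="$1"
  echo "pbm config --set pitr.enabled=false"   # disable PITR first
  echo "pbm restore ${backup}"                 # restore the named full backup
  echo "pbm status"                            # watch restore progress
  echo "pbm config --set pitr.enabled=true"    # re-enable PITR when done
}
full_restore_cmds 2023-01-01T00:00:00Z
```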

How to restore DB from an incremental (Point-in-time Recovery)

  1. Shut down all MongoDB clients - MDM Hub components
  2. Disable PITR
    $ pbm config --set pitr.enabled=false
  3. Run pbm list to get an available time range for the PITR restore
    \"\"
  4. Run pbm restore --time=2006-01-02T15:04:05
  5. Run pbm status to check the current restore status
  6. After a successful restore, enable PITR back
    $ pbm config --set pitr.enabled=true
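The --time value must be a valid YYYY-MM-DDTHH:MM:SS timestamp inside the range shown by `pbm list`. A hedged helper that validates the timestamp (GNU date) before composing the restore command:

```shell
# Validate a PITR timestamp, then compose the pbm restore command.
pitr_restore_cmd() {
  local ts="$1"
  if date -d "$ts" +%s >/dev/null 2>&1; then
    echo "pbm restore --time=${ts}"
  else
    echo "invalid timestamp: ${ts}" >&2
    return 1
  fi
}
pitr_restore_cmd 2006-01-02T15:04:05
```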
" + }, + { + "title": "Restart service", + "pageID": "228923671", + "pageLink": "/display/GMDM/Restart+service", + "content": "

To restart an MDMHUB service you need access to the Kubernetes cluster:

  1. Find the pod name that you want to restart: kubectl get pods --namespace {{mdmhub env namespace}}

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-dev

NAME                                                 READY   STATUS    RESTARTS   AGE

mdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22h

mdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22h

mdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22h

mdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22h

mdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9h

mdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9h

mdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv   2/2     Running   0          9h

mdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h

  2. Delete the pod that you selected: kubectl delete pod {{selected pod name}} --namespace {{mdmhub env namespace}}

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl delete pod mdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv --namespace amer-dev

pod "mdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv" deleted

  3. After the above operation you will see the newly created pod:

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-dev

NAME                                                 READY   STATUS    RESTARTS   AGE

mdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22h

mdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22h

mdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22h

mdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22h

mdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9h

mdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9h

mdmhub-mdm-reconciliation-service-66b65c7bf8-ns88k   2/2     Running   0          2m32s

mdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h

This is the restarted instance.
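Instead of deleting pods one by one, `kubectl rollout restart` recreates all pods of a deployment in a rolling fashion. A hedged sketch (namespace and deployment names are the examples from this page) - the helper only composes the command:

```shell
# Compose a rolling-restart command for a deployment in a namespace.
rollout_restart_cmd() {
  echo "kubectl rollout restart deployment/$2 --namespace $1"
}
rollout_restart_cmd amer-dev mdmhub-mdm-reconciliation-service
```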


" + }, + { + "title": "Scaling services", + "pageID": "228923952", + "pageLink": "/display/GMDM/Scaling+services", + "content": "

This action requires access to the runtime configuration repository. You have to modify the deployment configuration for the selected component; let's assume it is mdm-reconciliation-service:

  1. Modify values.yaml for MDMHUB environment {{region}}/{{cluster class}}/namespaces/{{mdmhub env name}}/values.yaml:

components:
  registry: artifactory.COMPANY.com/mdmhub-docker-dev
  deployments:
    mdm_reconciliation_service:
      enabled: true
      replicas: 2
      hostAliases: *hostAliases
      resources:
        component:
          requests:
            memory: "2560Mi"
            cpu: "200m"
          limits:
            memory: "3840Mi"
            cpu: "4000m"
      logging: *logging

Change the value of the "replicas" parameter. If it doesn't exist, add it to the component deployment configuration.

2. Commit and push the changes.

3. Go to the Jenkins job responsible for deploying changes to the selected environment and run it.

4. After deployment, check that the configuration has been applied correctly: kubectl get pods --namespace {{mdmhub env name}}:

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-dev

NAME                                                 READY   STATUS    RESTARTS   AGE

mdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22h

mdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22h

mdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22h

mdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22h

mdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9h

mdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9h

mdmhub-mdm-reconciliation-service-66b65c7bf8-ns88k   2/2     Running   0          2m32s

mdmhub-mdm-reconciliation-service-66b68c7bf8-ndksk   2/2     Running   0          2m32s

mdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h

You will see the desired number of pods.
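For a quick temporary change you can also scale directly with kubectl; note that the next Jenkins deployment will revert the replica count to whatever values.yaml says, so the repository change above remains the durable route. A hedged helper that only composes the command (names are the examples from this page):

```shell
# Compose a kubectl scale command: namespace, deployment, replica count.
scale_cmd() {
  echo "kubectl scale deployment/$2 --replicas=$3 --namespace $1"
}
scale_cmd amer-dev mdmhub-mdm-reconciliation-service 2
```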

" + }, + { + "title": "Stop/Start service", + "pageID": "228923678", + "pageLink": "/pages/viewpage.action?pageId=228923678", + "content": "

This action requires access to the runtime configuration repository. Starting/stopping a service means enabling/disabling its component deployment. You have to modify the deployment configuration for the selected component; let's assume it is mdm-reconciliation-service:

  1. Modify values.yaml for MDMHUB environment {{region}}/{{cluster class}}/namespaces/{{mdmhub env name}}/values.yaml:

components:
  registry: artifactory.COMPANY.com/mdmhub-docker-dev
  deployments:
    mdm_reconciliation_service:
      enabled: true
      hostAliases: *hostAliases
      resources:
        component:
          requests:
            memory: "2560Mi"
            cpu: "200m"
          limits:
            memory: "3840Mi"
            cpu: "4000m"
      logging: *logging

Change the enabled flag to false.

2. Commit and push the changes.

3. Go to the Jenkins job responsible for deploying changes to the selected environment and run it.

4. After deployment, check that the configuration has been applied correctly: kubectl get pods --namespace {{mdmhub env name}}

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-dev

NAME                                                 READY   STATUS    RESTARTS   AGE

mdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22h

mdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22h

mdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22h

mdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22h

mdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9h

mdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9h

mdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h

There should not be any active pods of the disabled component.

To enable the service again, follow the same steps with the "enabled" flag set to true.

" + }, + { + "title": "Open Traffic from Outside COMPANY to MDM Hub", + "pageID": "250142861", + "pageLink": "/display/GMDM/Open+Traffic+from+Outside+COMPANY+to+MDM+Hub", + "content": "

EMEA NProd

AWS Account ID: 432817204314

VPC ID: vpc-004cb58768e3c8459

SecurityGroup: sg-04d4116a040a7e1da - MDMHub-kafka-and-api-proxy-external-nprod-sg

Proxy documentation: EMEA External proxy


EMEA Prod

AWS Account ID: 432817204314

VPC ID: vpc-004cb58768e3c8459

SecurityGroup: sg-06305fd9d3b0992a6 - MDMHub-kafka-and-api-proxy-external-prod-sg

Proxy documentation: EMEA External proxy


EXUS (GBL) Prod

AWS Account ID: 432817204314

VPC ID: vpc-004cb58768e3c8459

SecurityGroup: sg-0cd8ba02f6351f383 - Mdm-reltio-internet-traffic-SG


US

no whitelisting

" + }, + { + "title": "Replace S3 Keys", + "pageID": "187796851", + "pageLink": "/display/GMDM/Replace+S3+Keys", + "content": "

CREATE a ticket if there is an issue with keys (rotation required or keys expired)

REQUEST:

http://btondemand.COMPANY.com/getsupport#!/g71h1sgv0/0

QUEUE: GBL-BTI-IOD AWS FULL SUPPORT

Hi Team,
Our S3 access key expired - I am receiving: "The AWS Access Key Id you provided does not exist in our records."
KEY details:
BucketName User name Access key ID Secret access key
gblmdmhubnprodamrasp100762 SRVC-MDMGBLFT ●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

Could you please regenerate this S3 key?
Regards,
Mikolaj


BITBUCKET REPLACE:

inventory/<env>_gblus/group_vars/all/secret.yml

REPLACE and Post replace tasks:


REPLACE:
1. decrypt - group_vars/all/secret.yml
2. replace on non-prod and prod
3. encrypt and push


Post Replace TASK:
NON PROD

NEW nonprod <KEY> <SECRET>


REDEPLOY
1. Airflow:


All Airflow jobs - https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/ (take list from airflow_components variable)
- dev: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc
- qa: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc
- stage: merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc


2. FLEX connector to S3 DEV AND QA


- replace in kafka-connect-flex
:/app/kafka-connect-flex/<env>/config/s3-connector-config.json
:/app/kafka-connect-flex/<env>/config/s3-connector-config-update.json
Update on Main (check logs for errors and execute):
- curl -X GET http://localhost:8083/connectors/S3SinkConnector/config
- curl -X PUT -H "Content-Type: application/json" localhost:8083/connectors/S3SinkConnector/config -d @/etc/kafka/config/s3-connector-config-update.json
- curl -X POST http://localhost:8083/connectors/S3SinkConnector/tasks/0/restart
- curl -X POST http://localhost:8083/connectors/S3SinkConnector/restart
- curl -X GET http://localhost:8083/connectors/S3SinkConnector/status
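Before PUTting the updated config back, swap the old key for the new one in the update file. A hedged sketch (file path shortened to /tmp for illustration, key values are placeholders; aws.access.key.id is the S3 sink connector property name):

```shell
# Swap the old access key for the new one in the connector config file.
old='AKIAOLDKEY'; new='AKIANEWKEY'
printf '{"aws.access.key.id":"%s"}' "$old" > /tmp/s3-connector-config-update.json
sed -i "s/${old}/${new}/" /tmp/s3-connector-config-update.json
cat /tmp/s3-connector-config-update.json
```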

3. Snowflake:

--changeset warecp:LOV_DATA_STG runOnChange:true
create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/outbound/SNOWFLAKE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)

create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/outbound/SNOWFLAKE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)

create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/stage/outbound/SNOWFLAKE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)


--changeset morawm03:MERGE_TREE_DATA_STG runOnChange:true
create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/outbound/SNOWFLAKE_MERGE_TREE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')

create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/outbound/SNOWFLAKE_MERGE_TREE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')

create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/stage/outbound/SNOWFLAKE_MERGE_TREE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')

--changeset warecp:reconcilation_URL runOnChange:true
create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/inbound/hub/reconciliation/SNOWFLAKE/'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )

create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/inbound/hub/reconciliation/SNOWFLAKE/'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )

create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/stage/inbound/hub/reconciliation/SNOWFLAKE/'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )




PROD:

NEW prod <KEY> <SECRET>


REDEPLOY
1. Airflow:


All Airflow jobs - https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/job/deploy_mdmgw_airflow_services__prod_gblus/ (take list from airflow_components variable)
- prod: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc

               Manually replace connections and variables in http://amraelp00007847.COMPANY.com:9110/airflow/home for gblus prod DAGs


2. FLEX connector to S3


- replace in kafka-connect-flex (on Master only)
:/app/kafka-connect-flex/prod/config/s3-connector-config.json
:/app/kafka-connect-flex/prod/config/s3-connector-config-update.json
Update on Main (check logs for errors and execute):
- curl -X GET http://localhost:8083/connectors/S3SinkConnector/config
- curl -X PUT -H "Content-Type: application/json" localhost:8083/connectors/S3SinkConnector/config -d @/etc/kafka/config/s3-connector-config-update.json
- curl -X POST http://localhost:8083/connectors/S3SinkConnector/tasks/0/restart
- curl -X POST http://localhost:8083/connectors/S3SinkConnector/restart
- curl -X GET http://localhost:8083/connectors/S3SinkConnector/status


3. Snowflake:



--changeset warecp:LOV_DATA_STG runOnChange:true
create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/outbound/SNOWFLAKE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)


--changeset morawm03:MERGE_TREE_DATA_STG runOnChange:true
create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/outbound/SNOWFLAKE_MERGE_TREE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')


--changeset warecp:reconcilation_URL runOnChange:true
create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/inbound/hub/reconciliation/SNOWFLAKE/'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )


4. HOST:


- replace archiver-services
on 3 nodes:
:/app/archiver/.s3cfg
:/app/archiver/config/archiver.env




" + }, + { + "title": "Resize PV, LV, FS", + "pageID": "164470164", + "pageLink": "/display/GMDM/Resize+PV%2C+LV%2C+FS", + "content": "
\n
sudo pvresize /dev/nvme2n1\nsudo lvextend -L +<SIZE_TO_INCREASE>G /dev/mapper/docker-thinpool
\n

Extending the LVM using an additional disk.

\n
sudo pvcreate /dev/nvme3n1 \nsudo vgextend mdm_vg /dev/nvme3n1\nsudo lvm lvextend -l +100%FREE /dev/mdm_vg/data\nsudo xfs_growfs -d /dev/mapper/mdm_vg-data
\n
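After extending, it is worth confirming that the filesystem actually grew. A minimal check (mount point is an example; substitute the volume extended above):

```shell
# Show the current size line for the filesystem that was just grown.
df -h / | tail -1
```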


" + }, + { + "title": "Resolve Docker Issues After Instance Restart (Flex US)", + "pageID": "163927016", + "pageLink": "/pages/viewpage.action?pageId=163927016", + "content": "

After restarting one of the US FLEX instances, issues with service user mdmihpr/mdmihnpr may come up.

Resolve them using the following:

Change owner of the Docker socket

[root@amraelp00005781 run]# cd /var/run/
[root@amraelp00005781 run]# chown root:mdmihub docker.sock

Increase VM memory

If Elasticsearch is not starting:

[root@amraelp00005781 run]# sysctl -w vm.max_map_count=262144
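The sysctl change above is lost on reboot. A hedged sketch for persisting it (writes to /tmp here for illustration; on the host the conventional location is /etc/sysctl.d/99-elasticsearch.conf, followed by `sysctl --system`):

```shell
# Persist the vm.max_map_count setting across reboots.
conf_line='vm.max_map_count=262144'
echo "$conf_line" > /tmp/99-elasticsearch.conf
cat /tmp/99-elasticsearch.conf
```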

Reset offset on EFK topics

If there are no logs on Kibana, use the Kafka Client to reset offsets on efk topics using the "--to-datetime" option, pointing to 6 months prior.
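A hedged sketch of that offset reset (broker address, group, and topic names are placeholders; --to-datetime takes an ISO-8601 timestamp with milliseconds) - the helper only composes the command:

```shell
# Compose a consumer-group offset reset to a given datetime.
reset_cmd() {
  echo "kafka-consumer-groups.sh --bootstrap-server $1 --group $2 --topic $3 --reset-offsets --to-datetime $4 --execute"
}
reset_cmd localhost:9092 efk-consumer efk-logs 2024-01-01T00:00:00.000
```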

Prune the Docker

If there is a ThinPool Error coming up, use:

[root@amraelp00005781 run]# docker system prune -a
" + }, + { + "title": "Service User ●●●●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588321]", + "pageID": "194547472", + "pageLink": "/pages/viewpage.action?pageId=194547472", + "content": "

Log into the machine via another account with root access.

For service user mdm (GBL NPROD/PROD):

\n
$ chage -I -1 -m 0 -M 99999 -E -1 mdm
\n
" + }, + { + "title": "Jenkins", + "pageID": "250676213", + "pageLink": "/display/GMDM/Jenkins", + "content": "" + }, + { + "title": "Proxy on bitbucket-insightsnow.COMPANY.com (fix Hostname issue and timeouts)", + "pageID": "250147973", + "pageLink": "/pages/viewpage.action?pageId=250147973", + "content": "


On GBLUS DEV host amraelp00007335.COMPANY.com (●●●●●●●●●●●●), set up a service and route to proxy Bitbucket:


kong_services:
#----------------------DEV---------------------------
- create_or_update: False
vars:
name: "{{ kong_env }}-bitbucket-proxy"
url: "http://bitbucket-insightsnow.COMPANY.com/"
connect_timeout: 120000
write_timeout: 120000
read_timeout: 120000

kong_routes:
#----------------------DEV---------------------------
- create_or_update: False
vars:
name: "{{ kong_env }}-bitbucket-proxy-route"
service: "{{ kong_env }}-bitbucket-proxy"
paths: [ "/" ]
methods: [ "GET", "POST", "PATCH", "DELETE" ]


Then we can access Bitbucket through:

curl https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/repos?visibility=public

The change is currently deployed: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/dev_gblus/group_vars/kong_v1/kong_dev.yml


-----------------------------------------------------------------------------------------------------------------

Next, set up the nginx proxy to route port 80 to port 8443.

Go to ec2-user@gbinexuscd01:/opt/cd-env/bitbucket-proxy

RUN bitbucket-nginx:

dded05295c16        nginx:1.17.3                                                          "nginx -g 'daemon of…"   About an hour ago   Up 16 minutes           0.0.0.0:80->80/tcp                                            bitbucket-nginx

Config:


\n
http {\n    server {\n        listen              80;\n        server_name         gbinexuscd01;\n\n        location / {\n            rewrite ^\\/(.*) /$1 break;\n            proxy_pass  https://gbl-mdm-hub-us-nprod.COMPANY.com:8443;\n            resolver <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588839">●●●●●●●●●●</a>;\n        }\n    }\n}\n\nevents {}
\n


This config routes port 80 on the host to gbl-mdm-hub-us-nprod.COMPANY.com:8443 (the Bitbucket proxy).



Next, add to all Jenkins and Jenkins-Slaves the following entry in /etc/hosts:

docker exec -it -u root jenkins bash
docker exec -it -u root nexus_jenkins_slave2 bash
docker exec -it -u root nexus_jenkins_slave bash


vi /etc/hosts

add:
●●●●●●●●●●●●● bitbucket-insightsnow.COMPANY.com

where ●●●●●●●●●●●●● is the IP of bitbucket-nginx

To check, run:
docker inspect bitbucket-nginx
"Gateway": "192.168.128.1",



Then check on each Slave and Jenkins:
curl http://bitbucket-insightsnow.COMPANY.com/repos?visibility=public

You should receive an HTML page in response.





" + }, + { + "title": "Unable to Find Valid Certification Path to Requested Target (GBLUS)", + "pageID": "164470045", + "pageLink": "/pages/viewpage.action?pageId=164470045", + "content": "

The following issue is caused by the COMPANY certificates PBACA-G2.cer and RootCA-G2.cer missing from the Java cacerts file.


Issue:

06:41:54 2020-12-24 06:41:52.843  INFO   --- [       Thread-4] c.consol.citrus.report.LoggingReporter   :  
FAILURE: Caused by: ResourceAccessException: I/O error on POST request for "https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/apidev/hcp":
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target; nested exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException:
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

https://jenkins-gbicomcloud.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/project%252Ffletcher/151/console 


Solution:

Log in to:

mapr@gbinexuscd01 - ●●●●●●●●●●●●●

docker exec -it nexus_jenkins_slave bash

cd /etc/ssl/certs/java

touch PBACA-G2.cer   (then paste in the PBACA-G2 certificate content)
touch RootCA-G2.cer  (then paste in the RootCA-G2 certificate content)

keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYInter -file PBACA-G2.cer -storepass changeit
keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYRoot -file RootCA-G2.cer -storepass changeit
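To confirm the imports took effect, list each alias from the keystore. A hedged helper that only composes the verification commands (run inside the container, in the same directory as cacerts):

```shell
# Compose a keytool verification command for an imported alias.
verify_cmd() {
  echo "keytool -list -keystore cacerts -alias $1 -storepass changeit"
}
verify_cmd COMPANYInter
verify_cmd COMPANYRoot
```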

Next, repeat the same steps in the second slave: docker exec -it nexus_jenkins_slave2 bash


Permanent Solution. TODO:

add PBACA-G2.cer and RootCA-G2.cer to /etc/ssl/certs/java/cacerts in Dockerfile:


COPY certs/PBACA-G2.cer /etc/ssl/certs/java/PBACA-G2.cer
COPY certs/RootCA-G2.cer /etc/ssl/certs/java/RootCA-G2.cer
RUN cd /etc/ssl/certs/java && keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYInter -file PBACA-G2.cer -storepass changeit -noprompt
RUN cd /etc/ssl/certs/java && keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYRoot -file RootCA-G2.cer -storepass changeit -noprompt

fix - nexus_jenkins_slave2 and nexus_jenkins_slave
" + }, + { + "title": "Monitoring", + "pageID": "411343429", + "pageLink": "/display/GMDM/Monitoring", + "content": "" + }, + { + "title": "FLEX: Monitoring Batch Loads", + "pageID": "513737976", + "pageLink": "/display/GMDM/FLEX%3A+Monitoring+Batch+Loads", + "content": "

Opening The Dashboard

Use one of the links below:

Navigating The Dashboard

Use the selector in the upper right corner to change the time range (for example Last 24 hours or Last 7 days).

\"\"

The search bar allows searching for a specific file name.


\"\"


The dashboard is divided into 5 main sections:

  1. File by type - how many files of each input type have been loaded. File types are: SAP, DEA, HIN, FLEX_340B, IDENTIFIERS, ADDRESSES, FLEX_BULK.
  2. File load status count - breakdown of each file type and final status of records from that file
  3. File load count - depiction of loads through time
  4. File load summary - most important section, containing detailed information about each loaded file:
    • File - file type
    • Start time/End time - start and end of file processing. Important note: this applies only to parsing, preprocessing and mapping the records - those are later loaded into Reltio asynchronously
    • File name
    • Status - indicates that the file processing has finished correctly, without interruption or failures
    • Load time
    • Bad Records - records that could not be parsed or mapped, usually due to malformed input
    • Input Entities - number of records (lines) that the file contained
    • Processed Entities - number of individual profiles extracted from the file. This number may be lower than Input Entities, for example due to input model requiring aggregation of multiple lines (SAP), skipping unchanged records (DEA) etc.
    • Created - number of profiles that were identified as missing from MDM and have been passed to Reltio
    • Updated - number of profiles that were identified as changed since last loaded and have been passed to Reltio
    • Post Processing - Only for DEA - number of profiles that are present in MDM but were not present in the DEA file. In this case, the records will be deleted in MDM (a safety mechanism limits deletions to 22,000 profiles per single file)
    • Skipped Entities - number of profiles that were not updated in Reltio, because their data has not changed since the last load. This is detected using records' checksums, calculated for each record while processing the file. Checksums are stored in MDM Hub's cache and compared with the future records
    • Suspended Entities - Only for DEA - number of profiles that could have been deleted from MDM, but were not due to the 22,000 delete limit being exceeded
    • Count
  5. Response status load summary - final statuses of loading the records into Reltio. Records are loaded asynchronously and their statuses are being gradually updated in this section, after the file is present in the File load summary section
" + }, + { + "title": "Quality Gateway Alerts", + "pageID": "438317787", + "pageLink": "/display/GMDM/Quality+Gateway+Alerts", + "content": "

Quality Gateway is MDM Hub's publishing layer framework responsible for detecting Data Quality issues before publishing an event downstream (to Kafka consumers or Snowflake). You can find more details on the Quality Gateway in the documentation: Quality Gateway - Event Publishing Filter

There are 4 statuses that an event (entity/relationship) can receive after being processed by the Quality Gateway:

An AUTO_RESOLVED event means it was preceded by a BROKEN one, which signifies potential data or processing problems.

This is why we have implemented two alerts to track these statuses, which may be otherwise missed.

quality_gateway_auto_resolved_sum/quality_gateway_auto_resolved_event

Both alerts should be approached similarly, as it is expected that they always get triggered together and tell us about the same thing.

Pick an example from one of the quality_gateway_auto_resolved_event alerts and take the entity/relationship URI:

\"\"


Use Kibana's HUB Events dashboard to find all the recent events for this URI:

\"\"\"\"


If you find no events at first, try extending the time range (for example 7 days).

Scroll down to the event list and open each event. Under metadata.quality.* keys you will find Quality Gateway info:

\"\"


Find the first BROKEN event. Under metadata.quality.issues you will find the list of quality rules that this event did not pass. The rules on this list match the quality rules configured in the Event Publisher's config.

Example repository config file path (amer-prod): mdm-hub-cluster-env/amer/prod/namespaces/amer-prod/config_files/event-publisher/config/application.yml

\"\"


Quality rules are expressions written in Groovy. Every event passing the appliesTo filter must also pass the mustPass filter, otherwise it will be BROKEN.
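A purely hypothetical sketch of what a rule could look like (the appliesTo/mustPass key names come from this article, but the surrounding layout and attribute paths are invented for illustration; check the actual application.yml for the real schema):

```yaml
# Hypothetical quality rule: HCP entities must carry a Country attribute.
quality:
  rules:
    - name: hcp-must-have-country                                  # made-up rule name
      appliesTo: "entity.type == 'configuration/entityTypes/HCP'"  # Groovy filter
      mustPass: "entity.attributes?.Country != null"               # event is BROKEN if false
```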


Records in BROKEN state are saved in MongoDB along with the full event that triggered the rejection. For AUTO_RESOLVED and MANUALLY_RESOLVED it is a bit more tricky - record is no longer in MongoDB.

To find the exact event that triggered the rejection, you can use AKHQ - the Publisher's and QualityGateway's input Kafka topic is ${env}-internal-reltio-proc-event. Keep in mind that the retention configured for this topic is around 7 days - events older than that are automatically removed from the topic.

\"\"


Search by the entity/relationship URI in Key. Match the BROKEN event with Kibana by the timestamp.


There are countless ways in which an event can be broken, so some investigation will often be needed.

Most common cases until now:

Blank Profile

Description: when fetching the entity JSON through Postman, the JSON has no attributes, but the entity is not inactive.

\"\"

This is not expected and should be reported to the COMPANY MDM Team.

RDM Temporary Failure

Description: all lookup attribute values in the entity JSON have lookupErrors. At least one lookupCode per JSON is expected (unless there are no lookup attributes).

Good:

\"\"

Bad:

\"\"


This is not expected and should be reported to the COMPANY MDM Team.

For extra points, find the exact API request/response to which Reltio responded with lookupErrors and add it to the ticket. You can find the request/response in Kibana's component logs (Discover > amer-prod-mdmhub) in MDM Manager's logs - POST entities/_byUris.



" + }, + { + "title": "Thanos", + "pageID": "411343433", + "pageLink": "/display/GMDM/Thanos", + "content": "
\n
\n
\n
\n

Components:

The Thanos stack runs on the monitoring host amraelp00020595.COMPANY.com under /app/monitoring/prometheus/, orchestrated with docker-compose:

\n
-bash-4.2$ docker-compose ps \nNAME             IMAGE                                 COMMAND                  SERVICE          CREATED        STATUS         PORTS\nbucket_web       artifactory.p:main-7e879c6   "/bin/thanos tools b…"   bucket_web       3 weeks ago    Up 2 seconds   \ncompactor        artifactory.p:main-7e879c6   "/bin/thanos compact…"   compactor        44 hours ago   Up 44 hours    \nprometheus       artifactory.p...:v2.30.3     "/bin/prometheus --c…"   prometheus       3 weeks ago    Up 3 weeks     0.0.0.0:9090->9090/tcp, ...\nquery            artifactory.p:main-7e879c6   "/bin/thanos query -…"   query            3 weeks ago    Up 3 weeks     \nquery_frontend   artifactory.p:main-7e879c6   "/bin/thanos query-f…"   query_frontend   3 weeks ago    Up 3 weeks     \nrule             artifactory.p:main-7e879c6   "/bin/thanos rule --…"   rule             3 weeks ago    Up 3 weeks     \nstore            artifactory.p:main-7e879c6   "/bin/thanos store -…"   store            3 weeks ago    Up 3 weeks     \nthanos           artifactory.p:main-7e879c6   "/bin/thanos sidecar…"   thanos           3 weeks ago    Up 3 weeks     0.0.0.0:10901-10902->10901-10902/tcp,...
\n
\n
\n
\n
\n
\n
\n

Thanos (sidecar):

Thanos rule:
Thanos store:
Thanos bucket_web:
Thanos query_frontend:
Thanos query:
Thanos compactor


Thanos overview dashboard: Thanos / Overview - Dashboards - Grafana (COMPANY.com) 



\n
\n
\n
\n

\"\"

\n
\n
\n
\n
\n
\n

General troubleshooting: 

Troubleshooting always starts with analyzing the logs of the component mentioned in the alert. 
Thanos component logs always give clear information about the problem:

Typical procedure:

  • Check alerts
  • Check status of components with command: docker-compose ps 
  • If a component is crashlooping, check its log with the command: docker-compose logs <name_of_component>
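As a convenience, the status check can be scripted; the sketch below (the sample output and helper name are illustrative, not part of the stack) flags any compose service whose STATUS column does not start with "Up":

```python
# Minimal sketch: flag services that are not "Up" in `docker-compose ps` output.
# SAMPLE_PS is a made-up, column-aligned sample; in practice the text would be
# the real output of `docker-compose ps`.
SAMPLE_PS = """\
NAME         SERVICE      STATUS
compactor    compactor    Up 44 hours
bucket_web   bucket_web   Restarting (1) 2 seconds ago
"""

def unhealthy_services(ps_output):
    """Return names of services whose STATUS column does not start with 'Up'."""
    lines = ps_output.strip().splitlines()
    status_col = lines[0].index("STATUS")  # STATUS is the last, aligned column
    return [line.split()[0]
            for line in lines[1:]
            if not line[status_col:].startswith("Up")]

print(unhealthy_services(SAMPLE_PS))  # → ['bucket_web']
```

Any service this reports is a candidate for the docker-compose logs check above.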


Alerts rules:

Below are links to the Prometheus rules that can generate alerts: 

Known issues: 

Thanos sidecar permission denied

Alert: ThanosCompactHalted (after 24h)

Description: Thanos can't read the folder shared with Prometheus

Solution:

  1. Check thanos logs: docker-compose logs thanos
  2. Confirm the "permission denied" errors when accessing files
  3. Restart thanos with: docker-compose restart thanos


Compactor halted

Alert: ThanosCompactHalted.

Logs (docker-compose logs compactor):

\n
compactor         | ts=2024-03-25T13:23:43.380462226Z caller=compact.go:491 level=error msg="critical error detected; halting" err="compaction: group 0@3028247278749986641: compact blocks [/data/compact/0@3028247278749986641/01HSK9YKWVEDZGE9MF4XGARS58 /data/compact/0@3028247278749986641/01HSKBNHNJ9B1PC0NAYR5F67SJ /data/compact/0@3028247278749986641/01HSKDCFFEC9SZM5N5PTHK3TYM /data/compact/0@3028247278749986641/01HSKF3D9E0H1B4ZMAJ1YHKM1A]: populate block: chunk iter: cannot populate chunk 8 from block 01HSKDCFFEC9SZM5N5PTHK3TYM: segment index 0 out of range"
\n

Description: a chunk uploaded to S3 is broken

Solution:

  1. Go to https://mdm-monitoring.COMPANY.com/thanos-bucket-web/blocks
  2. Search for block 01HSKF3D9E0H1B4ZMAJ1YHKM1A
  3. Click on block
  4. Click on "Mark Deletion"
    \"\"
  5. Restart compactor with: docker-compose restart compactor 
  6. Verify if metric thanos_compact_halted returned to 0
    Grafana -> thanos_compact_halted  
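Step 6 can also be verified without Grafana by reading the metric straight from the Prometheus text exposition format; this is a hedged sketch with an inlined sample (in practice the text would come from the compactor's metrics endpoint):

```python
# Sketch: read thanos_compact_halted from Prometheus text-format metrics.
# SAMPLE_METRICS is illustrative; real text would be fetched from the
# compactor's /metrics endpoint (e.g. with curl).
SAMPLE_METRICS = """\
# HELP thanos_compact_halted Set to 1 if the compactor halted.
# TYPE thanos_compact_halted gauge
thanos_compact_halted 0
"""

def metric_value(metrics_text, name):
    """Return the value of an unlabelled gauge/counter from exposition text."""
    for line in metrics_text.splitlines():
        if line.startswith(name + " "):       # skips the # HELP / # TYPE lines
            return float(line.split()[-1])
    raise KeyError(name)

halted = metric_value(SAMPLE_METRICS, "thanos_compact_halted")
print(halted)  # → 0.0, i.e. the compactor is no longer halted
```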


Expired S3 keys

Alert (possibly not yet tested): ThanosSidecarBucketOperationsFailed

Description: Thanos can't access S3.

Solution:

  1. Check on the Thanos bucket page whether you can see data chunks from S3: https://mdm-monitoring.COMPANY.com/thanos-bucket-web/blocks
  2. Check the component logs and confirm that store, sidecar and bucket use old S3 keys
  3. Rotate the S3 keys 

High memory usage by store

Alert: - 

Description: Thanos store consumed more than 20% of node memory 

Solution: no clear solution; the root cause was not identified


\n
\n
\n
" + }, + { + "title": "Snowflake", + "pageID": "218446612", + "pageLink": "/display/GMDM/Snowflake", + "content": "" + }, + { + "title": "Dynamic Views Backwards Compatibility Error SOP", + "pageID": "322555521", + "pageLink": "/display/GMDM/Dynamic+Views+Backwards+Compatibility+Error+SOP", + "content": "

For the process documentation please visit the following page:

Snowflake: Backwards compatibility

There are two artifacts that can be created for this process and will be delivered to the HUB-DL:

  1. breaking-changes.info - this file is created when an attribute changes its type from a LOV to a non-LOV value or vice versa. LOV attributes have the *_LKP suffix in the dynamic-view column names, so in this scenario an additional column is created and the data is transferred to it. Both columns will still be present in Snowflake. No action is needed from the HUB end.

  2. breaking-changes.error - this file is only created when an existing column is converted into a nested value (i.e. it becomes a parent for multiple other attributes). Each nested value has a separate dynamic view that contains all of its attributes. The changes in this file are omitted from the Snowflake refresh. When such a change is discovered, HUB will notify Change Management and the Deloitte team to manage the case. 
" + }, + { + "title": "How to Gather Detailed Logs from Snowflake Connector", + "pageID": "234979546", + "pageLink": "/display/GMDM/How+to+Gather+Detailed+Logs+from+Snowflake+Connector", + "content": "


How To change the Kafka Consumer parameters in the Snowflake Kafka Connector:

add to docker-compose.yml:

        environment:
          - "CONNECT_MAX_POLL_RECORDS=50"
          - "CONNECT_MAX_POLL_INTERVAL_MS=900000"
    recreate container.


How To enable JDBC TRACE on Snowflake Kafka Connector:

    JDBC TRACE LOGS are in the TMP directory:
    https://github.com/snowflakedb/snowflake-kafka-connector/pull/201/commits/650b92cfa362217ca4dfdf2c6768026e862a9b45

    add 
        environment:
          - "JDBC_TRACE=true"

     additionally you can enable TRACE on the whole connector:

      - "CONNECT_LOG4J_LOGGERS=org.apache.kafka.connect=TRACE"

      more details here:

            https://docs.confluent.io/platform/current/connect/logging.html#connect-logging-docker

            https://docs.confluent.io/platform/current/connect/logging.html


    mount volume:
       volumes:
          - "/app/kafka-connect/prod/logs:/tmp:Z"

    recreate container.
    

    LOGS are in the:
        amraelp00007848:mdmuspr:[05:59 AM]:/app/kafka-connect/prod/logs> pwd
        /app/kafka-connect/prod/logs/snowflake_jdbc0.log.0
        
    Also gather the logs from the Container stdout:
        docker logs prod_kafka-connect-snowflake >& prod_kafka-connect-snowflake_after_restart_24032022_jdbc_trace.log
   


Additional details about DEBUG logging for the Snowflake connector:

https://docs.confluent.io/platform/current/connect/logging.html#check-log-levels

You can enable the DEBUG logs by editing the "connect" log configuration. (This is different from the JDBC TRACE setting used before.)

This is the link to the Snowflake doc explaining how to enable logging: 
https://docs.snowflake.com/en/user-guide/kafka-connector-ts.html#reporting-issues
In more details, on the confluent documentation:
https://docs.confluent.io/platform/current/connect/logging.html#using-the-kconnect-api

It is also possible to use an API call:

 curl -s -X PUT -H "Content-Type:application/json" \
      http://localhost:8083/admin/loggers/com.snowflake.kafka.connector \
      -d '{"level": "DEBUG"}' | jq '.'


Share the gathered logs with Snowflake support. 
    

" + }, + { + "title": "How to Refresh LOV_DATA in Lookup Values Processing", + "pageID": "218446615", + "pageLink": "/display/GMDM/How+to+Refresh+LOV_DATA+in+Lookup+Values+Processing", + "content": "
  1. Log in to proper Snowflake instance (credentials are stored in ansible repository):
    1. NPROD:
      1. EMEA (EU) - https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
      2. AMER (US) - https://amerdev01.us-east-1.privatelink.snowflakecomputing.com
    2. PROD: 
      1. EMEA (GBL) - https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com
      2.  AMER (US) - https://amerprod01.us-east-1.privatelink.snowflakecomputing.com
  2. Set proper role, warehouse and database:
    1. example (EU): 

      DB Name: COMM_GBL_MDM_DMART_PROD

      Default warehouse name: COMM_MDM_DMART_WH

      DevOps role name: COMM_PROD_MDM_DMART_DEVOPS_ROLE
  3. Run commands in the following order:
    1. COPY INTO landing.lov_data from @landing.LOV_DATA_STG pattern='.*.json';
    2. call customer.refresh_lov();
    3. call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODES');
    4. call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODE_SOURCE_MAPPINGS');
    5. call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODE_TRANSLATIONS');
    6. REMOVE @landing.LOV_DATA_STG pattern='.*.json';

       
" + }, + { + "title": "Issue: Cannot Execute Task, EXECUTE TASK Privilege Must Be Granted to Owner Role", + "pageID": "196884458", + "pageLink": "/display/GMDM/Issue%3A+Cannot+Execute+Task%2C+EXECUTE+TASK+Privilege+Must+Be+Granted+to+Owner+Role", + "content": "

Environment details:

SF: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

db: COMM_EU_MDM_DMART_DEV

schema: CUSTOMER

role: COMM_GBL_MDM_DMART_DEV_DEVOPS_ROLE

Issue:

The command is working fine:

\n
CREATE OR REPLACE TASK customer.refresh_customer_sl_eu_legacy_views\n   WAREHOUSE = COMM_MDM_DMART_WH\n   AFTER customer.refresh_customer_consolidated_views\nAS\nCALL customer.refresh_sl_views('COMM_EU_MDM_DMART_DEV_DB','CUSTOMER','COMM_GBL_MDM_DMART_DEV_DB','CUSTOMER_SL','%','I','M', false);\nALTER TASK customer.refresh_customer_sl_eu_legacy_views resume;
\n


The command that is causing the issue:

\n
ALTER TASK customer.refresh_customer_consolidated_views resume;\n\nSQL Error [91089] [23001]: Cannot execute task , EXECUTE TASK privilege must be granted to owner role
\n


Solution:

  1. http://btondemand.COMPANY.com/getsupport
  2. Choose Snowflake
  3. \"\"
    1. Issue:
      1. Describe your issue - Cannot execute task, EXECUTE TASK privilege must be granted to owner role
      2. Please provide a detailed description:
        1. Hi Team,
          We are facing the following issue:
          SQL Error [91089] [23001]: Cannot execute task, EXECUTE TASK privilege must be granted to owner role
          during the execution of the following command:
          ALTER TASK customer.refresh_customer_consolidated_views resume;

          Environment details:
          HOST: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
          DB: COMM_EU_MDM_DMART_DEV
          SCHEMA: CUSTOMER
          ROLE: COMM_GBL_MDM_DMART_DEV_DEVOPS_ROLE

          Could you please fix this issue in DEV/QA/STAGE and additionally on PROD:
          HOST: https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com

          Please let me know if you need any other details.

    2. Created ticket for reference: - http://digitalondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=RF3372664 


" + }, + { + "title": "PTE: Add Country", + "pageID": "302686106", + "pageLink": "/display/GMDM/PTE%3A+Add+Country", + "content": "

There are two files in the Snowflake Bitbucket repo that are used in the deployment for PTE:

src/sql/global/pte_sl/tables/driven_tables.sql

src/sql/global/pte_sl/views/report_views.sql


driven_tables.sql

This file contains the definitions of supporting tables used for the calculation of the PTE_REPORT view.

DRIVEN_TABLE2_STATIC contains the list of identifiers per country and the column placement in the pte_report view. There can be a maximum of five identifiers per country and they should be provided by the PTE team. If there are no identifiers added for a country in the table the list of identifiers will be calculated "dynamically" based on the number of HCPs having the identifier.

Column name - Description
ISO_CODE - ISO2 code of the country, e.g. 'TR', 'FR', 'PL'.
CANONICAL_CODE - RDM code that will appear in PTE_REPORT as IDENTIFIER_CODE.
LANG_DESC - RDM code description that will appear in PTE_REPORT as IDENTIFIER_CODE_DESC.
CODE_ID - TYPE_LKP value used to connect to the identifiers table to extract the value.
MODEL - 'p' or 'i', showing whether the codes for the country should be taken from the IQVIA ('i') or COMPANY ('p') data model.
ORDER_ID - a number from 1 to 5, showing the placement of the code among identifiers. Code 1 will be mapped to IDENTIFIER1_CODE etc.

report_views.sql

DRIVEN_TABLE1 is a view that derives the basic information for the country from the COUNTRY_CONFIG table. The country ISO2 code has to be added into the WHERE clause depending on whether the country should have data from the IQVIA data model (the first part of the query) or from the COMPANY data model (after the UNION)

\n
\n DRIVEN_TABLE1 Expand source\n
\n
\n
CREATE OR REPLACE VIEW PTE_SL."DRIVEN_TABLE1" AS(\nSELECT\n    ISO_CODE,\n    NAME,\n    LABEL,\n    RELTIO_TENANT,\n    HUB_TENANT,\n    SF_INSTANCE,\n    SF_TENANTDATABASE,\n    CUSTOMERSL_PREFIX\nFROM CUSTOMER.COUNTRY_CONFIG \nWHERE ISO_CODE in ('SK', 'PH', 'CL', 'CO', 'AR', 'MX')\nAND CUSTOMERSL_PREFIX = 'i_'\nUNION ALL\nSELECT\n    ISO_CODE,\n    NAME,\n    LABEL,\n    RELTIO_TENANT,\n    HUB_TENANT,\n    SF_INSTANCE,\n    SF_TENANTDATABASE,\n    CUSTOMERSL_PREFIX\nFROM CUSTOMER.COUNTRY_CONFIG\nWHERE ISO_CODE in ('AD', 'BL', 'BR', 'FR', 'GF', 'GP', 'MC', 'MC', 'MF', 'MQ', 'MU', 'NC', 'PF', 'PM', 'RE', 'TF', 'WF', 'YT')\nAND CUSTOMERSL_PREFIX = 'p_'\n);
\n
\n


PTE_REPORT is the view from which the clients take their data. Unfortunately, the data required varies from country to country and also, in some cases, between nprod and prod due to data availability.

GO_STATUS. By default, for the IQVIA data model the values for GO_STATUS are YES/NO and for the COMPANY data model they're Y/N. If there's an exception you have to manually add the country to the CASE in the view.

\n
\n GO_STATUS Expand source\n
\n
\n
CAST(CASE\n    WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:GO' AND HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'Y'\n    WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:NGO' AND HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'N'\n    WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'YES'\n    WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'NO'\n\tWHEN HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'N'\n    ELSE 'NO'\nEND AS VARCHAR(200)) AS "GO_STATUS",
\n
\n
" + }, + { + "title": "QC", + "pageID": "234712311", + "pageLink": "/display/GMDM/QC", + "content": "

Snowflake QC Check data is located in the CUSTOMER.QUALITY_CONTROL table.


Duplicated COMPANY_GLOBAL_CUSTOMER_ID

sql:

SELECT COMPANY_global_customer_id, COUNT(1)
FROM customer.entities
WHERE COMPANY_global_customer_id is not null
AND last_event_type not like '%LOST_MERGE%'
AND last_event_type not like '%REMOVED%'
GROUP BY COMPANY_global_customer_id
HAVING COUNT(1) > 1

Description:

COMPANY Global Customer ID should be unique for every entity in Reltio. In case of any duplicates you have to check whether it's a Snowflake data refresh issue (data is OK in Reltio but not in Snowflake), or something is wrong with the flow (check if the IDs are duplicated in COMPANYIdRegistry in Mongo). 
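The check itself can be rehearsed offline; this sketch rebuilds the relevant columns in an in-memory SQLite table (rows are invented, not real MDM data) and runs the equivalent query:

```python
import sqlite3

# Offline sketch of the duplicate QC query using an in-memory SQLite copy
# of the relevant columns; all rows below are made up for illustration.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE entities (
    COMPANY_global_customer_id TEXT,
    last_event_type TEXT)""")
con.executemany(
    "INSERT INTO entities VALUES (?, ?)",
    [("GID-1", "HCP_UPDATED"),
     ("GID-1", "HCP_CREATED"),          # genuine duplicate of GID-1
     ("GID-2", "HCP_LOST_MERGE"),       # excluded by the event-type filter
     ("GID-2", "HCP_UPDATED"),
     (None,    "HCP_UPDATED")])         # NULL ids are ignored

dupes = con.execute("""
    SELECT COMPANY_global_customer_id, COUNT(1)
    FROM entities
    WHERE COMPANY_global_customer_id IS NOT NULL
    AND last_event_type NOT LIKE '%LOST_MERGE%'
    AND last_event_type NOT LIKE '%REMOVED%'
    GROUP BY COMPANY_global_customer_id
    HAVING COUNT(1) > 1""").fetchall()
print(dupes)  # → [('GID-1', 2)]
```

Note how GID-2 is not reported: its second row is filtered out as a LOST_MERGE event, which mirrors the production query's behavior.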


Merges with object data

sql:

SELECT ENTITY_URI
FROM CUSTOMER.ENTITIES
WHERE LAST_EVENT_TYPE IN ('HCP_LOST_MERGE', 'HCO_LOST_MERGE', 'MCO_LOST_MERGE')
AND OBJECT IS NOT NULL

Description:

All entities in the *Lost_Merge status should have null values in the object column. If that's not the case, clear them manually, either by re-sending the affected records to Snowflake or by setting their object field to null. 


Active crosswalks assigned to more than one different entity

sql:

SELECT CROSSWALK_URI
FROM
CUSTOMER.M_ENTITY_CROSSWALKS
WHERE ACTIVE = TRUE
AND ACTIVE_CROSSWALK = TRUE
GROUP BY CROSSWALK_URI
HAVING COUNT(ENTITY_URI) > 1


Description:

A crosswalk should be active for only one entity_uri. If that's not the case then either the entities should be merged (contact: DLER-COMPANY-MDM-Support <COMPANY-MDM-Support@iqvia.com>) or they were merged but the lost_merge event wasn't delivered to snowflake / mdm_hub.


Duplicated entities in materialized views

sql:

SELECT ENTITY_URI, 'HCO' TYPE, COUNT(1)
FROM CUSTOMER.M_HCO
GROUP BY ENTITY_URI
HAVING COUNT(1) > 1
UNION ALL
SELECT ENTITY_URI, 'HCP' TYPE, COUNT(1)
FROM CUSTOMER.M_HCP
GROUP BY ENTITY_URI
HAVING COUNT(1) > 1

Description:

There are duplicated records in materialized tables. Investigate what caused the duplicates and run the full materialization procedure to fix it.


Entities with the same global id and parent global id

sql:

SELECT ENTITY_URI, COMPANY_GLOBAL_CUSTOMER_ID, PARENT_COMPANY_GLOBAL_CUSTOMER_ID
FROM CUSTOMER.ENTITIES
WHERE COMPANY_GLOBAL_CUSTOMER_ID = PARENT_COMPANY_GLOBAL_CUSTOMER_ID
AND COMPANY_GLOBAL_CUSTOMER_ID IS NOT NULL

Description:

Check if this is also the case in the Hub. If not, re-send the data into Snowflake; if yes, contact the support team.


Missing ID's for specializations:

sql:

SELECT ENTITY_URI
FROM CUSTOMER.M_SPECIALITIES
WHERE SPECIALITIES_URI IS NULL

Description:

Review the affected entities. If they're missing an ID, review them with the Hub team. Make sure they're active in Reltio and the Hub. You might have to reload them in Snowflake if they're not updated.


" + }, + { + "title": "Snowflake - Prometheus Alerts", + "pageID": "401026870", + "pageLink": "/display/GMDM/Snowflake+-+Prometheus+Alerts", + "content": "

SNOWFLAKE TASK FAILED

Description: This alert means that one of the regularly scheduled Snowflake tasks has failed. To fix this, find the failed task in Snowflake, check the reason, and fix it. Snowflake task DAGs have an auto-suspend function after ten consecutive failed runs; if the issue isn't resolved by then, you'll need to manually restart the root task.

Queries:

  1. Identify failed tasks

    \n
    SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000, ERROR_ONLY=>TRUE))\n;
    \n
  2. Use the ERROR_CODE and ERROR_MESSAGE columns to find out the information needed to determine the cause of the error.
  3. After determining and fixing the cause of the issue you can manually run all the queries that are left in the task tree. To get them you can use the following code:

    \n
    SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS('<task_name>'))\n;
    \n

    Remember that if a schema isn't selected for the session you need to submit it with the task name.
    You can also use the EXECUTE TASK query with the RETRY LAST option to restart the flow. This will only work if a new run hasn't started yet, and you have to run it on the root task, not the task that failed.

    \n
    EXECUTE TASK <root_task_name> RETRY LAST;
    \n

    SNOWFLAKE TASK FAILED 603

    Description: This alert means that one of the regularly scheduled Snowflake tasks has failed. To fix this, find the failed task in Snowflake, check the reason, and fix it. Snowflake task DAGs have an auto-suspend function after ten consecutive failed runs; if the issue isn't resolved by then, you'll need to manually restart the root task.

    Queries:

    1. Identify failed tasks

      \n
      SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000, ERROR_ONLY=>TRUE))\n;
      \n
    2. You can manually run all the queries that are left in the task tree. To get them you can use the following code:

      \n
      SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS('<task_name>'))\n;
      \n

      Remember that if a schema isn't selected for the session you need to submit it with the task name.
      You can also use the EXECUTE TASK query with the RETRY LAST option to restart the flow. This will only work if a new run hasn't started yet, and you have to run it on the root task, not the task that failed.

      \n
      EXECUTE TASK <root_task_name> RETRY LAST;
      \n
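For both variants of this alert the triage is the same: list the failed runs, newest first. As an offline sketch (the field names mirror TASK_HISTORY columns, but the rows and error texts are invented):

```python
# Sketch: pick out failed runs from mock TASK_HISTORY-like rows, newest first.
# All rows and error messages below are invented for illustration.
history = [
    {"name": "REFRESH_ENTITIES",  "query_start_time": "2024-06-26T10:00:00",
     "state": "SUCCEEDED", "error_message": None},
    {"name": "MATERIALIZE_CODES", "query_start_time": "2024-06-26T10:05:00",
     "state": "FAILED", "error_message": "SQL compilation error"},
    {"name": "MATERIALIZE_CODES", "query_start_time": "2024-06-26T08:05:00",
     "state": "FAILED", "error_message": "Timeout"},
]

# Keep only failures, newest first (ISO timestamps sort lexicographically).
failed = sorted((r for r in history if r["state"] == "FAILED"),
                key=lambda r: r["query_start_time"], reverse=True)
for r in failed:
    print(r["name"], "-", r["error_message"])
```

The newest failure is usually the one to investigate first; older ones are often repeats of the same root cause.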

SNOWFLAKE TASK NOT STARTED 24h

Description: A Snowflake scheduled task hasn't run in the last day. You need to check whether the alert is factually correct and solve any issues that are stopping the task from running. Please note that on production the materialization is scheduled every two hours, so if a materialization task hasn't run for 24h we have missed twelve materialization cycles of data; hence it's important to get it fixed as soon as possible.

Queries:

  1. Check when the task was last run

    \n
    SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000))\nWHERE 1=1\nAND DATABASE_NAME ='<database_name>'\nAND NAME = '<task_name>'\nORDER BY QUERY_START_TIME DESC\n;
    \n
  2. If the task is running successfully, the issue might be with Prometheus data scraping. Check the following dashboard to see when the data was last successfully scraped:
    Snowflake Tasks - Dashboard

    If the task hasn't run in the last 24h, it might be suspended. Verify it using the command:

    \n
    SHOW TASKS;
    \n

    The STATE column will tell you if the task is suspended or started, and the LAST_SUSPENDED_REASON column will tell you the reason for the last suspension. If it's SUSPENDED_DUE_TO_ERRORS you need to get the list of all the dependent tasks and find which one of them failed (reminder: the root task gets suspended if any of the child tasks fails ten times in a row). To find the failed task and the dependents of the suspended task you can use the queries from the alert SNOWFLAKE TASK FAILED.

  3. To restart a suspended task run the query:

    \n
    ALTER TASK <schema_name>.<task_name> resume;
    \n

SNOWFLAKE DUPLICATED COMPANY GLOBAL CUSTOMER ID'S

Description: COMPANY Global Customer IDs are unique identifiers calculated by the Hub. In some cases of wrongly done unmerge events on the Reltio side there might be entities with wrongly assigned hub-callback crosswalks, or there might be another reason that caused the duplicates. The IDs need to be unique, so it should be verified, fixed, and the data reloaded in a timely manner.

Queries:

  1. Identify COMPANY global customer id's with duplicates:

    \n
    SELECT COMPANY_global_customer_id, COUNT(1)\nFROM customer.entities\nWHERE COMPANY_global_customer_id is not null\nAND last_event_type not like '%LOST_MERGE%'\nAND last_event_type not like '%REMOVED%'\nGROUP BY COMPANY_global_customer_id\nHAVING COUNT(1) > 1\n;
    \n



    Variant of the query that returns entity uri's for easier querying:

    \n
    SELECT ENTITY_URI\nFROM CUSTOMER.ENTITIES\nWHERE COMPANY_GLOBAL_CUSTOMER_ID IN (\n    SELECT COMPANY_global_customer_id\n    FROM customer.entities\n    WHERE COMPANY_global_customer_id is not null\n    AND last_event_type not like '%LOST_MERGE%'\n    AND last_event_type not like '%REMOVED%'\n    GROUP BY COMPANY_global_customer_id\n    HAVING COUNT(1) > 1\n)\n;
    \n
  2. Check if the duplicates are reflected in MongoDB. If the data in Mongo doesn't have the duplicates, use the Hub UI to resend the events to Snowflake.
  3. Check if Reltio contains the duplicated data; if not, reconcile the affected entities, if yes, review the reason. If it's because of a Hub_Callback you might need to manually delete the crosswalk, and check COMPANYIDRegistry in Mongo; if it also contains duplicates, delete the duplicate entries there as well.

SNOWFLAKE LAST ENTITY EVENT TIME

Description: The alert informs of Snowflake production tenants where the last update was more than four hours ago. The refresh on production is every two hours and the traffic is high enough that there should be updates in every cycle.

Queries:

  1. Check how many minutes ago was the last update in Snowflake

    \n
    SELECT DATEDIFF('MINUTE', (SELECT MAX(SF_UPDATE_TIME) FROM CUSTOMER.ENTITIES), (SELECT CURRENT_TIMESTAMP()));
    \n
  2. If it's over four hours, check whether the Kafka Snowflake topic has an active consumer and whether the data is flowing correctly to the landing schema. Review any recent changes in the Snowflake refresh to make sure there's nothing impacting the tasks and that they're all started.
  3. If the data in Snowflake is OK, then the issue might be with the data scrape.
    Snowflake Tasks - Dashboard
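The DATEDIFF check in step 1 is simply "minutes since the newest SF_UPDATE_TIME"; a deterministic offline sketch of that arithmetic (both timestamps are invented so the example is reproducible):

```python
from datetime import datetime, timedelta

# Sketch mirroring DATEDIFF('MINUTE', MAX(SF_UPDATE_TIME), CURRENT_TIMESTAMP()).
# Both timestamps are fixed, invented values; in practice last_update comes
# from the query in step 1 and `now` is the current time.
last_update = datetime(2024, 6, 26, 8, 0)
now = datetime(2024, 6, 26, 13, 30)      # stands in for CURRENT_TIMESTAMP()

minutes_behind = int((now - last_update) / timedelta(minutes=1))
stale = minutes_behind > 240             # alert threshold: four hours
print(minutes_behind, stale)  # → 330 True
```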

SNOWFLAKE MISSING COMPANY GLOBAL ID'S IN MATERIALIZED DATA

Description: This alert informs us that there are entities in Snowflake that don't have a COMPANY Global Customer ID. This is a mandatory identifier and as such should be available for all event types (excluding DCRs). It's also used by downstream clients to identify records, and if the value is deleted from an entity it will be deleted in the downstream systems as well.

Queries:

  1. Check the impact in the qc table:

    \n
    SELECT *\nFROM CUSTOMER.QC_COMPANY_ID\nORDER BY DATE DESC\n;
    \n
  2. Get the list of all entities that are missing the id's

    \n
    SELECT *\nFROM CUSTOMER.ENTITIES\nWHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL\nAND ENTITY_TYPE != 'DCR'\nAND COUNTRY != 'US'\nAND (SELECT CURRENT_DATABASE()) not ilike 'COMM_EU%'\n;
    \n
  3. Check the data in Mongo, AKHQ, Reltio.
  4. Consider informing downstream clients to stop ingestion of the data until the issue is fixed

SNOWFLAKE GENERATED EVENTS WITHOUT COMPANY GLOBAL CUSTOMER ID'S

Description: This alert stops events without COMPANY Global Customer IDs from reaching the materialized data layer. It will add information about these occurrences into a special table and delete those events before materialization.

Queries:

  1. Check the list of impacted entity_uri's

    \n
    SELECT *\nFROM CUSTOMER.MISSING_COMPANY_ID\n;
    \n
  2. Check for the reason of the missing COMPANY Global Customer IDs, similarly to the missing global IDs in materialized data alert.
  3. After finding and fixing the root cause, use the Hub UI to resend the profiles into Snowflake to make sure we have the correct data.
  4. Clear the missing COMPANY id table

    \n
    TRUNCATE TABLE CUSTOMER.MISSING_COMPANY_ID;
    \n

SNOWFLAKE TOPIC NO CONSUMER

Description: The Kafka Connector from Mongo to Snowflake has data which isn't consumed.

Queries:

  1. Check if the consumer is online; you might have to restart its pod to get it working again.


SNOWFLAKE VIEW MATERIALIZATION FAILED

Description: This alert informs you that one or more views have failed in their last materialization attempt. The alert checks the data from the CUSTOMER.MATERIALIZED_VIEW_LOG table for the last seven days and chooses the last materialization attempt based on the largest ID.

Queries:

  1. Query that the alert is based upon

    \n
    SELECT COUNT(VIEW_NAME) FAILED_MATERIALIZATION\nFROM (\n    SELECT VIEW_NAME, MAX(ID) ID, SUCCESS, ERROR_MESSAGE, MATERIALIZED_OPTION, ROW_NUMBER() OVER (PARTITION BY VIEW_NAME ORDER BY ID DESC) AS RN\n    FROM CUSTOMER.MATERIALIZED_VIEW_LOG\n    GROUP BY VIEW_NAME, ERROR_MESSAGE, ID, SUCCESS, MATERIALIZED_OPTION\n    HAVING DATEDIFF('days', MAX(START_TIME),  (SELECT CURRENT_DATE())) < 7\n)\nWHERE RN = 1\nAND SUCCESS = 'FALSE';
    \n
  2. Modified version that will show you the error message with which Snowflake ended the materialization attempt. These are standard SQL errors for which you have to find the root cause and the resolution of the issue.

    \n
    SELECT VIEW_NAME, ERROR_MESSAGE\nFROM (\n    SELECT VIEW_NAME, MAX(ID) ID, SUCCESS, ERROR_MESSAGE, MATERIALIZED_OPTION, ROW_NUMBER() OVER (PARTITION BY VIEW_NAME ORDER BY ID DESC) AS RN\n    FROM CUSTOMER.MATERIALIZED_VIEW_LOG\n    GROUP BY VIEW_NAME, ERROR_MESSAGE, ID, SUCCESS, MATERIALIZED_OPTION\n    HAVING DATEDIFF('days', MAX(START_TIME),  (SELECT CURRENT_DATE())) < 7\n)\nWHERE RN = 1\nAND SUCCESS = 'FALSE';
    \n
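The "last attempt per view" logic these queries rely on can be reproduced offline; this sqlite3 sketch (rows invented) keeps each view's highest ID and reports the views whose latest attempt failed:

```python
import sqlite3

# Offline sketch of the alert logic: for each view keep only the row with the
# highest ID (the latest attempt) and report the failures. Rows are invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE materialized_view_log (
    id INTEGER, view_name TEXT, success TEXT, error_message TEXT)""")
con.executemany(
    "INSERT INTO materialized_view_log VALUES (?, ?, ?, ?)",
    [(1, "M_HCP",   "FALSE", "timeout"),
     (2, "M_HCP",   "TRUE",  None),          # latest M_HCP attempt succeeded
     (3, "M_CODES", "FALSE", "SQL error")])  # latest M_CODES attempt failed

# MAX(id) per view stands in for the ROW_NUMBER() window in the alert query.
failed = con.execute("""
    SELECT l.view_name, l.error_message
    FROM materialized_view_log l
    JOIN (SELECT view_name, MAX(id) AS id
          FROM materialized_view_log GROUP BY view_name) last
      ON l.view_name = last.view_name AND l.id = last.id
    WHERE l.success = 'FALSE'""").fetchall()
print(failed)  # → [('M_CODES', 'SQL error')]
```

M_HCP is not reported because only its older attempt failed, which matches the alert's "latest attempt only" semantics.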

SNOWFLAKE MISSING DESC IN CODES VIEW

Description: This alert indicates that there are codes without descriptions in the CUSTOMER.M_CODES data table.

Queries:

  1. Check the missing data:

    \n
    SELECT CODE_ID, DESC\nFROM CUSTOMER.M_CODES\nWHERE DESC IS NULL;
    \n
  2. Check the Dynamic view to make sure it's not a materialization issue:

    \n
    SELECT CODE_ID, DESC\nFROM CUSTOMER.CODES\nWHERE DESC IS NULL;
    \n
  3. If it's a materialization issue then rematerialize the table.

    \n
    CALL CUSTOMER.MATERIALIZE_VIEW_FULL_REFRESH('M', 'CUSTOMER', 'CODES');
    \n
  4. If the data is missing in the dynamic view, check the code in RDM. If it has a source mapping from the source Reltio with the canonical value set to true, then it should have data in Snowflake; check why it isn't flowing. If there is no such entry, notify the COMPANY team.


" + }, + { + "title": "Release", + "pageID": "386809112", + "pageLink": "/display/GMDM/Release", + "content": "

Release history:


Release process description (TBD):

Text:

Diagram:

How branches work, differences between release and FIX deployment (TBD):

Text:

Diagram:


Release rules:

  1. Always do PR review.
  2. Do not deploy unencrypted files.
  3. Release versioning: normal path 4.x, FIX version 4.10.x
  4. TBD
  5. TBD


Release calendar:

TBD




" + }, + { + "title": "Snowflake Release", + "pageID": "430080179", + "pageLink": "/display/GMDM/Snowflake+Release", + "content": "" + }, + { + "title": "Current Release", + "pageID": "438309059", + "pageLink": "/display/GMDM/Current+Release", + "content": "


Release report:

Release:2.2.0Release date:

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Grzegorz SzczęsnyPlanned GO-LIVE:wed Jul 03
Jira linkCategoryDescriptionDeveloped ByDevelopment FinishedTested By Test Scenarios / ResultsTesting FinishedAdditional Notes

\n MR-9001\n -\n Getting issue details...\n STATUS\n
\n MR-8942\n -\n Getting issue details...\n STATUS\n

Feature ChangeUpdate the data mart with code changes needed for Onekey and DLUP data.SZCZEG0102.07.2024SARMID03Done validating below:
✅Onekey Data Mapping.
✅ DLUP Data Mapping.
03.07.2024

\n MR-9056\n -\n Getting issue details...\n STATUS\n

Feature ChangeUpdate the Country Table for Transparency_SL with new data.SZCZEG0102.07.2024SARMID03✅New data passed the checking.03.07.2024

\n MR-8988\n -\n Getting issue details...\n STATUS\n

ChangeImproved the MATERIALIZE_VIEW_INCREMENTAL_REFRESH procedure to cover 5 options that were previously covered by 5 separate procedures, and replaced their use with the new oneHARAKR02.07.2024












PROD deployment report:

PROD deployment date:Wed Jun 26 12:27:48 UTC 2024

Deployed by:Grzegorz Szczęsny
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS


GLOBAL

SUCCESS


" + }, + { + "title": "2.1.0", + "pageID": "430080184", + "pageLink": "/display/GMDM/2.1.0", + "content": "


Release report:

Release:2.1.0Release date:Wed Jun 26 12:27:48 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Grzegorz SzczęsnyPlanned GO-LIVE:wed Jun 19
Jira linkCategoryDescriptionDeveloped ByDevelopment FinishedTested By Test Scenarios / ResultsTesting FinishedAdditional Notes

\n MR-8919\n -\n Getting issue details...\n STATUS\n

New FeaturePOC - The point of this ticket is to check if calculating a delta based on the SF_UPDATE_TIME from the materialized ENTITY_UPDATE_DATES table will be more efficient than using the stream. If this results in better performance then we're going to calculate deltas on our base tables, dropping the streams.SZCZEG0128.05.2024SZCZEG01

Verified the change on times and the data quality by running the procedures simultaneously on EMEA STAGE for a period of time

old:
\"\"


new:

\"\"



\n MR-8862\n -\n Getting issue details...\n STATUS\n

New FeatureDue to a change done in RDM we lost some descriptions for certain codes. It's important that we have the visibility for such issues in the future, therefore the need for this alert.SZCZEG0129.05.2024-New alert in Prometheus no need for additional testing--

\n MR-8969\n -\n Getting issue details...\n STATUS\n

ChangeAdjusted TRANSPARENCY_SL views to filter based on COUNTRY code (COMPANY model vs IQVIA)HARAKR13.06.2024



\n MR-9003\n -\n Getting issue details...\n STATUS\n

ChangeUpdate TRANSPARENCY_SL schema to Secure Views instead of views, due to the need to have the data from EMEA PROD available in AMER lower envs.SZCZEG0121.06.2024-Checked the view type on PROD--

\n MR-8986\n -\n Getting issue details...\n STATUS\n

ChangeChange the way incremental code updates treat hard-deleted LOVs.SZCZEG0118.06.2024SZCZEG01


\n MR-8740\n -\n Getting issue details...\n STATUS\n

ChangeSuspend the WAREHOUSE_SUSPEND task.SZCZEG0118.04.2024-Pushed directly to PROD--

\n MR-8701\n -\n Getting issue details...\n STATUS\n

New FeatureAdd new views in the PT&E schema for Saudi Arabia HCO / IDENTIFIERSSZCZEG0118.04.2024-Checked the views availability and record counts.--

\n MR-8712\n -\n Getting issue details...\n STATUS\n

BugfixFix a case where column order changes and it causes global views to not update properly.SZCZEG0118.04.2024SZCZEG01Rerun the case that cause the issue--

\n MR-8827\n -\n Getting issue details...\n STATUS\n

ChangeAdd email column to PT&E EU/APAC reportsSZCZEG0122.05.2024SZCZEG01Checked the column availability--

\n MR-8863\n -\n Getting issue details...\n STATUS\n

ChangeAdd a case for code materialization where there is more than one description from the source Reltio but not all of them are CanonicalValues.SZCZEG0122.05.2024SZCZEG01Checked with the existing missing descriptions.--

MR-7038

New FeatureAdd enhanced logging for manually called procedures.SZCZEG0122.05.2024SZCZEG01---

MR-8896

ChangeRemove DE from PTE_REPORT_EU, change values "Without Title", "Unknown", and "Unspecified" to null.SZCZEG0122.05.2024SZCZEG01---

MR-8916

ChangeRemove "Unknown" Country Codes from missing COMPANY global customer id's.SZCZEG0128.05.2024SZCZEG01---

MR-8994

ChangeUpdate column names for PTE_REPORT_SA.SZCZEG0118.06.2024SZCZEG01---

MR-8992

ChangeAdd missing columns to the Transparency_SL reports (MVP1 review).SZCZEG0118.06.2024SZCZEG01---

MR-8980

ChangeAdd US data into the Global DataMart TRANSPARENCY_SL.SZCZEG0118.06.2024SZCZEG01---

MR-8977

ChangeAdd hard coded columns to the TRANSPARENCY_SL data mart.SZCZEG0118.06.2024SZCZEG01---

MR-8844

New FeatureCreate Initial Data Mart for the TRANSPARENCY_SL project.SZCZEG0118.06.2024SZCZEG01---

MR-9016

BugfixFix on MR-8986. The procedure was launched in the landing schema but tried to use a function that is only available in the customer schema. Not finding the function in the current schema, it returned an errorSZCZEG0125.06.2024SZCZEG01---

MR-8991

New FeatureChange refresh entities to use a calculated delta instead of streams. Follow-up to POC MR-8919.SZCZEG0118.06.2024SZCZEG01---
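The MR-8991 entry above replaces stream-based change capture with a calculated delta between snapshots. As a rough illustration only (hypothetical data and function name, not the actual procedure), the delta can be derived by comparing the previous and current snapshot keyed by entity id:

```python
# Hypothetical sketch: compute inserted/updated/deleted sets between two
# snapshots instead of consuming a change stream.

def calculate_delta(previous: dict, current: dict) -> dict:
    """Return inserted, updated, and deleted entries between two snapshots."""
    inserted = {k: v for k, v in current.items() if k not in previous}
    deleted = {k: v for k, v in previous.items() if k not in current}
    updated = {
        k: v for k, v in current.items()
        if k in previous and previous[k] != v
    }
    return {"inserted": inserted, "updated": updated, "deleted": deleted}

prev = {"E1": "Dr. A", "E2": "Dr. B", "E3": "Dr. C"}
curr = {"E1": "Dr. A", "E2": "Dr. B (MD)", "E4": "Dr. D"}
delta = calculate_delta(prev, curr)
```

A calculated delta like this trades the bookkeeping of stream offsets for a full snapshot comparison, which is simpler to reason about at the cost of scanning both snapshots.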

PROD deployment report:

\"\"CHANGELOG_2_1_0.md

" + }, + { + "title": "4.1.24 [TEMPLATE - draft]", + "pageID": "386815558", + "pageLink": "/pages/viewpage.action?pageId=386815558", + "content": "

Release report:

Release:4.1.24Release date:Tue Jan 16 21:08:10 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:TODOPlanned GO-LIVE:Tue Jan 30 (in 2 weeks)
StageLinkStatusComments (images 600px)
Build:TODO

SUCCESS 


CHANGELOG:TODO



Unit tests:TODO

SUCCESS

TODO

Integration tests:

Execution date: TODO

Executed by: TODO

AMERTODO

[84] SUCCESS

[0] FAILED

[0] REPEATED


TODO

APACTODO

[89] SUCCESS

[0] FAILED

[0] REPEATED


TODO

EMEATODO

[89] SUCCESS

[0] FAILED

[0] REPEATED


TODO

GBL(EX-US)TODO

[72] SUCCESS

[0] FAILED

[0] REPEATED


TODO
GBLUSTODO

[74] SUCCESS

[0] FAILED

[0] REPEATED


TODO

Tests ready and approved:
  • approved by: TODO
Release ready and approved:
  • approved by: TODO


DEV and QA tests results:

DEV and QA deployment date:TODO Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: TODO
Deployed by:TODO
ENV:LinkStatusDetails
AMERTODO

SUCCESS


APACTODO

SUCCESS


EMEA

TODO

SUCCESS


GBL(EX-US)

TODO

SUCCESS


GBLUS

TODO

SUCCESS 



STAGE deployment details:

STAGE deployment date:TODO Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: TODO
Deployed by:TODO
ENV:LinkStatusDetails
AMERTODO

SUCCESS


APACTODO

SUCCESS


EMEA

TODO

SUCCESS


GBL(EX-US)

TODO

SUCCESS


GBLUS

TODO

SUCCESS 



STAGE test phase details:

Verification date



Verification by


Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong



MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue

MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)

General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is too prone to project-specific loads



Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / kubernetes-persistent-volumes 

Storage trend over time 



General / Alerts Statistics 

Increase after release → potential issue 



General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period



PROD deployment report:

PROD deployment date:TODO Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: TODO
Deployed by:TODO
ENV:LinkStatusDetails
AMERTODO

SUCCESS


APACTODO

SUCCESS


EMEA

TODO

SUCCESS


GBL(EX-US)

TODO

SUCCESS


GBLUS

TODO

SUCCESS


PROD deploy hypercare details:

Verification date



Verification by


Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong



MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue

MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)

General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is too prone to project-specific loads



Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / kubernetes-persistent-volumes 

Storage trend over time 



General / Alerts Statistics 

Increase after release → potential issue 



General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period




" + }, + { + "title": "4.1.24 [TEMPLATE - example]", + "pageID": "386809114", + "pageLink": "/pages/viewpage.action?pageId=386809114", + "content": "

Release report:

Release:4.1.24Tue Jan 16 21:08:10 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Mikołaj MorawskiTue Jan 30 (in 2 weeks)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/467/ 

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/387d6b51ebf7ade55692d80388d81e3c1e59117d 



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/467/testReport/ 

SUCCESS

\"\"

Integration tests:

Execution date: Wed Jan 24 18:01:08 UTC 2024

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/372/testReport/

[84] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/314/testReport/

[89] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/466/testReport/

[88] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/384/testReport/

[73] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • failed tests - DerivedHcpAddressesTestCase.derivedHCPAddressesTest 
    • during the run on Reltio there were multiple events and the test got blocked
    • Test was repeated manually and passed with success 
      • <screenshot from local execution>
GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/321/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
Release ready and approved:
  • approved by: Mikołaj Morawski 


STAGE deployment details:


STAGE test phase details:

Test Test description ResponsibleStatus
Alerts verificationTo check if any of the alerts in STG environments is a PROD deployment release stopper. e.g. Latuch, Lukasz 

e.g. SUCCESS

Snowflake checkTo check if there are any failed QC checks or tasks that could also happen on PROD environments. 

Data Quality GatewayTo check if there are any broken events. 

Environment check

To check if there are any issues on the STG environment that can be a PROD release stopper



TBD


TBD


PROD deployment report:



" + }, + { + "title": "4.1.28", + "pageID": "386815544", + "pageLink": "/display/GMDM/4.1.28", + "content": "

Release report:

Release:4.1.28Release date:Thu Feb 08 10:10:38 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Rafał KućPlanned GO-LIVE:Thu Feb 29
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/470/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/966ebe3374d1de8d89764bbf5fd4e39e638a5723#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/39953783022e8b06c49af2e872b7cf66f2a8b26b



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/470/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Feb 13 18:00:57 UTC 2024

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/391/testReport/

[84] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • one failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.test
    • repeated from local PC one more time by Mikołaj Morawski
    • during the run on Reltio there were multiple events and the test got blocked
    • Test was repeated manually and passed with success
    • \"\"
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/330/testReport/

[89] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • one failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.test
    • repeated from local PC one more time by Mikołaj Morawski
    • during the run on Reltio there were multiple events and the test got blocked
    • Test was repeated manually and passed with success
    • \"\"
EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/485/testReport/

[88] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • one failed test -  com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldCreateHCPOneKeyRedirectToReltio
    • repeated from local PC one more time by Mikołaj Morawski
    • during the run on Reltio there were multiple events and the test got blocked
    • Test was repeated manually and passed with success
    • \"\"
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/395/testReport/

[73] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/332/testReport/

[74] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • one failed test -  com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdSearchOnLostMergeEntitiesTest.test
    • repeated from local PC one more time by Mikołaj Morawski
    • during the run on Reltio there were multiple events and the test got blocked
    • Test was repeated manually and passed with success
    • \"\"
Tests ready and approved:
  • approved by: Mikołaj Morawski
Release ready and approved:
  • approved by: Mikołaj Morawski


STAGE deployment details:


PROD deployment report:



" + }, + { + "title": "4.1.31", + "pageID": "401024639", + "pageLink": "/display/GMDM/4.1.31", + "content": "

Release report:

Release:4.1.31Release date:Fri Mar 01 12:21:23 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Kacper UrbańskiPlanned GO-LIVE:Mon Mar 04
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/

SUCCESS 


CHANGELOG:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/artifact/CHANGELOG.md/*view*/



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/testReport/

SUCCESS

TODO

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: Kacper Urbański


STAGE deployment details:


PROD deployment report:



" + }, + { + "title": "4.1.29", + "pageID": "401613066", + "pageLink": "/display/GMDM/4.1.29", + "content": "

Release report:

Release:4.1.29Release date:Wed Feb 28 10:32:26 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Kacper UrbańskiPlanned GO-LIVE:Thu Mar 07 (in 1 week)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/472/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/4c3f8a5fc460bb0cc20e55f736850f2416b6e9f3#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/472/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Wed Feb 28

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/407/testReport/

[84] SUCCESS

[0] FAILED

[1] REPEATED


\"\"

one failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.test

  • repeated from local PC one more time by Mikołaj Morawski
  • during the run on Reltio there were multiple events and the test got blocked
  • Test was repeated manually and passed with success
  • \"\"
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/350/testReport/

[66] SUCCESS

[18] FAILED

[3] REPEATED


\"\"

  • All [18] DCR tests failed due to RDM issue on Reltio side:
  • the same set of tests is successful on EMEA and AMER, so the logic is working correctly
  • RCA:

\"\"

Repeated tests:

  • repeated from local PC one more time by Mikołaj Morawski
  • during the run on Reltio there were multiple events and the test got blocked
  • Test was repeated manually and passed with success
  • \"\"
  • \"\"
EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/501/testReport/

[84] SUCCESS

[0] FAILED

[3] REPEATED


\"\"

Repeated tests:

  • repeated from local PC one more time by Mikołaj Morawski
  • during the run on Reltio there were multiple events and the test got blocked
  • Test was repeated manually and passed with success
  • \"\"
  • \"\"
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/411/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/349/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
  • approved by: Mikołaj Morawski
Release ready and approved:
  • approved by: Mikołaj Morawski


STAGE deployment details:


PROD deployment report:

PROD deployment date:TODO

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Rafał Kuć
ENV:LinkStatusDetails
AMERTODO

SUCCESS


APACTODO

SUCCESS


EMEA

TODO

SUCCESS


GBL(EX-US)

TODO

SUCCESS


GBLUS

TODO

SUCCESS




" + }, + { + "title": "4.3.0", + "pageID": "408556244", + "pageLink": "/display/GMDM/4.3.0", + "content": "

Release report:

Release:4.3.0Release date:Thu Mar 14 11:30:13 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Mikołaj MorawskiPlanned GO-LIVE:Tue Mar 21 (in 1 week)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/477/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/7d6036dfb79366537f79272b026ab24ec1ea1b62#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d30b468528cb98adc181b4e5d192c776328d70e8#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/73bdcaaa0997b156ce79728af6c90dfd0f3cfa1b#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/477/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Thu Mar 14

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/419/testReport/

[81] SUCCESS

[0] FAILED

[3] REPEATED


\"\"

  • DCR tests failed due to RDM issue on Reltio side:
  • the same set of tests is successful on EMEA and AMER, so the logic is working correctly
  • RCA: 
    expected:<A[UTO_REJECTED]> but was:<A[uto Rejected]>

    Repeated tests:

    • repeated from local PC one more time by Mikołaj Morawski
    • Test was repeated manually and passed with success

\"\"

\"\"

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/359/testReport/

[89] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/511/testReport/

[89] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/420/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/358/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
  • approved by: Mikołaj Morawski
Release ready and approved:
  • approved by: Mikołaj Morawski

STAGE deployment details:

PROD deployment report:



" + }, + { + "title": "4.6.0", + "pageID": "410815299", + "pageLink": "/display/GMDM/4.6.0", + "content": "

Release report:

Release:4.6.0Release date:Thu Mar 21 14:01:19 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Mikołaj MorawskiPlanned GO-LIVE:Tue Mar 28 (in 1 week)
StageLinkStatusComments (images 600px)
Build:

https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/484/

++ https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/485/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/9a3b6fe4bdf5573691cb37d5f994fe0f93b661fa#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/c9c3d307b27704264bf4d0b5fefc51bc02b78e79#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/99cadba8373475c979f12b0c2ae815908b72b582#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/484/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Thu Mar 21

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/422/testReport/

[83] SUCCESS

[1] FAILED

[0] REPEATED


\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/365/testReport/

[87] SUCCESS

[2] FAILED

[0] REPEATED


\"\"

  • DCR tests failed due to RDM issue on Reltio side:
  • the same set of tests is successful on AMER, so the logic is working correctly
  • RCA:
  • org.junit.ComparisonFailure: expected:<A[uto Rejected]> but was:<A[UTO_REJECTED]>
  • Ignoring and approved by Mikołaj Morawski because we are still waiting for RDM configuration on DEV
EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/517/testReport/

[87] SUCCESS

[2] FAILED

[0] REPEATED


\"\"

  • DCR tests failed due to RDM issue on Reltio side:
  • the same set of tests is successful on AMER, so the logic is working correctly
  • RCA:
  • org.junit.ComparisonFailure: expected:<A[uto Rejected]> but was:<A[UTO_REJECTED]>
  • Ignoring and approved by Mikołaj Morawski because we are still waiting for RDM configuration on DEV
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/426/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/363/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
  • approved by: Mikołaj Morawski
Release ready and approved:
  • approved by: Mikołaj Morawski

STAGE deployment details:

PROD deployment report:



" + }, + { + "title": "4.9.0", + "pageID": "415995497", + "pageLink": "/display/GMDM/4.9.0", + "content": "

Release report:

Release:4.9.0Release date:Thu Apr 10 10:01:19 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Rafał KućPlanned GO-LIVE:Tue Apr 11 (in 1 day)
StageLinkStatusComments (images 600px)
Build:

https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/491/

FAILED

The code has been released but the job failed because of an issue related to Docker cleanup
CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/491/testReport/

SUCCESS


Integration tests:

Execution date: Thu Apr 10

Executed by: Rafał Kuć

AMER

[0] SUCCESS

[0] FAILED

[0] REPEATED



APACSkipped due to development of IoD project

[0] SUCCESS

[0] FAILED

[0] REPEATED



EMEA

[0] SUCCESS

[0] FAILED

[0] REPEATED



GBL(EX-US)

[0] SUCCESS

[0] FAILED

[0] REPEATED



GBLUS

[0] SUCCESS

[0] FAILED

[0] REPEATED



Tests ready and approved:
  • approved by: Rafał Kuć
Release ready and approved:
  • approved by: Rafał Kuć

STAGE deployment details:

PROD deployment report:

PROD deployment date:Thu Apr 11 09:23:52 UTC 2024

Deployment approved:
  • approved by: Rafał Kuć
Deployed by:Rafał Kuć
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS




" + }, + { + "title": "4.10.0", + "pageID": "415212536", + "pageLink": "/display/GMDM/4.10.0", + "content": "

Release report:

Release:4.10.0Release date:Thu Apr 18 19:03:35 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Wed Apr 24 (in 1 week)
StageLinkStatusComments (images 600px)
Build:

https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/492/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/2939c70fcc57caa8040a895889c88af99a396665#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d110ea29c10875123e738d32eb166875db7a6948#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/492/testReport/

SUCCESS


Integration tests:

Execution date: Thu Apr 18

Executed by: Krzysztof Prawdzik

AMER

[85] SUCCESS

[0] FAILED

[0] REPEATED



APAC

[89] SUCCESS

[0] FAILED

[0] REPEATED




EMEA

[89] SUCCESS

[0] FAILED

[0] REPEATED



GBL(EX-US)

[72] SUCCESS

[0] FAILED

[0] REPEATED



GBLUS

[74] SUCCESS

[0] FAILED

[0] REPEATED



Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Mikołaj Morawski

STAGE deployment details:

PROD deployment report:

PROD deployment date:Thu Apr 25 ??:??:?? UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS




" + }, + { + "title": "4.11.0", + "pageID": "416001899", + "pageLink": "/display/GMDM/4.11.0", + "content": "

Release report:

Release:4.11.0Release date:Tue Apr 23 10:41:13 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Mon Apr 29 (in 1 week)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/20128ed85fda3830ebbb2874f7cd9cecd3031e18#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/2939c70fcc57caa8040a895889c88af99a396665#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d110ea29c10875123e738d32eb166875db7a6948#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Apr 23

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/447/testReport/

[84] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/382/testReport/

[93] SUCCESS

[0] FAILED

[8] REPEATED


\"\"

  • part of the China tests failed due to a timeout:
  • RCA: 
    Action timeout after 360000 milliseconds.
    Failed to receive message on endpoint: 'apac-dev-out-full-mde-cn'

  • Repeated tests:

    • repeated from local PC one more time by Krzysztof Prawdzik
    • Test was repeated manually and passed with success

\"\"

\"\"

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/539/testReport/

[89] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/445/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/386/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Mikołaj Morawski

STAGE deployment details:

PROD deployment report:



" + }, + { + "title": "4.11.1", + "pageID": "415221783", + "pageLink": "/display/GMDM/4.11.1", + "content": "

Release report:

Release:4.11.1Release date:Wed May 08 08:16:41 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Wed May 08 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/101/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/dbe984a2a9bb73ba141aad9386d741fd3fc8334d#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: 

STAGE deployment details:

PROD deployment report:



" + }, + { + "title": "4.12.0", + "pageID": "425492972", + "pageLink": "/display/GMDM/4.12.0", + "content": "

Release report:

Release:4.12.0Release date:Mon May 13 12:03:50 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu May 16
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/2/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/dc117aa31a81375f4572ca68a22491d02094e91e#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/2/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Mon May 13

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/463/testReport/

[81] SUCCESS

[3] FAILED

[0] REPEATED


\"\"

  • RCA: 
    Tenant [wn60kG248ziQSMW] is not registered.
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/398/testReport/

[99] SUCCESS

[0] FAILED

[2] REPEATED


\"\"

  • one of the China tests failed due to a timeout:
  • RCA: 
    Action timeout after 360000 milliseconds.
    Failed to receive message on endpoint: 'apac-dev-out-full-mde-cn'

  • Repeated tests:

    • repeated from local PC one more time by Krzysztof Prawdzik
    • Test was repeated manually and passed with success

\"\"

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/554/testReport/

[88] SUCCESS

[1] FAILED

[0] REPEATED


\"\"

  • RCA: 
    Tenant [wn60kG248ziQSMW] is not registered.
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/459/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/401/testReport/

[73] SUCCESS

[0] FAILED

[1] REPEATED


\"\"

  • one of the tests failed due to insufficient time to get the proper eventType:
  • RCA: 
    Validation failed: Values not equal for element '$.eventType', expected 'HCP_MERGED' but was 'ENTITY_POTENTIAL_LINK_FOUND'
  • Repeated test:

    • repeated from local PC one more time by Krzysztof Prawdzik
    • Test was repeated manually with increased number of retries and passed with success

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

PROD deployment report:



" + }, + { + "title": "4.12.1", + "pageID": "425136247", + "pageLink": "/display/GMDM/4.12.1", + "content": "

Release report:

Release:4.12.1Release date:Tue May 21 08:44:41 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue May 21 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/102/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0849434b3c67a63f36b13211cb19c23e4c77b25e#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/102/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

PROD deployment report:



" + }, + { + "title": "4.14.0", + "pageID": "430082856", + "pageLink": "/display/GMDM/4.14.0", + "content": "

Release report:

Release:4.14.0Release date:Wed May 29 15:14:52 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 6
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0d962b08c9a6caa4520868f8c33a577c85356a8f#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Wed May 29

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/473/

[83] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

Recent changes in the com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldInactivateHCP test have caused its instability.

  • repeated from local PC one more time by Krzysztof Prawdzik
  • Test was repeated manually and passed with success
  • a fix for this test is being prepared
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/413/

[99] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/565/

[88] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

Recent changes in the com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldInactivateHCP test have caused its instability.

  • repeated from local PC one more time by Krzysztof Prawdzik
  • Test was repeated manually and passed with success
  • a fix for this test is being prepared
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/469/

[72] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/411/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

Verification date

17:05 - 18:00 + 12:15



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong

SUCCESS

APAC NPROD

\"\"

EMEA NPROD

\"\"

\"\"

MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue

SUCCESS

\"\"

MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)

SUCCESS

Batch service

\"\"

Entity enricher

\"\"

Map channel

\"\"

MDM Auth

\"\"

MDM Reconciliation

\"\"

Raw data

\"\"

General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue

SUCCESS


Kubernetes / Vertical Pod Autoscaler (VPA)

Change in memory requirement before and after deployment → potential issue 

not verified


Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is too prone to project-specific loads

SUCCESS


Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)

\"(question)\"

APAC DEV

\"\"

General / kubernetes-persistent-volumes 

Storage trend over time 

SUCCESS


General / Alerts Statistics 

Increase after release → potential issue 

SUCCESS

APAC NPROD

\"\"

GBLUS NPROD

\"\"

GBL

\"\"

General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period

SUCCESS


PROD deployment report:

PROD deploy hypercare details:

Verification date

12:37



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong

SUCCESS


MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue

SUCCESS

\"\"

\"\"

MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)



General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue

SUCCESS


Kubernetes / Vertical Pod Autoscaler (VPA)

Change in memory requirement before and after deployment → potential issue 



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is too prone to project-specific loads

SUCCESS


Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)

SUCCESS

\"\"

\"\"


\"\"

General / kubernetes-persistent-volumes 

Storage trend over time 

SUCCESS

\"\"

General / Alerts Statistics 

Increase after release → potential issue 

SUCCESS

\"\"

General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period

SUCCESS


" + }, + { + "title": "4.12.2", + "pageID": "430083918", + "pageLink": "/display/GMDM/4.12.2", + "content": "

Release report:

Release:4.12.2Release date:Tue Jun 04 12:19:52 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jun 4 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/103/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0abf8b37a2ac6b27c093cba3f3288ebd2c9ebfc4#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/103/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue Jun 04 13:27:51 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/288/

SUCCESS 


PROD deployment report:



" + }, + { + "title": "4.14.1", + "pageID": "430087408", + "pageLink": "/display/GMDM/4.14.1", + "content": "

Release report:

Release:4.14.1Release date:Tue Jun 11 10:27:15 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jun 11 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/105/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/69c634998c0b05dd2ed74677bcb638c55213b940#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/105/testReport/

SUCCESS


Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

Verification date

 17:05 - 18:00 +  12:15



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong

e.g. SUCCESS


MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue



MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)


General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue



Kubernetes / Vertical Pod Autoscaler (VPA)

Change in memory requirement before and after deployment → potential issue 



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is too prone to project-specific loads



Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / kubernetes-persistent-volumes 

Storage trend over time 



General / Alerts Statistics 

Increase after release → potential issue 



General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period



PROD deployment report:

PROD deploy hypercare details:

Verification date

usually Deployment_date + 24-48h



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong

e.g. SUCCESS


MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue



MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)



General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue



Kubernetes / Vertical Pod Autoscaler (VPA)

Change in memory requirement before and after deployment → potential issue 



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is too prone to project-specific loads



Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / kubernetes-persistent-volumes 

Storage trend over time 



General / Alerts Statistics 

Increase after release → potential issue 



General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period



" + }, + { + "title": "4.15.0", + "pageID": "430350581", + "pageLink": "/display/GMDM/4.15.0", + "content": "

Release report:

Release:4.15.0Release date:Thu Jun 13 15:45:35 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 20 (in 1 week)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/8/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/6aab2f8a14ba7406e1e2de60a81a4af2d34d6094#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: 

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/485/

[84] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAC

[99] SUCCESS

[0] FAILED

[1] REPEATED


EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/575/

[89] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/482/

[72] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/422/

[74] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

Verification date

15:30 - 16:20



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors


SUCCESS


MDMHUB / MDMHUB KPIs

SUCCESS


MDMHUB / MDMHUB Components resource

SUCCESS

AMER-STAGE - HTTP 401 - known issue with authorization to OneKey (IB)

\"\"

General / Snowflake QC Trends


SUCCESS


Kubernetes / K8s Cluster Usage Statistics


SUCCESS


Kubernetes / Pod Monitoring


SUCCESS

APAC DEV - Damian's tests + Krzysztof published an old version for a moment which behaved strangely on APAC DEV only (selective router)

\"\"

General / kubernetes-persistent-volumes 


SUCCESS

General / Alerts Statistics 

Why are there duplicates with _ and -?

\"\"

\"(question)\"

EMEA-NPROD - does Marek know about this?

\"\"

APAC-STAGE - something wrong with monitoring? constant "1" independent of the timeframe?

\"\"

GBLUS-STAGE - Greg is working on it - note from karma

\"\"

General / SSL Certificates and Endpoint Availability


\"(question)\"

APAC-NPROD - real issue or monitoring false positives? 

\"\"

EMEA-NPROD

\"\"


PROD deployment report:

PROD deploy hypercare details:

Verification date

15:45 + review 11:00 (Bachanowicz, Mieczysław (Irek)

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(warning)\"

The DCR OneKey change was deployed without extensive testing on NPROD. Verified with Paweł - no major risk in leaving it unattended for the weekend.

GBLUS-PROD - mdm-manager, peak processing

\"\"

APAC-PROD - onekey. 

Did not happen since then. 

\"\"

APAC-PROD mdm-manager

\"\"

APAC-PROD DCR2 Service

\"\"

EMEA-PROD map-channel, strange errors

\"\"

\"(warning)\" GBL-PROD pforcerx channel - \n MR-9012\n -\n Getting issue details...\n STATUS\n

\"(tick)\"  Did not happen since then. 

\"\"

\"(warning)\" GBL-PROD - Created \n MR-9011\n -\n Getting issue details...\n STATUS\n

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\" 

\"(tick)\" APAC-PROD to Greg → IB: This is a recurring thing. Happens every week. 

\"\"

\"(tick)\"  EMEA-PROD → IB: This is a recurring thing. Happens every week. 

\"\"

\"(tick)\"  GBL-PROD → IB: This is a recurring thing. Happens every week. 

\"\"

MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"

EMEA-PROD - known issue during deployment

\"\"

General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(tick)\"

AMER-PROD zookeeper reelection

\"\"

GBLUS-PROD high processing, corresponds with manager issue

\"\"

EMEA-PROD deployment issue

\"\"

General / SSL Certificates and Endpoint Availability\"(tick)\"
" + }, + { + "title": "4.16.0", + "pageID": "438895667", + "pageLink": "/display/GMDM/4.16.0", + "content": "

Release report:

Release:4.16.0Release date:Mon Jun 24 15:13:56 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 27 (in 3 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/9/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0789f75320df48915b3eaa82d1669bfe2fdc0668#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/9/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Jun 25 17:00:03 UTC 2024

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/493/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/429/

[102] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/585/

[89] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/489/

[73] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/429/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

Verification date

  10:45 - 11:45

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\"

\"(tick)\"  AMER-STAGE - small issues with COMPANYGlobalCustomerID (COMPANY Customer Id: 02-100373164 does not exist in Reltio or is deactivated)

\"\"

\"(tick)\"  APAC-STAGE - AWS issue

\"\"

AWS does not show any problems with their S3 services 

\"\"

\"(tick)\"  EMEA-STAGE, manager

\"(tick)\"  GLB-STAGE, manager 

MDMHUB / MDMHUB KPIs

\"(tick)\"

MDMHUB / MDMHUB Components resource

\"(tick)\"


General / Snowflake QC Trends

\"(tick)\"


Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"


Kubernetes / Pod Monitoring

\"(tick)\"

\"(tick)\"  APAC-STAGE - Mon morning - HCONames memory reload, config update by Karol

\"\"

General / kubernetes-persistent-volumes 

\"(tick)\"


General / Alerts Statistics 

\"(warning)\"

\"(warning)\"   APAC-STAGE - Friday, 17:00, a lot of strange errors, corelates with AWS issue 

\"\"

\"(question)\"  AMER-STAGE + APAC-STAGE + GBLUS - stage- Grzesiek - wt/środa - Snowflake na Stageach?4


General / SSL Certificates and Endpoint Availability

\"(tick)\"

PROD deployment report:

PROD deploy hypercare details:

Verification date

16-17:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(warning)\" 

\"(warning)\"  AMER-PROD - mdm service 2 + OneKey - 2 examples of failed lookup codes transformation 

  •  Issue found for two DCR requests. Failed to send req to OK:  IB> Paweł - create ticket
  • 3bd7e9217a004b37a2c0cbd7afabda1f
  • 4d9e09c06b89494c950a759889cf12d0
    • low priority issue - better handling of lookups - this often appears in various environments (APAC-PROD): "Create dcr exception"
    • crash in the OneKey endpoint

log1.txt

\"\"

\"(tick)\" AMER-PROD - clean NPE → create ticket to clean up such "errors"

\"\"

\"(question)\"  GBLUS-PROD - single error, however huge

\"\"

\"(tick)\"   EMEA-PROD, map-channel, brak trace'a, kubernetes restarted component. 

\"\"

\"(tick)\"  EMEA-PROD, minor issue, for further investigation (Krzysiek) - low prio

\"\"

\"(warning)\"  \"(warning)\"  GBLUS-PROD, Know issue - \n MR-9011\n -\n Getting issue details...\n STATUS\n

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\" 
  • \"(warning)\"   Publishing latency ~1year -known issue, ticket to create (IB)

GBL-PROD

\"\"

MDMHUB / MDMHUB Components resource\"(tick)\" 
  • AMER-PROD, map channel, high CPU usage, to verify on Mondays

\"\"

General / Snowflake QC Trends

\"(tick)\" 

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\" 

Kubernetes / Pod Monitoring

\"(tick)\" 
General / kubernetes-persistent-volumes \"(tick)\" 
General / Alerts Statistics \"(tick)\" 

\"(tick)\" GBL-PROD - confirm with Damian that's not an issue

\"\"

General / SSL Certificates and Endpoint Availability\"(tick)\" 

\"(tick)\"  US-PROD 

  • IB > Ticket to create to check env selectors for us-prod

\"\"

" + }, + { + "title": "4.17.0", + "pageID": "438899752", + "pageLink": "/display/GMDM/4.17.0", + "content": "

Release report:

Release:4.17.0Release date:Fri Jun 28 15:13:34 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 4 (in 3 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/10/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/14f625d0b5d47629245ed7fd0d0112e7ad5675e8#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/10/testReport/

SUCCESS


Integration tests:

Execution date: 

Executed by: Krzysztof Prawdzik

AMER

[85] SUCCESS

[0] FAILED

[0] REPEATED


APAC

[102] SUCCESS

[0] FAILED

[0] REPEATED


EMEA

[89] SUCCESS

[0] FAILED

[1] REPEATED


GBL(EX-US)

[73] SUCCESS

[0] FAILED

[0] REPEATED


GBLUS

[75] SUCCESS

[0] FAILED

[0] REPEATED


Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS 


STAGE test phase details:

PROD deployment report:

PROD deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS


PROD deploy hypercare details:

" + }, + { + "title": "4.16.1", + "pageID": "438900696", + "pageLink": "/display/GMDM/4.16.1", + "content": "

Release report:

Release:4.16.1Release date:Tue Jul 02 10:02:19 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jul 02 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/108/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/60a14c07d0421cb25ee9d1e29aa376705d20686d



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/108/testReport/

SUCCESS


Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS 


STAGE test phase details:

PROD deployment report:

PROD deploy hypercare details:

" + }, + { + "title": "4.18.0", + "pageID": "438900984", + "pageLink": "/display/GMDM/4.18.0", + "content": "

Release report:

Release:4.18.0Release date:Tue Jul 02 14:57:49 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 04 (in 2 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/11/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/14f625d0b5d47629245ed7fd0d0112e7ad5675e8#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/60a14c07d0421cb25ee9d1e29aa376705d20686d

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/f90e4505509822513ae8c27a48a776e3acd67c8e



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/11/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Jul 02 15:59:32 UTC 2024

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/499/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAC

[94] SUCCESS

[1] FAILED

[7] REPEATED

\"\"

  • one of the China tests failed due to a timeout:
  • RCA: 
    Action timeout after 360000 milliseconds.
    Failed to receive message on endpoint: 'apac-dev-out-full-hcp-merge-cn'

  • Repeated tests:

    • several tests failed due to a recent change of DCR tracking statuses on APAC DEV on the Reltio side
    • repeated from local PC (with updated values) one more time by Krzysztof Prawdzik
    • Tests were repeated manually and passed with success
    • fix for these tests is being prepared

\"\"

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/591/

[89] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/495/

[73] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/435/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

PROD deployment report:

PROD deploy hypercare details:

Verification date

15:30 - 17:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\" 

\"(tick)\"  AMER-PROD - batch-service: data input issue, OneMed job - incorrect data ← Piotr

\"\"

\"(tick)\"   AMER-PROD, mdm-dcr2-service: know issue: "Can't convert data to Json string"

AMER-PROD, manager: Error processing request

\"\"

\"(tick)\"  AMER-PROD, onekey-dcr: know-issue

\"\"

\"(tick)\"  APAC-PROD, mdm-manager

\"\"

\"(question)\"  EMEA-PROD, MAPP channel

non-critical - needs to be verified "later"

\"\"

\"(question)\"  EMEA-PROD, manager,

minor - verify cause: "javax.ws.rs.ClientErrorException: HTTP 429 Too Many Requests"

\"\"

\"(tick)\"  GBL-PROD, manager - known issue

\"\"


MDMHUB / MDMHUB KPIs\"(tick)\" 

\"(question)\"  GBLUS-PROD - why it wasn't smoothly processed? 

\"\"

GBL-PROD

\"\"

MDMHUB / MDMHUB Components resource\"(tick)\" 

General / Snowflake QC Trends

\"(tick)\" 


Kubernetes / K8s Cluster Usage Statistics

\"(tick)\" 

Kubernetes / Pod Monitoring

\"(tick)\" 

\"(tick)\"  GBLUS-PROD 

\"\"

GBL-PROD, publisher, manager high usage

\"\"

\"(warning)\" \"(question)\"  EMEA-PROD, 7d

\"\"

\"(warning)\" \"(question)\"  EMEA-PROD

\"\"


General / kubernetes-persistent-volumes 

\"(tick)\" 


General / Alerts Statistics \"(tick)\" 

\"(warning)\"  AMER-PROD, empty COMPANYGlobalCustomerId

Ticket raised by COMPANY to the Reltio team - \n HSM-708\n -\n Getting issue details...\n STATUS\n + support.reltio.com/hc/requests/105633

\"\"

GBL-PROD, not an issue

\"\"

GBLUS-PROD, probably COMPANY manual merge/unmerge

\"\"

General / SSL Certificates and Endpoint Availability


\"(tick)\"  Schedule meeting with Marek how to deep dive to diagnose 

\n MR-9088\n -\n Getting issue details...\n STATUS\n

\n MR-9089\n -\n Getting issue details...\n STATUS\n

Kibana "Kube-events" indice contains logs from kubernets



\"(warning)\"  \"(question)\" EMEA-PROD - DCR, required further verification with Marek/Damian. 

\"\"

" + }, + { + "title": "4.18.1", + "pageID": "438317171", + "pageLink": "/display/GMDM/4.18.1", + "content": "

Release report:

Release:4.18.1Release date:Mon Jul 08 15:01:32 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jul 09 (in 1 day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/109/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/446610ec20f2837570cb75c518ff0dc03bd7528f#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/109/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A
EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

Tests ready and approved:
  • approved by: 
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

PROD deployment report:

PROD deploy hypercare details:

" + }, + { + "title": "4.19.0", + "pageID": "438317571", + "pageLink": "/display/GMDM/4.19.0", + "content": "

Release report:

Release:4.19.0Release date:Tue Jul 09 14:29:10 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 11 (in 2 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/12/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/106376c5e3a96725ae10c4eff57dc19157549d1c#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/12/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Jul 09 17:00:03 UTC 2024

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/504/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/444/

[98] SUCCESS

[0] FAILED

[4] REPEATED

\"\"

\"\"

\"\"

\"\"

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/597/

[90] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/500/

[72] SUCCESS

[1] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/440/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

Verification date

11:00 - 12:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\" 


MDMHUB / MDMHUB KPIs\"(tick)\" 
MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"
General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(question)\" 

\"(question)\" APAC-STAGE - known issue?

\"\"

\"(question)\"  APAC-STAGE, kong 503, kube job completion? pod crash looping pdk?

\"\"

General / SSL Certificates and Endpoint Availability

\"(tick)\" \"(question)\" 

Need to monitor the production deployment for these irregularities

AMER-NPROD

\"\"

\"\"

\"(tick)\"  \"(warning)\"  APAC-DEV, dcr, Klaudia: bean issue, strange, nothing corelated to recent changes in code. Error: "requestScopedExchange"

\"\"

\"(tick)\"  \"(question)\" EMEA-QA,  dcr, Klaudia checked logs, nothing unusual. Need to increase logs in blackbox exporter

\"\"

PROD deployment report:

PROD deploy hypercare details:

Verification date

13:30 - 14:30 + warning revalidation on 10:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\" 

\"(tick)\"  APAC-PROD, manager

MR-9097

MR-9098

\"\"

\"(tick)\"  GBL-PROD

  • We need to meet with Grzesiek and verify these issues

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\" 
MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"

\"(warning)\" GBL-PROD

\"\"

Verification on Monday - high memory usage

\"\"

General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(tick)\"

\"(question)\"  AMER-PROD 

  • disk space

\"\"


\"(tick)\"\"(question)\" AMER-PROD

  • Publisher broken events
  • Zookeeper - info from Marek in Karma that it's nothing to be afraid of
  • Quality gateway - confirmed with Piotr

\"\"


\"(tick)\" GBLUS-PROD

  • Publisher broken events
  • Snowflake

\"\"


\"(tick)\" \"(question)\" EMEA-PROD

  • High load - confirmed with Marek and Piotr
\"\"


\"(tick)\" GBL-PROD

  • High ETA - China reload (info in Karma)
\"\"


\"(tick)\" GBLUS-PROD

  • Quality gateway - Dominiq addressed it to Deloitte (info from Piotr)
  • Confirmed with Piotr
\"\"
General / SSL Certificates and Endpoint Availability\"(tick)\"
" + }, + { + "title": "4.21.0", + "pageID": "438910809", + "pageLink": "/display/GMDM/4.21.0", + "content": "

Release report:

Release: 4.21.0    Release date: Tue Jul 09 14:29:10 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by: Krzysztof Prawdzik    Planned GO-LIVE: Thu Jul 18 (in 2 days)
Stage | Link | Status | Comments (images 600px)
Build: https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/18/

SUCCESS 


CHANGELOG: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/ef6b59b63a3800a08e98c2e36e2853d45ed97395#CHANGELOG.md



Unit tests:

SUCCESS

\"\"

Integration tests:

Execution date: Sun Jul 14 17:00:05 UTC 2024

Executed by: Krzysztof Prawdzik

AMER: https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/510/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAC: https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/450/

[102] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

EMEA: https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/600/

[90] SUCCESS

[0] FAILED

[0] REPEATED

\"\"


GBL(EX-US): https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/505/

[72] SUCCESS

[1] FAILED

[0] REPEATED

\"\"

GBLUS: https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/443/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

Verification date

13:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\"

\"(tick)\" AMER-NPROD - know issue during deployment

\"\"

\"(tick)\" APAC-STAGE - dcr servce 2 

  • create ticket to change error 400 to warning

\"\"

\"(tick)\"

  • to verify if these publishing errors may cause some synchronization issues in SF

\"\"

\"(tick)\"

  • Callback - Java Heap Space? Memory issue. Caused by APAC-PROD to APAC-STAGE cloning

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\"

\"(tick)\"  APAC-STAGE - env cloning

\"\"

\"(question)\"  EMEA-STAGE, 1h+ long publishing times

\"\"

MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"
General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(tick)\"

\"(question)\"  EMEA-STAGE - high ETA

\"\"

this graph does not reflect this

\"\"

General / SSL Certificates and Endpoint Availability\"(tick)\"

\"(tick)\"  APAC-STAGE, cloning related

\"\"

\"(tick)\" EMEA/GBL - a lot of strange endpoint failuers

  • Marek/Damian - to verify

\"\"


PROD deployment report:

PROD deploy hypercare details:

" + }, + { + "title": "4.22.0", + "pageID": "438327818", + "pageLink": "/display/GMDM/4.22.0", + "content": "

Release report:

Release: 4.22.0    Release date: Tue Jul 23 16:32:08 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by: Krzysztof Prawdzik    Planned GO-LIVE: Thu Jul 25 (in 2 days)
Stage | Link | Status | Comments (images 600px)
Build: https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/19/

SUCCESS 


CHANGELOG: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/e366164c1adff5b1ccfd79dea28f068bc34a0ee2#CHANGELOG.md



Unit tests: https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/19/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Jul 23 17:24:15 UTC 2024

Executed by: Krzysztof Prawdzik

AMER: https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/517/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAC: https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/457/

[94] SUCCESS

[8] FAILED

[0] REPEATED

\"\"

EMEA: https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/608/

[90] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBL(EX-US): https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/510/

[72] SUCCESS

[1] FAILED

[0] REPEATED

\"\"

GBLUS: https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/449/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE test phase details:

Verification date

11:15 - 12:30

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\"

\"(tick)\" AMER-STAGE, EMEA-STAGE, known errors for OneKey DCR

\"(tick)\" EMEA-STAGE, mdmhub-mdm-manager, issues already reported earlier

\"(tick)\"  GBL-STAGE, something with batches (UpdateHCPBatchRestRoute) - probably wrong JSON - ticket to make it more pleasant 

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\"
  • Irek>ask Rafał - what does "Publishing latency" mean - total delay of our processing stack?
MDMHUB / MDMHUB Components resource\"(tick)\"

\"(tick)\" EMEA-STAGE, Batch service, more memory usage? → nothing to worry about

\"\"

\"(tick)\" GBLUS, api-router, more memory? → nothing to worry about

General / Snowflake QC Trends

\"(tick)\" 

Kubernetes / K8s Cluster Usage Statistics


EMEA-NPROD, higher CPU usage, storage usage increase

\"\"

Kubernetes / Pod Monitoring

\"(tick)\"

AMER-NPROD, something is happening → batch processing, Reltio caps events to be processed which we compl

\"\"

General / kubernetes-persistent-volumes \"(tick)\"

EMEA-NPROD, increasing storage usage → entity enricher working (15M events being processed)

  • need to be verified with Marek

\"\"

General / Alerts Statistics \"(tick)\"

\"(tick)\" APAC-NPROD,

  • \"(tick)\"  Target down, what does it mean? We don't have such alerts → glitch in the matrix
  • Publisher broken events - addressed in Karma by Will
  • \"(tick)\"  reconciliation_events_threshold_exceeded?
  • \"(tick)\"  customresource_status_condition → Related to Kafka migration
  • KubeJobFailed
  • pod_crashlooping_pdks - more than usual
  • zookeeper_fsync_time_too_long - waiting for more data

AMER-NPROD

  • dag_failed_nprod
  • pod_crashlooping_hub_nprod
  • pod_crashlooping_pdks

\"\"

EMEA-NPROD

  • dag_failed_nprod
  • \"(tick)\" customresource_status_condition - 
  • \"(tick)\" Piotr DCR testing API - kong3_http_503_status_nprod

\"\"

General / SSL Certificates and Endpoint Availability \"(tick)\" EMEA-DEV, dcr - Piotr testing

PROD deployment report:

PROD deploy hypercare details:

Verification date

15:30 - 16:40

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\"

AMER-PROD, Incorrect payload on Kafka, Piotr manually moved offset to fix this. 

\"\"

GBLUS-PROD, single error with ";" and ")" 

APAC-PROD, map channel:

  • Failure not recovered
  • Processing of message: KR-6687996c10e6767c9e1cab6f failed with error: Invalid format: "6/20/1970" is malformed at "/20/1970"
    • Piotr claims that this is DLQ queue probably with single problematic event. 

\"\"

EMEA-PROD, map-channel:

  • 400x Unexpected response: { "status": "ERROR", "status_code": 403, "error_message": "com.COMPANY.gcs.hcp.gateway.exception.RateLimitExceededException - TotalRequests Limit exceeded! (maxRequestsPerMinute=1200)" }
  • Unexpected response: { "status": "ERROR", "status_code": 404, "error_message": "Contact not found by contact_id=a0EF000000pI8bAMAS! (market=IE)" }\"\"
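The 403 RateLimitExceededException above indicates the client outpaced the gateway's maxRequestsPerMinute=1200 cap. A minimal client-side sliding-window throttle, sketched below as a hypothetical helper (not part of the HUB code base), would keep a caller under such a limit:

```python
import time
from collections import deque

class MinuteRateLimiter:
    """Allow at most `limit` calls per sliding `window` seconds (sketch)."""

    def __init__(self, limit: int = 1200, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # monotonic timestamps of recent calls

    def acquire(self) -> float:
        """Block until a call is allowed; return the time waited in seconds."""
        now = time.monotonic()
        # Drop timestamps that have left the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        wait = 0.0
        if len(self.calls) >= self.limit:
            # Sleep until the oldest call ages out of the window.
            wait = self.window - (now - self.calls[0])
            time.sleep(max(wait, 0.0))
            self.calls.popleft()
        self.calls.append(time.monotonic())
        return wait

limiter = MinuteRateLimiter(limit=1200)
limiter.acquire()  # returns 0.0 while under the cap
```

Pacing requests this way (or retrying with backoff on the 403) avoids the burst pattern that triggers the gateway error.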
MDMHUB / MDMHUB KPIs: Without refactoring this dashboard, no insights can be extracted. Skipping
MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

EMEA-PROD, Empty COMPANYGlobalCustomerID - such entities are deleted at Snowflake level → nothing gets populated downstream. 

\"\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"

APAC-PROD, suspicious memory usage? 

\"\"

EMEA-PROD, config deploy

\"\"

General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(tick)\"

AMER-PROD

  • \"(tick)\"  publisher_broken_events_prod
  • quality_gateway_auto_resolved_event
  • hub_callback_loop

GBLUS-PROD

  • \"(tick)\"  snowflake_last_entity_event_time_prod

EMEA-PROD

  • dag_failed_prod - exists for a long time, addressed in Karma
  • snowflake_generated_events_without_COMPANY_global_customer_ids_prod

APAC-PROD

  • \"(tick)\"  pod_crashlooping_pdks - long time error in karma


General / SSL Certificates and Endpoint Availability\"(tick)\"
" + }, + { + "title": "FAQ", + "pageID": "462236735", + "pageLink": "/display/GMDM/FAQ", + "content": "

Questions and answers about HUB topics.

" + }, + { + "title": "What is survivorship strategy in Reltio and where to find it?", + "pageID": "462236738", + "pageLink": "/pages/viewpage.action?pageId=462236738", + "content": "

Simple attributes on Reltio profiles (not nested ones) have an OV (operational value) flag showing whether the attribute value should be shown to the user.

Example:
\"\"
This HCO has two COMPANY Customer IDs (from different crosswalks) and the visible one won during calculation of survivorship strategy.
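Reading the OV flag programmatically can be sketched as below. The payload shape is illustrative only; real Reltio responses nest attribute values under "attributes" with per-value "ov" booleans, and the attribute name here is a stand-in:

```python
# Illustrative entity fragment: two values for one simple attribute,
# only one of which survived the survivorship calculation (ov=True).
entity = {
    "attributes": {
        "COMPANYCustomerID": [
            {"value": "100200", "ov": True},
            {"value": "100201", "ov": False},
        ]
    }
}

def ov_values(entity: dict, attr: str) -> list:
    """Return only the surviving (OV=true) values of a simple attribute."""
    return [
        v["value"]
        for v in entity.get("attributes", {}).get(attr, [])
        if v.get("ov")
    ]

print(ov_values(entity, "COMPANYCustomerID"))  # ['100200']
```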


The survivorship rules can be configured separately for each environment and attribute. Those are part of Reltio configuration and can be accessed here (authentication type is Bearer token):
{{RELTIO_URL}}/{{tenantID}}/configuration


Description of Reltio survivorship rules:
https://docs.reltio.com/en/model/consolidate-data/design-survivorship-rules/survivorship-rules
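Fetching that configuration endpoint can be sketched as follows; the base URL, tenant ID, and token are placeholders, and only the request construction is shown (sending it requires network access and a valid Bearer token):

```python
from urllib.request import Request

# Placeholders - substitute the real values for your environment.
RELTIO_URL = "https://dev.reltio.com/reltio/api"
TENANT_ID = "<tenantID>"

def config_request(token: str) -> Request:
    """Build the GET request for {RELTIO_URL}/{tenantID}/configuration."""
    return Request(
        f"{RELTIO_URL}/{TENANT_ID}/configuration",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

req = config_request("<access-token>")
# urllib.request.urlopen(req) would return the tenant configuration JSON,
# which contains the survivorship rule definitions per attribute.
```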

" + } +] \ No newline at end of file diff --git a/HUB_nohtml.txt b/HUB_nohtml.txt new file mode 100644 index 0000000..074bdd0 --- /dev/null +++ b/HUB_nohtml.txt @@ -0,0 +1,3458 @@ +[ + { + "title": "HUB Overview", + "pageID": "164470108", + "pageLink": "/display/GMDM/HUB+Overview", + "content": "MDM Integration services provide services for clients using MDM systems (Reltio or Nucleus 360) in following fields:As abstraction layer providing API for MDM data management.Delivering common processes that are hiding complexity of interaction with Reltio API.Enhancing Reltio functionality by data quality validating and through cleaning services.Extending data protection by limiting clients' access.Allowing to publish MDM data to multiple clients using event streaming and batch mode.MDM Integration Services consist of:Integration Gateway providing services for data handling in Reltio (storing and accessing entities directly).Publishing Hub being responsible for publishing OV profiles to consumers.The MDM HUB ecosystem is presented at the picture below.   " + }, + { + "title": "Modules", + "pageID": "164470022", + "pageLink": "/display/GMDM/Modules", + "content": "" + }, + { + "title": "Direct Channel", + "pageID": "164469882", + "pageLink": "/display/GMDM/Direct+Channel", + "content": "DescriptionDirect channel exposes unified REST API interface to update/search profiles in MDM systems. The diagram below shows the logical architecture of the Direct Channel module. Logical architectureComponentsComponentSubcomponentDescriptionAPI GatewayKong API Gateway components playing the role of proxAuthentication engineKong module providing client authentication servicesManager/Orchestratorjava microservice orchestrating API callsData Quality Enginequality service validating data sent to Reltio Authorization Engineauthorize client access to MDM resourcesMDM routing engineroute calls to MDM systemsTransaction Loggerregisters API calls in EFK service for tracing reasons. 
Reltio Adapterhandles communication with Reltio MDM systemNucleus Adapterhandle communication with Nucleus MDM systemHUB StoreMongoDB database plays the role of persistence store for MDM HUB logicAPI Routerrouting requests to regional MDM Hub servicesFlowsFlowDescriptionCreate/Update HCP/HCO/MCOCreate or Update HCP/HCO/MCO entitySearch EntitySearch entityGet EntityRead entityRead LOVRead LOVValidate HCPValidate HCP" + }, + { + "title": "Streaming channel", + "pageID": "164469812", + "pageLink": "/display/GMDM/Streaming+channel", + "content": "DescriptionStreaming channel distributes MDM profile updates through KAFKA topics in near real-time to consumers.  Reltio events generate on profile changes are sent via AWS SQS queue to MDM HUB.MDM HUB enriches events with profile data and dedupes them. During the process, callback service process data (for example: calculate ranks and hco names, clean unused topics) and updates profile in Reltio with the calculated values.   Publisher distributes events to target client topics based on the configured routing rules.MDM Datamart built-in Snowflake provides SQL access to up to date MDM data in both the object and the relational model. 
Logical architectureComponentsComponentDescriptionReltio subscriberConsume events from ReltioCallback serviceTrigger callback actions on incoming events for example calculated rankingsDirect ChannelOrchestrates Reltio updates triggered by callbacksHUB StoreKeeps MDM data historyReconciliation serviceReconcile missing eventsPublisherEvaluates routing rules and publishes data do downstream consumersSnowflake Data MartExposes MDM data in the relation modelKafka ConnectSends data to Snowflake from KafkaEntity enricherEnrich events with full data retrieved from ReltioFlowsFlowDescriptionReltio events streamingDistribute Reltio MDM data changes to downstream consumers in the streaming modeNucleus events streamingDistribute Nucleus MDM data changes to downstream consumers in the streaming modeSnowflake: Events publish flowDistribute Reltio MDM data changes to Snowflake DM" + }, + { + "title": "Java Batch Channel", + "pageID": "164469814", + "pageLink": "/display/GMDM/Java+Batch+Channel", + "content": "DescriptionJava Batch Channel is the set of services responsible to load file extract delivered by the external source to Reltio. The heart of the module is file loader service aka inc-batch-channel that maps flat model to Reltio model and orchestrates the load through asynchronous interface manage by Manager. 
Batch flows are managed by Apache Airflow scheduler.Logical architectureComponentsApache Airflow - batch flows scheduler and orcherstartor.File loader aka inc-batch-channel - maps files to Reltio model  and orchestrate profiles loads Manager/Orchestrator - java microservice orchestrating API calls FlowsIncremental batches - generic flow for loading source data from flat files into Reltio" + }, + { + "title": "ETL Batch Channel", + "pageID": "164469835", + "pageLink": "/display/GMDM/ETL+Batch+Channel", + "content": "DescriptionETL Batch channel exposes REST API  for ETL components like Informatica and manages a loading process in an asynchronous way.With its own cache based on Hub Store, it supports full loads providing a delta detection logic.Logical architectureComponentsBatch service - exposes REST API for ETL platforms to load batch data into Reltio and controls the loading process.Hub Store - a registry of batch loads and a cache to handle delta detection.Manager/Orchestrator - java microservice orchestrating API calls into Reltio and providing validation and data protection services. 
FlowsETL batch flow -  ageneric flow for loading source data with ETL tools like Informatica into Reltio" + }, + { + "title": "Environments", + "pageID": "164470172", + "pageLink": "/display/GMDM/Environments", + "content": "Reltio Export IPsEnvironmentIPsReltio Team commentEMEA NON-PRODEMEA PROD- ●●●●●●●●●●●●- ●●●●●●●●●●●●- ●●●●●●●●●●●●are available across all EMEA environmentsAPAC NON-PRODAPAC PROD- ●●●●●●●●●●●- ●●●●●●●●●●●●●●- ●●●●●●●●●●●●●are available across all APAC environmentsGBLUS NON-PRODGBLUS PROD- ●●●●●●●●●●●●●- ●●●●●●●●●●●- ●●●●●●●●●●●●● for the dev/test and 361 tenants, the IPs can be used by any of the environments.AMER NON-PRODAMER PRODThe AMER tenants use the same access points as the US" + }, + { + "title": "AMER", + "pageID": "196878948", + "pageLink": "/display/GMDM/AMER", + "content": "ContactsTypeContactCommentSupported MDMHUB environmentsDLDL-ADL-ATP-GLOBAL_MDM_RELTIO@COMPANY.comSupports Reltio instancesGBLUS - Reltio only" + }, + { + "title": "AMER Non PROD Cluster", + "pageID": "196878950", + "pageLink": "/display/GMDM/AMER+Non+PROD+Cluster", + "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-amer10.9.64.0/1810.9.0.0/18https://pdcs-som1d.COMPANY.comEKS over EC2us-east-1~60GB per node,6TBx2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesoutbound and inboundNon PROD - backend NamespaceComponentPod nameDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongamer-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsamer-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace amer-backendamer-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsamer-backendMongomongo-0Mongologsamer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace 
amer-backendamer-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendamer-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-dev-mdm-connect-cluster-connect-*amer-qa-mdm-connect-cluster-connect-*amer-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-amer-dev-*monitoring-jdbc-snowflake-exporter-amer-stage-*monitoring-jdbc-snowflake-exporter-amer-stage-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringamer-backendAkhqakhq-*Kafka UIlogsCertificates Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/namespaces/kong/config_files/certsThu, 13 Jan 2022 14:13:53 GMTTue, 10 Jan 2023 14:13:53 GMThttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/namespaces/amer-backend/secrets.yaml.encryptedJan 18 11:07:55 
2022 GMTJan 18 11:07:55 2024 GMTkafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094Setup and check connections:Snowflake - managing service accounts - EMEA Snowflake Access" + }, + { + "title": "AMER DEV Services", + "pageID": "196878953", + "pageLink": "/display/GMDM/AMER+DEV+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-devKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_AMER_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_DEV_DEVOPS_ROLEGrafana dashboardsResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_dev&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_dev&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_dev&var-interval=$__auto_interval_intervalKibana dashboardsResource 
NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-dev/swagger-ui/index.html?configUrl=/api-gw-spec-amer-dev/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-dev/swagger-ui/index.html?configUrl=/api-batch-spec-amer-dev/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* means part of name which changing)DescriptionLogsPod portsamer-devManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableamer-devBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-devApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsamer-devSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-devEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-devCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-devPublishermdmhub-event-publisher-*Events publisherlogsamer-devReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation serivcelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/dev_wJmSQ8GWI8Q6Fl1Reltiohttps://dev.reltio.com/ui/wJmSQ8GWI8Q6Fl1https://dev.reltio.com/reltio/api/wJmSQ8GWI8Q6Fl1Reltio Gateway 
Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/dyzB7cAPhATUslEInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.comMigrationThe amer dev is the first environment that was migrated from old ifrastructure (EC2 based) to a new one - Kubernetes based. The following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with amer dev has to use new endpoints.DescriptionOld endpointNew endpointManager APIhttps://amraelp00010074.COMPANY.com:8443/dev-exthttps://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/dev-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-devBatch Service APIhttps://amraelp00010074.COMPANY.com:8443/dev-batch-exthttps://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/dev-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-amer-devConsul APIhttps://amraelp00010074.COMPANY.com:8443/v1https://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1Kafkaamraelp00010074.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094" + }, + { + "title": "AMER QA Services", + "pageID": "228921283", + "pageLink": "/display/GMDM/AMER+QA+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-qaKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-qa/#/dashboardSnowflake MDM DataMartResource NameEndpointDB 
Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_AMER_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_QA_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_qa&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_qa&var-component=mdm-managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-qa/swagger-ui/index.html?configUrl=/api-gw-spec-amer-qa/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-qa/swagger-ui/index.html?configUrl=/api-batch-spec-amer-qa/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* means part of name which changing)DescriptionLogsPod portsamer-qaManagermdmhub-mdm-manager-*Gateway APIlogs8081 - 
application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableamer-qaBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-qaApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsamer-qaSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-qaEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-qaCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-qaPublishermdmhub-event-publisher-*Events publisherlogsamer-qaReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation serivcelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_805QOf1Xnm96SPjReltiohttps://test.reltio.com/ui/805QOf1Xnm96SPjhttps://test.reltio.com/reltio/api/805QOf1Xnm96SPjReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/805QOf1Xnm96SPjInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-qa:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "AMER STAGE Services", + "pageID": "228921315", + "pageLink": "/display/GMDM/AMER+STAGE+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-stagePing Federatehttps://stgfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-stageKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB 
UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-stage/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_AMER_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_stage&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_stage&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_stage&var-component=mdm-managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-stage/swagger-ui/index.html?configUrl=/api-gw-spec-amer-stage/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-stage/swagger-ui/index.html?configUrl=/api-batch-spec-amer-stage/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & 
LogsENV (namespace)ComponentPods (* means part of name that changes)DescriptionLogsPod portsamer-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableamer-stageBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsamer-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-stagePublishermdmhub-event-publisher-*Events publisherlogsamer-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_K7I3W3xjg98Dy30Reltiohttps://test.reltio.com/ui/K7I3W3xjg98Dy30https://test.reltio.com/reltio/api/K7I3W3xjg98Dy30Reltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/K7I3W3xjg98Dy30Internal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-stage:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com"
DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-devKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comDB NameCOMM_GBL_MDM_DMART_DEVDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_DEV_MDM_DMART_DEVOPS_ROLEGrafana dashboardsResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_dev&var-topic=All&var-node=1&var-instance=amraelp00007335.COMPANY.com:9102Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_dev&var-component=&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_dev&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-dev/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-dev/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-dev/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-dev/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow 
UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* means part of name that changes)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV(gblus_dev) - sw8BkTZqjzGr7hnResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/dev_sw8BkTZqjzGr7hnReltiohttps://dev.reltio.com/ui/sw8BkTZqjzGr7hnhttps://dev.reltio.com/reltio/api/sw8BkTZqjzGr7hnReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/%s/wq2MxMmfTUCYk9kInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.comMigrationThe following table presents old endpoints and their substitutes in the new environment. 
Anyone connecting to gblus dev must use the new endpoints.DescriptionOld endpointNew endpointManager APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-devBatch Service APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-devConsul APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1Kafkaamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094"
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_qa&var-component=mdm-managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-qa/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-qa/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-qa/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-qa/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* means part of name that changes)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback 
servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioQA(gblus_qa) - rEAXRHas2ovllvTSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_rEAXRHas2ovllvTReltiohttps://test.reltio.com/ui/rEAXRHas2ovllvThttps://test.reltio.com/reltio/api/rEAXRHas2ovllvTReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/%s/u78Dh9B87sk6I2vInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-qa:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.comMigrationThe following table presents old endpoints and their substitutes in the new environment. Anyone connecting to gblus qa must use the new endpoints.DescriptionOld endpointNew endpointManager APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qaBatch Service APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qaConsul APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1Kafkaamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094"
UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-stage/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_GBL_MDM_DMART_STGDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_STG_MDM_DMART_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_stage&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_stage&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_stage&var-component=mdm-managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-stage/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-stage/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-stage/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-stage/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & 
LogsENV (namespace)ComponentPods (* means part of name that changes)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioSTAGE(gblus_stage) - 48ElTIteZz05XwTSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_48ElTIteZz05XwTReltiohttps://test.reltio.com/ui/48ElTIteZz05XwThttps://test.reltio.com/reltio/api/48ElTIteZz05XwTReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/%s/5YqAPYqQnUtQJqpInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-stage:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com"
microservicesoutbound and inboundPROD - backend NamespaceComponentPod nameDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongamer-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsamer-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace amer-backendamer-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsamer-backendMongomongo-0Mongologsamer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backendamer-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendamer-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-prod-mdm-connect-cluster-connect-*amer-qa-mdm-connect-cluster-connect-*amer-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-amer-prod-*monitoring-jdbc-snowflake-exporter-amer-stage-*monitoring-jdbc-snowflake-exporter-amer-stage-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringamer-backendAkhqakhq-*Kafka 
UIlogsCertificates Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/namespaces/kong/config_files/certsThu, 13 Jan 2022 14:13:53 GMTTue, 10 Jan 2023 14:13:53 GMThttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/namespaces/amer-backend/secrets.yaml.encryptedJan 18 11:07:55 2022 GMTJan 18 11:07:55 2024 GMTkafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094Setup and check connections:Snowflake - managing service accounts - via http://btondemand.COMPANY.com/ - Get Support → Submit ticket → GBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMI" + }, + { + "title": "AMER PROD Services", + "pageID": "234698356", + "pageLink": "/display/GMDM/AMER+PROD+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-prodPing Federatehttps://prodfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-amer-prodKafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubprodamrasp101478HUB UIhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ui-amer-prod/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_AMER_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_PROD_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_prod&var-node=All&var-type=entitiesKafka Topics 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_prod&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_prodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_prod&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_prod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-prod/swagger-ui/index.html?configUrl=/api-gw-spec-amer-prod/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-prod/swagger-ui/index.html?configUrl=/api-batch-spec-amer-prod/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/Components & LogsENV (namespace)ComponentPods (* means part of name that changes)DescriptionLogsPod portsamer-prodManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableamer-prodBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-prodApi routermdmhub-mdm-api-router-*API gateway across multiple 
tenantslogsamer-prodSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-prodEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-prodCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-prodPublishermdmhub-event-publisher-*Events publisherlogsamer-prodReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioPROD - Ys7joaPjhr9DwBJResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/361_Ys7joaPjhr9DwBJReltiohttps://361.reltio.com/ui/Ys7joaPjhr9DwBJhttps://361.reltio.com/reltio/api/Ys7joaPjhr9DwBJReltio Gateway Usersvc-pfe-mdmhub-prodRDMhttps://rdm.reltio.com/lookups/LEo5zuzyWyG1xg4Internal ResourcesResource NameEndpointMongomongodb://mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-amer-prod-gbl-mdm-hub.COMPANY.com/"
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_prod&var-topic=All&var-node=1&var-instance=amraelp00007848.COMPANY.com:9102JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_prod&var-component=manager&var-node=1&var-instance=amraelp00007848.COMPANY.com:9104Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_prod&var-interval=$__auto_interval_intervalKibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)DocumentationManager API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-prod/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-prod/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-prod/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-prod/v3/api-docs/swagger-configAirflowAirflow UIhttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.comConsulConsul UIhttps://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/AKHQ - KafkaAKHQ Kafka UIhttps://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/Components & LogsENV (namespace)ComponentPods (* means part of name that changes)DescriptionLogsPod portsgblus-prodManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegblus-prodBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsgblus-prodSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-prodEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-prodCallbackmdmhub-callback-service-*Events processor, 
callback, and pre-callback servicelogsgblus-prodPublishermdmhub-event-publisher-*Events publisherlogsgblus-prodReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsgblus-prodOnekey DCRmdmhub-mdm-onekey-dcr-service-*Onekey DCR servicelogsClientsCDW (GBLUS)ETL - COMPANY (GBLUS)ENGAGE (GBLUS)KOL_ONEVIEW (GBLUS)GRV (GBLUS)GRACE (GBLUS)MDM SystemsReltioPROD- 9kL30u7lFoDHp6XSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/361_9kL30u7lFoDHp6XReltiohttps://361.reltio.com/ui/9kL30u7lFoDHp6Xhttps://361.reltio.com/reltio/api/9kL30u7lFoDHp6XReltio Gateway Usersvc-pfe-mdmhub-prodRDMhttps://rdm.reltio.com/%s/DABr7gxyKKkrxD3Internal ResourcesMongomongodb://mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-amer-prod-gbl-mdm-hub.COMPANY.com/"
amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1elasticsearch-es-default-2EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-devsbx-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backendamer-backendAkhqakhq-*Kafka UIlogsCertificates Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/kong/config_files/certs2023-02-22 15:16:042025-02-21 15:16:04https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/amer-backend/secrets.yaml.encrypted--kafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094" + }, + { + "title": "AMER DEVSBX Services", + "pageID": "310950591", + "pageLink": "/display/GMDM/AMER+DEVSBX+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-devsbxPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - 
DEVhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-gw-amer-devsbxKafkakafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/ui-amer-devsbx/#/dashboardGrafana dashboardsResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_devsbx&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_devsbx&var-topic=All&var-node=11Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_sandboxJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_devsbx&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_sandbox&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_devsbx&var-interval=$__auto_interval_intervalKibana dashboardsResource NameEndpointKibanahttps://kibana-amer-sandbox-gbl-mdm-hub.COMPANY.com (DEVSBX prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-devsbx/swagger-ui/index.html?configUrl=/api-gw-spec-amer-devsbx/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-devsbx/swagger-ui/index.html?configUrl=/api-batch-spec-amer-devsbx/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-sandbox-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-sandbox-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-sandbox-gbl-mdm-hub.COMPANY.comComponents & 
LogsENV (namespace)ComponentPods (* means part of name that changes)DescriptionLogsPod portsamer-devsbxManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableamer-devsbxBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-devsbxApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsamer-devsbxEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-devsbxCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-devsbxPublishermdmhub-event-publisher-*Events publisherlogsamer-devsbxReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-sandbox-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-sandbox-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-sandbox-gbl-mdm-hub.COMPANY.com"
boot actuator,8080 - serves swagger API definition - if availableapac-devBatch Servicemdmhub-batch-service-*Batch Servicelogsapac-devAPI routermdmhub-mdm-api-router-*API Routerlogsapac-devReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-devCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsapac-devCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsQA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableapac-qaBatch Servicemdmhub-batch-service-*Batch Servicelogsapac-qaAPI routermdmhub-mdm-api-router-*API Routerlogsapac-qaReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsapac-qaCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsSTAGE - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableapac-stageBatch Servicemdmhub-batch-service-*Batch Servicelogsapac-stageAPI routermdmhub-mdm-api-router-*API Routerlogsapac-stageReltio 
Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-stageCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-stageEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsapac-stageCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsNon PROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongapac-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsapac-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace apac-backendapac-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsapac-backendMongomongo-0Mongologsapac-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace apac-backendapac-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace apac-backendapac-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace apac-backendapac-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace apac-backendmonitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringapac-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace apac-backendapac-backendMongo exportermongo-exporter-*mongo metrics exporter---apac-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace apac-backendapac-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace apac-backendapac-backendSnowflake 
connectorapac-dev-mdm-connect-cluster-connect-*apac-qa-mdm-connect-cluster-connect-*apac-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace apac-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-apac-dev-*monitoring-jdbc-snowflake-exporter-apac-stage-*monitoring-jdbc-snowflake-exporter-apac-stage-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringapac-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/nprod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-apac-nprod-gbl-mdm-hub.COMPANY.comKafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/nprod/namespaces/apac-backend/secrets.yaml.encrypted2022/03/072024/03/06https://kafka-api-nprod-gbl-mdm-hub.COMPANY.com:9094" + }, + { + "title": "APAC DEV Services", + "pageID": "228933556", + "pageLink": "/display/GMDM/APAC+DEV+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-devKafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://globalmdmnprodaspasp202202171347HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_APAC_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_DEV_DEVOPS_ROLEResource NameEndpointHUB 
Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_dev&var-topic=All&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_dev&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_dev&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprodResource NameEndpointKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-dev/swagger-ui/index.html?configUrl=/api-gw-spec-apac-dev/v3/api-docs/swagger-configBatch Service API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-dev/swagger-ui/index.html?configUrl=/api-batch-spec-apac-dev/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-apac-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.comClientsMAPP (EMEA, AMER, APAC)GRACEMedicEASIEngageETL MDM 
SystemsReltio DEV - 2NBAwv1z2AvlkgSResource NameEndpointSQS queue namehttps://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_2NBAwv1z2AvlkgSReltiohttps://mpe-02.reltio.com/ui/2NBAwv1z2AvlkgShttps://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgSReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/GltqYa2x8xzSnB8Internal ResourcesResource NameEndpointMongomongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "APAC QA Services", + "pageID": "234693067", + "pageLink": "/display/GMDM/APAC+QA+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-qaKafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://globalmdmnprodaspasp202202171347HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-qa/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_APAC_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_QA_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_qa&var-topic=All&var-node=1JMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_qa&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_qa&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprodResource NameEndpointKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-qa/swagger-ui/index.html?configUrl=/api-gw-spec-apac-qa/v3/api-docs/swagger-configBatch Service API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-qa/swagger-ui/index.html?configUrl=/api-batch-spec-apac-qa/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-apac-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.comClientsMAPP (EMEA, AMER, APAC)GRACEMedicEASIEngageETL MDM SystemsReltio QA - xs4oRCXpCKewNDKResource NameEndpointSQS queue namehttps://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_xs4oRCXpCKewNDKReltiohttps://mpe-02.reltio.com/ui/xs4oRCXpCKewNDKhttps://mpe-02.reltio.com/reltio/api/xs4oRCXpCKewNDKReltio Gateway 
Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/jemrjLkPUhOsPMaInternal ResourcesResource NameEndpointMongomongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "APAC STAGE Services", + "pageID": "234693073", + "pageLink": "/display/GMDM/APAC+STAGE+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-stagePing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-stageKafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://globalmdmnprodaspasp202202171347HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-stage/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_APAC_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_stage&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_stage&var-topic=All&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_stage&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_stage&var-interval=$__auto_interval_intervalKube 
Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprodResource NameEndpointKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-stage/swagger-ui/index.html?configUrl=/api-gw-spec-apac-stage/v3/api-docs/swagger-configBatch Service API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-stage/swagger-ui/index.html?configUrl=/api-batch-spec-apac-stage/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-apac-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.comClientsMAPP (EMEA, AMER, APAC)GRACEMedicEASIEngageETL MDM SystemsReltio STAGE - Y4StMNK3b0AGDf6Resource NameEndpointSQS queue namehttps://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_Y4StMNK3b0AGDf6Reltiohttps://mpe-02.reltio.com/ui/Y4StMNK3b0AGDf6https://mpe-02.reltio.com/reltio/api/Y4StMNK3b0AGDf6Reltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/NYa4AETF73napDaInternal ResourcesResource NameEndpointMongomongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "APAC PROD Cluster", + "pageID": "234712170", + "pageLink": 
"/display/GMDM/APAC+PROD+Cluster", + "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-prod-apac●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●https://pdcs-apa1p.COMPANY.comEKS over EC2ap-southeast-1~60GB per node,6TBx2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesinbound/outboundComponents & LogsPROD - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-prodManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableapac-prodBatch Servicemdmhub-batch-service-*Batch Servicelogsapac-prodAPI routermdmhub-mdm-api-router-*API Routerlogsapac-prodReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-prodEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-prodCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-prodEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-prodReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsapac-prodCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsPROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongapac-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsapac-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace apac-backendapac-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsapac-backendMongomongo-0Mongologsapac-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace apac-backendapac-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace 
apac-backendapac-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace apac-backendapac-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace apac-backendmonitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringapac-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace apac-backendapac-backendMongo exportermongo-exporter-*mongo metrics exporter---apac-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace apac-backendapac-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace apac-backendapac-backendSnowflake connectorapac-prod-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace apac-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-apac-prod-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringapac-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/prod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-apac-prod-gbl-mdm-hub.COMPANY.comKafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/prod/namespaces/apac-backend/secrets.yaml.encrypted2022/03/072024/03/06https://kafka-api-prod-gbl-mdm-hub.COMPANY.com:9094" + }, + { + "title": "APAC PROD Services", + "pageID": "234712172", + "pageLink": "/display/GMDM/APAC+PROD+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - PRODhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-prodPing 
Federatehttps://prodfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - PRODhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-gw-apac-prodKafkakafka-apac-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://globalmdmprodaspasp202202171415HUB UIhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/ui-apac-prod/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlemeaprod01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_APAC_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_PROD_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=prod_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_prod&var-topic=All&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_prod&var-component=mdm_managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_prod&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-prod-apac&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_prod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_prodResource NameEndpointKibanahttps://kibana-apac-prod-gbl-mdm-hub.COMPANY.com (PROD prefixed dashboards)DocumentationResource NameEndpointManager API 
documentationhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-prod/swagger-ui/index.html?configUrl=/api-gw-spec-apac-prod/v3/api-docs/swagger-configBatch Service API documentationhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-prod/swagger-ui/index.html?configUrl=/api-batch-spec-apac-prod/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-apac-prod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-apac-prod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-apac-prod-gbl-mdm-hub.COMPANY.comClientsMAPP (EMEA, AMER, APAC)GRACEMedicEASIEngageETL MDM SystemsReltio PROD - sew6PfkTtSZhLdWResource NameEndpointSQS queue namehttps://sqs.ap-southeast-1.amazonaws.com/930358522410/ap-360_sew6PfkTtSZhLdWReltiohttps://ap-360.reltio.com/ui/sew6PfkTtSZhLdWhttps://ap-360.reltio.com/reltio/api/sew6PfkTtSZhLdWReltio Gateway Usersvc-pfe-mdmhub-prodRDMhttps://rdm.reltio.com/lookups/ARTA9lOg3dbvDqkInternal ResourcesResource NameEndpointMongomongodb://mongo-apac-prod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-apac-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-apac-prod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-apac-prod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "EMEA", + "pageID": "181022903", + "pageLink": "/display/GMDM/EMEA", + "content": "" + }, + { + "title": "EMEA External proxy", + "pageID": "308256760", + "pageLink": "/display/GMDM/EMEA+External+proxy", + "content": "The page describes the Kong external proxy servers 
deployed in a DLP (Double Lollipop) AWS account, used by clients outside of the COMPANY network, to access MDM Hub.Kong proxy instancesEnvironmentConsole addressInstanceSSH accessresource typeAWS regionAWS Account IDComponentsNon PRODhttp://awsprodv2.COMPANY.com/and use the role:WBS-EUW1-GBICC-ALLENV-RO-SSOi-08d4b21c314a98700 (EUW1Z2DL115)ssh ec2-user@euw1z2dl115.COMPANY.comEC2eu-west-1432817204314KongPRODi-091aa7f1fe1ede714 (EUW1Z2DL113)ssh ec2-user@euw1z2dl113.COMPANY.comi-05c4532bf7b8d7511 (EUW1Z2DL114)ssh ec2-user@euw1z2dl114.COMPANY.com External Hub EndpointsEnvironmentServiceEndpointInbound security group configurationNon PRODAPIhttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/MDMHub-kafka-and-api-proxy-external-nprod-sgKafkakafka-b1-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095kafka-b2-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095kafka-b3-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095PRODAPIhttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/MDMHub-kafka-and-api-proxy-external-prod-sg - due to the limit of 60 rules per SG, add new ones to:MDMHub-kafka-and-api-proxy-external-prod-sg-2Kafkakafka-b1-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095kafka-b2-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095kafka-b3-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095ClientsEnvironmentClientsNon PRODFind all details in the Security GroupMDMHub-kafka-and-api-proxy-external-nprod-sgPRODFind all details in the Security GroupMDMHub-kafka-and-api-proxy-external-prod-sgAnsible configurationResourceAddressInstall Kong proxyhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_kong.ymlInstall cadvisorhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_cadvisor.ymlNon PROD inventoryhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/proxy_nprodPROD 
inventoryhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/proxy_prodUseful SOPsHow to access AWS ConsoleHow to restart the EC2 instanceHow to login to hosts with SSHNo downtime Kong restart/upgrade" + }, + { + "title": "EMEA Non PROD Cluster", + "pageID": "181022904", + "pageLink": "/display/GMDM/EMEA+Non+PROD+Cluster", + "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-emea10.90.96.0/2310.90.98.0/23https://pdcs-ema1p.COMPANY.com/EKS over EC2eu-west-1~100GB per node,7.3Ti x2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesinbound/outboundComponents & LogsDEV - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-devManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableemea-devBatch Servicemdmhub-batch-service-*Batch Servicelogsemea-devAPI routermdmhub-mdm-api-router-*API Routerlogsemea-devReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsemea-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-devCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsQA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableemea-qaBatch Servicemdmhub-batch-service-*Batch Servicelogsemea-qaAPI routermdmhub-mdm-api-router-*API 
Routerlogsemea-qaReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsemea-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsSTAGE - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableemea-stageBatch Servicemdmhub-batch-service-*Batch Servicelogsemea-stageAPI routermdmhub-mdm-api-router-*API Routerlogsemea-stageReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsemea-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-stageCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-stageEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsGBL DEV - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsgbl-devManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegbl-devBatch Servicemdmhub-batch-service-*Batch Servicelogsgbl-devReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsgbl-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsgbl-devCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsgbl-devDCR 
Servicemdmhub-mdm-dcr-service-*DCR Servicelogsgbl-devMAP Channel mdmhub-mdm-map-channel-*MAP Channellogsgbl-devPforceRX Channelmdm-pforcerx-channel-*PforceRX ChannellogsGBL QA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsgbl-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegbl-qaBatch Servicemdmhub-batch-service-*Batch Servicelogsgbl-qaReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsgbl-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsgbl-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsgbl-qaDCR Servicemdmhub-mdm-dcr-service-*DCR Servicelogsgbl-qaMAP Channel mdmhub-mdm-map-channel-*MAP Channellogsgbl-qaPforceRX Channelmdm-pforcerx-channel-*PforceRX ChannellogsGBL STAGE - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsgbl-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegbl-stageBatch Servicemdmhub-batch-service-*Batch Servicelogsgbl-stageReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsgbl-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsgbl-stageCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-stageEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsgbl-stageDCR Servicemdmhub-mdm-dcr-service-*DCR Servicelogsgbl-stageMAP Channel mdmhub-mdm-map-channel-*MAP 
Channellogsgbl-stagePforceRX Channelmdm-pforcerx-channel-*PforceRX ChannellogsNon PROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongemea-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsemea-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace emea-backendemea-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsemea-backendMongomongo-0Mongologsemea-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace emea-backendemea-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace emea-backendemea-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace emea-backendemea-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace emea-backendmonitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringemea-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace emea-backendemea-backendMongo exportermongo-exporter-*mongo metrics exporter---emea-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace emea-backendemea-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace emea-backendemea-backendSnowflake connectoremea-dev-mdm-connect-cluster-connect-*emea-qa-mdm-connect-cluster-connect-*emea-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace emea-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-emea-dev-*monitoring-jdbc-snowflake-exporter-emea-stage-*monitoring-jdbc-snowflake-exporter-emea-stage-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace 
monitoringemea-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-emea-nprod-gbl-mdm-hub.COMPANY.comKafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/namespaces/emea-backend2022/03/072024/03/06kafka-emea-nprod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "EMEA DEV Services", + "pageID": "181022906", + "pageLink": "/display/GMDM/EMEA+DEV+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-devKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub/emea/devHUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EMEA_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_DEVOPS_DEV_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_dev&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=JMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_dev&var-component=mdm_manager&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/xLgt8oTik/portworx-cluster-monitoring?orgId=1&var-cluster=atp-mdmhub-nprod-emea&var-node=AllResource NameEndpointKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/ (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-dev/swagger-ui/index.html?configUrl=/api-gw-spec-emea-dev/v3/api-docs/swagger-configBatch Service API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-dev/swagger-ui/index.html?configUrl=/api-batch-spec-emea-dev/v3/api-docs/swagger-configDCR Service 2 API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-dev/swagger-ui/index.html?configUrl=/api-dcr-spec-emea-dev/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/ConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/ClientsETL - COMPANY (GBLUS)MDM 
SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue namehttps://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_wn60kG248ziQSMWReltiohttps://mpe-01.reltio.com/ui/wn60kG248ziQSMWhttps://mpe-01.reltio.com/reltio/api/wn60kG248ziQSMWReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/rQHwiWkdYGZRTNqInternal ResourcesResource NameEndpointMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/" + }, + { + "title": "EMEA QA Services", + "pageID": "192383454", + "pageLink": "/display/GMDM/EMEA+QA+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-qaKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub/emea/qaHUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-qa/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EMEA_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_QA_DEVOPS_ROLEGrafana dashboardsResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_qa&var-topic=All&var-node=1&var-instance=euw1z2dl112.COMPANY.com:9102Host 
Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_nprod&var-job=node-exporter&var-node=10.90.129.220&var-port=9100Pod monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&var-env=emea_nprod&var-namespace=AllJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_qa&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalKibana dashboardsResource NameEndpointKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (QA prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-qa/swagger-ui/index.htmlBatch Service API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-qa/swagger-ui/index.htmlAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/login/?next=https%3A%2F%2Fairflow-emea-nprod-gbl-mdm-hub.COMPANY.com%2FhomeConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/loginClientsETL - COMPANY (GBLUS)MDM SystemsReltioQA - vke5zyYwTifyeJSResource NameEndpointSQS queue namehttps://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_vke5zyYwTifyeJSReltiohttps://mpe-01.reltio.com/ui/vke5zyYwTifyeJShttps://mpe-01.reltio.com/reltio/api/vke5zyYwTifyeJSReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/jIqfd8krU6ua5kRInternal ResourcesResource 
NameEndpointMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/homeElasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/" + }, + { + "title": "EMEA STAGE Services", + "pageID": "192383457", + "pageLink": "/display/GMDM/EMEA+STAGE+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-stagePing Federatehttps://stgfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-stageKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub/emea/stageHUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-stage/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EMEA_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_stage&var-component=mdm_manager&var-component_publisher=event_publisher&var-component_subscriber=reltio_subscriber&var-instance=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_stage&var-kube_env=amer_nprod&var-topic=All&var-instance=All&var-node=Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_nprod&var-job=node-exporter&var-node=10.90.129.220&var-port=9100Pod monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&var-env=emea_nprod&var-namespace=AllJMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_stage&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (STAGE prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-stage/swagger-ui/index.html?configUrl=/api-gw-spec-emea-stage/v3/api-docs/swagger-configBatch Service API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-stage/swagger-ui/index.html?configUrl=/api-batch-spec-emea-stage/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/login/?next=https%3A%2F%2Fairflow-emea-nprod-gbl-mdm-hub.COMPANY.com%2FhomeConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/loginClientsETL - COMPANY (GBLUS)MDM SystemsReltioSTAGE - Dzueqzlld107BVWResource NameEndpointSQS queue namehttps://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_Dzueqzlld107BVWReltiohttps://mpe-01.reltio.com/ui/Dzueqzlld107BVWhttps://mpe-01.reltio.com/reltio/api/Dzueqzlld107BVWReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/TBxXCy2Z6LZ8nbnInternal ResourcesResource NameEndpointMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL 
SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/homeElasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/" + }, + { + "title": "GBL DEV Services", + "pageID": "250130206", + "pageLink": "/display/GMDM/GBL+DEV+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-devKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)HUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EU_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_DEV_MDM_DMART_DEVOPS_ROLEMonitoringResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_dev&var-topic=All&var-node=1&var-instance=10.192.70.189:9102Pod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10sKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=PrometheusJMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_dev&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalLogsResource NameEndpointKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-dev/swagger-ui/index.htmlAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/ConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/ClientsChinaMAPPKOL_ONEVIEWGRVGANTGRACEMedicPTRSOneMedEngageMDM SystemsReltio GBL DEV - FLy4mo0XAh0YEbNResource NameEndpointSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_FLy4mo0XAh0YEbNReltiohttps://eu-dev.reltio.com/ui/FLy4mo0XAh0YEbNhttps://eu-dev.reltio.com/reltio/api/FLy4mo0XAh0YEbNReltio Gateway UserIntegration_Gateway_UserRDMhttps://rdm.reltio.com/%s/WUBsSEwz3SU3idO/Internal ResourcesResource NameEndpointMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/Elasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "GBL QA Services", + "pageID": "250130235", + "pageLink": "/display/GMDM/GBL+QA+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIGateway API OAuth2 External - 
DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-qaKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)HUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-qa/#/dashboardSnowflake MDM DataMartDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EU_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_QA_MDM_DMART_DEVOPS_ROLEMonitoringHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_qa&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=Pod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=AllKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=PrometheusJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_qa&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=gbl_dev&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalLogsKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home(QA prefixed dashboards)DocumentationManager API 
documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-qa/swagger-ui/index.htmlAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/ConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/ClientsChinaMAPPKOL_ONEVIEWGRVGANTGRACEMedicPTRSOneMedEngageMDM SystemsReltio GBL MAPP - AwFwKWinxbarC0ZSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_AwFwKWinxbarC0ZReltiohttps://mpe-01.reltio.com/ui/AwFwKWinxbarC0Z/https://mpe-01.reltio.com/reltio/api/AwFwKWinxbarC0Z/Reltio Gateway UserIntegration_Gateway_UserRDMhttps://rdm.reltio.com/%s/WUBsSEwz3SU3idO/Internal ResourcesMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-emea-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/Elasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "GBL STAGE Services", + "pageID": "250130297", + "pageLink": "/display/GMDM/GBL+STAGE+Services", + "content": "HUB EndpointsAPI & Kafka & S3Gateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-stagePing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-stageKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)HUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-stage/#/dashboardSnowflake MDM DataMartDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EU_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_STG_MDM_DMART_DEVOPS_ROLEMonitoringHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_stage&var-node=All&var-type=entitiesKafka 
Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_stage&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=Pod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=AllKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=PrometheusJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_stage&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=gbl_stage&var-instance=&var-node_instance=&var-interval=$__auto_interval_intervalLogsKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home(STAGE prefixed dashboards)DocumentationManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-stage/swagger-ui/index.htmlAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/ConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/ClientsChinaMAPPKOL_ONEVIEWGRVGANTGRACEMedicPTRSOneMedEngageMDM SystemsReltio GBL STAGE - FW4YTaNQTJEcN2gSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_FW4YTaNQTJEcN2gReltiohttps://eu-dev.reltio.com/ui/FW4YTaNQTJEcN2g/https://eu-dev.reltio.com/reltio/api/FW4YTaNQTJEcN2g/Reltio Gateway UserIntegration_Gateway_UserRDMhttps://rdm.reltio.com/%s/WUBsSEwz3SU3idO/Internal 
ResourcesMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/Elasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com" + }, + { + "title": "EMEA PROD Cluster", + "pageID": "196881569", + "pageLink": "/display/GMDM/EMEA+PROD+Cluster", + "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-emea10.90.96.0/2310.90.98.0/23https://pdcs-ema1p.COMPANY.com/EKS over EC2eu-west-1~100GBper node,7.3Ti x2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesinbound/outboundComponents & LogsPROD - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-prodManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableemea-prodBatch Servicemdmhub-batch-service-*Batch Servicelogsemea-prodAPI routermdmhub-mdm-api-router-*API Routerlogsemea-prodReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsemea-prodEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-prodCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-prodEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-prodReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsPROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongemea-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsemea-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace emea-backendemea-backendZookeeper 
mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsemea-backendMongomongo-0mongo-1mongo-2Mongologsemea-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace emea-backendemea-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace emea-backendemea-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1elasticsearch-es-default-2EFK - elasticsearchkubectl logs {{pod name}} --namespace emea-backendemea-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace emea-backendmonitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringemea-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace emea-backendemea-backendMongo exportermongo-exporter-*mongo metrics exporter---emea-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace emea-backendemea-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace emea-backendemea-backendSnowflake connectoremea-prod-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace emea-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-emea-prod-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringemea-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-emea-prod-gbl-mdm-hub.COMPANY.com/Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/namespaces/emea-backend2022/03/072024/03/06https://kafka-emea-prod-gbl-mdm-hub.COMPANY.com/" + }, + { + "title": 
"EMEA PROD Services", + "pageID": "196881867", + "pageLink": "/display/GMDM/EMEA+PROD+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prodPing Federatehttps://prodfederate.COMPANY.com/as/token.oauth2Gateway API KEY auth - PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-emea-prodKafkakafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-prod-mdmhub/emea/prodHUB UIhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ui-emea-prod/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/DB NameCOMM_EMEA_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLEMonitoringResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_prod&var-node=All&var-type=entitiesHUB Batch Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/gz0X6rkMk/hub-batch-performance?orgId=1&refresh=10s&var-env=emea_prod&var-node=All&var-name=AllKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_prod&var-topic=All&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9102Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_prod&var-job=node_exporter&var-node=euw1z2pl113.COMPANY.com&var-port=9100Docker monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=1JMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_prod&var-component=manager&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9104Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_prod&var-instance=euw1z2pl115.COMPANY.com:9120&var-node_instance=euw1z2pl115.COMPANY.com&var-interval=$__auto_interval_intervalLogsResource NameEndpointKibanahttps://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-configBatch Service API documentationhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-prod/swagger-ui/index.html?configUrl=/api-batch-spec-emea-prod/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/homeConsulResource NameEndpointConsul UIhttps://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/servicesAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/loginClientsETL - COMPANY (GBLUS)MDM SystemsReltioPROD_EMEA - Xy67R0nDA10RUV6Resource NameEndpointSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/eu-360_Xy67R0nDA10RUV6Reltiohttps://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6 - APIhttps://eu-360.reltio.com/ui/Xy67R0nDA10RUV6/# - UIReltio Gateway Usersvc-pfe-mdmhub-prodRDMhttps://rdm.reltio.com/%s/uJG2vepGEXEHmrI/Internal ResourcesResource 
NameEndpointMongohttps://mongo-emea-prod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b2-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b3-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/Kibanahttps://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-emea-prod-gbl-mdm-hub.COMPANY.com/" + }, + { + "title": "GBL PROD Services", + "pageID": "284792395", + "pageLink": "/display/GMDM/GBL+PROD+Services", + "content": "HUB EndpointsAPI & Kafka & S3 & UIGateway API OAuth2 External - PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gbl-prodPing Federatehttps://prodfederate.COMPANY.com/as/token.oauth2Gateway API KEY auth - PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gbl-prodKafkakafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-baiaes-eu-w1-project/mdmHUB UIhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ui-gbl-prod/#/dashboardSnowflake MDM DataMartDB Urlhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/DB NameCOMM_EU_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_GBL_MDM_DMART_PROD_DEVOPS_ROLEMonitoringHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_prod&var-component=mdm_manager&var-component_publisher=event_publisher&var-component_subscriber=reltio_subscriber&var-instance=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_prod&var-kube_env=emea_prod&var-topic=All&var-instance=All&var-node=Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=&var-instance=10.90.130.122Pods monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=&var-instance=10.90.130.122JMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_prod&var-component=manager&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9104Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_prod&var-instance=10.90.142.48:9216&var-node_instance=euw1z2pl115.COMPANY.com&var-interval=$__auto_interval_intervalLogsKibanahttps://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)DocumentationManager API documentationhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-configAirflowAirflow UIhttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/homeConsulConsul UIhttps://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/servicesAKHQ - KafkaAKHQ Kafka UIhttps://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/loginClientsETL - COMPANY (GBLUS)MDM SystemsReltioPROD_EMEA - FW2ZTF8K3JpdfFlSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/euprod-01_FW2ZTF8K3JpdfFlReltiohttps://eu-360.reltio.com/reltio/api/FW2ZTF8K3JpdfFl - APIhttps://eu-360.reltio.com/ui/FW2ZTF8K3JpdfFl/ - UIReltio Gateway Userpfe_mdm_apiRDMhttps://rdm.reltio.com/%s/ImsRdmCOMPANY/Internal ResourcesMongohttps://mongo-emea-prod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b2-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b3-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/Kibanahttps://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-emea-prod-gbl-mdm-hub.COMPANY.com/" + }, + { + "title": "US Trade (FLEX)", + "pageID": "164470168", + "pageLink": "/pages/viewpage.action?pageId=164470168", + "content": "" + }, + { + "title": "US Non PROD Cluster", + "pageID": "164470067", + "pageLink": 
"/display/GMDM/US+Non+PROD+Cluster", + "content": "Physical ArchitectureHostsIDIPHostnameDocker UserResource TypeSpecificationAWS RegionFilesystemDEV●●●●●●●●●●●●●amraelp00005781.COMPANY.commdmihnprEC2r4.2xlargeus-east750 GB - /app15 GB - /var/lib/dockerComponents & LogsENVHostComponentDocker nameDescriptionLogsOpen PortsDEVDEVManagerdevmdmsrv_mdm-manager_1Gateway API/app/mdmgw/dev-mdm-srv/manager/log8849, 9104DEVDEVBatch Channeldevmdmsrv_batch-channel_1Batch file processor, S3 poller/app/mdmgw/dev-mdm-srv/batch_channel/log9121DEVDEVPublisherdevmdmhubsrv_event-publisher_1Event publisher/app/mdmhub/dev-mdm-srv/event_publisher/log9106DEVDEVSubscriberdevmdmhubsrv_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/dev-mdm-srv/reltio_subscriber/log9105DEVDEVConsoledevmdmsrv_console_1Hawtio console9999ENVHostComponentDocker nameDescriptionLogsOpen PortsTESTDEVManagertestmdmsrv_mdm-manager_1Gateway API/app/mdmgw/test-mdm-srv/manager/log8850, 9108TESTDEVBatch Channeltestmdmsrv_batch-channel_1Batch file processor, S3 poller/app/mdmgw/test-mdm-srv/batch_channel/log9111TESTDEVPublishertestmdmhubsrv_event-publisher_1Event publisher/app/mdmhub/test-mdm-srv/event_publisher/log9110TESTDEVSubscribertestmdmhubsrv_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/test-mdm-srv/reltio_subscriber/log9109Back-End HostComponentDocker nameDescriptionLogsOpen PortsDEVFluentDfluentdEFK - FluentD/app/efk/fluentd/log24225DEVKibanakibanaEFK - Kibanadocker logs kibana5601DEVElasticsearchelasticsearchEFK - Elasticsearch/app/efk/elasticsearch/logs9200DEVPrometheusprometheusPrometheus Federation slave serverdocker logs prometheus9119DEVMongomongo_mongo_1Mongodocker logs mongo_mongo_127017DEVMongo Exportermongo_exporterMongo → Prometheus exporter/app/mongo_exporter/logs9120DEVMonstache Connectormonstache-connectorMongo → Elasticsearch exporter8095DEVKafkakafka_kafka_1Kafkadocker logs kafka_kafka_19093, 9094, 9101DEVKafka Exporterkafka_kafka_exporter_1Kafka → Prometheus exporterdocker 
logs kafka_kafka_exporter_19102DEVSQS Exportersqs-exporter-devSQS → Prometheus exporterdocker logs sqs-exporter-dev9122DEVCadvisorcadvisorDocker → Prometheus exporterdocker logs cadvisor9103DEVKongkong_kong_1API Manager/app/mdmgw/kong/kong_logs8000, 8443, 32774DEVKong - DBkong_kong-database_1Kong Cassandra databasedocker logs kong_kong-database_19042DEVZookeeperkafka_zookeeper_1Zookeeperdocker logs kafka_zookeeper_12181DEVNode Exporter(non-docker) node_exporterPrometheus node exportersystemctl status node_exporter9100CertificatesResourceCertificate LocationValid fromValid to Issued ToKibanahttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/efk/kibana/mdm-log-management-us-nonprod.COMPANY.com.cer22.02.201907.05.2022mdm-log-management-us-nonprod.COMPANY.comKong - APIhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/certs/mdm-ihub-us-nonprod.COMPANY.com.pem18.07.201817.07.2021CN = mdm-ihub-us-nonprod.COMPANY.comO = COMPANYKafka - Server Truststorehttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/ssl/server.truststore.jks10.07.202001.09.2026O = Default Company LtdST = Some-StateC = AUKafka - Server KeyStorehttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/ssl/server.keystore.jks10.07.202006.07.2022 CN = KafkaFlexOU = UnknownO = UnknownL = UnknownST = UnknownC = UnknownElasticsearchhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/efk/esnode1/mdm-esnode1-us-nonprod.COMPANY.com.cer22.02.201921.02.2022mdm-esnode1-us-nonprod.COMPANY.comUnix groupsResource NameTypeDescriptionSupportuserComputer RoleLogin: mdmihnprName: SRVGBL-Pf6687993Uid: 27634358Gid: 20796763 userUnix Role GroupRole: ADMIN_ROLEportsSecurity groupSG Name: PFE-SG-IHUB-APP-DEV-001http://btondemand.COMPANY.comSubmit ticket to GBL-BTI-IOD AWS FULL SUPPORTInternal ClientsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicFLEX US 
userflex_nprodExternal OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "SCAN_ENTITIES"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "SAP"dev-out-full-flex-alltest-out-full-flex-alltest2-out-full-flex-alltest3-out-full-flex-allInternal HUB usermdm_test_userExternal OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"- "SAP"- "HIN"- "DEAIntegration Batch Update userintegration_batch_userKey AuthN/A- "GET_ENTITIES"- "ENTITY_ATTRIBUTES_UPDATE"- "GENERATE_ID"- "CREATE_HCO"- "UPDATE_HCO"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"dev-internal-integration-testsFLEX Batch Channel userflex_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"dev-internal-hco-create-flexflex_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test-internal-hco-create-flexflex_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test2-internal-hco-create-flexflex_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test3-internal-hco-create-flexSAP Batch Channel usersap_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"dev-internal-hco-create-sapsap_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test-internal-hco-create-sapsap_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test2-internal-hco-create-sapsap_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test3-internal-hco-create-sapHIN Batch Channel userhin_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"dev-internal-hco-create-hinhin_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- 
"GET_ENTITIES"ALL- "HIN"test-internal-hco-create-hinhin_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"test2-internal-hco-create-hinhin_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"test3-internal-hco-create-hinDEA Batch Channel userdea_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"dev-internal-hco-create-deadea_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test-internal-hco-create-deadea_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test2-internal-hco-create-deadea_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test3-internal-hco-create-dea340B Batch Channel user340b_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"dev-internal-hco-create-340b340b_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"test-internal-hco-create-340b" + }, + { + "title": "US DEV Services", + "pageID": "164469990", + "pageLink": "/display/GMDM/US+DEV+Services", + "content": "HUB EndpointsAPI & Kafka & S3Resource NameEndpointGateway API OAuth2 External - DEVhttps://mdm-ihub-us-nonprod.COMPANY.com:8443/dev-extPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://mdm-ihub-us-nonprod.COMPANY.com:8443/devKafkaamraelp00005781.COMPANY.com:9094MDM HUB S3 s3://mdmnprodamrasp22124/MonitoringResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=us_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=us_dev&var-topic=All&var-node=1&var-instance=amraelp00005781.COMPANY.com:9102Host 
Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=us_dev&var-node=amraelp00005781.COMPANY.com&var-port=9100Docker monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=us_dev&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=us_dev&var-component=batch_channel&var-node=1&var-instance=amraelp00005781.COMPANY.com:9121KongMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=us_dev&var-instance=amraelp00005781.COMPANY.com:9120&var-node_instance=amraelp00005781.COMPANY.com&var-interval=$__auto_interval_intervalLogsResource NameEndpointKibanahttps://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana (DEV prefixed dashboards)MDM SystemsReltio US DEV - keHVup25rN7ij3YResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/dev_keHVup25rN7ij3YReltiohttps://dev.reltio.com/ui/keHVup25rN7ij3Yhttps://dev.reltio.com/reltio/api/keHVup25rN7ij3YReltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/aPYW1rxK6I1Op4y/Internal ResourcesResource NameEndpointMongomongodb://amraelp00005781.COMPANY.com:27107Kafkaamraelp00005781.COMPANY.com:9094Zookeeperamraelp00005781.COMPANY.com:2181Kibanahttps://amraelp00005781.COMPANY.com:5601/app/kibanaElasticsearchhttps://amraelp00005781.COMPANY.com:9200Hawtiohttp://amraelp00005781.COMPANY.com:9999/hawtio/#/login" + }, + { + "title": "US TEST (QA) Services", + "pageID": "164469988", + "pageLink": "/display/GMDM/US+TEST+%28QA%29+Services", + "content": "HUB EndpointsAPI & Kafka & S3Resource NameEndpointGateway API OAuth2 External - TESThttps://mdm-ihub-us-nonprod.COMPANY.com:8443/test-extGateway API OAuth2 External - TEST2https://mdm-ihub-us-nonprod.COMPANY.com:8443/test2-extGateway API OAuth2 External - 
TEST3https://mdm-ihub-us-nonprod.COMPANY.com:8443/test3-extGateway API KEY auth - TESThttps://mdm-ihub-us-nonprod.COMPANY.com:8443/testGateway API KEY auth - TEST2https://mdm-ihub-us-nonprod.COMPANY.com:8443/test2Gateway API KEY auth - TEST3https://mdm-ihub-us-nonprod.COMPANY.com:8443/test3Ping Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Kafkaamraelp00005781.COMPANY.com:9094MDM HUB S3 s3://mdmnprodamrasp22124/LogsResource NameEndpointKibanahttps://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana (TEST prefixed dashboards)MDM SystemsReltio US TEST - cnL0Gq086PrguOdResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_cnL0Gq086PrguOd Reltiohttps://test.reltio.com/ui/cnL0Gq086PrguOdhttps://test.reltio.com/reltio/api/cnL0Gq086PrguOdReltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/FENBHNkytefh9dB/ Reltio US TEST2 - JKabsuFZzNb4K6kResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_JKabsuFZzNb4K6kReltiohttps://test.reltio.com/ui/JKabsuFZzNb4K6khttps://test.reltio.com/reltio/api/JKabsuFZzNb4K6kReltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/dhUp0Lm9NebmqB9/ Reltio US TEST3 - Yy7KqOqppDVzJpkResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_Yy7KqOqppDVzJpkReltiohttps://test.reltio.com/ui/Yy7KqOqppDVzJpkhttps://test.reltio.com/reltio/api/Yy7KqOqppDVzJpkReltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/Q4rz1LUZ9WnpVoJ/ Internal ResourcesResource NameEndpointMongomongodb://amraelp00005781.COMPANY.com:27107Kafkaamraelp00005781.COMPANY.com:9094Zookeeperamraelp00005781.COMPANY.com:2181Kibanahttps://amraelp00005781.COMPANY.com:5601/app/kibanaElasticsearchhttps://amraelp00005781.COMPANY.com:9200Hawtiohttp://amraelp00005781.COMPANY.com:9999/hawtio/#/login" + }, + { + "title": "US PROD Cluster", + "pageID": "164470064", + "pageLink": "/display/GMDM/US+PROD+Cluster", + 
"content": "Physical ArchitectureHostsIDIPHostnameDocker UserResource TypeSpecificationAWS RegionFilesystemPROD1●●●●●●●●●●●●●●amraelp00006207.COMPANY.commdmihpr EC2r4.xlarge us-east-1e500 GB - /app15 GB - /var/lib/dockerPROD2●●●●●●●●●●●●●●amraelp00006208.COMPANY.commdmihprEC2r4.xlarge us-east-1e500 GB - /app15 GB - /var/lib/dockerPROD3●●●●●●●●●●●●amraelp00006209.COMPANY.commdmihprEC2r4.xlarge us-east-1e500 GB - /app15 GB - /var/lib/dockerComponents & LogsHostComponentDocker nameDescriptionLogsOpen PortsPROD1, PROD2, PROD3Managermdmgw_mdm-manager_1Gateway API/app/mdmgw/manager/log9104, 8851PROD1Batch Channelmdmgw_batch-channel_1Batch file processor, S3 poller/app/mdmgw/batch_channel/log9107PROD1, PROD2, PROD3Publishermdmhub_event-publisher_1Event publisher/app/mdmhub/event_publisher/log9106PROD1, PROD2, PROD3Subscribermdmhub_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/reltio_subscriber/log9105Back-EndHostComponentDocker nameDescriptionLogsOpen PortsPROD1, PROD2, PROD3ElasticsearchelasticsearchEFK - Elasticsearch/app/efk/elasticsearch/logs9200PROD1, PROD2, PROD3FluentDfluentdEFK - FluentD/app/efk/fluentd/logPROD3KibanakibanaEFK - Kibanadocker logs kibana5601PROD3PrometheusprometheusPrometheus Federation slave serverdocker logs prometheus9109PROD1, PROD2, PROD3Mongomongo_mongo_1Mongodocker logs mongo_mongo_127017PROD3Monstache Connectormonstache-connectorMongo → Elasticsearch exporterPROD1, PROD2, PROD3Kafkakafka_kafka_1Kafkadocker logs kafka_kafka_19101, 9093, 9094PROD1, PROD2, PROD3Kafka Exporterkafka_kafka_exporter_1Kafka → Prometheus exporterdocker logs kafka_kafka_exporter_19102PROD1, PROD2, PROD3CadvisorcadvisorDocker → Prometheus exporterdocker logs cadvisor9103PROD3SQS Exportersqs-exporterSQS → Prometheus exporterdocker logs sqs-exporter9108PROD1, PROD2, PROD3Kongkong_kong_1API Manager/app/mdmgw/kong/kong_logs8000, 8443, 32777PROD1, PROD2, PROD3Kong - DBkong_kong-database_1Kong Cassandra databasedocker logs kong_kong-database_17000, 9042PROD1, 
PROD2, PROD3Zookeeperkafka_zookeeper_1Zookeeperdocker logs kafka_zookeeper_12181, 2888, 3888PROD1, PROD2, PROD3Node Exporter(non-docker) node_exporterPrometheus node exportersystemctl status node_exporter9100CertificatesResourceCertificate LocationValid fromValid to Issued ToKibanahttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/efk/kibana/mdm-log-management-us-trade-prod.COMPANY.com.cer22.02.201921.02.2022mdm-log-management-us-trade-prod.COMPANY.comKong - APIhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.pem04.01.202204.01.2024CN = mdm-ihub-us-trade-prod.COMPANY.comO = COMPANYKafka - Client Truststorehttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/client.truststore.jks01.09.201601.09.2026COMPANY Root CA G2Kafka - Server TruststorePROD1 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server1.keystore.jksPROD2 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server2.keystore.jksPROD3 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server3.keystore.jks04.01.202204.01.2024CN = mdm-ihub-us-trade-prod.COMPANY.comO = COMPANYElasticsearchesnode1 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode1esnode2 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode2esnode3 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode322.02.201921.02.2022mdm-esnode1-us-trade-prod.COMPANY.commdm-esnode2-us-trade-prod.COMPANY.commdm-esnode3-us-trade-prod.COMPANY.comUnix groupsResource NameTypeDescriptionSupportELBLoad BalancerReference LB Name: PFE-CLB-JIRA-HARMONY-PROD-001CLB name: PFE-CLB-MDM-HUB-TRADE-PROD-001DNS name: internal-PFE-CLB-MDM-HUB-TRADE-PROD-001-1966081961.us-east-1.elb.amazonaws.comuserComputer RoleComputer 
Role: UNIX-UNIVERSAL-AWSCBSDEV-MDMIHPR-COMPUTERS-U Login: mdmihprName: SRVGBL-mdmihprUID: 25084803GID: 20796763 userUnix Role GroupUnix-mdmihubProd-URole: ADMIN_ROLEportsSecurity groupSG Name: PFE-SG-IHUB-APP-PROD-001http://btondemand.COMPANY.comSubmit ticket to GBL-BTI-IOD AWS FULL SUPPORTS3S3 Bucketmdmprodamrasp42095 (us-east-1)Username: SRVC-MDMIHPRConsole login: https://bti-aws-prod-hosting.signin.aws.amazon.com/consoleInternal ClientsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicInternal MDM Hub userpublishing_hubKey AuthN/A- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"prod-internal-reltio-eventsInternal MDM Test usermdm_test_userExternal OAuth2MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"- "SAP"- "HIN"- "DEA"Integration Batch Update userintegration_batch_userKey AuthN/A- "GET_ENTITIES"- "ENTITY_ATTRIBUTES_UPDATE"- "GENERATE_ID"- "CREATE_HCO"- "UPDATE_HCO"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"FLEX US userflex_prodExternal OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "SCAN_ENTITIES"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"prod-out-full-flex-allFLEX Batch Channel userflex_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"prod-internal-hco-create-flexSAP Batch Channel usersap_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"prod-internal-hco-create-sapHIN Batch Channel userhin_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"prod-internal-hco-create-hinDEA Batch Channel 
userdea_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"prod-internal-hco-create-dea340B Batch Channel user340b_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"prod-internal-hco-create-340b" + }, + { + "title": "US PROD Services", + "pageID": "164469976", + "pageLink": "/display/GMDM/US+PROD+Services", + "content": "HUB EndpointsAPI & Kafka & S3Resource NameEndpointGateway API OAuth2 External - PRODhttps://mdm-ihub-us-trade-prod.COMPANY.com/gw-api-oauth-extGateway API OAuth2 - PRODhttps://mdm-ihub-us-trade-prod.COMPANY.com/gw-api-oauthGateway API KEY auth - PRODhttps://mdm-ihub-us-trade-prod.COMPANY.com/gw-apiPing Federatehttps://prodfederate.COMPANY.com/as/introspect.oauth2Kafkaamraelp00006207.COMPANY.com:9094amraelp00006208.COMPANY.com:9094amraelp00006209.COMPANY.com:9094MDM HUB S3 s3://mdmprodamrasp42095/- FLEX: PROD/inbound/FLEX- SAP: PROD/inbound/SAP- HIN: PROD/inbound/HIN- DEA: PROD/inbound/DEA- 340B: PROD/inbound/340BMonitoringResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=us_prod&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=us_prod&var-topic=All&var-node=1&var-instance=amraelp00006207.COMPANY.com:9102Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=us_prod&var-node=amraelp00006207.COMPANY.com&var-port=9100Docker monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=us_prod&var-node=1JMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=us_prod&var-component=batch_channel&var-node=1&var-instance=amraelp00006207.COMPANY.com:9107KongMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=us_prod&var-instance=amraelp00006209.COMPANY.com:9110&var-node_instance=amraelp00006209.COMPANY.com&var-interval=$__auto_interval_intervalLogsResource NameEndpointKibanahttps://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibanaMDM SystemsReltio US PROD - VUUWV21sflYijwaResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/361_VUUWV21sflYijwaReltiohttps://361.reltio.com/ui/VUUWV21sflYijwa/https://361.reltio.com/reltio/api/VUUWV21sflYijwa Reltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/f6dQoR9tfCpFCtm/Internal ResourcesResource NameEndpointMongomongodb://amraelp00006207.COMPANY.com:27017,amraelp00006208.COMPANY.com:27017,amraelp00006209.COMPANY.com:28018Kafkaamraelp00006207.COMPANY.com:9094amraelp00006208.COMPANY.com:9094amraelp00006209.COMPANY.com:9094Zookeeperamraelp00006207.COMPANY.com:2181amraelp00006208.COMPANY.com:2181amraelp00006209.COMPANY.com:2181Kibanahttps://amraelp00006209.COMPANY.com:5601/app/kibanaElasticsearchhttps://amraelp00006207.COMPANY.com:9200https://amraelp00006208.COMPANY.com:9200https://amraelp00006209.COMPANY.com:9200Hawtiohttp://amraelp00006207.COMPANY.com:9999/hawtio/#/loginhttp://amraelp00006208.COMPANY.com:9999/hawtio/#/loginhttp://amraelp00006209.COMPANY.com:9999/hawtio/#/login" + }, + { + "title": "Components", + "pageID": "164469881", + "pageLink": "/display/GMDM/Components", + "content": "" + }, + { + "title": "Apache Airflow", + "pageID": "164469951", + "pageLink": "/display/GMDM/Apache+Airflow", + "content": "DescriptionAirflow is a platform created by Apache, designed to schedule workflows called DAGs.Airflow 
docs:https://airflow.apache.org/docs/apache-airflow/stable/index.htmlWe run Airflow on Kubernetes using the official Airflow Helm chart: https://airflow.apache.org/docs/helm-chart/stable/index.htmlIn this architecture, Airflow consists of three main components:Scheduler - scheduling, monitoring and executing tasksWebserver - Airflow UIDatabase (PostgreSQL)InterfacesUI e.g. https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/homeREST API /api/v1/docs: https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.htmlFlowsFlows are configured in the mdm-hub-cluster-env repository in ansible/inventory/${environment}/group_vars/gw-airflow-services/${dag_name}.yaml files.The flows in use are described in the DAGs list" + }, + { + "title": "API Gateway", + "pageID": "164469910", + "pageLink": "/display/GMDM/API+Gateway", + "content": "DescriptionKong (API Gateway) is the component used as the gateway for all API requests in the MDM HUB. This component exposes only one URL to external clients, which means that all internal docker containers are secured and cannot be accessed directly. This makes it possible to track all network traffic in one place. Kong is the router that redirects requests to specific services using configured routes. Kong provides multiple additional plugins; these plugins are attached to specific services and add security (Key-Auth, OAuth 2.0, Oauth2-External) or user management. Only Kong-authorized users are allowed to execute specific operations in the HUB.Technology:Kong is a predefined component installed using a Docker container. Kong uses the Lua language and the Nginx engine. 
(docker image: kong:1.1.1-centos)Kong stores the whole configuration in the Cassandra Database (docker image: cassandra:3)Kong uses a customized plugin for the PingFederate token verification - OAuth 2.0 ExternalCode link: Kong: Kong Admin API DOCOauth2 External plugin: kong/mdm-external-oauth-pluginFlowsKong is responsible for the security, user management, and access layer of the HUB: SecurityInterface NameTypeEndpoint patternDescriptionAdmin APIREST APIGET http://localhost:8001/Internal and secured PORT available only in the docker container, used by Kong to manage existing services, routes, plugins, consumers, certificatesExternal APIREST APIGET https://localhost:8443/External and secured PORT exposed to the ELB and accessed by clients. Dependent componentsComponentInterfaceFlowDescriptionCassandra - kong_kong-database_1TCP internal docker communicationN/AKong configuration databaseHUB MicroservicesREST internal docker communicationN/AThe route to all HUB microservices, required to expose the API to external clients ConfigurationKong configuration is divided into 5 sections:1 ConsumersConfig ParameterDefault valueDescription- snowflake_api_user: create_or_update: False vars: username: snowflake_api_user plugins: - name: key-auth parameters: key: "{{ secret_kong_consumers.snowflake_api_user.key_auth.key }}"N/AConfiguration for a user with key-auth authentication - used only for technical service users.All External OAuth2 users are configured in section 4 (Routes)2 CertificatesConfig ParameterDefault valueDescription- gbl_mdm_hub_us_nprod: create_or_update: False vars: cert: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/gbl-mdm-hub-us-nprod.COMPANY.com.pem') }}" key: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/gbl-mdm-hub-us-nprod.key') }}" snis: - "gbl-mdm-hub-us-nprod.COMPANY.com" - "amraelp00007335.COMPANY.com" - "10.12.209.27"N/A Configuration of the SSL Certificate in Kong.3 ServicesConfig ParameterDefault 
valueDescriptionkong_services: - create_or_update: False vars: name: "{{ kong_env }}-manager-service" url: "http://{{ kong_env }}mdmsrv_mdm-manager_1:8081" connect_timeout: 120000 write_timeout: 120000 read_timeout: 120000N/AKong Service - this is the main part of the configuration; it connects Kong internally with a Docker container. Kong allows configuring multiple services with multiple routes and plugins.4 RoutesConfig ParameterDefault valueDescription- create_or_update: False vars: name: "{{ kong_env }}-manager-ext-int-api-oauth-route" service: "{{ kong_env }}-manager-service" paths: [ "/{{ kong_env }}-ext" ] methods: [ "GET", "POST", "PATCH", "DELETE" ]N/AExposes the route to the service. Clients using the ELB have to add this path to the API invocation to access the specified services. The "-ext" suffix defines the API that uses the External OAuth 2.0 plugin connected to PingFederate. Configures the methods that the user is allowed to invoke. 5 PluginsConfig ParameterDefault valueDescription- create_or_update: False vars: name: key-auth route: "{{ kong_env }}-manager-int-api-route" config: hide_credentials: trueN/AThe "key-auth" plugin type is used for internal or technical users that authenticate using a security key- create_or_update: False vars: name: mdm-external-oauth route: "{{ kong_env }}-manager-ext-int-api-oauth-route" config: introspection_url: "https://devfederate.COMPANY.com/as/introspect.oauth2" authorization_value: "{{ devfederate.secret_oauth2_authorization_value }}" hide_credentials: true users_map: - "e2a6de9c38be44f4a3c1b53f50218cf7:engage"N/AThe "mdm-external-oauth" plugin type is a customized plugin used for all External Clients that use tokens generated in PingFederate.The configuration contains introspection_url - the Ping API for token verification.The most important part of this configuration is the users_map. The Key is the PingFederate User, the Value is the HUB user configured in the services." 
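The users_map mapping described above (PingFederate client ID to HUB user, joined with a colon) can be sketched as a small helper that builds the plugin payload a deployment script would send to the Kong Admin API. This is an illustrative sketch only: the helper name and the example route/secret values are hypothetical; the field names follow the Ansible variables shown in the configuration tables.

```python
# Hypothetical sketch: build the payload for the custom "mdm-external-oauth"
# Kong plugin described above. Field names mirror the documented Ansible
# configuration; the route name and secret value below are made-up examples.

def build_external_oauth_plugin(route: str,
                                introspection_url: str,
                                authorization_value: str,
                                users_map: dict) -> dict:
    """Map PingFederate client IDs to HUB user names for one route."""
    return {
        "name": "mdm-external-oauth",
        "route": route,
        "config": {
            "introspection_url": introspection_url,
            "authorization_value": authorization_value,
            "hide_credentials": True,
            # users_map entries are "key:value" strings, as in the example
            "users_map": [f"{k}:{v}" for k, v in users_map.items()],
        },
    }

payload = build_external_oauth_plugin(
    route="dev-manager-ext-int-api-oauth-route",
    introspection_url="https://devfederate.COMPANY.com/as/introspect.oauth2",
    authorization_value="<secret>",
    users_map={"e2a6de9c38be44f4a3c1b53f50218cf7": "engage"},
)
print(payload["config"]["users_map"])
```

A real deployment would POST this payload to the Kong Admin API under the target route; here it is only constructed, so the mapping logic can be inspected in isolation.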
+ }, + { + "title": "API Router", + "pageID": "196877505", + "pageLink": "/display/GMDM/API+Router", + "content": "DescriptionThe API Router component is responsible for routing requests to regional MDM Hub services. The application exposes a REST API that can call MDM Hub services from different regions simultaneously. The component provides a centralized authorization and authentication service and a transaction log feature. API Router uses the http4k library, a lightweight HTTP toolkit written in Kotlin that enables the serving and consuming of HTTP services in a functional and consistent way.Technologyjava 8,kotlin,spring bootCode link: api routerRequest flowComponentDescriptionAuthentication serviceauthenticates the user by the x-consumer-username headerRequest enricherdetects request sources, countries and roleAuthorization serviceauthorizes user permissions for role, countries and sourcesService callercalls MDM Hub services, retrying up to 3 times in case of an exception; requests are routed to the appropriate MDM services based on the countries parameter; if the request contains countries from multiple regions, different regional services are called; if the request contains no countries, the default user or application country is setService response transformer and filtertransforms and/or filters service responses (e.g. data anonymization) depending on the defined request and/or response filtration parameters (e.g. 
header, http method, path)Response composercomposes responses from services; if multiple services responded, the responses are concatenatedRequest enrichmentParameterMethodsourcescountriesrolecreate hcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_HCOupdate hcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_HCObatch create hcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_HCObatch update hcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_HCOcreate hcprequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_HCPupdate hcprequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_HCPbatch create hcprequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_HCPbatch update hcprequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_HCPcreate mcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_MCOupdate mcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_MCObatch create mcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_MCObatch update mcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_MCOcreate entityrequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_ENTITYupdate entityrequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_ENTITYget entities by urissources not allowedrequest param Country attribute, 0 
or more allowedGET_ENTITIESget entity by urisources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITIESdelete entity by crosswalktype query param, required at least onerequest param Country attribute, 0 or more allowedDELETE_CROSSWALKget entity matchessources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITY_MATCHEScreate relationrequest body crosswalk attributes, required at least onerequest param Country attribute, 0 or more allowedCREATE_RELATIONbatch create relationrequest body crosswalk attributes, required at least onerequest param Country attribute, 0 or more allowedCREATE_RELATIONget relation by urisources not allowedrequest param Country attribute, 0 or more allowedGET_RELATIONdelete relation by crosswalktype query param, required at least onerequest param Country attribute, 0 or more allowedDELETE_CROSSWALKget lookupssources not allowedrequest param Country attribute, 0 or more allowedLOOKUPSConfigurationConfig parameterDescriptiondefaultCountrydefault application instance countryusersusers configuration listed belowzoneszones configuration listed belowresponseTransformresponse transformation definitions explained belowUser configurationConfig parameterDescriptionnameuser namedescriptionuser descriptionrolesallowed user rolescountriesallowed user countriessourcesallowed user sourcesdefaultCountryuser default countryZone configurationConfig parameterDescriptionurlmdm service urluserNamemdm service user namelogMessagesflag indicates that mdm service messages should be loggedtimeoutMsmdm service request timeoutResponse transformation configurationConfig parameterDescriptionfiltersrequest and response filter configurationmapresponse body JSLT transformation definitionsFilters configurationConfig parameterDescriptionrequestrequest filter configurationresponseresponse filter configurationRequest filter configurationConfig parameterDescriptionmethodHTTP methodpathAPI REST call pathheaderslist of HTTP headers with name and 
value parametersResponse filter configurationConfig parameterDescriptionbodyresponse body JSTL transformation definitionExample configuration of response transformationAPI router configurationresponseTransform: - filters:      request:        method: GET        path: /entities.*        headers: - name: X-Consumer-Username            value: mdm_test_user      response:        body:          jstl.content: | contains(true,[for (.crosswalks) .type == "configuration/sources/HUB_CALLBACK"])    map: - jstl.content: | .crosswalks - jstl.content: | ." + }, + { + "title": "Batch Service", "pageID": "164469936", "pageLink": "/display/GMDM/Batch+Service", "content": "DescriptionThe batch-service component is responsible for managing the batch loads to MDM Systems. It exposes the REST API that clients use to create a new instance of a batch and upload data. The component is responsible for managing the batch instances and stages, processing the data, and gathering acknowledgement responses from the Manager component. Batch service stores data in two collections: batchInstance - stores all instances of batches and statistics gathered during load, and batchEntityProcessStatus - stores metadata information about all objects that were loaded through all batches. These two collections are required to manage and process the data, run the checksum deduplication process, mark entities as processed after the ACK from Reltio, and soft-delete entities in case of full file loads. The component uses asynchronous operations, with Kafka topics as the stages for each part of the load. 
Technology:  java 8, spring boot, mongodb, kafka-streams, apache camel, kafka, shedlock-spring, spring-schedulerCode link: batch-serviceFlowsETL BatchesBatch Controller: creating and updating batch instanceBulk Service: loading bulk dataProcessing JOBSending JOBSoftDeleting JOBACK CollectorClear CacheExposed interfacesBatch Controller - manage batch instancesInterface NameTypeEndpoint patternDescriptionCreate a new instance for the specific batchREST APIPOST /batchController/{batchName}/instancesCreates a new instance of the specific batch. Returns the Batch object with a generated ID that has to be used in all the requests below. Based on the ID, the client is able to check the status or load data using this instance. It is not possible to start a new batch instance while the previous one is not completed. Get batch instance detailsREST APIGET /batchController/{batchName}/instances/{batchInstanceId}Returns current details about the specific batch instance. Returns an object with all stages, statuses, and statistics. Initialize the stage or complete the stage and save statistics in the cache. REST APIPOST /batchController/{batchName}/instances/{batchInstanceId}/stages/{stageName}Creates or updates the specific stage in the batch. Using this operation, clients are able to do two things: 1. initialize and start the stage before loading the data - in that case, the request body should be empty; 2. update and complete the stage after loading the data - in that case, the body should contain the stage name and statistics. Clients have permission to update only "Loading" stages. The next stages are managed by the internal batch-service processes.Initialize multiple stages or complete the stages and save statistics in the cache. REST APIPOST /batchController/{batchName}/instances/{batchInstanceId}/stagesThis operation is similar to the single-stage management operation. 
This operation allows management of multiple stages in one request.Remove the specific batch instance from the cache.REST APIDELETE /batchController/{batchName}/instances/{batchInstanceId}Additional service operation used to delete the batch instances from the cache. The permission for this operation is not exposed to external clients; this operation is used only by the HUB support team. Clear cache (clear objects from the batchEntityProcessStatus collection that stores metadata of objects and is used in deduplication logic)REST APIGET /batchController/{batchName}/_clearCacheheaders:  objectType: ENTITY/RELATION  entityType: e.g. configuration/entityTypes/HCPAdditional service operation used to clear the cache for the specific batch. The user can provide additional parameters to the API to specify what type of objects should be removed from the cache. The operation is used by the clients after executing smoke tests on PROD and during testing on DEV environments. It allows clearing the cache after a load to avoid data deduplication during load. Bulk Service - load data using previously created batch instancesInterface NameTypeEndpoint patternDescriptionLoad multiple entities using create operationREST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entitiesThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load entities to the MDM system. The operation accepts the bulk of entities and loads the data to a Kafka topic. With the POST operation, the standard create operation is used.Load multiple entities using the partial override operationREST APIPATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entitiesThis operation is similar to the above. The PATCH operation forces the use of the partialOverride operation. 
Load multiple relations using create operationREST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/relationsThe operation is similar to the above. With the POST operation, the standard create operation is used. Using the /relations suffix in the URI, clients are able to create relation objects in MDM.Load multiple Tags using PATCH operation - append operationREST APIPATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tagsThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load tags to the MDM system. The operation accepts the bulk of entities and loads the data to a Kafka topic. With the PATCH operation, the standard append operation is used, so all tags in the input array are added to the specified profile in MDM.Load multiple Tags using delete operation - removal operationREST APIDELETE /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tagsThis operation is similar to the above. The DELETE operation removes selected TAGS from the MDM system.Load multiple merge requests using POST operation; this will result in a merge between two entities.REST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_mergeThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load merge requests to the MDM system - this will result in a merge operation between two entities specified in the request. The operation accepts the bulk of merge requests and loads the data to a Kafka topic. Load multiple unmerge requests using POST operation; this will result in an unmerge between two entities.REST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_unmergeThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. 
At that moment the client is able to load unmerge requests to the MDM system - this will result in an unmerge operation between two entities specified in the request. The operation accepts the bulk of unmerge requests and loads the data to a Kafka topic. Dependent componentsComponentInterfaceFlowDescriptionManagerAsyncMDMManagementServiceRouteEntitiesCreateProcess bulk objects with entities and creates the HCP/HCO/MCO in MDM. Returns asynchronous ACK responseEntitiesUpdateProcess entities and updates the HCP/HCO/MCO in MDM using the partialOverride property. Returns asynchronous ACK responseRelationsCreateProcess bulk objects with relations and creates the relations in MDM. Returns asynchronous ACK responseHub StoreMongo connectionN/AStore cache data in mongo collectionConfigurationBatch Workflows configuration, main config for all Batches and StagesConfig ParameterDescriptionbatchWorkflows: - batchName: "ONEKEY" batchDescription: "ONEKEY - HCO and HCP entities and relations loading" stages: - stageName: "HCOLoading"The main part of the batches configuration. Each batch has to contain:batchName - the name of the specific batch, used in the API request.batchDescription - additional description for the specific batch.stages - the list of dependent stages arranged in the execution sequence.This configuration presents the workflow for the specific batch; the Administrator can set up these stages in the order that is required for the batch and Client requirements. The main assumptions:The "Loading" stage is always first.The "Sending" stage is dependent on the "Loading" stage.The "Processing" stage is dependent on the "Sending" stage.There is the possibility to add 2 additional optional stages:"EntitiesUnseenDeletion" - used only once the full file is loaded and the soft-delete process is required"HCODeletesProcessing" - process soft-deleted objects to check if all ACKs were received. 
Available jobs:SendingJobProcessingJobDeletingJobDeletingRelationJobIt is possible to set up different stage names, but the assumption is to reuse the existing names to keep consistency.Jobs can depend on each other in two ways:softDependentStages - allows starting the next stage immediately after the dependent one is started. Used in the Sending stages to immediately send data to the Manager.dependentStages - hard dependent stages; this blocks the starting of the stage until the previous one has ended.  - stageName: "HCOSending"softDependentStages: ["HCOLoading"]processingJobName: "SendingJob"Example configuration of a Sending stage dependent on the Loading stage. In this stage, data is taken from the stage Kafka Topics and published to the Manager component for further processing.- stageName: "HCOProcessing"dependentStages: ["HCOSending"]processingJobName: "ProcessingJob"Example configuration of the Processing stage. This stage starts once the Sending JOB is completed. It uses the batchEntityProcessStatus collection to check if all ACK responses were received from MDM. - stageName: "RelationLoading"- stageName: "RelationSending" dependentStages: [ "HCOProcessing"] softDependentStages: ["RelationLoading"] processingJobName: "SendingJob"- stageName: "RelationProcessing" dependentStages: [ "RelationSending" ] processingJobName: "ProcessingJob"The full example configuration for the Relation loading, sending, and processing stages.- stageName: "EntitiesUnseenDeletion" dependentStages: ["RelationProcessing"] processingJobName: "DeletingJob"- stageName: "HCODeletesProcessing" dependentStages: ["EntitiesUnseenDeletion"] processingJobName: "ProcessingJob"Configuration for entities. The example configuration that is used for full files. It is triggered at the end of the Workflow and checks the data that should be removed. 
- stageName: "RelationsUnseenDeletion" dependentStages: ["HCODeletesProcessing"] processingJobName: "DeletingRelationJob"- stageName: "RelationDeletesProcessing" dependentStages: ["RelationsUnseenDeletion"] processingJobName: "ProcessingJob"Configuration for relations. The example configuration that is used for full files. It is triggered at the end of the Workflow and checks the data that should be removed. Loading stage configuration for Entities and Relations BULK load through API requestConfig ParameterDescriptionbulkConfiguration: destinations: "ONEKEY": HCPLoading: bulkLimit: 25 destination: topic: "{{ env_local_name }}-internal-batch-onekey-hcp"The configuration contains the following:destinations - list of batches and kafka topics on which data should be loaded from REST API to Kafka Topics."ONEKEY" - batch nameHCPLoading - specific configuration for the loading stagebulkLimit - limit of entities/relations in one API calldestination.topic - target topic nameSending stage configuration for Sending Entities and Relations to MDM Async API (Reltio)Config ParameterDefault valueDescriptionsendingJob: numberOfRetriesOnError: 3Number of retries once an exception occurs during Kafka events publishing  pauseBetweenRetriesSecs: 30Number of seconds to wait between retries idleTimeWhenProcessingEndsSec: 60Number of seconds to wait for new events before completing the Sending JOB threadPoolSize:2Number of threads used by the Kafka Producer "ONEKEY": HCPSending: source: topic: "{{ env_local_name }}-internal-batch-onekey-hcp" bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "{{ env_local_name }}-internal-async-all-onekey" reltioReponseTopic: "{{ env_local_name }}-internal-async-all-onekey-ack"The specific configuration for the Sending Stage"ONEKEY" - batch nameHCPSending - specific configuration for the sending stagesource.topic- source topic name from which data is consumedbulkSending - by default false (bundling is implemented and managed in the Manager client; currently 
there is no need to bundle the events on the client side)bulkPacketSize - optionally, once bulkSending is true, batch-service is able to bundle the requests. reltioRequestTopic- processing requests in managerreltioReponseTopic - processing ACK in batch-serviceProcessing stage config for checking processing entities status in MDM Async API (Reltio) - check ACK collectorConfig ParameterDefault valueDescriptionprocessingJob.pauseBetweenQueriesSecs:60Interval at which the cache is checked to see if all ACKs were received.Entities/Relations UnseenDeletion Job config for Reltio Request Topic and Max Deletes Limit for entities soft Delete.Config ParameterDefault valueDescriptiondeletingJob: "Symphony": "EntitiesUnseenDeletion":The specific configuration for the Deleting Stage"Symphony" - batch nameEntitiesUnseenDeletion- specific configuration for the soft-delete stagemaxDeletesLimit: 100The limit is a safety switch in case we get a corrupted file (empty or partial).It prevents deleting all Reltio profiles in such cases.queryBatchSize: 10The number of entities/relations downloaded from Cache in one callreltioRequestTopic: "{{ env_local_name }}-internal-async-all-symphony"target topic - processing requests in managerreltioResponseTopic: "{{ env_local_name }}-internal-async-all-symphony-ack"ack topics - processing ACK in batch-serviceUsersConfig ParameterDescription- name: "mdmetl_nprod" description: "MDMETL Informatica IICS User - BATCH loader" defaultClient: "ReltioAll" roles: - "CREATE_HCP" - "CREATE_HCO" - "CREATE_MCO" - "CREATE_BATCH" - "GET_BATCH" - "MANAGE_STAGE" - "CLEAR_CACHE_BATCH" countries: - US sources: - "SHS"... batches: "Symphony": - "HCPLoading"The example ETL user configuration. 
The configuration is divided into the following sections:roles - available roles to create specific objects and manage batch instancescountries - list of countries that the user is allowed to loadsources - list of sources that the user is allowed to loadbatches - list of batch names with corresponding stages. In general, external users are able to create/edit Loading stages only.ConnectionsConfig ParameterDescriptionmongo.url: "mongodb://mdm_batch_service:{{ mongo.users.mdm_batch_service.password }}@{{ mongo.springURL }}/{{ mongo.dbName }}"Full Mongo DB URLmongo.dbName: "{{ mongo.dbName }}"Mongo database namekafka.servers: "{{ kafka.servers }}"Kafka Hostname kafka.groupId: "batch_service_{{ env_local_name }}"Batch Service component group namekafka.saslMechanism: "{{ kafka.saslMechanism }}"SASL configurationkafka.securityProtocol: "{{ kafka.securityProtocol }}"Security Protocolkafka.sslTruststoreLocation: /opt/mdm-gw-batch-service/config/kafka_truststore.jksSSL truststore file locationkafka.sslTruststorePassword: "{{ kafka.sslTruststorePassword }}"SSL truststore file passwordkafka.username: batch_serviceKafka usernamekafka.password: "{{ hub_broker_users.batch_service }}"Kafka dedicated user passwordkafka.sslEndpointAlgorithm:SSL algorithmAdvanced Kafka configuration (do not edit if not required)Config Parameterspring: kafka: properties: sasl: mechanism: ${kafka.saslMechanism} security: protocol: ${kafka.securityProtocol} ssl.endpoint.identification.algorithm: consumer: properties: max.poll.interval.ms: 600000 bootstrap-servers: - ${kafka.servers} groupId: ${kafka.groupId} auto-offset-reset: earliest max-poll-records: 50 fetch-max-wait: 1s fetch-min-size: 512000 enable-auto-commit: false ssl: trustStoreLocation: file:${kafka.sslTruststoreLocation} trustStorePassword: ${kafka.sslTruststorePassword} producer: bootstrap-servers: - ${kafka.servers} groupId: ${kafka.groupId} auto-offset-reset: earliest ssl: trustStoreLocation: file:${kafka.sslTruststoreLocation} trustStorePassword: 
${kafka.sslTruststorePassword} streams: bootstrap-servers: - ${kafka.servers} applicationId: ${kafka.groupId}_ack # for Kafka Streams the GroupID has to be different than the Kafka consumer's clientId: batch_service_ID stateDir: /tmp # num-stream-threads: 1 - default 1 ssl: trustStoreLocation: file:${kafka.sslTruststoreLocation} trustStorePassword: ${kafka.sslTruststorePassword}Additional config (do not edit if not required)Config Parameterserver.port: 8083management.endpoint.shutdown.enabled=false:management.endpoints.web.exposure.include: prometheus, health, infospring.main.allow-bean-definition-overriding: truecamel.springboot.main-run-controller: Truecamel: component: metrics: metric-registry=prometheusMeterRegistry:server: use-forward-headers: true forward-headers-strategy: FRAMEWORKspringdoc: swagger-ui: disable-swagger-default-url: TruerestService: #service port - do not change if it runs in a docker container port: 8082schedulerTreadCount: 5" + }, + { + "title": "Callback Delay Service", "pageID": "322536130", "pageLink": "/display/GMDM/Callback+Delay+Service", "content": "DescriptionThe application consists of two streams - precallback and postcallback. When the precallback stream detects the need to change the ranking for a given relationship, it generates an event to the post callback stream. The post callback stream collects events in the time window for a given key and processes the last one. 
This allows you to avoid updating the rankings multiple times when loading relations using batch.Responsible for the following transformations:HCO relation rankingApplies transformations to the Kafka input stream producing the Kafka output stream.Technology: kotlin, spring boot, MongoDB, Kafka-StreamsCode link: callback-delay-service FlowsOtherHCOtoHCOAffiliations RankingsExposed interfacesPreCallbackDelay Stream -(rankings)Interface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-reltio-full-delay-eventsEvents processed by the precallback serviceoutput  - callbacksKAFKA${env}-internal-reltio-proc-eventsResult events processed by the precallback delay serviceoutput - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to the Manager component for asynchronous processingDependent componentsComponentInterfaceFlowDescriptionManagerAsyncMDMManagementServiceRouteRelationshipAttributesUpdateUpdate relationship attributes in asynchronous modeHub StoreMongo connectionN/AGet mongodb stored relation data when the Kafka cache is empty.ConfigurationMain ConfigurationDefault valueDescriptionkafka.groupId${env}-precallback-delay-serviceThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). 
Examples: "hello_world", "hello_world-v1.0.0"kafkaOther.num.stream.threads10Number of threads used in the Kafka StreamkafkaOther.default.deserialization.exception.handlercom.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handlerkafkaOther.max.poll.interval.ms3600000Maximum number of milliseconds to wait before the next poll of eventskafkaOther.max.request.size2097152Events message sizeCallbackWithDelay Stream -(rankings)Config ParameterDefault valueDescriptionpreCallbackDelay.eventInputTopic${env}-internal-reltio-full-delay-eventsinput topicpreCallbackDelay.eventDelayTopic${env}-internal-reltio-full-callback-delay-eventsdelay stream input topic; when the precallback stream detects the need to modify ranks for a given relationship group, it produces an event for this topic. Events for a given key are aggregated in a time windowpreCallbackDelay.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for eventspreCallbackDelay.internalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput topic for callbackspreCallbackDelay.relationDataStore.storeName${env}-relation-data-storeRelation data cache store namepreCallbackDelay.rankCallback.featureActivationtrueParameter used to enable/disable the Rank featurepreCallbackDelay.rankCallback.callbackSourceHUB_CALLBACKCrosswalk used to update Reltio with Rank attributespreCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.namewith-delay-raw-relation-checksum-dedupe-storestore name that keeps the rawRelation MD5 checksum - used in rank callback deduplicationpreCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.retentionPeriod1hstore retention periodpreCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.windowSize10mstore window sizepreCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.nameattribute-changes-checksum-dedupe-storestore name that keeps the attribute changes MD5 checksum - used in rank callback 
deduplicationpreCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.retentionPeriod1hstore retention periodpreCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.windowSize10mstore window sizepreCallbackDelay.rankCallback.activeCallbacksOtherHCOtoHCOAffiliationsDelayCallbackList of Rankers to be activatedpreCallbackDelay.rankTransform.featureActivationtrueParameter defines if the Rank feature should be activated.preCallbackDelay.rankTransform.activationFilter.activeRankSorterOtherHCOtoHCOAffiliationsDelayRankSorterRank sorter namespreCallbackDelay.rankTransform.rankSortOrder.affiliationN/AThe source order defined for the specific Ranking. Details about the algorithm in:  OtherHCOtoHCOAffiliations RankSorterdeduplicationPost callback stream deduplication configdeduplication.pingInterval1mPost callback stream ping intervaldeduplication.duration1hPost callback stream window durationdeduplication.gracePeriod0sPost callback stream deduplication grace perioddeduplication.byteLimit122869944Post callback stream deduplication byte limitdeduplication.suppressNamecallback-rank-delay-suppressPost callback stream deduplication suppress namededuplication.namecallback-rank-delay-suppressPost callback stream deduplication namededuplication.storeNamecallback-rank-delay-suppress-deduplication-storePost callback stream deduplication store nameRank sort order config:The component allows you to set different sorting (ranking) configurations depending on the country of the relationship. Relations for selected countries are sorted based on the rankExecutionOrder configuration - in the order of the items on the list. 
The following sorters are available:ATTRIBUTE - sort relationships based on the values (or lookup codes) of defined attributesACTIVE - sort relationships based on their status (ACTIVE, NON-ACTIVE)SOURCE - sort relations based on the order of sourcesLUD - sort relations based on their update time - ascending or descending orderSample rankSortOrder configuration:rankSortOrder: affiliation: config: - countries: - AU - NZ rankExecutionOrder: - type: ACTIVE - type: ATTRIBUTE attributeName: RelationType/RelationshipDescription lookupCode: true order: REL.HIE: 1 REL.MAI: 2 REL.FPA: 3 REL.BNG: 4 REL.BUY: 5 REL.PHN: 6 REL.GPR: 7 REL.MBR: 8 REL.REM: 9 REL.GPSS: 10 REL.WPC: 11 REL.WPIC: 12 REL.DOU: 13 - type: SOURCE order: Reltio: 1 ONEKEY: 2 JPDWH: 3 SAP: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 GRV: 9 GCP: 10 SSE: 11 PCMS: 12 PTRS: 13 - type: LUD" + }, + { + "title": "Callback Service", "pageID": "164469913", "pageLink": "/display/GMDM/Callback+Service", "content": "DescriptionResponsible for the following transformations:HCO names calculationDangling affiliationsCrosswalk cleanerPotential match queue cleanerPrecallback stream - (rankings)Applies transformations to the Kafka input stream producing the Kafka output stream.Technology: java 8, spring boot, MongoDB, Kafka-StreamsCode link: callback-service FlowsCallbacksHCONames Callback for IQVIA modelDanglingAffiliations CallbackCrosswalkCleaner CallbackNotMatch CallbackPreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType)Exposed interfacesPreCallback Stream -(rankings)Interface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-reltio-full-eventsEvents enriched by the EntityEnricher component. 
Full JSON dataoutput  - callbacksKAFKA${env}-internal-reltio-proc-eventsEvents that are already processed by the precallback services (contains updated Ranks and the Reltio callback is also processed)output - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processingHCO NamesInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-hconame-inevents being sent by the event publisher component. Event types being considered:  HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGEDcallback outputKAFKA${env}-internal-hconames-rel-createRelation Create requests sent to Manager component for asynchronous processingDangling AffiliationsInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-orphanClean-inevents being sent by the event publisher component. Event types being considered:  'HCP_REMOVED', 'HCO_REMOVED', 'MCO_REMOVED', 'HCP_INACTIVATED', 'HCO_INACTIVATED', 'MCO_INACTIVATED'callback outputKAFKA${env}-internal-async-all-orphanCleanRelation Update (soft-delete) requests sent to Manager component for asynchronous processingCrosswalk CleanerInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-cleaner-inevents being sent by the event publisher component. Event types being considered: 'HCO_CHANGED', 'HCP_CHANGED', 'MCO_CHANGED', 'RELATIONSHIP_CHANGED'callback outputKAFKA${env}-internal-async-all-cleaner-callbacksDelete Crosswalk or Soft-Delete requests sent to Manager component for asynchronous processingNotMatch callback (clean potential match queue)Interface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-potentialMatchCleaner-inevents being sent by the event publisher component. 
Event types being considered:  'RELATIONSHIP_CHANGED', 'RELATIONSHIP_CREATED'callback outputKAFKA${env}-internal-async-all-notmatch-callbacksNotMatch requests sent to Manager component for asynchronous processingDependent componentsComponentInterfaceFlowDescriptionManagerMDMIntegrationServiceGetEntitiesByUrisRetrieve multiple entities by providing the list of entity URIsAsyncMDMManagementServiceRouteRelationshipUpdateUpdate relationship object in asynchronous modeEntitiesUpdateUpdate entity object in asynchronous mode - set soft-deleteCrosswalkDeleteRemove Crosswalk from entity/relation in asynchronous modeNotMatchSet Not a Match between two entitiesHub StoreMongo connectionN/AStore cache data in mongo collectionConfigurationMain ConfigurationDefault valueDescriptionkafka.groupId${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). 
Examples: "hello_world", "hello_world-v1.0.0"kafkaOther.num.stream.threads10Number of threads used in the Kafka StreamkafkaOther.default.deserialization.exception.handlercom.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handlerkafkaOther.max.poll.interval.ms3600000Maximum number of milliseconds to wait before the next poll of eventskafkaOther.max.request.size2097152Events message sizegateway.apiKey${gateway.apiKey}API key used in the communication to the Managergateway.logMessagesfalseParameter used to turn on/off logging the payloadgateway.url${gateway.url}Manager URLgateway.userName${gateway.userName}Manager user nameHCO NamesConfig ParameterDefault valueDescriptioncallback.hconames.eventInputTopic${env}-internal-callback-hconame-ininput topiccallback.hconames.HCPCalculateStageTopic${env}-internal-callback-hconame-hcp4calcinternal topiccallback.hconames.intAsyncHCONames${env}-internal-hconames-rel-createoutput topiccallback.hconames.deduplicationWindowDuration10The size of the windows in millisecondscallback.hconames.deduplicationWindowGracePeriod10sThe grace period to admit out-of-order events to a window.callback.hconames.dedupStoreNamehco-name-dedupe-storededuplication topic namecallback.hconames.acceptedEntityEventTypesHCO_CREATED, HCO_CHANGEDaccepted event types for entity objectscallback.hconames.acceptedRelationEventTypesRELATIONSHIP_CREATED, RELATIONSHIP_CHANGEDaccepted event types for relationship objectscallback.hconames.acceptedCountriesAI,AN,AG,AR,AW,BS,BB,BZ,BM,BO,BR,CL,CO,CR,CW,DO,EC,GT,GY,HN,JM,KY,LC,MX,NI,PA,PY,PE,PN,SV,SX,TT,UY,VGlist of countries accepted in further processing callback.hconames.impactedHcpTraverseRelationTypesconfiguration/relationTypes/Activity, configuration/relationTypes/Managed, configuration/relationTypes/RLE.MAIaccepted relationship types to traverse for impacted HCP objectscallback.hconames.mainHCOTraverseRelationTypesconfiguration/relationTypes/Activity, 
configuration/relationTypes/Managed, configuration/relationTypes/RLE.MAIaccepted relationship types to traverse for impacted main HCO objectscallback.hconames.mainHCOTypeCodes.defaultHOSPthe Type code name for the Main HCO objectcallback.hconames.mainHCOStructurTypeCodese.g.: AD:- "WFR.TSR.JUR"- "WFR.TSR.GRN"- "WFR.TSR.ETA"Contains a map where the key is the country and the values are the TypeCodes for the corresponding country. callback.hconames.deduplicationeither callback.hconames.deduplication or callback.hconames.windowSessionDeduplication must be setcallback.hconames.deduplication.durationduration size of time windowcallback.hconames.deduplication.gracePeriodgrace period related to time windowcallback.hconames.deduplication.byteLimitbyte limit of Suppressed.BufferConfigcallback.hconames.deduplication.suppressNamename of Suppressed.BufferConfigcallback.hconames.deduplication.namename of the Grouping step in deduplicationcallback.hconames.deduplication.storageNamewhen switching from callback.hconames.deduplication to callback.hconames.windowSessionDeduplication storageName must be differentname of Materialized Session Storecallback.hconames.deduplication.pingIntervalinterval in which ping messages are being generatedcallback.hconames.windowSessionDeduplicationeither callback.hconames.deduplication or callback.hconames.windowSessionDeduplication must be setcallback.hconames.windowSessionDeduplication.durationduration size of session windowcallback.hconames.windowSessionDeduplication.byteLimitbyte limit of Suppressed.BufferConfigcallback.hconames.windowSessionDeduplication.suppressNamename of Suppressed.BufferConfigcallback.hconames.windowSessionDeduplication.namename of the Grouping step in deduplicationcallback.hconames.windowSessionDeduplication.storageNamewhen switching from callback.hconames.deduplication to callback.hconames.windowSessionDeduplication storageName must be differentname of Materialized Session 
Storecallback.hconames.windowSessionDeduplication.pingIntervalinterval in which ping messages are being generatedPfe HCO NamesConfig ParameterDefault valueDescriptioncallback.pfeHconames.eventInputTopic${env}-internal-callback-hconame-ininput topiccallback.pfeHconames.HCPCalculateStageTopic${env}-internal-callback-hconame-hcp4calcinternal topiccallback.pfeHconames.intAsyncHCONames${env}-internal-hconames-rel-createoutput topiccallback.pfeHconames.timeWindoweither callback.pfeHconames.timeWindow or callback.pfeHconames.sessionWindow must be setcallback.pfeHconames.timeWindow.durationduration size of time windowcallback.pfeHconames.timeWindow.gracePeriodgrace period related to time windowcallback.pfeHconames.timeWindow.byteLimitbyte limit of Suppressed.BufferConfigcallback.pfeHconames.timeWindow.suppressNamename of Suppressed.BufferConfigcallback.pfeHconames.timeWindow.namename of the Grouping step in deduplicationcallback.pfeHconames.timeWindow.storageNamewhen switching from callback.pfeHconames.timeWindow to callback.pfeHconames.sessionWindow storageName must be differentname of Materialized Session Storecallback.pfeHconames.timeWindow.pingIntervalinterval in which ping messages are being generatedcallback.pfeHconames.sessionWindoweither callback.pfeHconames.timeWindow or callback.pfeHconames.sessionWindow must be setcallback.pfeHconames.sessionWindow.durationduration size of session windowcallback.pfeHconames.sessionWindow.byteLimitbyte limit of Suppressed.BufferConfigcallback.pfeHconames.sessionWindow.suppressNamename of Suppressed.BufferConfigcallback.pfeHconames.sessionWindow.namename of the Grouping step in deduplicationcallback.pfeHconames.sessionWindow.storageNamewhen switching from callback.pfeHconames.deduplication to callback.pfeHconames.windowSessionDeduplication storageName must be differentname of Materialized Session Storecallback.pfeHconames.sessionWindow.pingIntervalinterval in which ping messages are being generatedDangling AffiliationsConfig 
ParameterDefault valueDescriptioncallback.danglingAffiliations.eventInputTopic${env}-internal-callback-orphanClean-ininput topiccallback.danglingAffiliations.acceptedEntityEventTypesHCP_REMOVED, HCO_REMOVED, MCO_REMOVED, HCP_INACTIVATED, HCO_INACTIVATED, MCO_INACTIVATEDaccepted entity eventscallback.danglingAffiliations.eventOutputTopic${env}-internal-async-all-orphanCleanoutput topiccallback.danglingAffiliations.relationUpdateHeaders.HubAsyncOperationrel-updatekafka record headercallback.danglingAffiliations.exceptCrosswalkTypesconfiguration/sources/Reltiocrosswalk types to excludeCrosswalk CleanerConfig ParameterDefault valueDescriptioncallback.crosswalkCleaner.eventInputTopic${env}-internal-callback-cleaner-ininput topiccallback.crosswalkCleaner.acceptedEntityEventTypesMCO_CHANGED, HCP_CHANGED, HCO_CHANGEDaccepted entity eventscallback.crosswalkCleaner.acceptedRelationEventTypesRELATIONSHIP_CHANGEDaccepted relation eventscallback.crosswalkCleaner.hardDeleteCrosswalkTypes.alwaysconfiguration/sources/HUB_CallbackHub callback crosswalk namecallback.crosswalkCleaner.hardDeleteCrosswalkTypes.exceptconfiguration/sources/ReltioCleanserReltio cleanser crosswalk namecallback.crosswalkCleaner.hardDeleteCrosswalkRelationTypes.alwaysconfiguration/sources/HUB_CallbackHub callback crosswalk namecallback.crosswalkCleaner.hardDeleteCrosswalkRelationTypes.exceptconfiguration/sources/ReltioCleanserReltio cleanser crosswalk namecallback.crosswalkCleaner.softDeleteCrosswalkTypes.alwaysconfiguration/sources/HUB_USAGETAGCrosswalks list to soft-deletecallback.crosswalkCleaner.softDeleteCrosswalkTypes.whenOneKeyNotExistsconfiguration/sources/IQVIA_PRDP, configuration/sources/IQVIA_RAWDEACrosswalk list to soft-delete when the ONEKEY crosswalk does not existcallback.crosswalkCleaner.softDeleteCrosswalkTypes.exceptconfiguration/sources/HUB_CALLBACK, configuration/sources/ReltioCleanserCrosswalk to excludecallback.crosswalkCleaner.hardDeleteHeaders.HubAsyncOperationcrosswalk-deletekafka 
record headercallback.crosswalkCleaner.hardDeleteRelationHeaders.HubAsyncOperationcrosswalk-relation-deletekafka record headercallback.crosswalkCleaner.softDeleteHeaders.hcp.HubAsyncOperationhcp-updatekafka record headercallback.crosswalkCleaner.softDeleteHeaders.hco.HubAsyncOperationhco-updatekafka record headercallback.crosswalkCleaner.oneKeyconfiguration/sources/ONEKEYONEKEY crosswalk namecallback.crosswalkCleaner.eventOutputTopic${env}-internal-async-all-cleaner-callbacksoutput topiccallback.crosswalkCleaner.softDeleteOneKeyReferbackCrosswalkTypes.referbackLookupCodesHCPIT.RBI, HCOIT.RBIOneKey referback crosswalk lookup codescallback.crosswalkCleaner.softDeleteOneKeyReferbackCrosswalkTypes.oneKeyLookupCodesHCPIT.OK, HCOIT.OKOneKey crosswalk lookup codesNotMatch callback (clean potential match queue)Config ParameterDefault valueDescriptioncallback.potentialMatchLinkCleaner.eventInputTopic${env}-internal-callback-potentialMatchCleaner-ininput topiccallback.potentialMatchLinkCleaner.acceptedRelationEventTypes- RELATIONSHIP_CREATED- RELATIONSHIP_CHANGEDaccepted relation eventscallback.potentialMatchLinkCleaner.acceptedRelationObjectTypes- "configuration/relationTypes/FlextoHCOSAffiliations"- "configuration/relationTypes/FlextoDDDAffiliations"- "configuration/relationTypes/SAPtoHCOSAffiliations"accepted relationship typescallback.potentialMatchLinkCleaner.matchTypesInCache- "AUTO_LINK"- "POTENTIAL_LINK"PotentialMatch cache object typescallback.potentialMatchLinkCleaner.notMatchHeaders.hco.HubAsyncOperationentities-not-match-setkafka record headercallback.potentialMatchLinkCleaner.eventOutputTopic${env}-internal-async-all-notmatch-callbacksoutput topicPreCallback Stream -(rankings)Config ParameterDefault valueDescriptionpreCallback.eventInputTopic${env}-internal-reltio-full-eventsinput topicpreCallback.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for eventspreCallback.internalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput 
topic for callbackspreCallback.mdmIntegrationService.baseURLN/AManager URL defined per environmentpreCallback.mdmIntegrationService.apiKeyN/AManager secret API KEY defined per environmentpreCallback.mdmIntegrationService.logMessagesfalseParameter used to turn on/off logging the payloadpreCallback.skipEventTypesENTITY_MATCHES_CHANGED, ENTITY_AUTO_LINK_FOUND, ENTITY_POTENTIAL_LINK_FOUND, DCR_CREATED, DCR_CHANGED, DCR_REMOVEDEvents skipped in the processingpreCallback.oldEventsDeletion.maintainDuration10mCache duration time (for callbacks MD5 checksum)preCallback.oldEventsDeletion.interval5mCache deletion intervalpreCallback.rankCallback.featureActivationtrueParameter used to enable/disable the Rank featurepreCallback.rankCallback.callbackSourceHUB_CallbackCrosswalk used to update Reltio with Rank attributespreCallback.rankCallback.activationFilter.countriesAG, AI, AN, AR, AW, BB, BL, BM, BO, BR, BS, BZ, CL, CO, CR, CW, DE, DO, EC, ES, FR, GF, GP, GT, GY, HK, HN, ID, IN, IT, JM, JP, KY, LC, MC, MF, MQ, MX, MY, NL, NC, NI, PA, PE, PF, PH, PK, PM, PN, PY, RE, RU, SA, SG, SV, SX, TF, TH, TR, TT, TW, UY, VE, VG, VN, WF, YT, XX, EMPTYList of countries for which the process activates the Rank (different between GBL and GBLUS)preCallback.rankCallback.rawEntityChecksumDedupeStoreNameraw-entity-checksum-dedupe-storetopic name that stores the rawEntity MD5 checksum - used in rank callback deduplicationpreCallback.rankCallback.attributeChangesChecksumDedupeStoreNameattribute-changes-checksum-dedupe-storetopic name that stores the attribute changes MD5 checksum - used in rank callback deduplicationpreCallback.rankCallback.forwardMainEventsDuringPartialUpdatefalseThe parameter used to define if we want to forward partial events. By default it is false, so only events that are fully calculated are sent furtherpreCallback.rankCallback.ignoreAndRemoveDuplicatesfalseThe parameter used when the Ranking group may contain duplicates. 
It is set to False because Reltio now removes duplicated IdentifierspreCallback.rankCallback.activeCleanerCallbacksSpecialityCleanerCallback, IdentifierCleanerCallback, EmailCleanerCallback, PhoneCleanerCallbackList of cleaner callbacks to be activatedpreCallback.rankCallback.activeCallbacksSpecialityCallback, AddressCallback, AffiliationCallback, IdentifierCallback, EmailCallback, PhoneCallbackList of Rankers to be activatedpreCallback.rankTransform.featureActivationtrueParameter defines if the Rank feature should be activated.preCallback.rankTransform.activationFilter.activeRankSorterSpecialtyRankSorter, AffiliationRankSorter, AddressRankSorter, IdentifierRankSorter, EmailRankSorter, PhoneRankSorterpreCallback.rankTransform.rankSortOrder.affiliationN/AThe source order defined for the specific Ranking. Details about the algorithm in:  Affiliation RankSorterpreCallback.rankTransform.rankSortOrder.phoneN/AThe source order defined for the specific Ranking. Details about the algorithm in: Phone RankSorterpreCallback.rankTransform.rankSortOrder.emailN/AThe source order defined for the specific Ranking. Details about the algorithm in: Email RankSorterpreCallback.rankTransform.rankSortOrder.specialitiesN/AThe source order defined for the specific Ranking. Details about the algorithm in: Specialty RankSorterpreCallback.rankTransform.rankSortOrder.identifierN/AThe source order defined for the specific Ranking. Details about the algorithm in: Identifier RankSorterpreCallback.rankTransform.rankSortOrder.addressSource.ReltioN/AThe source order defined for the specific Ranking. Details about the algorithm in: Address RankSorterpreCallback.rankTransform.rankSortOrder.addressesSource.ReltioN/AThe source order defined for the specific Ranking. 
Details about the algorithm in:  Addresses RankSorter" + }, + { + "title": "China Selective Router", + "pageID": "284812312", + "pageLink": "/display/GMDM/China+Selective+Router", + "content": "DescriptionThe china-selective-router component is responsible for enriching events and transforming them from the COMPANY model to the Iqvia model. The component operates asynchronously using Kafka topics. To transform a COMPANY object, it is consumed from the input topic and enriched based on configuration, the hco entity is connected with the mainHco, and as a last step the event model is transformed to the Iqvia model; after all operations the event is sent to the output topic.Technology:  java 11, spring boot, kafka-streams, kafkaCode link: china-selective-routerFlowsTransformation flowExposed interfacesInterface NameTypeEndpoint patternDescriptionEvent transformer topologyKAFKAtopic: {env}-{topic_postfix}Transforms events from the COMPANY model to the Iqvia model and sends them to the output topicDependent componentsComponentInterfaceFlowDescriptionData modelHCPModelConverterN/AConverter to transform an Entity to the COMPANY model or to the Iqvia modelConfigurationConfig ParameterDescriptioneventTransformer: - country: "CN" eventInputTopic: "${env}-internal-full-hcp-merge-cn" eventOutputTopic: "${env}-out-full-hcp-merge-cn" enricher: com.COMPANY.mdm.event_transformer.enricher.ChinaRefEntityProcessor hcoConnector: processor: com.COMPANY.mdm.event_transformer.enricher.ChinaHcoConnectorProcessor transformer: com.COMPANY.mdm.event_transformer.transformer.COMPANYToIqviaEventTransformer refEntity: - type: HCO attribute: ContactAffiliations relationLookupAttribute: RelationType.RelationshipDescription relationLookupCode: CON - type: MainHCO attribute: ContactAffiliations relationLookupAttribute: RelationType.RelationshipDescription relationLookupCode: REL.MAIThe main part of the china-selective-router configuration contains the list of event transformation configurationscountry - specifies the country; the value of this parameter has to be in 
the event country section, otherwise the event will be skippedeventInputTopic - input topiceventOutputTopic - output topicenricher - specifies the class to enrich the event; based on the refEntity configuration this class is responsible for collecting related hco and mainHco entities.hcoConnector.processor - specifies the class to connect the hco with the main hco; this class calls Reltio for all connections by hco uri. Based on the received data, an additional attribute 'OtherHcoToHco' is created, containing the mainHco entity collected by the enricher.hcoConnector.enabled - enables or disables the hcoConnectorhcoConnector.hcoAttrName - specifies the additional attribute name under which the connected mainHco is placedhcoConnector.outRelations - specifies the list of out relations to filter when calling Reltio for hco connectionsrefEntity - contains the list of attributes with information about the HCO or MainHCO entity (refEntity uri)refEntity.type - type of entity: HCO or MainHcorefEntity.attribute - the base attribute to search for the entityrefEntity.relationLookupAttribute - the attribute to search for the lookupCode deciding which entity we are looking forrefEntity.relationLookupCode - the code specifying the entity type" + }, + { + "title": "Component Template", + "pageID": "164469941", + "pageLink": "/display/GMDM/Component+Template", + "content": "DescriptionTechnology:Code link:FlowsExposed interfacesInterface NameTypeEndpoint patternDescriptionREST API|KAFKADependent componentsComponentInterfaceFlowDescriptionfor whatConfigurationConfig ParameterDefault valueDescription" + }, + { + "title": "DCR Service", + "pageID": "209949312", + "pageLink": "/display/GMDM/DCR+Service", + "content": "" + }, + { + "title": "DCR Service 2", + "pageID": "218444525", + "pageLink": "/display/GMDM/DCR+Service+2", + "content": "DescriptionResponsible for the DCR processing. The client (PforceRx) sends the DCRs through the REST API, the DCRs are routed to the target system (OneKey/Veeva Opendata/Reltio), and the client retrieves the status of the DCR using the status API. 
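The client contract described above (submit DCRs through the REST API, then poll the status API until the DCR reaches a final status) can be sketched in Python; `fetch_status` and the simulated responses are hypothetical stand-ins for real `GET /dcr/_status` calls:

```python
import time

FINAL_STATUSES = {"ACCEPTED", "REJECTED"}  # final DCR statuses in this flow

def poll_dcr_status(fetch_status, ext_request_id, interval_s=0, max_attempts=10):
    """Poll the DCR status API until the DCR reaches a final status.

    fetch_status is any callable returning the dcrStatus.status string for the
    given external request id (e.g. a wrapper around GET {api_url}/dcr/_status/{id}).
    """
    for _ in range(max_attempts):
        status = fetch_status(ext_request_id)
        if status in FINAL_STATUSES:
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"DCR {ext_request_id} did not reach a final status")

# Simulated status sequence standing in for real API responses:
responses = iter(["CREATED", "CREATED", "ACCEPTED"])
result = poll_dcr_status(lambda _id: next(responses), "CA-VR-00255752")
```

In production the polling interval and attempt limit would depend on how quickly the target system (OneKey/Veeva/Reltio) processes the request.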
Service also contains Kafka-streams functionality to process the DCR updates asynchronously and update the DCRRegistry cache.Services are accessible with REST API.Applies transformations to the Kafka input stream producing the Kafka output stream.Technology: java 8, spring boot, MongoDB, Kafka-StreamsCode link: dcr-service-2 FlowsPforceRx DCR flowsCreate DCRDCR state changeGet DCR statusOneKey: create DCR method (submitVR) - directOneKey: generate DCR Change Events (traceVR)OneKey: process DCR Change EventsVeeva: create DCR method (storeVR)Veeva: generate DCR Change Events (traceVR)Veeva: process DCR Change EventsReltio: create DCR method - directReltio: process DCR Change EventsExposed interfacesREST APIInterface NameTypeEndpoint patternDescriptionCreate DCRsREST APIPOST /dcrCreate DCRsGET DCRs statusREST APIGET /dcr/statusGET DCRs statusOneKey StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-onekey-dcr-change-events-inEvents generated by the OneKey component after OneKey DataSteward Action. Flow responsible for events generation is OneKey: generate DCR Change Events (traceVR)output  - callbacksMongomongoDCR Registry updated Veeva OpenData StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-veeva-dcr-change-events-inEvents generated by the Veeva component after Veeva DataSteward Action. Flow responsible for events generation is Veeva: generate DCR Change Events (traceVR)output  - callbacksMongomongoDCR Registry updated Reltio StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-reltio-dcr-change-events-inEvents generated by Reltio after DataSteward Action. 
Published by the event-publisher component selector: "(exchange.in.headers.reconciliationTarget==null) && exchange.in.headers.eventType in ['full'] && exchange.in.headers.eventSubtype in ['DCR_CREATED', 'DCR_CHANGED', 'DCR_REMOVED']" output  - callbacksMongomongoDCR Registry updated Dependent componentsComponentInterfaceFlowDescriptionAPI RouterAPI routingCreate DCRroute the requests to the DCR-Service componentManagerMDMIntegrationServiceGetEntitiesByUrisRetrieve multiple entities by providing the list of entity URIsGetEntityByIdget entity by the idGetEntityByCrosswalkget entity by the crosswalkCreateDCRcreate change requests in ReltioOK DCR ServiceOneKeyIntegrationServiceCreateDCRcreate VR in OneKeyVeeva DCR ServiceThirdPartyIntegrationServiceCreateDCRcreate VR in VeevaAt the moment only Veeva implements this interface; however, in the future OneKey will be exposed via this interface as well  Hub StoreMongo connectionN/AStore cache data in mongo collectionTransaction LoggerTransactionServiceTransactionsSaves each DCR status change in transactionsConfigurationConfig ParameterDefault valueDescriptionkafka.groupId${env}_dcr2The application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). 
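The application-ID naming recommendation above can be sanity-checked with a small sketch; the helper is illustrative, not part of the service:

```python
import re

# Allowed characters per the recommendation above: alphanumerics, '.', '-', '_'
APP_ID_PATTERN = re.compile(r"^[A-Za-z0-9._-]+$")

def is_valid_group_id(group_id: str) -> bool:
    """Check a Kafka Streams application ID against the recommended charset."""
    return bool(APP_ID_PATTERN.match(group_id))

print(is_valid_group_id("dev_dcr2"))            # True
print(is_valid_group_id("hello_world-v1.0.0"))  # True
print(is_valid_group_id("bad id!"))             # False
```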
Examples: "hello_world", "hello_world-v1.0.0"kafkaOther.num.stream.threads10Number of threads used in the Kafka StreamkafkaOther.default.deserialization.exception.handlercom.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handlerkafkaOther.ssl.engine.factory.classcom.COMPANY.mdm.common.security.CustomTrustStoreSslEngineFactorySSL configkafkaOther.partitioner.classcom.COMPANY.mdm.common.ping.PingPartitionerPing partitioner required in Kafka Streams application with PING servicekafkaOther.max.poll.interval.ms3600000Maximum number of milliseconds to wait before the next poll of eventskafkaOther.max.poll.records10Number of records downloaded in one poll from KafkakafkaOther.max.request.size2097152Maximum event message sizedataStewardResponseConfig: reltioResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-reltio-dcr-change-events-in    sendTo3PartyDecisionTable:      - target: Veeva        decisionProperties:          sourceName: "VEEVA_CROSSWALK"      - target: Veeva        decisionProperties:          countries: ["ID","PK","MY","TH"]      - target: OneKey    sendTo3PartyTopics:      Veeva:        - ${env}-internal-sendtothirdparty-ds-requests-in      OneKey:        - ${env}-internal-onekeyvr-ds-requests-in VeevaResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-veeva-dcr-change-events-in  onekeyResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-onekey-dcr-change-events-in maxRetryCounter: 20 deduplication: duration: 2m gracePeriod: 0s byteLimit: 2147483648 suppressName: dcr2-onekey-response-stream-suppress name: dcr2-onekey-response-stream-with-delay storeName: dcr2-onekey-response-window-deduplication-store pingInterval: 1m- ${env}-internal-reltio-dcr-change-events-in- ${env}-internal-onekey-dcr-change-events-in- ${env}-internal-veeva-dcr-change-events-in- ${env}-internal-sendtothirdparty-ds-requests-in- ${env}-internal-onekeyvr-ds-requests-inConfiguration related to the event 
processing from Reltio, OneKey or VeevaDeduplication is related to OneKey and allows configuring the aggregation window for events (processing daily) - 24hMaxRetryCounter should be set to a high number - 1000000targetDecisionTable: - target: Reltio decisionProperties: userName: "mdm_dcr2_test_reltio_user" - target: OneKey decisionProperties: userName: "mdm_dcr2_test_onekey_user" - target: Veeva    decisionProperties:      sourceName: "VEEVA_CROSSWALK" - target: Veeva    decisionProperties:      countries: ["ID","PK","MY","TH"] - target: Reltio decisionProperties: country: GBList of the following combinations of attributesEach attribute in the configuration is optional. The decision table performs validation based on the input request and the main object - the main object is HCP; if the HCP is empty, the decision table checks HCO. The result of the decision table is the TargetType, i.e. the routing to the Reltio MDM system, OneKey or the Veeva service. userName the user name that executes the requestsourceNamethe source name of the Main objectcountrythe country defined in the requestoperationTypethe operation type for the Main object{ insert, update, delete }affectedAttributesthe list of attributes that the user is changingaffectedObjects{ HCP, HCO, HCP_HCO}RESULT →  TargetType {Reltio, OneKey, Veeva}PreCloseConfig: acceptCountries: - "IN" - "SA"   rejectCountries: - "PL" - "GB"DCRs with countries that belong to the acceptCountries attribute are automatically accepted (PRE_APPROVED), or rejected (PRE_REJECTED) when they belong to rejectCountries. 
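A minimal sketch of how such a decision table can be evaluated, using the example targetDecisionTable entries above; the rule representation and helper are hypothetical, not the service's actual implementation:

```python
# Rules mirror the example targetDecisionTable config: each rule lists optional
# decisionProperties, and a rule matches only if every property it defines
# agrees with the incoming request. The first matching rule wins.
RULES = [
    {"target": "Reltio", "when": {"userName": "mdm_dcr2_test_reltio_user"}},
    {"target": "OneKey", "when": {"userName": "mdm_dcr2_test_onekey_user"}},
    {"target": "Veeva",  "when": {"sourceName": "VEEVA_CROSSWALK"}},
    {"target": "Veeva",  "when": {"countries": ["ID", "PK", "MY", "TH"]}},
    {"target": "Reltio", "when": {"country": "GB"}},
]

def resolve_target(request):
    """Return the TargetType (Reltio/OneKey/Veeva) for the first matching rule."""
    for rule in RULES:
        matched = True
        for key, expected in rule["when"].items():
            if key == "countries":  # list-valued property: membership test
                matched = request.get("country") in expected
            else:
                matched = request.get(key) == expected
            if not matched:
                break
        if matched:
            return rule["target"]
    return None  # no rule matched

target = resolve_target({"userName": "svc-x", "sourceName": "VEEVA_CROSSWALK", "country": "TH"})
```

Here the first two rules fail on userName, so the sourceName rule routes the request to Veeva.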
acceptCountriesList of values, example: [ IN, GB, PL , ...]rejectCountriesList of values, example: [ IN, GB, PL ]transactionLogger: simpleDCRLog: enable: true kafkaEfk: enable: trueTransaction ServiceThe configuration that enables/disables the transaction loggeroneKeyClient: url: http://devmdmsrv_onekey-dcr-service_1:8092 userName: dcr_service_2_userOneKey Integration ServiceThe configuration that allows connecting to the OneKey DCR serviceVeevaClient: url: http://localhost:8093 username: user apiKey: ""Veeva Integration Service The configuration that allows connecting to the Veeva DCR servicemanager: url: https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/${env}/gw userName:dcr_service_2_user logMessages: true timeoutMs: 120000MDM Integration ServiceThe configuration that allows connecting to the Reltio serviceIndexesDCR Service 2 Indexes" + }, + { + "title": "DCR service connect guide", + "pageID": "415221200", + "pageLink": "/display/GMDM/DCR+service+connect+guide", + "content": "IntroductionThis guide provides comprehensive instructions on integrating new client applications with the DCR (Data Change Request) service in the MDM HUB system. It is intended for technical engineers, client architects, solution designers, and MDM/Mulesoft teams.Table of ContentsOverviewThe DCR service processes Data Change Requests (DCRs) sent by clients through a REST API. These DCRs are routed to target systems such as OneKey, Veeva Opendata, or Reltio. The service also includes Kafka-streams functionality to process DCR updates asynchronously and update the DCRRegistry cache.Access to the DCR API should be confirmed in advance with the P.O. MDM HUB → A.J. VarganinGetting StartedPrerequisitesAPI credentials (username and password)Network configurations (DNS, VPN, updated whitelists to allow you to access the API endpoints)Setup InstructionsCreate MDM HUB User: Follow the SOP to add a direct API user to the HUB.  
Complete the steps outlined in → Add Direct API User to HUBObtain Access Token: Use PingFederate to acquire an access tokenAPI OverviewEndpointsCreate DCR: POST /dcrGet DCR Status: GET /dcr/statusGet Multiple DCR Statuses: GET /dcr/_statusGet Entity Details: GET /{objectUri}MethodsGET: Retrieve informationPOST: Create new DCRsAuthentication and AuthorizationThe first step is to acquire an access token. If you are connecting to the MDM HUB API for the first time, you should create an MDM HUB user. Once you have the PingFederate username and password, you can acquire the access token.Obtaining Access TokenRequest Token:\ncurl --location --request POST 'https://devfederate.COMPANY.com/as/token.oauth2?grant_type=client_credentials' \\ // Use devfederate for DEV & UAT, stgfederate for STAGE, prodfederate for PROD\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--header 'Authorization: Basic Base64-encoded(username:password)'\n\nResponse:\n{\n "access_token": "12341SPRtjWQzaq6kgK7hXkMVcTzX", \n "token_type": "Bearer",\n "expires_in": 1799 // The token expires after "expires_in" seconds. Once the token expires, it must be refreshed.\n}\nBelow you can see how Postman should be configured to obtain the access_tokenUsing Access TokenInclude the access token in the Authorization header for all API requests.Network ConfigurationRequired SettingsDNS: Ensure DNS resolution for MDM HUB endpointsVPN: Configure VPN access if requiredWhitelists: Add necessary IP addresses to the whitelistCreating DCRsThis method is used to create new DCR objects in the MDM HUB system. 
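The `Authorization: Basic` header in the token request above is the Base64 encoding of `username:password`; a minimal sketch (the credentials are placeholders, not real users):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header for the PingFederate token request:
    'Basic ' + Base64(username:password), as in the curl example above."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Placeholder credentials, for illustration only:
print(basic_auth_header("my_user", "my_pass"))
```

The resulting string goes into the `Authorization` header of the `POST /as/token.oauth2` request; the returned Bearer token is then used for all subsequent API calls.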
Below is an example request to create a new HCP object in the MDM system.More examples and the entire data model can be found at:DCR service swaggerExample RequestCreate new HCP\ncurl --location '{api_url}/dcr' \\ // e.g., https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev\n--header 'Content-Type: application/json' \\\n--header 'Authorization: Bearer ${access_token_value}' \\ // e.g., 0001WvxKA16VWwlufC2dslSILdbE\n--data-raw '[\n {\n "country": "${dcr_country}", // e.g., CA\n        "createdBy": "${created_by}", // e.g., Test user\n        "extDCRComment": "${external_system_comment}", // e.g., This is test DCR to create new HCP\n        "extDCRRequestId": "${external_system_request_id}", // e.g., CA-VR-00255752\n        "dcrType": "${dcr_type}", // e.g., PforceRxDCR\n        "entities": [\n {\n "@type": "hcp",\n "action": "insert",\n "updateCrosswalk": {\n "type": "${source_system_name}", // e.g., PFORCERX \n                    "value": "${source_system_value}" // e.g., HCP-CA-VR-00255752 \n                },\n "values": {\n "birthDate": "07-08-2017",\n "birthYear": "2017",\n "firstName": "Maurice",\n "lastName": "Brekke",\n "title": "HCPTIT.1118",\n "middleName": "Karen",\n "subTypeCode": "HCPST.A",\n "addresses": [\n {\n "action": "insert",\n "values": {\n "sourceAddressId": {\n "source": "${source_system_name}", // e.g., PFORCERX\n                                    "id": "${address_source_system_value}"   // e.g., ADR-CA-VR-00255752 \n                                },\n "addressLine1": "08316 McCullough Terrace",\n "addressLine2": "Waynetown",\n "addressLine3": "Designer Books gold parsing",\n "addressType": "AT.OFF",\n "buildingName": "Handmade Cotton Shirt",\n "city": "Singapore",\n "country": "SG",\n "zip": "ZIP 5"\n }\n }\n ] \n }\n }\n ]\n }\n]'\nRequest placeholders:parameter namedescriptionexampleapi_urlAPI router URLhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-devaccess_token_valueAccess token 
value0001WvxKA16VWwlufC2dslSILdbEdcr_countryMain entity countryCAcreated_byCreated by userTest userexternal_system_commentComment that will be populated to the next processing stepsThis is test DCRexternal_system_request_idID for tracking DCR processingCA-VR-00255752dcr_typeProvided by the MDM HUB team when the user with DCR permission is createdPforceRxDCRsource_system_nameSource system name. The user used to invoke the request has to have access to this sourcePFORCERXsource_system_valueID of this object in the source systemHCO-CA-VR-00255752address_source_system_valueID of the address in the source systemADR-CA-VR-00255752Handling ResponsesSuccess ResponseCreate DCR success response\n[\n {\n "requestStatus": "${request_status}", // e.g., REQUEST_ACCEPTED\n        "extDCRRequestId": "${external_system_request_id}",   // e.g., CA-VR-00255752\n        "dcrRequestId": "${mdm_hub_dcr_request_id}",   // e.g., 4a480255a4e942e18c6816fa0c89a0d2\n        "targetSystem": "${target_system_name}",   // e.g., Reltio\n        "country": "${dcr_request_country}",   // e.g., CA\n        "dcrStatus": {\n "status": "CREATED",\n "updateDate": "2024-05-07T11:22:10.806Z",\n "dcrid": "${reltio_dcr_status_entity_uri}"   // e.g., entities/0HjtwJO\n        }\n }\n]\nResponse placeholders:parameterdescriptionexampleexternal_system_request_idDCR request id in source systemCA-VR-00255752mdm_hub_dcr_request_idDCR request id in MDM HUB system4a480255a4e942e18c6816fa0c89a0d2target_system_nameDCR target system name, one of values: OneKey, Reltio, VeevaReltiodcr_request_countryDCR request countryCArequest_statusDCR request status, one of values: REQUEST_ACCEPTED, REQUEST_FAILED, REQUEST_REJECTEDREQUEST_ACCEPTEDreltio_dcr_status_entity_uriURI of DCR status entity in Reltio systementities/0HjtwJORejected Response\n[\n {\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "DuplicateRequestException -> Request [97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c] has already been processed",\n "errorCode": "DUPLICATE_REQUEST",\n 
"extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c"\n }\n]\nFailed Response\n[\n {\n "requestStatus": "REQUEST_FAILED",\n "errorMessage": "Target lookup code not found for attribute: HCPTitle, country: SG, source value: HCPTIT.111218.",\n "errorCode": "VALIDATION_ERROR",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e712121121c"\n }\n]\nIn case of incorrect user configuration in the system, the API will return errors as follows. In these cases, please contact the MDM HUB team.Getting DCR statusProcessing of a DCR will take some time. The DCR status can be tracked via Get DCR Status API calls. DCR processing ends when it reaches a final status: ACCEPTED or REJECTED. When the DCR gets the ACCEPTED status, the following fields will appear in its status: "objectUri" and "COMPANYCustomerId". These can be used to find created/modified entities in the MDM system. Full documentation can be found at → Get DCR status.Example RequestBelow is an example query for the selected external_system_request_id\ncurl --location '{api_url}/dcr/_status/${external_system_request_id}' \\ // e.g., CA-VR-00255752 \n--header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nHandling ResponsesSuccess Response\n{\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "8600ca9a-c317-45d0-97f6-152f01d70158",\n "dcrRequestId": "a2848f2a573344248f78bff8dc54871a",\n "targetSystem": "Reltio",\n "country": "AU",\n "dcrStatus": {\n "status": "ACCEPTED",\n "objectUri": "entities/0Hhskyx", // \n "COMPANYCustomerId": "03-102837896", // usually HCP. HCO only when creating or updating HCO without references to HCP in DCR request\n        "updateDate": "2024-05-07T11:47:08.958Z",\n "changeRequestUri": "changeRequests/0N38Jq0",\n "dcrid": "entities/0EUulla"\n }\n}\nRejected Response\n{\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "Received DCR_CHANGED event, updatedBy: svc-pfe-mdmhub, on 1714378259964. 
Updating DCR status to: REJECTED",\n "extDCRRequestId": "b9239835-937e-434d-948c-6a282a736c4f",\n "dcrRequestId": "0b4125648b6c4d9cb785856841f7d65d",\n "targetSystem": "Veeva",\n "country": "HK",\n "dcrStatus": {\n "status": "REJECTED",\n "updateDate": "2024-04-29T08:11:06.555Z",\n "comment": "This DCR was REJECTED by the VEEVA Data Steward with the following comment: [A-20022] Veeva Data Steward: Your request has been rejected..",\n "changeRequestUri": "changeRequests/0IojkYP",\n "dcrid": "entities/0qmBUXU"\n }\n}\nGetting multiple DCR statusesMultiple statuses can be selected at once using the DCR status filtering APIExample RequestFilter DCR status\ncurl --location '{api_url}/dcr/_status?updateFrom=2021-10-17T20%3A31%3A31.424Z&updateTo=2023-10-17T20%3A31%3A31.424Z&limit=5&offset=3' \\\n--header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nExample ResponseSuccess Response\n[\n {\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "8d3eb4f7-7a08-4813-9a90-73caa7537eba",\n "dcrRequestId": "360d152d58d7457ab6a0610b718b6b8b",\n "targetSystem": "OneKey",\n "country": "AU",\n "dcrStatus": {\n "status": "ACCEPTED",\n "objectUri": "entities/05jHpR1",\n "COMPANYCustomerId": "03-102429068",\n "updateDate": "2023-10-13T05:43:02.007Z",\n "comment": "ONEKEY response comment: ONEKEY accepted response - HCP EID assigned\\nONEKEY HCP ID: WUSM03999911",\n "changeRequestUri": "8b32b8544ede4c72b7adfa861b1dc53f",\n "dcrid": "entities/04TxaQB"\n }\n },\n {\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "b66be6bd-655a-47f8-b78b-684e80166096",\n "dcrRequestId": "becafcb2cd004c1d89ecfc670de1de70",\n "targetSystem": "Reltio",\n "country": "AU",\n "dcrStatus": {\n "status": "ACCEPTED",\n "objectUri": "entities/06SVUCq",\n "COMPANYCustomerId": "03-102429064",\n "updateDate": "2023-10-13T05:35:08.597Z",\n "comment": "26498057 [svc-pfe-mdmhub][1697175298895] -",\n "changeRequestUri": "changeRequests/06sXnXH",\n "dcrid": 
"entities/08LAHeQ"\n }\n }\n]\nGet entityThis method is used to prepare a DCR request for modifying entities and to validate the created/modified entities in the DCR process. Use the "objectUri" field available after accepting the DCR to query MDM system.Example RequestGet entity request\ncurl --location '{api_url}/${objectUri}' \\ // e.g., https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev, entities/05jHpR1\n --header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nExample ResponseSuccess ResponseGet entity response\n{\n "type": "configuration/entityTypes/HCP",\n "uri": "entities/06SVUCq",\n "createdBy": "svc-pfe-mdmhub",\n "createdTime": 1697175293866,\n "updatedBy": "Re-cleansing of null in tenant 2NBAwv1z2AvlkgS background task. (started by test.test@COMPANY.com)",\n "updatedTime": 1713375695895,\n "attributes": {\n "COMPANYGlobalCustomerID": [\n {\n "uri": "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",\n "value": "03-102429064",\n "ov": true\n }\n ],\n "TypeCode": [\n {\n "uri": "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n "type": "configuration/entityTypes/HCP/attributes/TypeCode",\n "value": "HCPT.NPRS",\n "ov": true\n }\n ],\n "Addresses": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n "value": {\n "AddressType": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressType",\n "value": "TYS.P",\n "ov": true\n }\n ],\n "COMPANYAddressID": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/COMPANYAddressID",\n "value": "7001330683",\n "ov": true\n }\n ],\n "AddressLine1": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n "type": 
"configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1",\n "value": "addressLine1",\n "ov": true\n }\n ],\n "AddressLine2": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2",\n "value": "addressLine2",\n "ov": true\n }\n ],\n "AddressLine3": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine3",\n "value": "addressLine3",\n "ov": true\n }\n ],\n "City": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/City",\n "value": "city",\n "ov": true\n }\n ],\n "Country": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Country",\n "value": "GB",\n "ov": true\n }\n ],\n "Zip5": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5",\n "value": "zip5",\n "ov": true\n }\n ],\n "Source": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n "value": {\n "SourceName": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceName",\n "value": "PforceRx",\n "ov": true\n }\n ],\n "SourceAddressID": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceAddressID",\n "value": "string",\n "ov": true\n }\n ]\n },\n "ov": true,\n "label": "PforceRx"\n }\n ],\n "VerificationStatus": [\n {\n "uri": 
"entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus",\n "value": "Unverified",\n "ov": true\n }\n ],\n "VerificationStatusDetails": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatusDetails",\n "value": "Address Verification Status is unverified - unable to verify. the output fields will contain the input data.\\nPost-Processed Verification Match Level is 0 - none.\\nPre-Processed Verification Match Level is 0 - none.\\nParsing Status isidentified and parsed - All input data has been able to be identified and placed into components.\\nLexicon Identification Match Level is 0 - none.\\nContext Identification Match Level is 5 - delivery point (postbox or subbuilding).\\nPostcode Status is PostalCodePrimary identified by context - postalcodeprimary identified by context.\\nThe accuracy matchscore, which gives the similarity between the input data and closest reference data match is 100%.",\n "ov": true\n }\n ],\n "AVC": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AVC",\n "value": "U00-I05-P1-100",\n "ov": true\n }\n ],\n "AddressRank": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressRank",\n "value": "1",\n "ov": true\n }\n ]\n },\n "ov": true,\n "label": "TYS.P - addressLine1, addressLine2, city, zip5, GB"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/ReltioCleanser",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/dZrp03j",\n "reltioLoadDate": 1697175300805,\n "createDate": 1697175303886,\n "updateDate": 1697175303886,\n "attributes": [\n 
"entities/06SVUCq/attributes/Addresses/dZqkSDv",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W"\n ]\n },\n {\n "type": "configuration/sources/Reltio",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/dZqkNxf",\n "reltioLoadDate": 1697175300805,\n "createDate": 1697175300805,\n "updateDate": 1697175300805,\n "attributes": [\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB"\n ],\n "singleAttributeUpdateDates": {\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR": 
"2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB": "2023-10-13T05:35:00.805Z"\n }\n },\n {\n "type": "configuration/sources/HUB_CALLBACK",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/LoT0kPG",\n "reltioLoadDate": 1697175429294,\n "createDate": 1697175296673,\n "updateDate": 1697175296673,\n "attributes": [\n "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv"\n ],\n "singleAttributeUpdateDates": {\n "entities/06SVUCq/attributes/TypeCode/LoT0XcU": "2023-10-13T05:34:56.673Z",\n "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2": "2023-10-13T05:37:09.294Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj": "2023-10-13T05:35:08.420Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv": "2023-10-13T05:35:08.420Z"\n }\n }\n ]\n}\nRejected ResponseEntity not found response\n{\n "code": "404",\n "message": "Entity not found"\n}\nTroubleshooting GuideAll documentation with a detailed description of flows can be found at → PforceRx DCR flowsCommon Issues and SolutionsDuplicate Request:Error Message: "DuplicateRequestException -> Request [ID] has already been processed."Solution: Ensure that the extDCRRequestId is unique for each request.  This ID is used to track DCR processing and prevent duplicate submissions. 
Generate a new unique ID for every new DCR request.Validation Error:Error Message: "Target lookup code not found for attribute: [Attribute], country: [Country], source value: [Value]."Solution: This error indicates that the provided attribute values or lookup codes are incorrect or not recognized by the system.Verify Attribute Values: Double-check the attribute values in your request against the expected values and formats documented in the API specification (Swagger documentation).Correct Lookup Codes: Ensure that you are using the correct lookup codes for attributes that require them (e.g., country codes, title codes). Example: If you receive "Target lookup code not found for attribute: HCPTitle, country: SG, source value: HCPTIT.111218.", verify that 'HCPTIT.111218' is a valid HCP Title code for Singapore ('SG').Network Errors:Issue: Unable to connect to the DCR API endpoint. Common errors include "Connection refused," "Timeout," "DNS resolution failure."Solutions:Verify Network Connectivity: Use the ping command (e.g., ping api-amer-nprod-gbl-mdm-hub.COMPANY.com) to check if the API endpoint is reachable. Use traceroute to diagnose network path issues.Check VPN Connection: If VPN access is required, ensure that your VPN connection is active and correctly configured.Firewall Settings: Confirm that your firewall rules are not blocking outbound traffic on the necessary ports (typically 443 for HTTPS) to the API endpoint. Contact your network administrator to verify firewall settings if needed.DNS Resolution: Ensure that your DNS server is correctly resolving the MDM HUB API endpoint hostname to an IP address.Authentication Errors:Issue: API requests are rejected due to authentication failures. 
Common errors include "Invalid credentials," "Token expired," "Unauthorized."Solutions:Verify API Credentials: Double-check that you are using the correct username and password for API access.Access Token Validity: If using Bearer Token authentication, ensure that your access token is valid and not expired. Access tokens typically have a limited lifespan (e.g., 30 minutes).Token Refresh: Implement token refresh logic in your client application to automatically obtain a new access token when the current one expires.Authorization Header: Verify that you are including the access token correctly in the Authorization header of your API requests, using the "Bearer " scheme (e.g., Authorization: Bearer ).Service Unavailable Errors:Issue: Intermittent API connectivity issues or request failures with "503 Service Unavailable" or "500 Internal Server Error" responses.Solutions:Check Service Status: Check if there is a known outage or maintenance activity for the MDM HUB service. A service status page may be available (check with the MDM HUB team).Retry Requests: Implement retry logic in your client application to handle transient service interruptions. 
Use exponential backoff to avoid overwhelming the API service during recovery.Contact Support: If the issue persists, contact the MDM HUB support team to report the service unavailability and get further assistance.Missing Configuration for UserError Message: "RuntimeException -> User [User] dcrServiceConfig is missing."Missing dcr service configuration\n[\n {\n "requestStatus": "REQUEST_FAILED",\n "errorMessage": "RuntimeException -> User test_user dcrServiceConfig is missing",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e7b11c"\n }\n]\nSolution: Contact the MDM HUB team to ensure the user configuration is correctly set up.Permission Denied to create DCR:Error Message: "User is not permitted to perform: [Action]"Missing role\n{\n "code": "403",\n "message": "User is not permitted to perform: CREATE_DCR"\n}\nSolution: Ensure the user has the necessary permissions to perform the action.Verify User Permissions: Contact the MDM HUB team or your MDM HUB administrator to verify that your user account has the necessary roles and permissions to perform the requested action (e.g., CREATE_DCR, GET_DCR_STATUS) and access the specified DCR type (e.g., PforceRxDCR).DCR Type Access: Ensure that your user configuration includes access to the specific DCR type you are trying to use.Validation Error:Error Message: "ValidationException -> User [User] doesn't have access to PforceRXDCR dcrType."Invalid dcr service configuration\n[\n {\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "ValidationException -> User test_user doesn't have access to PforceRXDCR dcrType",\n "errorCode": "VALIDATION_ERROR",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e71212112121c"\n }\n]\nDescription: This error occurs when the user does not have the necessary permissions to access a specific DCR type (PforceRXDCR) in the MDM HUB system.Possible Causes:The user has not been granted the required permissions for the specified DCR typeThe user configuration is incomplete or 
incorrectSolution:Verify User Permissions: Ensure that the user has been granted the necessary permissions to access the PforceRXDCR DCR type. This can be done by checking the user roles and permissions in the MDM HUB system" + }, + { + "title": "Entity Enricher", + "pageID": "164469912", + "pageLink": "/display/GMDM/Entity+Enricher", + "content": "DescriptionAccepts simple events on the input. Performs the following calls to Reltio:getEntitiesByUrisgetRelationgetChangeRequestfindEntityCountryProduces the events enriched with the targetEntity / targetRelation field retrieved from RELTIO.Technology: java 8, spring boot, mongodb, kafka-streamsCode link: entity-enricher Exposed interfacesInterface NameTypeEndpoint patternDescriptionentity enricher inputKAFKA${env}-internal-reltio-eventsevents being sent by the event publisher component. Event types being considered: HCP_*, HCO_*, ENTITY_MATCHES_CHANGEDentity enricher outputKAFKA${env}-internal-reltio-full-eventsDependent componentsComponentInterfaceFlowDescriptionManagerMDMIntegrationServicegetEntitiesByUrisgetRelationgetChangeRequestfindEntityCountryConfigurationConfig ParameterDefault valueDescriptionbundle.enabletrueenable / disable functionbundle.inputTopics${env}-internal-reltio-eventsinput topicbundle.threadPoolSize10number of thread pool sizebundle.pollDuration10spoll intervalbundle.outputTopic${env}-internal-reltio-full-eventsoutput topickafka.groupId${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). 
Examples: "hello_world", "hello_world-v1.0.0"bundle.kafkaOther.session.timeout.ms30000bundle.kafkaOther.max.poll.records10bundle.kafkaOther.max.poll.interval.ms300000bundle.kafkaOther.auto.offset.resetearliestbundle.kafkaOther.enable.auto.commitfalsebundle.kafkaOther.max.request.size2097152bundle.gateway.apiKey${gateway.apiKey}bundle.gateway.logMessagesfalsebundle.gateway.url${gateway.url}bundle.gateway.userName${gateway.userName}" + }, + { + "title": "HUB APP", + "pageID": "302700538", + "pageLink": "/display/GMDM/HUB+APP", + "content": "DescriptionHUB UI is a front-end application that presents basic information about the MDM HUB cluster.This component allows you to manage Kafka and Airflow Dags or view quality service configuration.The app allows users to log in with their COMPANY accounts.Technology: AngularCode link: mdm-hub-appFlowsUser flowsAdmin flowsAccess:Add new role and add users to the UIDependent componentsComponentInterfaceDescriptionMDM ManagerREST APIUsed to fetch quality service configuration and for testing entitiesMDM AdminREST APIUsed to manage kafka, airflow dags and reconciliation serviceConfigurationComponent is configured via environment variablesEnvironment variableDefault valueDescriptionBACKEND_URIN/AMDM Manager URIADMIN_URIN/AMDM Admin URIINGRESS_PREFIXN/AApplication context path" + }, + { + "title": "Hub Store", + "pageID": "164469908", + "pageLink": "/display/GMDM/Hub+Store", + "content": "Hub store is a mongo cache where the following are stored: EntityHistory, EntityMatchesHistory, EntityRelation.ConfigurationConfig ParameterDefault valueDescriptionmongo:host: ***:27017,***:27017,***:27017dbName: reltio_${env}user: ***url: mongodb://${mongo.user}:${mongo.password}@${mongo.host}/${mongo.dbName}Mongo DB connection configuration" + }, + { + "title": "Inc batch channel", + "pageID": "302686382", + "pageLink": "/display/GMDM/Inc+batch+channel", + "content": "DescriptionResponsible for ETL loads of data to Reltio. It takes plain data files (e.g. 
txt, csv) and, based on defined mappings, converts it into json objects, which are then sent to Reltio.Code link: inc-batch-channelFlowsIncremental batch Dependent componentsComponentInterface nameDescriptionManagerKafkaEvents constructed by inc-batch-channel are transferred to the kafka topic, from where they are read by mdm-manager and sent to Reltio. When the event is processed by Reltio, the manager sends an ACK message on the appropriate topic:Example input topic: gbl-prod-internal-async-all-sapExample ACK topic: gbl-prod-internal-async-all-sap-ackBatch ServiceBatch ControllerUsed to store ETL loads state and statistics. All information is placed in mongodbMongoDb collectionsGenBatchDags - stores dag stages stateGenBatchAttributeHisotry - stores state of objects loaded by inc-batch-channelgenBatchLastBatchIds - last batch id for every batchgenBatchProcessorStartTime - start time of all batch stagesgenBatchTagMappings -ConfigurationConnectionsmongoConnectionProps.dbUrlFull Mongo DB URLmongoConnectionProps.mongo.dbNameMongo database namekafka.serversKafka Hostname kafka.groupIdBatch Service component group namekafka.saslMechanismSASL configurationkafka.securityProtocolSecurity Protocolkafka.sslTruststoreLocationSSL truststore file locationkafka.sslTruststorePasswordSSL truststore file passwordkafka.usernameKafka usernamekafka.passwordKafka dedicated user passwordkafka.sslEndpointAlgorithm:SSL algorithmBatches configuration:batches.${batch_name}Batch configurationbatches.${batch_name}.inputFolderDirectory with input filesbatches.${batch_name}.outputFolderDirectory with output filesbatches.${batch_name}.columnsDefinitionFileFile defining mappingbatches.${batch_name}.requestTopicManager topic with events that are going to be sent to Reltiobatches.${batch_name}.ackTopicAck topicbatches.${batch_name}.parserTypeParser type. 
Defines separator and encoding formatbatches.${batch_name}.preProcessingDefines preprocessing of input filesbatches.${batch_name}.stages.${stage_name}.stageOrderStage prioritybatches.${batch_name}.stages.${stage_name}.processorTypeProcessor type:SIMPLE - change is applied only in mongoENTITY_SENDER - change is sent to Reltiobatches.${batch_name}.stages.${stage_name}.outputFileNameOutput file namebatches.${batch_name}.stages.${stage_name}.disabledIf stage is disabledbatches.${batch_name}.stages.${stage_name}.definitionsDefine which definition is used to map input filebatches.${batch_name}.stages.${stage_name}.deltaDetectionEnabledIf previous and current state of objects are comparedbatches.${batch_name}.stages.${stage_name}.initDeletedLoadEnabledbatches.${batch_name}.stages.${stage_name}.fullAttributesMergebatches.${batch_name}.stages.${stage_name}.postDeleteProcessorEnabledbatches.${batch_name}.stages.${stage_name}.senderHeadersDefines http headers" + }, + { + "title": "Kafka Connect", + "pageID": "164469804", + "pageLink": "/display/GMDM/Kafka+Connect", + "content": "DescriptionKafka Connect is a tool for scalably and reliably streaming data between Apache Kafka® and other data systems.  It makes it simple to quickly define connectors that move large data sets in and out of Kafka. 
Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency.FlowsSnowflake: Base tables refreshSnowflake: Events publish flowSnowflake: History InactiveSnowflake: LOV data publish flowSnowflake: MT data publish flowConfigurationKafka Connect - properties descriptionparamvaluegroup.id-kafka-connect-snowflaketopic.creation.enablefalseoffset.storage.topic-internal-kafka-connect-snowflake-offset config.storage.topic-internal-kafka-connect-snowflake-config status.storage.topic-internal-kafka-connect-snowflake-statuskey.converterorg.apache.kafka.connect.storage.StringConvertervalue.converterorg.apache.kafka.connect.storage.StringConverterkey.converter.schemas.enabletruevalue.converter.schemas.enabletrueconfig.storage.replication.factor3offset.storage.replication.factor3status.storage.replication.factor3 rest.advertised.host.namelocalhostrest.port8083security.protocolSASL_PLAINTEXT sasl.mechanismSCRAM-SHA-512consumer.group.id-kafka-connect-snowflake-consumerconsumer.security.protocolSASL_PLAINTEXTconsumer.sasl.mechanismSCRAM-SHA-512connectors - SnowflakeSinkConnector - properties descriptionparamvaluesnowflake.topic2table.map-out-full-snowflake-all:HUB_KAFKA_DATAtopics-out-full-snowflake-allbuffer.flush.time300snowflake.url.namesnowflake.database.namesnowflake.schema.nameLANDINGbuffer.count.records1000snowflake.user.namevalue.convertercom.snowflake.kafka.connector.records.SnowflakeJsonConverterkey.converterorg.apache.kafka.connect.storage.StringConverterbuffer.size.bytes60000000snowflake.private.key.passphrasesnowflake.private.keyThere is one exception connected with the FLEX environment. 
The S3SinkConnector is used here - properties descriptionparamvalues3.regions3.part.retries10 s3.bucket.names3.compression.typenone topics.dirtopics-out-full-gblus-flex-allflush.size1000000timezoneUTClocale format.classio.confluent.connect.s3.format.json.JsonFormatschema.generator.classio.confluent.connect.storage.hive.schema.DefaultSchemaGeneratorschema.compatibilityNONE aws.access.key.idaws.secret.access.keyvalue.converterorg.apache.kafka.connect.json.JsonConvertervalue.converter.schemas.enablefalsekey.converterorg.apache.kafka.connect.storage.StringConverterkey.converter.schemas.enablefalsepartition.duration.ms86400000partitioner.classio.confluent.connect.storage.partitioner.TimeBasedPartitioner storage.classio.confluent.connect.s3.storage.S3Storagerotate.schedule.interval.ms86400000rotate.interval.ms-1path.formatYYYY-MM-ddtimestamp.extractorWallclock" + }, + { + "title": "Manager", + "pageID": "164469894", + "pageLink": "/display/GMDM/Manager", + "content": "DescriptionManager is the main component taking part in client interactions with MDM systems.It orchestrates API calls with the following services:Reltio & Nucleus adapters translating client input into MDM API callsProcess logic - mapping simple calls into multiple MDM callsQuality engine - validating data flowing into MDMsTransaction engine - logging requests for tracing purposesAuthorization engine - controlling user privileges Cache engine - reducing API calls by reading data directly from Hub storeManager services are accessible with REST API.  
Some services are exposed as asynchronous operations through Kafka for performance reasons.Technology: Java, Spring, Apache CamelCode link: mdm-managerFlowsGet entitySearch entitiesValidate HCPCreate/Update HCP/HCO/MCOLOV readCreate relationsMerge & UnmergeMerge & Unmerge ComplexExposed interfacesInterface NameTypeEndpoint patternDescriptionGet entityREST APIGET /entities/{entityId}Get detailed entity informationGet multiple entitiesREST APIGET /entities/_byUrisReturn multiple entities with provided urisGet entity countryREST APIGET /entities/{entityId}/_countryReturn country for an entity with the provided uriMerge & UnmergeREST APIPOST/entities/{entityId}/_mergePOST/entities/{entityId}/_unmerge_byUrisMerge entity A with entity B using Reltio uris as IDs.Unmerge entity B from entity A using Reltio uris as IDs.Merge & Unmerge ComplexREST APIPOST/entities/_mergePOST/entities/_unmergeMerge entity A with entity B using request body (JSON) with ids.Unmerge entity B from entity A using request body (JSON) with ids.Create/Update entityREST API & KAFKAPOST /hcpPATCH /hcpPOST /hcoPATCH /hcoCreate/partially update entityCreate/Update multiple entitiesREST APIPOST /batch/hcpPATCH /batch/hcpPOST /batch/hcoPATCH /batch/hcoBatch create HCO/HCP entitiesGet entity by crosswalkREST APIGET /entities/crosswalkGet entity by crosswalkDelete entity by crosswalkREST APIDELETE /entities/crosswalkDelete entity by crosswalkCreate/Update relationREST APIPOST /relations/_dbscanPATCH /relations/Create/update relationGet relationREST APIGET /relations/{relationId}Get relation by reltio URIGet relation by crosswalkREST APIGET /relations/crosswalkGet relation by crosswalkDelete relation by crosswalkREST APIDELETE /relations/crosswalkDelete relation by crosswalkBatch create relationREST APIPOST /batch/relationBatch create relationCreate/replace/update mco profileREST APIPOST /mcoPATCH /mcoCreate, replace or partially update mco profileCreate/replace/update batch mco profileREST APIPOST 
/batch/mcoPATCH /batch/mcoCreate, replace or partially update mco profilesUpdate Usage FlagsREST APIPOST /updateUsageFlagsCreate, Update, Remove UsageType UsageFlags of "Addresses' Address field of HCP and HCO entitiesSearch for change requestsREST APIGET /changeRequests/_byEntityCrosswalkSearch for change requests by entity crosswalkGet change request by uriREST APIGET /changeRequests/{uri}Get change request by uriCreate change requestREST APIPOST /changeRequestCreate change request - internalGet change requestREST APIGET /changeRequestGet change request - internalDependent componentsComponentInterfaceDescriptionReltio AdapterInternal Java interfaceUsed to communicate with ReltioNucleus AdapterInternal Java interfaceUsed to communicate with NucleusAuthorization EngineInternal Java interfaceProvide user authorizationMDM Routing EngineInternal Java interfaceProvides routingConfigurationThe configuration is a composition of dependent components configurations and parameters specified below.Config ParameterDefault valueDescriptionmongo.urlMongo urlmongo.dbNameMongo database namemongoConnectionProps.dbUrlMongo database urlmongoConnectionProps.dbNameMongo database namemongoConnectionProps.userMongo usernamemongoConnectionProps.passwordMongo user passwordmongoConnectionProps.entityCollectionNameEntity collection namemongoConnectionProps.lovCollectionNameLov collection name" + }, + { + "title": "Authorization Engine", + "pageID": "164469870", + "pageLink": "/display/GMDM/Authorization+Engine", + "content": "DescriptionAuthorization Engine is responsible for authorizing users executing API operations. All API operations are secured and can be executed only by users that have specific roles. 
The engine checks if a user has a role allowed to access the API operation.FlowsThe Authorization Engine is engaged in all flows exposed by Manager component.Exposed interfacesInterface NameTypeJava class:methodDescriptionAuthorization ServiceJavaAuthorizationService:processCheck user permission to run a specific operation. If the user has been granted a role to run this operation, the method will allow the call. Otherwise, an authorization exception will be thrownDependent componentsAll of the below operations are exposed by Manager component and their details are described here. Description column of the below table has role names that have to be assigned to a user permitted to use the described operations.ComponentInterfaceDescriptionManagerGET /entities/*GET_ENTITIESGET /relations/*GET_RELATIONGET /changeRequests/*GET_CHANGE_REQUESTSDELETE /entities/crosswalkDELETE /relations/crosswalkDELETE_CROSSWALKPOST /hcpPOST /batch/hcpCREATE_HCPPATCH /hcpPATCH /batch/hcpUPDATE_HCPPOST /hcoPOST /batch/hcoCREATE_HCOPATCH /hcoPATCH /batch/hcoUPDATE_HCOPOST /mcoPOST /batch/mcoCREATE_MCOPATCH /mcoPATCH /batch/mcoUPDATE_MCOPOST /relationsCREATE_RELATIONPATCH /relationsUPDATE_RELATIONPOST /changeRequestCREATE_CHANGE_REQUESTPOST /updateUsageFlagsUSAGE_FLAG_UPDATEPOST /entities/{entityId}/_mergeMERGE_ENTITIESPOST /entities/{entityId}/_unmergeUNMERGE_ENTITIESGET /lookupLOOKUPSConfigurationConfiguration parameterDescriptionusers[].nameUser nameusers[].descriptionDescription of userusers[].defaultClientDefault MDM client that is used in the case when the user doesn't specify countryusers[].rolesList of roles assigned to userusers[].countriesList of countries whose data can be managed by userusers[].sourcesList of sources (crosswalk types) that can be used by the user while managing data" + }, + { + "title": "MDM Routing Engine", + "pageID": "164469900", + "pageLink": "/display/GMDM/MDM+Routing+Engine", + "content": "DescriptionMDM Routing Engine is responsible for making a decision on which MDM system has to 
be used to process client requests. The call is made based on a decision table that maps an MDM system to a country.In the case of multiple MDM systems for the same market, the decision table contains a user dimension allowing to select MDM system by user name.FlowsThe MDM Routing Engine is engaged in all flows supported by Manager component.Exposed interfacesInterface NameTypeJava class:methodDescriptionMDM Client FactoryJavaMDMClientFactory:getDefaultMDMClientGet default MDM clientJavaMDMClientFactory:getDefaultMDMClient(username)Get default MDM client specified for the userJavaMDMClientFactory:getMDMClient(country)Get MDM client that supports the specified countryJavaMDMClientFactory:getMDMClient(country, user);Get MDM client that supports the specified country and userDependent componentsComponentInterfaceDescriptionReltio AdapterJavaProvides integrations with Reltio MDMNucleus AdapterJavaProvides integration with Nucleus MDMConfigurationConfiguration parameterDescriptionusers[].namename of userusers[].defaultClientdefault mdm client for userclientsDecisionTable.{selector name}.countries[]List of countriesclientsDecisionTable.{selector name}.clients[]Map where the key is username and value is MDM client name that will be used to process data coming from defined countries.Special key "default" defines the default MDM client which will be used in the case when there is no specific client for username.mdmFactoryConfig.{mdm client name}.typeType of MDM client. Only two values are supported: "reltio" or "nucleus".mdmFactoryConfig.{mdm client name}.configMDM client configuration. It is based on adapter type: Reltio or Nucleus" + }, + { + "title": "Nucleus Adapter", + "pageID": "164469896", + "pageLink": "/display/GMDM/Nucleus+Adapter", + "content": "DescriptionNucleus-adapter is a component of MDM Hub that is used to communicate with Nucleus. 
It provides 4 types of operations:get entity,get entities,create/update entity,get relationNucleus 360 is an old COMPANY MDM platform compared to Reltio. It's used to store and manage data about healthcare professionals(hcp) and healthcare organizations(hco).It uses batch processing so the results of the operation are applied to the golden record after a certain period of time.Nucleus accepts requests with an XML formatted body and also sends responses in the same way.Technology: java 8, nucleusCode link: nucleus-adapterFlowsCreate/update entityGet entityGet entitiesGet relationsExposed interfacesInterface NameTypeJava class:methodDescriptionget entityJavaNucleusMDMClient:getEntityProvides a mechanism to obtain information about the specified entity. Entity can be obtained by entity id, e.g. xyzf325Two Nucleus methods are used to obtain detailed information about the entity.The first is the Look up method, thanks to which we can obtain basic information about the entity(xml format) by its id.Next, we provide that information to the second Nucleus method, Get Profile Details, that sends a response with all available information (xml format).Finally, we gather all received information about the entity, convert it to the Reltio model(json format) and transfer it to a client.get entitiesJavaNucleusMDMClient:getEntitiesProvides a mechanism to obtain basic information about a group of entities. This entity group is determined based on the defined filters(e.g. first name, last name, professional type code).For this purpose only the Nucleus look up method is used. 
This way we receive only basic information about entities but it is performance-optimized and does not create unnecessary load on the server.create/update entityJavaNucleusMDMClient:creteEntityUsing the Nucleus Add Update web service method nucleus-adapter provides a mechanism to create or update data present in the database according to the business rules(createEntity method).Nucleus-adapter accepts JSON formatted requests body, maps it to xml format, and then sends it to Nucleus.get relationsJavaNucleusMDMClient:getRelationTo get relations nucleus-adapter uses the Nucleus affiliation interface.Nucleus produces XML formatted response and nucleus-adapter transforms it to Reltio model(JSON format).Dependent componentsComponentInterfaceDescriptionNucleushttps://{{ nuleus host }}/CustomerManage_COMPANY_EU_Prod/manage.svc?singleWsdlNucleus endpoint for Creating/updating hcp and hcohttps://{{ nuleus host }}/Nuc360ProfileDetails5.0/Api/DetailSearchNucleus endpoint for getting details about entityhttps://{{ nuleus host }}/Nuc360QuickSearch5.0/LookupNucleus endpoint for getting basic information about entityhttps://{{ nuleus host }}/Nuc360DbSearch5.0/api/affiliationNucleus endpoint for getting relations informationConfigurationConfig ParameterDefault valueDescriptionnucleusConfig.baseURLnullBase url of Nucleus mdmnucleusConfig.usernamenullNucleus usernamenucleusConfig.passwordnullNucleus passwordnucleusConfig.additionalOptions.customerManageUrlnullNucleus endpoint for creating/updating entitiesnucleusConfig.additionalOptions.profileDetailsUrlnullNucleus endpoint for getting detailed information about entitynucleusConfig.additionalOptions.quickSearchUrlnullNucleus endpoint for getting basic information about entitynucleusConfig.additionalOptions.affiliationUrlnullNucleus endpoint for getting information about entities relationsnucleusConfig.additionalOptions.defaultIdTypenullDefault IdType for entities search(used if another not provided)" + }, + { + "title": "Quality 
Engine and Rules", "pageID": "164469944", "pageLink": "/display/GMDM/Quality+Engine+and+Rules", "content": "DescriptionThe quality engine is used to verify data quality in entity attributes. It is used for MCO, HCO, HCP entities.The quality engine is responsible for preprocessing an entity when a specific precondition is met. This engine is started in the following cases:Rest operation (POST/PATCH) on /hco endpoint on MDM ManagerRest operation (POST/PATCH) on /hcp endpoint on MDM ManagerRest operation (POST/PATCH) on /mco endpoint on MDM ManagerIt has two components: quality-engine and quality-engine-integrationTechnology:fasterxmlCode link:quality-engine - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/quality-enginequality-engine-integration - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/quality-engine-integrationquality rules - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/mdm-manager/src/main/resources/qualityRulesBusiness requirements (provided by AJ)COMPANY Teams → Global Customer MDM → 20-Design → Hub → Global-MDM_DQ_*FlowsValidation by quality rules is done before sending entities to Reltio. Quality rules should be enabled in configuration.Data quality checking is started in com.COMPANY.mdm.manager.service.QualityService. 
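Each configured rule is evaluated against an entity's attributes as a precondition, check and action. A minimal, hypothetical plain-Java sketch of one such evaluation is shown here; the method names and the concrete rule (a source precondition, a regex check on FirstName, an add action writing a DQ code) are illustrative, while the real rules are YAML files under mdm-manager/src/main/resources/qualityRules.

```java
// Hedged sketch of a single precondition -> check -> action rule evaluation.
// The entity is stubbed as a flat attribute map.
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class QualityRuleSketch {

    // precondition of type "source": the rule runs only for entities of this source
    static boolean sourcePrecondition(Map<String, String> entity, String source) {
        return source.equals(entity.get("source"));
    }

    // check of type "match": true when the attribute value matches the pattern
    static boolean matchCheck(Map<String, String> entity, String attribute, String regex) {
        String value = entity.get(attribute);
        return value != null && Pattern.matches(regex, value);
    }

    // action of type "add": adds a DQ description, with {source} substituted
    static void addAction(Map<String, String> entity, String template) {
        entity.put("DQDescription", template.replace("{source}", entity.get("source")));
    }

    // The action fires only if the preconditions are met and the check is true.
    static void evaluate(Map<String, String> entity) {
        if (sourcePrecondition(entity, "CENTRIS")
                && matchCheck(entity, "FirstName", "[0-9]+")) { // hypothetical: digits-only name
            addAction(entity, "{source}_005_02");
        }
    }

    public static void main(String[] args) {
        Map<String, String> entity =
                new HashMap<>(Map.of("source", "CENTRIS", "FirstName", "1234"));
        evaluate(entity);
        System.out.println(entity.get("DQDescription"));
    }
}
```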
The whole rule flow for an entity has one context (com.COMPANY.entityprocessingengine.pipeline.RuleContext)RuleA rule has the following configuration:name - name of the rule - it is requiredpreconditions - preconditions that should be met to run the rulecheck - check that should be triggered if the preconditions are metaction - action that should be triggered if the check evaluates to truePreconditionsStructure:Example:preconditions:    - type: source      values:          - CENTRISPossible types:not - evaluates to true if all preconditions that are underneath evaluate to falsematch - evaluates to true if the given attribute value matches any of the listed patternsanyMatch - evaluates to true if the given array attribute value matches any of the listed patternsexistsInContext - checks if the given fieldName with the specified value exists in the contextcontext - check if entity context values contain only allowed ones source - check if the entity has a source of the given typeChecksStructure:Example:check:   type: match   attribute: FirstName   values:       - '[^0-9@#$%^&*~!"<>?/|\\_]+'Possible types:ageCheck - check if the age specified in a date or year attribute is older than the specified number of yearsmandatoryGroup - check if at least one of the specified list of attributes existsmandatory - check if the specified attribute existsmandatoryAll - check if all specified attributes existmandatoryArray - check if the specified nested attribute existsnot - check if the opposite of the check is truegroupMatch - check if a group of attributes matches the specified valuesmatch - check if the attribute value matches the specified valueempty - empty checkActionsStructure:Example:action:   type: add   attributes:      - DataQuality[].DQDescription   value: "{source}_005_02"Possible types:clean - cleans the attribute value - replaces a pattern with the given stringreject - rejects the entityremove - removes an attributeset - sets an attribute valuemodify - modifies an attribute valueadd - adds an attribute valuechineseNameToEnglish - converts a Chinese value 
to EnglishaddressDigest - calculates the address digestaddressCrosswalkValue - sets the digest valueconvertCase - converts case: lower, upper, capitalizeremoveEmptyAttributes - removes empty attributesprefixByCountry - adds a country prefix to the attribute valuemakeSourceAddressInfo - adds an attribute with source address infopadding - pads the attribute value with a specified characterassignId - assigns an id setContextValue - sets a value that will be stored in the contextDependent componentsComponentInterfaceFlowDescriptionmanagerQualityServiceValidationRuns quality engine validationConfigurationConfig ParameterDefault valueDescriptionvalidationOntrueIt turns validation on or off - it needs to be specified in application.ymlpartialOverrideValidationOntrueIt turns validation on or off for updateshcpQualityRulesConfigslist of files with quality rules for hcpIt contains a list of files with quality rules for hcphcoQualityRulesConfigslist of files with quality rules for hcoIt contains a list of files with quality rules for hcohcpAffiliatedHCOsQualityRulesConfigslist of files with quality rules for affiliated hcpIt contains a list of files with quality rules for affiliated HCOmcoQualityRulesConfigslist of files with quality rules for mcoIt contains a list of files with quality rules for mco" }, { "title": "Reltio Adapter", "pageID": "164469898", "pageLink": "/display/GMDM/Reltio+Adapter", "content": "DescriptionReltio-adapter is a component of the MDM Hub (part of mdm-manager) that is used to communicate with Reltio. 
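One of the adapter's operations, findEntity, picks a lookup strategy based on what the request contains: a URI leads to getEntity, crosswalks lead to getEntityByCrosswalk, and otherwise the find matches method is used. The sketch below is a hypothetical illustration of that dispatch only; the type and record names are invented, and the real logic lives in ReltioMDMClient.

```java
// Illustrative sketch of the flexible findEntity strategy selection.
// Strategy names mirror the ReltioMDMClient operations; bodies are stubs.
import java.util.List;

public class FindEntityDispatchSketch {

    enum Strategy { BY_URI, BY_CROSSWALK, BY_MATCHES }

    // Hypothetical stand-in for the entityPattern passed to findEntity.
    record EntityPattern(String uri, List<String> crosswalks) {}

    static Strategy chooseStrategy(EntityPattern pattern) {
        if (pattern.uri() != null) return Strategy.BY_URI;            // getEntity
        if (pattern.crosswalks() != null && !pattern.crosswalks().isEmpty())
            return Strategy.BY_CROSSWALK;                             // getEntityByCrosswalk
        return Strategy.BY_MATCHES;                                   // findMatches
    }

    public static void main(String[] args) {
        System.out.println(chooseStrategy(new EntityPattern("entities/123", null)));
        System.out.println(chooseStrategy(new EntityPattern(null, List.of("CRM:42"))));
        System.out.println(chooseStrategy(new EntityPattern(null, List.of())));
    }
}
```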
Technology: Java,Code link: reltio-adapterFlowsCreate/update entityGet entityGet entitiesMerge entityUnmerge entityCreate relationGet relationsCreate DCRGet DCRApply DCRReject DCRDelete DCRExposed interfacesInterface NameTypeEndpoint patternDescriptionGet entityJavaReltioMDMClient:getEntityGet detailed entity information by entity URIGet entitiesJavaReltioMDMClient:getEntitiesGet basic information about a group of entities based on applied filtersCreate/Update entityJavaReltioMDMClient:createEntityCreate/partially update entity(HCO, HCP, MCO)Create/Update multiple entitiesJavaReltioMDMClient:createEntitiesBatch create HCO/HCP/MCO entitiesDelete entityJavaReltioMDMClient:deleteEntityDeletes entity by its URIFind entityJavaReltioMDMClient:findEntityFinds entity. The search mechanism is flexible and chooses the proper method:If URI applied in entityPattern then use the getEntity method.If URI not specified and finds crosswalks then uses getEntityByCrosswalk methodOtherwise, it uses the find matches methodMerge entitiesJavaReltioMDMClient:mergeEntitiesMerge two entities basing on reltio merging rules.Also accepts explicit winner as explicitWinnerEntityUri.Unmerge entitiesJavaReltioMDMClient:unmergeEntitiesUnmerge entitiesUnmerge Entity TreeJavaReltioMDMClient:treeUnmergeEntitiesUnmerge entities recursively(details in reltio treeunmerge documentation)Scan entitiesJavaReltioMDMClient:scanEntitiesIterate entities of a specific type in a particular tenant.Delete crosswalkJavaReltioMDMClient:deleteCrosswalkDeletes crosswalk from an objectFind matchesJavaReltioMDMClient:findMatchesReturns potential matches based on rules in entity type configurationGet entity connectionsJavaReltioMDMClient:getMultipleEntityConnectionsGet connected entitiesGet entity by a crosswalkJavaReltioMDMClient:getEntityByCrosswalkGet entity by the crosswalkDelete relation by a crosswalkJavaReltioMDMClient:deleteRelationDelete relation by relation URIGet relationJavaReltioMDMClient:getRelationGet 
relation by relation URICreate/Update relationJavaReltioMDMClient:createRelationCreate/update relationScan relationsJavaReltioMDMClient:scanRelationsIterate entities of a specific type in a particular tenant.Get relation by a crosswalkJavaReltioMDMClient:getRelationByCrosswalkGet relation by the crosswalkBatch create relationJavaReltioMDMClient:createRelationsBatch create relationSearch for change requestsJavaReltioMDMClient:searchSearch for change requests by entity crosswalkGet change request by URIJavaReltioMDMClient:getChangeRequestGet change request by URICreate change requestJavaReltioMDMClient:createChangeRequestCreate change request - internalDelete change requestJavaReltioMDMClient:deleteChangeRequestDelete change requestApply change requestJavaReltioMDMClient:applyChangeRequestApply data change requestReject change requestJavaReltioMDMClient:rejectChangeRequestReject data change requestAdd/update external inforJavaReltioMDMClient:createOrUpdateExternalInfoAdd external info to specified DCRDependenciesComponentInterfaceDescriptionReltioGET {TenantURL}/entities/{Entity ID}Get detailed information about the entityhttps://docs.reltio.com/entitiesapi/getentity.htmlGET {TenantURL}/entitiesGet basic( or chosen ) information about entity based on applied filtershttps://docs.reltio.com/mulesoftconnector/getentities_2.htmlGET {TenantURL}/entities/_byCrosswalk/{crosswalkValue}?type={sourceType}Get entity by crosswalkhttps://docs.reltio.com/entitiesapi/getentitybycrosswalk_2.htmlDELETE {TenantURL}/{entity object URI}Delete entityhttps://docs.reltio.com/entitiesapi/deleteentity.htmlPOST {TenantURL}/entitiesCreate/update single or a bunch of entitieshttps://docs.reltio.com/entitiesapi/createentities.htmlPOST {TenantURL}/entities/_dbscanhttps://docs.reltio.com/searchapi/iterateentitiesbytype.html?hl=_dbscanPOST {TenantURL}/entities/{winner}/_sameAs?uri=entities/{looser}Merge entities basing on looser and winner 
IDhttps://docs.reltio.com/mergeapis/mergingtwoentities.htmlPOST {TenantURL}//_unmerge?contributorURI=Unmerge entitieshttps://docs.reltio.com/mergeapis/unmergeentitybycontriburi.htmlPOST {TenantURL}//_treeUnmerge?contributorURI=Tree unmerge entitieshttps://docs.reltio.com/mergeapis/unmergeentitybycontriburi.htmlGET {TenantURL}/relations/Get relation by relation URIhttps://docs.reltio.com/relationsapi/getrelationship.htmlPOST {TenantURL}/relationsCreate relationhttps://docs.reltio.com/relationsapi/createrelationships.htmlPOST {TenantURL}/relations/_dbscanhttps://docs.reltio.com/relationsapi/iteraterelationshipbytype.html?hl=relations%2F_dbscan GET {TenantURL}/changeRequests Get change requesthttps://docs.reltio.com/dcrapi/searchdcr.htmlGET {TenantURL}/changeRequests/{id}Returns a data change request by ID.https://docs.reltio.com/dcrapi/getdatachangereq.htmlPOST {TenantURL}/changeRequests Create data change requesthttps://docs.reltio.com/dcrapi/createnewdatachangerequest.htmlDELETE {TenantURL}/changeRequests/{id} Delete data change requesthttps://docs.reltio.com/dcrapi/deletedatachangereq.htmlPOST {TenantURL}/changeRequests/_byUris/_applyThis API applies (commits) all changes inside a data change request to real entities and relationships.https://docs.reltio.com/dcrapi/applydcr.htmlPOST {TenantURL}/changeRequests/_byUris/_rejectReject data change requesthttps://docs.reltio.com/dcrapi/rejectdcr.htmlPOST {TenantURL}/entities/_matches Returns potential matches based on rules in entity type configuration.https://docs.reltio.com/matchesapi/serachpotentialmatchesforjsonentity.htmlPOST {TenantURL}/_connectionsGet connected entitieshttps://docs.reltio.com/relationsapi/requestdifferententityconnections.html?hl=_connectionsDELETE /{crosswalk URI}Delete crosswalkhttps://docs.reltio.com/mergeapis/dataapicrosswalks.html?hl=delete,crosswalkdataapicrosswalks__deletecrosswalk#dataapicrosswalks__deletecrosswalkPOST {TenantURL}/changeRequests/0000OVV/_externalInfoAdd/update external 
info to DCRhttps://docs.reltio.com/dcrapi/addexternalinfotochangereq.html?hl=_externalinfoConfigurationConfig ParameterDefault valueDescriptionmdmConfig.authURLnullReltio authentication URLmdmConfig.baseURLnullReltio base URLmdmConfig.rdmUrlnullReltio  RDM URLmdmConfig.usernamenullReltio usernamemdmConfig.passwordnullReltio passwordmdmConfig.apiKeynullReltio apiKeymdmConfig.apiSecretnullReltio apiSecrettranslateCache.milisecondsToExpiretranslateCache.objectsLimit" + }, + { + "title": "Map Channel", + "pageID": "302697819", + "pageLink": "/display/GMDM/Map+Channel", + "content": "DescriptionMap Channel integrates GCP and GRV systems data. External systems use the SQS queue or REST API to load data. The data is then copied to the internal queue. This allows to redo the processing at a later time. The identifier and market contained in the data are used to retrieve complete data via REST requests. The data is then sent to the Manager component to storage in the MDM system. Application provides features for filtering events by country, status or permissions. 
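The country/status filtering mentioned above can be illustrated with a short sketch. The map shapes echo the configuration parameters listed in this page (activeCountries per source, deactivatedStatuses per source and country), but the method names, the sample country codes and the logic itself are assumptions, not the actual map-channel implementation.

```java
// Hedged sketch of Map Channel-style event filtering: accept events only for
// configured active countries, and flag deactivation when the ValidationStatus
// is on the configured list for that source and country.
import java.util.List;
import java.util.Map;

public class MapChannelFilterSketch {

    // e.g. activeCountries.GRV: [DE, FR] (sample values, not real config)
    static boolean isActiveCountry(Map<String, List<String>> activeCountries,
                                   String source, String country) {
        return activeCountries.getOrDefault(source, List.of()).contains(country);
    }

    // e.g. deactivatedStatuses.GRV.DE: [INVALID] (sample values, not real config)
    static boolean shouldDeactivate(Map<String, Map<String, List<String>>> deactivatedStatuses,
                                    String source, String country, String validationStatus) {
        return deactivatedStatuses.getOrDefault(source, Map.of())
                .getOrDefault(country, List.of())
                .contains(validationStatus);
    }

    public static void main(String[] args) {
        Map<String, List<String>> active = Map.of("GRV", List.of("DE", "FR"));
        System.out.println(isActiveCountry(active, "GRV", "DE"));
        System.out.println(isActiveCountry(active, "GCP", "DE"));
    }
}
```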
This component uses different mappers to process data for the COMPANY or IQVIA data model.Technology: Java, Spring, Apache CamelCode link: map-channelFlowsGRV & GCP events processingExposed interfacesInterface nameTypeEndpoint patternDescriptioncreate contactREST APIPOST /gcpcreate HCP profile based on GCP contact dataupdate contactREST APIPUT /gcp/{gcpId}update HCP profile based on GCP contact datacreate userREST APIPOST /grvcreate HCP profile based on GRV user dataupdate userREST APIPUT /grv/{grvId}update HCP profile based on GRV user dataDependent componentsComponentInterfaceDescriptionManagerREST APIcreate HCP, create HCO, update HCP, update HCOConfigurationThe configuration is a composition of the dependent components' configurations and the parameters specified below.Kafka processing configConfig paramDefault valueDescriptionkafkaProducerPropkafka producer propertieskafkaConsumerPropkafka consumer propertiesprocessing.endpointskafka internal topics configurationprocessing.endpoints.[endpoint-type].topickafka endpoint-type topic nameprocessing.endpoints.[endpoint-type].activeOnStartupshould the endpoint start on application startupprocessing.endpoints.[endpoint-type].consumerCountkafka endpoint consumer countprocessing.endpoints.[endpoint-type].breakOnFirstErrorshould kafka rebalance on errorprocessing.endpoints.[endpoint-type].autoCommitEnableshould kafka auto commit be enabledDEG configConfig paramDefault valueDescriptionDEG.urlDEG gateway URLDEG.oAuth2ServiceDEG authorization service URLDEG.protocolDEG protocolDEG.portDEG portDEG.prefixDEG API prefixTransaction log configConfig paramDefault valueDescriptiontransactionLogger.kafkaEfk.enableshould the kafka efk transaction logger be enabledtransactionLogger.kafkaEfk.kafkaProducer.topickafka efk topic nametransactionLogger.kafkaEfk.logContentOnlyOnFailedlog request body only on failed transactionstransactionLogger.simpleLog.enableshould the simple console transaction logger be enabledFilter configConfig paramDefault 
valueDescriptionactiveCountries.GRVlist of allowed GRV countriesactiveCountries.GCPlist of allowed GCP countriesdeactivatedStatuses.[Source].[Country]list of ValidationStatus attribute values for which the HCP will be deleted for the given country and sourcedeactivateGCPContactWhenInactivelist of countries for which the GCP profile will be deleted when the contact is inactivedeactivatedWhenNoPermissionslist of countries for which the GCP profile will be deleted when contact permissions are missingdeleteOption.[Source].noneHCP will be sent to MDM when a deleted date is presentdeleteOption.[Source].hardcall the delete crosswalk action when a deleted date is presentdeleteOption.[Source].softcall update HCP when a delete date is presentMapper configConfig paramDefault valueDescriptiongcpMappername of the GCP mapper implementationgrvMappername of the GRV mapper implementationMappingsIQVIA mappingCOMPANY mapping" }, { "title": "MDM Admin", "pageID": "284817212", "pageLink": "/display/GMDM/MDM+Admin", "content": "DescriptionMDM Admin exposes an API of tools automating repetitive and/or difficult Operating Procedures and Tasks. It also aggregates APIs of various Hub components that should not be exposed to the world, while providing an authorization layer. Permissions to each Admin operation can be granted to a client's API user.FlowsKafka OffsetResend EventsPartial ListReconciliationExposed interfacesREST APISwagger: https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-prod/swagger-ui/index.htmlDependent componentsComponentInterfaceFlowDescriptionReconciliation ServiceReconciliation Service APIEntities ReconciliationAdmin uses the internal Reconciliation Service API to trigger reconciliations. 
Passes the same inputs and returns the same results.Relations ReconciliationPartials ReconciliationPrecallback ServicePrecallback Service APIPartials ListAdmin fetches a list of partials directly from Precallback Service and returns it to the user or uses it to reconcile all entities stuck in partial state.Partials ReconciliationAirflowAirflow APIEvents ResendAdmin allows triggering an Airflow DAG with request parameters/body and checking its status.Events Resend ComplexKafkaKafka Client/Admin APIKafka OffsetsAdmin allows modifying topic/group offsets.ConfigurationConfig ParameterDefault valueDescriptionairflow-config: url: https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com user: admin password: ${airflow.password} dag: reconciliation_system_amer_dev-Dependent Airflow configuration including external URL, DAG name and credentials. Entities Reload operation will trigger a DAG of configured name in the configured Airflow instance.services:services: reconciliationService: mdmhub-mdm-reconciliation-service-svc:8081 precallbackService: mdmhub-precallback-service-svc:8081URLs of dependent services. Default values lead to internal Kubernetes services." + }, + { + "title": "MDM Integration Tests", + "pageID": "302687584", + "pageLink": "/display/GMDM/MDM+Integration+Tests", + "content": "DescriptionThe module contains Integration Tests. 
All Integration Tests are divided into different categories based on environment on which are executed.Technology:JUnitSpring TestCitrusGradle tasksThe table shows which environment uses which gradle task.EnvironmentGradle taskConfiguration propertiesALLcommonIntegrationTests-GBLUSintegrationTestsForCOMPANYModelRegionUShttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_gblus/group_vars/gw-services/int_tests.ymlCHINAintegrationTestsForCOMPANYModelChinahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_devchina_apac/group_vars/gw-services/int_tests.ymlEMEAintegrationTestsForCOMPANYModelRegionEMEAhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_emea/group_vars/gw-services/int_tests.ymlAPACintegrationTestsForCOMPANYModelRegionAPAChttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_apac/group_vars/gw-services/int_tests.ymlAMERintegrationTestsForCOMPANYModelRegionAMERhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_amer/group_vars/gw-services/int_tests.ymlOTHERSintegrationTestsForIqviaModelhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_gbl/group_vars/gw-services/int_tests.ymlThe Jenkins script with configuration: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/jenkins/k8s_int_test.groovyGradle tasks - IT categoriesThe table shows which test categories are included in gradle tasks.Gradle taskTest 
categorycommonIntegrationTestsCommonIntegrationTestintegrationTestsForCOMPANYModelRegionUSIntegrationTestForCOMPANYModelIntegrationTestForCOMPANYModelRegionUSintegrationTestsForCOMPANYModelChinaIntegrationTestForCOMPANYModelIntegrationTestForCOMPANYModelChinaintegrationTestsForCOMPANYModelIntegrationTestForCOMPANYModel integrationTestsForCOMPANYModelRegionAMERIntegrationTestForCOMPANYModel IntegrationTestForCOMPANYModelRegionAMERintegrationTestsForCOMPANYModelRegionAPACIntegrationTestForCOMPANYModel integrationTestsForCOMPANYModelRegionEMEAIntegrationTestForCOMPANYModel IntegrationTestForCOMPANYModelRegionEMEAintegrationTestsForIqviaModelIntegrationTestForIqiviaModelTests are configured in the build.gradle file: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/build.gradle?at=refs%2Fheads%2Fproject%2FboldmoveTest use cases included in categoriesTest categoryTest use casesCommonIntegrationTestCommon Integration TestIntegrationTestForIqiviaModelIntegration Test For Iqvia ModelIntegrationTestForCOMPANYModelIntegration Test For COMPANY ModelIntegrationTestForCOMPANYModelRegionUSIntegration Test For COMPANY Model Region USIntegrationTestForCOMPANYModelChinaIntegration Test For COMPANY Model ChinaIntegrationTestForCOMPANYModelRegionAMERIntegration Test For COMPANY Model Region AMER Integration Test For COMPANY Model DCR2ServiceIntegrationTestsForCOMPANYModelRegionEMEAIntegration Test For COMPANY Model Region EMEA" }, { "title": "Nucleus Subscriber", "pageID": "164469790", "pageLink": "/display/GMDM/Nucleus+Subscriber", "content": "DescriptionNucleus subscriber collects events from Amazon AWS S3, modifies them and then transfers them to the right Kafka topic.Data changes are stored as archive files on S3 from where they are then pulled 
by the nucleus subscriber.The next step is to modify the event from the Reltio format to one accepted by the MDM Hub. The modified data is then transferred to the appropriate Kafka topic.Data pulls from S3 are performed periodically, so the changes made are visible after some time.Part of: Streaming channelTechnology: Java, Spring, Apache CamelCode link: nucleus-subscriberFlowsEntity change events processing (Nucleus) Exposed interfacesInterface NameTypeEndpoint patternDescriptionKafka topic KAFKA{env}-internal-nucleus-eventsEvents pulled from S3 are transformed and published to the kafka topicDependenciesComponentInterfaceFlowDescriptionAWS S3Entity change events processing (Nucleus)Stores events regarding data modification in ReltioEntity enricherNucleus Subscriber downstream component. Collects events from Kafka and produces events enriched with the targetEntityConfigurationConfig ParameterDefault valueDescriptionnucleus_subscriber.server.port8082Nucleus subscriber portnucleus_subscriber.kafka.servers10.192.71.136:9094Kafka servernucleus_subscriber.lockingPolicy.zookeeperServernullZookeeper servernucleus_subscriber.lockingPolicy.groupNamenullZookeeper group namenucleus_subscriber.deduplicationCache.maxSize100000nucleus_subscriber.deduplicationCache.expirationTimeSeconds3600nucleus_subscriber.kafka.groupIdhubKafka group Idnucleus_subscriber.kafka.usernamenullKafka usernamenucleus_subscriber.kafka.passwordnullKafka user passwordnucleus_subscriber.publisher.entities.topicdev-internal-integration-testsnucleus_subscriber.publisher.dictioneries.topicdev-internal-reltio-dictionaries-eventsnucleus_subscriber.publisher.relationships.topicdev-internal-integration-testsnucleus_subscriber.mongoConnectionProp.dbUrlnullMongoDB urlnucleus_subscriber.mongoConnectionProp.dbNamenullMongoDB database namenucleus_subscriber.mongoConnectionProp.usernullMongoDB usernucleus_subscriber.mongoConnectionProp.passwordnullMongoDB user 
passwordnucleus_subscriber.mongoConnectionProp.chechConnectionOnStartupnullCheck connection on startup (yes/no)nucleus_subscriber.poller.typefileSource typenucleus_subscriber.poller.enableOnStartupyesEnable on startup (yes/no)nucleus_subscriber.poller.fileMasknullInput files masknucleus_subscriber.poller.bucketNamecandf-mesosName of the S3 bucketnucleus_subscriber.poller.processingTimeoutMs3000000Timeout in millisecondsnucleus_subscriber.poller.inputFolderC:/PROJECTS/COMPANY/GIT/mdm-publishing-hub/nucleus-subscriber/src/test/resources/dataInput directorynucleus_subscriber.poller.outputFoldernullOutput directorynucleus_subscriber.poller.keynullPoller keynucleus_subscriber.poller.secretnullPoller secretnucleus_subscriber.poller.regionEU_WEST_1Poller regionnucleus_subscriber.poller.alloweSubDirsnullAllowed sub directories (e.g. by country code - AU, CA)nucleus_subscriber.fileFormat.hcp.*Professional.expInput file format for hcpnucleus_subscriber.fileFormat.hco.*Organization.expInput file format for hconucleus_subscriber.fileFormat.dictionary.*Code_Header.expInput file format for dictionarynucleus_subscriber.fileFormat.dictionaryItem.*Code_Item.expInput file format for dictionary itemnucleus_subscriber.fileFormat.dictionaryItemDesc.*Code_Item_Description.expInput file format for dictionary item descriptionnucleus_subscriber.fileFormat.dictionaryItemExternal.*Code_Item_External.expInput file format for external dictionary itemnucleus_subscriber.fileFormat.customerMerge.*customer_merge.expInput file format for customer mergenucleus_subscriber.fileFormat.specialty.*Specialty.expInput file format for specialtynucleus_subscriber.fileFormat.address.*Address.expInput file format for addressnucleus_subscriber.fileFormat.degree.*Degree.expInput file format for degreenucleus_subscriber.fileFormat.identifier.*Identifier.expInput file format for identifiernucleus_subscriber.fileFormat.communication.*Communication.expInput file format for communicationnucleus_subscriber.fileFormat.optout.*Optout.expInput file format for 
optoutnucleus_subscriber.fileFormat.affiliation.*Affiliation.expInput fiile format for affiliationnucleus_subscriber.fileFormat.affiliationRole.*AffiliationRole.expInput fiile format for affiliation role." + }, + { + "title": "OK DCR Service", + "pageID": "164469929", + "pageLink": "/display/GMDM/OK+DCR+Service", + "content": "DescriptionValidation of information regarding healthcare institutions and professionals based on ONE KEY webservices databaseTechnology: java 8, spring boot, mongodb, kafka-streamsCode link: mdm-onekey-dcr-service FlowsData Steward ResponseSubmit Validation RequestTrace Validation RequestExposed interfacesInterface NameTypeEndpoint patternDescriptioninternal onekeyvr inputKAFKA${env}-internal-onekeyvr-inevents being sent by the event publisher component. Event types being considered: HCP_*, HCO_*, ENTITY_MATCHES_CHANGEDinternal onekeyvr change requests inputKAFKA${env}-internal-onekeyvr-change-requests-inDependent componentsComponentInterfaceFlowDescriptionManagerGetEntitygetEntitygetting the entity from RELTIOMDMIntegrationServicegetMatchesgetting matches from RELTIOtranslateLookupstranslating lookup codescreateEntityDCR entity created in Reltio and the relation between the processed entity and the DCR entitycreateResponsepatchEntityupdating the entity in RELTIOBoth ONEKEY service and the Manager service are called with the retry policy.ConfigurationConfig ParameterDefault valueDescriptiononekey.oneKeyIntegrationService.url${oneKeyClient.url}onekey.oneKeyIntegrationService.userName${oneKeyClient.userName}onekey.oneKeyIntegrationService.password${oneKeyClient.password}onekey.oneKeyIntegrationService.connectionPoint${oneKeyClient.connectionPoint}onekey.oneKeyIntegrationService.logMessages${oneKeyClient.logMessages}onekey.oneKeyIntegrationService.retrying.maxAttemts22Limit to the number of attempts -> Exponential Back Offonekey.oneKeyIntegrationService.retrying.initialIntervalMs1000Initial interval -> Exponential Back 
Offonekey.oneKeyIntegrationService.retrying.multiplier2.0Multiplier -> Exponential Back Offonekey.oneKeyIntegrationService.retrying.maxIntervalMs3600000Max interval -> Exponential Back Offonekey.gatewayIntegrationService.url${gateway.url}onekey.gatewayIntegrationService.userName${gateway.userName}onekey.gatewayIntegrationService.apiKey${gateway.apiKey}onekey.gatewayIntegrationService.logMessages${gateway.logMessages}onekey.gatewayIntegrationService.timeoutMs${gateway.timeoutMs}onekey.gatewayIntegrationService.gatewayRetryConfig.maxAttemts22onekey.gatewayIntegrationService.gatewayRetryConfig.initialIntervalMs1000onekey.gatewayIntegrationService.gatewayRetryConfig.multiplier2.0onekey.gatewayIntegrationService.gatewayRetryConfig.maxIntervalMs3600000onekey.gatewayIntegrationService.gatewayRetryConfig.maxAttemts22Limit to the number of attempts -> Exponential Back Offonekey.gatewayIntegrationService.gatewayRetryConfig.initialIntervalMs1000Initial interval -> Exponential Back Offonekey.gatewayIntegrationService.gatewayRetryConfig.multiplier2.0Multiplier -> Exponential Back Offonekey.gatewayIntegrationService.gatewayRetryConfig.maxIntervalMs3600000Max interval -> Exponential Back Offonekey.submitVR.eventInputTopic${env}-internal-onekeyvr-inSubmit Validation input topiconekey.submitVR.skipEventTypeSuffix_REMOVED_INACTIVATED_LOST_MERGESubmit Validation event type string endings to skiponekey.submitVR.storeNamewindow-deduplication-storeInternal kafka topic that stores events to deduplicateonekey.submitVR.window.duration4hThe size of the windows in milliseconds.onekey.submitVR.window.nameInternal kafka topic that stores events being grouped by.onekey.submitVR.window.gracePeriod0The grace period to admit out-of-order events to a window.onekey.submitVR.window.byteLimit107374182Maximum number of bytes the size-constrained suppression buffer will use.onekey.submitVR.window.suppressNamedcr-suppressThe specified name for the suppression node in the 
topology.onekey.traceVR.enabletrueonekey.traceVR.minusExportDateTimeMillis3600000onekey.traceVR.schedule.cron0 0 * ? * * # every hourquartz.properties.org.quartz.scheduler.instanceNamemdm-onekey-dcr-serviceCan be any string, and the value has no meaning to the scheduler itself - but rather serves as a mechanism for client code to distinguish schedulers when multiple instances are used within the same program. If you are using the clustering features, you must use the same name for every instance in the cluster that is ‘logically’ the same Scheduler.quartz.properties.org.quartz.scheduler.skipUpdateChecktrueWhether or not to skip running a quick web request to determine if there is an updated version of Quartz available for download. If the check runs, and an update is found, it will be reported as available in Quartz’s logs. You can also disable the update check with the system property “org.terracotta.quartz.skipUpdateCheck=true” (which you can set in your system environment or as a -D on the java command line). It is recommended that you disable the update check for production deployments.quartz.properties.org.quartz.scheduler.instanceIdGenerator.classorg.quartz.simpl.HostnameInstanceIdGeneratorOnly used if org.quartz.scheduler.instanceId is set to “AUTO”. Defaults to “org.quartz.simpl.SimpleInstanceIdGenerator”, which generates an instance id based upon host name and time stamp. Other IntanceIdGenerator implementations include SystemPropertyInstanceIdGenerator (which gets the instance id from the system property “org.quartz.scheduler.instanceId”, and HostnameInstanceIdGenerator which uses the local host name (InetAddress.getLocalHost().getHostName()). 
You can also implement the InstanceIdGenerator interface your self.quartz.properties.org.quartz.jobStore.classcom.novemberain.quartz.mongodb.MongoDBJobStorequartz.properties.org.quartz.jobStore.mongoUri${mongo.url}quartz.properties.org.quartz.jobStore.dbName${mongo.dbName}quartz.properties.org.quartz.jobStore.collectionPrefix quartz-onekey-dcrquartz.properties.org.quartz.scheduler.instanceIdAUTOCan be any string, but must be unique for all schedulers working as if they are the same ‘logical’ Scheduler within a cluster. You may use the value “AUTO” as the instanceId if you wish the Id to be generated for you. Or the value “SYS_PROP” if you want the value to come from the system property “org.quartz.scheduler.instanceId”.quartz.properties.org.quartz.jobStore.isClusteredtruequartz.properties.org.quartz.threadPool.threadCount1" + }, + { + "title": "Publisher", + "pageID": "164469927", + "pageLink": "/display/GMDM/Publisher", + "content": "DescriptionPublisher is member of Streaming channel. 
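Events are routed to client topics by routing rules, each of which has an id, a selector and a destination (see the routingRules configuration on this page). The sketch below is a hypothetical plain-Java rendering of that dispatch; in the real component the selector is a Groovy expression evaluated per event, replaced here by an ordinary predicate, and the rule ids and topic names are invented samples.

```java
// Illustrative sketch of Publisher-style routing: every rule whose selector
// matches the event contributes its destination topic.
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class RoutingRuleSketch {

    // id / selector / destination, mirroring the routing rule definition
    record Rule(String id, Predicate<Map<String, String>> selector, String destination) {}

    static List<String> route(List<Rule> rules, Map<String, String> event) {
        return rules.stream()
                .filter(r -> r.selector().test(event))
                .map(Rule::destination)
                .toList();
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
                new Rule("fr-events", e -> "FR".equals(e.get("country")), "dev-out-fr-events"),
                new Rule("hco-events", e -> "HCO".equals(e.get("type")), "dev-out-hco-events"));
        System.out.println(route(rules, Map.of("country", "FR", "type", "HCP")));
    }
}
```

An event matching several selectors is fanned out to every matching destination, which is consistent with the one-input, many-output-topics shape described above.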
It distributes events to target client topics based on configured routing rules.Main tasks:Filtering events based on their contentRouting events based on publisher configurationEnriching nucleus eventsUpdating mongoTechnology: Java, Spring, KafkaCode: event-publisherFlowsReltio events streamingNucleus Events StreamingCallbacksEvent filtering and routing rulesLOV update process (Nucleus)Data Steward ResponseSubmit Validation RequestSnowflake: Events publish flowExposed interfacesInterface NameTypeEndpoint patternDescriptionKafka - input topics for entities dataKAFKA${env_name}-internal-reltio-proc-events${env_name}-internal-nucleus-eventsStores events about entity, relation and change request changes.Kafka - input topics for dictionaries dataKAFKA${env_name}-internal-reltio-dictionaries-events${env_name}-internal-nucleus-dictionaries-eventsStores events about lookup (LOV) changes.Kafka - output topicsKAFKA${env_name}-out-** (all topics that get events from the publisher)Output topics for the Publisher.After the filtration process, each event is transferred to the appropriate topic based on the routing rules defined in the configurationResend eventsRESTPOST /resendLastEventAllows triggering event reconstruction. 
Events are created based on the current state fetched from MongoDB and then forwarded according to the defined routing rules.Mongo's collectionsMongo collectionentityHistoryCollection storing the last known state of entities dataMongo collectionentityRelationsCollection storing the last known state of relations dataMongo collectionLookupValuesCollection storing the last known state of lookups (LOVs) dataDependenciesComponentInterfaceFlowDescriptionCallback ServiceKAFKAEntity change events processing (Reltio)Creates input for PublisherResponsible for the following transformations:HCO names calculationDangling affiliationsCrosswalk cleanerPrecallback streamMongoDBEntity change events processing (Reltio)Entity change events processing (Nucleus)Stores the last known state of objects such as: entities, relations. Used as cache data to reduce Reltio load. Is updated after every entity change eventKafka Connect Snowflake connectorKAFKASnowflake: Events publish flowReceives events from the publisher and loads them into the Snowflake databaseClients of the HUBClients that receive events from MDM HUBMAPP, China, etcConfigurationConfig ParameterDefault valueDescriptionevent_publisher.usersnullPublisher users dictionary used to authenticate users in ResendService operations.User parameters:name,description,roles(list) - currently there is only one role which can be assigned to a user:RESEND_EVENT - a user with this role is allowed to use the resend last event operationevent_publisher.activeCountries- AD- BL- FR- GF- GP- MC- MF- MQ- MU- NC- PF- PM- RE- WF- YT- CNList of active countriesevent_publisher.lookupValuesPoller.interval60mPolling interval of lookups (LOVs) from Reltioevent_publisher.lookupValuesPoller.batchSize1000Poller batch sizeevent_publisher.lookupValuesPoller.enableOnStartupyesEnable on startup( yes/no )event_publisher.lookupValuesPoller.dbCollectionNameLookupValuesName of the Mongo collection storing fetched lookup dataevent_publisher.eventRouter.incomingEventsincomingEvents: reltio: topic: 
dev-internal-reltio-entity-and-relation-events enableOnStartup: no startupOrder: 10 properties: autoOffsetReset: latest consumersCount: 20 maxPollRecords: 50 pollTimeoutMs: 30000Configuration of the incoming topic with events regarding entities, relations etc.event_publisher.eventRouter.dictionaryEventsdictionaryEvents: reltio: topic: dev-internal-reltio-dictionaries-events enableOnStartup: true startupOrder: 30 properties: autoOffsetReset: earliest consumersCount: 10 maxPollRecords: 5 pollTimeoutMs: 30000Configuration of the incoming topic with events regarding dictionary changes.event_publisher.eventRouter.historyCollectionNameentityHistoryName of the collection storing entities stateevent_publisher.eventRouter.relationCollectionNameentityRelationsName of the collection storing relations stateevent_publisher.eventRouter.routingRules.[]nullList of routing rules. A routing rule definition has the following parameters:id - unique identifier of the rule,selector - conditional expression written in Groovy which filters incoming events,destination - topic name.
Create a consumer for the relations topic and given offset - date from2. Poll and filter records3. Produce data to the bundle input topicCount entitiesREST APIPOST /restore/entities/countCount entities for selected parameters: entity types, sources, countries, date fromCount relationsREST APIPOST /restore/relations/countCount relations for selected parameters: sources, countries, relation types and date fromConfigurationConfig paramdescriptionkafka.groupIdkafka group idkafkaOtherother kafka consumer/producer propertiesentityTopictopic used to store entity datarelationTopictopic used to store relation datastreamConfig.patchKeyStoreNamestate store name used to store entities patch keysstreamConfig.relationStoreNamestate store name used to store relations patch keysstreamConfig.enabledis raw data stream processor enabledstreamConfig.kafkaOtherraw data processor stream kafka other propertiesrestoreConfig.enabledis restore api enabledrestoreConfig.consumer.pollTimeoutrestore api kafka topic consumer poll timeoutrestoreConfig.consumer.kafkaOtherother kafka consumer propertiesrestoreConfig.producer.outputrestore data producer output topic - manager bundle input topicrestoreConfig.producer.kafkaOtherother kafka producer properties" + }, + { + "title": "Reconciliation Service", + "pageID": "164469826", + "pageLink": "/display/GMDM/Reconciliation+Service", + "content": "Reconciliation service is used to consume reconciliation events from Reltio and decide whether an entity or relation should be refreshed in the mongo cache. 
After reconciliation this service also produces reconciliation metrics: it counts changes and produces an event with all metadata and statistics about the reconciled entity/relationFlowsReconciliation+HUB-ClientReconciliation metricsConfigurationConfig ParameterDefault valueDescriptionreconciliation: eventInputTopic: eventOutputTopic:reconciliation: eventInputTopic: ${env}-internal-reltio-reconciliation-events eventOutputTopic: ${env}-internal-reltio-eventsConsumes events from eventInputTopic, decides about reconciliation and produces an event to eventOutputTopicreconciliation: eventMetricsInputTopic: eventMetricsOutputTopic:metricRules: - name: operationRegexp: pathRegexp: valueRegexp: reconciliation: eventInputTopic: ${env}-internal-reltio-reconciliation-events eventOutputTopic: ${env}-internal-reltio-events eventMetricsInputTopic: ${env}-internal-reltio-reconciliation-metrics-event eventMetricsOutputTopic: ${env}-internal-reconciliation-metrics-efk-transactionsmetricRules: - name: reconciliation.object.missed operationRegexp: "remove" pathRegexp: "" valueRegexp: ".*" - name: reconciliation.object.added operationRegexp: "add" pathRegexp: "" valueRegexp: ".*" - name: reconciliation.lookupcode.error operationRegexp: "add" pathRegexp: "^.*/lookupCode$" valueRegexp: ".*" - name: reconciliation.lookupcode.changed operationRegexp: "replace" pathRegexp: "^.*/lookupCode$" valueRegexp: ".*" - name: reconciliation.value.changed operationRegexp: "add|replace|remove" pathRegexp: "^/attributes/.+$" valueRegexp: ".*" - name: reconciliation.other.reason operationRegexp: ".*" pathRegexp: ".*" valueRegexp: ".*"Consumes events from eventMetricsInputTopic, then calculates the diff between the current and previous event and, based on the diff, produces statistics and metrics. 
Finally, it produces an event with all the information to eventMetricsOutputTopic" + }, + { + "title": "Reltio Subscriber", + "pageID": "164469916", + "pageLink": "/display/GMDM/Reltio+Subscriber", + "content": "DescriptionReltio subscriber is part of the Reltio events streaming flow. It consumes Reltio events from Amazon SQS, filters, maps, and transfers them to a Kafka topic.Part of: Streaming channelTechnology: Java, Spring, Apache CamelCode link: reltio-subscriberFlowsEntity change events processing (Reltio)Exposed interfacesInterface NameTypeEndpoint patternDescriptionKafka topic KAFKA${env}-internal-reltio-eventsEvents pulled from SQS are transformed and published to the Kafka topicDependent componentsComponentInterfaceFlowDescriptionSqs - queueEntity change events processing (Reltio)It stores events about entity modifications in ReltioEntity enricherReltio Subscriber downstream component. Collects events from Kafka and produces events enriched with the target entityConfigurationConfig ParameterDefault valueDescriptionreltio_subscriber.reltio.queuempe-01_FLy4mo0XAh0YEbNReltio queue namereltio_subscriber.reltio.queueOwner930358522410Reltio queue owner numberreltio_subscriber.reltio.concurrentConsumers1Max number of concurrent consumersreltio_subscriber.reltio.messagesPerPoll10Messages per pollreltio_subscriber.publisher.topicdev-internal-reltio-eventsPublisher kafka topicreltio_subscriber.publisher.enableOnStartupyesEnable on startupreltio_subscriber.publisher.filterSelfMergesnoFilter self merges( yes/no )reltio_subscriber.relationshipPublisher.topicdev-internal-reltio-relations-eventsRelationship publisher kafka topicreltio_subscriber.dcrPublisher.topicnullDCR publisher kafka topicreltio_subscriber.kafka.servers10.192.71.136:9094Kafka serversreltio_subscriber.kafka.groupIdhubKafka group Idreltio_subscriber.kafka.saslMechanismPLAINKafka sasl mechanismreltio_subscriber.kafka.securityProtocolSASL_SSLKafka security 
protocolreltio_subscriber.kafka.sslTruststoreLocationsrc/test/resources/client.truststore.jksKafka truststore locationreltio_subscriber.kafka.sslTuststorePasswordkafka123Kafka truststore passwordreltio_subscriber.kafka.usernamenullKafka usernamereltio_subscriber.kafka.passwordnullKafka user passwordreltio_subscriber.kafka.compressionCodecnullKafka compression codecreltio_subscriber.poller.types3Source typereltio_subscriber.poller.enableOnStartupnoEnable on startup( yes/no )reltio_subscriber.poller.fileMask.*Input files maskreltio_subscriber.poller.bucketNamecandf-mesosName of S3 bucketreltio_subscriber.poller.processingTimeoutMs7200000Timeout in millisecondsreltio_subscriber.poller.inputFoldernullInput directoryreltio_subscriber.poller.outputFoldernullOutput directoryreltio_subscriber.poller.keynullPoller keyreltio_subscriber.poller.secretnullPoller secretreltio_subscriber.poller.regionEU_WEST_1Poller regionreltio_subscriber.allowedEventTypes- ENTITY_CREATED- ENTITY_REMOVED- ENTITY_CHANGED- ENTITY_LOST_MERGE- ENTITIES_MERGED- ENTITIES_SPLITTED- RELATIONSHIP_CREATED- RELATIONSHIP_CHANGED- RELATIONSHIP_REMOVED- RELATIONSHIP_MERGED- RELATION_LOST_MERGE- CHANGE_REQUEST_CHANGED- CHANGE_REQUEST_CREATED- CHANGE_REQUEST_REMOVED- ENTITIES_MATCHES_CHANGEDEvent types that are processed when received. Other event types are rejectedreltio_subscriber.transactionLogger.kafkaEfk.enablenullTransaction logger enabled( true/false)reltio_subscriber.transactionLogger.kafkaEfk.logContentOnlyOnFailednullLog content only on failed( true/false)reltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.groupIdnullKafka consumer group Idreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.autoOffsetResetnullKafka transaction logger topicreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.consumerCountnullreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.sessionTimeoutMsnullSession 
timeoutreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.maxPollRecordsnullreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.breakOnFirstErrornullreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.consumerRequestTimeoutMsnullreltio_subscriber.transactionLogger.SimpleLog.enablenull" + }, + { + "title": "Clients", + "pageID": "164470170", + "pageLink": "/display/GMDM/Clients", + "content": "The section describes clients (systems) that publish or subscribe data to MDM systems via the MDM HUB.Active clients Aggregated Contact ListCOMPANY MDM TeamNameContactAndrew J. VarganinAndrew.J.Varganin@COMPANY.comSowjanya Tirumalasowjanya.tirumala@COMPANY.comJohn AustinJohn.Austin@COMPANY.comTrivedi NishithNishith.Trivedi@COMPANY.comGLOBALClientContactsMAPDL-BT-Production-Engineering@COMPANY.comKOLDL-SFA-INF_Support_PforceOL@COMPANY.comSolanki, Hardik (US - Mumbai) ;Yagnamurthy, Maanasa (US - Hyderabad) ;ChinaMing Ming ;Jiang, Dawei MAPPShashi.Banda@COMPANY.comRajesh.K.Chengalpathy@COMPANY.comDebbie.Gelfand@COMPANY.comDinesh.Vs@COMPANY.comDL-MAPP-Navigator-Hypercare-Support@COMPANY.comJapan DWHDL-GDM-ServiceOps-Commercial_APAC@COMPANY.comGRACEDL-AIS-Mule-Integration-Support@COMPANY.comEngageDL-BTAMS-ENGAGE-PLUS@COMPANY.com;Amish.Adhvaryu@COMPANY.comPTRSSagar.Bodala@COMPANY.comOneMedMarsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.comMedicDL-F&BO-MEDIC@COMPANY.comGBL USClientContactsCDWNarayanan, Abhilash Raman, Krishnan ETLNayan, Rajeev Duvvuri, Satya KOLTikyani, Devesh Brahma, Bagmita Solanki, Hardik US Trade (FLEX COV)ClientContactsMain contactsDube, Santosh R Manseau, Melissa Thirumurthy, Bala Subramanyam Business TeamMax, Deanna Faddah, Laura Jordan GIS(file transfer)Mandala, Venkata Srivastava, Jayant " + }, + { + "title": "KOL", + "pageID": "164470183", + "pageLink": "/display/GMDM/KOL", + "content": "\nData pushing\n Figure 22. 
KOL authentication with Identity ManagerThe KOL system pushes data to the MDM integration service using a REST API. To authenticate, KOL uses an external OAuth2 authorization service named Identity Manager to fetch an access token. The system then sends the REST request to the integration service endpoint, which validates the access token using the Identity Manager API.\n\nKOL manages data for several countries. Most of them are loaded to the default MDM system (Reltio), supported by the integration service, but for the GB, PT, DK and CA countries data is sent to Nucleus 360. The decision where the data should be loaded is made by MDM Manager logic. Based on the Country attribute value, the MDM Manager selects the right MDM adapter. It is important to set the Country attribute value correctly during data updates. The same rule applies to the country query parameter during data fetching. Thanks to this, the MDM Manager is able to process the right data in the right MDM system. If data is updated with the Country attribute set incorrectly, the REST request will be rejected. When data is fetched without the country query parameter set, the default MDM (Reltio) will be used to resolve the data.\n\nEvent processing\nThe KOL application receives events in one standard way – a Kafka topic. Events from the Reltio MDM system are published to this topic directly after Reltio has processed the changes, sent events to SQS and the Event Publisher has processed them. This means that Reltio processes changes and sends events in real time, so a client listening for events does not have to wait long to receive them.\n Figure 23. Difference between processing events in Reltio and Nucleus 360The situation changes when entity changes are processed by Nucleus 360. This MDM publishes changes once in a while, so events will be delivered to the Kafka topic with a longer delay." 
+ }, + { + "title": "Japan DWH", + "pageID": "164470060", + "pageLink": "/display/GMDM/Japan+DWH", + "content": "ContactsJapan DWH Feed Support DL: DL-GDM-ServiceOps-Commercial_APAC@COMPANY.com - it is valid until 15/04/2023DL-ATP-SERVICEOPS-JPN-DATALAKE@COMPANY.com - it will be valid from 15/04/2023 FlowsJapan DWH has only one batch process, which consumes the incremental file export from the data warehouse, processes it and loads the data to MDM. This process is based on the incremental batch engine and runs on the Airflow platform.Input filesThe input files are delivered by GIS to an AWS S3 bucket.UATPRODS3 service accountnot createdsvc_gbi-cc_mdm_japan_rw_s3S3 Access key IDnot createdAKIATCTZXPPJU6VBUUKBS3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectS3 Foldermdm/UAT/inbound/JAPAN/mdm/inbound/JAPAN/Input data file mask JPDWH_[0-9]+.zipJPDWH_[0-9]+.zipCompressionZipZipFormatFlat files, DWH dedicated format Flat files, DWH dedicated format ExampleJPDWH_20200421202224.zipJPDWH_20200421202224.zipSchedulenoneAt 08:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5). The input file is not delivered on Japanese holidays (https://www.officeholidays.com/countries/japan/2020)Airflow jobinc_batch_jp_stageinc_batch_jp_prodData mapping The detailed field mappings are presented in the document.Mapping rules:Inactive HCPs, HCOs are not loaded in IDL. They are filtered out using delete flags present in source files. Profiles being inactivated in the DWH source are soft-deleted from Reltio. Affiliations between hospitals and departments are not delivered by the source directly. They are derived from the dri file (doctor – institution association) having department ids referring to a dictionary on affiliations. Each hospital in Reltio has dedicated department objects although departments are a global dictionary in Japan DWH. HCP addresses are copied from affiliated HCOs. HCP workplaces refer to departments. Departments point to Main HCOs using MainHCO relations. 
HCP affiliations pointing to inactive HCOs are skipped during the load, but HCP profiles are loaded. Department names and hospital names are added to address attributes (HcoName, MainHcoName) associated with HCPs to allow searching by their names.ConfigurationFlow configuration is stored in the MDM Environment configuration repository. For each environment where the flow should be enabled the configuration file inc_batch_jp.yml has to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_jp" has to be added to the "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the inc_batch_jp.yml file for the UAT and PROD env:UATPRODinc_batch_jp.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_jp.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_jp.ymlApplying configuration changes is done by executing the deploy Airflow's components procedure.SOPsThere is no particular SOP procedure for this flow. All common SOPs were described in the "Airflow:" chapter." + }, + { + "title": "Nucleus", + "pageID": "164470256", + "pageLink": "/display/GMDM/Nucleus", + "content": "ContactsDelivery of data used by Nucleus processes is maintained by the Iqvia Team: COMPANY-MDM-Support@iqvia.comFlowsThere are several batch processes that load data extracted from Nucleus to Reltio MDM. 
Data are delivered for the countries: Canada, South Korea, Australia, United Kingdom, Portugal and Denmark as zip archives available in an S3 bucket.Input filesUATPRODS3 service accountnot createdsvc_mdm_project_nuc360_rw-s3S3 Access key IDnot createdAKIATCTZXPPJTFMGRZFMS3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectS3 Foldermdm/UAT/inbound/APAC_CCV/AU/mdm/UAT/inbound/APAC_CCV/KR/mdm/UAT/inbound/nuc360/inc-batch/GB/mdm/UAT/inbound/nuc360/inc-batch/PT/mdm/UAT/inbound/nuc360/inc-batch/DK/mdm/UAT/inbound/nuc360/inc-batch/CA/mdm/inbound/nuc360/inc-batch/AU/mdm/inbound/nuc360/inc-batch/KR/mdm/inbound/nuc360/inc-batch/GB/mdm/inbound/nuc360/inc-batch/PT/mdm/inbound/nuc360/inc-batch/DK/mdm/inbound/nuc360/inc-batch/CA/Input data file mask NUCLEUS_CCV_[0-9_]+.zipNUCLEUS_CCV_[0-9_]+.zipCompressionZipZipFormatFlat files in CCV format Flat files in CCV format ExampleNUCLEUS_CCV_8000000792_20200609_211102.zipNUCLEUS_CCV_8000000792_20200609_211102.zipSchedulenoneinc_batch_apac_ccv_au_prod - at 17:00 UTC on every day-of-week from Monday through Friday (0 17 * * 1-5)inc_batch_apac_ccv_kr_prod - at 08:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5)inc_batch_eu_ccv_gb_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)inc_batch_eu_ccv_pt_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)inc_batch_eu_ccv_dk_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)inc_batch_amer_ccv_ca_prod - at 17:00 UTC on every day-of-week from Monday through Friday (0 17 * * 1-5)Airflow's DAGSinc_batch_apac_ccv_au_stageinc_batch_apac_ccv_kr_stageinc_batch_eu_ccv_gb_stageinc_batch_eu_ccv_pt_stageinc_batch_eu_ccv_dk_stageinc_batch_amer_ccv_ca_stageinc_batch_apac_ccv_au_prodinc_batch_apac_ccv_kr_prodinc_batch_eu_ccv_gb_stageinc_batch_eu_ccv_pt_stageinc_batch_eu_ccv_dk_stageinc_batch_amer_ccv_ca_prodData mappingData mapping is described in the following document.ConfigurationFlows 
configuration is stored in the MDM Environment configuration repository. For each environment where the flows should be enabled configuration files have to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name has to be added to the "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the flows' configuration files for the UAT and PROD env:Flow configuration fileUATPRODinc_batch_apac_ccv_au.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ccv_au.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ccv_au.ymlinc_batch_apac_ccv_kr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ccv_kr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ccv_kr.ymlinc_batch_eu_ccv_gb.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_gb.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_gb.ymlinc_batch_eu_ccv_pt.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_pt.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_pt.ymlinc_batch_eu_ccv_dk.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inve
ntory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_dk.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_dk.ymlinc_batch_amer_ccv_ca.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_amer_ccv_ca.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_amer_ccv_ca.ymlTo deploy DAG configuration changes you have to execute the SOP Deploying DAGsSOPsThere is no particular SOP procedure for this flow. All common SOPs were described in the "Airflow:" chapter." + }, + { + "title": "Veeva New Zealand", + "pageID": "164470112", + "pageLink": "/display/GMDM/Veeva+New+Zealand", + "content": "ContactsDL-ATP-APC-APACODS-SUPPORT@COMPANY.comFlowThe flow transforms Veeva's data to the Reltio model and loads the result to MDM. 
Data contains HCPs and HCOs from New Zealand.This flow is divided into two steps:Pre-processing - Copying source files from Veeva's S3 bucket, filtering them and uploading the result to the HUB's bucket,Incremental batch - Running the standard incremental batch process.Each of these steps is realized by a separate Airflow DAG.Input filesUATPRODVeeva's S3 service accountSRVC-MDMHUB_GBL_NONPRODSRVC-MDMHUB_GBLVeeva's S3 Access key IDAKIAYCS3RWHN72AQKG6BAKIAYZQEVFARKMXC574QVeeva's S3 bucketapacdatalakeprcaspasp55737apacdatalakeprcaspasp63567Veeva's S3 bucket regionap-southeast-1ap-southeast-1Veeva's S3 Folderproject_kangaroo/landing/veeva/sf_account/project_kangaroo/landing/veeva/sf_address_vod__c/project_kangaroo/landing/veeva/sf_child_account_vod__c/project_kangaroo/landing/veeva/sf_account/project_kangaroo/landing/veeva/sf_address_vod__c/project_kangaroo/landing/veeva/sf_child_account_vod__c/Veeva's Input data file mask * (all files inside above folders)* (all files inside above folders)Veeva's Input data file compressionnonenoneHUB's S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectHUB's S3 Foldermdm/UAT/inbound/APAC_VEEVA/mdm/inbound/APAC_PforceRx/HUB's input data file maskin_nz_[0-9]+.zipin_nz_[0-9]+.zipHUB's input data file compressionZipZipSchedule (is set only for pre-processing DAG)noneAt 06:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5)Pre-processing Airflow's DAGinc_batch_apac_veeva_wrapper_stageinc_batch_apac_veeva_wrapper_prodIncremental batch Airflow's DAGinc_batch_apac_veeva_stageinc_batch_apac_veeva_prodData mappingData mapping is described in the following document.ConfigurationConfiguration of this flow is defined in two configuration files. The first of these, inc_batch_apac_veeva_wrapper.yml, specifies the pre-processing DAG configuration and the second, inc_batch_apac_veeva.yml, defines the configuration of the DAG for the standard incremental batch process. 
To activate the flow on an environment the files should be created in the following location: inventory/[env name]/group_vars/gw-airflow-services/ and the batch names "inc_batch_apac_veeva_wrapper" and "inc_batch_apac_veeva" have to be added to the "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made in the configuration are applied on the environment by running the Deploy Airflow Components procedure.The table below presents the location of the flows' configuration files for the UAT and PROD env:Configuration fileUATPRODinc_batch_apac_veeva_wrapper.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_veeva_wrapper.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_veeva_wrapper.ymlinc_batch_apac_veeva.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_veeva.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_veeva.ymlSOPsThere are no dedicated SOP procedures for this flow. However, you must remember that this flow consists of two DAGs which both have to finish successfully.All common SOPs were described in the "Incremental batch flows: SOP" chapter." + }, + { + "title": "ODS", + "pageID": "164470116", + "pageLink": "/display/GMDM/ODS", + "content": "ContactsDL-ATP-APC-APACODS-SUPPORT@COMPANY.com - APAC ODS SupportDL-GBI-PFORCERX_ODS_Support@COMPANY.com - EU ODS SupportKaranam, Bindu ; velmurugan, Aarthi - AMER ODS SupportFlowThe flow transforms the ODS's data to the Reltio model and loads the result to MDM. 
Data contains HCPs and HCOs from the countries: HK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BL, FR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RS.This flow is divided into two steps:Pre-processing - Copying source files from the ODS's bucket and then uploading them to the HUB's bucket,Incremental batch - Running the standard incremental batch process.Each of these steps is realized by a separate Airflow DAG.Input filesUAT APACUAT EUPROD APACPROD EUSupported countriesHK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BLFR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RSHK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BLFR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RSODS S3 service accountSRVC-GCMDMS3DEVSRVC-GCMDMS3DEVSRVC-GCMDMS3PRDsvc_gbicc_euw1_prod_partner_gcmdm_rw_s3ODS S3 Access key IDAKIAYCS3RWHN45FC4MOPAKIAYCS3RWHN45FC4MOPAKIAYZQEVFARE64ESXWHAKIA6NIP3JYIMUIQABMXODS S3 bucketapacdatalakeintaspasp100939apacdatalakeintaspasp100939apacdatalakeintaspasp104492pfe-gbi-eu-w1-prod-partner-internalODS S3 folder/APACODSD/GCMDM//APACODSD/GCMDM//APACODSD/GCMDM//eu-dmart-odsd-file-extracts/gateway/GATEWAY/ODS/PROD/GCMDM/ODS Input data file mask ****ODS Input data file compressionzipzipzipzipHUB's S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectpfe-baiaes-eu-w1-projectHUB's S3 Foldermdm/UAT/inbound/ODS/APAC/mdm/UAT/inbound/ODS/EU/mdm/inbound/ODS/APAC/mdm/inbound/ODS/EU/HUB's input data file mask****HUB's input data file compressionzipzipzipzipPre-processing Airflow's DAGmove_ods_apac_export_stagemove_ods_eu_export_stagemove_ods_apac_export_prodmove_ods_eu_export_prodPre-processing Airflow's DAG schedulenonenone0 6 * * 1-50 7 * * 2 (At 07:00 on Tuesday.)Incremental batch Airflow's DAGinc_batch_apac_ods_stageinc_batch_eu_ods_stageinc_batch_apac_ods_prodinc_batch_eu_ods_prodIncremental batch Airflow's DAG schedulenonenone0 8 * * 1-50 8 * * 2 (At 08:00 on Tuesday.)Data mappingData mapping is described in the following 
document.ConfigurationConfiguration of this flow is defined in two configuration files. The first of these, move_ods_apac_export.yml, specifies the pre-processing DAG configuration and the second, inc_batch_apac_ods.yml, defines the configuration of the DAG for the standard incremental batch process. To activate the flow on an environment the files should be created in the following location: inventory/[env name]/group_vars/gw-airflow-services/ and the batch names "move_ods_apac_export" and "inc_batch_apac_ods" have to be added to the "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made in the configuration are applied on the environment by running the Deploy Airflow's components procedure.The table below presents the location of the flows' configuration files for the UAT and PROD env:Configuration fileUATPRODmove_ods_apac_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/move_ods_apac_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/move_ods_apac_export.ymlinc_batch_apac_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ods.ymlmove_ods_eu_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/move_ods_eu_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/move_ods_eu_export.ymlinc_batch_eu_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch
_eu_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ods.ymlSOPsThere are no dedicated SOP procedures for this flow. However, you must remember that this flow consists of two DAGs which both have to finish successfully.All common SOPs were described in the "Incremental batch flows: SOP" chapter." + }, + { + "title": "China", + "pageID": "164470000", + "pageLink": "/display/GMDM/China", + "content": "ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicChina client accesschina-clientKey AuthN/A- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"- "UPDATE_HCP"- "GET_ENTITIES"- CN- "CN3RDPARTY"- "MDE"- "FACE"- "EVR"- dev-out-full-mde-cn- stage-out-full-mde-cn- dev-out-full-mde-cnContactsQianRu.Zhou@COMPANY.comFlowsBatch merge & unmergeDCR generation process (China DCR)[FL.IN.1] HCP & HCO update processesReportsReports" + }, + { + "title": "Corrective batch process for EVR", + "pageID": "164470250", + "pageLink": "/display/GMDM/Corrective+batch+process+for+EVR", + "content": "Corrective batch process for EVR fixes China data using the standard incremental batch mechanism. The process gets data from a csv file, transforms it to the json model and loads it to Reltio. 
During loading of changes, the following HCP attributes can be changed:Name,Title,SubTypeCode,ValidationStatus,a specific Workplace can be ignored or its ValidationStatus can be changed,a specific MainWorkplace can be ignored.The load saves the changes in Reltio under a crosswalk where:the type of the crosswalk is EVR,the crosswalk's value is the same as the Reltio id,the crosswalk's source table is "corrective".Thanks to this, it is easy to find the changes that were made by this process.Input filesThe input files are delivered to the s3 bucketUATPRODInput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectInput S3 Foldermdm/UAT/inbound/CHINA/EVR/mdm/inbound/CHINA/EVR/Input data file mask evr_corrective_file_[0-9]*.zipevr_corrective_file_[0-9]*.zipCompressionzipzipFormatFlat files in CSV format Flat files in CSV format Exampleevr_corrective_file_20201109.zipevr_corrective_file_20201109.zipSchedulenonenoneAirflow's DAGSinc_batch_china_evr_stageinc_batch_china_evr_prodData mappingMapping from CSV to Reltio's json is described in this document: evr_corrective_file_format_new.xlsxExample file presenting input data: evr_corrective_file_20221215.csvConfigurationFlows configuration is stored in the MDM Environment configuration repository. For each environment where the flow should be enabled, the configuration file inc_batch_china_evr.yml has to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_china" has to be added to the "airflow_components" list, which is defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. 
The table below presents the location of the flow configuration files for the UAT and PROD environments:UATPRODhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_china_evr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_china_evr.ymlSOPsThere is no particular SOP procedure for this flow. All common SOPs are described in the "Incremental batch flows: SOP" chapter."
  },
  {
    "title": "Reports",
    "pageID": "164469873",
    "pageLink": "/display/GMDM/Reports",
    "content": "Daily ReportsThere are 4 reports whose generation is triggered by the china_generate_reports_[env] DAG. The DAG starts all dependent report DAGs and then waits for the files published by them on s3. When all required files are delivered to s3, the DAG sends the email with the generated reports to all configured recipients.china_generate_reports_[env]|-- china_import_and_gen_dcr_statistics_report_[env] |-- import_pfdcr_from_reltio_[env] +-- china_dcr_statistics_report_[env]|-- china_import_and_gen_merge_report_[env] |-- import_merges_from_reltio_[env] +-- china_merge_report_[env]|-- china_total_entities_report_[env]+-- china_hcp_by_source_report_[env]Daily DAGs are triggered by DAG china_generate_reportsUATPRODParent DAGchina_generate_reports_stagechina_generate_reports_prodSchedulenoneEvery day at 00:05.Filter applied to all reports:FieldValuecountrycnstatusACTIVEHCP by source reportThe report shows how many HCPs were delivered to MDM by a specific source.The Output  files are delivered to s3 bucket:UATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_hcp_by_source_report_.*.xlsxchina_hcp_by_source_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel 
xlsxExamplechina_hcp_by_source_report_20201113093437.xlsxchina_hcp_by_source_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_hcp_by_source_report_stagechina_hcp_by_source_report_prodReport Templatechina_hcp_by_source_template.xlsxMongo scripthcp_by_source_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionSourceThe source which delivered the HCPHCPNumber of all HCPs which have the sourceDaily IncrementalNumber of HCPs modified during the last UTC day.Total entities reportThe report shows the total entities count, grouped by entity type, their validation status and speaker attribute.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_total_entities_report_.*.xlsxchina_total_entities_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_total_entities_report_20201113093437.xlsxchina_total_entities_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_total_entities_report_stagechina_total_entities_report_prodReport Templatechina_total_entities_template.xlsxMongo scripttotal_entities_report.jsApplied filters"country" : "CN""status": "ACTIVE"Report fields description:ColumnDescriptionTotal_Hospital_MDMNumber of total hospital MDMTotal_Dept_MDMNumber of total department MDMTotal_HCP_MDMNumber of total HCP MDMValidated_HCPNumber of validated HCPPending_HCPNumber of pending HCPNot_Validated_HCPNumber of not validated HCPOther_Status_HCP?Number of HCP with other statusTotal_Speaker Number of total speakersTotal_Speaker_EnabledNumber of enabled speakersTotal_Speaker_DisabledNumber of disabled speakersDCR statistics reportThe report shows statistics about data change requests which were created in MDM. 
Generating this report is divided into two steps:Importing PfDataChangeRequest data from Reltio - this step is realized by the import_pfdcr_from_reltio_[env] DAG. It schedules a data export in Reltio using the Export Entities operation and then waits for the result. After the export file is ready, the DAG loads its content to mongo,Generating the report - generates the report based on the previously imported data. This step is performed by the china_dcr_statistics_report_[env] DAG.Both of the above steps are run sequentially by the china_import_and_gen_dcr_statistics_report_[env] DAG. The Output  files are delivered to s3 bucket:UATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_dcr_statistics_report_.*.xlsxchina_dcr_statistics_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_dcr_statistics_report_20201113093437.xlsxchina_dcr_statistics_report_20201113093437.xlsxAirflow's DAGSchina_dcr_statistics_report_stagechina_dcr_statistics_report_prodReport Templatechina_dcr_statistics_template.xlsxMongo scriptchina_dcr_statistics_report.jsApplied filtersThere are no additional conditions applied to select dataReport fields description:ColumnDescriptionTotal_DCR_MDMTotal number of DCRsNew_HCP_DCRTotal number of DCRs of type NewHCPNew_HCO_L1_DCRTotal number of DCRs of type NewHCOL1New_HCO_L2_DCRTotal number of DCRs of type NewHCOL2MultiAffil_DCRTotal number of DCRs of type MultiAffilNew_HCP_DCR_CompletedTotal number of DCRs of type NewHCP which have completed statusNew_HCO_L1_DCR_CompletedTotal number of DCRs of type NewHCOL1 which have completed statusNew_HCO_L2_DCR_CompletedTotal number of DCRs of type NewHCOL2 which have completed statusMultiAffil_DCR_CompletedTotal number of DCRs of type MultiAffil which have completed statusNew_HCP_AcceptTotal number of DCRs of type NewHCP which were acceptedNew_HCP_UpdateTotal number of DCRs of type NewHCP which were 
updated during respondingNew_HCP_MergeTotal number of DCRs of type NewHCP which were accepted and whose response had entities to mergeNew_HCP_MergeUpdateTotal number of DCRs of type NewHCP which were updated and whose response had entities to mergeNew_HCP_RejectTotal number of DCRs of type NewHCP which were rejectedNew_HCP_CloseTotal number of closed DCRs of type NewHCPAffil_AcceptTotal number of DCRs of type MultiAffil which were acceptedAffil_RejectTotal number of DCRs of type MultiAffil which were rejectedAffil_AddTotal number of DCRs of type MultiAffil whose data were updated during respondingMultiAffil_DCR_CloseTotal number of closed DCRs of type MultiAffilNew_HCO_L1_UpdateTotal number of DCRs of type NewHCOL1 whose data were updated during respondingNew_HCO_L1_RejectTotal number of rejected DCRs of type NewHCOL1 New_HCO_L1_CloseTotal number of closed DCRs of type NewHCOL1 New_HCO_L2_AcceptTotal number of accepted DCRs of type NewHCOL2New_HCO_L2_UpdateTotal number of DCRs of type NewHCOL2 whose data were updated during respondingNew_HCO_L2_RejectTotal number of rejected DCRs of type NewHCOL2New_HCO_L2_CloseTotal number of closed DCRs of type NewHCOL2New_HCP_DCR_OpenedTotal number of opened DCRs of type NewHCPMultiAffil_DCR_OpenedTotal number of opened DCRs of type MultiAffilNew_HCO_L1_DCR_OpenedTotal number of opened DCRs of type NewHCOL1New_HCO_L2_DCR_OpenedTotal number of opened DCRs of type NewHCOL2New_HCP_DCR_FailedTotal number of failed DCRs of type NewHCPMultiAffil_DCR_FailedTotal number of failed DCRs of type MultiAffilNew_HCO_L1_DCR_FailedTotal number of failed DCRs of type NewHCOL1New_HCO_L2_DCR_FailedTotal number of failed DCRs of type NewHCOL2Merge reportThe report shows statistics about merges which occurred in MDM. Generating this report, similarly to the DCR statistics report, is divided into two steps:Importing merges data from Reltio - this step is performed by the import_merges_from_reltio_[env] DAG. 
It schedules a data export in Reltio using the Export Merge Tree operation and then waits for the result. After the export file is ready, the DAG loads its content to mongo,Generating the report - generates the report based on the previously imported data. This step is performed by the china_merge_report_[env] DAG.Both of the above steps are run sequentially by the china_import_and_gen_merge_report_[env] DAG. The Output  files are delivered to s3 bucket:UATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_merge_report_.*.xlsxchina_merge_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_merge_report_20201113093437.xlsxchina_merge_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_import_and_gen_merge_report_stagechina_import_and_gen_merge_report_prodReport Templatechina_daily_merges_template.xlsxMongo scriptmerge_report.jsApplied filters"country" : "CN"Report fields description:ColumnDescriptionDateDate when merges occurredDaily_Merge_HosptialTotal number of merges on HCODaily_Merge_HCPTotal number of merges on HCPDaily_Manually_Merge_HosptialTotal number of manual merges on HCODaily_Manually_Merge_HCPTotal number of manual merges on HCPMonthly ReportsThere are 8 monthly reports. All of them are triggered by china_monthly_generate_reports_[env], which then waits for the files generated and published to the S3 bucket by each dependent DAG. 
When all required files exist on S3, the DAG prepares the email with all files and sends it to the defined recipients.china_monthly_generate_reports_[env]|-- china_monthly_hcp_by_SubTypeCode_report_[env]|-- china_monthly_hcp_by_channel_report_[env]|-- china_monthly_hcp_by_city_type_report_[env]|-- china_monthly_hcp_by_department_report_[env]|-- china_monthly_hcp_by_gender_report_[env]|-- china_monthly_hcp_by_hospital_class_report_[env]|-- china_monthly_hcp_by_province_report_[env]+-- china_monthly_hcp_by_source_report_[env]Monthly DAGs are triggered by DAG china_monthly_generate_reportsUATPRODParent DAGchina_monthly_generate_reports_stagechina_monthly_generate_reports_prodHCP by source reportThe report shows how many HCPs were delivered by a specific source.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_source_report_.*.xlsxchina_monthly_hcp_by_source_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_source_report_20201113093437.xlsxchina_monthly_hcp_by_source_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_source_report_stagechina_monthly_hcp_by_source_report_prodReport Templatechina_monthly_hcp_by_source_template.xlsxMongo scriptmonthly_hcp_by_source_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionSourceSource that delivered the HCPHCPNumber of all HCPs which have the sourceHCP by channel reportThe report presents the number of HCPs which were delivered to MDM through a specific Channel.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask 
china_monthly_hcp_by_channel_report_.*.xlsxchina_monthly_hcp_by_channel_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_channel_report_20201113093437.xlsxchina_monthly_hcp_by_channel_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_channel_report_stagechina_monthly_hcp_by_channel_report_prodReport Templatechina_monthly_hcp_by_channel_template.xlsxMongo scriptmonthly_hcp_by_channel_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionChannelChannel nameHCPNumber of all HCPs which match the channelHCP by SubTypeCode reportThe report presents HCPs grouped by their Medical Title (SubTypeCode).The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_SubTypeCode_report_.*.xlsxchina_monthly_hcp_by_SubTypeCode_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsxchina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_SubTypeCode_report_stage china_monthly_hcp_by_SubTypeCode_report_prodReport Templatechina_monthly_hcp_by_SubTypeCode_template.xlsxMongo scriptmonthly_hcp_by_SubTypeCode_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionMedical TitleMedical Title (SubTypeCode) of HCPHCPNumber of all HCPs which match the medical titleHCP by city type reportThe report shows the number of HCPs who work in a specific city type. The type of city is not available in MDM data. To determine the type of a specific city, the report uses an additional collection, chinaGeography, which has a mapping between a city's name and its type. 
Data in the collection can be updated at the request of the China team.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_city_type_report_.*.xlsxchina_monthly_hcp_by_city_type_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_city_type_report_20201113093437.xlsxchina_monthly_hcp_by_city_type_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_city_type_report_stage china_monthly_hcp_by_city_type_report_prodReport Templatechina_monthly_hcp_by_city_type_template.xlsxMongo scriptmonthly_hcp_by_city_type_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionCity TypeCity Type taken from the chinaGeography collection which matches entity.attributes.Workplace.value.MainHCO.value.Address.value.City.valueHCPNumber of all HCPs which match the city typeHCP by department reportThe report presents the HCPs grouped by the department where they work.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_department_report_.*.xlsxchina_monthly_hcp_by_department_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_department_report_20201113093437.xlsxchina_monthly_hcp_by_department_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_department_report_stage china_monthly_hcp_by_department_report_prodReport Templatechina_monthly_hcp_by_department_template.xlsxMongo scriptmonthly_hcp_by_department_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": 
"ACTIVE"Report fields description:ColumnDescriptionDeptDepartment's nameHCPNumber of all HCPs which match the deptHCP by gender reportThe report presents the HCPs grouped by gender.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_gender_report_.*.xlsxchina_monthly_hcp_by_gender_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_gender_report_20201113093437.xlsxchina_monthly_hcp_by_gender_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_gender_report_stage china_monthly_hcp_by_gender_report_prodReport Templatechina_monthly_hcp_by_gender_template.xlsxMongo scriptmonthly_hcp_by_gender_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionGenderGenderHCPNumber of all HCPs which match the genderHCP by hospital class reportThe report presents the HCPs grouped by theirs department.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_hospital_class_report_.*.xlsxchina_monthly_hcp_by_hospital_class_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_hospital_class_report_20201113093437.xlsxchina_monthly_hcp_by_hospital_class_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_hospital_class_report_stage china_monthly_hcp_by_hospital_class_report_prodReport Templatechina_monthly_hcp_by_hospital_class_template.xlsxMongo scriptmonthly_hcp_by_hospital_class_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields 
description:ColumnDescriptionClassClassificationHCPNumber of all HCPs which match the classHCP by province reportThe report presents the HCPs grouped by the province where they work.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_province_report_.*.xlsxchina_monthly_hcp_by_province_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_province_report_20201113093437.xlsxchina_monthly_hcp_by_province_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_province_report_stage china_monthly_hcp_by_province_report_prodReport Templatechina_monthly_hcp_by_province_template.xlsxMongo scriptmonthly_hcp_by_province_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ProvinceName of provinceHCPNumber of all HCPs which match the ProvinceSOPsHow can I check the status of generating reports?The status of generating reports can be checked by verifying the task statuses on the main DAGs - china_generate_reports_[env] for daily reports or china_monthly_generate_reports_[env] for monthly reports. Both of these DAGs have the task "sendEmailReports", which waits for the files generated by dependent DAGs. If the required files are not published to S3 in the configured amount of time, the task will fail with the following message:\n[2020-11-27 12:12:54,085] {{docker_operator.py:252}} INFO - Caught: java.lang.RuntimeException: ERROR: Elapsed time 300 minutes. Timeout exceeded: 300\n[2020-11-27 12:12:54,086] {{docker_operator.py:252}} INFO - java.lang.RuntimeException: ERROR: Elapsed time 300 minutes. 
Timeout exceeded: 300\n[2020-11-27 12:12:54,086] {{docker_operator.py:252}} INFO - at SendEmailReports.getListOfFilesLoop(sendEmailReports.groovy:221)\n\tat SendEmailReports.processReport(sendEmailReports.groovy:257)\n[2020-11-27 12:12:54,290] {{docker_operator.py:252}} INFO - at SendEmailReports$processReport.call(Unknown Source)\n\tat sendEmailReports.run(sendEmailReports.groovy:279)\n[2020-11-27 12:12:55,552] {{taskinstance.py:1058}} ERROR - docker container failed: {'StatusCode': 1}\nIn this case you have to check the status of all dependent DAGs to find the reason for the failure, resolve the issue and retry all failed tasks, starting with the tasks in the dependent DAGs and finishing with the task in the main DAG.Daily reports failed due to an error during importing data from Reltio. What to do?If you see that the DAG import_pfdcr_from_reltio_[env] or import_merges_from_reltio_[env] is in a failed state, it probably means that exporting data from Reltio took longer than usual. To confirm this supposition, you have to show the details of the importing DAG and check the status of the waitingForExportFile task. If it is in a failed state and in the logs you can see the following messages:\n[2020-12-04 12:09:10,957] {{s3_key_sensor.py:88}} INFO - Poking for key : s3://pfe-baiaes-eu-w1-project/mdm/reltio_exports/merges_from_reltio_20201204T000718/_SUCCESS\n[2020-12-04 12:09:11,074] {{taskinstance.py:1047}} ERROR - Snap. Time is OUT.\nTraceback (most recent call last):\n File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 922, in _run_raw_task\n result = task_copy.execute(context=context)\n File "/usr/local/lib/python3.7/site-packages/airflow/sensors/base_sensor_operator.py", line 116, in execute\n raise AirflowSensorTimeout('Snap. Time is OUT.')\nairflow.exceptions.AirflowSensorTimeout: Snap. Time is OUT.\n[2020-12-04 12:09:11,085] {{taskinstance.py:1078}} INFO - Marking task as FAILED.\nYou can be pretty sure that the export is still being processed on the Reltio side. 
You can confirm this by using the tasks API. If on the returned list you see tasks in the processing state, it means that MDM is still working on this export. To fix this issue in the DAG, you have to restart the failed task. The DAG will start checking the existence of the export file once again."
  },
  {
    "title": "CDW (AMER)",
    "pageID": "164470121",
    "pageLink": "/pages/viewpage.action?pageId=164470121",
    "content": "ContactsNarayanan, Abhilash Balan, Sakthi Raman, Krishnan GatewayAMER(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicCDW user (NPROD)cdwExternal OAuth2CDW-MDM_client["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]["US"]["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK","IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX","MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)","EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP","867","MYOV_VVA","COMPANY_ACCTS"]CDW user (PROD)cdwExternal OAuth2CDW-MDM_client["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]["US"]["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK","IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX","MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)","EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP","867","MYOV_VVA","COMPANY_ACCTS"]FlowsFlowDescriptionSnowflake: Events publish flowEvents are published to snowflakeSnowflake: Base tables refreshThe table is refreshed (every 2 hours in prod) with those eventsSnowflake MDMTables are read by an ETL process implemented by the COMPANY Team Update 
Usage TagsUpdate BESTCALLEDON used flag on addressesCDW docs: Best Address Data flowClient software Snowpipe " + }, + { + "title": "ETL - COMPANY (GBLUS)", + "pageID": "164470236", + "pageLink": "/pages/viewpage.action?pageId=164470236", + "content": "ContactsNayan, Rajeev Duvvuri, Satya ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicBatchesETL batch load usermdmetl_nprodOAuth2SVC-MDMETL_client- "CREATE_HCP"- "CREATE_HCO"- "CREATE_MCO"- "CREATE_BATCH"- "GET_BATCH"- "MANAGE_STAGE"- "CLEAR_CACHE_BATCH"US- "SHS"- "SHS_MCO"- "IQVIA_MCO"- "CENTRIS"- "ENGAGE 1.0"- "GRV"- "IQVIA_DDD"- "SAP"- "ONEKEY"- "IQVIA_RAWDEA"- "IQVIA_PDRP"- "COV"- "IQVIA_RX"- "MILLIMAN_MCO"- "ICUE"- "KOL_OneView"- "SHS_RX"- "MMIT"- "INTEGRICHAIN"N/Abatches: "Symphony": - "HCPLoading" "Centris": - "HCPLoading" "IQVIA_DDD": - "HCOLoading" - "RelationLoading" "SAP": - "HCOLoading" "ONEKEY": - "HCPLoading" - "HCOLoading" - "RelationLoading" "IQVIA_RAWDEA": - "HCPLoading" "IQVIA_PDRP": - "HCPLoading" "PFZ_CUSTID_SYNC": - "COMPANYCustIDLoading" "OneView": - "HCOLoading" "HCPM": - "HCPLoading" "SHS_MCO": - "MCOLoading" - "RelationLoading" "IQVIA_MCO": - "MCOLoading" - "RelationLoading" "IQVIA_RX": - "HCPLoading" "MILLIMAN_MCO": - "MCOLoading" - "RelationLoading" "VEEVA": - "HCPLoading" - "HCOLoading" - "MCOLoading" - "RelationLoading" "SHS_RX": - "HCPLoading" "MMIT": - "MCOLoading" - "RelationLoading" "DDD_SAP": - "RelationLoading" "INTEGRICHAIN": - "HCOLoading"...ETL Get/Resubmit Errorsmdmetl_nprodOAuth2SVC-MDMETL_client- "GET_ERRORS"- "RESUBMIT_ERRORS"USALLN/AN/AFlowsBatch Controller: creating and updating batch instance - the user invokes the batch-service API to create a new batch instanceBulk Service: loading bulk data - the user invokes the batch-service API to load the dataAfter load, the processing starts - ETL BatchesClient software Informatica ETL data loaderSOPsAdding a New BatchCache Address ID Clear (Remove Duplicates) ProcessCache Address ID Update 
ProcessManager: Resubmitting Failed RecordsSOP in WikiManual Cache ClearUpdating ETL Dictionaries in ConsulUpdating Dictionary" + }, + { + "title": "KOL_ONEVIEW (GBLUS)", + "pageID": "164469966", + "pageLink": "/pages/viewpage.action?pageId=164469966", + "content": "ContactsBrahma, Bagmita Solanki, Hardik Tikyani, Devesh DL DL-iMed_L3@COMPANY.comACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicKOL_OneView userkol_oneviewOAuth2KOL-MDM-PFORCEOL_client- "CREATE_HCP"- "UPDATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"- "LOOKUPS"USKOL_OneViewN/AKOL_OneView TOPICN/AKafka JassN/A"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_ONEVIEW')&& exchange.in.headers.eventType in ['full'] && ['KOL_OneView'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP', 'HCO']"USKOL_OneViewprod-out-full-koloneview-allFlowsCreate/Update HCP/HCO/MCOGet EntityCreate RelationsClient software Kafka Sink JDBC connector" + }, + { + "title": "GRV (GBLUS)", + "pageID": "164469964", + "pageLink": "/pages/viewpage.action?pageId=164469964", + "content": "ContactsBablani, Vijay Jain, Somya Adhvaryu, Amish Reynolds, Lori Alphonso, Venisa Patel, Jay Anumalasetty, Jayasravani ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicGRV UsergrvOAuth2GRV-MDM_client- "GET_ENTITIES"- "LOOKUPS"- "VALIDATE_HCP"- "CREATE_HCP"- "UPDATE_HCP"US- "GRV"N/AGRV-AIS-MDM Usergrv_aisOAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"- "LOOKUPS"- "VALIDATE_HCP"- "CREATE_HCP"- "UPDATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"US- "GRV"- "CENTRIS"- "ENGAGE"N/AGRV TOPICN/AKafka JassN/A"(exchange.in.headers.reconciliationTarget==null)&& exchange.in.headers.eventType in ['full_not_trimmed'] && ['GRV'].intersect(exchange.in.headers.eventSource)&& exchange.in.headers.objectType in ['HCP'] && exchange.in.headers.eventSubtype in 
['HCP_CHANGED']"USGRVprod-out-full-grv-allFlowsCreate/Update HCP/HCO/MCOGet EntityCreate RelationsClient software APIKafka connector" + }, + { + "title": "GRACE (GBLUS)", + "pageID": "164469962", + "pageLink": "/pages/viewpage.action?pageId=164469962", + "content": "ContactsJeffrey.D.LoVetere@COMPANY.comwilliam.nerbonne@COMPANY.comKalyan.Kanumuru@COMPANY.comBrigilin.Stanley@COMPANY.comACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicGRACE UsergraceOAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"- "LOOKUPS"US- "GRV"- "CENTRIS"- "ENGAGE"N/AFlowsGet EntityClient software API - read only" + }, + { + "title": "KOL_ONEVIEW (EMEA, AMER, APAC)", + "pageID": "164470136", + "pageLink": "/pages/viewpage.action?pageId=164470136", + "content": "ContactsDL-SFA-INF_Support_PforceOL@COMPANY.comSolanki, Hardik (US - Mumbai) Yagnamurthy, Maanasa (US - Hyderabad) ACLsEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_clientKOL-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL","BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI","CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK","DO","DZ","EC","EG","ES","ET","FI","FO","FR","GA","GB","GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN","IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW","LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ","MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM","PA","PE","PF","PL","PM","PT","PY","QA","RE","RU","RW","SA","SD","SE","SL","SM","SN","SV","SY","SZ","TD","TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE","YT","ZA","ZM","ZW"]GB- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal 
OAuth2KOL-MDM-PFORCEOL_clientKOL-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL","BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI","CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK","DO","DZ","EC","EG","ES","ET","FO","FR","GA","GB","GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN","IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW","LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ","MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM","PA","PE","PF","PL","PM","PT","PY","QA","RE","RU","RW","SA","SD","SL","SM","SN","SV","SY","SZ","TD","TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE","YT","ZA","ZM","ZW"]GB- "KOL_OneView"AMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AR","BR","CA","MX","UY"]CA- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AR","BR","CA","MX","UY"]CA- "KOL_OneView"APACNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AU","IN","KR","NZ","JP"]JP- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AU","IN","KR","NZ","JP"]JP- "KOL_OneView"KafkaEMEAEnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsemea-prodKol_oneviewkol_oneview"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_ONEVIEW') && exchange.in.headers.eventType in ['full'] && 
['KOL_OneView'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP', 'HCO'] && exchange.in.headers.country in ['ie', 'gb']"-${env}-out-full-koloneview-all3emea-devKol_oneviewkol_oneview-${env}-out-full-koloneview-all3emea-qaKol_oneviewkol_oneview-${env}-out-full-koloneview-all3emea-stageKol_oneviewkol_oneview-${env}-out-full-koloneview-all3AMEREnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsgblus-prodKol_oneviewkol_oneview"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_OneView') && exchange.in.headers.eventType in ['full'] && ['KOL_OneView'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP', 'HCO']"-${env}-out-full-koloneview-all3gblus-devKol_oneviewkol_oneview-${env}-out-full-koloneview-all3gblus-qaKol_oneviewkol_oneview-${env}-out-full-koloneview-all3gblus-stageKol_oneviewkol_oneview-${env}-out-full-koloneview-all3" + }, + { + "title": "GRV (EMEA, AMER)", + "pageID": "164470150", + "pageLink": "/pages/viewpage.action?pageId=164470150", + "content": "ContactsTODOGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRV user (NPROD)grvExternal OAuth2GRV-MDM_client- GET_ENTITIES- LOOKUPS- VALIDATE_HCP["CA"]GBGRVN/AGRV user (PROD)grvExternal OAuth2GRV-MDM_client- GET_ENTITIES- LOOKUPS- VALIDATE_HCP["CA"]GBGRVN/AAMER(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRV user (NPROD)grvExternal OAuth2GRV-MDM_client["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]["US"]GRVN/AGRV user (PROD)grvExternal OAuth2GRV-MDM_client["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]["US"]GRVN/AKafkaAMEREnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsgblus-prodGrvgrv"(exchange.in.headers.reconciliationTarget==null) && exchange.in.headers.eventType in ['full_not_trimmed'] && 
['GRV'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP'] && exchange.in.headers.eventSubtype in ['HCP_CHANGED']"- ${env}-out-full-grv-allgblus-devGrvgrv- ${local_env}-out-full-grv-allgblus-qaGrvgrv- ${local_env}-out-full-grv-allgblus-stageGrv grv- ${local_env}-out-full-grv-all" + }, + { + "title": "GANT (Global, EMEA, AMER, APAC)", + "pageID": "164470148", + "pageLink": "/pages/viewpage.action?pageId=164470148", + "content": "ContactsNadpolla, Gangadhar (Gangadhar.Nadpolla@COMPANY.com)GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT UsergantExternal OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["AD", "AG", "AI", "AM", "AN","AR", "AT", "AU", "AW", "BA","BB", "BE", "BG", "BL", "BM","BO", "BQ", "BR", "BS", "BY","BZ", "CA", "CH", "CL", "CN","CO", "CP", "CR", "CW", "CY","CZ", "DE", "DK", "DO", "DZ","EC", "EE", "EG", "ES", "FI","FO", "FR", "GB", "GF", "GP","GR", "GT", "GY", "HK", "HN","HR", "HU", "ID", "IE", "IL","IN", "IT", "JM", "JP", "KR","KY", "KZ", "LC", "LT", "LU","LV", "MA", "MC", "MF", "MQ","MU", "MX", "MY", "NC", "NI","NL", "NO", "NZ", "PA", "PE","PF", "PH", "PK", "PL", "PM","PN", "PT", "PY", "RE", "RO","RS", "RU", "SA", "SE", "SG","SI", "SK", "SV", "SX", "TF","TH", "TN", "TR", "TT", "TW","UA", "UY", "VE", "VG", "VN","WF", "XX", "YT", "ZA"]GBGRVN/AAMERAction RequiredUser configurationPingFederate UsernameGANT-MDM_clientCountriesBrazilTenantAMEREnvironments (PROD/NON-PROD/ALL)ALLAPI Servicesext-api-gw-amer-stage/entities,  ext-api-gw-amer-stage/lookups.SourcesONEKEY,CRMMI,MAPPBusiness JustificationWe are fetching HCP data from the MDM COMPANY instance; earlier it was the MDM IQVIA instanceNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT UsergantExternal OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["BR"]BR- ONEKEY- CRMMI- MAPPN/AAPACAction RequiredUser configurationPingFederate 
UsernameGANT-MDM_clientCountriesIndiaTenantAPACEnvironments (PROD/NON-PROD/ALL)ALLAPI Servicesext-api-gw-apac-stage/entities,  ext-api-gw-apac-stage/lookups.SourcesONEKEY,CRMMI,MAPPBusiness JustificationWe are fetching HCP data from the MDM COMPANY instance; earlier it was the MDM IQVIA instanceNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT UsergantExternal OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["IN"]IN- ONEKEY- CRMMI- MAPPN/A" + }, + { + "title": "Medic (EMEA, AMER, APAC)", + "pageID": "164470140", + "pageLink": "/pages/viewpage.action?pageId=164470140", + "content": "ContactsDL-F&BO-MEDIC@COMPANY.comGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicMedic user (NPROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IE["MEDIC"]Medic user (PROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IE["MEDIC"]AMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicMedic  user (NPROD)medicExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 
1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]Medic user (PROD)medicExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]APACNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicMedic user (NPROD)medicExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IN["MEDIC"]Medic user (PROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IN["MEDIC"]" + }, + { + "title": "PTRS (EMEA, AMER, APAC)", + "pageID": "164470165", + "pageLink": "/pages/viewpage.action?pageId=164470165", + "content": "RequirementsEnvPublisher routing ruleTopicemea-prod(ptrs-eu)"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_RECONCILIATION') && 
exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt'] && exchange.in.headers.objectType in ['HCP', 'HCO']"01/Mar/23 4:14 AM[10:13 AM] Shanbhag, BhushanOkay in that case we want Turkey market's events to come from emea-prod-out-full-ptrs-global2 topic only. ${env}-out-full-ptrs-euemea prod and nprodsAdding MC and AD to out-full-ptrs-eu15/05/2023Sagar: Hi Karol,Can you please add the below countries for France to the country configuration list for FRANCE EMEA Topics (Prod, Stage QA & Dev)1. Monaco2. Andorra\n MR-6236\n ${env}-out-full-ptrs-euContactsAPI: Prapti.Nanda@COMPANY.com;Varun.ArunKumar@COMPANY.comKafka: Sagar.Bodala@COMPANY.comGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR","BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF","GP","GT","GY","HN","ID","IL","JM","KY","LC","MF","MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM","PN","PT","PY","RE","SV","SX","TF","TR","TT","UY","VE","VG","WF","YT"]["PTRS"]PTRS user (PROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR","BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF","GP","GT","GY","HN","ID","IL","JM","KY","LC","MF","MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM","PN","PT","PY","RE","SV","SX","TF","TR","TT","UY","VE","VG","WF","YT"]["PTRS"]AMER(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["MX","BR"]["PTRS"]PTRS user (PROD)ptrsExternal 
OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["MX","BR"]["PTRS"]APAC(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal OAuth2PTRS_RELTIO_ClientPTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]["ID","JP","PH"]["VOC","PTRS"]PTRS user (PROD)ptrsExternal OAuth2PTRS_RELTIO_ClientPTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]["JP"]["VOC","PTRS"]KafkaEMEAEnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsemea-prod(ptrs-eu)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_RECONCILIATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt', 'ad', 'mc'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-eu3emea-prod (ptrs-global2)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-global23emea-dev (ptrs-global2)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-global23emea-qa (ptrs-eu)Ptrsptrsemea-dev-ptrs-eu"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 
'HCO']"${env}-out-full-ptrs-eu3emea-qa (ptrs-global2)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-global23emea-stage (ptrs-eu)Ptrsptrsemea-stage-ptrs-eu"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'pt', 'id', 'tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-eu3emea-stage (ptrs-global2)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-global23AMEREnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsamer-prod(ptrs-amer)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['mx', 'br'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-amer3amer-dev (ptrs-amer)Ptrsptrsamer-dev-ptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['mx', 'br'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-amer3amer-qa (ptrs-amer)Ptrsptrsamer-qa-ptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION') && 
exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['mx', 'br'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-amer3amer-stage (ptrs-amer)Ptrsptrsamer-stage-ptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['mx', 'br'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-amer3APACEnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsapac-dev (ptrs-apac)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['pk'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-apacapac-qa (ptrs-apac)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['pk'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-apacapac-stage (ptrs-apac)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['pk'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-apacGBLEnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsgbl-prodPtrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['co', 'mx', 'br', 'ph'] && exchange.in.headers.objectType in ['HCP', 'HCO']"- ${env}-out-full-ptrsgbl-prod (ptrs-eu)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || 
exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-eugbl-prod (ptrs-porind)Ptrsptrsexchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['id', 'pt'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"${env}-out-full-ptrs-porindgbl-devPtrsptrs"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['co', 'mx', 'br', 'ph', 'cl', 'tr'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_REGENERATION')"- ${env}-out-full-ptrs20gbl-dev (ptrs-eu)Ptrsptrsptrs_nprod"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')"- ${env}-out-full-ptrs-eugbl-dev (ptrs-porind)Ptrsptrs"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['id', 'pt'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"- ${env}-out-full-ptrs-porindgbl-qa (ptrs-eu)Ptrsptrs"exchange.in.headers.eventType in ['full'] && 
exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && (exchange.in.headers.reconciliationTarget==null)"- ${env}-out-full-ptrs-eu20gbl-stagePtrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_LATAM') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['co', 'mx', 'br', 'ph', 'cl','tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"- ${env}-out-full-ptrsgbl-stage (ptrs-eu)Ptrsptrsptrs_nprod"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO']"- ${env}-out-full-ptrs-eugbl-stage (ptrs-porind)Ptrsptrs"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['id', 'pt'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"- ${env}-out-full-ptrs-porind" + }, + { + "title": "OneMed (EMEA)", + "pageID": "164470163", + "pageLink": "/pages/viewpage.action?pageId=164470163", + "content": "ContactsMarsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.comGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicOneMed user (NPROD)onemedExternal OAuth2ONEMED-MDM_client["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE","IL","IN","IT","JP","MX","NZ","PL","SA","TR"]IE["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP","MDE","OK","Reltio","Rx_Audit"]OneMeduser (PROD)onemedExternal 
OAuth2ONEMED-MDM_client["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE","IL","IN","IT","JP","MX","NZ","PL","SA","TR"]IE["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP","MDE","OK","Reltio","Rx_Audit"]" + }, + { + "title": "GRACE (EMEA, AMER, APAC)", + "pageID": "164470161", + "pageLink": "/pages/viewpage.action?pageId=164470161", + "content": "ContactsDL-AIS-Mule-Integration-Support@COMPANY.comRequirementsPartial requirementsSent by Amish Adhvaryuaction neededNeed Plugin Configuration for below usernamesusernameGRACE MAVENS SFDC - DEV - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - DevGRACE MAVENS SFDC - STG - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - StageGRACE MAVENS SFDC - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - ProdcountriesAU,NZ,IN,JP,KR (APAC) and AR, UY, MX (AMER)tenantAPAC and AMERenvironments (prod/nonprods/all)ALLAPI services exposedHCP HCO MCO Search, LookupsSourcesGraceBusiness justificationClient ID used by GRACE application to search HCP and HCOsGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GD","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SR","SV","SX","TF","TH","TN","TR","TT","TW","UA","US","UY","VE","VG","VN","WF","XX","YT","ZA"]GB["NONE"]N/AGRACE UsergraceExternal 
OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GD","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SR","SV","SX","TF","TH","TN","TR","TT","TW","UA","US","UY","VE","VG","VN","WF","XX","YT"]GB["NONE"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal OAuth2 (all)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA","US","AR","UY","MX"]["NONE"]N/AExternal OAuth2 (amer-dev)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●External OAuth2 (gblus-stage)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●External OAuth2 (amer-stage)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●GRACE UsergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF","GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC","NZ","PF","PM","RE","SA","TR","US","UY"]["NONE"]N/AAPACNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal OAuth2 (all)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CA","HK","ID","IN","JP","KR","MX","MY","NZ","PH","PK","SG","TH","TW","US","UY","VN"]["NONE"]N/AExternal OAuth2 (apac-stageb469b84094724d74adb9ff7224588647GRACE UsergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF","GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC","NZ","PF","PM","RE","SA","TR","US","UY"]["NONE"]N/A" + }, + { + "title": "Snowflake (Global, GBLUS)", + 
"pageID": "164469783", + "pageLink": "/pages/viewpage.action?pageId=164469783", + "content": "ContactsNarayanan, Abhilash ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicSnowflake topicSnowflake TopicKafka JAASN/A(exchange.in.headers.eventType in ['full_not_trimmed'] && exchange.in.headers.objectType in ['HCP', 'HCO', 'MCO', 'RELATIONSHIP']) || (exchange.in.headers.eventType in ['simple'] && exchange.in.headers.objectType in ['ENTITY']) ALLALLprod-out-full-snowflake-allFlowsSnowflake participates in two flows:Snowflake: Events publish flowThe Event publisher pushes all events regarding entity/relation changes to the Kafka topic created for Snowflake ( ${env}-out-full-snowflake-all ). Then the Kafka Connect component pulls those events and loads them into a Snowflake table (flat model).ReconciliationThe main goal of the reconciliation process is to synchronise the Snowflake database with MongoDB. Snowflake periodically exports entities and creates a csv file with their identifiers and checksums. The file is sent to S3, from where it is then downloaded in the reconciliation process. This process compares the data in the file with the values stored in Mongo. A reconciliation event is created and posted on a Kafka topic in two cases: the checksum has changed, or the entity is missing from the csv file.Client software Kafka Connect is responsible for collecting Kafka events and loading them into the Snowflake database in a flat model.SOPsCurrently there are no SOPs for Snowflake." 
+ }, + { + "title": "Vaccine (GBLUS)", + "pageID": "164469863", + "pageLink": "/pages/viewpage.action?pageId=164469863", + "content": "ContactsVajapeyajula, Venkata Kalyan Ram BAVISHI, MONICA Duvvuri, Satya Garg, Nalini Shah, Himanshu FlowsFlowDescriptionSnowflake: Events publish flowEvents AUTO_LINK_FOUND and POTENTIAL_LINK_FOUND are published to SnowflakeSnowflake: Base tables refreshMATCHES table is refreshed (every 2 hours in prod) with those eventsSnowflake MDMMATCHES table is read by an ETL process implemented by COMPANY Team ETL BatchesThe ETL process creates relations like SAPtoHCOSAffiliations, FlextoDDDAffiliations, FlextoHCOSAffiliations through the Batch ChannelNotMatch CallbackFor created relations, the NotMatch callback is triggered and removes LINKS using NotMatch Reltio callsClient software Additional client links/software/description ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicDerivedAffiliations Batch Load userderivedaffiliations_loadN/AN/A- "CREATE_RELATION"- "UPDATE_RELATION"- US*" + }, + { + "title": "ICUE (AMER)", + "pageID": "172301085", + "pageLink": "/pages/viewpage.action?pageId=172301085", + "content": "ContactsBrahma, Bagmita Solanki, Hardik Tikyani, Devesh GatewayAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicICUE user (NPROD)icueExternal OAuth2ICUE-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]["US"]["ICUE"]consumer: regex: - "^.*-out-full-icue-all$" - "^.*-out-full-icue-grv-all$"groups: - icue_dev - icue_qa - icue_stage - dev_icue_grv - qa_icue_grv - stage_icue_grvICUE user (PROD)icueExternal OAuth2ICUE-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]["US"]["ICUE"]consumer: regex: - "^.*-out-full-icue-all$" - "^.*-out-full-icue-grv-all$"groups: - icue_prod - prod_icue_grvKafkaGBLUS (icue-grv-mule)NameKafka 
UsernameConsumergroupPublisher routing ruleTopicPartitionsicue - DEVicue_nprod"exchange.in.headers.eventType in ['full_not_trimmed'] && exchange.in.headers.objectType in ['HCP'] && ['GRV'].intersect(exchange.in.headers.eventSource) && !(['ICUE'].intersect(exchange.in.headers.eventSource)) && exchange.in.headers.eventSubtype in ['HCP_CREATED', 'HCP_CHANGED']"${local_env}-out-full-icue-grv-all"icue - QAicue_nprod${local_env}-out-full-icue-grv-allicue - STAGEicue_nprod${local_env}-out-full-icue-grv-allicue  - PRODicuex_prod${env}-out-full-icue-grv-allFlowsCreate/Update HCP/HCO/MCOGet EntityCreate RelationsClient software APIKafka connector" + }, + { + "title": "ESAMPLES (GBLUS)", + "pageID": "172301089", + "pageLink": "/pages/viewpage.action?pageId=172301089", + "content": "ContactsAdhvaryu, Amish Jain, Somya Bablani, Vijay Reynolds, Lori ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicMuleSoft - esamples useresamplesOAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"USall_sourcesN/AFlowsGet EntityClient software API - read only" + }, + { + "title": "VEEVA_FIELD (EMEA, AMER)", + "pageID": "172301091", + "pageLink": "/pages/viewpage.action?pageId=172301091", + "content": "ContactsAdhvaryu, Amish Fani, Chris GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicVEEVA_FIELD user (NPROD)veeva_fieldExternal 
OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UY","VE","VG","VN","WF","XX","YT"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AVEEVA_FIELD user (PROD)veeva_fieldExternal 
OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UY","VE","VG","VN","WF","XX","YT"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicVEEVA_FIELD   user (NPROD)veeva_fieldExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AExternal 
OAuth2(GBLUS-STAGE)55062bae02364c7598bc3ffbfe38e07bVEEVA_FIELD user (PROD)veeva_fieldExternal OAuth2 (ALL)67b77aa7ecf045539237af0dec890e59726b6d341f994412a998a3e32fdec17a["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AFlowsGet EntityClient software API - read only" + }, + { + "title": "PFORCEOL (EMEA, AMER, APAC)", + "pageID": "172301093", + "pageLink": "/pages/viewpage.action?pageId=172301093", + "content": "ContactsAdhvaryu, Amish Fani, Chris RequirementsPartial requirementsSent by Amish AdhvaryuPforceOL Dev - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PforceOL Stage - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PforceOL Prod - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● PT RO DK BR IL TR GR NO CA JP MX AT AR RU KR DE PL AU HK IN MY PH SG TW TH ES CZ LT UA VN ID KZ HU SK UK SE FI CH SA EG MA ZA BE NL IT DZ CO NZ PE CL EE HR LV RS TN US CN SI FR BG IR WA PKNew Requirements - October 2024Action neededNeed Access to PFORCEOL - DEV, PFORCEOL - QA, PFORCEOL - STG, PFORCEOL - PRODPingFederate usernameDEV & QA: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●STG: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PROD: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●CountriesAC, AE, AG, AI, AR, AT, AU, AW, BB, BE, BH, BM, BR, BS, BZ, CA, CH, CN, CO, CR, CU, CW, CY, CZ, DE, DK, DM, DO, DZ, EG, ES, FI, FK, FR, GB, GD, GF, GP, GR, GT, GY, HK, HN, HT, ID, IE, IL, IN, IT, JM, JP, KN, KR, KW, KY, LC, LU, MF, MQ, MS, MX, MY, NI, NL, NO, NZ, OM, PA, PH, PL, PT, QA, RO, SA, SE, SG, 
SK, SR, SV, SX, TC, TH, TR, TT, TW, UE, UK, US, VC, VG, VN, YE, ZAAJ: "Keep the other countries for now"Full list:AC, AD, AE, AG, AI, AM, AN, AR, AT, AU, AW, BA, BB, BE, BG, BH, BL, BM, BO, BQ, BR, BS, BY, BZ, CA, CH, CL, CN, CO, CP, CR, CU, CW, CY, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, FK, FO, FR, GB, GD, GF, GL, GP, GR, GT, GY, HK, HN, HR, HT, HU, ID, IE, IL, IN, IR, IT, JM, JP, KN, KR, KW, KY, KZ, LC, LT, LU, LV, MA, MC, MF, MQ, MS, MU, MX, MY, NC, NI, NL, NO, NZ, OM, PA, PE, PF, PH, PK, PL, PM, PN, PT, PY, QA, RE, RO, RS, RU, SA, SE, SG, SI, SK, SR, SV, SX, TC, TF, TH, TN, TR, TT, TW, UA, UE, UK, US, UY, VC, VE, VG, VN, WA, WF, XX, YE, YT, ZATenantAMER, EMEA, APAC, US, EX-USEnvironmentsDEV, QA, STG, PRODPermissions rangeRead access for HCP Search and HCO Search and MCO SearchSourcesSources that are configured in OneMed:MAPP, ONEKEY,OK, PFORCERX_ODS, PFORCERX, VOD, LEGACY_SFA_IDL, PTRS, JPDWH, iCUE, IQVIA_DDD, DCR_SYNC, MDE, MEDPAGESHCP, MEDPAGESHCOBusiness justificationThese changes are required as part of OneMed 2.0 Transformation Project. 
This project is responsible to ensure an improvised system due to which the proposed changes will help the OneMed technical team to build a better solution to search for HCP/HCO data within MDM system through API integration.Point of contactAnvesh (anveshvedula.chalapati@COMPANY.com), Aparna (aparna.balakrishna@COMPANY.com)Excel sheet with countries: GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPFORCEOL user (NPROD)pforceolExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IR","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","false","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UK","US","UY","VE","VG","VN","WA","WF","XX","YT","ZA"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/APFORCEOL user (PROD)pforceolExternal OAuth2- 
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IR","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","false","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UK","UY","VE","VG","VN","WA","WF","XX","YT","ZA"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPFORCEOL  user (NPROD)pforceolExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AExternal 
OAuth2(GBLUS-STAGE)223ca6b37aef4168afaa35aa2cf39a3ePFORCEOL user (PROD)pforceolExternal OAuth2 (ALL)e678c66c02c64b599b351e0ab02bae9fe6ece8da20284c6987ce3b8564fe9087["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AFlowsGet EntityClient software API - read only" + }, + { + "title": "1CKOL (Global)", + "pageID": "184688633", + "pageLink": "/pages/viewpage.action?pageId=184688633", + "content": "Contacts:Kucherov, Aleksei ; Moshin, Nikolay Old Contacts:Data load support:First Name: IlyaLast Name: EnkovichOffice:  ●●●●●●●●●●●●●●●●●●Mob: ●●●●●●●●●●●●●●●●●●Internet: www.unit-systems.ruE-mail: enkovich.i.s@unit-systems.ruBackup contact:First Name: SergeyLast Name: PortnovOffice: ●●●●●●●●●●●●●●●●●●Mob: ●●●●●●●●●●●●●●●●●●Internet: www.unit-systems.ruE-mail: portnov.s.a@unit-systems.ruFlows1CKOL has one batch process which consumes export files from data warehouse, process this, and loads data to MDM. 
This process is based on the incremental batch engine and runs on the Airflow platform.Input filesThe input files are delivered by 1CKOL to an AWS S3 bucket.MAPP Review - Europe - 1cKOL - All Documents (sharepoint.com)UATPRODS3 service accountsvc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3svc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3S3 Access key IDAKIATCTZXPPJXRNSDOGNAKIATCTZXPPJXRNSDOGNS3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectS3 Foldermdm/UAT/inbound/KOL/RU/mdm/inbound/KOL/RU/Input data file mask KOL_Extract_Russia_[0-9]+.zipKOL_Extract_Russia_[0-9]+.zipCompressionzipzipFormatFlat files, 1CKOL dedicated format Flat files, 1CKOL dedicated format ExampleKOL_Extract_Russia_07212021.zipKOL_Extract_Russia_07212021.zipSchedulenonenoneAirflow job inc_batch_eu_kol_ru_stage inc_batch_eu_kol_ru_prod Data mapping Data mapping is described in the attached document.ConfigurationFlow configuration is stored in the MDM Environment configuration repository. For each environment where the flow should be enabled, the configuration file inc_batch_eu_kol_ru.yml has to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_eu_kol_ru" has to be added to the "airflow_components" list, which is defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the inc_batch_eu_kol_ru.yml file for the UAT and PROD envs:inc_batch_eu_kol_ruUAThttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.ymlPRODhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.ymlApplying configuration changes is done by executing the Airflow components deployment procedure.SOPsThere is no particular SOP procedure for this flow. 
All common SOPs are described in the "Incremental batch flows: SOP" chapter." + }, + { + "title": "Snowflake MDM Data Mart", + "pageID": "164470197", + "pageLink": "/display/GMDM/Snowflake+MDM+Data+Mart", + "content": "This section describes the MDM Data Mart in Snowflake. The Data Mart contains MDM data from Reltio tenants published into Snowflake via MDM HUB.Roles, permissions, and warehouses used in the MDM Data Mart in Snowflake: NewMdmSfRoles_231017.xlsx" + }, + { + "title": "Connect Guide", + "pageID": "196886695", + "pageLink": "/display/GMDM/Connect+Guide", + "content": "How to add a user to the DATA Role: Users accessing Snowflake have to create a ticket and add themselves to the DATA role. This will allow the user to view the CUSTOMER_SL schema (the user access layer to Snowflake):Go to https://requestmanager.COMPANY.com/Click on the TOP: "Group Manager" - https://requestmanager1.COMPANY.com/Group/Default.aspxClick on the "Distribution Lists"Search for the group you want to be added to. Check the group name here: "List Of Groups With Access To The DataMart" In the search, write the "AD Group Name" for the selected SF Instance.Click Request AccessClick "Add Myself" and then save Go to "Cart" and click "Submit Request"How to connect to the DB:Go to the Environments view.Choose the Environment that you want to view:e.g. EMEA - EMEAChoose the NPROD or PROD environments, e.g. - EMEA STAGE ServicesOn this page go to the Snowflake MDM DataMartClick on the DB URL, e.g. 
- https://emeadev01.eu-west-1.privatelink.snowflakecomputing.comThe following page will open:Click "Sign in using COMPANY SSO"Open "New Worksheet"Choose:ROLE: WAREHOUSE:  COMM_MDM_DMART_WH                                          - this is based on the "Snowflake MDM DataMart" table - Default warehouse nameDATABASE:      COMM__MDM_DMART__DB          - this is based on the "Snowflake MDM DataMart" table - DB NameSCHEMA:        CUSTOMER_SLList Of Groups With Access To The DataMartSince October 2023NewMdmSfRoles_231017 1.xlsx[Expired Oct 2023] Groups that have access to CUSTOMER_SL schema:Role NameSF InstanceDB InstanceEnvAD Group NameCOMM_AMER_MDM_DMART_DEV_DATA_ROLEAMERAMERDEVsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_DEV_DATA_ROLECOMM_AMER_MDM_DMART_QA_DATA_ROLEAMERAMERQAsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_QA_DATA_ROLECOMM_AMER_MDM_DMART_STG_DATA_ROLEAMERAMERSTAGEsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_STG_DATA_ROLECOMM_AMER_MDM_DMART_PROD_DATA_ROLEAMERAMERPRODsfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLECOMM_MDM_DMART_DEV_DATA_ROLEAMERUSDEVsfdb_us-east-1_amerdev01_COMM_DEV_MDM_DMART_DATA_ROLECOMM_MDM_DMART_QA_DATA_ROLEAMERUSQAsfdb_us-east-1_amerdev01_COMM_QA_MDM_DMART_DATA_ROLECOMM_MDM_DMART_STG_DATA_ROLEAMERUSSTAGEsfdb_us-east-1_amerdev01_COMM_STG_MDM_DMART_DATA_ROLECOMM_MDM_DMART_PROD_DATA_ROLEAMERUSPRODsfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLECOMM_APAC_MDM_DMART_DEV_DATA_ROLEEMEAAPACDEVsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_DEV_DATA_ROLECOMM_APAC_MDM_DMART_QA_DATA_ROLEEMEAAPACQAsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_QA_DATA_ROLECOMM_APAC_MDM_DMART_STG_DATA_ROLEEMEAAPACSTAGEsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_STG_DATA_ROLECOMM_APAC_MDM_DMART_PROD_DATA_ROLEEMEAAPACPRODsfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLECOMM_EMEA_MDM_DMART_DEV_DATA_ROLEEMEAEMEADEVsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_DEV_DATA_ROLECOMM_EMEA_MDM_DMART_QA_DATA_ROLEEMEAEMEAQAsfdb_eu-west-1_emeadev01_COMM_E
MEA_MDM_DMART_QA_DATA_ROLECOMM_EMEA_MDM_DMART_STG_DATA_ROLEEMEAEMEASTAGEsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_STG_DATA_ROLECOMM_EMEA_MDM_DMART_PROD_DATA_ROLEEMEAEMEAPRODsfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLECOMM_MDM_DMART_DEV_DATA_ROLEEMEAEUDEVsfdb_eu-west-1_emeadev01_COMM_DEV_MDM_DMART_DATA_ROLECOMM_MDM_DMART_QA_DATA_ROLEEMEAEUQAsfdb_eu-west-1_emeadev01_COMM_QA_MDM_DMART_DATA_ROLECOMM_MDM_DMART_STG_DATA_ROLEEMEAEUSTAGEsfdb_eu-west-1_emeadev01_COMM_STG_MDM_DMART_DATA_ROLECOMM_MDM_DMART_PROD_DATA_ROLEEMEAEUPRODsfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLECOMM_GBL_MDM_DMART_DEV_DATA_ROLEEMEAGBLDEVsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_DATA_ROLECOMM_GBL_MDM_DMART_QA_DATA_ROLEEMEAGBLQAsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_DATA_ROLECOMM_GBL_MDM_DMART_STG_DATA_ROLEEMEAGBLSTAGEsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_DATA_ROLECOMM_GBL_MDM_DMART_PROD_DATA_ROLEEMEAGBLPRODsfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLE" + }, + { + "title": "Data model", + "pageID": "196886989", + "pageLink": "/display/GMDM/Data+model", + "content": "The data mart contains MDM data in object & relational data models. A fragment of the model is presented in the picture below. The object data model includes the latest version of the Reltio JSON documents representing entities, relationships, LOVs, and the merge tree. They are loaded into the ENTITIES, RELATIONS, LOV_DATA, MERGES, MATCHES tables. They are loaded from Reltio using the HUB streaming interface described here.The object model is transformed into the relational model by a set of dynamic views using the Snowflake JSON processing query language. Dynamic views are generated automatically from the Reltio data model. The regeneration process is maintained in Jenkins and triggered weekly or on demand.  
The generation process starts from root objects like HCP and HCO, walks through the JSON tree, and generates views with the following rules: for simple attributes like first name, a view column is generated in the current view; for nested attributes like addresses, a new view is generated, and the nested attribute uri plus the parent key from the parent view become the primary key in the new view; for lookup values like gender, the lookup id is generated.Model versionsThere are two versions of the Reltio data model maintained in the data mart:COMPANY Reltio data model - the current model maintained in all regional data marts that consume data from COMPANY Reltio regional instancesIQVIA Reltio data model - the legacy model from the first Reltio instance, maintained in the EU regional data mart that consumes data from Global Legacy Reltio (ex-us)Key generation strategyObject model:ObjectsKey columnsDescriptionENTITIES, MATCHES, MERGESentity_uri, country*Reltio entity unique identifier and countryRELATIONSrelation_uri, country*Reltio relationship unique identifier & countryLOV_DATAid, mdm_region*the concatenation of Reltio LOV name + ':' + canonical code as id & mdm region  * - only in global data martRelational model:ObjectsKey columnsDescriptionroot objects like HCP, HCO, MCO, MERGE_HISTORY, MATCH_HISTORYentity_uri, country*Reltio entity unique identifier and countryAFFILIATIONSrelation_uri, country*Reltio relationship unique identifier and countrychild views for nested attributes Addresses, Specialties ...parent view keys, nested attribute uri, country* parent view keys + nested attribute uri  + country  * - only in global data martSchemas:MDM Data Mart contains the following schemas:Schema nameDescriptionLANDINGSchemas used by HUB ETL processes as a staging areaCUSTOMERMain schema containing data mart data CUSTOMER_SLAccess schema to CUSTOMER schema dataAES_RS_SLContains views presenting data in the Redshift data model" + }, + { + "title": "AES_RS_SL", + "pageID": "203229895", + "pageLink": 
"/display/GMDM/AES_RS_SL", + "content": "The schema contains a set of views that mimic the MDM DataMart from Redshift. The views integrate both data models, COMPANY and IQVIA, and present data from all countries available in Reltio.Differences from the original Redshift martTechnical ids in the views keeping nested attribute values are different from the Redshift ones. They are based on Reltio attribute uris instead of the MDM checksum generated from attribute values.Foreign keys for code values to be joined with the dictionary table are also generated using a different strategy." + }, + { + "title": "CUSTOMER schema", + "pageID": "163919161", + "pageLink": "/display/GMDM/CUSTOMER+schema", + "content": "This is the main schema containing MDM data in two formats.The object model represents the Reltio JSON format. Data in this format are kept in the ENTITIES, RELATIONS, MERGE_TREE tables. The relational model is created as a set of views (standard or materialized) derived from the object model. Most of the views are generated in an automated way based on the Reltio Data Model configuration. They directly reflect the Reltio object model. 
There are two sets of views, as there are two models in Reltio: COMPANY and IQVIA. Those views can change dynamically as the Reltio config is updated." + }, + { + "title": "Customer base objects", + "pageID": "164470194", + "pageLink": "/display/GMDM/Customer+base+objects", + "content": "ENTITIESKeeps Reltio entity objectsColumnTypeDescriptionENTITY_URITEXTReltio entity uriCOUNTRYTEXTCountryENTITY_TYPETEXTEntity type, for example: HCO, HCPACTIVEBOOLEANActive flag CREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeOBJECTVARIANTJSON objectLAST_EVENT_TYPETEXTThe last event that updated the JSON objectLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTParent entity uriCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is emptyHIST_INACTIVE_ENTITIESUsed for historical inactive OneKey crosswalks. 
The structure is a copy of the ENTITIES table.ColumnTypeDescriptionENTITY_URITEXTReltio entity uriCOUNTRYTEXTCountryENTITY_TYPETEXTEntity type, for example: HCO, HCPACTIVEBOOLEANActive flag CREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeOBJECTVARIANTJSON objectLAST_EVENT_TYPETEXTThe last event that updated the JSON objectLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTParent entity uriCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is emptyRELATIONSKeeps Reltio relation objectsColumnTypeDescriptionRELATION_URITEXTReltio relation uriCOUNTRYTEXTCountryRELATION_TYPETEXTRelation typeACTIVEBOOLEANActive flagCREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeSTART_ENTITY_URITEXTSource entity uri END_ENTITY_URITEXTTarget entity uriOBJECTVARIANTJSON object LAST_EVENT_TYPETEXTThe last event type that modified the recordLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTnot usedCHECKSUMNUMBERChecksumMATCHESThe table presents active and historical matches found in Reltio for all entities.ColumnTypeDescriptionENTITY_URITEXTReltio entity uriTARGET_ENTITY_URITEXTReltio entity uri to which ENTITY_URI matchesMATCH_TYPETEXTMatch typeMATCH_RULE_NAMETEXTMatch rule nameCOUNTRYTEXTCountryLAST_EVENT_TYPETEXTThe last event type that modified the recordLAST_EVENT_TIMETIMESTAMP_LTZLast event timeLAST_EVENT_CHECKSUMNUMBERThe last event checksumACTIVEBOOLEANActive flagMATCH_HISTORYThe view shows match history for active and inactive matches enriched by merge data. The merge info is available for matches that were inactivated by the merge action triggered by users or Reltio background processes.  
ColumnTypeDescriptionENTITY_URITEXTReltio entity uriTARGET_ENTITY_URITEXTReltio entity uri to which ENTITY_URI matchesMATCH_TYPETEXTMatch typeMATCH_RULE_NAMETEXTMatch rule nameCOUNTRYTEXTCountryLAST_EVENT_TYPETEXTThe last event type that modified the recordLAST_EVENT_TIMETIMESTAMP_LTZLast event timeLAST_EVENT_CHECKSUMNUMBERThe last event checksumACTIVEBOOLEANActive flagMERGEDBOOLEANMerge indicator, the true value indicates that the merge happened for the match.MERGE_REASONTEXT Merge reason MERGE_USERTEXTReltio user name or process name that executed the mergeMERGE_DATETO_TIMESTAMP_LTZMerge date MERGE_RULETEXTMerge rule that triggered the mergeMERGESThe table presents active merges found in Reltio based on the merge_tree export.ColumnTypeDescriptionENTITY_URITEXTReltio entity uriLAST_UPDATE_TIMETO_TIMESTAMP_LTZDate of the last update on the selected rowCREATE_TIMETO_TIMESTAMP_LTZCreation date on the selected rowOBJECTVARIANTJSON object MERGE_HISTORYThe view shows merge history for active entities. The merge history view is built based on the merge_tree Reltio export. ColumnTypeDescriptionENTITY_URITEXTReltio entity uriLOSER_ENTITY_URITEXTReltio entity uri for the merge loserMERGE_REASONTEXT Merge reason Merge on the flyThis indicates automatic match rules were able to find matches for a newly added entity. Therefore, the new entity was not created as a separate entity in the platform but was merged into an existing one instead.Merge by crosswalksIf a newly added entity has the same crosswalk as that of an existing entity in the platform, such entities are merged automatically on the fly because the Reltio platform does not allow multiple entities with the same crosswalk.Automatic merge by crosswalksSometimes, two entities with the same crosswalk may exist in the platform (simultaneously added entities). 
In this case, such entities are merged automatically using a special background thread.Group merge (Matches found on object creation)This indicates that several entities are grouped into one merge request because all such entities will be merged at the same time to create a single entity in the platform. The reason for a group merge can be an automatic match rule or same crosswalk or both.Merges found by background merge processThe background match thread (incremental match processor) modifies entities as a result of create/change/remove events and performs a rematch. During the rematch, if some entities match using the automatic match rules, such entities are merged.Merge by handThis is a merge performed by a user through the API or from the UI by going through the potential matches.MERGE_RULETEXTMerge rule that triggered the mergeUSERTEXTUser name which executed the mergeMERGE_DATETO_TIMESTAMP_LTZMerge date ENTITY_HISTORYKeeps event history for entities and relationsColumnTypeDescriptionEVENT_KEYTEXTEvent keyEVENT_PARTITIONNUMBERPartition number in KafkaEVENT_OFFSETNUMBEROffset in KafkaEVENT_TOPICTEXTName of the topic in Kafka where this event is storedEVENT_TIMETIMESTAMP_LTZTimestamp when the event was generatedEVENT_TYPETEXTEvent typeCOUNTRYTEXTCountryENTITY_URITEXTReltio entity uriCHECKSUMNUMBERChecksumLOV_DATAKeeps LOV objectsColumnTypeDescriptionIDTEXTLOV identifier OBJECTVARIANTReltio RDM object in JSON formatCODESColumnTypeDescriptionSOURCETEXTSource MDM system nameCODE_IDTEXTCode id - generated by concatenated LOV name and canonical codeCANONICAL_CODETEXTCanonical codeLOV_NAMETEXTLOV (Dictionary) nameACTIVEBOOLEANActive flagDESCTEXTEnglish descriptionCOUNTRYTEXTCode countryPARENTSTEXTParent code idCODE_TRANSLATIONSRDM code translationsColumnTypeDescriptionSOURCETEXTSource MDM system nameCODE_IDTEXTCode idCANONICAL_CODETEXTCanonical codeLOV_NAMETEXTLOV (Dictionary) nameACTIVEBOOLEANActive flagLANG_CODETEXTLanguage codeLAND_DESCTEXTLanguage 
descriptionCOUNTRYTEXTCountryCODE_SOURCE_MAPPINGSSource code mappings to canonical codes in Reltio RDMColumnTypeDescriptionSOURCETEXTSource MDM system nameCODE_IDTEXTCode idSOURCE_NAMETEXTSource nameSOURCE_CODETEXTSource codeACTIVEBOOLEANActve flag (true - active, false - inactive)IS_CANONICALBOOLEANIs canonicalCOUNTRYTEXTCountryLAST_MODIFIEDTIMESTAMP_LTZLast modified datePARENTTEXTParent codeENTITY_CROSSWALKSKeeps entity crosswalksColumnTypeDescriptionCROSSWALK_URITEXTCrosswalk uriENTITY_URITEXTEntity uriENTITY_TYPETEXTEntity typeACTIVEBOOLEANActive flagTYPETEXTCrosswalk typeVALUETEXTCrosswalk valueSOURCE_TABLETEXTSource tableCREATE_DATETIMESTAMP_NTZCreate dateUPDATE_DATETIMESTAMP_NTZUpdate dateRELTIO_LOAD_DATETIMESTAMP_NTZDate when this crosswalk was loaded to ReltioDELETE_DATETIMESTAMP_NTZDelete dateCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdRELATION_CROSSWALKSKeeps relations crosswalksColumnTypeDescriptionCROSSWALK_URITEXTCrosswalk URIRELATION_URITEXTRelation URIRELATION_TYPETEXTRelation typeACTIVEBOOLEANActive flagTYPETEXTCrosswalk typeVALUETEXTCrosswalk valueSOURCE_TABLETEXTSource tableCREATE_DATETIMESTAMP_NTZCreate dateUPDATE_DATETIMESTAMP_NTZUpdate dateDELETE_DATETIMESTAMP_NTZDelete dateRELTIO_LOAD_DATETIMESTAMP_NTZDate when this relation was loaded to ReltioATTRIBUTE_SOURCEPresents information about what crosswalk provided the given attribute. 
The view can be joined with views for nested attributes to also get the attribute values.ColumnTypeDescriptionATTRIBUTE_URITEXTAttribute URIENTITY_URITEXTEntity URIACTIVEBOOLEANIs entity activeTYPETEXTCrosswalk typeVALUETEXTCrosswalk valueSOURCE_TABLETEXTCrosswalk source tableENTITY_UPDATE_DATESPresents information about update dates of entities in Reltio MDM or Snowflake.The view can be used to query updated records in a period of time, including root objects like HCP, HCO, MCO, and child objects like IDENTIFIERS, SPECIALTIES, ADDRESSES, etc.ColumnTypeDescriptionENTITY_URITEXTEntity URIACTIVEBOOLEANIs entity activeENTITY_TYPETEXTType of entityCOUNTRYTEXTCountry iso codeMDM_CREATE_TIMETIMESTAMP_LTZEntity create time in ReltioMDM_UPDATE_TIMETIMESTAMP_LTZEntity update time in ReltioSF_CREATE_TIMETIMESTAMP_LTZEntity create time in Snowflake DBSF_UPDATE_TIMETIMESTAMP_LTZEntity last update time in SnowflakeLAST_EVENT_TIMETIMESTAMP_LTZLast KAFKA event timestampCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is emptyRELATION_UPDATE_DATESPresents information about update dates of relations in Reltio MDM or Snowflake.The view can be used to query all updated entries in a period of time from AFFILIATIONS and child objects like AFFIL_RELATION_TYPEColumnTypeDescriptionRELATION_URITEXTRelation URIACTIVEBOOLEANIs relation activeRELATION_TYPETEXTType of relationCOUNTRYTEXTCountry iso codeMDM_CREATE_TIMETIMESTAMP_LTZRelation create time in ReltioMDM_UPDATE_TIMETIMESTAMP_LTZRelation update time in ReltioSF_CREATE_TIMETIMESTAMP_LTZRelation create time in Snowflake DBSF_UPDATE_TIMETIMESTAMP_LTZRelation last update time in SnowflakeLAST_EVENT_TIMETIMESTAMP_LTZLast KAFKA event timestampCHECKSUMNUMBERChecksum" + }, + { + "title": "Data Materialization Process", + "pageID": "347657026", + "pageLink": "/display/GMDM/Data+Materialization+Process", + "content": 
"" + }, + { + "title": "Dynamic views for IQVIA MDM Model", + "pageID": "164470213", + "pageLink": "/display/GMDM/Dynamic+views++for+IQVIA+MDM+Model", + "content": "HCPHealth care providerReltio URI: configuration/entityTypes/HCPMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeFIRST_NAMEVARCHARFirst Nameconfiguration/entityTypes/HCP/attributes/FirstNameLAST_NAMEVARCHARLast Nameconfiguration/entityTypes/HCP/attributes/LastNameMIDDLE_NAMEVARCHARMiddle Nameconfiguration/entityTypes/HCP/attributes/MiddleNameNAMEVARCHARNameconfiguration/entityTypes/HCP/attributes/NamePREFIXVARCHARconfiguration/entityTypes/HCP/attributes/PrefixLKUP_IMS_PREFIXSUFFIX_NAMEVARCHARGeneration Suffixconfiguration/entityTypes/HCP/attributes/SuffixNameLKUP_IMS_SUFFIXPREFERRED_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/PreferredNameNICKNAMEVARCHARconfiguration/entityTypes/HCP/attributes/NicknameCOUNTRY_CODEVARCHARCountry Codeconfiguration/entityTypes/HCP/attributes/CountryLKUP_IMS_COUNTRY_CODEGENDERVARCHARconfiguration/entityTypes/HCP/attributes/GenderLKUP_IMS_GENDERTYPE_CODEVARCHARType codeconfiguration/entityTypes/HCP/attributes/TypeCodeLKUP_IMS_HCP_CUST_TYPEACCOUNT_TYPEVARCHARAccount Typeconfiguration/entityTypes/HCP/attributes/AccountTypeSUB_TYPE_CODEVARCHARSub type codeconfiguration/entityTypes/HCP/attributes/SubTypeCodeLKUP_IMS_HCP_SUBTYPETITLEVARCHARconfiguration/entityTypes/HCP/attributes/TitleLKUP_IMS_PROF_TITLEINITIALSVARCHARInitialsconfiguration/entityTypes/HCP/attributes/InitialsD_O_BDATEDate of Birthconfiguration/entityTypes/HCP/attributes/DoBY_O_BVARCHARBirth 
Yearconfiguration/entityTypes/HCP/attributes/YoBMAPP_HCP_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/MAPPHcpStatusLKUP_MAPP_HCPSTATUSGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/GOStatusLKUP_GOVOFF_GOSTATUSPIGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/PIGOStatusLKUP_GOVOFF_PIGOSTATUSNIPPIGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/NIPPIGOStatusLKUP_GOVOFF_NIPPIGOSTATUSPRIMARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes/HCP/attributes/PrimaryPIGORationaleLKUP_GOVOFF_PIGORATIONALESECONDARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes/HCP/attributes/SecondaryPIGORationaleLKUP_GOVOFF_PIGORATIONALEPIGOSME_REVIEWVARCHARconfiguration/entityTypes/HCP/attributes/PIGOSMEReviewLKUP_GOVOFF_PIGOSMEREVIEWGSQ_DATEDATEGSQDateconfiguration/entityTypes/HCP/attributes/GSQDateMAPP_DO_NOT_USEVARCHARconfiguration/entityTypes/HCP/attributes/MAPPDoNotUseLKUP_GOVOFF_DONOTUSEMAPP_CHANGE_DATEVARCHARconfiguration/entityTypes/HCP/attributes/MAPPChangeDateMAPP_CHANGE_REASONVARCHARconfiguration/entityTypes/HCP/attributes/MAPPChangeReasonIS_EMPLOYEEBOOLEANconfiguration/entityTypes/HCP/attributes/IsEmployeeVALIDATION_STATUSVARCHARValidation Status of the Customerconfiguration/entityTypes/HCP/attributes/ValidationStatusLKUP_IMS_VAL_STATUSSOURCE_CHANGE_DATEDATESourceChangeDateconfiguration/entityTypes/HCP/attributes/SourceChangeDateSOURCE_CHANGE_REASONVARCHARSourceChangeReasonconfiguration/entityTypes/HCP/attributes/SourceChangeReasonORIGIN_SOURCEVARCHAROriginating Sourceconfiguration/entityTypes/HCP/attributes/OriginSourceOK_VR_TRIGGERVARCHARconfiguration/entityTypes/HCP/attributes/OK_VR_TriggerLKUP_IMS_SEND_FOR_VALIDATIONBIRTH_CITYVARCHARBirth Cityconfiguration/entityTypes/HCP/attributes/BirthCityBIRTH_STATEVARCHARBirth Stateconfiguration/entityTypes/HCP/attributes/BirthStateSTATE_CODEBIRTH_COUNTRYVARCHARBirth 
Countryconfiguration/entityTypes/HCP/attributes/BirthCountryCOUNTRY_CDD_O_DDATEconfiguration/entityTypes/HCP/attributes/DoDY_O_DVARCHARconfiguration/entityTypes/HCP/attributes/YoDTAX_IDVARCHARconfiguration/entityTypes/HCP/attributes/TaxIDSSN_LAST4VARCHARconfiguration/entityTypes/HCP/attributes/SSNLast4MEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/MENPIVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/NPIUPINVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/UPINKAISER_PROVIDERBOOLEANconfiguration/entityTypes/HCP/attributes/KaiserProviderMAJOR_PROFESSIONAL_ACTIVITYVARCHARconfiguration/entityTypes/HCP/attributes/MajorProfessionalActivityMPA_CDPRESENT_EMPLOYMENTVARCHARconfiguration/entityTypes/HCP/attributes/PresentEmploymentPE_CDTYPE_OF_PRACTICEVARCHARconfiguration/entityTypes/HCP/attributes/TypeOfPracticeTOP_CDSOLOBOOLEANconfiguration/entityTypes/HCP/attributes/SoloGROUPBOOLEANconfiguration/entityTypes/HCP/attributes/GroupADMINISTRATORBOOLEANconfiguration/entityTypes/HCP/attributes/AdministratorRESEARCHBOOLEANconfiguration/entityTypes/HCP/attributes/ResearchCLINICAL_TRIALSBOOLEANconfiguration/entityTypes/HCP/attributes/ClinicalTrialsWEBSITE_URLVARCHARconfiguration/entityTypes/HCP/attributes/WebsiteURLIMAGE_LINKSVARCHARconfiguration/entityTypes/HCP/attributes/ImageLinksDOCUMENT_LINKSVARCHARconfiguration/entityTypes/HCP/attributes/DocumentLinksVIDEO_LINKSVARCHARconfiguration/entityTypes/HCP/attributes/VideoLinksDESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/DescriptionCREDENTIALSVARCHARconfiguration/entityTypes/HCP/attributes/CredentialsCREDFORMER_FIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FormerFirstNameFORMER_LAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FormerLastNameFORMER_MIDDLE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FormerMiddleNameFORMER_SUFFIX_NAMEVARCHARconfiguration/entityTy
pes/HCP/attributes/FormerSuffixNameSSNVARCHARconfiguration/entityTypes/HCP/attributes/SSNPRESUMED_DEADBOOLEANconfiguration/entityTypes/HCP/attributes/PresumedDeadDEA_BUSINESS_ACTIVITYVARCHARconfiguration/entityTypes/HCP/attributes/DEABusinessActivitySTATUS_IMSVARCHARconfiguration/entityTypes/HCP/attributes/StatusIMSLKUP_IMS_STATUSSTATUS_UPDATE_DATEDATEconfiguration/entityTypes/HCP/attributes/StatusUpdateDateSTATUS_REASON_CODEVARCHARconfiguration/entityTypes/HCP/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODECOMMENTERSVARCHARCommentersconfiguration/entityTypes/HCP/attributes/CommentersSOURCE_CREATION_DATEDATEconfiguration/entityTypes/HCP/attributes/SourceCreationDateSOURCE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/SourceNameSUB_SOURCE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/SubSourceNameEXCLUDE_FROM_MATCHVARCHARconfiguration/entityTypes/HCP/attributes/ExcludeFromMatchPROVIDER_IDENTIFIER_TYPEVARCHARProvider Identifier Typeconfiguration/entityTypes/HCP/attributes/ProviderIdentifierTypeLKUP_IMS_PROVIDER_IDENTIFIER_TYPECATEGORYVARCHARCategory Codeconfiguration/entityTypes/HCP/attributes/CategoryLKUP_IMS_HCP_CATEGORYDEGREE_CODEVARCHARDegree Codeconfiguration/entityTypes/HCP/attributes/DegreeCodeLKUP_IMS_DEGREESALUTATION_NAMEVARCHARSalutation Nameconfiguration/entityTypes/HCP/attributes/SalutationNameIS_BLACK_LISTEDBOOLEANIndicates to Blacklist the profileconfiguration/entityTypes/HCP/attributes/IsBlackListedTRAINING_HOSPITALVARCHARTraining Hospitalconfiguration/entityTypes/HCP/attributes/TrainingHospitalACRONYM_NAMEVARCHARAcronymNameconfiguration/entityTypes/HCP/attributes/AcronymNameFIRST_SET_DATEDATEDate of 1st Installationconfiguration/entityTypes/HCP/attributes/FirstSetDateCREATE_DATEDATEIndividual Creation Dateconfiguration/entityTypes/HCP/attributes/CreateDateUPDATE_DATEDATEDate of Last Individual Updateconfiguration/entityTypes/HCP/attributes/UpdateDateCHECK_DATEDATEDate of Last Individual Quality 
Checkconfiguration/entityTypes/HCP/attributes/CheckDateSTATE_CODEVARCHARSituation of the healthcare professional (ex. Active, Inactive, Retired)configuration/entityTypes/HCP/attributes/StateCodeLKUP_IMS_PROFILE_STATESTATE_DATEDATEDate when state of the record was last modified.configuration/entityTypes/HCP/attributes/StateDateVALIDATION_CHANGE_REASONVARCHARReason for Validation Status changeconfiguration/entityTypes/HCP/attributes/ValidationChangeReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEDate of Validation changeconfiguration/entityTypes/HCP/attributes/ValidationChangeDateAPPOINTMENT_REQUIREDBOOLEANIndicates whether sales reps need to make an appointment to see the Professional.configuration/entityTypes/HCP/attributes/AppointmentRequiredNHS_STATUSVARCHARNational Health System Statusconfiguration/entityTypes/HCP/attributes/NHSStatusLKUP_IMS_SECTOR_OF_CARENUM_OF_PATIENTSVARCHARNumber of attached patientsconfiguration/entityTypes/HCP/attributes/NumOfPatientsPRACTICE_SIZEVARCHARPractice Sizeconfiguration/entityTypes/HCP/attributes/PracticeSizePATIENTS_X_DAYVARCHARPatients Per Dayconfiguration/entityTypes/HCP/attributes/PatientsXDayPREFERRED_LANGUAGEVARCHARPreferred Spoken Languageconfiguration/entityTypes/HCP/attributes/PreferredLanguagePOLITICAL_AFFILIATIONVARCHARPolitical Affiliationconfiguration/entityTypes/HCP/attributes/PoliticalAffiliationLKUP_IMS_POL_AFFILPRESCRIBING_LEVELVARCHARPrescribing Levelconfiguration/entityTypes/HCP/attributes/PrescribingLevelLKUP_IMS_PRES_LEVELEXTERNAL_RATINGVARCHARExternal Ratingconfiguration/entityTypes/HCP/attributes/ExternalRatingTARGETING_CLASSIFICATIONVARCHARTargeting Classificationconfiguration/entityTypes/HCP/attributes/TargetingClassificationKOL_TITLEVARCHARKey Opinion Leader Titleconfiguration/entityTypes/HCP/attributes/KOLTitleSAMPLING_STATUSVARCHARSampling Status of HCPconfiguration/entityTypes/HCP/attributes/SamplingStatusLKUP_IMS_SAMPLING_STATUSADMINISTRATIVE_NAMEVARCHARAdministrative 
Nameconfiguration/entityTypes/HCP/attributes/AdministrativeNamePROFESSIONAL_DESIGNATIONVARCHARconfiguration/entityTypes/HCP/attributes/ProfessionalDesignationLKUP_IMS_PROF_DESIGNATIONEXTERNAL_INFORMATION_URLVARCHARconfiguration/entityTypes/HCP/attributes/ExternalInformationURLMATCH_STATUS_CODEVARCHARconfiguration/entityTypes/HCP/attributes/MatchStatusCodeLKUP_IMS_MATCH_STATUS_CODESUBSCRIPTION_FLAG1BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag1SUBSCRIPTION_FLAG2BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag2SUBSCRIPTION_FLAG3BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag3SUBSCRIPTION_FLAG4BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag4SUBSCRIPTION_FLAG5BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag5SUBSCRIPTION_FLAG6BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag6SUBSCRIPTION_FLAG7BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag7SUBSCRIPTION_FLAG8BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag8SUBSCRIPTION_FLAG9BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag9SUBSCRIPTION_FLAG10BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag10MIDDLE_INITIALVARCHARMiddle Initial. 
This attribute is populated from Middle Nameconfiguration/entityTypes/HCP/attributes/MiddleInitialDELETE_ENTITYBOOLEANProperty for GDPR removingconfiguration/entityTypes/HCP/attributes/DeleteEntityPARTY_IDVARCHARconfiguration/entityTypes/HCP/attributes/PartyIDLAST_VERIFICATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/LastVerificationStatusLAST_VERIFICATION_DATEDATEconfiguration/entityTypes/HCP/attributes/LastVerificationDateEFFECTIVE_DATEDATEconfiguration/entityTypes/HCP/attributes/EffectiveDateEND_DATEDATEconfiguration/entityTypes/HCP/attributes/EndDatePARTY_LOCALIZATION_CODEVARCHARconfiguration/entityTypes/HCP/attributes/PartyLocalizationCodeMATCH_PARTY_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/MatchPartyNameLICENSEReltio URI: configuration/entityTypes/HCP/attributes/LicenseMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameLICENSE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCATEGORYVARCHARconfiguration/entityTypes/HCP/attributes/License/attributes/CategoryLKUP_IMS_LIC_CATEGORYNUMBERVARCHARState License INTEGER. A unique license INTEGER is listed for each license the physician holds. There is no standard format syntax. Format examples: 18986, 4301079019, BX1464089. There is also no limit to the INTEGER of licenses a physician can hold in a state. Example: A physician can have an inactive resident license plus unlimited active licenses. Residents can have as many as four licenses since some states issue licenses every yearconfiguration/entityTypes/HCP/attributes/License/attributes/NumberBOARD_EXTERNAL_IDVARCHARBoard External IDconfiguration/entityTypes/HCP/attributes/License/attributes/BoardExternalIDBOARD_CODEVARCHARState License Board Code. For AMA The board code will always be AMAconfiguration/entityTypes/HCP/attributes/License/attributes/BoardCodeSTLIC_BRD_CD_LOVSTATEVARCHARState License State. Two character field. 
USPS standard abbreviations.configuration/entityTypes/HCP/attributes/License/attributes/StateLKUP_IMS_STATE_CODEISO_COUNTRY_CODEVARCHARISO country codeconfiguration/entityTypes/HCP/attributes/License/attributes/ISOCountryCodeLKUP_IMS_COUNTRY_CODEDEGREEVARCHARState License Degree. A physician may hold more than one license in a given state. However, not more than one MD or more than one DO license in the same state.configuration/entityTypes/HCP/attributes/License/attributes/DegreeLKUP_IMS_DEGREEAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/entityTypes/HCP/attributes/License/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSLICENSE_NUMBER_KEYVARCHARState License Number Keyconfiguration/entityTypes/HCP/attributes/License/attributes/LicenseNumberKeyAUTHORITY_NAMEVARCHARAuthority Nameconfiguration/entityTypes/HCP/attributes/License/attributes/AuthorityNamePROFESSION_CODEVARCHARProfessionconfiguration/entityTypes/HCP/attributes/License/attributes/ProfessionCodeLKUP_IMS_PROFESSIONTYPE_IDVARCHARAuthorization Type idconfiguration/entityTypes/HCP/attributes/License/attributes/TypeIdTYPEVARCHARState License Type. U = Unlimited there is no restriction on the physician to practice medicine; L = Limited implies restrictions of some sort. For example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license in state or private hospitals or other settings; T = Temporary issued to a physician temporarily practicing in an underserved area outside his/her state of licensure. Also granted between board meetings when new licenses are issued. Time span for a temporary license varies from state to state. 
Temporary licenses typically expire 6-9 months from the date they are issued; R = Resident License granted to a physician in graduate medical education (e.g., residency training).configuration/entityTypes/HCP/attributes/License/attributes/TypeLKUP_IMS_LICENSE_TYPEPRIVILEGE_IDVARCHARLicense Privilegeconfiguration/entityTypes/HCP/attributes/License/attributes/PrivilegeIdPRIVILEGE_NAMEVARCHARLicense Privilege Nameconfiguration/entityTypes/HCP/attributes/License/attributes/PrivilegeNamePRIVILEGE_RANKVARCHARLicense Privilege Rankconfiguration/entityTypes/HCP/attributes/License/attributes/PrivilegeRankSTATUSVARCHARState License Status. A = Active. Physician is licensed to practice within the state; I = Inactive. If the physician has not reregistered a state license OR if the license has been suspended or revoked by the state board; X = unknown. If the state has not provided current information Note: Some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in another state.configuration/entityTypes/HCP/attributes/License/attributes/StatusLKUP_IMS_IDENTIFIER_STATUSDEACTIVATION_REASON_CODEVARCHARDeactivation Reason Codeconfiguration/entityTypes/HCP/attributes/License/attributes/DeactivationReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEEXPIRATION_DATEDATEconfiguration/entityTypes/HCP/attributes/License/attributes/ExpirationDateISSUE_DATEDATEState License Issue Dateconfiguration/entityTypes/HCP/attributes/License/attributes/IssueDateBRD_DATEDATEState License as of date or pull date. 
The as of date (or stamp date) is the date the current license file is provided to the Database Licensees.configuration/entityTypes/HCP/attributes/License/attributes/BrdDateSAMPLE_ELIGIBILITYVARCHARconfiguration/entityTypes/HCP/attributes/License/attributes/SampleEligibilitySOURCE_CDVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/License/attributes/SourceCDRANKVARCHARLicense Rankconfiguration/entityTypes/HCP/attributes/License/attributes/RankCERTIFICATIONVARCHARCertificationconfiguration/entityTypes/HCP/attributes/License/attributes/CertificationREQ_SAMPL_NON_CTRLVARCHARRequest Samples Non-Controlledconfiguration/entityTypes/HCP/attributes/License/attributes/ReqSamplNonCtrlREQ_SAMPL_CTRLVARCHARRequest Samples Controlledconfiguration/entityTypes/HCP/attributes/License/attributes/ReqSamplCtrlRECV_SAMPL_NON_CTRLVARCHARReceives Samples Non-Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/RecvSamplNonCtrlRECV_SAMPL_CTRLVARCHARReceives Samples Controlledconfiguration/entityTypes/HCP/attributes/License/attributes/RecvSamplCtrlDISTR_SAMPL_NON_CTRLVARCHARDistribute Samples Non-Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/DistrSamplNonCtrlDISTR_SAMPL_CTRLVARCHARDistribute Samples Controlledconfiguration/entityTypes/HCP/attributes/License/attributes/DistrSamplCtrlSAMP_DRUG_SCHED_I_FLAGVARCHARSample Drug Schedule I flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIFlagSAMP_DRUG_SCHED_II_FLAGVARCHARSample Drug Schedule II flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIIFlagSAMP_DRUG_SCHED_III_FLAGVARCHARSample Drug Schedule III flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIIIFlagSAMP_DRUG_SCHED_IV_FLAGVARCHARSample Drug Schedule IV flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIVFlagSAMP_DRUG_SCHED_V_FLAGVARCHARSample Drug Schedule V 
flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedVFlagSAMP_DRUG_SCHED_VI_FLAGVARCHARSample Drug Schedule VI flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedVIFlagPRESCR_NON_CTRL_FLAGVARCHARPrescribe Non-controlled flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrNonCtrlFlagPRESCR_APP_REQ_NON_CTRL_FLAGVARCHARPrescribe Application Request for Non-controlled Substances Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrAppReqNonCtrlFlagPRESCR_CTRL_FLAGVARCHARPrescribe Controlled flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrCtrlFlagPRESCR_APP_REQ_CTRL_FLAGVARCHARPrescribe Application Request for Controlled Substances Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrAppReqCtrlFlagPRESCR_DRUG_SCHED_I_FLAGVARCHARPrescrDrugSchedIFlagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIFlagPRESCR_DRUG_SCHED_II_FLAGVARCHARPrescribe Schedule II Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIIFlagPRESCR_DRUG_SCHED_III_FLAGVARCHARPrescribe Schedule III Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIIIFlagPRESCR_DRUG_SCHED_IV_FLAGVARCHARPrescribe Schedule IV Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIVFlagPRESCR_DRUG_SCHED_V_FLAGVARCHARPrescribe Schedule V Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedVFlagPRESCR_DRUG_SCHED_VI_FLAGVARCHARPrescribe Schedule VI Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedVIFlagSUPERVISORY_REL_CD_NON_CTRLVARCHARSupervisory Relationship for Non-Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/SupervisoryRelCdNonCtrlSUPERVISORY_REL_CD_CTRLVARCHARSupervisoryRelCdCtrlconfiguration/entityTypes/HCP/attributes/License/attributes/SupervisoryRelCdCtrlCOLLABORATIVE_NONCTRLVARCHARCollaboration for 
Non-Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/CollaborativeNonctrlCOLLABORATIVE_CTRLVARCHARCollaboration for Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/CollaborativeCtrlINCLUSIONARYVARCHARInclusionaryconfiguration/entityTypes/HCP/attributes/License/attributes/InclusionaryEXCLUSIONARYVARCHARExclusionaryconfiguration/entityTypes/HCP/attributes/License/attributes/ExclusionaryDELEGATION_NON_CTRLVARCHARDelegationNonCtrlconfiguration/entityTypes/HCP/attributes/License/attributes/DelegationNonCtrlDELEGATION_CTRLVARCHARDelegation for Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/DelegationCtrlDISCIPLINARY_ACTION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/License/attributes/DisciplinaryActionStatusADDRESSReltio URI: configuration/entityTypes/HCP/attributes/Address, configuration/entityTypes/HCO/attributes/AddressMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePRIMARY_AFFILIATIONVARCHARconfiguration/relationTypes/HasAddress/attributes/PrimaryAffiliation, configuration/relationTypes/HasAddress/attributes/PrimaryAffiliationLKUP_IMS_YES_NOSOURCE_ADDRESS_IDVARCHARconfiguration/relationTypes/HasAddress/attributes/SourceAddressID, configuration/relationTypes/HasAddress/attributes/SourceAddressIDADDRESS_TYPEVARCHARconfiguration/relationTypes/HasAddress/attributes/AddressType, configuration/relationTypes/HasAddress/attributes/AddressTypeLKUP_IMS_ADDR_TYPECARE_OFVARCHARconfiguration/relationTypes/HasAddress/attributes/CareOf, configuration/relationTypes/HasAddress/attributes/CareOfPRIMARYBOOLEANconfiguration/relationTypes/HasAddress/attributes/Primary, 
configuration/relationTypes/HasAddress/attributes/PrimaryADDRESS_RANKVARCHARconfiguration/relationTypes/HasAddress/attributes/AddressRank, configuration/relationTypes/HasAddress/attributes/AddressRankSOURCE_NAMEVARCHARconfiguration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceName, configuration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceNameSOURCE_LOCATION_IDVARCHARconfiguration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceLocationId, configuration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceLocationIdADDRESS_LINE1VARCHARconfiguration/entityTypes/Location/attributes/AddressLine1, configuration/entityTypes/Location/attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes/Location/attributes/AddressLine2, configuration/entityTypes/Location/attributes/AddressLine2ADDRESS_LINE3VARCHARAddressLine3configuration/entityTypes/Location/attributes/AddressLine3, configuration/entityTypes/Location/attributes/AddressLine3ADDRESS_LINE4VARCHARAddressLine4configuration/entityTypes/Location/attributes/AddressLine4, configuration/entityTypes/Location/attributes/AddressLine4PREMISEVARCHARconfiguration/entityTypes/Location/attributes/Premise, configuration/entityTypes/Location/attributes/PremiseSTREETVARCHARconfiguration/entityTypes/Location/attributes/Street, configuration/entityTypes/Location/attributes/StreetFLOORVARCHARN/Aconfiguration/entityTypes/Location/attributes/Floor, configuration/entityTypes/Location/attributes/FloorBUILDINGVARCHARN/Aconfiguration/entityTypes/Location/attributes/Building, configuration/entityTypes/Location/attributes/BuildingCITYVARCHARconfiguration/entityTypes/Location/attributes/City, configuration/entityTypes/Location/attributes/CitySTATE_PROVINCEVARCHARconfiguration/entityTypes/Location/attributes/StateProvince, 
configuration/entityTypes/Location/attributes/StateProvinceSTATE_PROVINCE_CODEVARCHARconfiguration/entityTypes/Location/attributes/StateProvinceCode, configuration/entityTypes/Location/attributes/StateProvinceCodeLKUP_IMS_STATE_CODEPOSTAL_CODEVARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/PostalCode, configuration/entityTypes/Location/attributes/Zip/attributes/PostalCodeZIP5VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip5, configuration/entityTypes/Location/attributes/Zip/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip4, configuration/entityTypes/Location/attributes/Zip/attributes/Zip4COUNTRYVARCHARconfiguration/entityTypes/Location/attributes/CountryLKUP_IMS_COUNTRY_CODECBSA_CODEVARCHARCore Based Statistical Areaconfiguration/entityTypes/Location/attributes/CBSACode, configuration/entityTypes/Location/attributes/CBSACodeCBSA_CDFIPS_COUNTY_CODEVARCHARFIPS county Codeconfiguration/entityTypes/Location/attributes/FIPSCountyCode, configuration/entityTypes/Location/attributes/FIPSCountyCodeFIPS_STATE_CODEVARCHARFIPS State Codeconfiguration/entityTypes/Location/attributes/FIPSStateCode, configuration/entityTypes/Location/attributes/FIPSStateCodeDPVVARCHARUSPS delivery point validation. 
R = Range Check; C = Clerk; F = Formally Valid; V = DPV Validconfiguration/entityTypes/Location/attributes/DPV, configuration/entityTypes/Location/attributes/DPVMSAVARCHARMetropolitan Statistical Area for a businessconfiguration/entityTypes/Location/attributes/MSA, configuration/entityTypes/Location/attributes/MSALATITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LatitudeLONGITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LongitudeGEO_ACCURACYVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracyGEO_CODING_SYSTEMVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystemADDRESS_INPUTVARCHARconfiguration/entityTypes/Location/attributes/AddressInput, configuration/entityTypes/Location/attributes/AddressInputSUB_ADMINISTRATIVE_AREAVARCHARThis field holds the smallest geographic data element within a country. For instance, USA County.configuration/entityTypes/Location/attributes/SubAdministrativeArea, configuration/entityTypes/Location/attributes/SubAdministrativeAreaPOSTAL_CITYVARCHARconfiguration/entityTypes/Location/attributes/PostalCity, configuration/entityTypes/Location/attributes/PostalCityLOCALITYVARCHARThis field holds the most common population center data element within a country. 
For instance, USA City, Canadian Municipality.configuration/entityTypes/Location/attributes/Locality, configuration/entityTypes/Location/attributes/LocalityVERIFICATION_STATUSVARCHARconfiguration/entityTypes/Location/attributes/VerificationStatus, configuration/entityTypes/Location/attributes/VerificationStatusSTATUS_CHANGE_DATEDATEStatus Change Dateconfiguration/entityTypes/Location/attributes/StatusChangeDate, configuration/entityTypes/Location/attributes/StatusChangeDateADDRESS_STATUSVARCHARStatus of the Addressconfiguration/entityTypes/Location/attributes/AddressStatus, configuration/entityTypes/Location/attributes/AddressStatusACTIVE_ADDRESSBOOLEANconfiguration/relationTypes/HasAddress/attributes/Active, configuration/relationTypes/HasAddress/attributes/ActiveLOC_CONF_INDVARCHARconfiguration/relationTypes/HasAddress/attributes/LocConfInd, configuration/relationTypes/HasAddress/attributes/LocConfIndLKUP_IMS_LOCATION_CONFIDENCEBEST_RECORDVARCHARconfiguration/relationTypes/HasAddress/attributes/BestRecord, configuration/relationTypes/HasAddress/attributes/BestRecordRELATION_STATUS_CHANGE_DATEDATEconfiguration/relationTypes/HasAddress/attributes/RelationStatusChangeDate, configuration/relationTypes/HasAddress/attributes/RelationStatusChangeDateVALIDATION_STATUSVARCHARValidation status of the Address. 
When Addresses are merged, the loser Address is set to INVL.configuration/relationTypes/HasAddress/attributes/ValidationStatus, configuration/relationTypes/HasAddress/attributes/ValidationStatusLKUP_IMS_VAL_STATUSSTATUSVARCHARconfiguration/relationTypes/HasAddress/attributes/Status, configuration/relationTypes/HasAddress/attributes/StatusLKUP_IMS_ADDR_STATUSHCO_NAMEVARCHARconfiguration/relationTypes/HasAddress/attributes/HcoName, configuration/relationTypes/HasAddress/attributes/HcoNameMAIN_HCO_NAMEVARCHARconfiguration/relationTypes/HasAddress/attributes/MainHcoName, configuration/relationTypes/HasAddress/attributes/MainHcoNameBUILD_LABELVARCHARconfiguration/relationTypes/HasAddress/attributes/BuildLabel, configuration/relationTypes/HasAddress/attributes/BuildLabelPO_BOXVARCHARconfiguration/relationTypes/HasAddress/attributes/POBox, configuration/relationTypes/HasAddress/attributes/POBoxVALIDATION_REASONVARCHARconfiguration/relationTypes/HasAddress/attributes/ValidationReason, configuration/relationTypes/HasAddress/attributes/ValidationReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEconfiguration/relationTypes/HasAddress/attributes/ValidationChangeDate, configuration/relationTypes/HasAddress/attributes/ValidationChangeDateSTATUS_REASON_CODEVARCHARconfiguration/relationTypes/HasAddress/attributes/StatusReasonCode, configuration/relationTypes/HasAddress/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEPRIMARY_MAILBOOLEANconfiguration/relationTypes/HasAddress/attributes/PrimaryMail, configuration/relationTypes/HasAddress/attributes/PrimaryMailVISIT_ACTIVITYVARCHARconfiguration/relationTypes/HasAddress/attributes/VisitActivity, configuration/relationTypes/HasAddress/attributes/VisitActivityDERIVED_ADDRESSVARCHARconfiguration/relationTypes/HasAddress/attributes/derivedAddress, configuration/relationTypes/HasAddress/attributes/derivedAddressNEIGHBORHOODVARCHARconfiguration/entityTypes/Location/attributes/Neighborhood, 
configuration/entityTypes/Location/attributes/NeighborhoodAVCVARCHARconfiguration/entityTypes/Location/attributes/AVC, configuration/entityTypes/Location/attributes/AVCCOUNTRY_CODEVARCHARconfiguration/entityTypes/Location/attributes/CountryLKUP_IMS_COUNTRY_CODEGEO_LOCATION.LATITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LatitudeGEO_LOCATION.LONGITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LongitudeGEO_LOCATION.GEO_ACCURACYVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracyGEO_LOCATION.GEO_CODING_SYSTEMVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystemADDRESS_PHONEReltio URI: configuration/relationTypes/HasAddress/attributes/Phone, configuration/relationTypes/HasAddress/attributes/PhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionPHONE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPE_IMSVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/TypeIMS, configuration/relationTypes/HasAddress/attributes/Phone/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPENUMBERVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/Number, configuration/relationTypes/HasAddress/attributes/Phone/attributes/NumberEXTENSIONVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/Extension, configuration/relationTypes/HasAddress/attributes/Phone/attributes/ExtensionRANKVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/Rank, configuration/relationTypes/HasAddress/attributes/Phone/attributes/RankACTIVE_ADDRESS_PHONEBOOLEANconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/Active, 
configuration/relationTypes/HasAddress/attributes/Phone/attributes/ActiveBEST_PHONE_INDICATORVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/BestPhoneIndicator, configuration/relationTypes/HasAddress/attributes/Phone/attributes/BestPhoneIndicatorADDRESS_DEAReltio URI: configuration/relationTypes/HasAddress/attributes/DEA, configuration/relationTypes/HasAddress/attributes/DEAMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionDEA_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNUMBERVARCHARconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/Number, configuration/relationTypes/HasAddress/attributes/DEA/attributes/NumberEXPIRATION_DATEDATEconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/ExpirationDate, configuration/relationTypes/HasAddress/attributes/DEA/attributes/ExpirationDateSTATUSVARCHARconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/Status, configuration/relationTypes/HasAddress/attributes/DEA/attributes/StatusLKUP_IMS_IDENTIFIER_STATUSDRUG_SCHEDULEVARCHARconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/DrugSchedule, configuration/relationTypes/HasAddress/attributes/DEA/attributes/DrugScheduleBUSINESS_ACTIVITY_CODEVARCHARBusiness Activity Codeconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/BusinessActivityCode, configuration/relationTypes/HasAddress/attributes/DEA/attributes/BusinessActivityCodeSUB_BUSINESS_ACTIVITY_CODEVARCHARSub Business Activity Codeconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/SubBusinessActivityCode, configuration/relationTypes/HasAddress/attributes/DEA/attributes/SubBusinessActivityCodeDEA_CHANGE_REASON_CODEVARCHARDEA Change Reason Codeconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/DEAChangeReasonCode, 
configuration/relationTypes/HasAddress/attributes/DEA/attributes/DEAChangeReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/AuthorizationStatus, configuration/relationTypes/HasAddress/attributes/DEA/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSADDRESS_OFFICE_INFORMATIONReltio URI: configuration/relationTypes/HasAddress/attributes/OfficeInformation, configuration/relationTypes/HasAddress/attributes/OfficeInformationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionOFFICE_INFORMATION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeBEST_TIMESVARCHARconfiguration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/BestTimes, configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/BestTimesAPPT_REQUIREDBOOLEANconfiguration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/ApptRequired, configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/ApptRequiredOFFICE_NOTESVARCHARconfiguration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/OfficeNotes, configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/OfficeNotesSPECIALITIESReltio URI: configuration/entityTypes/HCP/attributes/Specialities, configuration/entityTypes/HCO/attributes/SpecialitiesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSPECIALTY_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SpecialtyType, 
configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyTypeLKUP_IMS_SPECIALTY_TYPESPECIALTYVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLKUP_IMS_SPECIALTYRANKVARCHARSpecialty Rankconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Rank, configuration/entityTypes/HCO/attributes/Specialities/attributes/RankDESCVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Specialities/attributes/DescGROUPVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Group, configuration/entityTypes/HCO/attributes/Specialities/attributes/GroupSOURCE_CDVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SourceCDSPECIALTY_DETAILVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SpecialtyDetail, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyDetailPROFESSION_CODEVARCHARProfessionconfiguration/entityTypes/HCP/attributes/Specialities/attributes/ProfessionCodeLKUP_IMS_PROFESSIONPRIMARY_SPECIALTY_FLAGBOOLEANconfiguration/entityTypes/HCP/attributes/Specialities/attributes/PrimarySpecialtyFlag, configuration/entityTypes/HCO/attributes/Specialities/attributes/PrimarySpecialtyFlagSORT_ORDERVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SortOrder, configuration/entityTypes/HCO/attributes/Specialities/attributes/SortOrderBEST_RECORDVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/BestRecord, configuration/entityTypes/HCO/attributes/Specialities/attributes/BestRecordSUB_SPECIALTYVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SubSpecialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/SubSpecialtyLKUP_IMS_SPECIALTYSUB_SPECIALTY_RANKVARCHARSubSpecialty 
Rankconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SubSpecialtyRank, configuration/entityTypes/HCO/attributes/Specialities/attributes/SubSpecialtyRankTRUSTED_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/TrustedIndicator, configuration/entityTypes/HCO/attributes/Specialities/attributes/TrustedIndicatorLKUP_IMS_YES_NORAW_SPECIALTYVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/RawSpecialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/RawSpecialtyRAW_SPECIALTY_DESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/RawSpecialtyDescription, configuration/entityTypes/HCO/attributes/Specialities/attributes/RawSpecialtyDescriptionIDENTIFIERSReltio URI: configuration/entityTypes/HCP/attributes/Identifiers, configuration/entityTypes/HCO/attributes/IdentifiersMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameIDENTIFIERS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Type, configuration/entityTypes/HCO/attributes/Identifiers/attributes/TypeLKUP_IMS_HCP_IDENTIFIER_TYPE,LKUP_IMS_HCO_IDENTIFIER_TYPEIDVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ID, configuration/entityTypes/HCO/attributes/Identifiers/attributes/IDORDERVARCHARDisplays the order of priority for an MPN for those facilities that share an MPN. Valid values are: P - the MPN on a business record is the primary identifier for the business and O - the MPN is a secondary identifier. (Using P for the MPN supports aggregating clinical volumes and avoids double counting).configuration/entityTypes/HCP/attributes/Identifiers/attributes/Order, configuration/entityTypes/HCO/attributes/Identifiers/attributes/OrderCATEGORYVARCHARAdditional information about the identifier. 
For a DDD identifier, the DDD subcategory code (e.g. H4, D1, A2). For a DEA identifier, contains the DEA activity code (e.g. M for Mid Level Practitioner)configuration/entityTypes/HCP/attributes/Identifiers/attributes/Category, configuration/entityTypes/HCO/attributes/Identifiers/attributes/CategoryLKUP_IMS_IDENTIFIERS_CATEGORYSTATUSVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Status, configuration/entityTypes/HCO/attributes/Identifiers/attributes/StatusLKUP_IMS_IDENTIFIER_STATUSAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/AuthorizationStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSDEACTIVATION_REASON_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationReasonCode, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEDEACTIVATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationDateREACTIVATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ReactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ReactivationDateNATIONAL_ID_ATTRIBUTEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/NationalIdAttribute, configuration/entityTypes/HCO/attributes/Identifiers/attributes/NationalIdAttributeAMAMDDO_FLAGVARCHARAMA MD-DO Flagconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/AMAMDDOFlagMAJOR_PROF_ACTVARCHARMajor Professional Activity 
Codeconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/MajorProfActHOSPITAL_HOURSVARCHARHospitalHoursconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/HospitalHoursAMA_HOSPITAL_IDVARCHARAMAHospitalIDconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/AMAHospitalIDPRACTICE_TYPE_CODEVARCHARPracticeTypeCodeconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/PracticeTypeCodeEMPLOYMENT_TYPE_CODEVARCHAREmploymentTypeCodeconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/EmploymentTypeCodeBIRTH_CITYVARCHARBirthCityconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/BirthCityBIRTH_STATEVARCHARBirthStateconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/BirthStateBIRTH_COUNTRYVARCHARBirthCountryconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/BirthCountryMEDICAL_SCHOOLVARCHARMedicalSchoolconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/MedicalSchoolGRADUATION_YEARVARCHARGraduationYearconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/GraduationYearNUM_OF_PYSICIANSVARCHARNumOfPysiciansconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/NumOfPysiciansSTATEVARCHARLicenseStateconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/State, configuration/entityTypes/HCO/attributes/Identifiers/attributes/StateLKUP_IMS_STATE_CODETRUSTED_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/TrustedIndicator, configuration/entityTypes/HCO/attributes/Identifiers/attributes/TrustedIndicatorLKUP_IMS_YES_NOHARD_LINK_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/HardLinkIndicator, configuration/entityTypes/HCO/attributes/Identifiers/attributes/HardLinkIndicatorLKUP_IMS_YES_NOLAST_VERIFICATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/LastVerificationStatus, 
configuration/entityTypes/HCO/attributes/Identifiers/attributes/LastVerificationStatusLAST_VERIFICATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/LastVerificationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/LastVerificationDateACTIVATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ActivationDateSPEAKERReltio URI: configuration/entityTypes/HCP/attributes/SpeakerMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPEAKER_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeIS_SPEAKERBOOLEANconfiguration/entityTypes/HCP/attributes/Speaker/attributes/IsSpeakerIS_COMPANY_APPROVED_SPEAKERBOOLEANAttribute to track if an HCP is a COMPANY approved speakerconfiguration/entityTypes/HCP/attributes/Speaker/attributes/IsCOMPANYApprovedSpeakerLAST_BRIEFING_DATEDATETrack the last date that the HCP received the briefing/training to be certified as an approved COMPANY Speakerconfiguration/entityTypes/HCP/attributes/Speaker/attributes/LastBriefingDateSPEAKER_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerStatusLKUP_SPEAKERSTATUSSPEAKER_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerTypeLKUP_SPEAKERTYPESPEAKER_LEVELVARCHARconfiguration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerLevelLKUP_SPEAKERLEVELHCP_WORKPLACE_MAIN_HCOReltio URI: configuration/entityTypes/HCO/attributes/MainHCOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameWORKPLACE_URIVARCHARgenerated key descriptionMAINHCO_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAMEVARCHARNameconfiguration/entityTypes/HCO/attributes/NameOTHER_NAMESVARCHAROther 
Namesconfiguration/entityTypes/HCO/attributes/OtherNamesTYPE_CODEVARCHARCustomer Typeconfiguration/entityTypes/HCO/attributes/TypeCodeLKUP_IMS_HCO_CUST_TYPESOURCE_IDVARCHARSource IDconfiguration/entityTypes/HCO/attributes/SourceIDVALIDATION_STATUSVARCHARconfiguration/relationTypes/RLE.MAI/attributes/ValidationStatusLKUP_IMS_VAL_STATUSVALIDATION_CHANGE_DATEDATEconfiguration/relationTypes/RLE.MAI/attributes/ValidationChangeDateAFFILIATION_STATUSVARCHARconfiguration/relationTypes/RLE.MAI/attributes/AffiliationStatusLKUP_IMS_STATUSCOUNTRYVARCHARCountry Codeconfiguration/relationTypes/RLE.MAI/attributes/CountryLKUP_IMS_COUNTRY_CODEHCP_WORKPLACE_MAIN_HCO_CLASSOF_TRADE_NReltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeNMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameWORKPLACE_URIVARCHARgenerated key descriptionMAINHCO_URIVARCHARgenerated key descriptionCLASSOFTRADEN_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePRIORITYVARCHARNumeric code for the primary class of tradeconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/PriorityCLASSIFICATIONVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/ClassificationLKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATIONFACILITY_TYPEVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityTypeLKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPESPECIALTYVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SpecialtyLKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTYHCP_MAIN_WORKPLACE_CLASSOF_TRADE_NReltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeNMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameMAINWORKPLACE_URIVARCHARgenerated key descriptionCLASSOFTRADEN_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypePRIORITYVARCHARNumeric code for the primary class of tradeconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/PriorityCLASSIFICATIONVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/ClassificationLKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATIONFACILITY_TYPEVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityTypeLKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPESPECIALTYVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SpecialtyLKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTYPHONEReltio URI: configuration/entityTypes/HCP/attributes/Phone, configuration/entityTypes/HCO/attributes/PhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPE_IMSVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/TypeIMS, configuration/entityTypes/HCO/attributes/Phone/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPENUMBERVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/Number, configuration/entityTypes/HCO/attributes/Phone/attributes/NumberEXTENSIONVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/Extension, configuration/entityTypes/HCO/attributes/Phone/attributes/ExtensionRANKVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/Rank, configuration/entityTypes/HCO/attributes/Phone/attributes/RankCOUNTRY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/CountryCode, configuration/entityTypes/HCO/attributes/Phone/attributes/CountryCodeLKUP_IMS_COUNTRY_CODEAREA_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/AreaCode, configuration/entityTypes/HCO/attributes/Phone/attributes/AreaCodeLOCAL_NUMBERVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/LocalNumber, 
configuration/entityTypes/HCO/attributes/Phone/attributes/LocalNumberFORMATTED_NUMBERVARCHARFormatted number of the phoneconfiguration/entityTypes/HCP/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/FormattedNumberVALIDATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationStatusVALIDATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Phone/attributes/ValidationDate, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationDateLINE_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/LineType, configuration/entityTypes/HCO/attributes/Phone/attributes/LineTypeFORMAT_MASKVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/FormatMask, configuration/entityTypes/HCO/attributes/Phone/attributes/FormatMaskDIGIT_COUNTVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/DigitCount, configuration/entityTypes/HCO/attributes/Phone/attributes/DigitCountGEO_AREAVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/GeoArea, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoAreaGEO_COUNTRYVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoCountryDQ_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/DQCode, configuration/entityTypes/HCO/attributes/Phone/attributes/DQCodeACTIVE_PHONEBOOLEANDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Phone/attributes/ActiveBEST_PHONE_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/BestPhoneIndicator, configuration/entityTypes/HCO/attributes/Phone/attributes/BestPhoneIndicatorPHONE_SOURCE_DATAReltio URI: configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData, 
configuration/entityTypes/HCO/attributes/Phone/attributes/SourceDataMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARgenerated key descriptionSOURCE_DATA_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDATASET_IDENTIFIERVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetIdentifier, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetIdentifierDATASET_PARTY_IDENTIFIERVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetPartyIdentifier, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetPartyIdentifierDATASET_PHONE_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetPhoneType, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetPhoneTypeLKUP_IMS_COMMUNICATION_TYPERAW_DATASET_PHONE_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/RawDatasetPhoneType, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/RawDatasetPhoneTypeBEST_PHONE_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/BestPhoneIndicator, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/BestPhoneIndicatorEMAILReltio URI: configuration/entityTypes/HCP/attributes/Email, configuration/entityTypes/HCO/attributes/EmailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMAIL_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPE_IMSVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/TypeIMS, 
configuration/entityTypes/HCO/attributes/Email/attributes/TypeIMSLKUP_IMS_EMAIL_TYPEEMAILVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/Email, configuration/entityTypes/HCO/attributes/Email/attributes/EmailDOMAINVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/Domain, configuration/entityTypes/HCO/attributes/Email/attributes/DomainDOMAIN_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/DomainType, configuration/entityTypes/HCO/attributes/Email/attributes/DomainTypeUSERNAMEVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/Username, configuration/entityTypes/HCO/attributes/Email/attributes/UsernameRANKVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/Rank, configuration/entityTypes/HCO/attributes/Email/attributes/RankVALIDATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationStatusVALIDATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Email/attributes/ValidationDate, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationDateACTIVE_EMAIL_HCPVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/ActiveDQ_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/DQCode, configuration/entityTypes/HCO/attributes/Email/attributes/DQCodeSOURCE_CDVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Email/attributes/SourceCDACTIVE_EMAIL_HCOBOOLEANconfiguration/entityTypes/HCO/attributes/Email/attributes/ActiveDISCLOSUREDisclosure - Reporting derived attributesReltio URI: configuration/entityTypes/HCP/attributes/Disclosure, configuration/entityTypes/HCO/attributes/DisclosureMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDISCLOSURE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeDGS_CATEGORYVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory, configuration/entityTypes/HCO/attributes/Disclosure/attributes/DGSCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCODGS_TITLEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLEDGS_QUALITYVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityLKUP_BENEFITQUALITYDGS_SPECIALTYVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYCONTRACT_CLASSIFICATIONVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationLKUP_CONTRACTCLASSIFICATIONCONTRACT_CLASSIFICATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationDateMILITARYBOOLEANconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/MilitaryLEGALSTATUSVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/LEGALSTATUSLKUP_LEGALSTATUSTHIRD_PARTY_VERIFYReltio URI: configuration/entityTypes/HCP/attributes/ThirdPartyVerify, configuration/entityTypes/HCO/attributes/ThirdPartyVerifyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTHIRD_PARTY_VERIFY_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSEND_FOR_VERIFYVARCHARconfiguration/entityTypes/HCP/attributes/ThirdPartyVerify/attributes/SendForVerify, configuration/entityTypes/HCO/attributes/ThirdPartyVerify/attributes/SendForVerifyLKUP_IMS_SEND_FOR_VALIDATIONVERIFY_DATEVARCHARconfiguration/entityTypes/HCP/attributes/ThirdPartyVerify/attributes/VerifyDate, configuration/entityTypes/HCO/attributes/ThirdPartyVerify/attributes/VerifyDatePRIVACY_PREFERENCESReltio URI: configuration/entityTypes/HCP/attributes/PrivacyPreferences, configuration/entityTypes/HCO/attributes/PrivacyPreferencesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NamePRIVACY_PREFERENCES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeOPT_OUTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutOPT_OUT_START_DATEDATEconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutStartDateALLOWED_TO_CONTACTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AllowedToContactPHONE_OPT_OUTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PhoneOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/PhoneOptOutEMAIL_OPT_OUTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/EmailOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/EmailOptOutFAX_OPT_OUTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FaxOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/FaxOptOutVISIT_OPT_OUTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/VisitOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/VisitOptOutAMA_NO_CONTACTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AMANoContactPDRPBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPPDRP_DATEDATEconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPDateTEXT_MESSAGE_OPT_OUTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/TextMessageOptOutMAIL_OPT_OUTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/MailOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/MailOptOutOPT_OUT_CHANGE_DATEDATEThe date the opt out indicator was 
changedconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutChangeDateREMOTE_OPT_OUTBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/RemoteOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/RemoteOptOutOPT_OUT_ONE_KEYBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutOneKey, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/OptOutOneKeyOPT_OUT_SAFE_HARBORBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutSafeHarborKEY_OPINION_LEADERBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/KeyOpinionLeaderRESIDENT_INDICATORBOOLEANconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/ResidentIndicatorALLOW_SAFE_HARBORBOOLEANconfiguration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/AllowSafeHarborSANCTIONReltio URI: configuration/entityTypes/HCP/attributes/SanctionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSANCTION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARCourt sanction Id for any case.configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionIdACTION_CODEVARCHARCourt sanction code for a caseconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionCodeACTION_DESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionDescriptionBOARD_CODEVARCHARCourt case board idconfiguration/entityTypes/HCP/attributes/Sanction/attributes/BoardCodeBOARD_DESCVARCHARCourt case board 
descriptionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/BoardDescACTION_DATEDATEconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionDateSANCTION_PERIOD_START_DATEDATEconfiguration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodStartDateSANCTION_PERIOD_END_DATEDATEconfiguration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodEndDateMONTH_DURATIONVARCHARconfiguration/entityTypes/HCP/attributes/Sanction/attributes/MonthDurationFINE_AMOUNTVARCHARconfiguration/entityTypes/HCP/attributes/Sanction/attributes/FineAmountOFFENSE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseCodeOFFENSE_DESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDescriptionOFFENSE_DATEDATEconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDateHCP_SANCTIONSReltio URI: configuration/entityTypes/HCP/attributes/SanctionsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSANCTIONS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeIDENTIFIER_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/IdentifierTypeLKUP_IMS_HCP_IDENTIFIER_TYPEIDENTIFIER_IDVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/IdentifierIDTYPE_CODEVARCHARType of sanction/restriction for a given 
providerconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/TypeCodeLKUP_IMS_SNCTN_RSTR_ACTNDEACTIVATION_REASON_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/DeactivationReasonCodeLKUP_IMS_SNCTN_RSTR_DACT_RSNDISPOSITION_CATEGORY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/DispositionCategoryCodeLKUP_IMS_SNCTN_RSTR_DSP_CATGEXCLUSION_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/ExclusionCodeLKUP_IMS_SNCTN_RSTR_EXCLDESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/DescriptionURLVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/URLISSUED_DATEDATEconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/IssuedDateEFFECTIVE_DATEDATEconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/EffectiveDateREINSTATEMENT_DATEDATEconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/ReinstatementDateIS_STATE_WAIVERBOOLEANconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/IsStateWaiverSTATUS_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/StatusCodeLKUP_IMS_IDENTIFIER_STATUSSOURCE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/SourceCodeLKUP_IMS_SNCTN_RSTR_SRCPUBLICATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/PublicationDateGOVERNMENT_LEVEL_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Sanctions/attributes/GovernmentLevelCodeLKUP_IMS_GOVT_LVLHCP_GSA_SANCTIONReltio URI: configuration/entityTypes/HCP/attributes/GSASanctionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameGSA_SANCTION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeSANCTION_IDVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/SanctionIdFIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/FirstNameMIDDLE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/MiddleNameLAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/LastNameSUFFIX_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/SuffixNameCITYVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/CitySTATEVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/StateZIPVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ZipACTION_DATEVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ActionDateTERM_DATEVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/TermDateAGENCYVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/AgencyCONFIDENCEVARCHARconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ConfidenceDEGREESDO NOT USE THIS ATTRIBUTE - will be deprecatedReltio URI: configuration/entityTypes/HCP/attributes/DegreesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDEGREES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDEGREEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Degrees/attributes/DegreeDEGREEBEST_DEGREEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Degrees/attributes/BestDegreeCERTIFICATESReltio URI: configuration/entityTypes/HCP/attributes/CertificatesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCERTIFICATES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
Type | |
CERTIFICATE_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/CertificateId |
NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/Name |
BOARD_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/BoardId |
BOARD_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/BoardName |
INTERNAL_HCP_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalHCPStatus |
INTERNAL_HCP_INACTIVE_REASON_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalHCPInactiveReasonCode |
INTERNAL_SAMPLING_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalSamplingStatus |
PVS_ELIGIBILTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Certificates/attributes/PVSEligibilty |

EMPLOYMENT
Reltio URI: configuration/entityTypes/HCP/attributes/Employment
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EMPLOYMENT_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TITLE | VARCHAR | | configuration/relationTypes/Employment/attributes/Title |
SUMMARY | VARCHAR | | configuration/relationTypes/Employment/attributes/Summary |
IS_CURRENT | BOOLEAN | | configuration/relationTypes/Employment/attributes/IsCurrent |
NAME | VARCHAR | Name | configuration/entityTypes/Organization/attributes/Name |

CREDENTIAL
DO NOT USE THIS ATTRIBUTE - will be deprecated
Reltio URI: configuration/entityTypes/HCP/attributes/Credential
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CREDENTIAL_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
RANK | VARCHAR | | configuration/entityTypes/HCP/attributes/Credential/attributes/Rank |
CREDENTIAL | VARCHAR | | configuration/entityTypes/HCP/attributes/Credential/attributes/Credential | CRED

PROFESSION
Reltio URI: configuration/entityTypes/HCP/attributes/Profession
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
PROFESSION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
PROFESSION_CODE | VARCHAR | Profession | configuration/entityTypes/HCP/attributes/Profession/attributes/ProfessionCode | LKUP_IMS_PROFESSION
RANK | VARCHAR | Profession Rank | configuration/entityTypes/HCP/attributes/Profession/attributes/Rank |

EDUCATION
Reltio URI: configuration/entityTypes/HCP/attributes/Education
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EDUCATION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SCHOOL_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/SchoolName | LKUP_IMS_SCHOOL_CODE
TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/Type |
DEGREE | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/Degree |
YEAR_OF_GRADUATION | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduation |
GRADUATED | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/Graduated |
GPA | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/GPA |
YEARS_IN_PROGRAM | VARCHAR | Year in Grad Training Program, Year in training in current program | configuration/entityTypes/HCP/attributes/Education/attributes/YearsInProgram |
START_YEAR | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/StartYear |
END_YEAR | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/EndYear |
FIELDOF_STUDY | VARCHAR | Specialty Focus or Specialty Training | configuration/entityTypes/HCP/attributes/Education/attributes/FieldofStudy |
ELIGIBILITY | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/Eligibility |
EDUCATION_TYPE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/EducationType |
RANK | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/Rank |
MEDICAL_SCHOOL | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/MedicalSchool |

TAXONOMY
Reltio URI: configuration/entityTypes/HCP/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
TAXONOMY_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TAXONOMY | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Taxonomy | TAXONOMY_CD, LKUP_IMS_JURIDIC_CATEGORY
TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Type, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Type | TAXONOMY_TYPE
PROVIDER_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ProviderType |
CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Classification, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Classification |
SPECIALIZATION | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Specialization, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Specialization |
PRIORITY | VARCHAR | | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Priority, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Priority | TAXONOMY_PRIORITY
STR_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/Taxonomy/attributes/StrType | LKUP_IMS_STRUCTURE_TYPE

DP_PRESENCE
Reltio URI: configuration/entityTypes/HCP/attributes/DPPresence, configuration/entityTypes/HCO/attributes/DPPresence
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DP_PRESENCE_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CHANNEL_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelCode, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelCode | LKUP_IMS_DP_CHANNEL
CHANNEL_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelName, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelName |
CHANNEL_URL | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelURL, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelURL |
CHANNEL_REGISTRATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelRegistrationDate, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelRegistrationDate |
PRESENCE_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/PresenceType, configuration/entityTypes/HCO/attributes/DPPresence/attributes/PresenceType | LKUP_IMS_DP_PRESENCE_TYPE
ACTIVITY | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/Activity, configuration/entityTypes/HCO/attributes/DPPresence/attributes/Activity | LKUP_IMS_DP_SCORE_CODE
AUDIENCE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPPresence/attributes/Audience, configuration/entityTypes/HCO/attributes/DPPresence/attributes/Audience | LKUP_IMS_DP_SCORE_CODE

DP_SUMMARY
Reltio URI: configuration/entityTypes/HCP/attributes/DPSummary, configuration/entityTypes/HCO/attributes/DPSummary
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DP_SUMMARY_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SUMMARY_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPSummary/attributes/SummaryType, configuration/entityTypes/HCO/attributes/DPSummary/attributes/SummaryType | LKUP_IMS_DP_SUMMARY_TYPE
SCORE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/DPSummary/attributes/ScoreCode, configuration/entityTypes/HCO/attributes/DPSummary/attributes/ScoreCode | LKUP_IMS_DP_SCORE_CODE

ADDITIONAL_ATTRIBUTES
Reltio URI: configuration/entityTypes/HCP/attributes/AdditionalAttributes, configuration/entityTypes/HCO/attributes/AdditionalAttributes
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDITIONAL_ATTRIBUTES_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
ATTRIBUTE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeName, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeName |
ATTRIBUTE_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeType, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeType | LKUP_IMS_TYPE_CODE
ATTRIBUTE_VALUE | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeValue, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeValue |
ATTRIBUTE_RANK | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeRank, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeRank |
ADDITIONAL_INFO | VARCHAR | | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AdditionalInfo, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AdditionalInfo |

DATA_QUALITY
Data Quality
Reltio URI: configuration/entityTypes/HCP/attributes/DataQuality, configuration/entityTypes/HCO/attributes/DataQuality
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DATA_QUALITY_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SEVERITY_LEVEL | VARCHAR | | configuration/entityTypes/HCP/attributes/DataQuality/attributes/SeverityLevel, configuration/entityTypes/HCO/attributes/DataQuality/attributes/SeverityLevel | LKUP_IMS_DQ_SEVERITY
SOURCE | VARCHAR | | configuration/entityTypes/HCP/attributes/DataQuality/attributes/Source, configuration/entityTypes/HCO/attributes/DataQuality/attributes/Source |
SCORE | VARCHAR | | configuration/entityTypes/HCP/attributes/DataQuality/attributes/Score, configuration/entityTypes/HCO/attributes/DataQuality/attributes/Score |

CLASSIFICATION
Reltio URI: configuration/entityTypes/HCP/attributes/Classification, configuration/entityTypes/HCO/attributes/Classification
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CLASSIFICATION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CLASSIFICATION_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationType, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE
CLASSIFICATION_VALUE | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationValue, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationValue |
CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationValueNumericQuantity, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationValueNumericQuantity |
STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/Status, configuration/entityTypes/HCO/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Classification/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Classification/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCP/attributes/Classification/attributes/EndDate, configuration/entityTypes/HCO/attributes/Classification/attributes/EndDate |
NOTES | VARCHAR | | configuration/entityTypes/HCP/attributes/Classification/attributes/Notes, configuration/entityTypes/HCO/attributes/Classification/attributes/Notes |

TAG
Reltio URI: configuration/entityTypes/HCP/attributes/Tag, configuration/entityTypes/HCO/attributes/Tag
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
TAG_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TAG_TYPE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Tag/attributes/TagTypeCode, configuration/entityTypes/HCO/attributes/Tag/attributes/TagTypeCode | LKUP_IMS_TAG_TYPE_CODE
TAG_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Tag/attributes/TagCode, configuration/entityTypes/HCO/attributes/Tag/attributes/TagCode |
STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Tag/attributes/Status, configuration/entityTypes/HCO/attributes/Tag/attributes/Status | LKUP_IMS_TAG_STATUS
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Tag/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Tag/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCP/attributes/Tag/attributes/EndDate, configuration/entityTypes/HCO/attributes/Tag/attributes/EndDate |
NOTES | VARCHAR | | configuration/entityTypes/HCP/attributes/Tag/attributes/Notes, configuration/entityTypes/HCO/attributes/Tag/attributes/Notes |

EXCLUSIONS
Reltio URI: configuration/entityTypes/HCP/attributes/Exclusions, configuration/entityTypes/HCO/attributes/Exclusions
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EXCLUSIONS_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
PRODUCT_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ProductId, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ProductId | LKUP_IMS_PRODUCT_ID
EXCLUSION_STATUS_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ExclusionStatusCode, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ExclusionStatusCode | LKUP_IMS_EXCL_STATUS_CODE
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Exclusions/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/EndDate, configuration/entityTypes/HCO/attributes/Exclusions/attributes/EndDate |
NOTES | VARCHAR | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/Notes, configuration/entityTypes/HCO/attributes/Exclusions/attributes/Notes |
EXCLUSION_RULE_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ExclusionRuleId, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ExclusionRuleId |

ACTION
Reltio URI: configuration/entityTypes/HCP/attributes/Action, configuration/entityTypes/HCO/attributes/Action
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ACTION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
ACTION_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionCode, configuration/entityTypes/HCO/attributes/Action/attributes/ActionCode | LKUP_IMS_ACTION_CODE
ACTION_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionName, configuration/entityTypes/HCO/attributes/Action/attributes/ActionName |
ACTION_REQUESTED_DATE | DATE | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionRequestedDate, configuration/entityTypes/HCO/attributes/Action/attributes/ActionRequestedDate |
ACTION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionStatus, configuration/entityTypes/HCO/attributes/Action/attributes/ActionStatus | LKUP_IMS_ACTION_STATUS
ACTION_STATUS_DATE | DATE | | configuration/entityTypes/HCP/attributes/Action/attributes/ActionStatusDate, configuration/entityTypes/HCO/attributes/Action/attributes/ActionStatusDate |

ALTERNATE_NAME
Reltio URI: configuration/entityTypes/HCP/attributes/AlternateName, configuration/entityTypes/HCO/attributes/AlternateName
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ALTERNATE_NAME_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NAME_TYPE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/NameTypeCode, configuration/entityTypes/HCO/attributes/AlternateName/attributes/NameTypeCode | LKUP_IMS_NAME_TYPE_CODE
NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/Name, configuration/entityTypes/HCO/attributes/AlternateName/attributes/Name |
FIRST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/FirstName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/FirstName |
MIDDLE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/MiddleName |
LAST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/LastName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/LastName |
SUFFIX_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/AlternateName/attributes/SuffixName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/SuffixName |

LANGUAGE
Reltio URI: configuration/entityTypes/HCP/attributes/Language, configuration/entityTypes/HCO/attributes/Language
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
LANGUAGE_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Language/attributes/LanguageCode, configuration/entityTypes/HCO/attributes/Language/attributes/LanguageCode |
PROFICIENCY_LEVEL | VARCHAR | | configuration/entityTypes/HCP/attributes/Language/attributes/ProficiencyLevel, configuration/entityTypes/HCO/attributes/Language/attributes/ProficiencyLevel |

SOURCE_DATA
Reltio URI: configuration/entityTypes/HCP/attributes/SourceData, configuration/entityTypes/HCO/attributes/SourceData
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SOURCE_DATA_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CLASS_OF_TRADE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/ClassOfTradeCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/ClassOfTradeCode |
RAW_CLASS_OF_TRADE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/RawClassOfTradeCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/RawClassOfTradeCode |
RAW_CLASS_OF_TRADE_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/RawClassOfTradeDescription, configuration/entityTypes/HCO/attributes/SourceData/attributes/RawClassOfTradeDescription |
DATASET_IDENTIFIER | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/DatasetIdentifier, configuration/entityTypes/HCO/attributes/SourceData/attributes/DatasetIdentifier |
DATASET_PARTY_IDENTIFIER | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/DatasetPartyIdentifier, configuration/entityTypes/HCO/attributes/SourceData/attributes/DatasetPartyIdentifier |
PARTY_STATUS_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceData/attributes/PartyStatusCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/PartyStatusCode |

NOTES
Reltio URI: configuration/entityTypes/HCP/attributes/Notes, configuration/entityTypes/HCO/attributes/Notes
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
NOTES_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NOTE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Notes/attributes/NoteCode, configuration/entityTypes/HCO/attributes/Notes/attributes/NoteCode | LKUP_IMS_NOTE_CODE
NOTE_TEXT | VARCHAR | | configuration/entityTypes/HCP/attributes/Notes/attributes/NoteText, configuration/entityTypes/HCO/attributes/Notes/attributes/NoteText |

HCO
Health care organization
Reltio URI: configuration/entityTypes/HCO
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NAME | VARCHAR | Name | configuration/entityTypes/HCO/attributes/Name |
TYPE_CODE | VARCHAR | Customer Type | configuration/entityTypes/HCO/attributes/TypeCode | LKUP_IMS_HCO_CUST_TYPE
SUB_TYPE_CODE | VARCHAR | Customer Sub Type | configuration/entityTypes/HCO/attributes/SubTypeCode | LKUP_IMS_HCO_SUBTYPE
EXCLUDE_FROM_MATCH | VARCHAR | | configuration/entityTypes/HCO/attributes/ExcludeFromMatch |
OTHER_NAMES | VARCHAR | Other Names | configuration/entityTypes/HCO/attributes/OtherNames |
SOURCE_ID | VARCHAR | Source ID | configuration/entityTypes/HCO/attributes/SourceID |
VALIDATION_STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS
ORIGIN_SOURCE | VARCHAR | Originating Source | configuration/entityTypes/HCO/attributes/OriginSource |
COUNTRY_CODE | VARCHAR | Country Code | configuration/entityTypes/HCO/attributes/Country | LKUP_IMS_COUNTRY_CODE
FISCAL | VARCHAR | | configuration/entityTypes/HCO/attributes/Fiscal |
SITE | VARCHAR | | configuration/entityTypes/HCO/attributes/Site |
GROUP_PRACTICE | BOOLEAN | | configuration/entityTypes/HCO/attributes/GroupPractice |
GEN_FIRST | VARCHAR | String | configuration/entityTypes/HCO/attributes/GenFirst | LKUP_IMS_HCO_GENFIRST
SREP_ACCESS | VARCHAR | String | configuration/entityTypes/HCO/attributes/SrepAccess | LKUP_IMS_HCO_SREPACCESS
ACCEPT_MEDICARE | BOOLEAN | | configuration/entityTypes/HCO/attributes/AcceptMedicare |
ACCEPT_MEDICAID | BOOLEAN | | configuration/entityTypes/HCO/attributes/AcceptMedicaid |
PERCENT_MEDICARE | VARCHAR | | configuration/entityTypes/HCO/attributes/PercentMedicare |
PERCENT_MEDICAID | VARCHAR | | configuration/entityTypes/HCO/attributes/PercentMedicaid |
PARENT_COMPANY | VARCHAR | Replacement Parent Satellite | configuration/entityTypes/HCO/attributes/ParentCompany |
HEALTH_SYSTEM_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/HealthSystemName |
VADOD | BOOLEAN | | configuration/entityTypes/HCO/attributes/VADOD |
GPO_MEMBERSHIP | BOOLEAN | | configuration/entityTypes/HCO/attributes/GPOMembership |
ACADEMIC | BOOLEAN | | configuration/entityTypes/HCO/attributes/Academic |
MKT_SEGMENT_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/MktSegmentCode |
TOTAL_LICENSE_BEDS | VARCHAR | | configuration/entityTypes/HCO/attributes/TotalLicenseBeds |
TOTAL_CENSUS_BEDS | VARCHAR | | configuration/entityTypes/HCO/attributes/TotalCensusBeds |
NUM_PATIENTS | VARCHAR | | configuration/entityTypes/HCO/attributes/NumPatients |
TOTAL_STAFFED_BEDS | VARCHAR | | configuration/entityTypes/HCO/attributes/TotalStaffedBeds |
TOTAL_SURGERIES | VARCHAR | | configuration/entityTypes/HCO/attributes/TotalSurgeries |
TOTAL_PROCEDURES | VARCHAR | | configuration/entityTypes/HCO/attributes/TotalProcedures |
OR_SURGERIES | VARCHAR | | configuration/entityTypes/HCO/attributes/ORSurgeries |
RESIDENT_PROGRAM | BOOLEAN | | configuration/entityTypes/HCO/attributes/ResidentProgram |
RESIDENT_COUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/ResidentCount |
NUMS_OF_PROVIDERS | VARCHAR | Num_of_providers displays the total number of distinct providers affiliated with a business. Current Data: Value between 1 and 422816 | configuration/entityTypes/HCO/attributes/NumsOfProviders |
CORP_PARENT_NAME | VARCHAR | Corporate Parent Name | configuration/entityTypes/HCO/attributes/CorpParentName |
MANAGER_HCO_ID | VARCHAR | Manager Hco Id | configuration/entityTypes/HCO/attributes/ManagerHcoId |
MANAGER_HCO_NAME | VARCHAR | Manager Hco Name | configuration/entityTypes/HCO/attributes/ManagerHcoName |
OWNER_SUB_NAME | VARCHAR | Owner Sub Name | configuration/entityTypes/HCO/attributes/OwnerSubName |
FORMULARY | VARCHAR | | configuration/entityTypes/HCO/attributes/Formulary | LKUP_IMS_HCO_FORMULARY
E_MEDICAL_RECORD | VARCHAR | | configuration/entityTypes/HCO/attributes/EMedicalRecord | LKUP_IMS_HCO_EREC
E_PRESCRIBE | VARCHAR | | configuration/entityTypes/HCO/attributes/EPrescribe | LKUP_IMS_HCO_EREC
PAY_PERFORM | VARCHAR | | configuration/entityTypes/HCO/attributes/PayPerform | LKUP_IMS_HCO_PAYPERFORM
CMS_COVERED_FOR_TEACHING | BOOLEAN | | configuration/entityTypes/HCO/attributes/CMSCoveredForTeaching |
COMM_HOSP | BOOLEAN | Indicates whether the facility is a short-term (average length of stay is less than 30 days) acute care, or non-federal hospital. Values: Yes and Null | configuration/entityTypes/HCO/attributes/CommHosp |
EMAIL_DOMAIN | VARCHAR | | configuration/entityTypes/HCO/attributes/EmailDomain |
STATUS_IMS | VARCHAR | | configuration/entityTypes/HCO/attributes/StatusIMS | LKUP_IMS_STATUS
DOING_BUSINESS_AS_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/DoingBusinessAsName |
COMPANY_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/CompanyType | LKUP_IMS_ORG_TYPE
CUSIP | VARCHAR | | configuration/entityTypes/HCO/attributes/CUSIP |
SECTOR_IMS | VARCHAR | Sector | configuration/entityTypes/HCO/attributes/SectorIMS | LKUP_IMS_HCO_SECTORIMS
INDUSTRY | VARCHAR | | configuration/entityTypes/HCO/attributes/Industry |
FOUNDED_YEAR | VARCHAR | | configuration/entityTypes/HCO/attributes/FoundedYear |
END_YEAR | VARCHAR | | configuration/entityTypes/HCO/attributes/EndYear |
IPO_YEAR | VARCHAR | | configuration/entityTypes/HCO/attributes/IPOYear |
LEGAL_DOMICILE | VARCHAR | State of Legal Domicile | configuration/entityTypes/HCO/attributes/LegalDomicile |
OWNERSHIP_STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/OwnershipStatus | LKUP_IMS_HCO_OWNERSHIPSTATUS
PROFIT_STATUS | VARCHAR | The profit status of the facility. Values include: For Profit, Not For Profit, Government, Armed Forces, or NULL (if data is unknown or not applicable). | configuration/entityTypes/HCO/attributes/ProfitStatus | LKUP_IMS_HCO_PROFITSTATUS
CMI | VARCHAR | CMI is the Case Mix Index for an organization. This is a government-assigned measure of the complexity of medical and surgical care provided to Medicare inpatients by a hospital under the prospective payment system (PPS). It factors in a hospital's use of technology for patient care and the medical services' level of acuity required by the patient population. | configuration/entityTypes/HCO/attributes/CMI |
SOURCE_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/SourceName |
SUB_SOURCE_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/SubSourceName |
DEA_BUSINESS_ACTIVITY | VARCHAR | | configuration/entityTypes/HCO/attributes/DEABusinessActivity |
IMAGE_LINKS | VARCHAR | | configuration/entityTypes/HCO/attributes/ImageLinks |
VIDEO_LINKS | VARCHAR | | configuration/entityTypes/HCO/attributes/VideoLinks |
DOCUMENT_LINKS | VARCHAR | | configuration/entityTypes/HCO/attributes/DocumentLinks |
WEBSITE_URL | VARCHAR | | configuration/entityTypes/HCO/attributes/WebsiteURL |
TAX_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/TaxID |
DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/Description |
STATUS_UPDATE_DATE | DATE | | configuration/entityTypes/HCO/attributes/StatusUpdateDate |
STATUS_REASON_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/StatusReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
COMMENTERS | VARCHAR | Commenters | configuration/entityTypes/HCO/attributes/Commenters |
CLIENT_TYPE_CODE | VARCHAR | Client Customer Type | configuration/entityTypes/HCO/attributes/ClientTypeCode | LKUP_IMS_HCO_CLIENT_CUST_TYPE
OFFICIAL_NAME | VARCHAR | Official Name | configuration/entityTypes/HCO/attributes/OfficialName |
VALIDATION_CHANGE_REASON | VARCHAR | | configuration/entityTypes/HCO/attributes/ValidationChangeReason | LKUP_IMS_VAL_STATUS_CHANGE_REASON
VALIDATION_CHANGE_DATE | DATE | | configuration/entityTypes/HCO/attributes/ValidationChangeDate |
CREATE_DATE | DATE | | configuration/entityTypes/HCO/attributes/CreateDate |
UPDATE_DATE | DATE | | configuration/entityTypes/HCO/attributes/UpdateDate |
CHECK_DATE | DATE | | configuration/entityTypes/HCO/attributes/CheckDate |
STATE_CODE | VARCHAR | Situation of the workplace: Open/Closed | configuration/entityTypes/HCO/attributes/StateCode | LKUP_IMS_PROFILE_STATE
STATE_DATE | DATE | Date when state of the record was last modified. | configuration/entityTypes/HCO/attributes/StateDate |
STATUS_CHANGE_REASON | VARCHAR | Reason the status of the Organization changed | configuration/entityTypes/HCO/attributes/StatusChangeReason |
NUM_EMPLOYEES | VARCHAR | | configuration/entityTypes/HCO/attributes/NumEmployees |
NUM_MED_EMPLOYEES | VARCHAR | | configuration/entityTypes/HCO/attributes/NumMedEmployees |
TOTAL_BEDS_INTENSIVE_CARE | VARCHAR | | configuration/entityTypes/HCO/attributes/TotalBedsIntensiveCare |
NUM_EXAMINATION_ROOM | VARCHAR | | configuration/entityTypes/HCO/attributes/NumExaminationRoom |
NUM_AFFILIATED_SITES | VARCHAR | | configuration/entityTypes/HCO/attributes/NumAffiliatedSites |
NUM_ENROLLED_MEMBERS | VARCHAR | | configuration/entityTypes/HCO/attributes/NumEnrolledMembers |
NUM_IN_PATIENTS | VARCHAR | | configuration/entityTypes/HCO/attributes/NumInPatients |
NUM_OUT_PATIENTS | VARCHAR | | configuration/entityTypes/HCO/attributes/NumOutPatients |
NUM_OPERATING_ROOMS | VARCHAR | | configuration/entityTypes/HCO/attributes/NumOperatingRooms |
NUM_PATIENTS_X_WEEK | VARCHAR | | configuration/entityTypes/HCO/attributes/NumPatientsXWeek |
ACT_TYPE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActTypeCode | LKUP_IMS_ACTIVITY_TYPE
DISPENSE_DRUGS | BOOLEAN | | configuration/entityTypes/HCO/attributes/DispenseDrugs |
NUM_PRESCRIBERS | VARCHAR | | configuration/entityTypes/HCO/attributes/NumPrescribers |
PATIENTS_X_YEAR | VARCHAR | | configuration/entityTypes/HCO/attributes/PatientsXYear |
ACCEPTS_NEW_PATIENTS | VARCHAR | Y/N field indicating whether the workplace accepts new patients | configuration/entityTypes/HCO/attributes/AcceptsNewPatients |
EXTERNAL_INFORMATION_URL | VARCHAR | | configuration/entityTypes/HCO/attributes/ExternalInformationURL |
MATCH_STATUS_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchStatusCode | LKUP_IMS_MATCH_STATUS_CODE
SUBSCRIPTION_FLAG1 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag1 |
SUBSCRIPTION_FLAG2 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag2 |
SUBSCRIPTION_FLAG3 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag3 |
SUBSCRIPTION_FLAG4 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag4 |
SUBSCRIPTION_FLAG5 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag5 |
SUBSCRIPTION_FLAG6 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag6 |
SUBSCRIPTION_FLAG7 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag7 |
SUBSCRIPTION_FLAG8 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag8 |
SUBSCRIPTION_FLAG9 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag9 |
SUBSCRIPTION_FLAG10 | BOOLEAN | Used for setting a profile eligible for certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag10 |
ROLE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/RoleCode | LKUP_IMS_ORG_ROLE_CODE
ACTIVATION_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivationDate |
PARTY_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/PartyID |
LAST_VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/LastVerificationStatus |
LAST_VERIFICATION_DATE | DATE | | configuration/entityTypes/HCO/attributes/LastVerificationDate |
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCO/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCO/attributes/EndDate |
PARTY_LOCALIZATION_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/PartyLocalizationCode |
MATCH_PARTY_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchPartyName |
DELETE_ENTITY | BOOLEAN | DeleteEntity flag to identify GDPR compliant data | configuration/entityTypes/HCO/attributes/DeleteEntity |
OK_VR_TRIGGER | VARCHAR | | configuration/entityTypes/HCO/attributes/OK_VR_Trigger | LKUP_IMS_SEND_FOR_VALIDATION

HCO_MAIN_HCO_CLASSOF_TRADE_N
Reltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeN
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
MAINHCO_URI | VARCHAR | generated key description | |
CLASSOFTRADEN_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority |
CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION
FACILITY_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE
SPECIALTY | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY

HCO_ADDRESS_UNIT
Reltio URI: configuration/entityTypes/Location/attributes/Unit
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | generated key description | |
UNIT_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
UNIT_NAME | VARCHAR | | configuration/entityTypes/Location/attributes/Unit/attributes/UnitName |
UNIT_VALUE | VARCHAR | | configuration/entityTypes/Location/attributes/Unit/attributes/UnitValue |

HCO_ADDRESS_BRICK
Reltio URI: configuration/entityTypes/Location/attributes/Brick
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | generated key description | |
BRICK_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TYPE | VARCHAR | | configuration/entityTypes/Location/attributes/Brick/attributes/Type | LKUP_IMS_BRICK_TYPE
BRICK_VALUE | VARCHAR | | configuration/entityTypes/Location/attributes/Brick/attributes/BrickValue | LKUP_IMS_BRICK_VALUE
SORT_ORDER | VARCHAR | | configuration/entityTypes/Location/attributes/Brick/attributes/SortOrder |

KEY_FINANCIAL_FIGURES_OVERVIEW
Reltio URI: configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
KEY_FINANCIAL_FIGURES_OVERVIEW_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
FINANCIAL_STATEMENT_TO_DATE | DATE | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDate |
FINANCIAL_PERIOD_DURATION | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDuration |
SALES_REVENUE_CURRENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrency |
SALES_REVENUE_CURRENCY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCode |
SALES_REVENUE_RELIABILITY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCode |
SALES_REVENUE_UNIT_OF_SIZE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSize |
SALES_REVENUE_AMOUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmount |
PROFIT_OR_LOSS_CURRENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrency |
PROFIT_OR_LOSS_RELIABILITY_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityText |
PROFIT_OR_LOSS_UNIT_OF_SIZE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSize |
PROFIT_OR_LOSS_AMOUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmount |
SALES_TURNOVER_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRate |
SALES3YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRate |
SALES5YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRate |
EMPLOYEE3YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRate |
EMPLOYEE5YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRate |

CLASSOF_TRADE_N
Reltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeN
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CLASSOF_TRADE_N_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority |
CLASSIFICATION | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION
FACILITY_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE
SPECIALTY | VARCHAR | | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY

SPECIALTY
DO NOT USE THIS ATTRIBUTE - will be deprecated
Reltio URI: configuration/entityTypes/HCO/attributes/Specialty
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SPECIALTY_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SPECIALTY | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCO/attributes/Specialty/attributes/Specialty |
TYPE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCO/attributes/Specialty/attributes/Type |

GSA_EXCLUSION
Reltio URI: configuration/entityTypes/HCO/attributes/GSAExclusion
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
GSA_EXCLUSION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SANCTION_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/SanctionId |
ORGANIZATION_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/OrganizationName |
ADDRESS_LINE1 | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine1 |
ADDRESS_LINE2 | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine2 |
CITY | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/City |
STATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/State |
ZIP | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Zip |
ACTION_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/ActionDate |
TERM_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/TermDate |
AGENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Agency |
CONFIDENCE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Confidence |

OIG_EXCLUSION
Reltio URI: configuration/entityTypes/HCO/attributes/OIGExclusion
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
OIG_EXCLUSION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SANCTION_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/SanctionId |
ACTION_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionCode |
ACTION_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDescription |
BOARD_CODE | VARCHAR | Court case board id | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardCode |
BOARD_DESC | VARCHAR | Court case board description | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardDesc |
ACTION_DATE | DATE | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDate |
OFFENSE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseCode |
OFFENSE_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseDescription |

BRICK
Reltio URI: configuration/entityTypes/HCO/attributes/Brick
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
BRICK_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/Brick/attributes/Type | LKUP_IMS_BRICK_TYPE
BRICK_VALUE | VARCHAR | | configuration/entityTypes/HCO/attributes/Brick/attributes/BrickValue | LKUP_IMS_BRICK_VALUE

EMR
Reltio URI: configuration/entityTypes/HCO/attributes/EMR
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EMR_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NOTES | BOOLEAN | Y/N field indicating whether workplace uses EMR software to write notes | configuration/entityTypes/HCO/attributes/EMR/attributes/Notes |
PRESCRIBES | BOOLEAN | Y/N field indicating whether the workplace uses EMR software to write prescriptions | configuration/entityTypes/HCO/attributes/EMR/attributes/Prescribes | LKUP_IMS_EMR_PRESCRIBES
ELABS_X_RAYS | BOOLEAN | Y/N indicating whether the workplace uses EMR software
for eLabs/Xraysconfiguration/entityTypes/HCO/attributes/EMR/attributes/ElabsXRaysLKUP_IMS_EMR_ELABS_XRAYSNUMBER_OF_PHYSICIANSVARCHARNumber of physicians that use EMR software in the workplaceconfiguration/entityTypes/HCO/attributes/EMR/attributes/NumberOfPhysiciansPOLICYMAKERVARCHARIndividual who makes decisions regarding EMR softwareconfiguration/entityTypes/HCO/attributes/EMR/attributes/PolicymakerSOFTWARE_TYPEVARCHARName of the EMR software used at the workplaceconfiguration/entityTypes/HCO/attributes/EMR/attributes/SoftwareTypeADOPTIONVARCHARWhen the EMR software was adopted at the workplaceconfiguration/entityTypes/HCO/attributes/EMR/attributes/AdoptionBUYING_FACTORVARCHARBuying factor which influenced the workplace's decision to purchase the EMRconfiguration/entityTypes/HCO/attributes/EMR/attributes/BuyingFactorOWNERVARCHARIndividual who made the decision to purchase EMR softwareconfiguration/entityTypes/HCO/attributes/EMR/attributes/OwnerAWAREBOOLEANconfiguration/entityTypes/HCO/attributes/EMR/attributes/AwareLKUP_IMS_EMR_AWARESOFTWAREBOOLEANconfiguration/entityTypes/HCO/attributes/EMR/attributes/SoftwareLKUP_IMS_EMR_SOFTWAREVENDORVARCHARconfiguration/entityTypes/HCO/attributes/EMR/attributes/VendorLKUP_IMS_EMR_VENDORBUSINESS_HOURSReltio URI: configuration/entityTypes/HCO/attributes/BusinessHoursMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESS_HOURS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeDAYVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/DayPERIODVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodTIME_SLOTVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/TimeSlotSTART_TIMEVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/StartTimeEND_TIMEVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/EndTimeAPPOINTMENT_ONLYBOOLEANconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/AppointmentOnlyPERIOD_STARTVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodStartPERIOD_ENDVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodEndACO_DETAILSACO DetailsReltio URI: configuration/entityTypes/HCO/attributes/ACODetailsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACO_DETAILS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeACO_TYPE_CODEVARCHARAcoTypeCodeconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeCodeLKUP_IMS_ACO_TYPEACO_TYPE_CATGVARCHARAcoTypeCatgconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeCatgACO_TYPE_MDELVARCHARAcoTypeMdelconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeMdelACO_DETAIL_IDVARCHARAcoDetailIdconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailIdACO_DETAIL_CODEVARCHARAcoDetailCodeconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailCodeLKUP_IMS_ACO_DETAILACO_DETAIL_GROUP_CODEVARCHARAcoDetailGroupCodeconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailGroupCodeLKUP_IMS_ACO_DETAIL_GROUPACO_VALVARCHARAcoValconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoValTRADE_STYLE_NAMEReltio URI: configuration/entityTypes/HCO/attributes/TradeStyleNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NameTRADE_STYLE_NAME_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeORGANIZATION_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/OrganizationNameLANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/LanguageCodeFORMER_ORGANIZATION_PRIMARY_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/FormerOrganizationPrimaryNameDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/DisplaySequenceTYPEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/TypePRIOR_DUNS_NUMBERReltio URI: configuration/entityTypes/HCO/attributes/PriorDUNSNUmberMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIOR_DUNSN_UMBER_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTRANSFER_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDUNSNumberTRANSFER_REASON_TEXTVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonTextTRANSFER_REASON_CODEVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonCodeTRANSFER_DATEVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDateTRANSFERRED_FROM_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumberTRANSFERRED_TO_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumberINDUSTRY_CODEReltio URI: configuration/entityTypes/HCO/attributes/IndustryCodeMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameINDUSTRY_CODE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive 
FlagENTITY_TYPEVARCHARReltio Entity TypeDNB_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/DNBCodeINDUSTRY_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeINDUSTRY_CODE_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeDescriptionINDUSTRY_CODE_LANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeLanguageCodeINDUSTRY_CODE_WRITING_SCRIPTVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeWritingScriptDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/DisplaySequenceSALES_PERCENTAGEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/SalesPercentageTYPEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/TypeINDUSTRY_TYPE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryTypeCodeIMPORT_EXPORT_AGENTVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/ImportExportAgentACTIVITIES_AND_OPERATIONSReltio URI: configuration/entityTypes/HCO/attributes/ActivitiesAndOperationsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACTIVITIES_AND_OPERATIONS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeLINE_OF_BUSINESS_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescriptionLANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LanguageCodeWRITING_SCRIPT_CODEVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/WritingScriptCodeIMPORT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ImportIndicatorEXPORT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ExportIndicatorAGENT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/AgentIndicatorEMPLOYEE_DETAILSReltio URI: configuration/entityTypes/HCO/attributes/EmployeeDetailsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMPLOYEE_DETAILS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeINDIVIDUAL_EMPLOYEE_FIGURES_DATEVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDateINDIVIDUAL_TOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantityINDIVIDUAL_RELIABILITY_TEXTVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualReliabilityTextTOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeQuantityTOTAL_EMPLOYEE_RELIABILITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeReliabilityPRINCIPALS_INCLUDEDVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/PrincipalsIncludedMATCH_QUALITYReltio URI: configuration/entityTypes/HCO/attributes/MatchQualityMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameMATCH_QUALITY_URIVARCHARgenerated key 
descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCONFIDENCE_CODEVARCHARDnB Match Quality Confidence Codeconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/ConfidenceCodeDISPLAY_SEQUENCEVARCHARDnB Match Quality Display Sequenceconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/DisplaySequenceMATCH_CODEVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchCodeBEMFABVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/BEMFABMATCH_GRADEVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchGradeORGANIZATION_DETAILReltio URI: configuration/entityTypes/HCO/attributes/OrganizationDetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameORGANIZATION_DETAIL_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeMEMBER_ROLEVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/MemberRoleSTANDALONEBOOLEANconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StandaloneCONTROL_OWNERSHIP_DATEDATEconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/ControlOwnershipDateOPERATING_STATUSVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusSTART_YEARVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StartYearFRANCHISE_OPERATION_TYPEVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/FranchiseOperationTypeBONEYARD_ORGANIZATIONBOOLEANconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/BoneyardOrganizationOPERATING_STATUS_COMMENTVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusCommentDUNS_HIERARCHYReltio URI: configuration/entityTypes/HCO/attributes/DUNSHierarchyMaterialized: 
noColumnTypeDescriptionReltio Attribute URILOV NameDUNS_HIERARCHY_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeGLOBAL_ULTIMATE_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateDUNSGLOBAL_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateOrganizationDOMESTIC_ULTIMATE_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateDUNSDOMESTIC_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateOrganizationPARENT_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentDUNSPARENT_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentOrganizationHEADQUARTERS_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersDUNSHEADQUARTERS_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersOrganizationAFFILIATIONSReltio URI: configuration/relationTypes/HasHealthCareRole, configuration/relationTypes/AffiliatedPurchasing, configuration/relationTypes/Activity, configuration/relationTypes/ManagedMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_URIVARCHARReltio Relation URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagRELATION_TYPEVARCHARReltio Relation TypeSTART_ENTITY_URIVARCHARReltio Start Entity URIEND_ENTITY_URIVARCHARReltio End Entity URIREL_GROUPVARCHARHCRS relation group from the relationship type, each rel group refers to one relation idconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelGroup, 
configuration/relationTypes/Managed/attributes/RelGroupLKUP_IMS_RELGROUP_TYPEREL_ORDER_AFFILIATEDPURCHASINGVARCHAROrderconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelOrderSTATUS_REASON_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/StatusReasonCode, configuration/relationTypes/Activity/attributes/StatusReasonCode, configuration/relationTypes/Managed/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODESTATUS_UPDATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/StatusUpdateDate, configuration/relationTypes/Activity/attributes/StatusUpdateDate, configuration/relationTypes/Managed/attributes/StatusUpdateDateVALIDATION_CHANGE_REASONVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeReason, configuration/relationTypes/Activity/attributes/ValidationChangeReason, configuration/relationTypes/Managed/attributes/ValidationChangeReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeDate, configuration/relationTypes/Activity/attributes/ValidationChangeDate, configuration/relationTypes/Managed/attributes/ValidationChangeDateVALIDATION_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/ValidationStatus, configuration/relationTypes/Activity/attributes/ValidationStatus, configuration/relationTypes/Managed/attributes/ValidationStatusLKUP_IMS_VAL_STATUSAFFILIATION_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/AffiliationStatus, configuration/relationTypes/Activity/attributes/AffiliationStatus, configuration/relationTypes/Managed/attributes/AffiliationStatusLKUP_IMS_STATUSCOUNTRYVARCHARCountry Codeconfiguration/relationTypes/AffiliatedPurchasing/attributes/Country, configuration/relationTypes/Activity/attributes/Country, configuration/relationTypes/Managed/attributes/CountryLKUP_IMS_COUNTRY_CODEAFFILIATION_NAMEVARCHARAffiliation 
Nameconfiguration/relationTypes/AffiliatedPurchasing/attributes/AffiliationName, configuration/relationTypes/Activity/attributes/AffiliationNameSUBSCRIPTION_FLAG1BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag1, configuration/relationTypes/Activity/attributes/SubscriptionFlag1, configuration/relationTypes/Managed/attributes/SubscriptionFlag1SUBSCRIPTION_FLAG2BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag2, configuration/relationTypes/Activity/attributes/SubscriptionFlag2, configuration/relationTypes/Managed/attributes/SubscriptionFlag2SUBSCRIPTION_FLAG3BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag3, configuration/relationTypes/Activity/attributes/SubscriptionFlag3, configuration/relationTypes/Managed/attributes/SubscriptionFlag3SUBSCRIPTION_FLAG4BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag4, configuration/relationTypes/Activity/attributes/SubscriptionFlag4, configuration/relationTypes/Managed/attributes/SubscriptionFlag4SUBSCRIPTION_FLAG5BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag5, configuration/relationTypes/Activity/attributes/SubscriptionFlag5, configuration/relationTypes/Managed/attributes/SubscriptionFlag5SUBSCRIPTION_FLAG6BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag6, configuration/relationTypes/Activity/attributes/SubscriptionFlag6, configuration/relationTypes/Managed/attributes/SubscriptionFlag6SUBSCRIPTION_FLAG7BOOLEANUsed for setting a profile eligible for certain 
subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag7, configuration/relationTypes/Activity/attributes/SubscriptionFlag7, configuration/relationTypes/Managed/attributes/SubscriptionFlag7SUBSCRIPTION_FLAG8BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag8, configuration/relationTypes/Activity/attributes/SubscriptionFlag8, configuration/relationTypes/Managed/attributes/SubscriptionFlag8SUBSCRIPTION_FLAG9BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag9, configuration/relationTypes/Activity/attributes/SubscriptionFlag9, configuration/relationTypes/Managed/attributes/SubscriptionFlag9SUBSCRIPTION_FLAG10BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag10, configuration/relationTypes/Activity/attributes/SubscriptionFlag10, configuration/relationTypes/Managed/attributes/SubscriptionFlag10BEST_RELATIONSHIP_INDICATORVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/BestRelationshipIndicator, configuration/relationTypes/Activity/attributes/BestRelationshipIndicator, configuration/relationTypes/Managed/attributes/BestRelationshipIndicatorLKUP_IMS_YES_NORELATIONSHIP_RANKVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipRank, configuration/relationTypes/Activity/attributes/RelationshipRank, configuration/relationTypes/Managed/attributes/RelationshipRankRELATIONSHIP_VIEW_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewCode, configuration/relationTypes/Activity/attributes/RelationshipViewCode, configuration/relationTypes/Managed/attributes/RelationshipViewCodeRELATIONSHIP_VIEW_TYPE_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewTypeCode, 
configuration/relationTypes/Activity/attributes/RelationshipViewTypeCode, configuration/relationTypes/Managed/attributes/RelationshipViewTypeCodeRELATIONSHIP_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStatus, configuration/relationTypes/Activity/attributes/RelationshipStatus, configuration/relationTypes/Managed/attributes/RelationshipStatusLKUP_IMS_RELATIONSHIP_STATUSRELATIONSHIP_CREATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipCreateDate, configuration/relationTypes/Activity/attributes/RelationshipCreateDate, configuration/relationTypes/Managed/attributes/RelationshipCreateDateUPDATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/UpdateDate, configuration/relationTypes/Activity/attributes/UpdateDate, configuration/relationTypes/Managed/attributes/UpdateDateRELATIONSHIP_START_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStartDate, configuration/relationTypes/Activity/attributes/RelationshipStartDate, configuration/relationTypes/Managed/attributes/RelationshipStartDateRELATIONSHIP_END_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipEndDate, configuration/relationTypes/Activity/attributes/RelationshipEndDate, configuration/relationTypes/Managed/attributes/RelationshipEndDateCHECKED_DATEDATEconfiguration/relationTypes/Activity/attributes/CheckedDatePREFERRED_MAIL_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PreferredMailIndicatorPREFERRED_VISIT_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PreferredVisitIndicatorCOMMITTEE_MEMBERVARCHARconfiguration/relationTypes/Activity/attributes/CommitteeMemberLKUP_IMS_MEMBER_MED_COMMITTEEAPPOINTMENT_REQUIREDBOOLEANconfiguration/relationTypes/Activity/attributes/AppointmentRequiredAFFILIATION_TYPE_CODEVARCHARAffiliation Type 
Codeconfiguration/relationTypes/Activity/attributes/AffiliationTypeCodeWORKING_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/WorkingStatusLKUP_IMS_WORKING_STATUSTITLEVARCHARconfiguration/relationTypes/Activity/attributes/TitleLKUP_IMS_PROF_TITLERANKVARCHARconfiguration/relationTypes/Activity/attributes/RankPRIMARY_AFFILIATION_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PrimaryAffiliationIndicatorACT_WEBSITE_URLVARCHARconfiguration/relationTypes/Activity/attributes/ActWebsiteURLACT_VALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/ActValidationStatusLKUP_IMS_VAL_STATUSPREF_OR_ACTIVEVARCHARconfiguration/relationTypes/Activity/attributes/PrefOrActiveCOMMENTERSVARCHARCommentersconfiguration/relationTypes/Activity/attributes/CommentersREL_ORDER_MANAGEDBOOLEANOrderconfiguration/relationTypes/Managed/attributes/RelOrderPURCHASING_CLASSIFICATIONReltio URI: configuration/relationTypes/AffiliatedPurchasing/attributes/ClassificationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/NotesPURCHASING_SOURCE_DATAReltio URI: configuration/relationTypes/AffiliatedPurchasing/attributes/SourceDataMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/RankACTIVITY_PHONEReltio URI: configuration/relationTypes/Activity/attributes/ActPhoneMaterialized: 
noColumnTypeDescriptionReltio Attribute URILOV NameACT_PHONE_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URITYPE_IMSVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPENUMBERVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/NumberEXTENSIONVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/ExtensionRANKVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/RankCOUNTRY_CODEVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/CountryCodeLKUP_IMS_COUNTRY_CODEAREA_CODEVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/AreaCodeLOCAL_NUMBERVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/LocalNumberFORMATTED_NUMBERVARCHARFormatted number of the phoneconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/FormattedNumberVALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/ValidationStatusLINE_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/LineTypeFORMAT_MASKVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/FormatMaskDIGIT_COUNTVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/DigitCountGEO_AREAVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/GeoAreaGEO_COUNTRYVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/GeoCountryACTIVEBOOLEANDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/ActiveACTIVITY_PRIVACY_PREFERENCESReltio URI: configuration/relationTypes/Activity/attributes/PrivacyPreferencesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIVACY_PREFERENCES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URIPHONE_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/PhoneOptOutALLOWED_TO_CONTACTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/AllowedToContactEMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/EmailOptOutMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/MailOptOutFAX_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/FaxOptOutREMOTE_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/RemoteOptOutOPT_OUT_ONEKEYBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/OptOutOnekeyVISIT_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/VisitOptOutACTIVITY_SPECIALITIESReltio URI: configuration/relationTypes/Activity/attributes/SpecialitiesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URISPECIALTY_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyTypeLKUP_IMS_SPECIALTY_TYPESPECIALTYVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyLKUP_IMS_SPECIALTYEMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/Specialities/attributes/EmailOptOutDESCVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/DescGROUPVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/GroupSOURCE_CDVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SourceCDSPECIALTY_DETAILVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyDetailPROFESSION_CODEVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/ProfessionCodeRANKVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/RankPRIMARY_SPECIALTY_FLAGBOOLEANPrimary Specialty flag to be populated by client teams according to business rulesconfiguration/relationTypes/Activity/attributes/Specialities/attributes/PrimarySpecialtyFlagSORT_ORDERVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SortOrderBEST_RECORDVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/BestRecordSUB_SPECIALTYVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialtyLKUP_IMS_SPECIALTYSUB_SPECIALTY_RANKVARCHARSubSpecialty Rankconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialtyRankACTIVITY_IDENTIFIERSReltio URI: configuration/relationTypes/Activity/attributes/ActIdentifiersMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACT_IDENTIFIERS_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URIIDVARCHARconfiguration/relationTypes/Activity/attributes/ActIdentifiers/attributes/IDTYPEVARCHARconfiguration/relationTypes/Activity/attributes/ActIdentifiers/attributes/TypeLKUP_IMS_HCP_IDENTIFIER_TYPEORDERVARCHARDisplays the order of priority for an MPN for those facilities that share an MPN. Valid values are: P - the MPN on a business record is the primary identifier for the business and O - the MPN is a secondary identifier. (Using P for the MPN supports aggregating clinical volumes and avoids double counting).configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/OrderAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/relationTypes/Activity/attributes/ActIdentifiers/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSNATIONAL_ID_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes/ActIdentifiers/attributes/NationalIdAttributeACTIVITY_ADDITIONAL_ATTRIBUTESReltio URI: configuration/relationTypes/Activity/attributes/AdditionalAttributesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDITIONAL_ATTRIBUTES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIATTRIBUTE_NAMEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeNameATTRIBUTE_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeTypeLKUP_IMS_TYPE_CODEATTRIBUTE_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeValueATTRIBUTE_RANKVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeRankADDITIONAL_INFOVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AdditionalInfoACTIVITY_BUSINESS_HOURSReltio URI: configuration/relationTypes/Activity/attributes/BusinessHoursMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESS_HOURS_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URIDAYVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/DayPERIODVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodTIME_SLOTVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/TimeSlotSTART_TIMEVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/StartTimeEND_TIMEVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/EndTimeAPPOINTMENT_ONLYBOOLEANconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/AppointmentOnlyPERIOD_STARTVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodStartPERIOD_ENDVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodEndPERIOD_OF_DAYVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodOfDayACTIVITY_AFFILIATION_ROLEReltio URI: configuration/relationTypes/Activity/attributes/AffiliationRoleMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameAFFILIATION_ROLE_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIROLE_RANKVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleRankROLE_NAMEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleNameLKUP_IMS_ROLEROLE_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleAttributeROLE_TYPE_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleTypeAttributeROLE_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleStatusBEST_ROLE_INDICATORVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/BestRoleIndicatorACTIVITY_EMAILReltio URI: configuration/relationTypes/Activity/attributes/ActEmailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACT_EMAIL_URIVARCHARgenerated key 
descriptionRELATION_URIVARCHARReltio Relation URITYPE_IMSVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPEEMAILVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/EmailDOMAINVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/DomainDOMAIN_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/DomainTypeUSERNAMEVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/UsernameRANKVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/RankVALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/ValidationStatusACTIVEBOOLEANconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/ActiveACTIVITY_BRICKReltio URI: configuration/relationTypes/Activity/attributes/BrickMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBRICK_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URITYPEVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/TypeLKUP_IMS_BRICK_TYPEBRICK_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/BrickValueLKUP_IMS_BRICK_VALUESORT_ORDERVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/SortOrderACTIVITY_CLASSIFICATIONReltio URI: configuration/relationTypes/Activity/attributes/ClassificationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/Activity/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/Activity/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/NotesACTIVITY_SOURCE_DATAReltio URI: configuration/relationTypes/Activity/attributes/SourceDataMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes/SourceData/attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/Activity/attributes/SourceData/attributes/RankMANAGED_CLASSIFICATIONReltio URI: configuration/relationTypes/Managed/attributes/ClassificationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/Managed/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/Managed/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/NotesMANAGED_SOURCE_DATAReltio URI: configuration/relationTypes/Managed/attributes/SourceDataMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes/SourceData/attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/Managed/attributes/SourceData/attributes/Rank" + }, + { + "title": "Dynamic views for COMPANY MDM Model", + "pageID": "163917858", + "pageLink": "/display/GMDM/Dynamic+views+for+COMPANY+MDM+Model", + "content": "HCPHealth care providerReltio URI: configuration/entityTypes/HCPMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCOUNTRY_HCPVARCHARCountryconfiguration/entityTypes/HCP/attributes/CountryCOMPANY_CUST_IDVARCHARAn auto-generated unique COMPANY id assigned to an HCPconfiguration/entityTypes/HCP/attributes/COMPANYCustIDPREFIXVARCHARPrefix added before the name, e.g., Mr, Ms, Drconfiguration/entityTypes/HCP/attributes/PrefixHCPPrefixNAMEVARCHARNameconfiguration/entityTypes/HCP/attributes/NameFIRST_NAMEVARCHARFirst Nameconfiguration/entityTypes/HCP/attributes/FirstNameLAST_NAMEVARCHARLast Nameconfiguration/entityTypes/HCP/attributes/LastNameMIDDLE_NAMEVARCHARMiddle Nameconfiguration/entityTypes/HCP/attributes/MiddleNameCLEANSED_MIDDLE_NAMEVARCHARMiddle Nameconfiguration/entityTypes/HCP/attributes/CleansedMiddleNameSTATUSVARCHARStatus, e.g., Active or Inactiveconfiguration/entityTypes/HCP/attributes/StatusHCPStatusSTATUS_DETAILVARCHARDeactivation reasonconfiguration/entityTypes/HCP/attributes/StatusDetailHCPStatusDetailDEACTIVATION_CODEVARCHARDeactivation reasonconfiguration/entityTypes/HCP/attributes/DeactivationCodeHCPDeactivationReasonCodeSUFFIX_NAMEVARCHARGeneration Suffixconfiguration/entityTypes/HCP/attributes/SuffixNameSuffixNameGENDERVARCHARGenderconfiguration/entityTypes/HCP/attributes/GenderGenderNICKNAMEVARCHARNicknameconfiguration/entityTypes/HCP/attributes/NicknamePREFERRED_NAMEVARCHARPreferred Nameconfiguration/entityTypes/HCP/attributes/PreferredNameFORMATTED_NAMEVARCHARFormatted Nameconfiguration/entityTypes/HCP/attributes/FormattedNameTYPE_CODEVARCHARHCP Type Codeconfiguration/entityTypes/HCP/attributes/TypeCodeHCPTypeSUB_TYPE_CODEVARCHARHCP SubType Codeconfiguration/entityTypes/HCP/attributes/SubTypeCodeHCPSubTypeCodeIS_COMPANY_APPROVED_SPEAKERBOOLEANIs COMPANY Approved Speakerconfiguration/entityTypes/HCP/attributes/IsCOMPANYApprovedSpeakerSPEAKER_LAST_BRIEFING_DATEDATELast Briefing 
Dateconfiguration/entityTypes/HCP/attributes/SpeakerLastBriefingDateSPEAKER_TYPEVARCHARSpeaker typeconfiguration/entityTypes/HCP/attributes/SpeakerTypeSPEAKER_STATUSVARCHARSpeaker Statusconfiguration/entityTypes/HCP/attributes/SpeakerStatusHCPSpeakerStatusSPEAKER_LEVELVARCHARSpeaker Levelconfiguration/entityTypes/HCP/attributes/SpeakerLevelSPEAKER_EFFECTIVE_DATEDATESpeaker Effective Dateconfiguration/entityTypes/HCP/attributes/SpeakerEffectiveDateSPEAKER_DEACTIVATE_REASONVARCHARSpeaker Deactivate Reasonconfiguration/entityTypes/HCP/attributes/SpeakerDeactivateReasonDELETION_DATEDATEDeletion Dateconfiguration/entityTypes/HCP/attributes/DeletionDateACCOUNT_BLOCKEDBOOLEANIndicator of account blocked or notconfiguration/entityTypes/HCP/attributes/AccountBlockedY_O_BVARCHARBirth Yearconfiguration/entityTypes/HCP/attributes/YoBD_O_DDATEconfiguration/entityTypes/HCP/attributes/DoDY_O_DVARCHARconfiguration/entityTypes/HCP/attributes/YoDTERRITORY_NUMBERVARCHARTerritory Numberconfiguration/entityTypes/HCP/attributes/TerritoryNumberWEBSITE_URLVARCHARWebsite URLconfiguration/entityTypes/HCP/attributes/WebsiteURLTITLEVARCHARTitle of HCPconfiguration/entityTypes/HCP/attributes/TitleHCPTitleEFFECTIVE_END_DATEDATEconfiguration/entityTypes/HCP/attributes/EffectiveEndDateCOMPANY_WATCH_INDBOOLEANCOMPANY Watch Indconfiguration/entityTypes/HCP/attributes/COMPANYWatchIndKOL_STATUSBOOLEANKOL Statusconfiguration/entityTypes/HCP/attributes/KOLStatusTHIRD_PARTY_DECILVARCHARThird Party Decilconfiguration/entityTypes/HCP/attributes/ThirdPartyDecilFEDERAL_EMP_LETTER_DATEDATEFederal Emp Letter Dateconfiguration/entityTypes/HCP/attributes/FederalEmpLetterDateMARKETING_CONTRACT_CODEVARCHARMarketing Contract Codeconfiguration/entityTypes/HCP/attributes/MarketingContractCodeCURRICULUM_VITAE_LINKVARCHARCurriculum Vitae Linkconfiguration/entityTypes/HCP/attributes/CurriculumVitaeLinkSPEAKER_TRAVEL_INDICATORVARCHARSpeaker Travel 
Indicatorconfiguration/entityTypes/HCP/attributes/SpeakerTravelIndicatorSPEAKER_INFOVARCHARSpeaker Informationconfiguration/entityTypes/HCP/attributes/SpeakerInfoDEGREEVARCHARDegree Informationconfiguration/entityTypes/HCP/attributes/DegreePRESENT_EMPLOYMENTVARCHARPresent Employmentconfiguration/entityTypes/HCP/attributes/PresentEmploymentPE_CDEMPLOYMENT_TYPE_CODEVARCHAREmployment Type Codeconfiguration/entityTypes/HCP/attributes/EmploymentTypeCodeEMPLOYMENT_TYPE_DESCVARCHAREmployment Type Descriptionconfiguration/entityTypes/HCP/attributes/EmploymentTypeDescTYPE_OF_PRACTICEVARCHARType Of Practiceconfiguration/entityTypes/HCP/attributes/TypeOfPracticeTOP_CDTYPE_OF_PRACTICE_DESCVARCHARType Of Practice Descriptionconfiguration/entityTypes/HCP/attributes/TypeOfPracticeDescSCHOOL_SEQ_NUMBERVARCHARSchool Sequence Numberconfiguration/entityTypes/HCP/attributes/SchoolSeqNumberMRM_DELETE_FLAGBOOLEANMRM Delete Flagconfiguration/entityTypes/HCP/attributes/MRMDeleteFlagMRM_DELETE_DATEDATEMRM Delete Dateconfiguration/entityTypes/HCP/attributes/MRMDeleteDateCNCY_DATEDATECNCY Dateconfiguration/entityTypes/HCP/attributes/CNCYDateAMA_HOSPITALVARCHARAMA Hospital Infoconfiguration/entityTypes/HCP/attributes/AMAHospitalAMA_HOSPITAL_DESCVARCHARAMA Hospital Descconfiguration/entityTypes/HCP/attributes/AMAHospitalDescPRACTISE_AT_HOSPITALVARCHARPractise At Hospitalconfiguration/entityTypes/HCP/attributes/PractiseAtHospitalSEGMENT_IDVARCHARSegment IDconfiguration/entityTypes/HCP/attributes/SegmentIDSEGMENT_DESCVARCHARSegment Descconfiguration/entityTypes/HCP/attributes/SegmentDescDCR_STATUSVARCHARStatus of HCP profileconfiguration/entityTypes/HCP/attributes/DCRStatusDCRStatusPREFERRED_LANGUAGEVARCHARLanguage preferenceconfiguration/entityTypes/HCP/attributes/PreferredLanguageSOURCE_TYPEVARCHARType of the sourceconfiguration/entityTypes/HCP/attributes/SourceTypeSTATE_UPDATE_DATEDATEUpdate date of stateconfiguration/entityTypes/HCP/attributes/StateUpdateDateSOURCE_UPDATE_DATEDATEUpdate date 
at sourceconfiguration/entityTypes/HCP/attributes/SourceUpdateDateCOMMENTERSVARCHARCommentersconfiguration/entityTypes/HCP/attributes/CommentersIMAGE_GALLERYVARCHARconfiguration/entityTypes/HCP/attributes/ImageGalleryBIRTH_CITYVARCHARBirth Cityconfiguration/entityTypes/HCP/attributes/BirthCityBIRTH_STATEVARCHARBirth Stateconfiguration/entityTypes/HCP/attributes/BirthStateStateBIRTH_COUNTRYVARCHARBirth Countryconfiguration/entityTypes/HCP/attributes/BirthCountryCountryD_O_BDATEDate of Birthconfiguration/entityTypes/HCP/attributes/DoBORIGINAL_SOURCE_NAMEVARCHAROriginal Source Nameconfiguration/entityTypes/HCP/attributes/OriginalSourceNameSOURCE_MATCH_CATEGORYVARCHARSource Match Categoryconfiguration/entityTypes/HCP/attributes/SourceMatchCategoryALTERNATE_NAMEReltio URI: configuration/entityTypes/HCP/attributes/AlternateNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameALTERNATE_NAME_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAME_TYPE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/NameTypeCodeHCPAlternateNameTypeFULL_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/FullNameFIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/FirstNameMIDDLE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleNameLAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/LastNameVERSIONVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/VersionADDRESSESReltio URI: configuration/entityTypes/HCP/attributes/Addresses, configuration/entityTypes/HCO/attributes/Addresses, configuration/entityTypes/MCO/attributes/AddressesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESSES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive 
FlagENTITY_TYPEVARCHARReltio Entity TypeADDRESS_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressType, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressType, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressTypeAddressTypeCOMPANY_ADDRESS_IDVARCHARCOMPANY Address IDconfiguration/entityTypes/HCP/attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes/MCO/attributes/Addresses/attributes/COMPANYAddressIDADDRESS_LINE1VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine1, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine2, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine2ADDRESS_LINE3VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine3, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine3, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine3ADDRESS_LINE4VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine4, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine4, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine4CITYVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/City, configuration/entityTypes/HCO/attributes/Addresses/attributes/City, configuration/entityTypes/MCO/attributes/Addresses/attributes/CitySTATE_PROVINCEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/StateProvince, configuration/entityTypes/HCO/attributes/Addresses/attributes/StateProvince, 
configuration/entityTypes/MCO/attributes/Addresses/attributes/StateProvinceStateCOUNTRY_ADDRESSESVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Country, configuration/entityTypes/HCO/attributes/Addresses/attributes/Country, configuration/entityTypes/MCO/attributes/Addresses/attributes/CountryCountryPO_BOXVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/POBox, configuration/entityTypes/HCO/attributes/Addresses/attributes/POBox, configuration/entityTypes/MCO/attributes/Addresses/attributes/POBoxZIP5VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Zip5, configuration/entityTypes/HCO/attributes/Addresses/attributes/Zip5, configuration/entityTypes/MCO/attributes/Addresses/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Zip4, configuration/entityTypes/HCO/attributes/Addresses/attributes/Zip4, configuration/entityTypes/MCO/attributes/Addresses/attributes/Zip4STREETVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Street, configuration/entityTypes/HCO/attributes/Addresses/attributes/Street, configuration/entityTypes/MCO/attributes/Addresses/attributes/StreetPOSTAL_CODE_EXTENSIONVARCHARPostal Code Extensionconfiguration/entityTypes/HCP/attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes/HCO/attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes/MCO/attributes/Addresses/attributes/PostalCodeExtensionADDRESS_USAGE_TAGVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressUsageTag, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressUsageTagAddressUsageTagCNCY_DATEDATECNCY Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/CNCYDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/CNCYDateCBSA_CODEVARCHARCore Based Statistical Areaconfiguration/entityTypes/HCP/attributes/Addresses/attributes/CBSACode, 
configuration/entityTypes/HCO/attributes/Addresses/attributes/CBSACode, configuration/entityTypes/MCO/attributes/Addresses/attributes/CBSACodePREMISEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Premise, configuration/entityTypes/HCO/attributes/Addresses/attributes/PremiseISO3166-2VARCHARThis field holds the ISO 3166 2-character country code.configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-2ISO3166-3VARCHARThis field holds the ISO 3166 3-character country code.configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-3ISO3166-NVARCHARThis field holds the ISO 3166 N-digit numeric country code.configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-NLATITUDEVARCHARLatitudeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Latitude, configuration/entityTypes/HCO/attributes/Addresses/attributes/Latitude, configuration/entityTypes/MCO/attributes/Addresses/attributes/LatitudeLONGITUDEVARCHARLongitudeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Longitude, configuration/entityTypes/HCO/attributes/Addresses/attributes/Longitude, configuration/entityTypes/MCO/attributes/Addresses/attributes/LongitudeGEO_ACCURACYVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/GeoAccuracy, configuration/entityTypes/HCO/attributes/Addresses/attributes/GeoAccuracy, configuration/entityTypes/MCO/attributes/Addresses/attributes/GeoAccuracyVERIFICATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus, 
configuration/entityTypes/HCO/attributes/Addresses/attributes/VerificationStatus, configuration/entityTypes/MCO/attributes/Addresses/attributes/VerificationStatusVERIFICATION_STATUS_DETAILSVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes/HCO/attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes/MCO/attributes/Addresses/attributes/VerificationStatusDetailsAVCVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AVC, configuration/entityTypes/HCO/attributes/Addresses/attributes/AVC, configuration/entityTypes/MCO/attributes/Addresses/attributes/AVCSETTING_TYPEVARCHARSetting Typeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/SettingType, configuration/entityTypes/HCO/attributes/Addresses/attributes/SettingTypeADDRESS_SETTING_TYPE_DESCVARCHARAddress Setting Type Descconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressSettingTypeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressSettingTypeDescCATEGORYVARCHARCategoryconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Category, configuration/entityTypes/HCO/attributes/Addresses/attributes/CategoryAddressCategoryFIPS_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCodeFIPS_COUNTY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCountyCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCountyCodeFIPS_COUNTY_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCountyCodeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCountyCodeDescFIPS_STATE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSStateCode, 
configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSStateCodeFIPS_STATE_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSStateCodeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSStateCodeDescCARE_OFVARCHARCare Ofconfiguration/entityTypes/HCP/attributes/Addresses/attributes/CareOf, configuration/entityTypes/HCO/attributes/Addresses/attributes/CareOfMAIN_PHYSICAL_OFFICEVARCHARMain Physical Officeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/MainPhysicalOffice, configuration/entityTypes/HCO/attributes/Addresses/attributes/MainPhysicalOfficeDELIVERABILITY_CONFIDENCEVARCHARDeliverability Confidenceconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DeliverabilityConfidence, configuration/entityTypes/HCO/attributes/Addresses/attributes/DeliverabilityConfidenceAPPLIDVARCHARAPPLIDconfiguration/entityTypes/HCP/attributes/Addresses/attributes/APPLID, configuration/entityTypes/HCO/attributes/Addresses/attributes/APPLIDSMPLDLV_INDBOOLEANSMPLDLV Indconfiguration/entityTypes/HCP/attributes/Addresses/attributes/SMPLDLVInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/SMPLDLVIndSTATUSVARCHARStatusconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Status, configuration/entityTypes/HCO/attributes/Addresses/attributes/StatusAddressStatusSTARTER_ELIGIBLE_FLAGVARCHARStarterEligibleFlagconfiguration/entityTypes/HCP/attributes/Addresses/attributes/StarterEligibleFlag, configuration/entityTypes/HCO/attributes/Addresses/attributes/StarterEligibleFlagDEA_FLAGBOOLEANDEA Flagconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEAFlag, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEAFlagUSAGE_TYPEVARCHARUsage Typeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/UsageType, configuration/entityTypes/HCO/attributes/Addresses/attributes/UsageTypePRIMARYBOOLEANPrimary 
Addressconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Primary, configuration/entityTypes/HCO/attributes/Addresses/attributes/PrimaryEFFECTIVE_START_DATEDATEEffective Start Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/EffectiveStartDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEEffective End Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/EffectiveEndDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/EffectiveEndDateADDRESS_RANKVARCHARAddress Rank for priorityconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressRank, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressRankSOURCE_SEGMENT_CODEVARCHARSource Segment Codeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/SourceSegmentCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/SourceSegmentCodeSEGMENT1VARCHARSegment1configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment1, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment1SEGMENT2VARCHARSegment2configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment2, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment2SEGMENT3VARCHARSegment3configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment3, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment3ADDRESS_INDBOOLEANAddressIndconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressIndSCRIPT_UTILIZATION_WEIGHTVARCHARScript Utilization Weightconfiguration/entityTypes/HCP/attributes/Addresses/attributes/ScriptUtilizationWeight, configuration/entityTypes/HCO/attributes/Addresses/attributes/ScriptUtilizationWeightBUSINESS_ACTIVITY_CODEVARCHARBusiness Activity 
Code | configuration/entityTypes/HCP/attributes/Addresses/attributes/BusinessActivityCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/BusinessActivityCode
BUSINESS_ACTIVITY_DESC | VARCHAR | Business Activity Desc | configuration/entityTypes/HCP/attributes/Addresses/attributes/BusinessActivityDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/BusinessActivityDesc
PRACTICE_LOCATION_RANK | VARCHAR | Practice Location Rank | configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationRank | PracticeLocationRank
PRACTICE_LOCATION_CONFIDENCE_IND | VARCHAR | Practice Location Confidence Ind | configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationConfidenceInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationConfidenceInd
PRACTICE_LOCATION_CONFIDENCE_DESC | VARCHAR | Practice Location Confidence Desc | configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationConfidenceDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationConfidenceDesc
SINGLE_ADDRESS_IND | BOOLEAN | Single Address Ind | configuration/entityTypes/HCP/attributes/Addresses/attributes/SingleAddressInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/SingleAddressInd
SUB_ADMINISTRATIVE_AREA | VARCHAR | This field holds the smallest geographic data element within a country. For instance, USA County. | configuration/entityTypes/HCP/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes/HCO/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes/MCO/attributes/Addresses/attributes/SubAdministrativeArea
SUPER_ADMINISTRATIVE_AREA | VARCHAR | This field holds the largest geographic data element within a country. | configuration/entityTypes/HCO/attributes/Addresses/attributes/SuperAdministrativeArea
ADMINISTRATIVE_AREA | VARCHAR | This field holds the most common geographic data element within a country. For instance, USA State, and Canadian Province. | configuration/entityTypes/HCO/attributes/Addresses/attributes/AdministrativeArea
UNIT_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/Addresses/attributes/UnitName
UNIT_VALUE | VARCHAR | | configuration/entityTypes/HCO/attributes/Addresses/attributes/UnitValue
FLOOR | VARCHAR | N/A | configuration/entityTypes/HCO/attributes/Addresses/attributes/Floor
BUILDING | VARCHAR | N/A | configuration/entityTypes/HCO/attributes/Addresses/attributes/Building
SUB_BUILDING | VARCHAR | | configuration/entityTypes/HCO/attributes/Addresses/attributes/SubBuilding
NEIGHBORHOOD | VARCHAR | | configuration/entityTypes/HCO/attributes/Addresses/attributes/Neighborhood
PREMISE_NUMBER | VARCHAR | | configuration/entityTypes/HCO/attributes/Addresses/attributes/PremiseNumber

ADDRESSES_SOURCE (Source)
Reltio URI: configuration/entityTypes/HCP/attributes/Addresses/attributes/Source, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESSES_URI | VARCHAR | Generated Key
SOURCE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SOURCE_NAME | VARCHAR | SourceName | configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceName
SOURCE_RANK | VARCHAR | SourceRank | configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceRank
SOURCE_ADDRESS_ID | VARCHAR | Source Address ID | configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceAddressID
LEGACY_IQVIA_ADDRESS_ID | VARCHAR | Legacy address id | configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressID

ADDRESSES_DEA (DEA)
Reltio URI: configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESSES_URI | VARCHAR | Generated Key
DEA_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
NUMBER | VARCHAR | Number | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Number, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/Number
EXPIRATION_DATE | DATE | Expiration Date | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/ExpirationDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/ExpirationDate
STATUS | VARCHAR | Status | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/Status | AddressDEAStatus
STATUS_DETAIL | VARCHAR | Deactivation Reason Code | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDetail, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDetail | HCPDEAStatusDetail
DRUG_SCHEDULE | VARCHAR | Drug Schedule | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DrugSchedule | App-LSCustomer360DEADrugSchedule
EFFECTIVE_DATE | DATE | Effective Date | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/EffectiveDate
STATUS_DATE | DATE | Status Date | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDate
DEA_BUSINESS_ACTIVITY | VARCHAR | Business Activity | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity | DEABusinessActivity
SUB_BUSINESS_ACTIVITY | VARCHAR | Sub Business Activity | configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity | DEABusinessSubActivity
BUSINESS_ACTIVITY_DESC | VARCHAR | Business Activity Desc | configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/BusinessActivityDesc
SUB_BUSINESS_ACTIVITY_DESC | VARCHAR | Sub Business Activity Desc | configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivityDesc

ADDRESSES_OFFICE_INFORMATION
Reltio URI: configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESSES_URI | VARCHAR | Generated Key
OFFICE_INFORMATION_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
BEST_TIMES | VARCHAR | Best Times | configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/BestTimes, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/BestTimes
APPT_REQUIRED | BOOLEAN | Appointment Required or not | configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequired, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequired
OFFICE_NOTES | VARCHAR | Office Notes | configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotes, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotes

COMPLIANCE (Compliance)
Reltio URI: configuration/entityTypes/HCP/attributes/Compliance
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
COMPLIANCE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
GO_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/GOStatus | HCPComplianceGOStatus
PIGO_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/PIGOStatus | HCPPIGOStatus
NIPPIGO_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/NIPPIGOStatus | HCPNIPPIGOStatus
PRIMARY_PIGO_RATIONALE | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/PrimaryPIGORationale | HCPPIGORationale
SECONDARY_PIGO_RATIONALE | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/SecondaryPIGORationale | HCPPIGORationale
PIGOSME_REVIEW | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/PIGOSMEReview | HCPPIGOSMEReview
GSQ_DATE | DATE | | configuration/entityTypes/HCP/attributes/Compliance/attributes/GSQDate
DO_NOT_USE | BOOLEAN | | configuration/entityTypes/HCP/attributes/Compliance/attributes/DoNotUse
CHANGE_DATE | DATE | | configuration/entityTypes/HCP/attributes/Compliance/attributes/ChangeDate
CHANGE_REASON | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/ChangeReason
MAPPHCP_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/MAPPHCPStatus
MAPP_MAIL | VARCHAR | | configuration/entityTypes/HCP/attributes/Compliance/attributes/MAPPMail

DISCLOSURE (Disclosure)
Reltio URI: configuration/entityTypes/HCP/attributes/Disclosure
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DISCLOSURE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
BENEFIT_CATEGORY | VARCHAR | Benefit Category | configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitCategory | HCPBenefitCategory
BENEFIT_TITLE | VARCHAR | Benefit Title | configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitTitle | HCPBenefitTitle
BENEFIT_QUALITY | VARCHAR | Benefit Quality | configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitQuality | HCPBenefitQuality
BENEFIT_SPECIALTY | VARCHAR | Benefit Specialty | configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitSpecialty | HCPBenefitSpecialty
CONTRACT_CLASSIFICATION | VARCHAR | Contract Classification | configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassification
CONTRACT_CLASSIFICATION_DATE | DATE | Contract Classification Date | configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationDate
MILITARY | BOOLEAN | Military | configuration/entityTypes/HCP/attributes/Disclosure/attributes/Military
CIVIL_SERVANT | BOOLEAN | Civil Servant | configuration/entityTypes/HCP/attributes/Disclosure/attributes/CivilServant

CREDENTIAL (Credential Information)
Reltio URI: configuration/entityTypes/HCP/attributes/Credential
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CREDENTIAL_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
CREDENTIAL | VARCHAR | | configuration/entityTypes/HCP/attributes/Credential/attributes/Credential | Credential
OTHER_CDTL_TXT | VARCHAR | Other Credential Text | configuration/entityTypes/HCP/attributes/Credential/attributes/OtherCdtlTxt
PRIMARY_FLAG | BOOLEAN | Primary Flag | configuration/entityTypes/HCP/attributes/Credential/attributes/PrimaryFlag
EFFECTIVE_END_DATE | DATE | Effective End Date | configuration/entityTypes/HCP/attributes/Credential/attributes/EffectiveEndDate

PROFESSION (Profession Information)
Reltio URI: configuration/entityTypes/HCP/attributes/Profession
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
PROFESSION_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
PROFESSION | VARCHAR | | configuration/entityTypes/HCP/attributes/Profession/attributes/Profession | HCPSpecialtyProfession

PROFESSION_SOURCE (Source)
Reltio URI: configuration/entityTypes/HCP/attributes/Profession/attributes/Source
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
PROFESSION_URI | VARCHAR | Generated Key
SOURCE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SOURCE_NAME | VARCHAR | SourceName | configuration/entityTypes/HCP/attributes/Profession/attributes/Source/attributes/SourceName
SOURCE_RANK | VARCHAR | SourceRank | configuration/entityTypes/HCP/attributes/Profession/attributes/Source/attributes/SourceRank

SPECIALITIES
Reltio URI: configuration/entityTypes/HCP/attributes/Specialities, configuration/entityTypes/HCO/attributes/Specialities
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SPECIALITIES_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SPECIALTY | VARCHAR | Specialty of the entity, e.g., Adult Congenital Heart Disease | configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty | HCPSpecialty, App-LSCustomer360Specialty
PROFESSION | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/Profession | HCPSpecialtyProfession
PRIMARY | BOOLEAN | Whether Primary Specialty or not | configuration/entityTypes/HCP/attributes/Specialities/attributes/Primary, configuration/entityTypes/HCO/attributes/Specialities/attributes/Primary
RANK | VARCHAR | Rank | configuration/entityTypes/HCP/attributes/Specialities/attributes/Rank
TRUST_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/TrustIndicator
DESC | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Specialities/attributes/Desc
SPECIALTY_TYPE | VARCHAR | Type of Specialty, e.g. Secondary | configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyType | App-LSCustomer360SpecialtyType
GROUP | VARCHAR | Group, Specialty belongs to | configuration/entityTypes/HCO/attributes/Specialities/attributes/Group
SPECIALTY_DETAIL | VARCHAR | Description of Specialty | configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyDetail

SPECIALITIES_SOURCE
Reltio URI: configuration/entityTypes/HCP/attributes/Specialities/attributes/Source
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SPECIALITIES_URI | VARCHAR | Generated Key
SOURCE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SOURCE_NAME | VARCHAR | SourceName | configuration/entityTypes/HCP/attributes/Specialities/attributes/Source/attributes/SourceName
SOURCE_RANK | VARCHAR | Rank | configuration/entityTypes/HCP/attributes/Specialities/attributes/Source/attributes/SourceRank

SUB_SPECIALITIES
Reltio URI: configuration/entityTypes/HCP/attributes/SubSpecialities
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SUB_SPECIALITIES_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SPECIALTY_CODE | VARCHAR | Sub specialty code of the entity | configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/SpecialtyCode
SUB_SPECIALTY | VARCHAR | Sub specialty of the entity | configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/SubSpecialty
PROFESSION_CODE | VARCHAR | Profession Code | configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/ProfessionCode

SUB_SPECIALITIES_SOURCE
Reltio URI: configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SUB_SPECIALITIES_URI | VARCHAR | Generated Key
SOURCE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SOURCE_NAME | VARCHAR | SourceName | configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source/attributes/SourceName
SOURCE_RANK | VARCHAR | Rank | configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source/attributes/SourceRank

EDUCATION
Reltio URI: configuration/entityTypes/HCP/attributes/Education
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EDUCATION_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SCHOOL_CD | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/SchoolCD
SCHOOL_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/SchoolName
YEAR_OF_GRADUATION | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduation
STATE | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/State
COUNTRY_EDUCATION | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/Country
TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/Type
GPA | VARCHAR | | configuration/entityTypes/HCP/attributes/Education/attributes/GPA
GRADUATED | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/Graduated

EMAIL
Reltio URI: configuration/entityTypes/HCP/attributes/Email, configuration/entityTypes/HCO/attributes/Email, configuration/entityTypes/MCO/attributes/Email
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EMAIL_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE | VARCHAR | Type of Email, e.g., Home | configuration/entityTypes/HCP/attributes/Email/attributes/Type, configuration/entityTypes/HCO/attributes/Email/attributes/Type, configuration/entityTypes/MCO/attributes/Email/attributes/Type | EmailType
EMAIL | VARCHAR | Email address | configuration/entityTypes/HCP/attributes/Email/attributes/Email, configuration/entityTypes/HCO/attributes/Email/attributes/Email, configuration/entityTypes/MCO/attributes/Email/attributes/Email
RANK | VARCHAR | Rank used to assign priority to an Email | configuration/entityTypes/HCP/attributes/Email/attributes/Rank, configuration/entityTypes/HCO/attributes/Email/attributes/Rank, configuration/entityTypes/MCO/attributes/Email/attributes/Rank
EMAIL_USAGE_TAG | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/EmailUsageTag, configuration/entityTypes/HCO/attributes/Email/attributes/EmailUsageTag, configuration/entityTypes/MCO/attributes/Email/attributes/EmailUsageTag | EmailUsageTag
USAGE_TYPE | VARCHAR | Usage Type of an Email | configuration/entityTypes/HCP/attributes/Email/attributes/UsageType, configuration/entityTypes/HCO/attributes/Email/attributes/UsageType, configuration/entityTypes/MCO/attributes/Email/attributes/UsageType
DOMAIN | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/Domain, configuration/entityTypes/HCO/attributes/Email/attributes/Domain, configuration/entityTypes/MCO/attributes/Email/attributes/Domain
VALIDATION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/MCO/attributes/Email/attributes/ValidationStatus
DOMAIN_TYPE | VARCHAR | Status of Email | configuration/entityTypes/HCO/attributes/Email/attributes/DomainType, configuration/entityTypes/MCO/attributes/Email/attributes/DomainType
USERNAME | VARCHAR | Domain on which Email is created | configuration/entityTypes/HCO/attributes/Email/attributes/Username, configuration/entityTypes/MCO/attributes/Email/attributes/Username

EMAIL_SOURCE (Source)
Reltio URI: configuration/entityTypes/HCP/attributes/Email/attributes/Source, configuration/entityTypes/HCO/attributes/Email/attributes/Source, configuration/entityTypes/MCO/attributes/Email/attributes/Source
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EMAIL_URI | VARCHAR | Generated Key
SOURCE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SOURCE_NAME | VARCHAR | SourceName | configuration/entityTypes/HCP/attributes/Email/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Email/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Email/attributes/Source/attributes/SourceName
SOURCE_RANK | VARCHAR | SourceRank | configuration/entityTypes/HCP/attributes/Email/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Email/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Email/attributes/Source/attributes/SourceRank

IDENTIFIERS
Reltio URI: configuration/entityTypes/HCP/attributes/Identifiers, configuration/entityTypes/HCO/attributes/Identifiers, configuration/entityTypes/MCO/attributes/Identifiers
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
IDENTIFIERS_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE | VARCHAR | Identifier Type | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Type, configuration/entityTypes/MCO/attributes/Identifiers/attributes/Type | HCPIdentifierType, HCOIdentifierType
ID | VARCHAR | Identifier ID | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ID, configuration/entityTypes/MCO/attributes/Identifiers/attributes/ID
EXTL_DATE | DATE | External Date | configuration/entityTypes/HCP/attributes/Identifiers/attributes/EXTLDate
ACTIVATION_DATE | DATE | Activation Date | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ActivationDate
REFER_BACK_ID_STATUS | VARCHAR | Status | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ReferBackIDStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ReferBackIDStatus
DEACTIVATION_DATE | DATE | Identifier Deactivation Date | configuration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationDate
STATE | VARCHAR | Identifier State | configuration/entityTypes/HCP/attributes/Identifiers/attributes/State | State
SOURCE_NAME | VARCHAR | Name of the Identifier source | configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceName, configuration/entityTypes/HCO/attributes/Identifiers/attributes/SourceName, configuration/entityTypes/MCO/attributes/Identifiers/attributes/SourceName
TRUST | VARCHAR | Trust | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Trust, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Trust, configuration/entityTypes/MCO/attributes/Identifiers/attributes/Trust
SOURCE_START_DATE | DATE | Start date at source | configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceStartDate
SOURCE_UPDATE_DATE | DATE | Update date at source | configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceUpdateDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/SourceUpdateDate
STATUS | VARCHAR | Status | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Status, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Status | HCPIdentifierStatus, HCOIdentifierStatus
STATUS_DETAIL | VARCHAR | Identifier Deactivation Reason Code | configuration/entityTypes/HCP/attributes/Identifiers/attributes/StatusDetail, configuration/entityTypes/HCO/attributes/Identifiers/attributes/StatusDetail | HCPIdentifierStatusDetail, HCOIdentifierStatusDetail
DRUG_SCHEDULE | VARCHAR | Status | configuration/entityTypes/HCP/attributes/Identifiers/attributes/DrugSchedule
TAXONOMY | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Taxonomy
SEQUENCE_NUMBER | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/SequenceNumber
MCRPE_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPECode
MCRPE_START_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEStartDate
MCRPE_END_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEEndDate
MCRPE_IS_OPTED | BOOLEAN | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEIsOpted
EXPIRATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ExpirationDate
ORDER | VARCHAR | Order | configuration/entityTypes/HCO/attributes/Identifiers/attributes/Order
REASON | VARCHAR | Reason | configuration/entityTypes/HCO/attributes/Identifiers/attributes/Reason
START_DATE | DATE | Identifier Start Date | configuration/entityTypes/HCO/attributes/Identifiers/attributes/StartDate
END_DATE | DATE | Identifier End Date | configuration/entityTypes/HCO/attributes/Identifiers/attributes/EndDate

DATA_QUALITY
Reltio URI: configuration/entityTypes/HCP/attributes/DataQuality, configuration/entityTypes/HCO/attributes/DataQuality, configuration/entityTypes/MCO/attributes/DataQuality
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DATA_QUALITY_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
DQ_DESCRIPTION | VARCHAR | DQ Description | configuration/entityTypes/HCP/attributes/DataQuality/attributes/DQDescription, configuration/entityTypes/HCO/attributes/DataQuality/attributes/DQDescription, configuration/entityTypes/MCO/attributes/DataQuality/attributes/DQDescription | DQDescription

LICENSE
Reltio URI: configuration/entityTypes/HCP/attributes/License, configuration/entityTypes/HCO/attributes/License
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
LICENSE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
CATEGORY | VARCHAR | Category License belongs to, e.g., International | configuration/entityTypes/HCP/attributes/License/attributes/Category
PROFESSION_CODE | VARCHAR | Profession Information | configuration/entityTypes/HCP/attributes/License/attributes/ProfessionCode | HCPProfession
NUMBER | VARCHAR | State License Number. A unique license number is listed for each license the physician holds. There is no standard format syntax. Format examples: 18986, 4301079019, BX1464089. There is also no limit to the number of licenses a physician can hold in a state. Example: A physician can have an inactive resident license plus unlimited active licenses. Residents can have as many as four licenses since some states issue licenses every year. | configuration/entityTypes/HCP/attributes/License/attributes/Number, configuration/entityTypes/HCO/attributes/License/attributes/Number
REG_AUTH_ID | VARCHAR | RegAuthID | configuration/entityTypes/HCP/attributes/License/attributes/RegAuthID
STATE_BOARD | VARCHAR | State Board | configuration/entityTypes/HCP/attributes/License/attributes/StateBoard
STATE_BOARD_NAME | VARCHAR | State Board Name | configuration/entityTypes/HCP/attributes/License/attributes/StateBoardName
STATE | VARCHAR | State License State. Two character field. USPS standard abbreviations. | configuration/entityTypes/HCP/attributes/License/attributes/State, configuration/entityTypes/HCO/attributes/License/attributes/State
TYPE | VARCHAR | State License Type. U = Unlimited: there is no restriction on the physician to practice medicine; L = Limited: implies restrictions of some sort. For example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license in state or private hospitals or other settings; T = Temporary: issued to a physician temporarily practicing in an underserved area outside his/her state of licensure. Also granted between board meetings when new licenses are issued. Time span for a temporary license varies from state to state. Temporary licenses typically expire 6-9 months from the date they are issued; R = Resident: license granted to a physician in graduate medical education (e.g., residency training). | configuration/entityTypes/HCP/attributes/License/attributes/Type | ST_LIC_TYPE
STATUS | VARCHAR | State License Status. A = Active: physician is licensed to practice within the state; I = Inactive: if the physician has not reregistered a state license OR if the license has been suspended or revoked by the state board; X = Unknown: if the state has not provided current information. Note: Some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in another state. | configuration/entityTypes/HCP/attributes/License/attributes/Status | HCPLicenseStatus
STATUS_DETAIL | VARCHAR | Deactivation Reason Code | configuration/entityTypes/HCP/attributes/License/attributes/StatusDetail | HCPLicenseStatusDetail
TRUST | VARCHAR | Trust flag | configuration/entityTypes/HCP/attributes/License/attributes/Trust
DEACTIVATION_REASON_CODE | VARCHAR | Deactivation Reason Code | configuration/entityTypes/HCP/attributes/License/attributes/DeactivationReasonCode | HCPLicenseDeactivationReasonCode
EXPIRATION_DATE | DATE | License Expiration Date | configuration/entityTypes/HCP/attributes/License/attributes/ExpirationDate
ISSUE_DATE | DATE | State License Issue Date | configuration/entityTypes/HCP/attributes/License/attributes/IssueDate
STATE_LICENSE_PRIVILEGE | VARCHAR | State License Privilege | configuration/entityTypes/HCP/attributes/License/attributes/StateLicensePrivilege
STATE_LICENSE_PRIVILEGE_NAME | VARCHAR | State License Privilege Name | configuration/entityTypes/HCP/attributes/License/attributes/StateLicensePrivilegeName
STATE_LICENSE_STATUS_DATE | DATE | State License Status Date | configuration/entityTypes/HCP/attributes/License/attributes/StateLicenseStatusDate
RANK | VARCHAR | Rank of License | configuration/entityTypes/HCP/attributes/License/attributes/Rank
CERTIFICATION_CODE | VARCHAR | Certification Code | configuration/entityTypes/HCP/attributes/License/attributes/CertificationCode | HCPLicenseCertification

LICENSE_SOURCE (Source)
Reltio URI: configuration/entityTypes/HCP/attributes/License/attributes/Source
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
LICENSE_URI | VARCHAR | Generated Key
SOURCE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SOURCE_NAME | VARCHAR | SourceName | configuration/entityTypes/HCP/attributes/License/attributes/Source/attributes/SourceName
SOURCE_RANK | VARCHAR | SourceRank | configuration/entityTypes/HCP/attributes/License/attributes/Source/attributes/SourceRank

LICENSE_REGULATORY (License Regulatory)
Reltio URI: configuration/entityTypes/HCP/attributes/License/attributes/Regulatory
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
LICENSE_URI | VARCHAR | Generated Key
REGULATORY_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
REQ_SAMPL_NON_CTRL | VARCHAR | Req Sampl Non Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ReqSamplNonCtrl
REQ_SAMPL_CTRL | VARCHAR | Req Sampl Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ReqSamplCtrl
RECV_SAMPL_NON_CTRL | VARCHAR | Recv Sampl Non Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/RecvSamplNonCtrl
RECV_SAMPL_CTRL | VARCHAR | Recv Sampl Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/RecvSamplCtrl
DISTR_SAMPL_NON_CTRL | VARCHAR | Distr Sampl Non Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DistrSamplNonCtrl
DISTR_SAMPL_CTRL | VARCHAR | Distr Sampl Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DistrSamplCtrl
SAMP_DRUG_SCHED_I_FLAG | VARCHAR | Samp Drug Sched I Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIFlag
SAMP_DRUG_SCHED_II_FLAG | VARCHAR | Samp Drug Sched II Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIFlag
SAMP_DRUG_SCHED_III_FLAG | VARCHAR | Samp Drug Sched III Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIIFlag
SAMP_DRUG_SCHED_IV_FLAG | VARCHAR | Samp Drug Sched IV Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIVFlag
SAMP_DRUG_SCHED_V_FLAG | VARCHAR | Samp Drug Sched V Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedVFlag
SAMP_DRUG_SCHED_VI_FLAG | VARCHAR | Samp Drug Sched VI Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedVIFlag
PRESCR_NON_CTRL_FLAG | VARCHAR | Prescr Non Ctrl Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrNonCtrlFlag
PRESCR_APP_REQ_NON_CTRL_FLAG | VARCHAR | Prescr App Req Non Ctrl Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrAppReqNonCtrlFlag
PRESCR_CTRL_FLAG | VARCHAR | Prescr Ctrl Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrCtrlFlag
PRESCR_APP_REQ_CTRL_FLAG | VARCHAR | Prescr App Req Ctrl Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrAppReqCtrlFlag
PRESCR_DRUG_SCHED_I_FLAG | VARCHAR | Prescr Drug Sched I Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIFlag
PRESCR_DRUG_SCHED_II_FLAG | VARCHAR | Prescr Drug Sched II Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIFlag
PRESCR_DRUG_SCHED_III_FLAG | VARCHAR | Prescr Drug Sched III Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlag
PRESCR_DRUG_SCHED_IV_FLAG | VARCHAR | Prescr Drug Sched IV Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIVFlag
PRESCR_DRUG_SCHED_V_FLAG | VARCHAR | Prescr Drug Sched V Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVFlag
PRESCR_DRUG_SCHED_VI_FLAG | VARCHAR | Prescr Drug Sched VI Flag | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVIFlag
SUPERVISORY_REL_CD_NON_CTRL | VARCHAR | Supervisory Rel Cd Non Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrl
SUPERVISORY_REL_CD_CTRL | VARCHAR | Supervisory Rel Cd Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdCtrl
COLLABORATIVE_NONCTRL | VARCHAR | Collaborative Non ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/CollaborativeNonctrl
COLLABORATIVE_CTRL | VARCHAR | Collaborative ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/CollaborativeCtrl
INCLUSIONARY | VARCHAR | Inclusionary | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/Inclusionary
EXCLUSIONARY | VARCHAR | Exclusionary | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/Exclusionary
DELEGATION_NON_CTRL | VARCHAR | Delegation Non Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DelegationNonCtrl
DELEGATION_CTRL | VARCHAR | Delegation Ctrl | configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DelegationCtrl

CSR
Reltio URI: configuration/entityTypes/HCP/attributes/CSR
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CSR_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
PROFESSION_CODE | VARCHAR | Profession Information | configuration/entityTypes/HCP/attributes/CSR/attributes/ProfessionCode | HCPProfession
AUTHORIZATION_NUMBER | VARCHAR | Authorization number of CSR | configuration/entityTypes/HCP/attributes/CSR/attributes/AuthorizationNumber
REG_AUTH_ID | VARCHAR | RegAuthID | configuration/entityTypes/HCP/attributes/CSR/attributes/RegAuthID
STATE_BOARD | VARCHAR | State Board | configuration/entityTypes/HCP/attributes/CSR/attributes/StateBoard
STATE_BOARD_NAME | VARCHAR | State Board Name | configuration/entityTypes/HCP/attributes/CSR/attributes/StateBoardName
STATE | VARCHAR | State of CSR. | configuration/entityTypes/HCP/attributes/CSR/attributes/State
CSR_LICENSE_TYPE | VARCHAR | CSR License Type | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseType
CSR_LICENSE_TYPE_NAME | VARCHAR | CSR License Type Name | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypeName
CSR_LICENSE_PRIVILEGE | VARCHAR | CSR License Privilege | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicensePrivilege
CSR_LICENSE_PRIVILEGE_NAME | VARCHAR | CSR License Privilege Name | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicensePrivilegeName
CSR_LICENSE_EFFECTIVE_DATE | DATE | CSR License Effective Date | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseEffectiveDate
CSR_LICENSE_EXPIRATION_DATE | DATE | CSR License Expiration Date | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseExpirationDate
CSR_LICENSE_STATUS | VARCHAR | CSR License Status | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseStatus | HCPLicenseStatus
STATUS_DETAIL | VARCHAR | CSRLicenseDeactivationReason | configuration/entityTypes/HCP/attributes/CSR/attributes/StatusDetail | HCPLicenseStatusDetail
CSR_LICENSE_DEACTIVATION_REASON | VARCHAR | CSR License Deactivation Reason | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseDeactivationReason | HCPCSRLicenseDeactivationReason
CSR_LICENSE_CERTIFICATION | VARCHAR | CSR License Certification | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseCertification | HCPLicenseCertification
CSR_LICENSE_TYPE_PRIVILEGE_RANK | VARCHAR | CSR License Type Privilege Rank | configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypePrivilegeRank

CSR_REGULATORY (CSR Regulatory)
Reltio URI: configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CSR_URI | VARCHAR | Generated Key
REGULATORY_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
REQ_SAMPL_NON_CTRL | VARCHAR | Req Sampl Non Ctrl | configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ReqSamplNonCtrl
REQ_SAMPL_CTRL | VARCHAR | Req Sampl Ctrl | configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ReqSamplCtrl
RECV_SAMPL_NON_CTRL | VARCHAR | Recv Sampl Non Ctrl | configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/RecvSamplNonCtrl
RECV_SAMPL_CTRL | VARCHAR | Recv Sampl Ctrl | configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/RecvSamplCtrl
DISTR_SAMPL_NON_CTRL | VARCHAR | Distr Sampl Non Ctrl | configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DistrSamplNonCtrl
DISTR_SAMPL_CTRL | VARCHAR | Distr Sampl Ctrl | configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DistrSamplCtrl
SAMP_DRUG_SCHED_I_FLAG | VARCHAR | Samp Drug Sched I 
Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIFlagSAMP_DRUG_SCHED_II_FLAGVARCHARSamp Drug Sched II Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIIFlagSAMP_DRUG_SCHED_III_FLAGVARCHARSamp Drug Sched III Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIIIFlagSAMP_DRUG_SCHED_IV_FLAGVARCHARSamp Drug Sched IV Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIVFlagSAMP_DRUG_SCHED_V_FLAGVARCHARSamp Drug Sched V Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedVFlagSAMP_DRUG_SCHED_VI_FLAGVARCHARSamp Drug Sched VI Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedVIFlagPRESCR_NON_CTRL_FLAGVARCHARPrescr Non Ctrl Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrNonCtrlFlagPRESCR_APP_REQ_NON_CTRL_FLAGVARCHARPrescr App Req Non Ctrl Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrAppReqNonCtrlFlagPRESCR_CTRL_FLAGVARCHARPrescr Ctrl Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrCtrlFlagPRESCR_APP_REQ_CTRL_FLAGVARCHARPrescr App Req Ctrl Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrAppReqCtrlFlagPRESCR_DRUG_SCHED_I_FLAGVARCHARPrescr Drug Sched I Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIFlagPRESCR_DRUG_SCHED_II_FLAGVARCHARPrescr Drug Sched II Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIIFlagPRESCR_DRUG_SCHED_III_FLAGVARCHARPrescr Drug Sched III Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlagPRESCR_DRUG_SCHED_IV_FLAGVARCHARPrescr Drug Sched IV 
Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIVFlagPRESCR_DRUG_SCHED_V_FLAGVARCHARPrescr Drug Sched V Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedVFlagPRESCR_DRUG_SCHED_VI_FLAGVARCHARPrescr Drug Sched VI Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedVIFlagSUPERVISORY_REL_CD_NON_CTRLVARCHARSupervisory Rel Cd Non Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrlSUPERVISORY_REL_CD_CTRLVARCHARSupervisory Rel Cd Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SupervisoryRelCdCtrlCOLLABORATIVE_NONCTRLVARCHARCollaborative Non ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/CollaborativeNonctrlCOLLABORATIVE_CTRLVARCHARCollaborative ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/CollaborativeCtrlINCLUSIONARYVARCHARInclusionaryconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/InclusionaryEXCLUSIONARYVARCHARExclusionaryconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ExclusionaryDELEGATION_NON_CTRLVARCHARDelegation Non Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DelegationNonCtrlDELEGATION_CTRLVARCHARDelegation Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DelegationCtrlPRIVACY_PREFERENCESReltio URI: configuration/entityTypes/HCP/attributes/PrivacyPreferencesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIVACY_PREFERENCES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeAMA_NO_CONTACTBOOLEANCan be Contacted through AMA or 
notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AMANoContactFTC_NO_CONTACTBOOLEANCan be Contacted through FTC or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FTCNoContactPDRPBOOLEANPhysician Data Restriction Program enrolled or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPPDRP_DATEDATEPhysician Data Restriction Program enrolment dateconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPDateOPT_OUT_START_DATEDATEOpt Out Start Dateconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutStartDateALLOWED_TO_CONTACTBOOLEANIndicator whether allowed to contactconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AllowedToContactPHONE_OPT_OUTBOOLEANOpted Out for being contacted on Phone or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PhoneOptOutEMAIL_OPT_OUTBOOLEANOpted Out for being contacted through Email or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/EmailOptOutFAX_OPT_OUTBOOLEANOpted Out for being contacted through Fax or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FaxOptOutMAIL_OPT_OUTBOOLEANOpted Out for being contacted through Mail or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/MailOptOutNO_CONTACT_REASONVARCHARReason for no contactconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/NoContactReasonNO_CONTACT_EFFECTIVE_DATEDATEEffective date of no contactconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/NoContactEffectiveDateCERTIFICATESReltio URI: configuration/entityTypes/HCP/attributes/CertificatesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCERTIFICATES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCERTIFICATE_IDVARCHARCertificate Id of Certificate 
received by HCPconfiguration/entityTypes/HCP/attributes/Certificates/attributes/CertificateIdSPEAKERReltio URI: configuration/entityTypes/HCP/attributes/SpeakerMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPEAKER_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeLEVELVARCHARLevelconfiguration/entityTypes/HCP/attributes/Speaker/attributes/LevelHCPTierLevelTIER_STATUSVARCHARTier Statusconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierStatusHCPTierStatusTIER_APPROVAL_DATEDATETier Approval Dateconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierApprovalDateTIER_UPDATED_DATEDATETier Updated Dateconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierUpdatedDateTIER_APPROVERVARCHARTier Approverconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierApproverEFFECTIVE_DATEDATESpeaker Effective Dateconfiguration/entityTypes/HCP/attributes/Speaker/attributes/EffectiveDateDEACTIVATE_REASONVARCHARSpeaker Deactivate Reasonconfiguration/entityTypes/HCP/attributes/Speaker/attributes/DeactivateReasonIS_SPEAKERBOOLEANconfiguration/entityTypes/HCP/attributes/Speaker/attributes/IsSpeakerSPEAKER_TIER_RATIONALETier RationaleReltio URI: configuration/entityTypes/HCP/attributes/Speaker/attributes/TierRationaleMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPEAKER_URIVARCHARGenerated KeyTIER_RATIONALE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTIER_RATIONALEVARCHARTier Rationaleconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierRationale/attributes/TierRationaleHCPTierRationalRAWDEAReltio URI: configuration/entityTypes/HCP/attributes/RAWDEAMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRAWDEA_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry 
CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDEA_NUMBERVARCHARRAW DEA Numberconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/DEANumberDEA_BUSINESS_ACTIVITYVARCHARDEA Business Activityconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/DEABusinessActivityEFFECTIVE_DATEDATERAW DEA Effective Dateconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/EffectiveDateEXPIRATION_DATEDATERAW DEA Expiration Dateconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/ExpirationDateNAMEVARCHARRAW DEA Nameconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/NameADDITIONAL_COMPANY_INFOVARCHARAdditional Company Infoconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/AdditionalCompanyInfoADDRESS1VARCHARRAW DEA Address 1configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Address1ADDRESS2VARCHARRAW DEA Address 2configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Address2CITYVARCHARRAW DEA Cityconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/CitySTATEVARCHARRAW DEA Stateconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/StateZIPVARCHARRAW DEA Zipconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/ZipBUSINESS_ACTIVITY_SUB_CDVARCHARBusiness Activity Sub Cdconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/BusinessActivitySubCdPAYMT_INDVARCHARPaymt Indicatorconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/PaymtIndHCPRAWDEAPaymtIndRAW_DEA_SCHD_CLAS_CDVARCHARRaw Dea Schd Clas Cdconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/RawDeaSchdClasCdSTATUSVARCHARRaw Dea Statusconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/StatusPHONEReltio URI: configuration/entityTypes/HCP/attributes/Phone, configuration/entityTypes/HCO/attributes/Phone, configuration/entityTypes/MCO/attributes/PhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive 
FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/Type, configuration/entityTypes/HCO/attributes/Phone/attributes/Type, configuration/entityTypes/MCO/attributes/Phone/attributes/TypePhoneTypeNUMBERVARCHARPhone numberconfiguration/entityTypes/HCP/attributes/Phone/attributes/Number, configuration/entityTypes/HCO/attributes/Phone/attributes/Number, configuration/entityTypes/MCO/attributes/Phone/attributes/NumberFORMATTED_NUMBERVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/MCO/attributes/Phone/attributes/FormattedNumberEXTENSIONVARCHARExtension, if anyconfiguration/entityTypes/HCP/attributes/Phone/attributes/Extension, configuration/entityTypes/HCO/attributes/Phone/attributes/Extension, configuration/entityTypes/MCO/attributes/Phone/attributes/ExtensionRANKVARCHARRank used to assign priority to a Phone numberconfiguration/entityTypes/HCP/attributes/Phone/attributes/Rank, configuration/entityTypes/HCO/attributes/Phone/attributes/Rank, configuration/entityTypes/MCO/attributes/Phone/attributes/RankPHONE_USAGE_TAGVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes/HCO/attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes/MCO/attributes/Phone/attributes/PhoneUsageTagPhoneUsageTagUSAGE_TYPEVARCHARUsage Type of a Phone numberconfiguration/entityTypes/HCP/attributes/Phone/attributes/UsageType, configuration/entityTypes/HCO/attributes/Phone/attributes/UsageType, configuration/entityTypes/MCO/attributes/Phone/attributes/UsageTypeAREA_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/AreaCode, configuration/entityTypes/HCO/attributes/Phone/attributes/AreaCode, 
configuration/entityTypes/MCO/attributes/Phone/attributes/AreaCodeLOCAL_NUMBERVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/MCO/attributes/Phone/attributes/LocalNumberVALIDATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/MCO/attributes/Phone/attributes/ValidationStatusLINE_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/LineType, configuration/entityTypes/HCO/attributes/Phone/attributes/LineType, configuration/entityTypes/MCO/attributes/Phone/attributes/LineTypeFORMAT_MASKVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/FormatMask, configuration/entityTypes/HCO/attributes/Phone/attributes/FormatMask, configuration/entityTypes/MCO/attributes/Phone/attributes/FormatMaskDIGIT_COUNTVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/DigitCount, configuration/entityTypes/HCO/attributes/Phone/attributes/DigitCount, configuration/entityTypes/MCO/attributes/Phone/attributes/DigitCountGEO_AREAVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/GeoArea, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoArea, configuration/entityTypes/MCO/attributes/Phone/attributes/GeoAreaGEO_COUNTRYVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/MCO/attributes/Phone/attributes/GeoCountryCOUNTRY_CODEVARCHARTwo digit code for a Countryconfiguration/entityTypes/HCO/attributes/Phone/attributes/CountryCode, configuration/entityTypes/MCO/attributes/Phone/attributes/CountryCodePHONE_SOURCESourceReltio URI: configuration/entityTypes/HCP/attributes/Phone/attributes/Source, 
configuration/entityTypes/HCO/attributes/Phone/attributes/Source, configuration/entityTypes/MCO/attributes/Phone/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceRankSOURCE_ADDRESS_IDVARCHARSourceAddressIDconfiguration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceAddressIDHCP_ADDRESS_ZIPReltio URI: configuration/entityTypes/Location/attributes/ZipMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARGenerated KeyZIP_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePOSTAL_CODEVARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/PostalCodeZIP5VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip4DEAReltio URI: configuration/entityTypes/HCP/attributes/DEA, configuration/entityTypes/HCO/attributes/DEAMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NameDEA_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNUMBERVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/Number, configuration/entityTypes/HCO/attributes/DEA/attributes/NumberSTATUSVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/DEA/attributes/StatusSTATUSVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/DEA/attributes/StatusApp-LSCustomer360DEAStatusEXPIRATION_DATEDATEconfiguration/entityTypes/HCP/attributes/DEA/attributes/ExpirationDate, configuration/entityTypes/HCO/attributes/DEA/attributes/ExpirationDateDRUG_SCHEDULEVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/DEA/attributes/DrugScheduleApp-LSCustomer360DEADrugScheduleDRUG_SCHEDULE_DESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/DrugScheduleDescription, configuration/entityTypes/HCO/attributes/DEA/attributes/DrugScheduleDescriptionBUSINESS_ACTIVITYVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivity, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityApp-LSCustomer360DEABusinessActivityBUSINESS_ACTIVITY_PLUS_SUB_CODEVARCHARBusiness Activity SubCodeconfiguration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivityPlusSubCode, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityPlusSubCodeApp-LSCustomer360DEABusinessActivitySubcodeBUSINESS_ACTIVITY_DESCRIPTIONVARCHARStringconfiguration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivityDescription, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityDescriptionApp-LSCustomer360DEABusinessActivityDescriptionPAYMENT_INDICATORVARCHARStringconfiguration/entityTypes/HCP/attributes/DEA/attributes/PaymentIndicator, 
configuration/entityTypes/HCO/attributes/DEA/attributes/PaymentIndicatorApp-LSCustomer360DEAPaymentIndicatorTAXONOMYReltio URI: configuration/entityTypes/HCP/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/TaxonomyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTAXONOMY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTAXONOMYVARCHARTaxonomy related to HCP, e.g., Obstetrics & Gynecologyconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/TaxonomyApp-LSCustomer360Taxonomy,TAXONOMY_CDTYPEVARCHARType of Taxonomy, e.g., Primaryconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Type, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/TypeApp-LSCustomer360TaxonomyType,TAXONOMY_TYPESTATE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/StateCodeGROUPVARCHARGroup Taxonomy belongs toconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/GroupPROVIDER_TYPEVARCHARTaxonomy Provider Typeconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ProviderTypeCLASSIFICATIONVARCHARClassification of Taxonomyconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Classification, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ClassificationSPECIALIZATIONVARCHARSpecialization of Taxonomyconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Specialization, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/SpecializationPRIORITYVARCHARTaxonomy Priorityconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Priority, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/PriorityTAXONOMY_PRIORITYSANCTIONReltio URI: configuration/entityTypes/HCP/attributes/SanctionMaterialized: noColumnTypeDescriptionReltio Attribute 
URILOV NameSANCTION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARCourt sanction Id for any case.configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionIdACTION_CODEVARCHARCourt sanction code for a caseconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionCodeACTION_DESCRIPTIONVARCHARCourt sanction Action Descriptionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionDescriptionBOARD_CODEVARCHARCourt case board idconfiguration/entityTypes/HCP/attributes/Sanction/attributes/BoardCodeBOARD_DESCVARCHARcourt case board descriptionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/BoardDescACTION_DATEDATECourt sanction Action Dateconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionDateSANCTION_PERIOD_START_DATEDATESanction Period Start Dateconfiguration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodStartDateSANCTION_PERIOD_END_DATEDATESanction Period End Dateconfiguration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodEndDateMONTH_DURATIONVARCHARSanction Duration in Monthsconfiguration/entityTypes/HCP/attributes/Sanction/attributes/MonthDurationFINE_AMOUNTVARCHARFine Amount for Sanctionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/FineAmountOFFENSE_CODEVARCHAROffense Code for Sanctionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseCodeOFFENSE_DESCRIPTIONVARCHAROffense Description for Sanctionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDescriptionOFFENSE_DATEDATEOffense Date for Sanctionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDateGSA_SANCTIONReltio URI: configuration/entityTypes/HCP/attributes/GSASanctionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameGSA_SANCTION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry 
CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARSanction Id of HCP as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/SanctionIdFIRST_NAMEVARCHARFirst Name of HCP as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/FirstNameMIDDLE_NAMEVARCHARMiddle Name of HCP as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/MiddleNameLAST_NAMEVARCHARLast Name of HCP as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/LastNameSUFFIX_NAMEVARCHARSuffix Name of HCP as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/SuffixNameCITYVARCHARCity of HCP as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/CitySTATEVARCHARState of HCP as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/StateZIPVARCHARZip of HCP as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ZipACTION_DATEVARCHARAction Date for GSA Sactionconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ActionDateTERM_DATEVARCHARTerm Date for GSA Sactionconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/TermDateAGENCYVARCHARAgency that imposed Sanctionconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/AgencyCONFIDENCEVARCHARConfidence as per GSA Saction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ConfidenceMULTI_CHANNEL_COMMUNICATION_CONSENTReltio URI: configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsentMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameMULTI_CHANNEL_COMMUNICATION_CONSENT_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCHANNEL_TYPEVARCHARChannel type for the consent, e.g. 
email, SMS, etc.configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelTypeCHANNEL_VALUEVARCHARValue of the channel for consent - john.doe@email.comconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelValueCHANNEL_CONSENTVARCHARThe consent for the corresponding channel and the id - yes or noconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelConsentChannelConsentSTART_DATEDATEStart date of the consentconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/StartDateEXPIRATION_DATEDATEExpiration date of the consentconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ExpirationDateCOMMUNICATION_TYPEVARCHARDifferent communication type that the individual prefers, for e.g. - New Product Launches, Sales/Discounts, Brand-level Newsconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/CommunicationTypeCOMMUNICATION_FREQUENCYVARCHARHow frequently can the individual be communicated to. 
Example - Daily/monthly/weekly | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/CommunicationFrequency
CHANNEL_PREFERENCE_FLAG | BOOLEAN | When checked denotes the preferred channel of communication | configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelPreferenceFlag

EMPLOYMENT
Reltio URI: configuration/entityTypes/HCP/attributes/Employment
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EMPLOYMENT_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
NAME | VARCHAR | Name | configuration/entityTypes/Organization/attributes/Name
TITLE | VARCHAR | | configuration/relationTypes/Employment/attributes/Title
SUMMARY | VARCHAR | | configuration/relationTypes/Employment/attributes/Summary
IS_CURRENT | BOOLEAN | | configuration/relationTypes/Employment/attributes/IsCurrent

HCO (Health care organization)
Reltio URI: configuration/entityTypes/HCO
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE_CODE | VARCHAR | Type Code | configuration/entityTypes/HCO/attributes/TypeCode | HCOType
COMPANY_CUST_ID | VARCHAR | COMPANY Customer ID | configuration/entityTypes/HCO/attributes/COMPANYCustID
SUB_TYPE_CODE | VARCHAR | SubType Code | configuration/entityTypes/HCO/attributes/SubTypeCode | HCOSubType
SUB_CATEGORY | VARCHAR | SubCategory | configuration/entityTypes/HCO/attributes/SubCategory | HCOSubCategory
STRUCTURE_TYPE_CODE | VARCHAR | SubType Code | configuration/entityTypes/HCO/attributes/StructureTypeCode | HCOStructureTypeCode
NAME | VARCHAR | Name | configuration/entityTypes/HCO/attributes/Name
DOING_BUSINESS_AS_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/DoingBusinessAsName
FLEX_RESTRICTED_PARTY_IND | VARCHAR | party indicator for FLEX | configuration/entityTypes/HCO/attributes/FlexRestrictedPartyInd
TRADE_PARTNER | VARCHAR | String | configuration/entityTypes/HCO/attributes/TradePartner
SHIP_TO_SR_PARENT_NAME | VARCHAR | String | configuration/entityTypes/HCO/attributes/ShipToSrParentName
SHIP_TO_JR_PARENT_NAME | VARCHAR | String | configuration/entityTypes/HCO/attributes/ShipToJrParentName
SHIP_FROM_JR_PARENT_NAME | VARCHAR | String | configuration/entityTypes/HCO/attributes/ShipFromJrParentName
TEACHING_HOSPITAL | VARCHAR | Teaching Hospital | configuration/entityTypes/HCO/attributes/TeachingHospital
OWNERSHIP_STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/OwnershipStatus | HCOOwnershipStatus
PROFIT_STATUS | VARCHAR | Profit Status | configuration/entityTypes/HCO/attributes/ProfitStatus | HCOProfitStatus
CMI | VARCHAR | CMI | configuration/entityTypes/HCO/attributes/CMI
COMPANY_HCOS_FLAG | VARCHAR | COMPANY HCOS Flag | configuration/entityTypes/HCO/attributes/COMPANYHCOSFlag
SOURCE_MATCH_CATEGORY | VARCHAR | Source Match Category | configuration/entityTypes/HCO/attributes/SourceMatchCategory
COMM_HOSP | VARCHAR | CommHosp | configuration/entityTypes/HCO/attributes/CommHosp
GEN_FIRST | VARCHAR | String | configuration/entityTypes/HCO/attributes/GenFirst | HCOGenFirst
SREP_ACCESS | VARCHAR | String | configuration/entityTypes/HCO/attributes/SrepAccess | HCOSrepAccess
OUT_PATIENTS_NUMBERS | VARCHAR | | configuration/entityTypes/HCO/attributes/OutPatientsNumbers
UNIT_OPER_ROOM_NUMBER | VARCHAR | | configuration/entityTypes/HCO/attributes/UnitOperRoomNumber
PRIMARY_GPO | VARCHAR | Primary GPO | configuration/entityTypes/HCO/attributes/PrimaryGPO
TOTAL_PRESCRIBERS | VARCHAR | Total Prescribers | configuration/entityTypes/HCO/attributes/TotalPrescribers
NUM_IN_PATIENTS | VARCHAR | Total InPatients | configuration/entityTypes/HCO/attributes/NumInPatients
TOTAL_LIVES | VARCHAR | Total Lives | configuration/entityTypes/HCO/attributes/TotalLives
TOTAL_PHARMACISTS | VARCHAR | Total Pharmacists | configuration/entityTypes/HCO/attributes/TotalPharmacists
TOTAL_M_DS | VARCHAR | Total MDs | configuration/entityTypes/HCO/attributes/TotalMDs
TOTAL_REVENUE | VARCHAR | Total Revenue | configuration/entityTypes/HCO/attributes/TotalRevenue
STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/Status | HCOStatus
STATUS_DETAIL | VARCHAR | Deactivation Reason | configuration/entityTypes/HCO/attributes/StatusDetail | HCOStatusDetail
ACCOUNT_BLOCK_CODE | VARCHAR | Account Block Code | configuration/entityTypes/HCO/attributes/AccountBlockCode
TOTAL_LICENSE_BEDS | VARCHAR | Total License Beds | configuration/entityTypes/HCO/attributes/TotalLicenseBeds
TOTAL_CENSUS_BEDS | VARCHAR | | configuration/entityTypes/HCO/attributes/TotalCensusBeds
TOTAL_STAFFED_BEDS | VARCHAR | | configuration/entityTypes/HCO/attributes/TotalStaffedBeds
TOTAL_SURGERIES | VARCHAR | Total Surgeries | configuration/entityTypes/HCO/attributes/TotalSurgeries
TOTAL_PROCEDURES | VARCHAR | Total Procedures | configuration/entityTypes/HCO/attributes/TotalProcedures
NUM_EMPLOYEES | VARCHAR | Number of Procedures | configuration/entityTypes/HCO/attributes/NumEmployees
RESIDENT_COUNT | VARCHAR | Resident Count | configuration/entityTypes/HCO/attributes/ResidentCount
FORMULARY | VARCHAR | Formulary | configuration/entityTypes/HCO/attributes/Formulary | HCOFormulary
E_MEDICAL_RECORD | VARCHAR | e-Medical Record | configuration/entityTypes/HCO/attributes/EMedicalRecord
E_PRESCRIBE | VARCHAR | e-Prescribe | configuration/entityTypes/HCO/attributes/EPrescribe | HCOEPrescribe
PAY_PERFORM | VARCHAR | Pay Perform | configuration/entityTypes/HCO/attributes/PayPerform | HCOPayPerform
DEACTIVATION_REASON | VARCHAR | Deactivation Reason | configuration/entityTypes/HCO/attributes/DeactivationReason | HCODeactivationReason
INTERNATIONAL_LOCATION_NUMBER | VARCHAR | International location number (part 1) | configuration/entityTypes/HCO/attributes/InternationalLocationNumber
DCR_STATUS | VARCHAR | Status of HCO profile | configuration/entityTypes/HCO/attributes/DCRStatus | DCRStatus
COUNTRY_HCO | VARCHAR | Country | configuration/entityTypes/HCO/attributes/Country
ORIGINAL_SOURCE_NAME | VARCHAR | Original Source | configuration/entityTypes/HCO/attributes/OriginalSourceName
SOURCE_UPDATE_DATE | DATE | | configuration/entityTypes/HCO/attributes/SourceUpdateDate

CLASSOF_TRADE_N
Reltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeN
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CLASSOF_TRADE_N_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SOURCE_COTID | VARCHAR | Source COT ID | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SourceCOTID | COT
PRIORITY | VARCHAR | Priority | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority
SPECIALTY | VARCHAR | Specialty of Class of Trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | COTSpecialty
CLASSIFICATION | VARCHAR | Classification of Class of Trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | COTClassification
FACILITY_TYPE | VARCHAR | Facility Type of Class of Trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | COTFacilityType
COT_ORDER | VARCHAR | COT Order | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/COTOrder
START_DATE | DATE | Start Date | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/StartDate
SOURCE | VARCHAR | Source | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Source
PRIMARY | VARCHAR | Primary | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Primary

HCO_ADDRESS_ZIP
Reltio URI: configuration/entityTypes/Location/attributes/Zip
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | Generated Key
ZIP_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
POSTAL_CODE | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/PostalCode
ZIP5 | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/Zip5
ZIP4 | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/Zip4

340B
Reltio URI: configuration/entityTypes/HCO/attributes/340b
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
340B_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
340BID | VARCHAR | 340B ID | configuration/entityTypes/HCO/attributes/340b/attributes/340BID
ENTITY_SUB_DIVISION_NAME | VARCHAR | Entity Sub-Division Name | configuration/entityTypes/HCO/attributes/340b/attributes/EntitySubDivisionName
PROGRAM_CODE | VARCHAR | Program Code | configuration/entityTypes/HCO/attributes/340b/attributes/ProgramCode | 340BProgramCode
PARTICIPATING | BOOLEAN | Participating | configuration/entityTypes/HCO/attributes/340b/attributes/Participating
AUTHORIZING_OFFICIAL_NAME | VARCHAR | Authorizing Official Name | configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialName
AUTHORIZING_OFFICIAL_TITLE | VARCHAR | Authorizing Official Title | configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTitle
AUTHORIZING_OFFICIAL_TEL | VARCHAR | Authorizing Official Tel | configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTel
AUTHORIZING_OFFICIAL_TEL_EXT | VARCHAR | Authorizing Official Tel Ext | configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTelExt
CONTACT_NAME | VARCHAR | Contact Name | configuration/entityTypes/HCO/attributes/340b/attributes/ContactName
CONTACT_TITLE | VARCHAR | Contact Title | configuration/entityTypes/HCO/attributes/340b/attributes/ContactTitle
CONTACT_TELEPHONE | VARCHAR | Contact Telephone | configuration/entityTypes/HCO/attributes/340b/attributes/ContactTelephone
CONTACT_TELEPHONE_EXT | VARCHAR | Contact Telephone Ext | configuration/entityTypes/HCO/attributes/340b/attributes/ContactTelephoneExt
SIGNED_BY_NAME | VARCHAR | Signed By Name | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByName
SIGNED_BY_TITLE | VARCHAR | Signed By Title | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTitle
SIGNED_BY_TELEPHONE | VARCHAR | Signed By Telephone | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTelephone
SIGNED_BY_TELEPHONE_EXT | VARCHAR | Signed By Telephone Ext | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTelephoneExt
SIGNED_BY_DATE | DATE | Signed By Date | configuration/entityTypes/HCO/attributes/340b/attributes/SignedByDate
CERTIFIED_DECERTIFIED_DATE | DATE | Certified/Decertified Date | configuration/entityTypes/HCO/attributes/340b/attributes/CertifiedDecertifiedDate
RURAL | VARCHAR | Rural | configuration/entityTypes/HCO/attributes/340b/attributes/Rural
ENTRY_COMMENTS | VARCHAR | Entry Comments | configuration/entityTypes/HCO/attributes/340b/attributes/EntryComments
NATURE_OF_SUPPORT | VARCHAR | Nature Of Support | configuration/entityTypes/HCO/attributes/340b/attributes/NatureOfSupport
EDIT_DATE | VARCHAR | Edit Date | configuration/entityTypes/HCO/attributes/340b/attributes/EditDate

340B_PARTICIPATION_DATES
Reltio URI: configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
340B_URI | VARCHAR | Generated Key
PARTICIPATION_DATES_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
PARTICIPATING_START_DATE | DATE | Participating Start Date | configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/ParticipatingStartDate
TERMINATION_DATE | DATE | Termination Date | configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/TerminationDate
TERMINATION_CODE | VARCHAR | Termination Code | configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/TerminationCode | 340BTerminationCode

OTHER_NAMES
Reltio URI: configuration/entityTypes/HCO/attributes/OtherNames
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
OTHER_NAMES_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE | VARCHAR | Type | configuration/entityTypes/HCO/attributes/OtherNames/attributes/Type
NAME | VARCHAR | Name | configuration/entityTypes/HCO/attributes/OtherNames/attributes/Name

ACO
Reltio URI: configuration/entityTypes/HCO/attributes/ACO
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ACO_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE | VARCHAR | Type | configuration/entityTypes/HCO/attributes/ACO/attributes/Type | HCOACOType
ACO_TYPE_CATEGORY | VARCHAR | Type Category | configuration/entityTypes/HCO/attributes/ACO/attributes/ACOTypeCategory | HCOACOTypeCategory
ACO_TYPE_GROUP | VARCHAR | Type Group of ACO | configuration/entityTypes/HCO/attributes/ACO/attributes/ACOTypeGroup | HCOACOTypeGroup

ACO_ACODETAIL
Reltio URI: configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ACO_URI | VARCHAR | Generated Key
ACO_DETAIL_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
ACO_DETAIL_CODE | VARCHAR | Detail Code for ACO | configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailCode | HCOACODetail
ACO_DETAIL_VALUE | VARCHAR | Detail Value for ACO | configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailValue
ACO_DETAIL_GROUP_CODE | VARCHAR | Detail Value for ACO | configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailGroupCode | HCOACODetailGroup

WEBSITE
Reltio URI: configuration/entityTypes/HCO/attributes/Website
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
WEBSITE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
WEBSITE_URL | VARCHAR | Url of the website | configuration/entityTypes/HCO/attributes/Website/attributes/WebsiteURL

WEBSITE_SOURCE (Source)
Reltio URI: configuration/entityTypes/HCO/attributes/Website/attributes/Source
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
WEBSITE_URI | VARCHAR | Generated Key
SOURCE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SOURCE_NAME | VARCHAR | SourceName | configuration/entityTypes/HCO/attributes/Website/attributes/Source/attributes/SourceName
SOURCE_RANK | VARCHAR | SourceRank | configuration/entityTypes/HCO/attributes/Website/attributes/Source/attributes/SourceRank

SALES_ORGANIZATION (Sales Organization)
Reltio URI: configuration/entityTypes/HCO/attributes/SalesOrganization
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SALES_ORGANIZATION_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SALES_ORGANIZATION_CODE | VARCHAR | Sales Organization Code | configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/SalesOrganizationCode
CUSTOMER_ORDER_BLOCK | VARCHAR | Customer Order Block | configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/CustomerOrderBlock
CUSTOMER_GROUP | VARCHAR | Customer Group | configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/CustomerGroup

HCO_BUSINESS_UNIT_TAG
Reltio URI: configuration/entityTypes/HCO/attributes/BusinessUnitTAG
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
BUSINESSUNITTAG_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
BUSINESS_UNIT | VARCHAR | Business Unit | configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/BusinessUnit
SEGMENT | VARCHAR | Segment | configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/Segment
CONTRACT_TYPE | VARCHAR | Contract Type | configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/ContractType

GLN
Reltio URI: configuration/entityTypes/HCO/attributes/GLN
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
GLN_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE | VARCHAR | GLN Type | configuration/entityTypes/HCO/attributes/GLN/attributes/Type
ID | VARCHAR | GLN ID | configuration/entityTypes/HCO/attributes/GLN/attributes/ID
STATUS | VARCHAR | GLN Status | configuration/entityTypes/HCO/attributes/GLN/attributes/Status | HCOGLNStatus
STATUS_DETAIL | VARCHAR | GLN Status | configuration/entityTypes/HCO/attributes/GLN/attributes/StatusDetail | HCOGLNStatusDetail

HCO_REFER_BACK
Reltio URI: configuration/entityTypes/HCO/attributes/ReferBack
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
REFERBACK_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
REFER_BACK_ID | VARCHAR | Refer Back ID | configuration/entityTypes/HCO/attributes/ReferBack/attributes/ReferBackID
REFER_BACK_HCOSID | VARCHAR | GLN ID | configuration/entityTypes/HCO/attributes/ReferBack/attributes/ReferBackHCOSID
DEACTIVATION_REASON | VARCHAR | Deactivation Reason | configuration/entityTypes/HCO/attributes/ReferBack/attributes/DeactivationReason

BED
Reltio URI: configuration/entityTypes/HCO/attributes/Bed
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
BED_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE | VARCHAR | Type | configuration/entityTypes/HCO/attributes/Bed/attributes/Type | HCOBedType
LICENSE_BEDS | VARCHAR | License Beds | configuration/entityTypes/HCO/attributes/Bed/attributes/LicenseBeds
CENSUS_BEDS | VARCHAR | Census Beds | configuration/entityTypes/HCO/attributes/Bed/attributes/CensusBeds
STAFFED_BEDS | VARCHAR | Staffed Beds | configuration/entityTypes/HCO/attributes/Bed/attributes/StaffedBeds

GSA_EXCLUSION
Reltio URI: configuration/entityTypes/HCO/attributes/GSAExclusion
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
GSA_EXCLUSION_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SANCTION_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/SanctionId
ORGANIZATION_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/OrganizationName
ADDRESS_LINE1 | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine1
ADDRESS_LINE2 | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine2
CITY | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/City
STATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/State
ZIP | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Zip
ACTION_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/ActionDate
TERM_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/TermDate
AGENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Agency
CONFIDENCE | VARCHAR | | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Confidence

OIG_EXCLUSION
Reltio URI: configuration/entityTypes/HCO/attributes/OIGExclusion
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
OIG_EXCLUSION_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SANCTION_ID | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/SanctionId
ACTION_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionCode
ACTION_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDescription
BOARD_CODE | VARCHAR | Court case board id | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardCode
BOARD_DESC | VARCHAR | court case board description | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardDesc
ACTION_DATE | DATE | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDate
OFFENSE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseCode
OFFENSE_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseDescription

BUSINESS_DETAIL
Reltio URI: configuration/entityTypes/HCO/attributes/BusinessDetail
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
BUSINESS_DETAIL_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
DETAIL | VARCHAR | Detail | configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/Detail | HCOBusinessDetail
GROUP | VARCHAR | Group | configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/Group | HCOBusinessDetailGroup
DETAIL_VALUE | VARCHAR | Detail Value | configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailValue
DETAIL_COUNT | VARCHAR | Detail Count | configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailCount

HIN (HIN)
Reltio URI: configuration/entityTypes/HCO/attributes/HIN
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
HIN_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
HIN | VARCHAR | HIN | configuration/entityTypes/HCO/attributes/HIN/attributes/HIN

TICKER
Reltio URI: configuration/entityTypes/HCO/attributes/Ticker
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
TICKER_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SYMBOL | VARCHAR | | configuration/entityTypes/HCO/attributes/Ticker/attributes/Symbol
STOCK_EXCHANGE | VARCHAR | | configuration/entityTypes/HCO/attributes/Ticker/attributes/StockExchange

TRADE_STYLE_NAME
Reltio URI: configuration/entityTypes/HCO/attributes/TradeStyleName
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
TRADE_STYLE_NAME_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
ORGANIZATION_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/OrganizationName
LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/LanguageCode
FORMER_ORGANIZATION_PRIMARY_NAME | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/FormerOrganizationPrimaryName
DISPLAY_SEQUENCE | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/DisplaySequence
TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/Type

PRIOR_DUNS_NUMBER
Reltio URI: configuration/entityTypes/HCO/attributes/PriorDUNSNUmber
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
PRIOR_DUNS_NUMBER_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TRANSFER_DUNS_NUMBER | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDUNSNumber
TRANSFER_REASON_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonText
TRANSFER_REASON_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonCode
TRANSFER_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDate
TRANSFERRED_FROM_DUNS_NUMBER | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumber
TRANSFERRED_TO_DUNS_NUMBER | VARCHAR | | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumber

INDUSTRY_CODE
Reltio URI: configuration/entityTypes/HCO/attributes/IndustryCode
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
INDUSTRY_CODE_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
DNB_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DNBCode
INDUSTRY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCode
INDUSTRY_CODE_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeDescription
INDUSTRY_CODE_LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeLanguageCode
INDUSTRY_CODE_WRITING_SCRIPT | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeWritingScript
DISPLAY_SEQUENCE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DisplaySequence
SALES_PERCENTAGE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/SalesPercentage
TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/Type
INDUSTRY_TYPE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryTypeCode
IMPORT_EXPORT_AGENT | VARCHAR | | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/ImportExportAgent

ACTIVITIES_AND_OPERATIONS
Reltio URI: configuration/entityTypes/HCO/attributes/ActivitiesAndOperations
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ACTIVITIES_AND_OPERATIONS_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
LINE_OF_BUSINESS_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescription
LANGUAGE_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LanguageCode
WRITING_SCRIPT_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/WritingScriptCode
IMPORT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ImportIndicator
EXPORT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ExportIndicator
AGENT_INDICATOR | BOOLEAN | | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/AgentIndicator

EMPLOYEE_DETAILS
Reltio URI: configuration/entityTypes/HCO/attributes/EmployeeDetails
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EMPLOYEE_DETAILS_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
INDIVIDUAL_EMPLOYEE_FIGURES_DATE | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDate
INDIVIDUAL_TOTAL_EMPLOYEE_QUANTITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantity
INDIVIDUAL_RELIABILITY_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualReliabilityText
TOTAL_EMPLOYEE_QUANTITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeQuantity
TOTAL_EMPLOYEE_RELIABILITY | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeReliability
PRINCIPALS_INCLUDED | VARCHAR | | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/PrincipalsIncluded

KEY_FINANCIAL_FIGURES_OVERVIEW
Reltio URI: configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
KEY_FINANCIAL_FIGURES_OVERVIEW_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
FINANCIAL_STATEMENT_TO_DATE | DATE | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDate
FINANCIAL_PERIOD_DURATION | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDuration
SALES_REVENUE_CURRENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrency
SALES_REVENUE_CURRENCY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCode
SALES_REVENUE_RELIABILITY_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCode
SALES_REVENUE_UNIT_OF_SIZE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSize
SALES_REVENUE_AMOUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmount
PROFIT_OR_LOSS_CURRENCY | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrency
PROFIT_OR_LOSS_RELIABILITY_TEXT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityText
PROFIT_OR_LOSS_UNIT_OF_SIZE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSize
PROFIT_OR_LOSS_AMOUNT | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmount
SALES_TURNOVER_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRate
SALES3YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRate
SALES5YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRate
EMPLOYEE3YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRate
EMPLOYEE5YRY_GROWTH_RATE | VARCHAR | | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRate

MATCH_QUALITY
Reltio URI: configuration/entityTypes/HCO/attributes/MatchQuality
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
MATCH_QUALITY_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
CONFIDENCE_CODE | VARCHAR | DnB Match Quality Confidence Code | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/ConfidenceCode
DISPLAY_SEQUENCE | VARCHAR | DnB Match Quality Display Sequence | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/DisplaySequence
MATCH_CODE | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchCode
BEMFAB | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/BEMFAB
MATCH_GRADE | VARCHAR | | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchGrade

ORGANIZATION_DETAIL
Reltio URI: configuration/entityTypes/HCO/attributes/OrganizationDetail
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ORGANIZATION_DETAIL_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
MEMBER_ROLE | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/MemberRole
STANDALONE | BOOLEAN | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/Standalone
CONTROL_OWNERSHIP_DATE | DATE | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/ControlOwnershipDate
OPERATING_STATUS | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatus
START_YEAR | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StartYear
FRANCHISE_OPERATION_TYPE | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/FranchiseOperationType
BONEYARD_ORGANIZATION | BOOLEAN | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/BoneyardOrganization
OPERATING_STATUS_COMMENT | VARCHAR | | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusComment

DUNS_HIERARCHY
Reltio URI: configuration/entityTypes/HCO/attributes/DUNSHierarchy
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DUNS_HIERARCHY_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
GLOBAL_ULTIMATE_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateDUNS
GLOBAL_ULTIMATE_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateOrganization
DOMESTIC_ULTIMATE_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateDUNS
DOMESTIC_ULTIMATE_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateOrganization
PARENT_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentDUNS
PARENT_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentOrganization
HEADQUARTERS_DUNS | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersDUNS
HEADQUARTERS_ORGANIZATION | VARCHAR | | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersOrganization

MCO (Managed Care Organization)
Reltio URI: configuration/entityTypes/MCO
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
COMPANY_CUST_ID | VARCHAR | COMPANY Customer ID | configuration/entityTypes/MCO/attributes/COMPANYCustID
NAME | VARCHAR | Name | configuration/entityTypes/MCO/attributes/Name
TYPE | VARCHAR | Type | configuration/entityTypes/MCO/attributes/Type | MCOType
MANAGED_CARE_CHANNEL | VARCHAR | Managed Care Channel | configuration/entityTypes/MCO/attributes/ManagedCareChannel | MCOManagedCareChannel
PLAN_MODEL_TYPE | VARCHAR | PlanModelType | configuration/entityTypes/MCO/attributes/PlanModelType | MCOPlanModelType
SUB_TYPE | VARCHAR | SubType | configuration/entityTypes/MCO/attributes/SubType | MCOSubType
SUB_TYPE2 | VARCHAR | SubType2 | configuration/entityTypes/MCO/attributes/SubType2
SUB_TYPE3 | VARCHAR | Sub Type 3 | configuration/entityTypes/MCO/attributes/SubType3
NUM_LIVES_MEDICARE | VARCHAR | Medicare Number of Lives | configuration/entityTypes/MCO/attributes/NumLives_Medicare
NUM_LIVES_MEDICAL | VARCHAR | Medical Number of Lives | configuration/entityTypes/MCO/attributes/NumLives_Medical
NUM_LIVES_PHARMACY | VARCHAR | Pharmacy Number of Lives | configuration/entityTypes/MCO/attributes/NumLives_Pharmacy
OPERATING_STATE | VARCHAR | State Operating from | configuration/entityTypes/MCO/attributes/Operating_State
ORIGINAL_SOURCE_NAME | VARCHAR | Original Source Name | configuration/entityTypes/MCO/attributes/OriginalSourceName
DISTRIBUTION_CHANNEL | VARCHAR | Distribution Channel | configuration/entityTypes/MCO/attributes/DistributionChannel
ACCESS_LANDSCAPE_FORMULARY_CHANNEL | VARCHAR | Access Landscape Formulary Channel | configuration/entityTypes/MCO/attributes/AccessLandscapeFormularyChannel
EFFECTIVE_START_DATE | DATE | Effective Start Date | configuration/entityTypes/MCO/attributes/EffectiveStartDate
EFFECTIVE_END_DATE | DATE | Effective End Date | configuration/entityTypes/MCO/attributes/EffectiveEndDate
STATUS | VARCHAR | Status | configuration/entityTypes/MCO/attributes/Status | MCOStatus
SOURCE_MATCH_CATEGORY | VARCHAR | Source Match Category | configuration/entityTypes/MCO/attributes/SourceMatchCategory
COUNTRY_MCO | VARCHAR | Country | configuration/entityTypes/MCO/attributes/Country

AFFILIATIONS
Reltio URI: configuration/relationTypes/FlextoDDDAffiliations, configuration/relationTypes/Ownership, configuration/relationTypes/PAYERtoPLAN, configuration/relationTypes/PBMVendortoMCO, configuration/relationTypes/ACOAffiliations, configuration/relationTypes/MCOtoPLAN, configuration/relationTypes/FlextoHCOSAffiliations, configuration/relationTypes/FlextoSAPAffiliations, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, configuration/relationTypes/HCOStoDDDAffiliations, configuration/relationTypes/EnterprisetoBOB, configuration/relationTypes/OtherHCOtoHCOAffiliations, configuration/relationTypes/ContactAffiliations, configuration/relationTypes/VAAffiliations, configuration/relationTypes/PBMtoPLAN, configuration/relationTypes/Purchasing, configuration/relationTypes/BOBtoMCO, configuration/relationTypes/DDDtoSAPAffiliations, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, configuration/relationTypes/ProviderAffiliations, configuration/relationTypes/SAPtoHCOSAffiliations
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
RELATION_URI | VARCHAR | Reltio Relation URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
RELATION_TYPE | VARCHAR | Reltio Relation Type
START_ENTITY_URI | VARCHAR | Reltio Start Entity URI
END_ENTITY_URI | VARCHAR | Reltio End Entity URI
SOURCE | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/Source, configuration/relationTypes/Ownership/attributes/Source, configuration/relationTypes/PAYERtoPLAN/attributes/Source, configuration/relationTypes/PBMVendortoMCO/attributes/Source, configuration/relationTypes/ACOAffiliations/attributes/Source, configuration/relationTypes/MCOtoPLAN/attributes/Source, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Source, configuration/relationTypes/FlextoSAPAffiliations/attributes/Source, configuration/relationTypes/MCOtoMMITORG/attributes/Source, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Source, configuration/relationTypes/EnterprisetoBOB/attributes/Source, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Source, configuration/relationTypes/ContactAffiliations/attributes/Source, configuration/relationTypes/VAAffiliations/attributes/Source, configuration/relationTypes/PBMtoPLAN/attributes/Source, configuration/relationTypes/Purchasing/attributes/Source, configuration/relationTypes/BOBtoMCO/attributes/Source, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Source, configuration/relationTypes/Distribution/attributes/Source, configuration/relationTypes/ProviderAffiliations/attributes/Source, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/Source
LINKED_BY | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoHCOSAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoSAPAffiliations/attributes/LinkedBy, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/LinkedBy
COUNTRY_AFFILIATIONS | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/Country, configuration/relationTypes/Ownership/attributes/Country, configuration/relationTypes/PAYERtoPLAN/attributes/Country, configuration/relationTypes/PBMVendortoMCO/attributes/Country, configuration/relationTypes/ACOAffiliations/attributes/Country, configuration/relationTypes/MCOtoPLAN/attributes/Country, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Country, configuration/relationTypes/FlextoSAPAffiliations/attributes/Country, configuration/relationTypes/MCOtoMMITORG/attributes/Country, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Country, configuration/relationTypes/EnterprisetoBOB/attributes/Country, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Country, configuration/relationTypes/ContactAffiliations/attributes/Country, configuration/relationTypes/VAAffiliations/attributes/Country, configuration/relationTypes/PBMtoPLAN/attributes/Country, configuration/relationTypes/Purchasing/attributes/Country, configuration/relationTypes/BOBtoMCO/attributes/Country, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Country, configuration/relationTypes/Distribution/attributes/Country, configuration/relationTypes/ProviderAffiliations/attributes/Country, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/Country
AFFILIATION_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/AffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/AffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/AffiliationType, configuration/relationTypes/MCOtoMMITORG/attributes/AffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/AffiliationType, configuration/relationTypes/VAAffiliations/attributes/AffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/AffiliationType, configuration/relationTypes/BOBtoMCO/attributes/AffiliationType
PBM_AFFILIATION_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoMMITORG/attributes/PBMAffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/PBMAffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/BOBtoMCO/attributes/PBMAffiliationType
PLAN_MODEL_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/PlanModelType, configuration/relationTypes/PBMVendortoMCO/attributes/PlanModelType, configuration/relationTypes/MCOtoPLAN/attributes/PlanModelType, configuration/relationTypes/MCOtoMMITORG/attributes/PlanModelType, configuration/relationTypes/EnterprisetoBOB/attributes/PlanModelType, configuration/relationTypes/PBMtoPLAN/attributes/PlanModelType, configuration/relationTypes/BOBtoMCO/attributes/PlanModelType | MCOPlanModelType
MANAGED_CARE_CHANNEL | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/PBMVendortoMCO/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoMMITORG/attributes/ManagedCareChannel, configuration/relationTypes/EnterprisetoBOB/attributes/ManagedCareChannel, configuration/relationTypes/PBMtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/BOBtoMCO/attributes/ManagedCareChannel | MCOManagedCareChannel
EFFECTIVE_START_DATE | DATE | | configuration/relationTypes/MCOtoPLAN/attributes/EffectiveStartDate
EFFECTIVE_END_DATE | DATE | | configuration/relationTypes/MCOtoPLAN/attributes/EffectiveEndDate
STATUS | VARCHAR | | configuration/relationTypes/VAAffiliations/attributes/Status

AFFIL_RELATION_TYPE
Reltio URI: configuration/relationTypes/Ownership/attributes/RelationType,
configuration/relationTypes/ACOAffiliations/attributes/RelationType, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType, configuration/relationTypes/ContactAffiliations/attributes/RelationType, configuration/relationTypes/Purchasing/attributes/RelationType, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType, configuration/relationTypes/Distribution/attributes/RelationType, configuration/relationTypes/ProviderAffiliations/attributes/RelationTypeMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_TYPE_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIRELATIONSHIP_GROUP_OWNERSHIPVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_OWNERSHIPVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_ORDERVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipOrderRANKVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/Rank, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/Rank, 
configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/Rank, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/Distribution/attributes/RelationType/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RankAMA_HOSPITAL_IDVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AMAHospitalIDAMA_HOSPITAL_HOURSVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AMAHospitalHoursEFFECTIVE_START_DATEDATEconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/EffectiveStartDate, 
configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/Distribution/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/Distribution/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/EffectiveEndDateACTIVE_FLAGBOOLEANconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/ActiveFlag, 
configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/Distribution/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/ActiveFlagPRIMARY_AFFILIATIONVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/Distribution/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/PrimaryAffiliationAFFILIATION_CONFIDENCE_CODEVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, 
configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCodeRELATIONSHIP_GROUP_ACOAFFILIATIONSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_ACOAFFILIATIONSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCPRelationshipDescriptionRELATIONSHIP_STATUS_CODEVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipStatusCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipStatusCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipStatusCodeHCPtoHCORelationshipStatusRELATIONSHIP_STATUS_REASON_CODEVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCodeHCPtoHCORelationshipStatusReasonCodeWORKING_STATUSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/WorkingStatus, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/WorkingStatus, 
configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/WorkingStatusWorkingStatusRELATIONSHIP_GROUP_HCOSTODDDAFFILIATIONSVARCHARconfiguration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_HCOSTODDDAFFILIATIONSVARCHARconfiguration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_OTHERHCOTOHCOAFFILIATIONSVARCHARconfiguration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_OTHERHCOTOHCOAFFILIATIONSVARCHARconfiguration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_CONTACTAFFILIATIONSVARCHARconfiguration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_CONTACTAFFILIATIONSVARCHARconfiguration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCPRelationshipDescriptionRELATIONSHIP_GROUP_PURCHASINGVARCHARconfiguration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_PURCHASINGVARCHARconfiguration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_DDDTOSAPAFFILIATIONSVARCHARconfiguration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_DDDTOSAPAFFILIATIONSVARCHARconfiguration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_DISTRIBUTIONVARCHARconfiguration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_DISTRIBUTIONVA
RCHARconfiguration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_PROVIDERAFFILIATIONSVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_PROVIDERAFFILIATIONSVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCPRelationshipDescriptionAFFIL_ACOReltio URI: configuration/relationTypes/Ownership/attributes/ACO, configuration/relationTypes/ACOAffiliations/attributes/ACO, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO, configuration/relationTypes/ContactAffiliations/attributes/ACO, configuration/relationTypes/Purchasing/attributes/ACO, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO, configuration/relationTypes/Distribution/attributes/ACO, configuration/relationTypes/ProviderAffiliations/attributes/ACOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACO_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIACO_TYPEVARCHARconfiguration/relationTypes/Ownership/attributes/ACO/attributes/ACOType, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOType, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOType, 
configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeHCOACOTypeACO_TYPE_CATEGORYVARCHARconfiguration/relationTypes/Ownership/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeCategoryHCOACOTypeCategoryACO_TYPE_GROUPVARCHARconfiguration/relationTypes/Ownership/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeGroupHCOACOTypeGroupAFFIL_RELATION_TYPE_ROLEReltio URI: configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role, 
configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RoleMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_TYPE_URIVARCHARGenerated KeyROLE_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIROLEVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role/attributes/Role, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role/attributes/Role, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Role/attributes/RoleRoleTypeRANKVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role/attributes/Rank, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Role/attributes/RankAFFIL_USAGE_TAGReltio URI: configuration/relationTypes/ProviderAffiliations/attributes/UsageTagMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameUSAGE_TAG_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIUSAGE_TAGVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes/UsageTag/attributes/UsageTag" + }, + { + "title": "CUSTOMER_SL schema", + "pageID": "163924327", + "pageLink": "/display/GMDM/CUSTOMER_SL+schema", + "content": "The schema plays the role of the access layer for clients reading MDM data. It includes a set of views that are directly inherited from the CUSTOMER schema. The views have the same structure as the views in the CUSTOMER schema. To learn about view definitions please see CUSTOMER schema. In regional data marts, the schema views have the MDM prefix. In the CUSTOMER_SL schema in the Global Data Mart, views are prefixed with 'P' for the COMPANY Reltio Model, 'I' for the IQVIA Reltio Model, and 'P_HI' for Historical Inactive data for the COMPANY Reltio Model. To speed up access, most views are materialized to physical tables. 
The process is transparent to users. Access views are switched to physical tables automatically when they are available. The refresh process is incremental and connected with the loading process. " + }, + { + "title": "LANDING schema", + "pageID": "163920137", + "pageLink": "/display/GMDM/LANDING+schema", + "content": "The LANDING schema plays the role of the staging database for publishing MDM data from Reltio tenants through the MDM HUB.HUB_KAFKA_DATATarget table for KAFKA events published through a Snowflake pipe.ColumnTypeDescriptionRECORD_METADATAVARIANTMetadata of the KAFKA event, like KAFKA key, topic, partition, create timeRECORD_CONTENTVARIANTEvent payloadLOV_DATATarget table for LOV data publishingColumnTypeDescription IDTEXTLOV object idOBJECTVARIANTReltio RDM json objectMERGE_TREE_DATATarget table for merge_tree exports from ReltioColumnTypeDescription FILENAMETEXTFull S3 file pathOBJECTVARIANTReltio MERGE_TREE json objectHI_DATATarget table for ad-hoc historical inactive dataColumnTypeDescription OBJECTVARIANTHistorical Inactive json object" + }, + { + "title": "PTE_SL", + "pageID": "302687546", + "pageLink": "/display/GMDM/PTE_SL", + "content": "The schema plays the role of the access layer for clients reading data required for PT&E reports. It mimics the structure and logic of those reports. 
To connect to the PTE_SL schema you need to have a proper role assigned:COMM_GBL_MDM_DMART_DEV_PTE_ROLECOMM_GBL_MDM_DMART_QA_PTE_ROLECOMM_GBL_MDM_DMART_STG_PTE_ROLECOMM_GBL_MDM_DMART_PROD_PTE_ROLEwhich are connected with the groups:sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_PTE_ROLE\nsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_PTE_ROLE\nsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_PTE_ROLE\nsfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLEInformation on how to request access is described here: Snowflake - connection guideSnowflake path to the client report: "COMM_GBL_MDM_DMART_PROD_DB"."PTE_SL"."PTE_REPORT"General assumptions for view creation:The views integrate both data models, COMPANY and IQVIA, via a UNION, meaning that they are calculated separately and then joined together. driven_table1.iso_code = entity_uri.country The lang_code from the code translations is always 'en'. If the HCP identifiers aren't provided by the client, there is an option to calculate them dynamically by the number of HCPs having the identifier.Driven tables:DRIVEN_TABLE1This is a view selecting data from the country_config table for countries that need to be added to the PTE_REPORT.Column nameDescriptionISO_CODEISO2 code of the countryNAMECountry nameLABELCountry label (name + iso_code)RELTIO_TENANTEither 'IQVIA' or the region of the Reltio tenant (EMEA/AMER...)HUB_TENANTIndicator of the HUB database the data comes fromSF_INSTANCEName of the Snowflake instance the data comes from (emeaprod01.eu-west-1...)SF_TENANTDATABASEFull database name from which the data comesCUSTOMERSL_PREFIXeither 'i_' for the IQVIA data model or 'p_' for the COMPANY data modelDRIVEN_TABLEV2 / DRIVEN_TABLE2_STATICDRIVEN_TABLEV2 is a view used to get the HCP identifiers and sort them by the count of HCPs that have the identifier. DRIVEN_TABLE2_STATIC is a table containing the list of identifiers used per country and the order in which they're placed in the PTE_REPORT view. 
If the country isn't available in DRIVEN_TABLE2_STATIC the report will use DRIVEN_TABLEV2 to get the identifiers calculated dynamically every time the report is used.Column nameDescriptionISO_CODEISO2 code of the countryCANONICAL_CODECanonical code of the identifierLANG_DESCCode description in EnglishCODE_IDCode idMODELeither 'i' for the IQVIA data model or 'p' for the COMPANY data modelORDER_IDOrder in which the identifier will be available in the PTE_REPORT view. Only identifiers from 1 to 5 will be used.DRIVEN_TABLE3Specialty dictionary provided by the client for the IQVIA data model only. Used for calculating the is_prescriber data.'IS PRESCRIBER' calculation method for IQVIA modelThe path to the dictionary files on S3: pfe-baiaes-eu-w1-project/mdm/config/PTE_DictionariesColumn nameDescriptionCOUNTRY_CODEISO2 code of the countryHEADER_NAMECode nameMDM_CODECode idCANONICAL_CODECanonical code of the identifierLONG_DESCRIPTIONCode description in EnglishPROFESSIONAL_TYPEWhether the specialty is a prescriber or not PTE_REPORT:The PTE_REPORT is the view from which the clients should get their data. It's a UNION of the reports for the IQVIA data model and the COMPANY data model. 
Calculation details may be found in the respective articles:IQVIA: PTE_SL IQVIA MODELCOMPANY: PTE_SL COMPANY MODEL" + }, + { + "title": "Data Sourcing", + "pageID": "347664788", + "pageLink": "/display/GMDM/Data+Sourcing", + "content": "CountryIso CodeMDM RegionData ModelSnowflake ViewFranceFREMEACOMPANYPTE_REPORTArgentinaAEGBLIQVIAPTE_REPORTBrazilBRAMERCOMPANYPTE_REPORTMexicoMXGBLIQVIAPTE_REPORTChileCLGBLIQVIAPTE_REPORTColombiaCOGBLIQVIAPTE_REPORTSlovakiaSKGBLIQVIAPTE_REPORTPhilippinesPKGBLIQVIAPTE_REPORTRéunionREEMEACOMPANYPTE_REPORTSaint Pierre and MiquelonPMEMEACOMPANYPTE_REPORTMayotteYTEMEACOMPANYPTE_REPORTFrench PolynesiaPFEMEACOMPANYPTE_REPORTFrench GuianaGFEMEACOMPANYPTE_REPORTWallis and FutunaWFEMEACOMPANYPTE_REPORTGuadeloupeGPEMEACOMPANYPTE_REPORTNew CaledoniaNCEMEACOMPANYPTE_REPORTMartiniqueMQEMEACOMPANYPTE_REPORTMauritiusMUEMEACOMPANYPTE_REPORTMonacoMCEMEACOMPANYPTE_REPORTAndorraADEMEACOMPANYPTE_REPORTTurkeyTREMEACOMPANYPTE_REPORT_TRSouth KoreaKRAPACCOMPANYPTE_REPORT_KRAll views are available in the global database in the PTE_SL schema." 
+ }, + { + "title": "PTE_SL IQVIA MODEL", + "pageID": "218432348", + "pageLink": "/display/GMDM/PTE_SL+IQVIA+MODEL", + "content": "IQVIA data model specification:name type description Reltio attribute URI LOV Name additional query conditions (IQVIA model) additional query conditions (COMPANY model)HCP_IDVARCHARReltio Entity URIi_hcp.entity_uri or i_affiliations.start_entity_uri only active HCPs are returned (customer_sl.i_hcp.active ='TRUE')i_hcp.entity_uri or i_affiliations.start_entity_uri only active HCPs are returnedHCO_IDVARCHARReltio Entity URIFor the IQVIA model, all affiliations with i_affiliation.active = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned.i_hco.entity_uri select END_ENTITY_URI from customer_sl.i_affiliations where start_entity_uri ='T9u7Ej4' and active = 'TRUE' and relation_type in ('Activity','HasHealthCareRole');select * from customer_sl.p_affiliations where active=TRUE and relation_type = 'ContactAffiliations';WORKPLACE_NAMEVARCHARReltio workplace name or Reltio workplace parent name.configuration/entityTypes/HCO/attributes/NameFor the IQVIA model, all affiliations with i_affiliation.active = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned.i_hco.name must be returned select hco.name from customer_sl.i_affiliations a,customer_sl.i_hco hco where a.end_entity_uri = hco.entity_uri and a.start_entity_uri ='T9u7Ej4' and a.active = 'TRUE' and a.relation_type in ('Activity','HasHealthCareRole');For the COMPANY model, all affiliations with p_affiliation.active=TRUE and relation_type = 'ContactAffiliations'i_hco.nameSTATUSBOOLEANReltio Entity statusi_customer_sl.i_hcp.active mapping rule TRUE = ACTIVEi_customer_sl.p_hcp.active mapping rule TRUE = ACTIVELAST_MODIFICATION_DATETIMESTAMP_LTZEntity update time in 
SnowFlakeconfiguration/entityTypes/HCP/updateTimecustomer_sl.i_entity_update_dates.SF_UPDATE_TIMEi_customer_sl.p_entity_update.SF_UPDATE_TIMEFIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FirstNamei_customer_sl.i_hcp.first_namei_customer_sl.p_hcp.first_nameLAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/LastNamei_customer_sl.i_hcp.last_namei_customer_sl.p_hcp.last_nameTITLE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/TitleLOV Name COMPANY = HCPTitleLOV Name IQVIA = LKUP_IMS_PROF_TITLEselect c.canonical_code from customer_sl.i_hcp hcp,customer_sl.i_code_translations c where hcp.title_lkp = c.code_id e.g. select c.canonical_code from customer_sl.i_hcp hcp,customer_sl.i_code_translations c where hcp.title_lkp = c.code_id and hcp.entity_uri='T9u7Ej4' and c.country='FR';select c.canonical_code from customer_sl.p_hcp hcp,customer_sl.p_codes c where hcp.title_lkp = c.code_idTITLE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/TitleLOV Name COMPANY = HCPTitleLOV Name IQVIA = LKUP_IMS_PROF_TITLEselect c.lang_desc from customer_sl.i_hcp hcp,customer_sl.i_code_translations c where hcp.title_lkp = c.code_id e.g. select c.lang_desc from customer_sl.i_hcp hcp,customer_sl.i_code_translations c where hcp.title_lkp = c.code_id and hcp.entity_uri='T9u7Ej4' and c.country='FR';select c.desc from customer_sl.p_hcp hcp,customer_sl.p_codes c where hcp.title_lkp = c.code_idIS_PRESCRIBER'IS PRESCRIBER' calculation method for IQVIA modelCASE When p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.PRES' then Y CASE When p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.NPRS' then N ELSE To define COUNTRYCountry codeconfiguration/entityTypes/Location/attributes/countrycustomer_sl.i_hcp.countrycustomer_sl.p_hcp.countryPRIMARY_ADDRESS_LINE_1IQVIA: configuration/entityTypes/Location/attributes/AddressLine1COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1select address_line1 from customer_sl.i_address where address_rank=1select 
address_line1 from customer_sl.i_address where address_rank=1 and entity_uri='T9u7Ej4';select a.address_line1 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_LINE_2IQVIA: configuration/entityTypes/Location/attributes/AddressLine2COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2select address_line2 from customer_sl.i_address where address_rank=1select a.address_line2 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_CITYIQVIA: configuration/entityTypes/Location/attributes/CityCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Cityselect city from customer_sl.i_address where address_rank=1select a.city from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_POSTAL_CODEIQVIA: configuration/entityTypes/Location/attributes/Zip/attributes/ZIP5COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5select ZIP5 from customer_sl.i_address where address_rank=1select a.ZIP5 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_STATEIQVIA: configuration/entityTypes/Location/attributes/StateProvinceCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvinceLOV Name COMPANY = Stateselect state_province from customer_sl.i_address where address_rank=1select c.desc from customer_sl.p_codes c,customer_sl.p_addresses a where a.address_rank=1 and a.STATE_PROVINCE_LKP = c.code_id PRIMARY_ADDR_STATUSIQVIA: configuration/entityTypes/Location/attributes/VerificationStatusCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatuscustomer_sl.i_address.verification_statuscustomer_sl.p_addresses.verification_statusPRIMARY_SPECIALTY_CODEconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = HCPSpecialtyLOV Name IQVIA = LKUP_IMS_SPECIALTYe.g. select c.canonical_code from customer_sl.i_specialities s,customer_sl.i_code_translations c where s.specialty_lkp = 
c.code_idand s.entity_uri ='T9liLpi'and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.lang_code = 'en'and c.country = 'FR';select c.canonical_code from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =c.code_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. PRIMARY_SPECIALTY_DESCconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = LKUP_IMS_SPECIALTYLOV Name IQIVIA =LKUP_IMS_SPECIALTYe.gselect  c.lang_desc from customer_sl.i_specialities s,customer_sl.i_code_translations cwhere s.specialty_lkp = c.code_idand s.entity_uri ='T9liLpi'and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.lang_code = 'en'and c.country = 'FR';select c.desc from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =c.code_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. GO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/GOStatusgo_status <> ''CASEWhen i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' then YesCASEWhen i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' then NoELSENULLgo_status <> ''CASEWhen p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' then YCASEWhen p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' then NELSE Not defined(now this is an empty tabel)IDENTIFIER1_CODEVARCHARReltio identyfier code.configuration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_identifiers dwherect.code_id = d.TYPE_LKPThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. 
This is a place for the first one.e.g.select ct.canonical_code, ct.lang_desc, d.id, ct.*,d.* from customer_sl.i_code_translations ct,customer_sl.i_identifiers dwherect.code_id = d.TYPE_LKPand d.entity_uri='T9v0e54'andct.lang_code='en'and ct.country ='FR';select ct.canonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. This is a place for the first one.IDENTIFIER1_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_identifiers dwherect.code_id = d.TYPE_LKPselect ct.desc from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPIDENTIFIER1_VALUEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/IDselect id from customer_sl.i_identifiers.id select id from customer_sl.p_identifiersIDENTIFIER2_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_identifiers dwherect.code_id = d.TYPE_LKPMaximum two identyfiers can be returnedThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. This is a place for the second one.select ct.canonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPMaximum two identifiers can be returnedThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. 
This is a place for the second one.IDENTIFIER2_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_identifiers dwherect.code_id = d.TYPE_LKPselect ct.desc from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPIDENTIFIER2_VALUEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/IDselect i.id from customer_sl.i_identifiers.idselect id from customer_sl.p_identifiersDGSCATEGORYVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategoryCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure dwherect.code_id = d.dgs_category_lkpselect DisclosureBenefitCategory from p_hcpDGSCATEGORY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure dwherect.code_id = d.dgs_category_lkpcomment: select i_code.canonical_code for a valu returned from DisclosureBenefitCategory DGSTITLEVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitTitleLKUP_BENEFITTITLEselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure dwherect.code_id = d.DGS_TITLE_LKPselect DisclosureBenefitTitle from p_hcpDGSTITLE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLEselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure dwherect.code_id = d.DGS_TITLE_LKPcomment: select i_code.canonical_code for a valu returned from DisclosureBenefitTitle DGSQUALITYVARCHARIQIVIA: 
configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitQualityLKUP_BENEFITQUALITYselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure dwherect.code_id = d.DGS_QUALITY_LKPselect DisclosureBenefitQuality from p_hcpDGSQUALITY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityLKUP_BENEFITQUALITYselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure dwherect.code_id = d.DGS_QUALITY_LKPcomment: select i_code.canonical_code for a valu returned from DisclosureBenefitQuality DGSSPECIALTYVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitSpecialtyLKUP_BENEFITSPECIALTYselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure dwherect.code_id = d.DGS_SPECIALTY_LKPDisclosureBenefitSpecialtyDGSSPECIALTY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYselect canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure dwherect.code_id = d.DGS_SPECIALTY_LKPcomment: select i_code.canonical_code for a valu returned from DisclosureBenefitSpecialtySECONDARY_SPECIALTY_DESCVARCHARA query should return values like:select c.LANG_DESC from "COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_SPECIALITIES" s,"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_CODE_TRANSLATIONS" cwhere s.SPECIALTY_LKP =c.CODE_IDand s.RANK=2and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC'and c.LANG_CODE ='en' ← lang code conditionand c.country ='PH' ← country conditionand s.ENTITY_URI ='ENTITI_URI'; ← entity uri conditionEMAILVARCHARA query should return values like:select EMAIL from "COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_EMAIL" where rank= 1 and entity_uri ='ENTITI_URI';  ← entity uri conditionCAUTION: In case when 
multiple values are returned, the first one must be returned as a query result.PHONEVARCHARA query should return values like:select FORMATTED_NUMBER from "COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_PHONE" where RANK=1 and entity_uri ='ENTITI_URI'; ← entity uri conditionCAUTION: If multiple values are returned, the first one must be returned as the query result." + }, + { + "title": "'IS PRESCRIBER' calculation method for IQIVIA model", + "pageID": "218434836", + "pageLink": "/display/GMDM/%27IS+PRESCRIBER%27+calculation+method+for+IQIVIA+model", + "content": "Parameters contained in the SF model:SF xml parameter name in calculation method e.g. value from SF modelcustomer_sl.i_hcp.type_code_lkp hcp.professional_type_cdi_hcp.type_code_lkp LKUP_IMS_HCP_CUST_TYPE:PRESselect c.canonical_code from customer_sl.i_hcp s,customer_sl.i_codes cwheres.SUB_TYPE_CODE_LKP = c.code_id hcp.professional_subtype_cdprof_subtype_codeWFR.TYP.Iselect c.canonical_code from customer_sl.i_specialities s,customer_sl.i_codes cwheres.specialty_lkp = c.code_id and s.rank=1 and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.parents='SPEC'spec.specialty_codespec_codeWFR.SP.IEcustomer_sl.i_hcp.countryhcp.countryi_hcp.countryFRDictionary parameters:profesion_type_subtype.csv as dict_subtypesprofesion_type_subtype_fr.csv as dict_subtypesprofessions_type_subtype.xlsxxmlvalue from file to calculate SF viewe.g. value to calculate SF viewmdm_codedict_subtypes.mdm_codecanonical_codeWAR.TYP.Aprofessional_typedict_subtypes.professional_typeprofessional_typeNon-Prescriber, Prescribercountry_codedict_subtypes.country_codecountry_codeFRprofesion_type_speciality.csv as dict_specialtiesprofesion_type_speciality_fr.csv as dict_specialtiesprofessions_type_subtype.xlsxxmlvalue from file to calculate SF viewe.g. 
value to calculate SF viewmdm_codedict_subtypes.mdm_codecanonical_codeWAC.SP.24professional_typedict_subtypes.professional_typeprofessional_typeNon-Prescriber, Prescribercountry_codedict_subtypes.country_codecountry_codeFRIn a new PTE_SL view the files mentions above are migrated to driven_tabel3. So in a method description, there is an extra condition that matches a dependence with profession subtype or specialty.Method description:Query condition: driven_tabel3.country_code = i_hcp.country and driven_tabel3.canonical_code = prof_subtype_code and driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE'driven_tabel3.country_code = i_hcp.country and driven_tabel3.canonical_code = spec_code and driven_tabel3.header_name='LKUP_IMS_SPECIALTY'CASE         WHEN i_hcp.type_code_lkp ='LKUP_IMS_HCP_CUST_TYPE:PRES' THEN 'Y'         WHEN    coalesce(prof_subtype_code,spec_code,'') = '' THEN 'N'         WHEN    coalesce(prof_subtype_code,'') <> '' THEN                    CASE                             WHEN coalesce(driven_tabel3.canonical_code,'') = '' THEN 'N@1'                             –- for driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE', this is a profession subtype checking condition                             WHEN coalesce(driven_tabel3.canonical_code,'') <> '' THEN                                      –- for driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE', this is a profession subtype checking condition                                        CASE                                                 WHEN driven_tabel3.professional_type = 'Prescriber' THEN 'Y'              –- for driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE', this is a profession subtype checking condition                                                 WHEN driven_tabel3.professional_type = 'Non-Prescriber' THEN 'N'     –- for driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE', this is a profession subtype checking condition                                                 ELSE 'N@2'                
                        END                     END          WHEN    coalesce(spec_code,'') <> '' THEN                     CASE                              WHEN coalesce(driven_tabel3.canonical_code,'') = '' THEN 'N@3'                                –- for driven_tabel3.header_name = 'LKUP_IMS_SPECIALTY', this is a specialty checking condition                              WHEN coalesce(driven_tabel3.canonical_code,'') <> '' THEN                                        –- for driven_tabel3.header_name = 'LKUP_IMS_SPECIALTY', this is a specialty checking condition                                         CASE                                                  WHEN driven_tabel3.professional_type = 'Prescriber' THEN 'Y'                 –- for driven_tabel3.header_name = 'LKUP_IMS_SPECIALTY', this is a specialty checking condition                                                  WHEN driven_tabel3.professional_type = 'Non-Prescriber' THEN 'N'        –- for driven_tabel3.header_name = 'LKUP_IMS_SPECIALTY', this is a specialty checking condition                                                  ELSE 'N@4'                                          END                     END           ELSE 'N@99'END AS IS_PRESCRIBER" + }, + { + "title": "PTE_SL COMPANY MODEL", + "pageID": "234711638", + "pageLink": "/display/GMDM/PTE_SL+COMPANY+MODEL", + "content": "COMPANY data model specification:name typedescription Reltio attribute URILOV Name additional querry conditions (COMPANY model)HCP_IDVARCHARReltio Entity URIi_hcp.entity_uri or i_affiliations.start_entity_urionly active hcp are returned (customer_sl.i_hcp.active ='TRUE')HCO_IDVARCHARReltio Entity URISELECT HCO.ENTITY_URIFROM CUSTOMER_SL.P_HCP HCPINNER JOIN CUSTOMER_SL.P_AFFILIATIONS AF    ON HCP.ENTITY_URI= AF.START_ENTITY_URIINNER JOIN CUSTOMER_SL.P_HCO HCO    ON AF.END_ENTITY_URI = HCO.ENTITY_URIWHERE AF.relation_type = 'ContactAffiliations'AND AF.ACTIVE = 'TRUE';TO - DO An additional conditions that should be included:querry 
needs to return only HCP-HCO pairs for which "P_AFFIL_RELATION_TYPE.RELATIONSHIPDESCRIPTION_LKP" = 'HCPRelationshipDescription:CON' A pair HCP plus HCO must be unique.WORKPLACE_NAMEVARCHARReltio workplace name or Reltio workplace parent name.configuration/entityTypes/HCO/attributes/NameSELECT HCO.NAMEFROM CUSTOMER_SL.P_HCP HCPINNER JOIN CUSTOMER_SL.P_AFFILIATIONS AF    ON HCP.ENTITY_URI= AF.START_ENTITY_URIINNER JOIN CUSTOMER_SL.P_HCO HCO    ON AF.END_ENTITY_URI = HCO.ENTITY_URIWHERE AF.relation_type = 'ContactAffiliations'AND AF.ACTIVE = 'TRUE';A pair HCP plus HCO must be unique.STATUSBOOLEANReltio Entity statusi_customer_sl.p_hcp.activemapping rule TRUE = ACTIVELAST_MODIFICATION_DATETIMESTAMP_LTZEntity update time in SnowFlakeconfiguration/entityTypes/HCP/updateTimep_entity_update.SF_UPDATE_TIMEFIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FirstNamei_customer_sl.p_hcp.first_nameLAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/LastNamei_customer_sl.p_hcp.last_nameTITLE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/TitleLOV Name COMPANY = HCPTitleLOV Name IQIVIA = LKUP_IMS_PROF_TITLEselect c.canonical_code from customer_sl.p_hcp hcp,customer_sl.p_codes cwhere hcp.title_lkp = c.code_idTITLE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/TitleLOV Name COMPANY = THCPTitleLOV Name IQIVIA = LKUP_IMS_PROF_TITLEselect c.desc from customer_sl.p_hcp hcp,customer_sl.p_codes cwhere hcp.title_lkp = c.code_idIS_PRESCRIBERCASEWhen p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.PRES' then YCASEWhen p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.NPRS' then NELSETo define                                                COUNTRYCountry codeconfiguration/entityTypes/Location/attributes/countrycustomer_sl.p_hcp.countryPRIMARY_ADDRESS_LINE_1IQIVIA: configuration/entityTypes/Location/attributes/AddressLine1COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1select a. 
address_line1 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_LINE_2IQIVIA: configuration/entityTypes/Location/attributes/AddressLine2COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2select a. address_line2 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_CITYIQIVIA: configuration/entityTypes/Location/attributes/CityCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Cityselect a.city from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_POSTAL_CODEIQIVIA: configuration/entityTypes/Location/attributes/Zip/attributes/ZIP5COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5select a.ZIP5 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_STATEIQIVIA: configuration/entityTypes/Location/attributes/StateProvinceCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvinceLOV Name COMPANY = Stateselect c.desc fromcustomer_sl.p_codes c,customer_sl.p_addresses awhere a.address_rank=1anda.STATE_PROVINCE_LKP = c.code_id PRIMARY_ADDR_STATUSIQIVIA: configuration/entityTypes/Location/attributes/VerificationStatusCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatuscustomer_sl.p_addresses.verification_statusPRIMARY_SPECIALTY_CODEconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = HCPSpecialtyLOV Name IQIVIA =LKUP_IMS_SPECIALTYselect c.canonical_code from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =c.code_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. 
PRIMARY_SPECIALTY_DESCconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = LKUP_IMS_SPECIALTYLOV Name IQIVIA =LKUP_IMS_SPECIALTYselect c.desc from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =c.code_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. GO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/GOStatusgo_status <> ''CASEWhen p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' then YCASEWhen p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' then NELSE Not defined(now this is an empty tabel)IDENTIFIER1_CODEVARCHARReltio identyfier code.configuration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.canonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. This is a place for the first one.IDENTIFIER1_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.desc from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPIDENTIFIER1_VALUEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/IDselect id from customer_sl.p_identifiersIDENTIFIER2_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.canonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPMaximum two identifiers can be returnedThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. 
This is a place for the second one.IDENTIFIER2_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.desc from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPIDENTIFIER2_VALUEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/IDselect id from customer_sl.p_identifiersDGSCATEGORYVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategoryCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect DisclosureBenefitCategory from p_hcpDGSCATEGORY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOcomment: select i_code.canonical_code for a value returned from DisclosureBenefitCategory DGSTITLEVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitTitleLKUP_BENEFITTITLEselect DisclosureBenefitTitle from p_hcpDGSTITLE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLEcomment: select i_code.canonical_code for a value returned from DisclosureBenefitTitle DGSQUALITYVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitQualityLKUP_BENEFITQUALITYselect DisclosureBenefitQuality from p_hcpDGSQUALITY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityLKUP_BENEFITQUALITYcomment: select i_code.canonical_code for a value returned from DisclosureBenefitQuality DGSSPECIALTYVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyCOMPANY: 
configuration/entityTypes/HCP/attributes/DisclosureBenefitSpecialtyLKUP_BENEFITSPECIALTYDisclosureBenefitSpecialtyDGSSPECIALTY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYcomment: select i_code.canonical_code for a value returned from DisclosureBenefitSpecialtySECONDARY_SPECIALTY_DESCVARCHAREMAILVARCHARPHONEVARCHAR" + }, + { + "title": "Global Data Mart", + "pageID": "196886082", + "pageLink": "/display/GMDM/Global+Data+Mart", + "content": "This section describes the structure of the MDM GLOBAL Data Mart in Snowflake. The GLOBAL Data Mart contains consolidated data from multiple regional data marts.Databases:The Global MDM Data Mart connects all markets using Snowflake DB Replication (if in a different zone) or a Local DB (if in the same zone): DEV/QA/STG/PRODMDM_REGIONMDM Region detailsSnowflake InstanceSnowflake DB nameTypeModelEMEAlinkhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART__DBlocalP / P_HIAMERlinkhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comCOMM_AMER_MDM_DMART__DBreplicaP / P_HIUSlinkhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_replicaP / P_HIAPAClinkhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART__DBlocalP / P_HIEUlinkhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART__DBlocalIConsolidated GLOBAL Schema:The COMM_GBL_MDM_DMART__DB database includes the following schemas:CUSTOMER - main schema containing consolidated views for all COMPANY models.CUSTOMER_SL - access schema for users containing a set of views accessing CUSTOMER schema 
objectsP_ - COMPANY Reltio Model views, prefixed with 'P'P_HI - COMPANY Reltio Model views with Historical Inactive OneKey crosswalksI_  - Ex-US data in the IQIVIA Reltio model, prefixed with 'I'AES_RS_SL - schema containing views that mimic the Redshift data mart.Users accessing the CUSTOMER_SL schema can query across all markets, keeping in mind the following details:P_ prefixed viewsP_HI prefixed viewsI_ prefixed viewsConsolidated views from all markets that use the "P" Model.The first column in each view is MDM_REGION, indicating which market a given row belongs to. Each market may contain a different number of columns, and some columns that exist in one market may not be available in another. The Consolidated views aggregate all columns from all markets.Corresponding data model: Dynamic views for COMPANY MDM ModelConsolidated views from all markets that use the "P_HI" Model.The first column in each view is MDM_REGION, indicating which market a given row belongs to. Each market may contain a different number of columns, and some columns that exist in one market may not be available in another. 
The Consolidated views aggregate all columns from all markets.View build based on the Legacy IQVIA Reltio Model, from EU market that is using "I" Model"Corresponding data model: Dynamic views for IQIVIA MDM ModelGLOBALInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_DEV_DBEMEA + AMER + US+ APAC + EUonce per dayQAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_QA_DBEMEA + AMER + US+ APAC + EUonce per daySTGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_STG_DBEMEA + AMER + US+ APAC + EUonce per dayPRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_PROD_DBEMEA + AMER + US+ APAC + EUevery 2hRolesNPROD = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxPTE_SLWarehouseAD Group NameCOMM_GBL_MDM_DMART__DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__DEVOPS_ROLECOMM_GBL_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__MTCH_AFFIL_ROLECOMM_GBL_MDM_DMART__METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__METRIC_ROLECOMM_GBL_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__MDM_ROLECOMM_GBL_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__READ_ROLECOMM_GBL_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__DATA_ROLECOMM_GBL_MDM_DMART__PTE_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__PTE_ROLEPRODRol
e NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxPTE_SLWarehouseAD Group NameCOMM_GBL_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLECOMM_GBL_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_GBL_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_METRIC_ROLECOMM_GBL_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_MDM_ROLECOMM_GBL_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_READ_ROLECOMM_GBL_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLECOMM_GBL_MDM_DMART_PROD_PTE_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLE" + }, + { + "title": "Global Data Materialization Process", + "pageID": "356800042", + "pageLink": "/display/GMDM/Global+Data+Materialization+Process", + "content": "" + }, + { + "title": "Regional Data Marts", + "pageID": "196886987", + "pageLink": "/display/GMDM/Regional+Data+Marts", + "content": "The regional data mart is presenting MDM data from one region.  Data are loaded from one selected Reltio instance. They are being refreshed more frequently than the global mart. 
They are a good choice for clients operating in local markets.EMEAInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_DEV_DBwn60kG248ziQSMWevery day between 2 am - 4 am ESTQAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_QA_DBvke5zyYwTifyeJSevery day between 2 am - 4 am ESTSTGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_STG_DBDzueqzlld107BVWevery day between 2 am - 4 am EST *Due to many projects running on the environment the refresh time has been temporarily changed to "every 2 hours" for the client's convenience.PRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/COMM_EMEA_MDM_DMART_PROD_DBXy67R0nDA10RUV6every 2 hoursRolesNPROD = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group NameCOMM_EMEA_MDM_DMART__DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__DEVOPS_ROLECOMM_EMEA_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__MTCH_AFFIL_ROLECOMM_EMEA_MDM_DMART__METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__METRIC_ROLECOMM_EMEA_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__MDM_ROLECOMM_EMEA_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__READ_ROLECOMM_EMEA_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLECOMM_EMEA_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_EMEA_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_METRIC_ROLECOMM_EMEA_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_MDM_ROLECOMM_EMEA_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_READ_ROLECOMM_EMEA_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLEAMERInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/COMM_AMER_MDM_DMART_DEV_DBwJmSQ8GWI8Q6Fl1every day between 2 am - 4 am ESTQAhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/COMM_AMER_MDM_DMART_QA_DB805QOf1Xnm96SPjevery day between 2 am - 4 am ESTSTGhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/COMM_AMER_MDM_DMART_STG_DBK7I3W3xjg98Dy30every day between 2 am - 4 am ESTPRODhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comCOMM_AMER_MDM_DMART_PROD_DBYs7joaPjhr9DwBJevery 2 hoursRolesNPROD = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_AMER_MDM_DMART__DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__DEVOPS_ROLECOMM_AMER_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__MTCH_AFFIL_ROLECOMM_AMER_MDM_DMART__METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__METRIC_ROLECOMM_AMER_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__MDM_ROLECOMM_AMER_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__READ_ROLECOMM_AMER_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_AMER_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLECOMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_RORead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_ROCOMM_AMER_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_METRIC_ROLECOMM_AMER_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MDM_ROLECOMM_AMER_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_READ_ROLECOMM_AMER_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLEUSInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_DEVsw8BkTZqjzGr7hnevery day between 2 am - 4 am ESTQAhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_QArEAXRHas2ovllvTevery day between 2 am - 4 am ESTSTGhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_STG48ElTIteZz05XwTevery day between 2 am - 4 am ESTPRODhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_PROD9kL30u7lFoDHp6Xevery 2 hoursRolesNPROD = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM__MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_MTCH_AFFIL_ROLECOMM__MDM_DMART_ANALYSIS_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Onlysfdb_us-east-1_amerdev01_COMM__MDM_DMART_ANALYSIS_ROLECOMM__MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_METRIC_ROLECOMM_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_MDM_ROLECOMM__MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_READ_ROLECOMM_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_PROD_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLECOMM_PROD_MDM_DMART_ANALYSIS_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_ANALYSIS_ROLECOMM_PROD_MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_METRIC_ROLECOMM_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MDM_ROLECOMM_PROD_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_READ_ROLECOMM_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLEAPACInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_DEV_DBw2NBAwv1z2AvlkgSevery day between 2 am - 4 am ESTQAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_QA_DBxs4oRCXpCKewNDKevery day between 2 am - 4 am ESTSTGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_STG_DBY4StMNK3b0AGDf6every day between 2 am - 4 am ESTPRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/COMM_APAC_MDM_DMART_PROD_DBsew6PfkTtSZhLdWevery 2 hoursRolesNPROD = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_APAC_MDM_DMART__DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__DEVOPS_ROLECOMM_APAC_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__MTCH_AFFIL_ROLECOMM_APAC_MDM_DMART__METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__METRIC_ROLECOMM_APAC_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__MDM_ROLECOMM_APAC_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__READ_ROLECOMM_APAC_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_APAC_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLECOMM_APAC_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_APAC_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_METRIC_ROLECOMM_APAC_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_MDM_ROLECOMM_APAC_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_READ_ROLECOMM_APAC_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLEEU (ex-us)Instance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_DEV_DBFLy4mo0XAh0YEbNevery day between 2 am - 4 am ESTQAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_QA_DBAwFwKWinxbarC0Zevery day between 2 am - 4 am ESTSTGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_STG_DBFW4YTaNQTJEcN2gevery day between 2 am - 4 am ESTPRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/COMM_EU_MDM_DMART_PROD_DBFW2ZTF8K3JpdfFlevery 2 hoursRolesNPROD = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM__MDM_DMART_OPS_ROLEDEVFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_MTCH_AFFIL_ROLECOMM_EU__MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EU__MDM_DMART_METRIC_ROLECOMM_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_MDM_ROLECOMM_EU_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_READ_ROLECOMM_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group NameCOMM_PROD_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLECOMM_EU_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EU_PROD_MDM_DMART_METRIC_ROLECOMM_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MDM_ROLECOMM_PROD_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_READ_ROLECOMM_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLE" + }, + { + "title": "MDM Admin Management API", + 
"pageID": "294663752", + "pageLink": "/display/GMDM/MDM+Admin+Management+API", + "content": "" + }, + { + "title": "Description", + "pageID": "294663759", + "pageLink": "/display/GMDM/Description", + "content": "MDM Admin is a management API, automating numerous repeatable tasks and enabling the end user to perform them, without the need to make a request and wait for one of MDM Hub's engineers to pick it up.At its current state, MDM Hub provides below services:Modify Kafka offsetGenerate outbound eventsReconcile an entity/relation (only used by MDM Hub Ops Team)Each functionality is described in detail in the following chapters.API URL listTenantEnvironmentMDM Admin API Base URLSwagger URL - API DocumentationGBL (EX-US)DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-dev/swagger-ui/index.html QAhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-qa/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-qa/swagger-ui/index.html STAGEhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-stage/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-stage/swagger-ui/index.html PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-prod/https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-prod/swagger-ui/index.html GBLUSDEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-dev/swagger-ui/index.html QAhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-qa/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-qa/swagger-ui/index.html STAGEhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-stage/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-stage/swagger-ui/index.html 
PRODhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-prod/https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-prod/swagger-ui/index.html EMEADEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html QAhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-qa/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-qa/swagger-ui/index.html STAGEhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-stage/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-stage/swagger-ui/index.html PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-emea-prod/https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-prod/swagger-ui/index.html AMERDEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-dev/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-dev/swagger-ui/index.html QAhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-qa/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-qa/swagger-ui/index.html STAGEhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-stage/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-stage/swagger-ui/index.html PRODhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-amer-prod/https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-prod/swagger-ui/index.html APACDEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-dev/https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-dev/swagger-ui/index.html QAhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-qa/https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-qa/swagger-ui/index.html STAGEhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-stage/https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-stage/swagger-ui/index.html 
PRODhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-apac-prod/https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-prod/swagger-ui/index.html Modify Kafka offsetIf you are consuming from MDM Hub's outbound topic, you can now modify the offsets to skip/re-send messages. Please refer to the Swagger Documentation for additional details.Example 1Environment is EMEA DEV. The user wants to consume the last 100 messages from their topic again, using topic "emea-dev-out-full-test-topic-1" and consumer-group "emea-dev-consumergroup-1".Steps:Disable the consumer. Kafka will not allow offset manipulation if the topic/consumergroup is being used.Send below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\n{\n "topic": "emea-dev-out-full-test-topic-1", \n "groupId": "emea-dev-consumergroup-1",\n "shiftBy": -100\n}\nEnable the consumer. The last 100 events will be re-consumed.Example 2The user wants to consume all available messages from the topic again.Steps:Disable the consumer. Kafka will not allow offset manipulation if the topic/consumergroup is being used.Send below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\n{\n "topic": "emea-dev-out-full-test-topic-1", \n "groupId": "emea-dev-consumergroup-1",\n "offset": "earliest"\n}\nEnable the consumer. All events from the topic will be available for consumption again.Resend EventsAllows re-sending events to MDM Hub's outbound Kafka topics, with filtering by Entity Type (entity or relation), modification date, country and source. Please refer to the Swagger Documentation for more details. An example use scenario is described below.Generated events are filtered by the topic routing rule (by country, event type etc.). 
Generating events for some country may not result in anything being produced on the topic, if this country is not added to the filter.Before starting a Resend Events job, please make sure that the country is already added to the routing rule. Otherwise, request additional country to be added (TODO: link to the instruction).ExampleFor development purposes, user needs to generate 10k of events to his "emea-dev-out-full-test-topic-1" topic for the new market - Belgium (BE).Steps:Send below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend\n{\n "countries": [\n "be"\n ],\n "objectType": "ENTITY",\n "limit": 10000,\n "reconciliationTarget": "emea-dev-out-full-test-topic-1"\n}\nA process will start on MDM Hub's side, generating events on this topic. Response to the request will contain the process ID (dag_run_id):\n{\n "dag_id": "reconciliation_system_amer_dev",\n "dag_run_id": "manual__2022-11-30T14:12:07.780320+00:00",\n "execution_date": "2022-11-30T14:12:07.780320+00:00",\n "state": "queued"\n}\nYou can check the status of this process by sending below request:\nGET https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/status/manual__2022-11-30T14:12:07.780320+00:00\nResponse:\n{\n "dag_id": "reconciliation_system_amer_dev",\n "dag_run_id": "manual__2022-11-30T14:12:07.780320+00:00",\n "execution_date": "2022-11-30T14:12:07.780320+00:00",\n "state": "started"\n}\nOnce the process is completed, all the requested events will have been sent to the topic." 
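The offset-modification and resend requests above can be sketched as small payload builders; a minimal Python sketch, assuming the EMEA DEV base URL from the table above. The helper names (`offset_request`, `resend_request`) are illustrative, not part of the MDM Admin API, and authentication is omitted.

```python
import json

# Base URL for the EMEA DEV admin API, taken from the URL table above.
BASE = "https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev"


def offset_request(topic, group_id, shift_by=None, offset=None):
    """Build the URL and JSON body for POST /kafka/offset.

    Pass exactly one of shift_by (relative move, e.g. -100) or
    offset ("earliest"/"latest"), matching the two examples above.
    """
    if (shift_by is None) == (offset is None):
        raise ValueError("pass exactly one of shift_by or offset")
    body = {"topic": topic, "groupId": group_id}
    if shift_by is not None:
        body["shiftBy"] = shift_by
    else:
        body["offset"] = offset
    return BASE + "/kafka/offset", json.dumps(body)


def resend_request(countries, topic, limit):
    """Build the URL and JSON body for POST /events/resend (ENTITY objects)."""
    body = {
        "countries": countries,
        "objectType": "ENTITY",
        "limit": limit,
        "reconciliationTarget": topic,
    }
    return BASE + "/events/resend", json.dumps(body)


# The tuples can then be sent with any HTTP client, e.g.
# requests.post(url, data=body, headers={"Content-Type": "application/json"}).
url, body = offset_request(
    "emea-dev-out-full-test-topic-1", "emea-dev-consumergroup-1", shift_by=-100
)
```

Remember that the consumer must be disabled before sending an offset request, as Kafka rejects offset manipulation while the consumer group is active.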
+ }, + { + "title": "Requesting Access", + "pageID": "294663762", + "pageLink": "/display/GMDM/Requesting+Access", + "content": "Access to the MDM Admin Management API should be requested via email sent to MDM Hub's DL: DL-ATP_MDMHUB_SUPPORT@COMPANY.com.The chapters below contain the required details and email templates.Modify Kafka OffsetRequired details:Team name (including Person of Contact)List of topicsList of consumergroupsUsername (already used for Kafka, API etc.)Email template:\nHi Team,\n\nPlease provide us with access to the MDM Admin API. Details below:\n\nAPI: Kafka Offset\nTeam name: MDM Hub\nTopics:\n - emea-dev-out-full-test-topic\n - emea-qa-out-full-test-topic \n - emea-stage-out-full-test-topic \nConsumergroups: \n - emea-dev-hub \n - emea-qa-hub \n - emea-stage-hub \nUsername: mdm-hub-user\n\nBest Regards,\nPiotr\nResend EventsRequired details:Team name (including Person of Contact)List of topicsUsername (already used for Kafka, API etc.)Email template:\nHi Team,\n\nPlease provide us with access to the MDM Admin API. Details below:\n\nAPI: Resend Events\nTeam name: MDM Hub\nTopics: \n - emea-dev-out-full-test-topic\nUsername: mdm-hub-user\n\nBest Regards,\nPiotr\n" + }, + { + "title": "Flows", + "pageID": "164470069", + "pageLink": "/display/GMDM/Flows", + "content": "" + }, + { + "title": "Batch clear ETL data load cache", + "pageID": "333154693", + "pageLink": "/display/GMDM/Batch+clear+ETL+data+load+cache", + "content": "DescriptionThis is the batch operation to clear the batch cache. The process was designed to clear the Mongo cache (removes records from batchEntityProcessStatus) for a specified batch name, sourceId type and value. 
This process is an adapter to the /batchController/{batchName}/_clearCache operation exposed by the mdmhub batch service that allows the user to clear the cache.Link to clear batch cache by crosswalk documentation exposed by Batch Service Clear Cache by crosswalksLink to HUB UI documentation: HUB UI User Guide Flow: The client delivers a file including the list of source types and values to be cleared by HUB. The file is uploaded to the S3 resource by the MDM HUB UI.The clear batch process is triggered by the MDM HUB Admin service.The process parses the input files and calls the Batch Service API to clear the cache.File load through UI details:MAX SizeMax file size is 128MBHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without BOM.Input fileFile format: CSV Encoding: UTF-8EOL: UnixHow to set this up using Notepad++:Set encoding:Set EOL to Unix:Check (bottom right corner):Column headers:SourceType - source crosswalk type that describes entitySourceValue - source crosswalk value that describes entityInput file exampleSourceType;SourceValueReltio;upIP01WSAP;3000201428clear_cache_ex.csvInternalsAirflow process name: clear_batch_service_cache_{{ env }}" + }, + { + "title": "Batch merge & unmerge", + "pageID": "164470091", + "pageLink": "/pages/viewpage.action?pageId=164470091", + "content": "DescriptionThis is the batch operation to merge/unmerge entities in Reltio. The process was designed to execute the force merge operation between Reltio objects. In Reltio, there are merge rules that automatically merge objects, but the user may explicitly define the merge between objects. This process is the adapter to the _merge or _unmerge operation that allows the user to specify a CSV file with multiple entries so there is no need to execute the API multiple times.  Flow: The client delivers files including the list of merge/unmerge operations to be executed by HUB. 
Files must be placed in an S3 resource controlled by MDM HUB either by a client or MDM HUB support via HUB UI. The batch process is triggered by Airflow directly or by HUB UI. The process parses the input files and calls the Reltio API to merge or unmerge entities.The result of the process is the report file generated and published to S3File load through UI details:MAX SizeMax file size is 128MB or 10k recordsHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without BOM. Merge operation Input fileFile format: CSV Encoding: UTF-8EOL: UnixHow to set this up using Notepad++:Set encoding:Set EOL to Unix:Check (bottom right corner):File name format: merge_YYYYMMDD.csvDrop location: DEV: s3://pfe-baiaes-eu-w1-nprod-project/mdm/DEV/merge_unmerge_entities/input/STAGE: s3://pfe-baiaes-eu-w1-nprod-project/mdm/STAGE/merge_unmerge_entities/input/PROD: Column headers:The column names are kept for backward compatibility. The winner of the merge is always the entity that was created earlier. There is currently no possibility to select an explicit winner via the merge_unmerge batch.WinnerSourceName - source name of the source entity: the survivor of the merge operation or the entity that will be splitWinnerId - id of the source entity: the survivor of the merge operation or the entity that will be splitLoserSourceName - source name of the target entity: the loser of the merge operation LoserId - id of the target entity: the loser of the merge operation In the output file there are two additional fields:responseStatus - the response statusresponseErrorMessage - the error messageMerge input file example\nWinnerSourceName;WinnerId;LoserSourceName;LoserId\nRELTIO;15hgDlsd;RELTIO;1JRPpffH\nRELTI;15hgDlsd;RELTIO;1JRPpffH\nOutput fileFile format: CSV Encoding: UTF-8File name format: status_merge_YYYYMMDD_.csv   - the number of the file process in the current day. Starting with 1 to n. 
Drop location: DEV: s3://pfe-baiaes-eu-w1-nprod-project/mdm/DEV/merge_unmerge_entities/output/YYYYMMDD_hhmmss/STAGE: s3://pfe-baiaes-eu-w1-nprod-project/mdm/DEV/merge_unmerge_entities/output/YYYYMMDD_hhmmss/PROD: Column headers:sourceId.type - source name of the source entity: the survivor of the merge operation or the entity that will be splitsourceId.value - id of the source entity: the survivor of the merge operation or the entity that will be splitstatus - the response statuserrorCode - the error codeerrorMessage - the error messageMerge output file example\nsourceId.type,sourceId.value,status,errorCode,errorMessage\nmerge_RELTIO_RELTIO,0009e93_00Ff82E,updated,,\nmerge_GRV_GRV,6422af22f7c95392db313216_23f45427-8cdc-43e6-9aea-0896d4cae5f8,updated,,\nmerge_RELTI_RELTIO,15hgDlsd_1JRPpffH,notFound,EntityNotFoundByCrosswalk,Entity not found by crosswalk in getEntityByCrosswalk [Type:RELTI Value:15hgDlsd]\nUnmerge operation Input fileFile format: CSV Encoding: UTF-8File name format: unmerge_YYYYMMDD_.csv   - the number of the file process in the current day. Starting with 1 to n. Drop location: DEV: s3://pfe-baiaes-eu-w1-nprod-project/mdm/DEV/merge_unmerge_entities/input/STAGE: s3://pfe-baiaes-eu-w1-nprod-project/mdm/STAGE/merge_unmerge_entities/input/Column headers:SourceURI - uri of the source entityTargetURI - uri of the extracted entityUnmerge input file example\nSourceURI;TargetURI\n15hgG6nP;15hgG6nQ1\n15hgG6qc;15hgG6rq\nOutput fileFile format: CSV Encoding: UTF-8File name format: status_unmerge_YYYYMMDD_.csv   - the number of the file process in the current day. Starting with 1 to n. 
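The merge input format described above (UTF-8 without BOM, Unix line endings, semicolon-separated, fixed header row, at most 10k records) can be validated before upload; a minimal Python sketch — `parse_merge_file` is a hypothetical helper for pre-flight checks, not part of MDM HUB.

```python
import csv
import io

# Header required by the merge input file, per the format description above.
MERGE_HEADER = ["WinnerSourceName", "WinnerId", "LoserSourceName", "LoserId"]


def parse_merge_file(data):
    """Validate and parse merge input bytes per the documented format:
    UTF-8 without BOM, Unix (LF) line endings, ';' delimiter, fixed header."""
    if data.startswith(b"\xef\xbb\xbf"):
        raise ValueError("file must be UTF-8 without BOM")
    if b"\r" in data:
        raise ValueError("file must use Unix (LF) line endings")
    reader = csv.DictReader(io.StringIO(data.decode("utf-8")), delimiter=";")
    if reader.fieldnames != MERGE_HEADER:
        raise ValueError("expected header " + ";".join(MERGE_HEADER))
    rows = list(reader)
    if len(rows) > 10_000:  # the batch accepts at most 10k records
        raise ValueError("too many records (max 10k)")
    return rows


sample = b"WinnerSourceName;WinnerId;LoserSourceName;LoserId\nRELTIO;15hgDlsd;RELTIO;1JRPpffH\n"
rows = parse_merge_file(sample)
```

Running such a check locally before dropping the file to S3 avoids the most common rejection causes (BOM, Windows line endings, wrong delimiter).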
Column headers:SourceURI - uri of the source entityTargetURI - uri of the extracted entityresponseStatus - the response statusresponseErrorMessage - the error messageUnmerge output file example\nsourceId.type,sourceId.value,status,errorCode,errorMessage\nunmerge_RELTIO_RELTIO,01lAEll_01jIfxx,updated,,\nunmerge_RELTIO_RELTIO,0144V4D_01EFVyb,updated,,\nInternalsAirflow process name: merge_unmerge_entities" + }, + { + "title": "Batch reload MapChannel data", + "pageID": "407896553", + "pageLink": "/display/GMDM/Batch+reload+MapChannel+data", + "content": "DescriptionThis process is used to reload source data from GCP/GRV systems. The user has two ways to indicate the data to be reloaded:CSV file - contains lines with entity URIs or crosswalk valuesMongo query - only entities meeting the criteria will be reloadedAn Airflow DAG is used to control the flow  Flow: The client delivers files including the list of entity URIs/crosswalk values. Files must be placed in an S3 resource controlled by MDM HUB either by a client via HUB UI or MDM HUB support.The Airflow DAG is triggered.The process parses the input and queries Mongo for the selected entities.For each entity, events are sent to the raw GCP/GRV input topics.The result of the process is the report file generated and published to S3File load through UI details:MAX SizeMax file size is 128MBInput file examplereload_map_channel_data.csv Output fileFile format: CSV Encoding: UTF-8File name format: report__reload_map_channel_data_YYYYMMDD_.csv   - the number of the file process in the current day. Starting with 1 to n. 
Column headers: TODOOutput file example TODOSourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,status,errorCode,errorMessageReltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)SAP,3000201428,HCOIT.SAP,3000201428,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:3000201428]InternalsAirflow process name: reload_map_channel_data_{{ env }}" + }, + { + "title": "Batch Reltio Reindex", + "pageID": "337846347", + "pageLink": "/display/GMDM/Batch+Reltio+Reindex", + "content": "DescriptionThis is the operation to execute the Reltio Reindex API. The process was designed to get the input CSV file with entity URIs and schedule the Reltio Reindex API. More details about the Reltio API are available here: 5. Reltio ReindexHUB wraps the Entity URIs and schedules a Reltio Task.  Flow: The client delivers files including the list of entity URIs. The file is uploaded to the S3 resource by MDM HUB UI.The Reltio Reindex process is triggered by the MDM HUB Admin service.The process parses the input files and calls the Reltio API.File load through UI details:MAX SizeMax file size is 128MB. The user should be able to load around 7.4M entity URI lines in one file to fit into a 128MB file size. Please check the file size before uploading. Larger files will be rejected.Please be aware that a 128MB file upload may take a few minutes depending on the user's network performance. 
Please wait until processing is finished and the response appears.How to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without BOM.Input fileFile format: CSV Encoding: UTF-8EOL: UnixHow to set this up using Notepad++:Set encoding:Set EOL to Unix:Check (bottom right corner):Column headers:N/A - do not add headersInput file exampleentities/E0pV5Xmentities/1CsgdXN4entities/2O5RmRireltio_reindex.csvInternalsAirflow process name: reindex_entities_mdm_{{ env }}" + }, + { + "title": "Batch update identifiers", + "pageID": "234704200", + "pageLink": "/display/GMDM/Batch+update+identifiers", + "content": "DescriptionThis is the batch operation to update identifiers in Reltio. The process was designed to update identifiers selected by identifier lookup code. This process is an adapter to the /entities/_updateAttributes operation exposed by the mdmhub manager service that allows the user to modify nested attributes using specific filters.The source for the batch process is a CSV in which one row corresponds to a single identifier that should be changed.The batch service is used to control the flow  Flow: The client delivers files including the list of identifiers that should be updated. Files must be placed in an S3 resource controlled by MDM HUB either by a client via HUB UI or MDM HUB support.The batch process is triggered by Airflow manually or on a schedule.The process parses the input files and calls the Reltio API to update identifiers.The result of the process is the report file generated and published to S3.File load through UI details:MAX SizeMax file size is 128MB or 10k recordsHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without BOM. 
Input fileFile format: CSV Encoding: UTF-8EOL: UnixHow to setup this using Notepad++:Set encoding:Set EOL to Unix:Check (bottom right corner):File name format: update_identifiers_YYYYMMDD_.csv   - the number of the file process in the current day. Starting with 1 to n. Drop location: GBL:DEV: s3://pfe-atp-eu-w1-nprod-mdmhub/gbl/dev/inbound/update_identifiersSTAGE: s3://pfe-atp-eu-w1-nprod-mdmhub/gbl/stage/inbound/update_identifiersPROD: s3://pfe-baiaes-eu-w1-project/mdm/inbound/update_identifiersEMEA:DEV: s3://pfe-atp-eu-w1-nprod-mdmhub/emea/dev/inbound/update_identifiersQA: s3://pfe-atp-eu-w1-nprod-mdmhub/emea/qa/inbound/update_identifiersSTAGE: s3://pfe-atp-eu-w1-nprod-mdmhub/emea/stage/inbound/update_identifiersPROD: s3://pfe-atp-eu-w1-prod-mdmhub/emea/prod/inbound/update_identifiersColumn headers:SourceCrosswalkType - source crosswalk type that describes entity. If you use "Reltio" then you should use entity uri in SourceCrosswalkValue column. For every other crosswalk type use SourceCrosswalkValue - source crosswalk value that describes entityIdentifierType - identifier type that you want to modifyIdentifierValue - identifier values that you want to set(update/insert/merge). More information in /entities/_updateAttributes documentationIdentifierTrust - trust flag for given identifier, accepted values: Yes, No and . In case of , default value No for AMER, APAC, EMEA and null for GBL will be set.IdentifierSourceName - source name of updated identifier. In case of , default value HUB_ID for AMER, APAC, EMEA and null for GBL will be set.Action - action you want to perform on attribute. 
More information in /entities/_updateAttributes documentationdelete - IGNORE_ATTRIBUTE - IdentifierType has to exist - if it does not exist, do not delete and share the information in the "details" attribute that the target key does not exist. This operation works like DELETE FROM Identifiers WHERE key=(key)update - UPDATE_ATTRIBUTE - IdentifierType has to exist - if it does not exist, share the information in the "details" attribute that the target key does not exist. This operation works like UPDATE Identifiers SET (set) WHERE key=(key)Only allows updating existing attributes (for example, if the ID does not exist in the target - do not update this Identifier and share the information in the details that "ID" does not exist in the target)insert - INSERT_ATTRIBUTE only allows inserting new attributes; if the "set" exists in the target, return the information in the "details" element that such an object already exists. This operation works like INSERT INTO Identifiers VALUES (set). Adds only a new element to the target array.merge - (insert or update) (similar to "update" but it makes an insert if "set" elements do not exist in the target) - updates attributes matched by the key or inserts a new one. If there are multiple keys related to one filter, it updates all matches or inserts a new one. In this case, we are checking the target array. For example, we matched multiple target Identifiers by the "key" and we want to "set" the "ID". If the target identifier does not have the "ID" we are making an INSERT_ATTRIBUTE; if the target attribute contains the "ID" we are making an UPDATE_ATTRIBUTE
After the replace, the target contains 1 new Identifier. The 3 old ones are removed (IGNORE_ATTRIBUTE) and a new one is inserted (INSERT_ATTRIBUTE).TargetCrosswalkType - HUB_ID is a default source that updates the data in Reltio - N/A - keep empty and add just this header.Input file exampleSourceCrosswalkType;SourceCrosswalkValue;IdentifierType;IdentifierValue;IdentifierTrust;IdentifierSourceName;Action;TargetCrosswalkTypeReltio;upIP01W;HCOIT.PFORCERX;TEST9_OEG_1000005218888;;;update;SAP;3000201428;HCOIT.SAP;3000201428;Yes;SAP;update;update_identifier_20220323.csvOutput fileFile format: CSV Encoding: UTF-8File name format: report__update_identifiers_YYYYMMDD_.csv   - the sequential number of the file processed in the current day, starting with 1. Column headers:SourceCrosswalkType - source crosswalk type that describes the entity. If you use "Reltio" then you should use the entity URI in the SourceCrosswalkValue column. For every other crosswalk type use SourceCrosswalkValue - source crosswalk value that describes the entityIdentifierType - identifier type that you want to modifyIdentifierValue - identifier values that you want to set (update/insert/merge). 
More information in the /entities/_updateAttributes documentationstatus - the response statuserrorCode - the error codeerrorMessage - the error messageOutput file example\nSourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,status,errorCode,errorMessage\nReltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)\nSAP,3000201428,HCOIT.SAP,3000201428,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:3000201428]\nInternalsAirflow process name: update_identifiers_{{ env }}" + }, + { + "title": "Callbacks", + "pageID": "164469861", + "pageLink": "/display/GMDM/Callbacks", + "content": "DescriptionThe HUB Callbacks are divided into the following two sections:PreCallback process is responsible for the Ranking of the selected attributes RankSorters. This callback is based on the full enriched events from the "${env}-internal-reltio-full-events". Only events that do not require additional ranking updates in Reltio are published to the next processing stage. Some ranking calculations - like OtherHCOtoHCO - are delayed and processed in PreDylayCallbackService; this functionality was required to gather all changes for relations in time windows and send events to Reltio only after the aggregation window is closed. This limits the number of events and updates to Reltio. OtherHCOtoHCOAffiliations Rankings - more details related to the OtherHCOtoHCO relation ranking with all PreDylayCallbackService and DelayRankActivationProcessor rank details OtherHCOtoHCOAffiliations RankSorter"Post" Callback process is responsible for the specific logic and is based on the events published by the Event Publisher component. Here are the processes executed in the post callback process:AttributeSetter Callback - based on the "${env}-internal-callback-attributes-setter-in" events. Sets additional attributes for the EMEA COMPANY France market, e.g. 
ComplianceMAPPHCPStatusCrosswalkActivator Callback - based on the "${env}-internal-callback-activator-in" events. Activates selected crosswalks or soft-deletes specific crosswalks based on the configuration. CrosswalkCleaner Callback - based on the "${env}-internal-callback-cleaner-in" events. Cleans orphan HUB_Callback crosswalks or soft-deletes specific crosswalks based on the configuration. CrosswalkCleanerWithDelay Callback - based on the "${env}-internal-callback-cleaner-with-delay-in" events. Cleans orphan HUB_Callback crosswalks or soft-deletes specific crosswalks based on the configuration with delay (aggregates events in a time window).DanglingAffiliations Callback - based on the "${env}-internal-callback-orphan-clean-in" events. Removes orphan affiliations once one of the start or end objects was removed. Derived Addresses Callback - based on the "${env}-internal-callback-derived-addresses-in" events. Rewrites an Address from HCO to HCP, connected to each other with some type of Relationship. Used on the IQVIA tenant.HCONames Callback for IQVIA model - based on the "${env}-internal-callback-hconame-in" events. Calculates HCO Names. HCONames Callback for COMPANY model - based on the "${env}-internal-callback-hconame-in" events. Calculates HCO Names in the COMPANY Model.NotMatch Callback - based on the "${env}-internal-callback-potential-match-cleaner-in" events. Based on the created relationships between two matched objects, removes the match using the _notMatch operation. More details about the HUB callbacks are described in the sub-pages. Flow diagram" + }, + { + "title": "AttributeSetter Callback", + "pageID": "250150261", + "pageLink": "/display/GMDM/AttributeSetter+Callback", + "content": "DescriptionCallback auto-fills configured static Attributes, as long as the profile's attribute values meet the requirements. If no requirement (rule) is met, an optional cleaner deletes the existing, Hub-provided value for this attribute. 
AttributeSetter uses Manager's Update Attributes async interface.Flow DiagramStepsAfter event has been routed from EventPublisher, check the following:Entity must be active and have at least one active crosswalk Event Type must match configured allowedEventTypesCountry must match configured allowedCountriesFor each configured setAttribute do the following:Check if the entityType matches For each rules do the following:Check if criteria are metIf criteria are met:Check if Hub crosswalk already provides the AutoFill value (either Attribute's value or lookupCode must match)If attribute value is already present, do nothingIf attribute is not present:Add inserting AutoFill attribute to the list of changesCheck if Hub crosswalk provides another value for this attributeIf Hub crosswalk provides another value, add deleting that attribute value to the list of changesIf no rules were matched for this setAttribute and cleaner is enabled:Find the Hub-provided value of this attribute and add deleting this value to the list of changes (if exists)Map the list of changes into a single AttributeUpdateRequest object and send to Manager inbound topic.ConfigurationExample AttributeSetter rule (multiple allowed):\n - setAttribute: "ComplianceMAPPHCPStatus"\n entityType: "HCP"\n cleanerEnabled: true\n rules:\n - name: "AutoFill HCPMHS.Non-HCP IF SubTypeCode = Administrator (HCPST.A) / Researcher/Scientist (HCPST.C) / Counselor/Social Worker (HCPST.CO) / Technician/Technologist (HCPST.TC)"\n setValue: "HCPMHS.Non-HCP"\n where:\n - attribute: "SubTypeCode"\n values: [ "HCPST.A", "HCPST.C", "HCPST.CO", "HCPST.TC" ]\n\n - name: "AutoFill HCPMHS.Non-HCP IF SubTypeCode = Allied Health Professionals (HCPST.R) AND PrimarySpecialty = Psychology (SP.PSY)"\n setValue: "HCPMHS.Non-HCP"\n where:\n - attribute: "SubTypeCode"\n values: [ "HCPST.R" ]\n - attribute: "Specialities"\n nested:\n - attribute: "Primary"\n values: [ "true" ]\n - attribute: "Specialty"\n values: [ "SP.PSY" ]\n\n - name: 
"AutoFill HCPMHS.HCP for all others"\n setValue: "HCPMHS.HCP"\nRule inserts ComplianceMAPPHCPStatus attribute for every HCP:"HCPMHS.Non-HCP" for every profile having SubTypeCode in [ "HCPST.A", "HCPST.C", "HCPST.CO", "HCPST.TC" ]"HCPMHS.Non-HCP" for every profile having SubTypeCode == "HCPST.R" where one of Specialities == "SP.PSY" and has Primary flag"HCPMHS.HCP" in all other scenariosDependent ComponentsComponentUsageCallback ServiceMain component with flow implementationPublisherGeneration of incoming eventsManagerAsynchronous processing of generated AttributeUpdateRequest events" + }, + { + "title": "CrosswalkActivator Callback", + "pageID": "302701827", + "pageLink": "/display/GMDM/CrosswalkActivator+Callback", + "content": "DescriptionCrosswalkActivator is the opposite of CrosswalkCleaner. There are 4 main processing branches (described in more detail in the "Algorithm" section):WhenOneKeyExistsAndActive - activate all crosswalks having:crosswalk type as in the configuration,crosswalk value same as an existing, active Onekey crosswalk in this profile.WhenAnyOneKeyExistsAndActive - activate all crosswalks of types same as in configuration, as long as there is at least one active Onekey crosswalk present in this profile.WhenAnyCrosswalksExistsAndActive - activate all crosswalks of types same as in configuration, as long as there is at least one active crosswalk present in this profile (crosswalk types in the except section of configuration are not considered as active crosswalks).ActivateOneKeyReferbackCrosswalkWhenRelatedOneKeyCrosswalkExistsAndActive - activate OneKey referback crosswalk (with lookupCode in configuration), as long as there is at least one active Onekey crosswalk present in this profileAlgorithmFor each event from ${env}-internal-callback-activator-in topic, do:filter by event country (configured),filter by event type (configured, usually only CHANGED events),Processing: WhenOneKeyExistsAndActivefind all active Onekey crosswalks (exact Onekey 
source name is fetched from configuration)for each crosswalk in the input event entity do:if crosswalk type is in the configured list (getWhenOneKeyExistsAndActive) and crosswalk value is the same as one of active Onekey crosswalks, send activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as partialOverride.Processing: WhenAnyOneKeyExistsAndActivefind all active Onekey crosswalks (exact Onekey source name is fetched from configuration)for each crosswalk in the input event entity do:if crosswalk type is in the configured list (getWhenAnyOneKeyExistsAndActive) and active Onekey crosswalks list is not empty, send activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as partialOverride.Processing: WhenAnyCrosswalksExistsAndActivefind all active crosswalks (sources in the configuration except list are filtered out)for each crosswalk in the input event entity do:if crosswalk type is in the configured list (getWhenAnyCrosswalksExistsAndActive) and active crosswalks list is not empty, send activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as partialOverride.Processing: ActivateOneKeyReferbackCrosswalkWhenRelatedOneKeyCrosswalkExistsAndActivefind all OneKey crosswalks,check for active OneKey crosswalk with lookupCode included in the configured list oneKeyLookupCodes,check for related inactive OneKey referback crosswalk with lookupCode included in the configured list referbackLookupCodes,if above conditions are met, send activator request to Manager,activator request contains:entityType,activated OneKey referback crosswalk 
with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as partialOverride.Dependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated activator requests" + }, + { + "title": "CrosswalkCleaner Callback", + "pageID": "164469744", + "pageLink": "/display/GMDM/CrosswalkCleaner+Callback", + "content": "DescriptionThis process removes crosswalks on Entity or Relation objects using the hard-delete or soft-delete operation. There are the following sections in this process.Hard Delete Crosswalks - EntitiesBased on the input configuration, it removes the crosswalk from Reltio once all other crosswalks were removed or inactivated. Once the source decides to inactivate the crosswalk, associated attributes are removed from the Golden Profile (OV), and in that case Rank attributes delivered by the HUB have to be removed. The process is used to remove orphan HUB_CALLBACK crosswalks that are used in the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) process.Hard Delete Crosswalks - RelationshipsThis is similar to the above. The only difference here is that the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) process adds new Rank attributes to the relationship between two objects. Once the relationship is deactivated by the Source, the orphan HUB_CALLBACK crosswalk is removed. Soft Delete Crosswalks This process does not remove the crosswalk from Reltio. It updates the existing crosswalk, providing an additional deleteDate attribute on the soft-deleted crosswalk. In that case the corresponding crosswalk becomes inactive in Reltio. 
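The CrosswalkActivator's WhenOneKeyExistsAndActive branch can be sketched as a pure filter. This is an assumption-laden model: the crosswalk dict shape and the `ONEKEY` source name are illustrative, and the real activator request additionally rewrites Country and clears `deleteDate` via a Manager partialOverride.

```python
# Sketch of the WhenOneKeyExistsAndActive selection: pick currently inactive
# crosswalks whose type is configured and whose value matches an active
# OneKey crosswalk on the same profile. A crosswalk is treated as inactive
# when it carries a non-empty "deleteDate" (assumption).
def crosswalks_to_activate(crosswalks, configured_types, onekey_source="ONEKEY"):
    active_onekey_values = {
        c["value"] for c in crosswalks
        if c["type"] == onekey_source and not c.get("deleteDate")
    }
    return [
        c for c in crosswalks
        if c["type"] in configured_types
        and c.get("deleteDate")                 # currently inactive
        and c["value"] in active_onekey_values
    ]
```

The WhenAnyOneKeyExistsAndActive branch would drop the per-value match and require only that `active_onekey_values` is non-empty.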
There are three types of soft-deletes:always - soft-delete crosswalks based on the configuration once all other crosswalks are removed or inactivated,whenOneKeyNotExists - soft-delete crosswalks based on the configuration once the ONEKEY crosswalk is removed or inactivated. This process is similar to the "always" process, but the activation is only based on the ONEKEY crosswalk inactivation,softDeleteOneKeyReferbackCrosswalkWhenOneKeyCrosswalkIsInactive - soft-delete the ONEKEY referback crosswalk (lookupCode in configuration) once the ONEKEY crosswalk is inactivated.Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-cleaner-in including 'HCO_CHANGED', 'HCP_CHANGED', 'MCO_CHANGED', 'RELATIONSHIP_CHANGED' eventsOnly events with the correct event type are processed.Then checks are run to determine whether it is possible to: hard delete entity crosswalkshard delete relationship crosswalkssoft delete crosswalksIt is possible that for one event multiple checks are going to be activated; in that case, multiple output events will be generated. Once the criteria are successfully fulfilled, the events are generated to the "${env}-internal-async-all-cleaner-callbacks" topic to the next processing step in the Manager component. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:CrosswalkCleanerStream (callback package)Process events, calculate hard or soft-delete requests, and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous processing of generated events" + }, + { + "title": "CrosswalkCleanerWithDelay Callback", + "pageID": "302701874", + "pageLink": "/display/GMDM/CrosswalkCleanerWithDelay+Callback", + "content": "DescriptionCrosswalkCleanerWithDelay works similarly to CrosswalkCleaner. 
It uses the same Kafka Streams topology, but events are trimmed (eliminateNeedlessData parameter - all the fields other than crosswalks are removed), and, most importantly, a deduplication window is added.The deduplication window's parameters are configured; there are no default parameters. EMEA PROD example:8 hour window (Callback Service's config: callback.crosswalkCleanerWithDelay.deduplication.duration)1 hour ping interval (Callback Service's config: callback.crosswalkCleanerWithDelay.deduplication.pingInterval)This means the delay is 8-9 hours.AlgorithmFor more details on algorithm steps, see CrosswalkCleaner Callback.DependenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests" + }, + { + "title": "DanglingAffiliations Callback", + "pageID": "164469754", + "pageLink": "/display/GMDM/DanglingAffiliations+Callback", + "content": "DescriptionDanglingAffiliation Callback consists of two sub-processes:DanglingAffiliations Based On Inactive Objects (legacy)DanglingAffiliations Based On Same Start And End Objects (added in August 2023)" + }, + { + "title": "DanglingAffiliations Based On Inactive Objects", + "pageID": "347635836", + "pageLink": "/display/GMDM/DanglingAffiliations+Based+On+Inactive+Objects", + "content": "DescriptionThe process soft-deletes active relationships whose start or end object has been inactivated. Based on the configuration only REMOVED or INACTIVATE events are processed. It means that once the Start or End object becomes inactive, the process checks the orphan relationship and sends the soft-delete request to the next processing stage. 
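The CrosswalkCleanerWithDelay deduplication window (8h window, 1h ping interval in EMEA PROD) can be modeled roughly as follows. This is a toy model, not the Kafka Streams implementation: `offer`/`ping` and the in-memory dict are assumptions that only illustrate why the observed delay is "window length to window length plus one ping".

```python
class DeduplicationWindow:
    """Toy model of the delayed dedup: the first event for a key opens a
    window, later events for the same key are absorbed, and the single
    surviving event is emitted only once the window has closed - which is
    checked at each ping, hence the 8-9 hour delay for an 8h window."""
    def __init__(self, duration_s):
        self.duration_s = duration_s
        self.pending = {}                      # key -> window start time

    def offer(self, key, now_s):
        """Absorb an event; duplicates within an open window are merged."""
        self.pending.setdefault(key, now_s)

    def ping(self, now_s):
        """Called every pingInterval; emits keys whose window has closed."""
        ready = [k for k, t0 in self.pending.items()
                 if now_s - t0 >= self.duration_s]
        for k in ready:
            del self.pending[k]
        return ready
```

With an 8h window and 1h pings, an event arriving just after a ping waits up to 9 hours before being emitted.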
Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-orphanClean-in including 'HCP_REMOVED', 'HCO_REMOVED', 'MCO_REMOVED', 'HCP_INACTIVATED', 'HCO_INACTIVATED', 'MCO_INACTIVATED' eventsOnly events with the correct event type are processed.In the next step, the Relationship is retrieved from the HUB by StartObjectURI or EndObjectURI.Once the relationship exists and is ACTIVE the Soft-Delete Request is generated to the "${env}-internal-async-all-cleaner-callbacks" topic to the next processing step in the Manager component. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:DanglingAffiliationsStream (callback package)Process events for inactive entities and calculate soft-delete requests and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated eventsHub StoreRelationship Cache" + }, + { + "title": "DanglingAffiliations Based On Same Start And End Objects", + "pageID": "347635839", + "pageLink": "/display/GMDM/DanglingAffiliations+Based+On+Same+Start+And+End+Objects", + "content": "DescriptionThis process soft-deletes looping relations - active relations having the same startObject and endObject.Such loops can be created in one of two ways:merge-on-the-fly of two entities,manual merge of two entitiesboth of these create a RELATIONSHIP_CHANGED event, so the process is based off of RELATIONSHIP_CREATED and RELATIONSHIP_CHANGED events.Unlike the other DanglingAffiliations sub-process, this one does not query the cache for relations, because all the required information is in the processed event.Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-orphanClean-in including RELATIONSHIP_CREATED and RELATIONSHIP_CHANGED eventsOnly events with the correct event type are processed.If there is a 
country list configured, the event country is also checked before processing.Current state of relation in the event is checked for the following:is startObject.objectURI the same as endObject.objectURI?is relation active (no endDate is set)?does the relation type match the configured list of relationTypes (only if configured list is not empty)?If all of the above are true, a soft-delete request is generated to the ${env}-internal-async-all-cleaner-callbacks topic to the next processing step in the Manager component. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:DanglingAffiliationsStream (callback package)Process events for relations and calculate soft-delete requests and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated events" + }, + { + "title": "Derived Addresses Callback", + "pageID": "294677441", + "pageLink": "/display/GMDM/Derived+Addresses+Callback", + "content": "DescriptionThe Callback is a tool for rewriting an Address from HCO to HCP, connected to each other with some type of Relationship.Sequence DiagramFlowProcess is a callback. 
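The looping-relation check from the DanglingAffiliations sub-process above can be sketched as a single predicate. The field names are assumed to mirror the event payload (startObject/endObject URIs, endDate for activity, type against the configured relationTypes list).

```python
# Sketch of the three checks applied to the relation's current state:
# same start/end object, still active (no endDate), and type on the
# configured list (an empty list means "any type"). Field names assumed.
def is_looping_relation(relation, configured_types=()):
    same_ends = (relation["startObject"]["objectURI"]
                 == relation["endObject"]["objectURI"])
    active = not relation.get("endDate")
    type_ok = not configured_types or relation["type"] in configured_types
    return same_ends and active and type_ok
```

Only relations passing all three checks would trigger the soft-delete request to the cleaner topic.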
It operates on four Kafka topics:${env}-internal-callback-derived-addresses-in – input topic, containing simple events:HCP_CREATEDHCP_CHANGEDHCO_CREATEDHCO_CHANGEDHCO_REMOVEDHCO_INACTIVATEDRELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDRELATIONSHIP_REMOVED${env}-internal-callback-derived-addresses-hcp4calc – internal topic, containing HCP URIs${env}-internal-derived-addresses-hcp-create – Manager bundle topic, processing the Addresses sent${env}-internal-async-all-cleaner-callbacks – Manager async topic, cleans orphaned crosswalksStepsAlgorithm has 3 stages: Stage I – Event PublisherEvent Publisher routes all above event types to the ${env}-internal-callback-derived-addresses-in topic, with optional filtering by country/source. Stage II – Callback Service – Preprocessing StageIf event subType ~ HCP_*:pass targetEntity URI to ${env}-internal-callback-derived-addresses-hcp4calcIf event subtype ~ HCO_*:Find all ACTIVE relations of types ${walkRelationType} ending at this HCO in the entityRelations collection.Extract URIs of all HCPs at the starts of these relations and send them to topic ${env}-internal-callback-derived-addresses-hcp4calcIf event subtype ~ RELATIONSHIP_*:Find the relation by URI in the entityRelations collection.Check if the relation type matches the configured ${walkRelationType}Extract the URI of the startObject (HCP) and send it to the topic ${env}-internal-callback-derived-addresses-hcp4calc Stage III – Callback Service – Main StageInput is an HCP URI.Find the HCP by URI in the entityHistory collection. 
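The Derived Addresses callback tracks each rewritten address with a crosswalk keyed by an HCP/HCO id pair, and later deletes crosswalks whose HCO is no longer affiliated. A hypothetical sketch of that bookkeeping, assuming the `${hcpId}_${hcoId}` value format used on this page:

```python
# Illustrative helpers for the DerivedAddresses crosswalk bookkeeping.
# The value format "<hcpId>_<hcoId>" follows this page; orphan_crosswalks
# and its inputs are hypothetical names for the cleanup comparison.
def crosswalk_value(hcp_id, hco_id):
    return f"{hcp_id}_{hco_id}"

def orphan_crosswalks(existing_values, hcp_id, affiliated_hco_ids):
    """Return crosswalk values whose HCO suffix no longer matches any
    Hospital currently affiliated with the HCP (candidates for deletion)."""
    expected = {crosswalk_value(hcp_id, h) for h in affiliated_hco_ids}
    return sorted(set(existing_values) - expected)
```

In the flow above, each orphan value would become a delete-crosswalk request sent to the MDM Manager.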
Check:If we cannot find the entity in entityHistory, log an error and skipIf the found entity has a type other than “configuration/entityTypes/HCP”, log an error and skipIf the entity has status LOST_MERGE/DELETED/INACTIVE, skipIn entityHistory, find all relations of types ${walkRelationType} starting at this HCP, extract the HCO at the end of the relationFor each extracted HCO (Hospital) do:Find the HCO in the entityHistory collectionWrap HCO Addresses in a Create HCP Request:Rewrite all sub-attributes from each ov==true Hospital’s AddressAdd attributes from ${staticAddedFields}, according to strategy: overwrite or underwrite (add if missing)Add the required Country attribute (rewrite from HCP)Add two crosswalks:Data provider ${hubCrosswalk} with value: ${hcpId}_${hcoId}.Contributor provider Reltio type with HCP uri.Send the Create HCP Request to Manager through the bundle topicIf the HCP has a crosswalk of type and sourceTable as below:type: ${hubCrosswalk.type}sourceTable: ${hubCrosswalk.sourceTable}value: ${hcpId}_${hcoId}but its hcoUri suffix does not match any Hospital found, send a request to delete the crosswalk to MDM Manager.ConfigurationThe following configurations have to be made (examples are for GBL tenants).Callback ServiceAdd and handle the following section in the CallbackService application.yml in GBL:\ncallback:\n...\n derivedAddresses:\n enabled: true\n walkRelationType: \n - configuration/relationTypes/HasHealthCareRole\n hubCrosswalk:\n type: HUB_Callback\n sourceTable: DerivedAddresses\n staticAddedFields:\n - attributeName: AddressType\n attributeValue: TYS.P\n strategy: over\n inputTopic: ${env}-internal-callback-derived-addresses-in\n hcp4calcTopic: ${env}-internal-callback-derived-addresses-hcp4calc\n outputTopic: ${env}-internal-derived-addresses-hcp-create\n cleanerTopic: ${env}-internal-async-all-cleaner-callbacks\nSince we are adding a new crosswalk whose cleaning will be handled by the Derived Addresses callback itself, we should exclude this crosswalk from the Crosswalk Cleaner config 
(similar to HcoNames one):\ncallback:\n crosswalkCleaner:\n ...\n hardDeleteCrosswalkTypes:\n ...\n exclude:\n - type: configuration/sources/HUB_Callback\n sourceTable: DerivedAddresses\nManagerAdd below to the MDM Manager bundle config:\nbundle:\n...\n inputs:\n...\n - topic: "${env}-internal-derived-addresses-hcp-create"\n username: "mdm_callback_service_user"\n defaultOperation: hcp-create\nCheck DQ Rules configuration.If there are any rules that may reject the HUB_Callback/DerivedAddresses HCP Create, an exception should be made. Example: Validation Status is required.If Address refEntity is configured to be surrogate, add an exception and new rule, adding MD5 crosswalk to the Address:\n- name: generate address relation and refEnity crosswalk\n preconditions:\n - type: sourceAndSourceTable\n values:\n - source: HUB_Callback\n sourceTable: "DerivedAddresses"\n action:\n type: addressDigest\n value: MD5\n skipRefEntityCreation: false\n skipRefRelationCreation: false\n\n- name: Make surrogate crosswalk on address\n preconditions:\n - type: not\n preconditions:\n - type: sourceAndSourceTable\n values:\n - source: HUB_Callback\n sourceTable: "DerivedAddresses"\n action:\n type: addressCrosswalkValue\n value: surrogate\nEvent PublisherRouting rule has to be added:\n- id: derived_addresses_callback\n destination: "${env}-internal-derived-addresses-in"\n selector: "(exchange.in.headers.reconciliationTarget==null)\n && exchange.in.headers.eventType in ['simple']\n && exchange.in.headers.country in ['cn']\n && exchange.in.headers.eventSubtype in ['HCP_CREATED', 'HCP_CHANGED', 'HCO_CREATED', 'HCO_CHANGED', 'HCO_REMOVED', 'HCO_INACTIVATED', 'RELATIONSHIP_CREATED', 'RELATIONSHIP_CHANGED', 'RELATIONSHIP_REMOVED']"\nDependent ComponentsComponentUsageCallback ServiceMain component with flow implementationManagerProcessing HCP Create, Crosswalk Delete operationsEvent PublisherGeneration of incoming events" + }, + { + "title": "HCONames Callback for IQVIA model", + "pageID": 
"164469742", + "pageLink": "/display/GMDM/HCONames+Callback+for+IQVIA+model", + "content": "DescriptionThe HCO names callback is responsible for calculating HCO Names. At first events are filtered, deduplicated and the list of impacted hcp is being evaluated. Then the new HCO are calculated. And finally if there is a need for update, the updates are being send for asynchronous processing in HUB Callback SourceFlow diagramSteps1. Impacted HCP GeneratorListen for the events on the ${env}-internal-callback-hconame-in topic.Filter out against the list of predefined countries (AI, AN, AG, AR, AW, BS, BB, BZ, BM, BO, BR, CL, CO, CR, CW, DO, EC, GT, GY, HN, JM, KY, LC, MX, NI, PA, PY, PE, PN, SV, SX, TT, UY, VG, VE).Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED).Split into two following branches. Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc.Entity Event Stream1 extract the "Name" attribute from the target entity.2. reject the event if "Name" does not exist3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the duplicate5. find the list of impacted HCPs based on the key6. return a flat stream of the key and the liste.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)Relation Event Stream1. map Event to RelationWrapper(type,uRI,country,startURI,endURI,active,startObjectType,endObjectTyp)2. reject if any of fields missing3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the duplicate5. find the list of impacted HCPs based on the key6. return a flat stream of the key and the liste.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)2. 
HCO Names Update StreamListen for the events on the ${env}-internal-callback-hconame-hcp4calc.The incoming list of HCPs is passed to the calculator (described below).The HcoMainCalculatorResult contains hcpUri, a list of entityAddresses and the mainWorkplaceUri (to update)The result is mapped to the RelationRequest. The RelationRequest is generated to the "${env}-internal-hconames-rel-create" topic.3. HCP Calc Algorithmcalculate HCO NameHCOL1: get HCO from mongo where uri equals HCP.attributes.Workplace.refEntity.urireturn HCOL1.Namecalculate MainHCONameget all target HCO for relations (parameter traverseRelationTypes) when start object id equals HCOL1 uri.for each target HCO (curHCO) doif target HCO is last in hierarchy thenreturn HCO.attributes.Nameelse if target HCO.attributes.TypeCode.lookupCode is on the configured list defined by parameter mainHCOTypeCodes for selected countryreturn HCO.attributes.Nameelse if target HCO.attributes.Taxonomy.StrType.lookupCode is on the configured list defined by parameter mainHCOStructurTypeCodes for selected countryreturn HCO.attributes.Nameelse if target HCO.attributes.ClassofTradeN.FacilityType.lookupCode is on the configured list defined by parameter mainHCOFacilityTypeCodes for selected countryreturn HCO.attributes.Nameelseget all target HCO when start object id is curHCO.uri (recursive call)update HCP addressesfind address in HCP.attributes.Address when Address.refEntity.uri=HCOL1.uriif found and address.HCOName<>calcHCOName or address.MainHcoName<>calcMainHCOName thencreate/update HasAddress relation using HUBCallback sourceTriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:HCONamesUpdateStream (callback package)Evaluates the list of affected HCPs. Based on that, the HCO updates are sent when needed.realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous processing of generated eventsHub StoreCache" + }, + { + "title": "HCONames Callback for COMPANY model", + "pageID": "243863711", + "pageLink": "/display/GMDM/HCONames+Callback+for+COMPANY+model", + "content": "DescriptionHCONames Callback for the COMPANY data model differs from the one for the IQVIA model.The Callback consists of two stages: preprocessing and main processing. The main processing stage takes in HCP URIs, so the preprocessing stage logic extracts the affected HCPs from HCO, HCP, RELATIONSHIP events.During main processing, the Callback calculates trees, where nodes are HCOs (the tree root is always the input HCP) and edges are Relationships. HCOs and MainHCOs are extracted from this tree. MainHCOs are chosen following a business specification from the Callback config. Direct Relationships from HCPs to MainHCOs are created (or cleaned if no longer applicable). 
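The recursive walk up the HCO hierarchy (stop at the first HCO that is last in the hierarchy or whose type code is on a configured "main HCO" list) can be sketched as follows. The `parents`/`hcos` dict shapes are illustrative assumptions, and only one of the three configured code lists is modeled.

```python
# Hedged sketch of the MainHCOName walk: climb HCO-HCO relations upward
# until a stop condition is met, then return that HCO's Name.
# parents: uri -> list of target (upper) HCO uris; hcos: uri -> attributes.
def main_hco_name(start_uri, parents, hcos, main_type_codes):
    for target in parents.get(start_uri, []):
        hco = hcos[target]
        # Stop: target is last in hierarchy, or its type code is configured
        # as a "main" indicator (mainHCOTypeCodes-style list, simplified).
        if not parents.get(target) or hco.get("TypeCode") in main_type_codes:
            return hco["Name"]
        found = main_hco_name(target, parents, hcos, main_type_codes)
        if found:
            return found
    return None
```

Starting from an HCP's workplace HCO, the first hit while climbing becomes the MainHCOName written back to the HCP address.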
If any of HCP's Addresses matches HCO/MainHCO Address, adequate sub-attribute is added to this Address.AlgorithmStage I - preprocessingInput topic: ${env}-internal-callback-hconame-inInput event types:HCO_CREATEDHCO_CHANGEDHCP_CREATEDHCP_CHANGEDRELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDFor each HCO event from the topic:Deduplicate events by key (deduplication window size is configurable),using MongoDB entityRelations collection, build maximum dependency tree (recursive algorithm) consisting of HCPs and HCOs connected with:relations of type equal to hcoHcoTraverseRelationTypes from configuration,relations of type equal to hcoHcpTraverseRelationTypes from configuration,return all HCPs from the dependency tree (all visited HCPs),generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).For each RELATIONSHIP event from the topic:Deduplicate events by key (deduplication window size is configurable),if relation's startObject is HCP:add HCP's entityURI to result list,if relation's startObject is HCO: similarly to HCO events preprocessing, build dependency tree and return all HCPs from the tree. 
HCP URIs are added to the result list,for each HCP on the result list, generate an event and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).For each HCP event from the topic:Deduplicate events by key (deduplication window size is configurable),generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).Stage II - main processingInput topic: ${env}-internal-callback-hconame-hcp4calcFor each HCP from the topic:Deduplicate by entity URI (deduplication window size is configurable),fetch current state of HCP from MongoDB, entityHistory collection,traversing by HCP-HCO relation type from config, find all affiliated HCOs with "CON" descriptors,traversing by HCO-HCO relation type from config, find all affiliated HCOs with MainHCO: "REL.MAI" or "REL.HIE" descriptors,from the "CON" HCO list, find all MainHCO candidates - MainHCO candidate must pass the configured specification. Below is MainHCO spec in EMEA PROD:if not yet existing, create new HcoNames relationship to MainHCO candidates by generating a request and sending to Manager async topic: ${env}-internal-hconames-rel-create,if existing, but not on candidates list, delete the relationship by generating a request and sending to Manager async topic: ${env}-internal-async-all-cleaner-callbacks,if one of input HCP's Addresses matches HCO Address or MainHCO Address, generate a request adding "HCO" or "MainHCO" sub-attribute to the Address and send to Manager async topic: ${env}-internal-hconames-hcp-create.Processing events1. Find Impacted HCPListen for the events on the ${env}-internal-callback-hconame-in topic.Filter out against the list of predefined countries (GB, IE).Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED).Split into two following branches. 
Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc.Entity Event Stream1. extract the "Name" attribute from the target entity.2. reject the event if "Name" does not exist3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the duplicate5. find the list of impacted HCPs based on the key6. return a flat stream of the key and the list, e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)Relation Event Stream1. map Event to RelationWrapper(type,uRI,country,startURI,endURI,active,startObjectType,endObjectType)2. reject if any of the fields is missing3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the duplicate5. find the list of impacted HCPs based on the key6. return a flat stream of the key and the list, e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)2. Select HCOs affiliated with HCPListen for the incoming list of HCPs on the ${env}-internal-callback-hconame-hcp4calc.For each HCP a list of affiliated HCOs is retrieved from the database. The HCP-HCO relation is based on type:configuration/relationTypes/ContactAffiliationsand description:"CON"3. Find Main HCO traversing HCO-HCO hierarchyFor each HCO from the list of selected HCOs above a list of HCOs is retrieved from the database. The HCO-HCO relation is based on type:configuration/relationTypes/OtherHCOtoHCOAffiliationsand description:"REL.MAI", "REL.HIE"The step is repeated recursively until there are no affiliated HCOs or the Subtype matches the one provided in configuration.mainHcoIndicator.subTypeCode (STOP condition)The result is mapped to a RelationRequest. The RelationRequest is published to the "${env}-internal-hconames-rel-create" topic.4. 
Populate HcoName / Main HCO Name in HCP addresses if required So far there are two HCO lists: HCOs affiliated with the HCP and Main HCOs.There is a check whether the HCP fields HCOName and MainHCOName (which are also two lists) match the HCO names.If not, then the HCP update event is generated.Address is a nested attribute in the model. Matching by uri must be replaced by matching by the key on attribute values. The match key will include AddressType, AddressLine1, AddressLine2, City, StateProvince, Zip5. The same key is configured in Reltio for address deduping. Changes to the address key in Reltio must be consulted with the HUB team. The target attributes in addresses will be populated by creating a new HCP address having the same match key + HCOName and MainHCOName by the HubCallback source. Reltio will match the new address with the existing one based on the match key. Each HCP address will have its own HUBCallback crosswalk {type=HUB_Callback, value={Address Attribute URI}, sourceTable=HCO_NAME}5. Create HCO -> Main HCO affiliation if it does not exist Also there is a check whether the HCP outgoing relations point to Main HCOs. Only relations with the type "configuration/relationTypes/ContactAffiliations"and description"MainHCO" are considered.Appropriate relations need to be created and inappropriate ones removed.Data model DependenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests"  },  {    "title": "NotMatch Callback",    "pageID": "164469859",    "pageLink": "/display/GMDM/NotMatch+Callback",    "content": "DescriptionThe NotMatch callback was created to clear the potential match queue for the suspect matches when the Linkage has been created by the DerivedAffiliations batch process. During this batch process, affiliations are created between COV and ONEKEY HCO objects. 
The potential match queue is not cleared and this impacts the Data Steward process because the DS does not know which matches have to be processed through the UI. The potential match queue is cleared during RELATIONSHIP events processing using the "NotMatch callback" process. The process invokes the _notMatch operation in MDM and removes these matches from Reltio. All "_notMatch" matches are visible in the UI in the "Potential Matches" → "Not a Match" tab. Flow diagramStepsEvent publisher publishes simple events to $env-internal-callback-potentialMatchCleaner-in including RELATIONSHIP_CHANGED and RELATIONSHIP_CREATED events with Reltio source (limited to only the ones loaded through the DA batch)Only events with the correct event type are processed: RELATIONSHIP_CHANGED and RELATIONSHIP_CREATEDOnly events with the correct relationship type are processed. Accepted relationship types:FlextoHCOSAffiliationsFlextoDDDAffiliationsThe HUB AUTOLINK Store is searched:if an AUTOLINK match exists in the store, the _notMatch operation is executed in asynchronous modeelse the event is skippedAll _notMatch operations are published to the $env-internal-async-all-notmatch-callbacks topic and the Manager processes these operations in asynchronous mode. 
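The event filtering and AUTOLINK-store lookup steps above can be sketched in Python (a hedged, simplified sketch: the event field names and the dict-based store stand in for the real event payload and the HUB AUTOLINK Store):

```python
# Accepted types taken from the description above.
ACCEPTED_EVENT_TYPES = {"RELATIONSHIP_CHANGED", "RELATIONSHIP_CREATED"}
ACCEPTED_RELATION_TYPES = {"FlextoHCOSAffiliations", "FlextoDDDAffiliations"}

def handle_relationship_event(event, autolink_store):
    """Return a _notMatch operation for the event, or None if it is skipped.

    `event` is a dict with eventType, relationshipType, startURI, endURI;
    `autolink_store` maps (startURI, endURI) pairs to stored AUTOLINK matches.
    Both shapes are illustrative assumptions, not the real payload model.
    """
    if event["eventType"] not in ACCEPTED_EVENT_TYPES:
        return None  # wrong event type: skip
    if event["relationshipType"] not in ACCEPTED_RELATION_TYPES:
        return None  # wrong relationship type: skip
    if autolink_store.get((event["startURI"], event["endURI"])) is None:
        return None  # no AUTOLINK match in the store: skip
    # In the real flow this would be published to
    # $env-internal-async-all-notmatch-callbacks for the Manager.
    return {"operation": "_notMatch",
            "sourceEntityURI": event["startURI"],
            "targetEntityURI": event["endURI"]}
```

Only events that pass all three checks produce an operation; everything else is dropped, which mirrors the "else event is skipped" branch above.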
TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:PotentialMatchLinkCleanerStreamprocesses relationship events in streaming mode and sets _notMatch in MDMrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerReltio Adapter for _notMatch operation in asynchronous modeHub StoreMatches Store"  },  {    "title": "PotentialMatchLinkCleaner Callback",    "pageID": "302702435",    "pageLink": "/display/GMDM/PotentialMatchLinkCleaner+Callback",    "content": "DescriptionAlgorithmCallback accepts relationship events - this is configurable, usually:RELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDFor each event from the inbound topic (${env}-internal-callback-potential-match-cleaner-in):the event is filtered by eventType (acceptedRelationEventTypes list in configuration),the event is filtered by relationship type (acceptedRelationObjectTypes list in configuration),extract startObjectURI and endObjectURI from the event targetRelation,search MongoDB, collection entityMatchesHistory, for records having both URIs in matches and having the same matchType (matchTypesInCache list in configuration),if a record is found in the cache, check if it has already been sent (boolean field in the document),if the record has not yet been sent, generate an EntitiesNotMatchRequest containing two fields:sourceEntityURI,targetEntityURI,add the operation header and send the Request to Manager.DependenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests"  },  {    "title": "PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType)",    "pageID": "164469756",    "pageLink": "/pages/viewpage.action?pageId=164469756",    "content": "DescriptionThe main part of the process is responsible for setting up the Rank attributes on the specific Attributes in Reltio. 
Based on the input JSON events, the difference between the RAW entity and the Ranked entity is calculated and the changes are shared through the asynchronous topic to Manager. Only events that contain no changes are published to the next processing stage, which limits the number of events sent to the external Clients. Only data that is ranked and contains the correct callback is shared further. During processing, if changes are detected, main events are skipped and a callback is executed. This will cause the generation of new events in Reltio and the next calculation. The next calculation should detect 0 changes, but it may occur that the process falls into an infinite loop. Due to this, an MD5 checksum is implemented on the Entity and AttributeUpdate request to prevent such a situation. The PreCallback is set up as a chain of responsibility with the following steps:Enricher Processor Enrich object with RefLookup serviceMultMergeProcessor - change the ID of the main entity to the loser Id when the Main Entity is different from the Target Entity - it means that the merge happened between the timestamp when Reltio generated the EVENT and HUB retrieved the Entity from Reltio. In that case the outcome entity contains 3 IDs RankSorters Calculate rankings - transform entity with correct Ranks attributesBased on the calculated rank generate pre-callback events that will be sent to ManagerGlobal COMPANY ID callback Generation of changes on COMPANYGlobalCustomerIDs Canada Micro-Bricks Autofill Canada Micro-BricksHCPType Callback Calculate HCPType attribute based on Specialty and SubTypeCode canonical Reltio codes. 
Cleaner Processor Clean reference attributes enriched in the first step (saved in mongo only when cleanAdditionalRefAttributes is false)Inactivation Generator Generation of inactivated events (for each changed event)OtherHCOtoHCOAffiliations Rankings Generation of the event to the full-delay topic to process Ranking changes on relationship objects Flow diagramStepsEntity Enricher publishes full enriched events to ${env}-internal-reltio-full-eventsThe event is enriched with additional data required in the ranking process. More details in Affiliation RankSorter, which requires enrichment of the HCO objects when ranking the Affiliations on an HCP. Rankings are calculated based on the implemented RankSorters. Based on the activation criteria and the environment configuration the following Rank Sorters are activated:Address RankSorterAddresses RankSorterAffiliation RankSorterEmail RankSorterPhone RankSorterSpecialty RankSorterIdentifier RankSorterBased on the changes between the sorted Entity and the input entity, a Callback is published to the next processing stage. In that case, the Main Event is skipped.If no new changes are detected, the Main Event is forwarded to further processing.The enriched data required in the Affiliation ranking is cleaned. 
This last step checks the incoming event and generates an additional *_INACTIVATED event type once the Entity/Relation object contains EndDate (is inactive) TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:PrecallbackStream (precallback package)Processes full events, executes ranking services, generates callbacks, and publishes calculated events to the EventPublisher componentrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationEntity EnricherGenerates incoming full eventsManagerProcesses callbacks generated by this serviceHub StoreCache-Store"  },  {    "title": "Global COMPANY ID callback",    "pageID": "218447103",    "pageLink": "/display/GMDM/Global+COMPANY+ID+callback",    "content": "The process provides a unique Global COMPANY ID to each entity. The current solution on the Reltio side overwrites an entity's Global COMPANY ID when it loses a merge. The Global COMPANY ID pre-callback solution was created to keep the Global COMPANY Id as a unique value for entity_uri.To fulfill the requirement a solution based on the COMPANY Global ID Registry is prepared. It includes the elements below:Modification on Orchestrator/Manager side - during the entity creation processCreation of COMPANYGlobalId Pre-callback Modification on entity history to enrich the search processLogical ArchitectureModification on Orchestrator/Manager side - during the entity creation processProcess descriptionThe request is sent to the HUB Manager - it may come from any allowed source, like ETL loading or the direct channel. The getCOMPANYIdOrRegister service is called and the entityURI with COMPANYGlobalId is stored in the COMPANYIdRegistry. From an external system point of view, the response to a client is modified. COMPANY Global Id is a part of the main attributes section in the JSON file (not in a nest). 
In response, there is information about OV true and false:
{
    "uri": "entities/19EaDJ5L",
    "status": "created",
    "errorCode": null,
    "errorMessage": null,
    "COMPANYGlobalCustomerID": "04-125652694",
    "crosswalk": {
        "type": "configuration/sources/RX_AUDIT",
        "value": "test1_104421022022_RX_AUDIT_1",
        "deleteDate": ""
    }
}
{
    "uri": "entities/entityURI",
    "type": "configuration/entityTypes/HCP",
    "createdBy": "username",
    "createdTime": 1000000000000,
    "updatedBy": "username",
    "updatedTime": 1000000000000,
    "attributes": {
        "COMPANYGlobalCustomerID": [
            { "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID", "ov": true, "value": "04-111855581", "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrkG2D" },
            { "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID", "ov": false, "value": "04-123653905", "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrosrm" },
            { "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID", "ov": false, "value": "04-124022162", "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrhcNY" },
            { "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID", "ov": false, "value": "04-117260591", "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrnM10" },
            { "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID", "ov": false, "value": "04-129895294", "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1mrOsvf6P" },
            { "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID", "ov": false, "value": "04-112615849", "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2ZNzEowk3" },
            { "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID", "ov": false, "value": "04-111851893", "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2LG7Grmul" }
        ],
3. How to store GlobalCOMPANYId process diagram - business level.Creation of COMPANYGlobalId Pre-callbackA publisher event model is extended with two new values:COMPANYGlobalCustomerIDs - a list of IDs. For some merge events, there are two entityURI IDs. The order of the IDs must match the order of the IDs in the entitiesURIs field.parentCOMPANYGlobalCustomerID - it has a value only for the LOST_MERGE event type. It contains the winner entityURI.
data class PublisherEvent(
    val eventType: EventType?,
    val eventTime: Long? = null,
    val entityModificationTime: Long? = null,
    val countryCode: String? = null,
    val entitiesURIs: List<String> = emptyList(),
    val targetEntity: Entity? = null,
    val targetRelation: Relation? = null,
    val targetChangeRequest: ChangeRequest? = null,
    val dictionaryItem: DictionaryItem? = null,
    val mdmSource: String?,
    val viewName: String? = DEFAULT_VIEW_NAME,
    val matches: List<*>? = null,
    val COMPANYGlobalCustomerIDs: List<String> = emptyList(),
    val parentCOMPANYGlobalCustomerID: String? = null,
    @JsonIgnore
    val checksumChanged: Boolean = false,
    @JsonIgnore
    val isPartialUpdate: Boolean = false,
    @JsonIgnore
    val isReconciliation: Boolean = false
)
Changes are made in the entityHistory collection on the MongoDB side.For each object in the collection, we also store COMPANYGlobalCustomerID:to have a relation between entityURI and COMPANYGlobalCustomerId,to make it possible to search for an entity that lost a merge.Additionally, new fields are stored in the Snowflake structure in %_HCP and %_HCO views in the CUSTOMER_SL schema, like:COMPANY_GLOBAL_CUSTOMER_IDPARENT_COMPANY_GLOBAL_CUSTOMER_IDFrom an external system point of view, those internal changes are prepared to make the GlobalCOMPANYID field unique.In case of overwriting the GlobalCOMPANYID on the Reltio MDM side (lost merge), the pre-callback's main task is to search for the original value in the COMPANYIdRegistry. It will then insert this value into the entity in Reltio MDM that has been overwritten due to the lost merge.Process diagram: Search LOST_MERGE entity with its first Global COMPANY IDProcess diagram:Process description:MDM HUB gets SEARCH calls from an external system. The search parameter is the Global COMPANY ID.Verify the entity status. If the entity status is 'LOST_MERGE' then replace COMPANYGlobalCustomerId with parentCOMPANYGlobalCustomerId in the search requestMake a search call in Reltio with the enriched dataDependent components"  },  {    "title": "Canada Micro-Bricks",    "pageID": "250138445",    "pageLink": "/display/GMDM/Canada+Micro-Bricks",    "content": "DescriptionThe process was designed to auto-fill the Micro Brick values on Addresses for Canadian market entities. The process is based on event streaming: the main event is recalculated based on the current state and, by comparison with the current mapping file, the changes are generated. 
The generated change (partial event) updates Reltio, which leads to another change. Only when the entity is fully updated is the main event published to the output topic and processed in the next stage in the event publisher. The process also registers the Changelog events on the topic. The Changelog events are saved only when the state of the entity is not partial. The Changelog events are required in the ReloadService that is triggered by the Airflow DAG. Business users may change the mapping file; this triggers the reload process, changelog events are processed and the updates are generated in Reltio.For Canada, we created a new brick type "Micro Brick" and implemented a new pre-callback service to populate the brick codes based on the postal code mapping file:95% of postal codes won't be in the file and the MicroBrick code should be set to the first characters of the postal codeThe mapping file will contain postal code - MicroBrick code pairsThe mapping file will be delivered monthly, usually with no change. However, 1-2 times a year the Business will go through a re-mapping exercise that could cause significant change. Also, a few minor changes may happen (e.g., add a new pair, etc.). A monthly change process will be added to the Airflow scheduler as a DAG. This DAG will be scheduled and will generate the export from Snowflake; when there are mapping changes, changelog events will trigger updates to the existing MicroBrick codes in Reltio. 
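The postal-code resolution rule described above (use the mapping-file entry when present, otherwise fall back to the first characters of the postal code) can be sketched as follows; the function name and the `prefix_len` parameter are illustrative stand-ins for the configured "numberOfPostalCodeCharacters":

```python
def micro_brick_code(postal_code: str, mapping: dict, prefix_len: int = 3) -> str:
    """Resolve the Micro Brick value for a Canadian postal code.

    `mapping` stands in for the micro-bricks-mapping.csv file loaded as a
    dict; roughly 95% of postal codes are expected to miss the mapping and
    fall back to the postal-code prefix.
    """
    return mapping.get(postal_code, postal_code[:prefix_len])
```

For example, a mapped code returns its mapped value, while an unmapped Canadian postal code such as "H3Z 2Y7" falls back to its prefix.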
A new BrickType code has been added for Micro Brick - "UGM"Flow diagramLogical ArchitecturePreCallback LogicReload LogicStepsOverview Reltio attributes
Brick:
    "uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Brick",
Brick Type:
    RDM: A new BrickType code has been added for Micro Brick - "UGM"
    "uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Brick/attributes/Type",
    "lookupCode": "rdm/lookupTypes/BrickType",
Brick Value:
    "uri": "configuration/entityTypes/HCO/attributes/Addresses/attributes/Brick/attributes/Value",
    "lookupCode": "rdm/lookupTypes/BrickValue",
PostalCode:
    "uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5",
Canada postal code format, e.g. K1A 0B1
PreCallback LogicFlow:Activation:Check if the feature flag activation is true and the acceptedCountries list contains the entity countryTake into account only the CHANGED and CREATED events in this pre-callback implementationSteps:For each address in the entity check:
Check if the Address contains BrickType= microBrickType and BrickValue!=null and PostalCode!=null
    Check if PostalCode is in the micro-bricks-mapping.csv file
    if true, compare
        if different, generate UPDATE_ATTRIBUTE
        if in sync, add AddressChange with all attributes to MicroBrickChangelog
    if false, compare BrickValue with "numberOfPostalCodeCharacters" from PostalCode
        if different, generate UPDATE_ATTRIBUTE
        if in sync, add AddressChange with all attributes to MicroBrickChangelog
Check if the Address does not contain BrickType= microBrickType and BrickValue==null and PostalCode !=null
    check if PostalCode is in the micro-bricks-mapping.csv file
    if true, generate INSERT_ATTRIBUTE
    if false, get "numberOfPostalCodeCharacters" from PostalCode and generate INSERT_ATTRIBUTE
After the Addresses array is checked, the main event is blocked when partial. 
Only when there are 0 changes is the main event forwarded:if there are changes, send partialUpdate and skip the main event depending on forwardMainEventsDuringPartialUpdateif there are 0 changes, send MainEvent and push MicroBrickChangelog to the changelog topicNote: The service has 2 roles – the main role is to check the PostalCode for each address against the mapping file and generate MicroBrick Changes (INSERT (initial), UPDATE (changes)). The second role is to push MicroBrickChangelog events when 0 changes are detected. It means this flow should keep the changelog topic in sync with all changes that are happening in Reltio (address was added/removed/changed). Because the ReloadService will work on these changelog events and requires the exact URI of the BrickValue, this service needs to push all MicroBrickChangelog events with calculatedMicroBrickUri and calculatedMicroBrickValue and the current value of postalCode for the specific address represented by the address URI.Reload Logic (Airflow DAG)Flow: ActivationBusiness users make changes on the Snowflake side to the micro bricks mapping.StepsThe DAG is scheduled once a month and processes changes made by the Business users; this triggers the Reload Logic on Callback-Service componentsGet changes from Snowflake and generate the micro-bricks-mapping.csv fileIf there are 0 changes END the processIf there are changes in the micro-bricks-mapping.csv file push the changes to Consul. Load the current Configuration to GIT and push micro-bricks-mapping.csv to Consul.Trigger an API call on Callback-Service to reload the Consul configuration - this causes the Pre-Callback processors and the ReloadService to use the new mapping files. 
Only after this operation is successful go to the next step:Copy events from the current topic to the reload topic using a temporary fileNote: the micro-brick process is divided into 2 steps:Pre-Callback generates ChangeLog events to the $env-internal-microbricks-changelog-eventsReload service reads the events from $env-internal-microbricks-changelog-reload-eventsThe main goal here is to copy events from one topic to another using the Kafka Console Producer and Consumer. The copy is made by the Kafka Console Consumer: we generate a temporary file with all events; the Consumer has to poll all events and wait 2 min until no new events are in the topic. After this time the Kafka Console Producer sends all events to the target topic.After the events are in the target $env-internal-microbricks-changelog-reload-events topic the next step described below starts automatically. Reload Logic (Callback-Service)Flow:Activation:Callback-Service exposes an API to reload the Consul Configuration - because these changes are made at most once per month, there is no need to schedule this process internally in the service. 
The reload is made by the DAG and reloads the mapping file inside callback-service.Only after the Consul Configuration is reloaded are the events pushed from the $env-internal-microbricks-changelog-events to the $env-internal-microbricks-changelog-reload-events.This triggers the MicroBrickReloadService because it is based on Kafka-Streams – the service subscribes to events in real timeSteps:New events to the $env-internal-microbricks-changelog-reload-events will trigger the following:Kafka Stream consumer that will read the changelogTopicFor each MicroBrickChangelog event check:for each address in the addresses changes check:check if PostalCode is in the micro-bricks-mapping.csv fileif true and the current mapping value is different than calculatedMicroBrickValue → generate UPDATE_ATTRIBUTEif false and calculatedMicroBrickValue is different than "numberOfPostalCodeCharacters" from PostalCode → generate UPDATE_ATTRIBUTEGather all changes and push them to the $env-internal-async-all-bulk-callbacksThe reload is required because it may happen that:A new row was addedThen AddressChange.postalCode will be in the micro-bricks-mapping.csv, which means that calculatedMicroBrickValue will be different than the one that we now have in the mapping file, so we need to trigger UPDATE_ATTRIBUTE.The existing row was updatedThen AddressChange.postalCode will be in the micro-bricks-mapping.csv and the calculatedMicroBrickValue will be different than the one that we now have in the mapping file, so we need to trigger UPDATE_ATTRIBUTEThe existing row was removedThen AddressChange.postalCode will be missing in the mapping file, so we are going to compare calculatedMicroBrickValue with "numberOfPostalCodeCharacters" from PostalCode; this will be a difference, so UPDATE_ATTRIBUTE will be generatedNote: The data model requires the calculatedMicroBrickUri because we need to trigger UPDATE_ATTRIBUTE on the specified BrickValue on a specific Address, so an exact URI is required to work properly with the 
Reltio UPDATE_ATTRIBUTE operation. Only INSERT_ATTRIBUTE requires the URI only on the address attribute, and the body will contain BrickType and BrickValue (this insert is handled in the pre-callback implementation). The changes made by the ReloadService will generate the next changes after the mapping file was updated. Once we trigger this event Reltio will generate the change; this change will be processed by the pre-callback service (MicroBrickProcessor). The result of this processor will be no-change-detected (entity and mapping file are in sync) and new CHANGELOG event generation. It may happen that during the ReloadService run new Changelog events will be constantly generated, but this will not impact the current process because events from the original topic to the target topic are triggered by the manual copy during reloading. Additionally, the 24h compaction window on Kafka will overwrite old changes with new changes generated from the pre-callback. So we will have only the one newest key on the kafka topic after this time, and these changes will be copied to the reload process after the next business change (1-2 times a year)Attachment docs with more details:IMPL: TEST:Data Model and ConfigurationChangeLog Event
CHANGELOG Event:

Kafka KEY: entityUri

Body:
data class MicroBrickChangelog(
    val entityUri: String,
    val addressesChanges: List<AddressChange>,
)
data class AddressChange(
    val addressUri: String,
    val postalCode: String,
    val calculatedMicroBrickUri: String,
    val calculatedMicroBrickValue: String,
)

TriggersTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback:Canada Micro-Brick LogicFull events trigger the pre-callback stream and during processing, partial events are processed with generated changes. 
If data is in sync, the partial event is not generated and the main event is forwarded to external clientsrealtime - events streamUser - triggers a change in mappingAPI: Callback-service - sync consul ConfigurationPre-Callback:ReloadService - streamingThe business user changes the mapping file. The process refreshes the Consul store, copies data to the changelog topic and this triggers real-time processing on the Reload serviceManual Trigger by Business Userrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component of flow implementationEntity EnricherGenerates incoming full eventsManagerProcesses callbacks generated by this service"  },  {    "title": "RankSorters",    "pageID": "302687133",    "pageLink": "/display/GMDM/RankSorters",    "content": ""  },  {    "title": "Address RankSorter",    "pageID": "164469761",    "pageLink": "/display/GMDM/Address+RankSorter",    "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. an Address provided by source "Reltio" is higher in the hierarchy than an Address provided by the "CRMMI" source. 
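A minimal sketch of how such a source-priority map drives ordering; the map excerpt, field names, and the fallback position are illustrative (the full configured map and the fallback rules are given below):

```python
# Excerpt of the configured source hierarchy (lower number = higher priority).
SOURCE_ORDER = {"Reltio": 1, "EVR": 2, "OK": 3, "CRMMI": 14}

def sort_by_source(addresses, fallback=99):
    """Order addresses by the source hierarchy; addresses from sources
    missing in the map drop to the fallback position."""
    return sorted(addresses, key=lambda a: SOURCE_ORDER.get(a["source"], fallback))
```

In the real RankSorter this is only one key among several (validation status, address type, rank, LUD, label), applied in the configured order.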
Based on this configuration, each address will be sorted in the following order:addressSource: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "JPDWH": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "CRMMI": 14 "FACE": 15 "KOL_OneView": 16 "GRV": 17 "GCP": 18 "MAPP": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23Additionally, Address Rank Sorting is based on the following configuration:Address will be sorted based on the AddressType attribute in the following order:addressType: "[TYS.P]": 1 "[TYS.PHYS]": 2 "[TYS.S]": 3 "[TYS.L]": 4 "[TYS.M]": 5 "[Mailing]": 6 "[TYS.F]": 7 "[TYS.HEAD]": 8 "[TYS.PHAR]": 9 "[Unknown]": 10Address will be sorted based on the ValidationStatus attribute in the following order:addressValidationStatus: "[STA.3]": 1 "[validated]": 2 "[Y]": 3 "[STA.0]": 4 "[pending]": 5 "[NEW]": 6 "[RNEW]": 7 "[selfvalidated]": 8 "[SVALD]": 9 "[preregister]": 10 "[notapplicable]": 11 "[N]": 97 "[notvalidated]": 98 "[STA.9]": 99Address will be sorted based on the Status attribute in the following order:addressStatus: "[VALD]": 1 "[ACTV]": 2 "[INAC]": 98 "[INVL]": 99Address rank sort process operates under the following conditions:First, before address ranking the Affiliation RankSorter has to be executed. It is required to get the appropriate value of the Workplace.PrimaryAffiliationIndicator attributeEach address is sorted with the following rules:sort by the PrimaryAffiliationIndicator value. The address with "true" values is ranked higher in the hierarchy. 
The attribute used in this step is taken from the Workplace.PrimaryAffiliationIndicatorsort by Validation Status (lowest rank from the configuration on TOP) - attribute Address.ValidationStatussort by Status (lowest rank from the configuration on TOP) - attribute Address.Statussort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the Address.RefEntity.crosswalks, meaning that each address is associated with the appropriate crosswalk and, based on the input configuration, the order is calculated.sort by Primary Affiliation (true value wins against false value) - attribute Address.PrimaryAffiliationsort by Address Type (lowest rank from the configuration on TOP) - attribute Address.AddressTypesort by Rank (lowest rank on TOP) in ascending order 1 -> 99 - attribute Address.AddressRanksort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute Address.RefEntity.crosswalks.updateDatesort by Label value alphabetically in ascending order A -> Z - attribute Address.labelSorted addresses are recalculated for the new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest.Additionally:When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting processWhen the recalculated Address Rank has a value equal to "1" then the BestRecord attribute is added with the value set to "true"Address rank sort process fallback operates under the following conditions:During Validation Status sorting from configuration (1.b), when the ValidationStatus attribute is missing, the address is placed at position 90 (which means that an empty validation status is higher in the ranking than e.g. the STA.9 status)During Status sorting from configuration (1.c), when the Status attribute is missing, the address is placed at position 90 (which means that an empty status is higher in the ranking than e.g. the INAC status)When the Source system name (1.d) is missing, the address is placed at position 99When the address Type (1.e) is empty, the address is placed at position 99When the Rank (1.f) is empty, the address is placed at position 99For multiple Address Types for the same relation – the address with the higher rank is takenBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"  },  {    "title": "Addresses RankSorter",    "pageID": "164469759",    "pageLink": "/display/GMDM/Addresses+RankSorter",    "content": "GLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. an Address provided by source "ONEKEY" is higher in the hierarchy than an Address provided by the "COV" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each address will be sorted in the following order:addressesSource: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "IQVIA_RAWDEA" : 3 "IQVIA_DDD" : 4 "HCOS" : 5 "SAP" : 6 "SAPVENDOR" : 7 "COV" : 8 "DVA" : 9 "ENGAGE" : 10 "KOL_OneView" : 11 "ONEMED" : 11 "ICUE" : 12 "DDDV" : 13 "MMIT" : 14 "MILLIMAN_MCO" : 15 "SHS": 16 "COMPANY_ACCTS" : 17 "IQVIA_RX" : 18 "SEAGEN": 19 "CENTRIS" : 20 "ASTELAS" : 21 "EMD_SERONO" : 22 "MAPP" : 23 "VEEVALINK" : 24 "VALKRE" : 25 "THUB" : 26 "PTRS" : 27 "MEDISPEND" : 28 "PORZIO" : 29 Additionally, Addresses Rank Sorting is based on the following configuration:The address will be sorted based on the AddressType attribute in the following order:addressType: "[OFFICE]": 1 "[PHYSICAL]": 2 "[MAIN]": 3 "[SHIPPING]": 4 "[MAILING]": 5 "[BILLING]": 6 "[SOLD_TO]": 7 "[HOME]": 8 "[PO_BOX]": 9Address rank sort process operates under the following conditions:Each address is sorted with the following rules:sort by address status (active addresses on top) - attribute Status (is Active)sort by the source order number from 
input source order configuration (lowest rank from the configuration on TOP) - source is taken from the last updated crosswalk Addresses.RefEntity.crosswalks.updateDate when multiple come from the same sourcesort by DEA flag (HCP only with DEA flag set to true on top) - attribute DEAFlagsort by SingleAddressIndicator (true on top) - attribute SingleAddressIndsort by Source Rank (lowest rank on TOP) in ascending order 1 -> 99 - for ONEKEY the rank is calculated with a minus sign - attribute Source.SourceRanksort by address type of HCO and MCO only (lowest rank from the configuration on TOP) - attribute AddressTypesort by COMPANYAddressId (addresses with this attribute are on top) - attribute COMPANYAddressIDSorted addresses are recalculated for new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRankAdditionally:When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting processMORAWM03 explaining reverse rankings for ONEKEY Addresses:Here is the clarification:The minus rank can be related only to the ONEKEY source and will be related to the lowest precedence address.All other sources, different than ONEKEY, contain the normal SourceRank source precedence - it means that SourceRank 1 will be on top. We sort the SourceRank attribute in ascending order 1 -> 99 (lowest source rank on TOP), so SourceRank 1 will be first, SourceRank 2 second and so on.Due to the ONEKEY data in US - that rank code is a number from 10 to -10 with the larger number (i.e., 10) being the top ranked. We have logic that makes an opposite ranking on the ONEKEY SourceRank attribute. We are sorting in descending order …10 -> -10…, meaning that rank 10 will be on TOP (highest source rank on TOP)We have reversed the SourceRank logic for ONEKEY; otherwise it would lead to the -10 SourceRank being ranked on TOP.In the US, ONEKEY Addresses contain a minus sign and are ranked in descending order. 
(10,9,8…-1,-2..-10)I am sorry for the confusion that was made in previous explanation.This opposite logic for ONEKEY SourceRank data is in:Addresses: https://confluence.COMPANY.com/display/GMDM/Addresses+RankSorterDOC:EMEA/AMER/APACThis feature requires the following configuration:Address SourceThis map contains sources with appropriate sort numbers, which means e.g. Configuration is divided by country and source lists, for which this order is applicable. Address provided by source "Reltio" is higher in the hierarchy than the Address provided by "ONEKEY" source. Based on this configuration, each address will be sorted in the following order:EMEAaddressesSource: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 SAP: 3 SAPVENDOR: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 SSE: 12 BIODOSE: 13 BUPA: 14 CH: 15 HCH: 16 CSL: 17 1CKOL: 18 VEEVALINK: 19 VALKRE: 201 THUB: 21 PTRS: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 SAP: 4 SAPVENDOR: 5 ENGAGE: 6 MAPP: 7 PFORCERX: 8 PFORCERX_ODS: 8 KOL_OneView: 9 ONEMED: 9 SEAGEN: 10 GRV: 11 GCP: 12 SSE: 13 SDM: 14 PULSE_KAM: 15 WEBINAR: 16 DREAMWEAVER: 17 EVENTHUB: 18 SPRINKLR: 19 VEEVALINK: 20 VALKRE: 21 THUB: 22 PTRS: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAMERaddressesSource: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 ONEKEY: 3 IMSO: 4 CS: 5 PFCA: 6 WSR: 7 PFORCERX: 8 PFORCERX_ODS: 8 SAP: 9 SAPVENDOR: 10 LEGACY_SFA_IDL: 11 ENGAGE: 12 MAPP: 13 SEAGEN: 14 GRV: 15 KOL_OneView: 16 ONEMED: 16 GCP: 17 SSE: 18 RX_AUDIT: 19 VEEVALINK: 20 VALKRE: 21 THUB: 22 PTRS: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAPACaddressesSource: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 PFORCERX: 7 PFORCERX_ODS: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 MAPP: 10 GCP: 
11 SSE: 12 VEEVALINK: 13 THUB: 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 JPDWH: 3 VOD: 4 PFORCERX: 5 PFORCERX_ODS: 5 SAP: 6 SAPVENDOR: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 MAPP: 10 SEAGEN: 11 GRV: 12 GCP: 13 SSE: 14 PCMS: 15 WEBINAR: 16 DREAMWEAVER: 17 EVENTHUB: 18 SPRINKLR: 19 VEEVALINK: 20 VALKRE: 21 THUB: 22 PTRS: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAddress Type attribute:This map contains AddressType attribute values with appropriate sort numbers, which means e.g. Address Type AT.OFF is higher in the hierarchy than the AddressType AT.MAIL. Based on this configuration, each address will be sorted in the following order:addressType: "[OFF]": 1 "[BUS]": 2 "[DEL]": 3 "[LGL]": 4 "[MAIL]": 5 "[BILL]": 6 "[HOM]": 7 "[UNSP]": 99 Address Status attributeThis map contains Address Status attribute values with appropriate sort numbers, which means e.g. Address Status VALID is higher in the hierarchy than the Address Status ACTV. Based on this configuration, each address will be sorted in the following order:addressStatus: "[AS.VLD]": 1 "[AS.ACTV]": 1   NULL: 90 "[AS.INAC]": 99 "[AS.INVLD]": 99Address rank sort process operates under the following conditions:Each address is sorted with the following rules: sort by Primary affiliation indicator - address related to affiliation with primary usage tag on top, HCP and HCO addresses are compared by fields: AddressType, AddressLine1, AddressLine2, City, StateProvince and Zip5sort by Addresses.Primary attribute - primary addresses on TOP - applicable only for HCO entitiessort by address status Addresses.Status (contains the AddressStatus configuration)sort by the source order number from input source order configuration (lowest rank from the configuration on TOP) - source is taken from the last updated crosswalk Addresses.RefEntity.crosswalks.updateDate once multiple from the same sourcesort by address type (lowest rank from the configuration on TOP) - attribute 
Addresses.AddressTypesort by Source Rank (lowest rank on TOP) in ascending order 1 -> 99 - attribute Addresses.Source.SourceRanksort by COMPANYAddressId (addresses with this attribute are on top) - attribute Addresses.COMPANYAddressIDsort by address label (alphabetically from A to Z)Sorted addresses are recalculated for new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRankAdditionally:When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting processBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Affiliation RankSorter", "pageID": "164469770", "pageLink": "/display/GMDM/Affiliation+RankSorter", "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Workplace provided by source "Reltio" is higher in the hierarchy than the Workplace provided by "CRMMI" source. 
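The EMEA/AMER/APAC address rank sort described above is a multi-key ordering followed by a rank recalculation. A minimal sketch, assuming dict-shaped addresses and illustrative field names (the configuration excerpts are shortened from the listings on this page):

```python
# Sketch of the EMEA/AMER/APAC address rank sort. Field names and dict shape
# are assumptions for illustration; configuration maps are excerpts.
SOURCE_ORDER = {"Reltio": 1, "ONEKEY": 2, "SAP": 3}              # excerpt
TYPE_ORDER = {"[OFF]": 1, "[BUS]": 2, "[MAIL]": 5, "[UNSP]": 99}
STATUS_ORDER = {"[AS.VLD]": 1, "[AS.ACTV]": 1, "[AS.INAC]": 99, "[AS.INVLD]": 99}

def address_sort_key(addr):
    return (
        0 if addr.get("primary_affiliation") else 1,   # primary usage tag on top
        0 if addr.get("primary") else 1,               # Addresses.Primary (HCO only)
        STATUS_ORDER.get(addr.get("status"), 90),      # missing status ranks 90
        SOURCE_ORDER.get(addr.get("source"), 99),      # unknown source -> 99
        TYPE_ORDER.get(addr.get("address_type"), 99),  # unknown type -> 99
        addr.get("source_rank", 99),                   # lowest SourceRank on top
        0 if addr.get("company_address_id") else 1,    # with COMPANYAddressId on top
        addr.get("label", ""),                         # label A -> Z fallback
    )

def rank_addresses(addresses):
    # Addresses whose crosswalk carries a deleteDate are excluded from sorting.
    live = [a for a in addresses if not a.get("delete_date")]
    ranked = sorted(live, key=address_sort_key)
    for rank, addr in enumerate(ranked, start=1):      # recalculate AddressRank 1..n
        addr["rank"] = rank
    return ranked
```

The tuple key mirrors the rule order: earlier tuple elements dominate, so an address related to a primary affiliation always outranks one that is not, regardless of its source or status.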
Based on this configuration, each workplace will be sorted in the following order:affiliation: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "JPDWH": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "CRMMI": 14 "FACE": 15 "KOL_OneView": 16 "GRV": 17 "GCP": 18 "MAPP": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23The affiliation rank sort process operates under the following conditions:Each workplace is sorted with the following rules:sort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the Workplace.RefEntity.crosswalks, which means that each workplace is associated with the appropriate crosswalk, and based on the input configuration the order is calculated.sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute Workplace.RefEntity.crosswalks.updateDatesort by Label value alphabetically in ascending order A -> Z - attribute Workplace.labelSorted workplaces are recalculated for the new PrimaryAffiliationIndicator attribute – each Workplace is reassigned with an appropriate value. The winner gets "true" on the PrimaryAffiliationIndicator. Any loser, if one exists, is reassigned to "false"Additionally:When refRelation.crosswalk.deleteDate exists, then the workplace is excluded from the sorting processGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. FacilityType with name "35" is higher in the hierarchy than FacilityType with the name "27". 
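The Global (IQVIA model) affiliation ranking above can be sketched as three stable sorts followed by the PrimaryAffiliationIndicator reassignment. A sketch under assumptions: workplaces are dicts, "lud" holds the crosswalk updateDate as an ISO string, and the source map is an excerpt:

```python
# Sketch of the Global (IQVIA model) affiliation rank sort. Field names are
# illustrative assumptions; the source map is an excerpt of the configuration.
AFFILIATION_SOURCE_ORDER = {"Reltio": 1, "EVR": 2, "OK": 3, "CRMMI": 14}  # excerpt

def rank_workplaces(workplaces):
    # Workplaces whose crosswalk carries a deleteDate are excluded from sorting.
    live = [w for w in workplaces if not w.get("delete_date")]
    # Stable multi-pass sort: apply the least significant key first.
    live.sort(key=lambda w: w.get("label", ""))                   # 3) label A -> Z
    live.sort(key=lambda w: w.get("lud", ""), reverse=True)       # 2) newest LUD first
    live.sort(key=lambda w: AFFILIATION_SOURCE_ORDER.get(w.get("source"), 99))  # 1) source
    for i, w in enumerate(live):
        # Winner gets "true"; every loser is reassigned to "false".
        w["primary_affiliation_indicator"] = (i == 0)
    return live
```

Because Python's sort is stable, running the passes from least to most significant key yields the same order as a single sort on (source, LUD desc, label).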
Based on this configuration, each affiliation will be sorted in the following order:facilityType: "35": 1 "MHS": 1 "34": 1 "27": 2Each affiliation before sorting is enriched with the ProviderAffiliation attribute which contains information about HCO because there are attributes that are needed during sorting.Affiliation rank sort process operates under the following conditions:Each affiliation is sorted with the following rulessort by facility type (the lower number is on top) - attribute ClassofTradeN.FacilityTypesort by affiliation confidence code DESC(the higher number or if exists it is on top) - attribute RelationType.AffiliationConfidenceCodesort by staffed beds (if it exists it is higher and higher number on top) - attribute Bed.Type("StaffedBeds").Totalsort by total prescribers (if it exists it is higher and higher number on top) - attribute TotalPrescriberssort by org identifier (if it exists it is higher and if not it compares is as a string) - attribute Identifiers.Type("HCOS_ORG_ID").IDSorted affiliation are recalculated for new Rank - each Affiliation Rank is reassigned with an appropriate number from lowest to highest - attribute RankAffiliation with Rank = "1" is enriched with the UsageTag attribute with the "Primary" value.Additionally:If facility type is not found it is set to 99EMEA/AMER/APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Contact Affiliation provided by source "Reltio" is higher in the hierarchy than the Contact Affiliation provided by "ONEKEY" source.  Configuration is divided by country and source lists, for which this order is applicable. 
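The GLOBAL US affiliation ordering described above (facility type, then confidence code, staffed beds, total prescribers, and org identifier) can be sketched as a single tuple key. A sketch under assumptions: affiliations are dicts already enriched with the HCO-side attributes, and field names are illustrative:

```python
# Sketch of the GLOBAL US affiliation rank sort. Dict shape and field names
# are assumptions; FACILITY_TYPE_ORDER mirrors the configuration above.
FACILITY_TYPE_ORDER = {"35": 1, "MHS": 1, "34": 1, "27": 2}

def us_affiliation_key(aff):
    return (
        FACILITY_TYPE_ORDER.get(aff.get("facility_type"), 99),  # unknown type -> 99
        -(aff.get("confidence_code") or 0),    # higher confidence code first
        -(aff.get("staffed_beds") or 0),       # more staffed beds first
        -(aff.get("total_prescribers") or 0),  # more total prescribers first
        0 if aff.get("org_id") else 1,         # HCOS_ORG_ID present first
        str(aff.get("org_id") or ""),          # then compared as a string
    )

def rank_us_affiliations(affiliations):
    ranked = sorted(affiliations, key=us_affiliation_key)
    for rank, aff in enumerate(ranked, start=1):   # recalculate Rank 1..n
        aff["rank"] = rank
        if rank == 1:
            aff["usage_tag"] = "Primary"           # top affiliation gets the Primary tag
    return ranked
```

Negating the numeric fields inside the key is what turns those individual comparisons descending while the overall sort stays ascending.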
Based on this configuration, each specialty will be sorted in the following order:EMEAaffiliation: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 SAP: 3 SAPVENDOR: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 VALKRE: 10 GRV: 11 GCP: 12 SSE: 13 BIODOSE: 14 BUPA: 15 CH: 16 HCH: 17 CSL: 18 THUB: 19 PTRS: 20 1CKOL: 21 MEDISPEND: 22 VEEVALINK: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 SAP: 4 SAPVENDOR: 5 PFORCERX: 6 PFORCERX_ODS: 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 VALKRE: 11 GRV: 12 GCP: 13 SSE: 14 SDM: 15 PULSE_KAM: 16 WEBINAR: 17 DREAMWEAVER: 18 EVENTHUB: 19 SPRINKLR: 20 THUB: 21 PTRS: 22 VEEVALINK: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALL AMERaffiliation: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 ONEKEY: 3 SAP: 4 SAPVENDOR: 5 PFORCERX: 6 PFORCERX_ODS: 6 KOL_OneView: 7 ONEMED: 7 LEGACY_SFA_IDL: 8 ENGAGE: 9 MAPP: 10 SEAGEN: 11 VALKRE: 12 GRV: 13 GCP: 14 SSE: 15 IMSO: 16 CS: 17 PFCA: 18 WSR: 19 THUB: 20 PTRS: 21 RX_AUDIT: 22 VEEVALINK: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAPACaffiliation: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 GCP: 7 SSE: 8 PFORCERX: 9 PFORCERX_ODS: 9 KOL_OneView: 10 ONEMED: 10 ENGAGE: 11 MAPP: 12 VALKRE: 13 THUB: 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 JPDWH: 3 VOD: 4 SAP: 5 SAPVENDOR: 6 PFORCERX: 7 PFORCERX_ODS: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 MAPP: 10 SEAGEN: 11 VALKRE: 12 GRV: 13 GCP: 14 SSE: 15 PCMS: 16 WEBINAR: 17 DREAMWEAVER: 18 EVENTHUB: 19 SPRINKLR: 20 THUB: 21 PTRS: 22 VEEVALINK: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLThe affiliation rank sort process operates under the following conditions:Each contact affiliation is sorted with the following rules:sort by affiliation status - active on topsort 
by source prioritysort by source rank - attribute ContactAffiliation.RelationType.Source.SourceRank, ascendingsort by confidence level - attribute ContactAffiliation.RelationType.AffiliationConfidenceCodesort by attribute last updated date - newest at the topsort by Label value alphabetically in ascending order A -> Z - attribute ContactAffiliation.labelSorted contact affiliations are recalculated for the new primary usage tag attribute – each contact affiliation is reassigned with an appropriate value. The winner gets the "true" on the primary usage tag.Additionally:When refRelation.crosswalk.deleteDate exists, then the workplace is excluded from the sorting processBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" + }, + { + "title": "Email RankSorter", + "pageID": "164469768", + "pageLink": "/display/GMDM/Email+RankSorter", + "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "1CKOL" is higher in the hierarchy than Email provided by any other source. 
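Every RankSorter configuration on this page is scoped by country and source lists, with "ALL" acting as a wildcard fallback. A minimal sketch of that lookup, with a config shape mirroring the YAML fragments quoted here (names are illustrative):

```python
# Sketch of the country/source-scoped configuration lookup used by the
# RankSorters. The config shape mirrors the YAML fragments on this page.
EMAIL_CONFIG = [
    {"countries": ["ALL"], "sources": ["ALL"], "rankSortOrder": {"1CKOL": 1}},
]

def resolve_rank_order(config, country, source):
    for entry in config:
        country_ok = country in entry["countries"] or "ALL" in entry["countries"]
        source_ok = source in entry["sources"] or "ALL" in entry["sources"]
        if country_ok and source_ok:
            return entry["rankSortOrder"]   # first matching scope wins
    return {}
```

With this shape, a country-specific entry (e.g. the CN block) is listed before the ALL block so the more specific scope matches first.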
Based on this configuration, each email address will be sorted in the following order:email: - countries: - "ALL" sources: - "ALL" rankSortOrder: "1CKOL": 1Email rank sort process operates under the following conditions:Each email is sorted with the following rulesGroup by the TypeIMS attribute and sort each group:sort by source rank (the lower number on top of the one with this attribute)sort by the validation status (VALID value is the winner) - attribute ValidationStatussort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDatesort by email value alphabetically in ascending order A -> Z - attribute Email.emailSorted emails are recalculated for the new Rank - each Email Rank is reassigned with an appropriate numberGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "GRV" is higher in the hierarchy than Email provided by "ONEKEY" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each email address will be sorted in the following order:email: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "GRV" : 2 "ENGAGE" : 3 "KOL_OneView" : 4 "ONEMED" : 4 "ICUE" : 5 "MAPP" : 6 "ONEKEY" : 7 "SHS" : 8 "VEEVALINK": 9 "SEAGEN": 10 "CENTRIS" : 11 "ASTELAS" : 12 "EMD_SERONO" : 13 "IQVIA_RX" : 14 "IQVIA_RAWDEA" : 15 "COV" : 16 "THUB" : 17 "PTRS" : 18 "SAP" : 19 "SAPVENDOR": 20 "IQVIA_DDD" : 22 "VALKRE": 23 "MEDISPEND" : 24 "PORZIO" : 25Email rank sort process operates under the following conditions:Each email is sorted with the following rulessort by source order (the lower number on top)sort by source rank (the lower number on top of the one with this attribute)Sorted email are recalculated for new Rank - each Email Rank is reassigned with an appropriate numberEMEA/AMER/APACThis feature requires the following configuration. 
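The grouped email ranking of the IQVIA model above (group by TypeIMS, sort each group, restart Rank per group) can be sketched as follows. Assumptions: dict-shaped emails, "lud" is the crosswalk updateDate as an ISO string, and field names are illustrative:

```python
# Sketch of the Global (IQVIA model) email rank sort. Dict shape and field
# names are assumptions for illustration.
from itertools import groupby

def rank_emails(emails):
    # Group by the TypeIMS attribute; each group is ranked independently.
    ordered = sorted(emails, key=lambda e: e.get("type_ims", ""))
    ranked = []
    for _, group in groupby(ordered, key=lambda e: e.get("type_ims", "")):
        members = list(group)
        # Stable multi-pass sort: apply the least significant key first.
        members.sort(key=lambda e: e.get("email", ""))              # 4) address A -> Z
        members.sort(key=lambda e: e.get("lud", ""), reverse=True)  # 3) newest LUD first
        members.sort(key=lambda e: 0 if e.get("validation_status") == "VALID" else 1)  # 2)
        members.sort(key=lambda e: e.get("source_rank", 99))        # 1) lowest rank on top
        for rank, email in enumerate(members, start=1):
            email["rank"] = rank                                    # Rank restarts per group
        ranked.extend(members)
    return ranked
```

Note that `itertools.groupby` only groups adjacent items, which is why the list is sorted by TypeIMS before grouping.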
This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "Reltio" is higher in the hierarchy than Email provided by "GCP" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each email address will be sorted in the following order:EMEAemail: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 1CKOL: 2 GCP: 3 GRV: 4 SSE: 5 ENGAGE: 6 MAPP: 7 VEEVALINK: 8 SEAGEN: 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 THUB: 12 PTRS: 13 ONEKEY: 14 SAP: 15 SAPVENDOR: 16 SDM: 17 BIODOSE: 18 BUPA: 19 CH: 20 HCH: 21 CSL: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 GCP: 2 GRV: 3 SSE: 4 ENGAGE: 5 MAPP: 6 VEEVALINK: 7 SEAGEN: 8 KOL_OneView: 9 ONEMED: 9 PULSE_KAM: 10 SPRINKLR: 11 WEBINAR: 12 DREAMWEAVER: 13 EVENTHUB: 14 PFORCERX: 15 PFORCERX_ODS: 15 THUB: 16 PTRS: 17 ONEKEY: 18 MEDPAGESHCP: 19 MEDPAGESHCO: 19 SAP: 20 SAPVENDOR: 21 SDM: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLAMERemail: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 GCP: 3 GRV: 4 SSE: 5 ENGAGE: 6 MAPP: 7 VEEVALINK: 8 SEAGEN: 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 ONEKEY: 12 IMSO: 13 CS: 14 PFCA: 15 WSR: 16 THUB: 17 PTRS: 18 SAP: 19 SAPVENDOR: 20 LEGACY_SFA_IDL: 21 RX_AUDIT: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLAPACemail: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 ENGAGE: 7 MAPP: 8 VEEVALINK: 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 THUB: 12 PTRS: 13 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 JPDWH: 2 PCMS: 3 GCP: 4 GRV: 5 SSE: 6 ENGAGE: 7 MAPP: 8 VEEVALINK: 9 SEAGEN: 10 KOL_OneView: 11 ONEMED: 11 SPRINKLR: 12 WEBINAR: 13 DREAMWEAVER: 14 EVENTHUB: 15 PFORCERX: 16 PFORCERX_ODS: 16 THUB: 17 PTRS: 18 ONEKEY: 19 VOD: 20 SAP: 21 SAPVENDOR: 22 MEDISPEND: 23 
PORZIO: 24 sources: - ALLEmail rank sort process operates under the following conditions:Each email is sorted with the following rules sort by cleanser status - valid/invalidsort by source order (the lower number on top)sort by source rank (the lower number on top of the one with this attribute)sort by last updated date - newest at the topsort by email value alphabetically in ascending order A -> Z - attribute Email.labelSorted emails are recalculated for new Rank - each Email Rank is reassigned with an appropriate numberBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Identifier RankSorter", "pageID": "164469766", "pageLink": "/display/GMDM/Identifier+RankSorter", "content": "IQVIA Model (Global)AlgorithmThe identifier rank sort process operates under the following conditions:Each Identifier is grouped by Identifier Type: e.g. GRV_ID / GCP ID / MI_ID / Physician_Code / ... – each group is sorted separately.Each group is sorted with the following rules:By identifier "Source System order configuration" (lowest rank from the configuration on TOP)By identifier Order (lowest rank on TOP) in ascending order 1 -> 99 - attribute OrderBy update date (LUD) (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDateBy Identifier value (alphabetically in ascending order A -> Z)Sorted identifiers are optionally deduplicated (by Identifier Type in each group) – from each group, the lower-ranked duplicated identifiers are removed. Currently isIgnoreAndRemoveDuplicates is set to False, which means that groups are not deduplicated. Duplicates are removed by Reltio.Sorted identifiers are recalculated for the new Rank – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. 
- attribute - OrderIdentifier rank sort process fallback operates under the following conditions:When Identifier Type is empty – each empty identifier is grouped together. Each identifier with an empty type is added to the "EMPTY" group and sorted and deduplicated separately.During source system from configuration (2.a) sorting, when the Source system is missing, the identifier is placed on 99 positionDuring Rank (2.b) sorting, when the Rank is missing, the identifier is placed on 99 positionSource Order Configuration This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Identifier provided by source "Reltio" is higher in the hierarchy than the Identifier provided by the "CRMMI" source. Based on this configuration each identifier will be sorted in the following order:Updated: 2023-12-29EnvironmentGlobal (EX-US)Countries(in environment)CNOthersSource OrderReltio: 1EVR: 2MDE: 3MAPP: 4FACE: 5CRMMI: 6KOL_OneView: 7GRV: 8CN3RDPARTY: 9Reltio: 1EVR: 2OK: 3AMPCO: 4JPDWH: 5NUCLEUS: 6CMM: 7MDE: 8LocalMDM: 9PFORCERX: 10VEEVA_NZ: 11VEEVA_AU: 12VEEVA_PHARMACY_AU: 13CRMMI: 14FACE: 15KOL_OneView: 16GRV: 17GCP: 18MAPP: 19CN3RDPARTY: 20Rx_Audit: 21PCMS: 22CICR: 23COMPANY ModelAlgorithmIdentifier Rank sort algorithm slightly varies from the IQVIA model one:Identifiers are grouped by Type (Identifiers.Type field). Identifiers without a Type count as a separate group.Each group is sorted separately according to the following rules:By Trust flag (Identifiers.Trust field). "Yes" takes precedence over "No". If the Trust flag is missing, it's as if it was equal to "No".By Source Order (table below). Lowest rank from configuration takes precedence. If a Source is missing in configuration, it gets the lowest possible order (99).By Status (Identifiers.Status). Valid/Active status takes precedence over Invalid/Inactive/missing status. List of status codes is configurable. 
Currently (2023-12-29), the following codes are configured in all COMPANY environments:Valid codes: [HCPIS.VLD], [HCPIS.ACTV], [HCOIS.VLD], [HCOIS.ACTV]Invalid codes: [HCPIS.INAC], [HCPIS.INVLD], [HCOIS.INAC], [HCOIS.INVLD]By Source Rank (Identifiers.SourceRank field). Lowest rank takes precedence.By LUD. Latest LUD takes precedence. LUD is equal to the highest of 3 dates: providing crosswalk's createDateproviding crosswalk's updateDateproviding crosswalk's singleAttributeUpdateDate for this Identifier (if present)By ID alphabetically. This is a fallback mechanism.Sorted identifiers are recalculated for the new Rank – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. - attribute - Rank.Source Order ConfigurationUpdated: 2023-12-29EnvironmentUSAMEREMEAAPACCountries (in environment)ALLALLEU:GBIEFRBLGPMFMQNCPFPMRETFWFESDEITVASMTRRUOthers (AfME)CNOthersSource OrderReltio: 1ONEKEY: 2ICUE: 3ENGAGE: 4KOL_OneView: 5ONEMED: 5GRV: 6SHS: 7IQVIA_RX: 8IQVIA_RAWDEA: 9SEAGEN: 10CENTRIS: 11MAPP: 12ASTELAS: 13EMD_SERONO: 14COV: 15SAP: 16SAPVENDOR: 17IQVIA_DDD: 18PTRS: 19Reltio: 1ONEKEY: 2PFORCERX: 3PFORCERX_ODS: 3KOL_OneView: 4ONEMED: 4LEGACY_SFA_IDL: 5ENGAGE: 6MAPP: 7SEAGEN: 8GRV: 9GCP: 10SSE: 11IMSO: 12CS: 13PFCA: 14SAP: 15SAPVENDOR: 16PTRS: 17RX_AUDIT: 18Reltio: 1ONEKEY: 2PFORCERX: 3PFORCERX_ODS: 3KOL_ONEVIEW: 4ENGAGE: 5MAPP: 6SEAGEN: 7GRV: 8GCP: 9SSE: 101CKOL: 11SAP: 12SAPVENDOR: 13BIODOSE: 14BUPA: 15CH: 16HCH: 17CSL: 18Reltio: 1ONEKEY: 2MEDPAGES: 3MEDPAGESHCP: 3MEDPAGESHCO: 3PFORCERX: 4PFORCERX_ODS: 4KOL_ONEVIEW: 5ENGAGE: 6MAPP: 7SEAGEN: 8GRV: 9GCP: 10SSE: 11PULSE_KAM: 12WEBINAR: 13SAP: 14SAPVENDOR: 15SDM: 16PTRS: 17Reltio: 1EVR: 2MDE: 3FACE: 4GRV: 5CN3RDPARTY: 6GCP: 7PFORCERX: 8PFORCERX_ODS: 8KOL_OneView: 9ONEMED: 9ENGAGE: 10MAPP: 11PTRS: 12Reltio: 1ONEKEY: 2JPDWH: 3VOD: 4PFORCERX: 5PFORCERX_ODS: 5KOL_OneView: 6ONEMED: 6ENGAGE: 7MAPP: 8SEAGEN: 9GRV: 10GCP: 11SSE: 12PCMS: 13PTRS: 14SAP: 15SAPVENDOR: 16Business requirements 
(provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "OtherHCOtoHCOAffiliations RankSorter", "pageID": "319291956", "pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+RankSorter", "content": "APAC COMPANY (currently for AU and NZ)Business requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*The functionality is configured in the callback delay service. It allows you to set different types of sorting for each country. The configuration for AU and NZ is shown below.rankSortOrder: affiliation: - countries: - AU - NZ rankExecutionOrder: - type: ATTRIBUTE attributeName: RelationType/RelationshipDescription lookupCode: true order: REL.HIE: 1 REL.MAI: 2 REL.FPA: 3 REL.BNG: 4 REL.BUY: 5 REL.PHN: 6 REL.GPR: 7 REL.MBR: 8 REL.REM: 9 REL.GPSS: 10 REL.WPC: 11 REL.WPIC: 12 REL.DOU: 13 - type: ACTIVE - type: SOURCE order: Reltio: 1 ONEKEY: 2 JPDWH: 3 SAP: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 GRV: 9 GCP: 10 SSE: 11 PCMS: 12 PTRS: 13 - type: LUDRelationships are grouped by endObjectId, then the whole bundle is sorted and ranked. The relationship's position on the list (its rank) for AU and NZ is calculated based on the following algorithm:sorting by RelationshipDescription attribute - relationship with REL.HIE value on topsorting by relationship activity - active at the topsort by source position - Reltio source on topsort by LUD (newest on top)" }, { "title": "Phone RankSorter", "pageID": "164469748", "pageLink": "/display/GMDM/Phone+RankSorter", "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Phone provided by source "Reltio" is higher in the hierarchy than the Phone provided by the "EVR" source. 
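The AU/NZ OtherHCOtoHCOAffiliations ranking described above (group relationships by endObjectId, then sort each bundle by the configured key chain) can be sketched as follows. Assumptions: dict-shaped relationships, illustrative field names, and excerpted order maps:

```python
# Sketch of the AU/NZ OtherHCOtoHCOAffiliations ranking. Dict shape and field
# names are assumptions; the order maps are excerpts of the configuration.
REL_ORDER = {"REL.HIE": 1, "REL.MAI": 2, "REL.FPA": 3}   # excerpt
SOURCE_ORDER = {"Reltio": 1, "ONEKEY": 2, "JPDWH": 3}    # excerpt

def rank_relationships(relations):
    bundles = {}
    for rel in relations:                                # group by endObjectId
        bundles.setdefault(rel["end_object_id"], []).append(rel)
    ranked = []
    for bundle in bundles.values():
        # Stable multi-pass sort: apply the least significant key first.
        bundle.sort(key=lambda r: r.get("lud", ""), reverse=True)           # 4) newest LUD
        bundle.sort(key=lambda r: SOURCE_ORDER.get(r.get("source"), 99))    # 3) source
        bundle.sort(key=lambda r: 0 if r.get("active") else 1)              # 2) active on top
        bundle.sort(key=lambda r: REL_ORDER.get(r.get("description"), 99))  # 1) REL.HIE first
        for rank, rel in enumerate(bundle, start=1):                        # rank per bundle
            rel["rank"] = rank
        ranked.extend(bundle)
    return ranked
```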
Based on this configuration, each phone will be sorted in the following order:phone: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "JPDWH": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "CRMMI": 14 "FACE": 15 "KOL_OneView": 16 "GRV": 17 "GCP": 18 "MAPP": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23Phone rank sort process operates under the following conditions:Each phone is sorted with the following rules:Group by the TypeIMS attribute and sort each group:sort by "Source System order configuration" (lowest rank from the configuration on TOP)sort by source rank (the lower number on top of the one with this attribute)sort by the validation status (VALID value is the winner) - attribute ValidationStatussort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDatesort by number value alphabetically in ascending order A -> Z - attribute Phone.numberSorted phones are recalculated for the new Rank - each Phone Rank is reassigned with an appropriate numberGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phone provided by source "ONEKEY" is higher in the hierarchy than the Phone provided by "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each phone number will be sorted in the following order:phone: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "ICUE" : 3 "VEEVALINK" : 4 "ENGAGE" : 5 "KOL_OneView" : 6 "ONEMED" : 6 "GRV" : 7 "SHS" : 8 "IQVIA_RX" : 9 "IQVIA_RAWDEA" : 10 "SEAGEN": 11 "CENTRIS" : 12 "MAPP" : 13 "ASTELAS" : 14 "EMD_SERONO" : 15 "COV" : 16 "SAP" : 17 "SAPVENDOR": 18 "IQVIA_DDD" : 19 "VALKRE" : 20 "THUB" : 21 "PTRS" : 22 "MEDISPEND" : 23 "PORZIO" : 24Phone number rank sort process operates under the following conditions:Each phone number is sorted with the following rules, on top, it is grouped by type.Group by the Type attribute and sort each group sort by source order (the lower number on top) - source name is taken from the last updated crosswalk for this Phone attributesort by source rank (the lower number on top or the one with this attribute) - attribute Source.SourceRank for this Phone attributeSorted phone numbers are recalculated for new Rank - each Phone Rank is reassigned with an appropriate number - attribute Rank for Phone attributeEMEA/AMER/APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phone provided by source "ONEKEY" is higher in the hierarchy than the Phone provided by "ENGAGE" source.  Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each phone number will be sorted in the following order:EMEAphone: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 PFORCERX: 3 PFORCERX_ODS: 3 VEEVALINK: 4 KOL_OneView: 5 ONEMED: 5 ENGAGE: 6 MAPP: 7 SEAGEN: 8 GRV: 9 GCP: 10 SSE: 11 1CKOL: 12 THUB: 13 PTRS: 14 SAP: 15 SAPVENDOR: 16 BIODOSE: 17 BUPA: 18 CH: 19 HCH: 20 CSL: 21 MEDISPEND: 22 PORZIO: 23 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 SSE: 12 PULSE_KAM: 13 SPRINKLR: 14 WEBINAR: 15 DREAMWEAVER: 16 EVENTHUB: 17 SAP: 18 SAPVENDOR: 19 SDM: 20 THUB: 21 PTRS: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLAMERphone: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 ONEKEY: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 LEGACY_SFA_IDL: 7 ENGAGE: 8 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 SSE: 12 IMSO: 13 CS: 14 PFCA: 15 WSR: 16 SAP: 17 SAPVENDOR: 18 THUB: 19 PTRS: 20 RX_AUDIT: 21 MEDISPEND: 22 PORZIO: 23 sources: - ALLAPACphone: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 GCP: 7 PFORCERX: 8 PFORCERX_ODS: 8 VEEVALINK: 9 KOL_OneView: 10 ONEMED: 10 ENGAGE: 11 MAPP: 12 PTRS: 13 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 JPDWH: 3 VOD: 4 PFORCERX: 5 PFORCERX_ODS: 5 VEEVALINK: 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 GRV: 11 GCP: 12 SSE: 13 PCMS: 14 THUB: 15 PTRS: 16 SAP: 17 SAPVENDOR: 18 SPRINKLR: 19 WEBINAR: 20 DREAMWEAVER: 21 EVENTHUB: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLPhone number rank sort process operates under the following conditions:Each phone number is sorted with the following rules, on top, it is grouped by type.Group by the Type attribute and sort each group  sort by cleanser status - valid/invalidsort by source 
order (the lower number on top) - source name is taken from the last updated crosswalk for this Phone attribute; sort by source rank (the lower number on top, or the one that has this attribute) - attribute Source.SourceRank for this Phone attribute; sort by last update date - newest to oldest; sort by label - alphabetical order A-Z. Sorted phone numbers are recalculated for a new Rank - each Phone Rank is reassigned with an appropriate number - attribute Rank for the Phone attribute. Business requirements (provided by AJ): COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Speaker RankSorter", "pageID": "337862629", "pageLink": "/display/GMDM/Speaker+RankSorter", "content": "Description: Unlike other RankSorters, Speaker Rank is expressed not by a nested "Rank" or "Order" field, but by the "ignore" flag. The "Ignore" flag sets the attribute's "ov" to false. By operating this flag, we ensure that only the most valuable attribute is visible and sent downstream from the Hub. Algorithm: Sort all Speaker nests: sort by source hierarchy; if the source is the same, sort by Last Update Date (the higher of crosswalk.updateDate / crosswalk.singleAttributeUpdateDates/{speaker attribute uri}); if the source and LUD are the same, sort by attribute URI (fallback strategy). Process the sorted group: if the first Speaker nest has ignored == true, set ignored := false for that nest; for each subsequent Speaker nest that does not have ignored == true, set ignored := true for that nest. Post the list of changes to Manager's async interface using a Kafka topic. Global - IQVIA Model: Speaker RankSorter is active only for China. 
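The Speaker ignore-flag algorithm described above can be sketched in a few lines. This is a minimal illustration, not the Hub implementation: the nest dictionaries, the last_update field, and the truncated SPEAKER_HIERARCHY map are assumed shapes.

```python
# Minimal sketch of the Speaker RankSorter "ignore" logic (assumed data shapes).
SPEAKER_HIERARCHY = {"Reltio": 1, "MAPP": 2, "FACE": 3, "EVR": 4, "MDE": 5,
                     "CRMMI": 6, "KOL_OneView": 7, "GRV": 8, "CN3RDPARTY": 9}

def sort_key(nest):
    # 1) source hierarchy, 2) newest last-update date first, 3) URI fallback
    return (SPEAKER_HIERARCHY.get(nest["source"], 99),
            -nest["last_update"],
            nest["uri"])

def apply_ignore_flags(nests):
    """Keep only the best Speaker nest visible (ignored=False), hide the rest."""
    ordered = sorted(nests, key=sort_key)
    changes = []
    for i, nest in enumerate(ordered):
        want_ignored = (i != 0)              # the winner stays visible
        if nest.get("ignored", False) != want_ignored:
            changes.append(nest["uri"])      # would be posted to Manager via Kafka
        nest["ignored"] = want_ignored
    return ordered, changes
```

Only the nests whose flag actually flips end up in the change list, mirroring the "post the list of changes" step.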
Source hierarchy is as follows:speaker: "Reltio": 1 "MAPP": 2 "FACE": 3 "EVR": 4 "MDE": 5 "CRMMI": 6 "KOL_OneView": 7 "GRV": 8 "CN3RDPARTY": 9Specific ConfigurationUnlike other PreCallback flows, Speaker RankSorter requires both ov=true and ov=false attribute values to work correctly.This is why:Entity Enricher configuration must be altered, to enrich entities with ov&nonOv values of "Speaker" attribute:\nbundle:\n nonOv: false\n ov: false\n nonOvAttributesToInclude:\n - "Speaker"\nPreCallback Service configuration must be altered to assure that nonOv values are cleaned from the event before passing it further down to the Event Publisher\ncleanOvFalseAttributeValues:\n - "Speaker"\nBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" + }, + { + "title": "Specialty RankSorter", + "pageID": "164469746", + "pageLink": "/display/GMDM/Specialty+RankSorter", + "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Specialty provided by source "Reltio" is higher in the hierarchy than the Specialty provided by the "CRMMI" source. Additionally, for Specialities, there is a difference between countries. The configuration for RU and TD contains only 4 sources and is different than the base configuration. 
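The country-specific configuration described above (dedicated entries for some countries, with an "ALL" entry as the fallback) implies a simple lookup rule. A minimal sketch, with a trimmed illustrative RANK_CONFIG literal rather than the full configuration:

```python
# Sketch of the country-based rank configuration lookup with an "ALL" fallback.
RANK_CONFIG = [
    {"countries": ["RU", "TR"],
     "rankSortOrder": {"GRV": 1, "GCP": 2, "OK": 3, "KOL_OneView": 4}},
    {"countries": ["ALL"],
     "rankSortOrder": {"Reltio": 1, "EVR": 2, "OK": 3}},
]

def rank_order_for(country):
    """Return the first entry matching the country, else the ALL entry."""
    for entry in RANK_CONFIG:
        if country in entry["countries"]:
            return entry["rankSortOrder"]
    for entry in RANK_CONFIG:
        if "ALL" in entry["countries"]:
            return entry["rankSortOrder"]
    return {}
```

So RU and TR resolve to their 4-source override, while every other country falls back to the base hierarchy.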
Based on this configuration each specialty will be sorted in the following order: specialities: - countries: - "RU" - "TR" sources: - "ALL" rankSortOrder: "GRV": 1 "GCP": 2 "OK": 3 "KOL_OneView": 4 - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "JPDWH": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "CRMMI": 14 "FACE": 15 "KOL_OneView": 16 "GRV": 17 "GCP": 18 "MAPP": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23. The specialty rank sort process operates under the following conditions: Each Specialty is grouped by Specialty Type (SPEC/TEND/QUAL/EDUC) – each group is sorted separately. Each group is sorted with the following rules: by the specialty "Source System order configuration" (lowest rank from the configuration on top); by specialty Rank (lower ranks on top), ascending 1 -> 99; by update date (LUD) (highest LUD date on top), descending 2017.07 -> 2017.06 - attribute crosswalks.updateDate; by Specialty Value (alphabetically, ascending A -> Z). Sorted specialties are optionally deduplicated (by Specialty Type within each group) – from each group, the lower-ranked duplicated specialty is removed. Currently isIgnoreAndRemoveDuplicates is set to False, which means that groups are not deduplicated; duplicates are removed by Reltio. Sorted specialties are recalculated for the new Ranks – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. Additionally, for Specialty Rank = 1 the best-record flag is set to true - attribute PrimarySpecialtyFlag. The specialty rank sort fallback operates under the following conditions: When Specialty Type is empty – each empty specialty is grouped together. 
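The grouping, multi-key sort, and Rank reassignment described above can be sketched roughly as follows. The record shapes, the field names, and the trimmed SOURCE_ORDER map are assumptions for illustration, not the service's actual model:

```python
# Rough sketch of the specialty rank-sort: group by type, multi-key sort,
# then reassign Rank and the primary flag. Data shapes are assumed.
from itertools import groupby

SOURCE_ORDER = {"Reltio": 1, "EVR": 2, "OK": 3}      # trimmed illustration

def resort_specialties(specs):
    def key(s):
        return (SOURCE_ORDER.get(s["source"], 99),   # unknown source falls to 99
                s.get("rank", 99),                   # lower Rank first
                -s["lud"],                           # newest update first
                s["value"])                          # A -> Z tiebreak
    out = []
    specs = sorted(specs, key=lambda s: s["type"])   # groupby needs sorted input
    for _, group in groupby(specs, key=lambda s: s["type"]):
        for new_rank, s in enumerate(sorted(group, key=key), start=1):
            s["rank"] = new_rank
            s["primary"] = (new_rank == 1)           # PrimarySpecialtyFlag
            out.append(s)
    return out
```

Each Specialty Type group is renumbered from 1 independently, and only the top entry of each group carries the primary flag.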
Each specialty with an empty type is added to the "EMPTY" group and sorted and deduplicated separately. During source-system sorting from the configuration (2.a), when the source system is missing, the specialty is placed at position 99. During Rank sorting (2.b), when the source system is missing, the specialty is placed at position 99. GLOBAL US: This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Speciality provided by the source "ONEKEY" is higher in the hierarchy than a Speciality provided by the "ENGAGE" source. The configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each Speciality will be sorted in the following order: specialities: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "IQVIA_RAWDEA" : 3 "VEEVALINK" : 4 "ENGAGE" : 5 "KOL_OneView" : 6 "ONEMED" : 6 "SPEAKER" : 7 "ICUE" : 8 "SHS" : 9 "IQVIA_RX" : 10 "SEAGEN": 11 "CENTRIS" : 12 "ASTELAS" : 13 "EMD_SERONO" : 14 "MAPP" : 15 "GRV" : 16 "THUB" : 17 "PTRS" : 18 "VALKRE" : 19 "MEDISPEND" : 20 "PORZIO" : 21. The specialty rank sort process operates under the following conditions: Specialty is sorted with the following rules, grouped first by the Speciality.SpecialityType attribute. Group by Speciality.SpecialityType and sort each group: sort by specialty unspecified status value (higher value on top) - attribute Specialty with value Unspecified; sort by source order number (the lower number on top) - the source name is taken from the crosswalk that was last updated; sort by source rank (the lower on top) - attribute Source.SourceRank; sort by last update date (the earliest on top) - the last update date is taken from the most recently updated crosswalk; sort by specialty attribute value (string comparison) - attribute Specialty. Sorted specialties are recalculated for a new Rank - each Specialty Rank is reassigned with an appropriate number - attribute 
RankAdditionally:If the source is not found it is set to 99If specialty unspecified attribute name or value is not set it is set to 99EMEA/AMER/APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Speciality provided by source "ONEKEY" is higher in the hierarchy than the Speciality provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each Speciality will be sorted in the following order:EMEAspecialities: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 PFORCERX: 3 PFORCERX_ODS: 3 VEEVALINK: 4 KOL_OneView: 5 ONEMED: 5 ENGAGE: 6 MAPP: 7 SEAGEN: 8 GRV: 9 GCP: 10 SSE: 11 THUB: 12 PTRS: 13 1CKOL: 14 MEDISPEND: 15 PORZIO: 16 sources: - ALL - countries: - ALL sources: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 SSE: 12 PULSE_KAM: 13 WEBINAR: 14 DREAMWEAVER: 15 EVENTHUB: 16 SPRINKLR: 17 THUB: 18 PTRS: 19 MEDISPEND: 20 PORZIO: 21AMERspecialities: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 ONEKEY: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 LEGACY_SFA_IDL: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 GRV: 11 GCP: 12 SSE: 13 THUB: 14 PTRS: 15 RX_AUDIT: 16 PFCA: 17 WSR: 18 MEDISPEND: 19 PORZIO: 20 sources: - ALLAPACspecialities: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 GCP: 7 SSE: 8 PFORCERX: 9 PFORCERX_ODS: 9 VEEVALINK: 10 KOL_OneView: 11 ONEMED: 11 ENGAGE: 12 MAPP: 13 THUB: 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 JPDWH: 3 VOD: 4 PFORCERX: 5 PFORCERX_ODS: 5 VEEVALINK: 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 GRV: 11 GCP: 12 SSE: 13 PCMS: 14 
WEBINAR: 15 DREAMWEAVER: 16 EVENTHUB: 17 SPRINKLR: 18 THUB: 19 PTRS: 20 MEDISPEND: 21 PORZIO: 22 sources: - ALLThe specialty rank sort process operates under the following conditions:Specialty is sorted with the following rules, but on the top, it is grouped by Speciality.SpecialityType attribute:Group by Speciality.SpecialityType attribute and sort each group: sort by specialty unspecified status value (higher value on the top) - attribute Specialty with value Unspecifiedsort by source order number (the lower number on the top) - source name is taken from crosswalk that was last updatedsort by source rank (the lower on the top) - attribute Source.SourceRanksort by last update date (the earliest on the top) - last update date is taken from lately updated crosswalksort by specialty attribute value (string comparison) - attribute SpecialtySorted specialties are recalculated for new Rank - each Specialty Rank is reassigned with an appropriate number - attribute Rank. The primary flag is set for the top ranked specialty.Additionally:If the source is not found it is set to 99If specialty unspecified attribute name or value is not set it is set to 99Business requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" + }, + { + "title": "Enricher Processor", + "pageID": "302687243", + "pageLink": "/display/GMDM/Enricher+Processor", + "content": "EnricherProcessor is the first PreCallback processor applied to incoming events. It enriches reference attributes with refEntity attributes, for the Rank calculation purposes. Usually, enriched attributes are removed after applying all PreCallbacks - this is configurable using cleanAdditionalRefAttributes flag. The only exception is GBL (EX-US), where attributes remain for CN. 
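The enrichment and later cleanup described above can be sketched as a pair of functions. This is a rough sketch: the entity/attribute shapes and the fetch_entity callback (standing in for the Manager lookup with cache) are assumptions, not the service's real interfaces:

```python
# Minimal sketch of the Enricher/Cleaner pair: copy the referenced entity's
# attributes under <refAttr>.refEntity.attributes, and strip them out later.
def enrich(entity, ref_attr_names, fetch_entity):
    for name in ref_attr_names:
        for ref in entity["attributes"].get(name, []):
            end_obj = fetch_entity(ref["endObjectUri"])   # Manager + cache
            ref.setdefault("refEntity", {})["attributes"] = end_obj["attributes"]
    return entity

def clean(entity, ref_attr_names):
    # Cleaner Processor: remove the "borrowed" refEntity.attributes maps
    for name in ref_attr_names:
        for ref in entity["attributes"].get(name, []):
            ref.get("refEntity", {}).pop("attributes", None)
    return entity
```

With maxDepth 2 this is a single hop, as in the ContactAffiliations example: the HCO's attributes are borrowed for the Rank calculation and dropped before the event reaches the Event Publisher.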
Removing "borrowed" attributes is carried out by the Cleaner Processor. Algorithm: For targetEntity: Find reference attributes matching the configuration. For each such attribute: walk the relation to get the endObject entity; fetch the endObject entity's current state through Manager (using the cache); copy that entity's attributes into this reference attribute, inserting them at the .refEntity.attributes path. Steps a-b are applied recursively, according to the configured maxDepth. Example: Below is the EnricherProcessor config from APAC PROD's Precallback Service:\nrefLookupConfig:\n - cleanAdditionalRefAttributes: true\n country:\n - AU\n - IN\n - JP\n - KR\n - NZ\n entities:\n - attributes:\n - ContactAffiliations\n type: HCP\n maxDepth: 2\nHow to read the config: for entities with Country: Australia, India, Japan, South Korea or New Zealand, of entity type HCP, enrich ContactAffiliations so that it contains refEntity's attributes as sub-attributes; do that with depth 2 - so simply take the HCO's attributes and insert them into ContactAffiliations.refEntity.attributes; after all calculations have finished, remove the "borrowed" attributes, so that the event passed to the Event Publisher does not have them." }, { "title": "Cleaner Processor", "pageID": "302687603", "pageLink": "/display/GMDM/Cleaner+Processor", "content": "Cleaner Processor removes attributes enriched by the Enricher Processor. It is one of the last processors in the Precallback Service's execution order. The processor checks the cleanAdditionalRefAttributes flag in the config. Algorithm: For targetEntity: Find all refLookupConfig entries applicable for this Country. For all attributes in the found entries, remove the refEntity.attributes map." }, { "title": "Inactivation Generator", "pageID": "302697554", "pageLink": "/display/GMDM/Inactivation+Generator", "content": "Inactivation Generator is one of Precallback Service's event Processors. 
It checks the input event's targetEntity and changes the event type to INACTIVATED if it detects one of the following: for entities - targetEntity's endDate is set; for relations - targetRelation's endDate is set, targetRelation's startRefIgnored == true, or targetRelation's endRefIgnored == true. Algorithm: For each event: If targetEntity is not null and targetEntity.endDate is null, skip the event. If targetRelation is not null: if targetRelation.endDate is null and targetRelation.startRefIgnored is not true and targetRelation.endRefIgnored is not true, skip the event. Search the mapping for the adequate output event type, according to the mapping below; if no match is found, skip the event. Inbound -> outbound event types: HCP_CREATED / HCP_CHANGED -> HCP_INACTIVATED; HCO_CREATED / HCO_CHANGED -> HCO_INACTIVATED; MCO_CREATED / MCO_CHANGED -> MCO_INACTIVATED; RELATIONSHIP_CREATED / RELATIONSHIP_CHANGED -> RELATIONSHIP_INACTIVATED. Return the same event with the new event type, according to the mapping above." }, { "title": "MultiMerge Processor", "pageID": "302697588", "pageLink": "/display/GMDM/MultiMerge+Processor", "content": "MultiMerge Processor is one of Precallback Service's event Processors. For MERGED events, it checks if targetEntity.uri is equal to the first URI from entitiesURIs. If it is different, entitiesURIs is adjusted by inserting targetEntity.uri at the beginning. This ensures that entitiesURIs[0] always contains the merge winner, even in cases of multiple merges. Algorithm: For each event of type HCP_MERGED, HCO_MERGED or MCO_MERGED, do: if targetEntity.uri is null, skip the event; if entitiesURIs[0] and targetEntity.uri are equal, skip the event; otherwise insert targetEntity.uri at the beginning of entitiesURIs and return the event." }, { "title": "OtherHCOtoHCOAffiliations Rankings", "pageID": "319291954", "pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+Rankings", "content": "Description: The process was designed to rank OtherHCOtoHCOAffiliations with rules that are specific to the country. 
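The MultiMerge adjustment described above (keeping the merge winner at the front of entitiesURIs) can be sketched as a small function. The event dictionary shape is an illustrative assumption:

```python
# Sketch of the MultiMerge adjustment: keep the merge winner at entitiesURIs[0].
MERGED_TYPES = {"HCP_MERGED", "HCO_MERGED", "MCO_MERGED"}

def ensure_winner_first(event):
    if event.get("type") not in MERGED_TYPES:
        return event
    winner = (event.get("targetEntity") or {}).get("uri")
    uris = event.setdefault("entitiesURIs", [])
    if winner is None or (uris and uris[0] == winner):
        return event                      # already consistent, nothing to fix
    uris.insert(0, winner)                # winner moves to the front
    return event
```

As in the page's algorithm, the URI is inserted rather than moved, so downstream consumers can always rely on index 0 being the winner.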
The current configuration contains the Activator and Rankers available for the AU and NZ countries and the OtherHCOtoHCOAffiliations type. The process (compared to ContactAffiliations) was designed to process RELATIONSHIP_CHANGE events, which are single events that each contain one piece of information about a specific relation. The process builds a cache with the hierarchy of objects where the main object is the Reltio EndObject (the direction in which we check and implement the Rankings: (child) END_OBJECT -> START_OBJECT (parent)). A change in a relation does not generate HCO_CHANGE events, so we need to watch relation events; relation change/create/remove events may change the hierarchy and the ranking order. Compared to the ContactAffiliations ranking logic, where a change on an HCP object carried information about the whole hierarchy in one event (so we could compute and generate events based on HCP CHANGE alone), this new logic builds the hierarchy based on RELATIONSHIP events, compacts the changes in a time window, and generates events after aggregation to limit the number of changes in Reltio and API calls. 
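The hierarchy cache keyed by endObjectId that this page describes (loaded once per end object, then joined with incoming relation events) can be sketched as follows. The relation shapes and the load_relations callback standing in for the Mongo/Manager lookup are assumptions:

```python
# Sketch of the relation-hierarchy cache keyed by endObjectId: load the
# current hierarchy once, then join incoming RELATIONSHIP events into it.
def update_cache(cache, event, load_relations):
    """cache: {endObjectId: {relationUri: relation}}."""
    rel = event["relation"]
    end_id = rel["endObjectUri"]
    if end_id not in cache:
        # one-time load of the current hierarchy (e.g. from Mongo via Manager)
        cache[end_id] = {r["relationUri"]: r for r in load_relations(end_id)}
    # join the incoming relation into the KeyValue map for this end object
    cache[end_id][rel["relationUri"]] = rel
    return cache[end_id]
```

Keying everything by endObjectId is also what keeps all events for one end object on a single Kafka partition, so the one-time load happens exactly once.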
DATA VERIFICATION: Snowflake queries:\nSELECT COUNT(*) FROM (\n\nSELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_RELATIONS\n\nWHERE COUNTRY = 'AU' and RELATION_TYPE ='OtherHCOtoHCOAffiliations' and ACTIVE = TRUE\n\nGROUP BY END_ENTITY_URI\n\n)\n\n\nSELECT COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_ENTITIES\n\nWHERE ENTITY_TYPE='HCO' and COUNTRY ='AU' AND ACTIVE = TRUE\n\nSELECT COUNT(*) FROM (\n\nSELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_RELATIONS\n\nWHERE COUNTRY = 'NZ' and RELATION_TYPE ='OtherHCOtoHCOAffiliations' and ACTIVE = TRUE\n\nGROUP BY END_ENTITY_URI\n\n)\nA few example cases from APAC QA (END_ENTITY_URI, country, count): 010Xcxi NZ 2; 00zxT2O NZ 2; 008NxIA NZ 2; 1CVfmxOm NZ 2; VCMuTvz NZ 2; cvoyNhG NZ 2; VCMnOvP NZ 2; 00yZOis NZ 2; 00JoRnN NZ 2.\nSELECT END_ENTITY_URI, COUNTRY, COUNT(*) AS count FROM CUSTOMER_SL.MDM_RELATIONS\n\nWHERE RELATION_TYPE ='OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE\n\nAND COUNTRY IN ('AU','NZ')\n\nGROUP BY END_ENTITY_URI, COUNTRY\n\nORDER BY count DESC\nCq2pWio AU 5; 00KcdEA AU 3; T5NxyUa AU 3; ZsTdYcS AU 3; XhGoqwo AU 3; 00wMWdy AU 3; Cq1wjj8 AU 3. The direction that we should check and in which we implement the Rankings: (child) END_OBJECT -> START_OBJECT (parent). We start with child objects, check whether a child is connected to multiple parents, and rank accordingly. In most cases (99%), there will be one relation, which is auto-filled with rank=1 during load. 
If not we are going to rank this using below implementation:Example:https://mpe-02.reltio.com/nui/xs4oRCXpCKewNDK/profile?entityUri=entities%2F00KcdEAREQUIREMENTS:Flow diagramLogical ArchitecturePreDelayCallback LogicStepsOverview Reltio attributes\nATTRIBUTES TO UPDATE/INSERT\nRANK\n {\n "label": "Rank",\n "name": "Rank",\n "description": "Rank",\n "type": "Int",\n "hidden": false,\n "important": false,\n "system": false,\n "required": false,\n "faceted": true,\n "searchable": true,\n "attributeOrdering": {\n "orderType": "ASC",\n "orderingStrategy": "LUD"\n },\n "uri": "configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Rank",\n "skipInDataAccess": false\n },\nPreCallback Logic - RANK ActivatorDelayRankActivationProcessor:The purpose of this activator is to pick specific events and push them to delay-events topics, events from this topic will be ranked using the algorithm described on this page (OtherHCOtoHCOAffiliations Rankings), the flow is also described below.Logic:Check the activation criteria, when true process the event to the delay topic, otherwise, push the main event as is to proc-events topic to next HUB processing phase (event publishing)When all activation criteria are met:acceptedEventTypes – events are RELATION types from the listacceptedRelationObjectTypes – the event is relation type and is the type specified – OtherHCOToHCOacceptedCountries – relation is from a specified countryDo:pick the eventscopy the main event to the delayedEventsclear the mainEvents (do not push events to next publishing phase)Before sending apply the additionalFunctions (specify the interface/process and run all selected)Here change the Kafka Key and put the relation.endObject.objectURI as a RELATION event key.Example configuration for AU and NZ:delayRankActivationCallback: featureActivation: true activators: - description: "Delay OtherHCOtoHCOAffiliations RELATION events from AU and NZ country to calculate Rank in delay service" acceptedEventTypes: - 
RELATIONSHIP_CHANGED - RELATIONSHIP_CREATED - RELATIONSHIP_REMOVED - RELATIONSHIP_INACTIVATED acceptedRelationObjectTypes: - configuration/relationTypes/OtherHCOtoHCOAffiliations acceptedCountries: - AU - NZ additionalFunctions: - RelationEndObjectAsKafkaKey. PreDelayCallback - RANK Logic: The purpose of this pre-delay-callback service is to rank specific objects (currently the OtherHCOToHCO ranking is available for AU and NZ - OtherHCOtoHCOAffiliations Rankings). CallbackWithDelay and CurrentStateCache advantages: The cache is built on the fly based on Mongo (a one-time GET of each end object) and enriched by events during its lifetime - the logic is in Kafka Streams and we use the Kafka Streams state store. (optional) A model change (re-ranking) will cause cache removal and regeneration of events – the cache will be rebuilt with the new model, so in case of future changes we can re-rank based on new rules. The cache contains only the required attributes and is updated in real time. In most cases the relations are in sync, so no changes will be pushed to the delay-events topic – everything will be pushed in real time to the target systems (Snowflake). In case of any change in any relation, we aggregate all relations by the EndObjectId. This allows us to emit an aggregation window one time for each EndObject, so that changes are generated for one entity in one run. It may also happen that one new relation re-ranks the whole object hierarchy. Using this logic, one event goes to the Delay logic, triggers the difference comparison, and generates multiple updates. These updates (after Reltio publishing) will go to the PreDelay state and we are going to check if the data is in sync and if we generated all events. 
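This in-sync check (the page's isRelationRankInSyncWithCurrentSortedState function) can be sketched roughly as follows. The relation shapes and the injected sort key are illustrative assumptions:

```python
# Rough sketch of the PreDelay in-sync check: a relation's stored Rank must
# match its position in the freshly sorted current state for its end object.
def is_rank_in_sync(relation, current_state, sort_key):
    if relation.get("rank") is None:
        return False                      # no Rank yet -> must go to Delay
    ordered = sorted(current_state, key=sort_key)
    for pos, rel in enumerate(ordered, start=1):
        if rel["relationUri"] == relation["relationUri"]:
            return relation["rank"] == pos
    return False                          # not found in the cache -> re-rank
```

A True result lets the event flow straight to proc-events; a False result routes the end object id into the delay/aggregation window.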
In that case, all events should flow to proc-events and to Snowflake. We set a 1h window to aggregate multiple changes (relationship updates) and emit windows in 1h intervals. Snowflake is refreshed on PROD in 2h windows - we fit into this so that all events are ready and do not contain a partial state (though in Snowflake it may happen in some edge cases). The advantage of this solution is that all RELATIONS will have a Rank in Snowflake, so there will be no state without a Rank. Logic: PreDelay: Poll an event from internal-reltio-full-delay-events. For each active rank sorter (currently OtherHCOToHCO) execute the logic. We need a state store that will contain the RelationData cache of all relation hierarchies. The event key used here is endObjectId, so that all events related to a specific end object land on one partition – this way we query Mongo one time (no parallelism by endObjectId). Check if "CurrentStateCache" contains the state for endObjectId: if not – execute GetRelationsByEndObjectId (this returns a list of relations) and transform the output to the CurrentStateCache model; if it exists – update (join) the current Relation into CurrentStateCache by endObject and update the relations KeyValue map. Check if the Relation Rank is in sync with the SortedState, and if true push such an event to the outputTopic (reltio-proc-events): execute the function isRelationRankInSyncWithCurrentSortedState(Relation, CurrentStateCache): if Relation.Rank == null -> false; if Relation.Rank != null, sort CurrentStateCache and check if the Relation's Rank is the same as in the SortedStateCache (i.e. check whether the current Relation Rank is correct). If the function returns true – publish the Relationship event to the OUTPUT TOPIC – push events with the Kafka key equal to the relation (the reverse logic of RelationEndObjectAsKafkaKey). If the function returns false, go to the Delay step: push the event (end object id) to ${env}-internal-reltio-full-callback-delay-events. Delay: Aggregate all events in the time window (configurable) 
by end object IDNOTE – check the closing window for a selected key after the inactivity period – extend the window for the selected key if a new event is in. To save space in the delay/suppress window store only endObjectIDsPostDelayWhen the aggregation window is closed do:Execute the activation function.Sort(CurrentState) – check the whole hierarchy and sort the state to a desired stateThe result of this function is ArrayList of AttributeChanges related to the relations that have to be updated.As a result, push all events to bulk-callback topics that will cause an update in Reltio.Data Model and Configuration\nRelationData cache model:\n[\n Id: endObjectId\n relations:\n     - relationUri: relations/13pTXPR0\n       endObjectUri: endObjectId"      \n          country: AU \n         crosswalks:\n - type: ONEKEY\n value: WSK123sdcF\n deleteDate: 123324521243\n RankUri: e.g. relations/13pTXPR0/attributes/Rank\n Rank: null\n \t Attributes:\n Status:\n \t - ACTIVE                     \n        RelationType/RelationshipDescription:\n - REL.MAI\n - REL.CON\n\n]\n\n\nTriggersRankActivationTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback: DelayRankActivationProcessor$env-internal-reltio-full-eventsFull events trigger pre-callback stream and the activation logic that will route the events to next processing staterealtime - events streamOUT Activated events to be sortedCallback Service: Pre-Callback: DelayRankActivationProcessor $env-internal-reltio-full-delay-eventsOutput topicrealtime - events streamTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-delay-service: Pre-Delay-Callback: PreCallbackDelayStream$env-internal-reltio-full-delay-eventsDELAY: ${env}-internal-reltio-full-callback-delay-eventsFull events trigger pre-delay-callback stream and the ranking logicrealtime - events streamOUT Sorted events with the correct state mdm-callback-delay-service: Pre-Delay-Callback: 
PreCallbackDelayStream | $env-internal-reltio-proc-events | Output topic with correct events | realtime - events stream. OUT Reltio Updates | mdm-callback-delay-service: Pre-Delay-Callback: PostCallbackStream | $env-internal-async-all-bulk-callbacks | Output topic with Reltio updates | realtime - events stream. Dependent components: Callback Service - the RELATION ranking activator that pushes events to the delay service; Callback Delay Service - the main service with the OtherHCOtoHCOAffiliations Rankings logic; Entity Enricher - generates the incoming full events; Manager - processes callbacks generated by this service. Attachment docs with more technical implementation details: example-reqeusts.json" }, { "title": "HCPType Callback", "pageID": "347637202", "pageLink": "/display/GMDM/HCPType+Callback", "content": "Description: The process was designed to update the HCPType RDM code in the TypeCode attribute on HCP profiles. The process is based on event streaming: the main event is recalculated based on the current state, and a callback is generated by comparing the existing TypeCode on the Profile with the calculated value. This process (like all processes in the PreCallback Service) blocks the main event and will send the update to external clients only when the update is visible in Reltio and TypeCode contains the correct code. The process uses RDM as an internal cache and calculates the output value based on the current mapping. To limit the number of requests to RDM we use the internal Mongo cache and we refresh this cache every 2 hours on PROD. 
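The caching with a periodic refresh described above amounts to a TTL cache. A minimal sketch, with the loader callback and the 2-hour default standing in for the real RDM/Mongo refresh:

```python
import time

# Minimal TTL-cache sketch for RDM lookups (illustrative names; the service
# caches RDM codes and refreshes them every 2 hours on PROD).
class TtlCache:
    def __init__(self, loader, ttl_seconds=2 * 60 * 60):
        self.loader = loader
        self.ttl = ttl_seconds
        self.value = None
        self.loaded_at = float("-inf")    # force a load on first access

    def get(self):
        if time.monotonic() - self.loaded_at > self.ttl:
            self.value = self.loader()    # refresh from RDM/Mongo
            self.loaded_at = time.monotonic()
        return self.value
```

Within the TTL every lookup is served from memory, so steady-state traffic generates no extra RDM requests.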
Additionally we designed the in-memory cache to store the 2 required codes (PRES/NON-PRESC) with HUB_CALLBACK source code values. This logic is related to these 2 values in Reltio HCP profiles: Type - Prescriber (HCPT.PRES); Type - Non-Prescriber (HCPT.NPRS). Why this process was designed: With the addition of the Eastern Cluster LOVs, we hit the limit/issue where the HCP Type Prescriber & Non-Prescriber canonical codes no longer fit into RDM. The issue is a size limit in RDM's underlying GCP tech stack; it is a GCP physical limitation and cannot be increased. We cannot add new RDM codes to the PRES/NON-PRESC codes and this will cause issues in HCP data. The previous logic: In the ingestion service layer (all API calls) there was a DQ rule called "HCP TypeCode". This logic adds the TypeCode as a concatenation of SubTypeCode and the Speciality ranked 1; the logic takes the source code and puts the concatenation in the TypeCode attribute. The number of source-code combinations is reaching the limit, so we are building the new logic. For future reference, here are the old DQ rules that will be removed after we deploy the new process. DQ rules (sort rank): - name: Sort specialities by source rank category: OTHER createdDate: 20-10-2022 modifiedDate: 20-10-2022 preconditions: - type: operationType values: - create - update - type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_ID - type: not preconditions: - type: match attribute: TypeCode values: - "^.+$" action: type: sort key: Specialities sorter: SourceRankSorter. DQ rules (add sub type code): - name: Autofill sub type code when sub type is null/empty category: AUTOFILL_BASE createdDate: 20-10-2022 modifiedDate: 20-10-2022 preconditions: - type: operationType values: - create - update - type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_ID - KOL_OneView action: type: modify attributes: - TypeCode value: "{SubTypeCode}-{Specialities.Specialty}" replaceNulls: true when: - "" - 
"NULL". Example of previous input values: attributes: "TypeCode": [ { "value": "TYP.M-SP.WDE.04" } ] - TYP.M is a SubTypeCode; SP.WDE.04 is a Speciality; calculated value: PRESC. As we can see on this screenshot from EMEA PROD, there are 2920 combinations for the single ONEKEY source that generate the PRESC value. The new logic: The new logic was designed in the pre-callback service in hybrid mode. The logic uses the same assumptions as the previous version, but instead we use Reltio canonical codes, and this limits the number of combinations. We provide this value using only one source, HUB_CALLBACK, so there is no need to configure ONEKEY, GRV and all the other sources that provide multiple combinations. Advantages: The service populates HCP Type with SubType & Specialty canonical codes. HCP Type LOVs are reduced to a single source (HUB_CALLBACK) and canonical codes. A change in the HCP Type RDM will be processed using the standard reindex process. This change impacts the Historical Inactive flow – the change is described in Snowflake: HI HCPType enrichment. 
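The new TypeCode derivation (canonical SubTypeCode plus the Rank-1 Specialty canonical code, joined with a hyphen and translated through the RDM mapping) can be sketched as below. The RDM_HCP_TYPE entry and the attribute shapes are illustrative assumptions:

```python
# Sketch of the new TypeCode derivation: combine canonical codes with "-"
# and translate via the RDM mapping to an HCP Type code.
RDM_HCP_TYPE = {"HCPST.M-SP.AN": "HCPT.PRES"}    # illustrative entry only

def derive_type_code(sub_type_code, specialties):
    # pick the canonical code of the specialty ranked 1, else empty string
    rank1 = next((s["lookupCode"] for s in specialties if s.get("rank") == 1), "")
    combined = f"{sub_type_code or ''}-{rank1}"
    return RDM_HCP_TYPE.get(combined)            # None -> no known HCP Type
```

Because the mapping is keyed by canonical codes rather than raw source codes, one HUB_CALLBACK entry replaces the thousands of per-source combinations mentioned above.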
Key features in the new logic and what you should know: A change in the HCP Type RDM will be processed using the standard reindex process. Calculation of the HCP TypeCode is based on the OV profile and Reltio canonical codes. Previously, each source delivered data and the ingestion service calculated TypeCode based on the RAW JSON data delivered by the source; now we calculate on the OV Profile, not on the source level. We deliver only one value, using the HUB_CALLBACK crosswalk. Once we receive the event we have access to ov:true – the golden profile. Specialties is a list; each entry has the SourceName and SourceRank, so we pick the one with Rank 1 for the selected profile. SubTypeCode is a single attribute, and we can pick only the ov:true value. The 2 canonical codes are mapped to the TypeCode attribute as in the example below. Activation/Deactivation of profiles in Reltio and the Historical Inactive flow (Snowflake: HI HCPType enrichment, Snowflake: History Inactive): When the whole profile is deactivated, the HUB_CALLBACK technical crosswalks are hard-deleted, so HCPTypeCode will be hard-deleted. This impacts HI Views because the HUB_CALLBACK value will be dropped. We implemented logic in the HI view that rebuilds the TypeCode attribute and puts the PRES/NON-PRESC value in the JSON file visible in the HI view. Reltio contains checksum logic and does not generate an event when the sourceCode changes but maps to the same canonical code. We implemented delta detection logic and we send an update only when a change is detected; the lookup to RDM requires logic to resolve the HUB_CALLBACK code to a canonical code. 
An update is generated only when: the Type does not exist yet; the Type changes from PRESC to NON-PRESC; or the Type changes from NON-PRESC to PRESC. Example of new input values: attributes: "TypeCode": [ { "value": "HCPST.M-SP.AN" } ] - TYP.M is a SubTypeCode source code mapped to HCPST.M; SP.WDE.04 is a Speciality source code mapped to SP.AN; rdm/lookupTypes/HCPSubTypeCode: HCPST.M; rdm/lookupTypes/HCPSpecialty: SP.AN. Flow diagram. Logical Architecture. HCPType PreCallback Logic. Steps. Overview of Reltio attributes and RDM: { "label": "Type", "name": "TypeCode", "description": "HCP Type Code", "type": "String", "hidden": false, "important": false, "system": false, "required": false, "faceted": true, "searchable": true, "attributeOrdering": { "orderType": "ASC", "orderingStrategy": "LUD" }, "uri": "configuration/entityTypes/HCP/attributes/TypeCode", "lookupCode": "rdm/lookupTypes/HCPType", "skipInDataAccess": false }, Based on: SubTypeCode: { "label": "Sub Type", "name": "SubTypeCode", "description": "HCP SubType Code", "type": "String", "hidden": false, "important": false, "system": false, "required": false, "faceted": true, "searchable": true, "attributeOrdering": { "orderType": "ASC", "orderingStrategy": "LUD" }, "uri": "configuration/entityTypes/HCP/attributes/SubTypeCode", "lookupCode": "rdm/lookupTypes/HCPSubTypeCode", "skipInDataAccess": false }, Speciality: { "label": "Specialty", "name": "Specialty", "description": "Specialty of the entity, e.g., Adult Congenital Heart Disease", "type": "String", "hidden": false, "important": false, "system": false, "required": false, "faceted": true, "searchable": true, "attributeOrdering": { "orderingStrategy": "LUD" }, "cardinality": { "minValue": 0, "maxValue": 1 }, "uri": "configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty", "lookupCode": "rdm/lookupTypes/HCPSpecialty", "skipInDataAccess": false }, RDM codes: rdm/lookupTypes/HCPType: HCPT.NPRS; rdm/lookupTypes/HCPType: HCPT.PRES. HCPType PreCallback Logic. Flow: Component Startup: during the Pre-Callback component startup we initialize an in-memory cache to store the 2 PRESC and NPRS values for the HUB_CALLBACK source. This implementation limits the number of requests to RDM/Reltio through Manager, and also limits the number of API calls to the Manager service from the pre-callback service. The cache contains a TTL configuration and is invalidated after the TTL. Activation: Check if the feature activation flag is true. Take into account only the CHANGED and CREATED events; this pre-callback implementation is limited to HCP objects. Take into account only profiles whose crosswalks are not on the following list. When a Profile contains only crosswalks related to this configuration list, skip the TypeCode generation. 
When the profile contains one of the following crosswalks and additionally a valid crosswalk like ONEKEY, generate a TypeCode.- type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_IDStepsEach CHANGE or CREATE event triggers the following logic:Get the canonical code from HCP/attributes/SubTypeCode: pick the lookupCode; if the lookupCode is missing and a lookupError exists, pick the value; if the SubTypeCode does not exist, put an empty value = ""Get the canonical code from the HCP/attributes/Specialities/attributes/Specialty array: pick the speciality with Rank equal to 1; pick the lookupCode; if the lookupCode is missing and a lookupError exists, pick the value; if the Specialty does not exist, put an empty value = ""Combine the two canonical codes, using the "-" hyphen character as a concatenation.possible values:--""""-""-""Execute the delta detection logic: using the RDM cache translate the generated value to the PRESC or NPRES codeCompare the generated value with HCP/attributes/TypeCode: pick the lookupCode and compare to the generated and translated value; if the lookupCode is missing and a lookupError exists, pick the value and compare to the generated, untranslated valueGenerate:INSERT_ATTRIBUTE: when TypeCode does not existUPDATE_ATTRIBUTE: when the value is differentForward the main event to the next processing topic when there are 0 changes.TriggersTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback:HCP Type Callback logicFull events trigger the pre-callback stream and during processing partial events with the generated changes are produced. 
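The steps above (combine the SubTypeCode and rank-1 Specialty canonical codes, translate via RDM, then delta-detect against the current TypeCode) can be sketched in Python. This is an illustrative reduction: the field names `lookupCode`, `value` and `rank` follow the description above, but the helper names are assumptions and the lookupError fallback is simplified.

```python
def canonical(attr):
    """Pick lookupCode, else fall back to the raw value (simplified: the real
    logic falls back only when a lookupError is present), else ''."""
    if not attr:
        return ""
    return attr.get("lookupCode") or attr.get("value") or ""

def derive_type_code(sub_type, specialties, rdm_map):
    """Combine SubTypeCode and the rank-1 Specialty canonical codes with '-'."""
    rank1 = next((s for s in specialties or [] if s.get("rank") == 1), None)
    combined = f"{canonical(sub_type)}-{canonical(rank1)}"
    # The RDM cache translates the combined code to PRESC/NPRES;
    # unknown combinations map to None here
    return combined, rdm_map.get(combined)

def delta(current_type, translated):
    """Delta detection: emit a partial change only when something changed."""
    if translated is None:
        return None
    if not current_type:
        return "INSERT_ATTRIBUTE"   # TypeCode does not exist yet
    if canonical(current_type) != translated:
        return "UPDATE_ATTRIBUTE"   # value differs
    return None                     # in sync: forward the main event unchanged
```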
If data is in sync partial event is not generated, and the main event is forwarded to external clientsrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component of flow implementationEntity EnricherGenerates incoming events full eventsManagerProcess callbacks generated by this serviceHub StoreHUB Mongo CacheLOV readLookup RDM values flow" + }, + { + "title": "China IQVIA<->COMPANY", + "pageID": "263501508", + "pageLink": "/display/GMDM/China+IQVIA%3C-%3ECOMPANY", + "content": "DescriptionThe section and all subpages describe HUB adjustments for China clients with transformation to the COMPANY model. HUB created a logic to allow China clients to make a transparent transition between IQVIA and COMPANY Models. Additionally, the DCR process will be adjusted to the new COMPANY model. The New DCR process will eliminate a lot of DCRs that are currently created in the IQVIA tenant. The description of changes and all flows are described in this section and the subpages, links are displayed below. HUB processed all the changes in MR-4191 – the MAIN task, To verify and track please check Jira.China Changes:China is now using the IQVIA model (createHCP operation)The goal realized in these changes is to have the same features as COMPANY model but China will use the IQVIA model (for China change should be transparent)current IQVIA PROD - https://eu-360.reltio.com/ui/FW2ZTF8K3JpdfFl (GBL PROD)new COMPANY PROD - https://ap-360.reltio.com/ui/sew6PfkTtSZhLdW/ (APAC PROD)Changes in Direct Channel (API) (input IQVIA model -> output COMPANY model transformation)Changes in Events Streaming (events) (input COMPANY model -> output IQVIA model transformation)Changes in map-channel. 
China GRV data in IQVIA model loaded to COMPANY modelCreate a Generic common transformation class:transformIqviaToCOMPANYtransformCOMPANYToIqviaDCR China adjustments to the COMPANY modelFlowsChina IQVIA - current flow and user properties + COMPANY changesOn this page, the current IQVIA flow for China users is described.User properties for China users, the DCR activation criteria.HUB components and China configuration used in HUBThe page contains also COMPANY changes and affected components that will be changedCreate HCP/HCO complex methods - IQVIA model (legacy)This page describes the HCP/HCO create API operations used in IQVIA, based on this logic new COMPANY logic was adjusted.Old logic is complicated and will be deprecated in the future.New logic contains the new solutions and was written in a more readable format. In the new logic, the DCR process is moved outside of the API to the external dcr-service-2 component.Create HCP/HCO complex V2 methods - COMPANY modelNew COMPANY logic for the creation of the HCP and HCO objects.Logic is divided into two sectionssimple - create an HCP/HCO object without affiliationscomplex - create an HCP/HCO object with affiliations Logic also triggers the DCR process if required.The new COMPANY code changes add the V1 and V2 prefixes to the API.Existing COMPANY model operations will be switched to V2 APIsIQVIA users will use V1 API - this is required to keep the old logic, in the future old V1 API will be deprecated and removed.V1/V2 APIs are transparent for the external clients, this is handled on the HUB sideDCR IQVIA flowOLD DCR IQVIA model logicDCR COMPANY flowNew DCR COMPANY model logicChina Selective Router - model transformation flowAdditionall, microservice used to transform COMPANY model events to IQVIA modelThe microservice used the predefined mapping and transforms the output events to the China target output topicThe logic contains also the Reference Attributes lookup like:L1 - get HCP → HCO (Workplaces using COMPANY 
ContactAffiliations)L2 - get HCO → HCO (MainHCO using COMPANY OtherHCOtoHCOAffiliations)The output HCP is combined and contains full information about all L1 and L2 objects (same as on IQVIA)Model Mapping (IQVIA<->COMPANY)Model mapping documentTransformation used during API calls or events streaming processing User Profile (China user)User Profile for the China user contains all details and configuration properties in one place.All DCR/Search/Trigger/CrosswalkGenerators are configured in one file and are shared across all HUB microservices. TriggersDescribed in separate sub-pages for each process.Dependent componentsDescribed in separate sub-pages for each process.Documents with HUB detailsmapping China_attributes.xlsxAPI: China_HUB_Changes.docxdcr: China_HUB_DCR_Changes.docx" }, { "title": "China IQVIA - current flow and user properties + COMPANY changes", "pageID": "284805827", "pageLink": "/pages/viewpage.action?pageId=284805827", "content": "DescriptionOn this page, the current IQVIA flow is described. It contains the full API description and the complex API on the IQVIA end, with all details about the HUB configuration and properties used for the China IQVIA model.In the next section of this page, the COMPANY changes are described in a generic way. More details of the new COMPANY complex model and API adjustments are described in other subpages. 
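The transformIqviaToCOMPANY/transformCOMPANYToIqvia pair mentioned above can be pictured as a reversible attribute rename. This sketch maps only the two L1/L2 attribute names that appear on this page (Workplace → ContactAffiliations, MainHCO → OtherHCOtoHCOAffiliations); the real transformation classes cover the whole model, and the Python function names are stand-ins for the Java implementations.

```python
# Illustrative subset of the IQVIA -> COMPANY attribute mapping
IQVIA_TO_COMPANY = {
    "Workplace": "ContactAffiliations",
    "MainHCO": "OtherHCOtoHCOAffiliations",
}
COMPANY_TO_IQVIA = {v: k for k, v in IQVIA_TO_COMPANY.items()}

def transform_iqvia_to_company(entity):
    """Rename IQVIA attributes to their COMPANY counterparts, pass others through."""
    return {IQVIA_TO_COMPANY.get(k, k): v for k, v in entity.items()}

def transform_company_to_iqvia(entity):
    """Inverse mapping; round-tripping must be lossless so the change stays
    transparent for China clients."""
    return {COMPANY_TO_IQVIA.get(k, k): v for k, v in entity.items()}
```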
IQVIACurrent process notes:China uses the createHCP operation (the object with affiliation to HCO(Workplace) and MainHCO(Hospital))GRV source is the only source that creates DCRsCurrent operations used by ChinaIQVIA Kibana details: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/r/s/BrC2vOperations:GetEntity (only used by event hub user)CreateHCORoute (china_apps)CreateHCPRoute (china_apps and map_channel)CreateDCRRoute (as a part of a createHCP route where DCR is executed)UpdateHCPRoute (china_apps)Users:eventhubchina_appsmap_channelSources:GRVEVRMDEFACECN3RDPARTYMap_ChannelGRV source is there with CN countryManagerManager affiliations activation and configuration\naffiliationConfig:\n hcpToL1HcoRefAttributeName:\n Workplace:\n - country: "CN"\n hcpToL2HcoRefAttributeName:\n MainWorkplace:\n - country: "CN"\n hcoToHcoRefAttributeName:\n MainHCO:\n - country: "CN"\n waitForNewHcoDCRApprove:\n - country: "CN"\n\n\nDCRs current legacy config\ndcrConfig:\n dcrProcessing: yes\n routeEnableOnStartup: yes\n deadLetterEndpoint: "file:///opt/app/log/rejected/"\n externalLogActive: yes\n activationCriteria:\n NEW_HCO:\n - country: "CN"\n sources:\n - "CN3RDPARTY"\n - "FACE"\n - "GRV"\n NEW_HCP:\n - country: "CN"\n sources:\n - "GRV"\n NEW_WORKPLACE:\n - country: "CN"\n sources:\n - "GRV"\n - "MDE"\n - "FACE"\n - "CN3RDPARTY"\n - "EVR"\n\n externalDCRActivationCriteria:\n - country: "CN"\n sources:\n - "CN3RDPARTY"\n - "FACE"\n - "GRV"\n\n continueOnHCONotFoundActivationCriteria:\n - country: "CN"\n sources:\n - "GCP"\n - countries:\n - AD\n - BL\n - BR\n - DE\n - ES\n - FR\n - FR\n - GF\n - GP\n - IT\n - MC\n - MF\n - MQ\n - MU\n - MX\n - NC\n - NL\n - PF\n - PM\n - RE\n - RU\n - TR\n - WF\n - YT\n sources:\n - GRV\n - GCP\n validationStatusesMap:\n VALID: validated\n NOT_VALID: notvalidated\n PENDING: pending\n\n delayPrcInSeconds: 3600\n dcrTopic: "{{env_name}}-gw-dcr-requests"\n\n\nUsers that use CN country in HUB:china_apps\n- name: "china_apps"\n description: 
"China applications access user"\n defaultClient: "ReltioAll"\n roles:\n - "CREATE_HCP"\n - "CREATE_HCO"\n - "UPDATE_HCO"\n - "UPDATE_HCP"\n - "GET_ENTITIES"\n - "RESPONSE_DCR"\n - "LOOKUPS"\n countries:\n - "CN"\n sources:\n - "CN3RDPARTY"\n - "MDE"\n - "FACE"\n - "EVR"\n\n\n\nmap_channel\n- name: "map_channel"\n description: "Map Channel (Handler) account"\n defaultClient: "ReltioAll"\n roles:\n - "UPDATE_HCP"\n - "CREATE_HCP"\n - "CREATE_HCO"\n - "DELETE_CROSSWALK"\n countries:\n - "CN"\n - "AD"\n…\n sources:\n - "GRV"\n - "GCP"\n\n\nCallback-Service:refLookupConfig\nrefLookupConfig:\n - country: CN\n maxDepth: 2\n useCache: true\n entities:\n - type: HCP\n attributes:\n - Workplace\n - type: HCO\n attributes:\n - MainHCO\nThe callback service is adding enrichment to HCP. Workplace and HCP.Workplace.MainHCO objects – In mongo and in published events we are storing more information than the Reltio. The result is that we have the HCP full data and Workplace and full data and Workplace.MainHCO full data. The MainHCO Workplace is enriched by Workplace references. 
The Mongo and Publisher move to China data that contains full information in these objects.Published events and Mongo are enriched with this data.Event publisher:\n- id: hcp-china\n selector: "(exchange.in.headers.reconciliationTarget==null)\n && exchange.in.headers.eventType in ['full']\n && exchange.in.headers.country in ['cn']\n && ['CN3RDPARTY', 'MDE', 'FACE', 'EVR', 'GRV', 'GCP', 'Reltio'].intersect(exchange.in.headers.eventSource)\n && exchange.in.headers?.eventSubtype.startsWith('HCP_')"\n destination: "prod-out-full-mde-cn"\nPublishing of china events and sources HCP entities, full events (data is trimmed)COMPANYThe key concepts and general description of COMPANY adjustments:Current IQVIA flow should work only on old IQVIA Tenant and will be deprecated in the futureOn the new COMPANY model there will be V1 and V2 APIs versions transparent for the external client, the V2 is a new logic that will be used by all clients and also a China client with the IQVIA modelOptimization of /batch/hcp method is made as a part of these changes because now all APIs allow to the provision of the list of entities. 
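The hcp-china publisher selector quoted above is a Groovy-style expression evaluated against event headers (no reconciliation target, full HCP_* events for country cn from the listed sources). Rendered in Python it is roughly:

```python
ALLOWED_SOURCES = {"CN3RDPARTY", "MDE", "FACE", "EVR", "GRV", "GCP", "Reltio"}

def matches_hcp_china(headers):
    """Approximate Python rendering of the `hcp-china` selector expression."""
    return (
        headers.get("reconciliationTarget") is None
        and headers.get("eventType") in ("full",)
        and headers.get("country") in ("cn",)
        # Groovy `intersect`: at least one event source must be allowed
        and bool(ALLOWED_SOURCES.intersection(headers.get("eventSource", [])))
        # Groovy `?.startsWith`: tolerate a missing eventSubtype header
        and (headers.get("eventSubtype") or "").startswith("HCP_")
    )
```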
Created methods:New Service V2 ( input bulk or single entity)- POST/PATCH HCP (simple method without affiliated HCO) (array of entities)- POST/PATCH HCP (complex method with affiliated HCO) (array of entities)- POST/PATCH HCO (array of entities)- POST/PATCH MCO (array of entities)Transformation executed if:Source: IQVIA (user profile configuration)Target: COMPANY (user profile configuration)Then execute the transformation and complex with affiliated HCOAPI-router service will be used to make a transparent transition between V1 and V2 APIs2 methods v1 and v2All COMPANY clients using the COMPANY model will be switched to V2V1 will be removed in the future after IQVIA will be deprecatedTransformation LIB (full description on the different subpage):transformIqviaToCOMPANYtransformCOMPANYToIqviaUser Profile - Feature switchIQVIA vs COMPANY model on user configuration:User Profile objects will be provided. In one file whole configuration shared across all components will be present. Publishing changes:China Selective Router - new microservice – translates China events from the IQVIA model to the COMPANY modelInput: China COMPANY model topicEnrich HCP with HCO data (workplace/mainHCO)Output: target COMPANY modelOpen API Documentation on CamelSwagger UI contains the whole API description, and API documentation is managed in code and automatically generated. DCR processIntegrate manager complex method with dcr-service-2 (using triggers) Create requests that have the model in dcr-service-2K8s separated environmentAPAC-China-DEV is a separate environment used for the China testing. The environment is set up dynamically on K8sThe component changes related to this adjustment:Reltio-Subscriber component is working on DEV as an events router:There is only one SQS queue, but 2 output topics in the subscriber publisher. The event router makes a decision if we need to move this event to APAC-DEV or CHINA-DEV (e.g. china profiles tagged with china-test-cases). 
Reltio-subscriber reads the tag name and pushes this event to topic {tag-name} – specified number of tag names allowed in publishing to output topic 2 profiles – test mode. PROD – normal mode by default normal PROD mode Manager Changes Create HCP/HCO operations used by HUB automated integration tests adding the China-TEST tag that is routed only to CHINA-DEV environment HCP Service Complex (POST/PATCH) V2 Key concepts and changesCrosswalk Generator - configured in User Profile -allows to automatically generate a crosswalk when missing:(common) CrosswalkGenerator – first type (implementation) UUID generator (autofill: Type <>, Value: , SourceTable:)associated with the Service and User (when the user does not provide the crosswalk we can generate an HCP or HCO crosswalk)For example – if the missing HCP.affilaitedHCO crosswalk then we will generate a new oneFind Service - configured in User Profile - contains the implementation of multiple search cases. User can be configured to use a specific set of searches. Used for example to find Workplace related to the HCP in Complex V2 API.Find Object Method (_findObject (getByUri/getByCrosswalk/getByName e.t.c.):UserProfile configuration drivenInput entity objectSearch ByrefEntity ObjectURICrosswalkSearch method (Reltio (?filter) ) – getByName (search by Reltio Name attribute - configurable)...There is a possibility to add multiple different searches or configure current searches by defining the attributes namesTrigger - configured in User Profile. Contains the Trigger mode implementation. 
The trigger is executed in the following situation:Find Service execution → result → decision to be madeDecisionFoundCreate ContactAffiliations with Workplace and MainWorkplace (create ReferenceAttributes) -> HCPNotFoundUserProfile: TriggerType configurationFunction result – (ACCEPT OR REJECT + ObjectToCreate)TriggerTypeCREATE (ACCEPT, object)IGNORE (ACCEPT, nullObject)REJECT (REJECT, nullObject)DCR (ACCEPT, DCRObject)(custom function – can be Lookup) (customFunction(Object) (returns CREATE/IGNORE/REJECT)) - for example used in China to look up the STD_DPT name in RDM and make a decision based on the RDM lookup result. " }, { "title": "China Selective Router - model transformation flow", "pageID": "284800572", "pageLink": "/display/GMDM/China+Selective+Router+-+model+transformation+flow", "content": "DescriptionThe China selective router was created to enrich and transform events from the COMPANY model to the IQVIA model. The component is also able to connect a related mainHco with an hco, based on the Reltio connections API; in the IQVIA model this is reflected as MainHco in the Workplace attribute.Flow diagramStepsCollect the event from the input topicEnrich the event - based on configuration collect the hco and main hco entitiesfind the attribute with the refEntity uricall Reltio through mdm-manager to collect all related hco and mainHco entitiesreturn the event with the list of hco and the list of mainHcoConnect hco with mainHco based on Reltio connections and put the mainHco attribute into the hcoiterate over the list of hco and call Reltio for the list of connections for the current hcoif the connection list is not empty and contains an entity uri from the list of mainHcoput the existing mainHco into the hco in the 'OtherHcoToHco' attribute (the name of the attribute can be changed in configuration)Transform the event from the COMPANY model to the IQVIA modelinvoke HCPModelConverter with the base event, the list of hco and the list of mainHcothe result of the converter will be an entity in the IQVIA modelput the entity in the output eventSend the event to the output topicTriggersTrigger actionComponentActionDefault timekafka 
messageeventTransformerTopologytransform event to IQVIA modelrealtimeDependent componentsComponentUsageMdm managergetEntitiesByUrigetEntityConnectionsByUriHCPModelConvertertoIqviaModel" }, { "title": "Create HCP/HCO complex methods - IQVIA model (legacy)", "pageID": "284800564", "pageLink": "/pages/viewpage.action?pageId=284800564", "content": "DescriptionThe IQVIA China user uses the following methods to create the HCP and HCO objects - Create/Update HCP/HCO/MCO. On this linked page the API call flow is described. The most complex and important sections for China users are the following:Additional logic that is activated in the following cases:3 - during HCO update the parentHCO attribute is delivered in the request4 - during HCP create/update affiliations are delivered in the request5 - during HCP/HCO creation, based on the configuration, specific sources are enriched with cached Relation objects and these objects are injected into the main Entity as reference attributesThe IQVIA China user also activates the DCR logic using this Create HCP method. 
The complex description of this flow is here DCR IQVIA flowCurrently, the DCR activation process from the IQVIA flow is described here - DCR generation process (China DCR)New DCR COMPANY flow is described here: DCR COMPANY flowThe below flow diagram and steps description contain the detailed description of all cases used in HCP HCO and DCR methods in legacy code.Flow diagramStepsHCP Service = China logic / STEPS:China Quality Rules:The following files contain the China DQ rules in IQVIA - executed once HUB receives the JSON from the Client.DQ rules are self-documented, details can be found in the following files: affiliatedHCO : affiliatedhco-country-china-quality-rules.yamlHCP:hcp-country-china-quality-rules.yaml(common) qualityServicePipelineProvider – execute DQ rules file(common) dataProviderCrosswalkGuardrail – execute GuardRailsAffiliatedHCO LOGIC (affiliatedHCOs attribute):DQ Rules check and validation on affiliatedHCOIf empty -> add only Country from HCP and Crosswalk from HCPIf not empty -> affiliatedHCOsEntity is combined as one entity from all attributes from all arrays with Country from HCP and Crosswalk from HCPCreating affiliation logic is activated when affiliatedHCOs exist and is not emptyCreate ParameterHelper:Update (true/false) (PATCH/POST)autoCreateHCO is used in isAutoCreateHCO method below. It activates create HCO operation for MAPP and CRMMI for all countries when affiliatedHCO is not found. \naffiliationConfig:\n autoCreateHCO:\n - country: "ALL"\n sources:\n - "MAPP"\n - "CRMMI"\n\n\nRUN affiliationCreator.mapAndReplaceHospitalThe logic was designed to get MainHCO from affiliatedHCO and find this in Reltio. 
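The autoCreateHCO configuration just above activates HCO auto-creation for the MAPP and CRMMI sources with country "ALL" acting as a wildcard. `isAutoCreateHCO` is the method named in the text; the Python rendering below is a sketch of that check, not the Java implementation.

```python
# autoCreateHCO rules as shown in affiliationConfig above;
# "ALL" matches every country
AUTO_CREATE_HCO = [{"country": "ALL", "sources": ["MAPP", "CRMMI"]}]

def is_auto_create_hco(country, source, rules=AUTO_CREATE_HCO):
    """True when a rule matches the country (or uses the ALL wildcard)
    and lists the source."""
    return any(
        (r["country"] == "ALL" or r["country"] == country)
        and source in r["sources"]
        for r in rules
    )
```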
Only 1 element of MainHCO can exist.Then executes the SEARCH LOGIC (by uri/crosswalk/attributes) and gets AUTO rules result.The result is set the MainHCO.objectUri=Reltio found URI (object from the request is assigned and exists Reltio id)Then in the next methods, MainHCO contains the copy of all attributes from Reltio (the object is different than received from the client)For each affiliatedHCOs do:extractL2HCO [MainHCO] from affiliatedHCOs: (it means get MainHCO - Hospital - from affiliatedHCO)when > 1 -> Exception HCPMappingException(String.format("HCO has more than 1 affiliated HCO")when =1 – assign to new Entity object:attributes (copy MainHCO.attributes)crosswalk = MainHCO.refEntity.Crosswalkuri = MainHCO. refEntity.ObjectUrinow on returned Hospiatl do:[SEARCH LOGIC] COMPANY.mdm.model.client.ReltioMDMClient#findEntity[SEARCH LOGIC] shared across all China searches on HCP and HCO servicesFind by ObjectURIOrFind by CrosswalkOrFind by Match API (entities/_matches) where JSON body in MainHCO entity:Verify matches resultCheck only .*Auto.* rulesresultSize > 1 - return nullif there are more than 2 entities with different uris - return  return nullif 1 match – returns entityIf Search result == null -> EntityNotFoundxception – hospital not found If found result then: set the Hospital Reltio Uri in affiliatedHCO.MainHCO.refEntity.objectUri, and copy all attributes from Reltio to MainHCO(replace MainHCO + trim)Hospital is found and have the Reltio URIRUN affiliationCreator.mapAndCreateHCO – returns the mappedHCOs arrayThe main logic of this method is to create a Workplace with MainHCO in Reltio and assign the URI received from Reltio (China) or Create affilaitedHCO object (MAPP and CRMII)For each affiliatedHCOs doFirst Check - "HCO map dict is set, map and create standardized HCO"if (helper.getHCORDMMDict() ( means if CN then return LKUP_STD_DEPARTMENTS )logic:add do mappedHCOs (mapAndCreateStandardizedHCO)The result of this function is to set the 
AffilaitedHCO(Workplace).URI based on the Reltio search.We translate AffilaitedHCO.Name using RDM LKUP_STD_DEPARTMENTS  code and then make a search in Reltio.If found set URI from ReltioIf not found execute CreateHCO method and assign URI from Reltio based on created objects.IF affiliatedHCO.Name is null, exit.else Lookup Reltio – translate the affiliatedHCO.Name using the lookup function to Reltio with code= LKUP_STD_DEPARTMENTS and Source=HCP.crosswalkIf OK and the code exitsSet Department HCO name to response code (affiliateHCO.Name changed)IF DEPARTMENT NAME is not found in RDM break and exit. This may cause that the Workplace will be not found and you will receive the error - HCO Entity no foundFind L1 entity (affiliatedHCO) (logic same as [SEARCH LOGIC]) (here we search affiliatedHCO with MainHCO attribute)If found set affiliateHCO.uri = reltioFoundUriElse“Create Department (L2 HCO) automatically” for ChinaGet affiliatedHCO.MainHCO object and assing to MainHCOaffiliatedHCO.MainHCO- NULL/CLEARThis clear/null on affiliatedHCO.MainHCO is required because we are executing the CREATE_HCO operation with 2 objects. 1. affiliatedHCO 2. 
MainHCO (parentHCO in HCO operation)This will create an HCO object with MainHCO in ReltioaffiliatedHCO.MainHCO- SET crosswalk to EVR with Random UUIDExecute logic – [HCO Service = China logic / STEPS (check below)] (parameters 1= procEntity(affiliatedHCO), 2=MainHCO)check creation result:notFound -> NotFoundExceptionfailed -> RuntimeExceptionOK, -> set affiliateHCO.uri = reltioFoundUriSecond Check – “Create or update affiliated HCO”FOR CRMMI and MAPP for affiliatedHCOs create the HCO in Reltio and assign the Reltio URI to affilaitedHCOs URI automatically without search and DPT lookup.isAutoCreateHCO logic based on ParameterHelper param – currently PROD activated for CRMMI and MAPP for all countrieslogic:Execute logic – [HCO Service = China logic / STEPS (check below)] (parameters 1= procEntity(affiliatedHCO), 2=null) - send only Workplace without HospitalHere we are adding parentHCO to the HCO request. Parent HCO is affiliatedHCO object.check creation result:failed -> RuntimeOK -> set affiliateHCO.uri = reltioFoundUriThird Check – “HCO auto-creation is disabled”just return the affiliatedHCO without the Reltio URI assingRUN createHCOAffiliations (Create affiliation to L1 and L2 HCO) creating affiliation HCP to HCOExtends HCP object with MainWorplace(affilaitedHCO.MainHCO) and Workplace(affiliatedHCO) referenced AttributesFor each affiliatedHCOs doExtract MainHCO object (this will be MainWorkplace on HCP)If empty throw RuntimeExceptionIf existsRUN createAffilationAsRef - l2HCORefName = MainWorkplace ----------- Creating MainWorkplace relation from HCP to MainHCOLogic that creates MainWorkplace affiliation between HCP and MainHCO or Workplace affiliation between HCP and affiliatedHCO (used here and below)Below we add 2 more attributes to refEntity - Workplace.ValidationStatus and Workplace.ValidationChangeDateIf MainHCO.objectURi exits. 
OKELSE search - (here objectUri will be, this search is used in CREATE_HCO method)If still not found throw NotFoundExceptionElse assign the HCP RefEntity and RefRelation attributeson MainWorkplaceRefEntity – MainHCO.ObjectURIRefRelation – Crosswalk (sourceTable=MainWorkplace,type=HCP.crosswalk.type,value=HASH)Attributes - emptyThen check if the same relation on HCP already exists comparing the MainWorkplace attribute with generated crosswalkIf this is a new Relation add to HCP a new attribute that is MainWorkplaceRewriting validation status from main entity or set from HCO entity – preprare reference attributes on WorkplaceRefEntity attributes set from:ValidationStatus or hcp.ValidationStatusValidationChangeDate or hcp.ValidationChangeDateRUN createAffilationAsRef - l2HCORefName = WorkplaceSame logic as above but:----------- Creating Workplace relation from HCP to affiliatedHCOResult – HCP contains MainWorkplace and Workplace refRelation attributesAffiliatedHCO LOGIC throws in some places EntityNotFoundException - process this exception here:activate DCR LOGICCreate NEW_HCO("NewHCO") DCR with HCP entity and affiliatedHCOs Check if NEW_HCO is in activationCriteria for CN (GRV/FACE/CN3RDPARTY) Then check continueOnHCONotFoundActivationCriteria for China only GCP – this will create HCP (continue) without affiliation(common) Reference Relation Attributes Enricher for HCP Object (relations taken from Mongo Relation Cache)CREATE HCP Reltio method - Main HCP create an object in ReltioCheck response:(common) Register COMPANYGlobalCustomerIdactivate DCR LOGIC If NEW_HCO DCR – send DCR Request related to affiliatedHCOs and put this DCR to dcrRequestIf dcrRequest does not contains NEW_HCO DCRCreate NEW_HCP DCR Request with affiliatedHCO and send DCR RequestIf dcrRequest does not contains NEW_HCO DCRCreate NEW_WORKPLACE DCR Request and send DCR REQUEST(common) resolve status – set created/update/failed/e.t.c(common) 
ValidationException/EntityNotFoundException/HCPMappingException/ExceptionEND HCO Service = China logic / STEPS:China Quality Rules:The following files contain the China DQ rules in IQVIA - executed once HUB receives the JSON from the Client.DQ rules are self-documented, details can be found in the following files: HCO: hco-country-china-quality-rules.yaml(common) qualityServicePipelineProvider – execute DQ rules(common) dataProviderCrosswalkGuardrail – execute GuardRailsParentHCO ↔ AffiliatedHCO LOGIC (parentHCO attribute processing):RUN createAffilationAsRef - = MainHCO ----------- Creating MainHCO relation from HCO to parentHCOIf parentHCO.objectURi exits, ok. (the objectURi can be from HCP create methods but can be also emptu)ELSE -> [SEARCH LOGIC]COMPANY.mdm.model.client.ReltioMDMClient#findEntity (described in HCP section)If still not found throw NotFoundException -> Parent HCO not foundElse if found in ReltioAdjust HCO object and put MainHCO ref attribute: RefEntity – parentHCO.ObjectURIRefRelation – Crosswalk (sourceTable=MainHCO,type=HCP.crosswalk.type,value=HASH)Attributes - emptyThen check if the same relation on HCP already exists comparing the MainHCO attribute with generated crosswalkIf this is a new Relation add to HCO a new attribute that is MainHCO(common) Reference Attributes Enricher for HCP ObjectCREATE HCO Reltio method - HCO create an object in ReltioCheck response:(common) Register COMPANYGlobalCustomerId(common) resolve status – set created/update/failed/e.t.c(common) ValidationException/EntityNotFoundException/HCPMappingException/ExceptionENDTriggersTrigger actionComponentActionDefault timeoperation linkREST callManager: POST/PATCH /hco /hcp /mcocreate specific objects in MDM systemAPI synchronous requests - realtimeCreate/Update HCP/HCO/MCOREST callManager: GET /lookupget lookup Code from ReltioAPI synchronous requests - realtimeLOV readREST callManager: GET /entity?filter=(criteria)search the specific objects in the MDM systemAPI 
synchronous requests - realtimeSearch EntityREST callManager: GET /entityget Object from ReltioAPI synchronous requests - realtimeGet EntityKafka Request DCRManager: Push Kafka DCR eventpush Kafka DCR EventKafka asynchronous event - realtimeDCR IQVIA flowDependent componentsComponentUsageManagersearch entities in MDM systemsAPI Gatewayproxy REST and secure accessReltioReltio MDM systemDCR ServiceOld legacy DCR processor" }, { "title": "Create HCP/HCO complex V2 methods - COMPANY model", "pageID": "284800566", "pageLink": "/pages/viewpage.action?pageId=284800566", "content": "DescriptionThis API is used to process complex HCP/HCO requests. It supports the management of MDM entities with the relationships between them. The user can provide data in the IQVIA or COMPANY model.Flow diagramFlow diagram HCP (overview)(details on main diagram)Steps HCP Map HCP to COMPANY modelExtract parent HCO - MainHCO attribute of affiliated HCO entityExecute search service for affiliated HCO and parent HCOIf affiliated HCO or parent HCO not found in MDM system: execute trigger serviceOtherwise set entity URI for found objectsExecute HCO complex service for HCO request - affiliated HCO and parent HCO entitiesMap HCO response to contact affiliations HCP attributecreate relation between HCP and affiliated HCOcreate relation between HCP and parent HCOExecute HCP simple serviceHCP API search entity serviceSearch entity service is used to search for existing entities in the MDM system. This feature is configured per user via the searchConfigHcpApi attribute.
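A searcher chain of the kind searchConfigHcpApi selects from can be sketched as below, following the URI → crosswalk → match-API order used by the legacy findEntity logic described elsewhere in this space. The callable parameters stand in for the Reltio calls made through the manager and are assumptions; only match candidates produced by automatic (Auto) rules are accepted, and ambiguous results are rejected.

```python
def find_entity(entity, by_uri, by_crosswalk, by_match):
    """Return a found entity URI, or None.

    by_uri / by_crosswalk return a URI or None; by_match returns
    (uri, rule_name) candidate pairs from the match API.
    """
    if entity.get("uri"):
        found = by_uri(entity["uri"])
        if found:
            return found
    if entity.get("crosswalk"):
        found = by_crosswalk(entity["crosswalk"])
        if found:
            return found
    # Match API: keep only automatic rules; more than one distinct URI is ambiguous
    auto = [(uri, rule) for uri, rule in by_match(entity) if "Auto" in rule]
    uris = {uri for uri, _ in auto}
    return auto[0][0] if len(uris) == 1 else None
```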
This configuration is divided between the HCO and affiliated HCO entities and contains a list of searcher implementations - searcher types.attributedescriptionHCOsearch configuration for affiliated HCO entityMAIN_HCO search configuration for parent HCO entitysearcherTypetype of searcher implementationattributesattributes used for attribute search implementationHCP trigger serviceTrigger service is used to execute an action when entities are missing in the MDM system. This feature is configured per user via the triggerType attribute.trigger typedescriptionCREATEcreate missing HCO or parent HCO via HCO complex APIDCRcreate DCR request for missing objectsIGNOREignore missing objects, flow will continue, missing objects and relations will not be createdREJECTreject request, stop processing and return response to clientFlow diagram HCO (overview)(details on main diagram)Steps HCOMap HCO request to COMPANY modelIf hco.uri attribute is null then create HCO entityCreate relationif parentHCO.uri is not null then use it to create other affiliationsif parentHCO.uri is null then use search service to find entityif entity is found then use it to create other affiliationsif entity is not found then create parentHCO and use it to create other affiliationsif Relation exists then do nothingif Relation doesn't exist then create relationTriggersTrigger actionComponentActionDefault timeREST callmanager POST/PATCH v2/hcp/complexcreate HCP, HCO objects and relationsAPI synchronous requests - realtimeREST callmanager POST/PATCH v2/hco/complexcreate HCO objects and relationsAPI synchronous requests - realtimeDependent componentsComponentUsageEntity search servicesearch entity HCP API operationTrigger serviceget trigger result operationEntity management serviceget entity connections" }, { "title": "Create HCP/HCO simple V2 methods - COMPANY model", "pageID": "284806830", "pageLink": "/pages/viewpage.action?pageId=284806830", "content": "DescriptionV2 API simple methods are used to manage the Reltio 
entities - HCP/HCO/MCO.They support basic HCP/HCO/MCO requests with the COMPANY model.Flow diagramSteps Crosswalk generator - auto-create crosswalk - if not exists Entity validationAuthorize request - check if user has appropriate permission, country, sourceGetEntityByCrosswalk operation - check if entity exists in Reltio, applicable for PATCH operationQuality service - checks entity attributes against validation pipelineDataProviderCrosswalkCheck - check if entity contributor provider exists in ReltioExecute HTTP request - post entities Reltio operationExecute GetOrRegister COMPANYGlobalCustomerID operation Crosswalk generator serviceCrosswalk generator service is used for creating a crosswalk when the entity crosswalk is missing. This feature is configured for the user via the crosswalkGeneratorConfig attribute.attributedescriptioncrosswalkGeneratorTypecrosswalk generator implementation typetypecrosswalk type valuesourceTablecrosswalk source table valueTriggersTrigger actionComponentActionDefault timeREST callManager: POST/PATCH /v2/hcpcreate HCP objects in MDM systemAPI synchronous requests - realtimeREST callManager: POST/PATCH /v2/hcocreate HCO objects in MDM systemAPI synchronous requests - realtimeREST callManager: POST/PATCH /v2/mcocreate MCO objects in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageCOMPANY Global Customer ID RegistrygetOrRegister operationCrosswalk generator servicegenerate crosswalk operation" }, { "title": "DCR IQVIA flow", "pageID": "284800568", "pageLink": "/display/GMDM/DCR+IQVIA+flow", "content": "DescriptionThe following page contains a detailed description of the IQVIA DCR flow for China clients. 
The logic is complicated and contains multiple relations.Currently, it contains the following:Complex business rules for generating DCRs,Limited flexibility with IQVIA tenants,Complex end-to-end technical processes (e.g., hand-offs, transfers, etc.)The flow is related to numerous file transfers & hand-offs.The idea is to make a simplified flow in the COMPANY model - details described here - DCR COMPANY flowThe below diagrams and description contain the current state that will be deprecated in the future.Flow diagram - Overview - high levelFlow diagram - Overview - simplified viewStepsHUB LOGICHUB Configuration overview:DCR CONFIG AND CLASSES:Logic is in the MDM-MANAGERNewHCODCRService - related to NEW_HCO, NEW_HCO_L1, NEW_HCO_L2NewHCPDCRService - related to NEW_HCPNewWorkplaceDCRService - related to NEW_WORKPLACE Config:\ndcrConfig:  \n dcrProcessing: yes\n  routeEnableOnStartup: yes\n  deadLetterEndpoint: "file:///opt/app/log/rejected/"\n  externalLogActive: yes\n  activationCriteria:\n    NEW_HCO:\n      - country: "CN"\n        sources:\n          - "CN3RDPARTY"\n          - "FACE"\n          - "GRV"\n    NEW_HCP:\n      - country: "CN"\n        sources:\n          - "GRV"\n    NEW_WORKPLACE:\n      - country: "CN"\n        sources:\n          - "GRV"\n          - "MDE"\n          - "FACE"\n          - "CN3RDPARTY"\n          - "EVR"\n\n  continueOnHCONotFoundActivationCriteria:\n    - country: "CN"\n      sources:\n        - "GCP"\n    - countries:\n        - AD\n        - BL\n        - BR\n        - DE\n        - ES\n        - FR\n        - FR\n        - GF\n        - GP\n        - IT\n        - MC\n        - MF\n        - MQ\n        - MU\n        - MX\n        - NC\n        - NL\n        - PF\n        - PM\n        - RE\n        - RU\n        - TR\n        - WF\n        - YT\n      sources:\n        - GRV\n        - GCP\n  validationStatusesMap:\n    VALID: validated\n    NOT_VALID: notvalidated\n    PENDING: pending\nFlow diagram - DCR 
ActivationStepsIQVIA/China ACTIVATION LOGIC/ACTIVATION CRITERIA:COMPANY.mdm.manager.service.dcr.NewHCPDCRService#isActive :(common) on IQVIA the first check is on the source and country(common) NEW_HCP is activated for CN for GRV source (TRUE – ACTIVATE)(common) NEW_HCO is activated for CN for CN3RDPARTY, FACE, GRV source (TRUE – ACTIVATE)(common) NEW_WORKPLACE is activated for CN for GRV, MDE, CN3RDPARTY, FACE, EVR source (TRUE – ACTIVATE)The first 3 isActive checks are related to common checks, here we are checking the country and source of the HCP and then we can verify more details.(REVALIDATION LOGIC) Then we check if the flag on DCR is revalidated=trueIf trueGet From Reltio the current ChangeRequest state by entityUri (Reltio Change requests connected to the entity)Remove all AWAITING_REVIEW with type NEW_HCPCheck HCP validation statusesConfigured statuses: "pending", "partial-validated", "partialValidated"From Entity get ValidationStatus attributeCompare valuesIf match foundGet EVR crosswalksPatch entity using EVR crosswalk set ValidationStatus to pending(NEW HCP isActive LOGIC) activation logic check (detailed):NEW_HCP detailed ACTIVATORCheck if ValidationStatus is pendingIf False: ValidationStatus is NOT pending:Check current ValidationStatus valueIf OV ValidationStatus is "notvalidated" or "partialValidated" do further checks:Get GRV LUD CrosswalkGet (EVR)DCR LUD Crosswalk(Check) if EVR changes are fresher than the GRV changes on ValidationStatus return FALSEGet GRV ValidationStatus current valueIf pending or partialValidated go to "If true, next"else return FALSEotherwise reject return FALSEIf true, next(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSE(Check)Get Change Requests from Reltio with AWAITING_REVIEW if found return FALSE(Check)Get Entity State from Reltio, if null return FALSE(Check) Get For China the HCP.Workplace and check if exists, if null return FALSEFinally if above checks were not fulfilled return (TRUE – 
ACTIVATE)(NEW HCO isActive LOGIC) activation logic check cd:NEW_HCO detailed ACTIVATORGet ValidationStatus value from source HCP entityCheck if ValidationStatus is equal to "enabled","validated","pending","WBR.STA.3", "partial-validated", "partialValidated"If true return FALSE – DCR is not activated for these statusesNext go to next Check(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSEGET MainHCO.Name attributeGet Workplace.Name attributeNow once we have Workplace and Hospital Name we need to:Get ChangeRequest details from Reltio related to this specific HCPCheck if any info in ChangeRequest containsHospital nameOr Department nameIf true it means that there are already some DCRs created in Reltio for this HCP in relation to this Department/WorkplaceReturn REQUST_ALREADY_EXISTS and return FALSE (not activated)Finally, if above checks were not fulfilled return (TRUE – ACTIVATE)(NEW WORKPLACE isActive LOGIC) activation logic check cd:NEW_WORKPLACE  detailed ACTIVATORGet ValidationStatus value from source HCP entityCheck if ValidationStatus is equal to "enabled","validated", "WBR.STA.3"If true return FALSE – DCR is not activated for these statusesNext go to next Check(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSE(Check) Verify HCP.Workplaces – if null - return FALSE (not activate)Next check HCP.Workplaces, check all elements andRemove duplicated refEntity.objectUrisRemove Workplaces with "enabled","validated","pending" ValidationStatusesCheck the output list – if there are 0 Workplaces or workplaces.size() <2 then return FALSE, there are less than 2 workplaces so rejectNow filter Workplaces and find TrustedWorkplaces, check all elements andIf there are any workplaces related to (EMPTY) crosswalk name then filter them out, currently make DCR for all because the condition is not metCheck ChangeRequests connected with the current HCPGet ChangeRequest details from Reltio related to this specific HCPCheck if 
any info in ChangeRequest contains DCR created for the current Workplace for which we are trying to create DCRIf true it means that there are already some DCRs created in Reltio for this HCP in relation to this WorkplaceReturn REQUST_ALREADY_EXISTS and return FALSE (not activated)Finally, if the above checks were not fulfilled return (TRUE – ACTIVATE)Kafka DCR sender - produce event to Kafka TopicCOMPANY.mdm.manager.service.dcr.AbstractDCRService#sendDCRRequest KAFKA EVENT DCR SENDSend a request from HCP Management Service:DCRRequest class published to Kafka DCR topic prod-gw-dcr-requestsFlow diagram - DCR event Receiver (DCR processor)StepsReceiver (DCR processor) (Camel) - COMPANY.mdm.manager.route.DCRServiceRoute LOGIC:DCRServiceRouteReceive dcr request: ${body} – log input DCR bodyCheck Delay time and postpone the DCR to next runtimeDelay = Current Time – DCR Create Time (in HCP Service new object initialization time)if timeDelay < 240 minDelay based on kafka session or delayTime (depending on which value is lower)Thread SleepNote: current sessionTimeout on PROD is 30 secondsElse Proceed with the DCRExecute com.COMPANY.mdm.manager.service.dcr.AbstractDCRService#processDCRRequest LOGIC:(common) Get From Reltio current Entity State(common) Check Activation (only abstract, by source and country) criteria, if active true:(common) Start processing DCR request(common) Create Change Request in Reltio (empty container)(common) Add External InfoHCPWithHCOExternalInfo objectSet NEW_HCP/HCO/WORKPLACE typeSet Reltio HCP URISet Source entity crosswalkProcess DCR Custom Logic (NEW_HCP/NEW_HCO/NEW_WORKPLACE),Description belowUpdate in Reltio the Change Request with created External InfoInitialize PfDataChangeRequest objectPfDataChangeRequest object is used by IQVIA and this is exported in an Excel file to ChinaStatus = CreatedCrosswalk EVRIn case of error delete Reltio ChangeRequest (container) and throw ExceptionIf ok set the status to ACCEPTEDOtherwise REJECTEDNewHCPDCRService - STEPS 
- Process DCR Custom Logic (NEW_HCP)NEW_HCP custom logicCreate a new HCP type Entity (java object) EVR/DCRSet ValidationStatus to validatedSet Crosswalk = EVR – get existing or create newPATCH Entity HCP Object to Reltio using change request id (update existing container only)In ExternalInfo set affiliatedHCOs objectNewHCODCRService - STEPS - Process DCR Custom Logic (NEW_HCO, NEW_HCO_L1,NEW_HCO_L2)NEW_HCO custom logicCreate a new HCP type Entity (java object)Set crosswalks from the HCP entitySet ExternalInfo department and hospital names Get department name from DCR Request from HCP WorkplaceGet hospital name from DCR Request from HCP Workplace.MainHCOExecute COMPANY.mdm.manager.service.dcr.NewHCODCRService#processAffiliations (method return status: 1 – NEW_HCO_L1(Workplace) or 2 – NEW_HCO_L2(MainHCO)), logic:Get affiliatedHCOs, for each element doFind L2HCO entity:Get MainHCO element from affiliatedHCO objectIf it is null, return nullIf not nullFind object in Reltio using GetEntity operationIf not foundSet EVR crosswalk on MainHCOPOST Entity HCO(MainHCO) Object to Reltio using change request id (update existing container only)And return object/entityURIIf found return object/entityURIFind L1HCO entity:Check if L2HCO is not null, then replace MainHCO attributes using the one found from Reltio and set refEntity uriFind Entity using standard search API (by uri/crosswalk/match)If not foundSet EVR crosswalkRemove MainHCO(L2) from L1 objectset up affiliation l1HCO - l2HCO (using reference attributes: add to Workplace the MainHCO reference attribute to create a relation between these 2 objects)POST Entity HCO(Workplace with MainHCO) Object to Reltio using change request id (update existing container only)And return object/entityURIIf found return object/entityURISet ExternalInfo enrich with:affiliatedHCO that contains L1+L2 objectsSet status:2 - If L2HCO URI is null1 – if L1HCO URI is nullclear MainHCO to avoid Reltio errorif L2HCO existsadd MainWorkplace reference attribute to HCP 
with reference to L2 object (MainHCO)add Workplace reference attribute to HCP with reference to L1 object (affiliatedHCO)PATCH Entity HCP Object to Reltio using change request id (update existing container only)Return 1 or 2 If status = 1 – set NEW_HCO_L1 dcr type in externalInfoIf status = 2 – set NEW_HCO_L2 dcr type in externalInfoOtherwise, DCR is not valid, all affiliations found, create affiliation without DCRCreate an HCP entity in ReltioDelete ChangeRequestNull DCR Request (DCR is not valid in that case)NewWorkplaceDCRService - STEPS - Process DCR Custom Logic (NEW_WORKPLACE)Get HCP entity from DCR objectGet Workplace attributesRemove duplicated Workplace entityUris objectsFind HCO workplaces in Reltio using GET operation and save EntityURIsExecute the COMPANY.mdm.manager.service.dcr.NewWorkplaceDCRService#updateAffiliationsLogic (response = false)The method input is HCPDCR IDList of AffiliatedHCOs(Workplaces) found in Reltio by GET operationThe result is HCP+HCO created in the Change requestFlowGet the Change request parameterGet HCP source Entity from ReltioRemove changes from Change RequestCreate HCP Object new Java empty elementSet crosswalk to EVRCreate acceptedWorkplaces(SET) and add all Workplaces found in ReltioGet Workplaces from HCP object from ReltioSet workplacesURIs toIf response=true – get from ExternalInfo from affiliatedHCOs URIsIf response=false – get from Workplaces from HCP object from ReltioFor each WorkplaceURI do:Get Entity HCO from Reltio ObjectPATCH Entity HCP Object to Reltio using change request id (update existing container only) – the input request is HCP object + affiliatedHCOs object found from ReltioOverride the ExternalInfo affiliatedHCOsUris with new ids created in ReltioIn the ExternalInfo set the affiliatedHCOs array to EntityURIs found in ReltioFlow diagram - DCR Response - process DCR Response from API clientStepsIQVIA/China DCRResponseRoute:DCR response processing:REST apiActivated by china_apps user based on the IQVIA 
EVRs export Used by China Client to accept/reject (Action) DCR in ReltioDCRResponse (Camel) route, possible operations:POST (dcr_id,action)Dcr_id – Reltio Change Request IdAction – accept/updateHCP/updateHCO/updateAffiliations/reject/merge/mergeUpdateAuthentication service, check user and roleCheck headersDcr_id is mandatorymergeUris structure is winner,loser with 2 idsCheck if DCR in Reltio exists, otherwise throw NotFoundException and update the PfDataChangeRequest object in Reltio to closedLogic:If ChangeRequest in Reltio is other than AWAITING_REVIEW throw BadRequestException with details that DCR is already closed (because it means it is now ACCEPTED or REJECTED)Elseupdate the PfDataChangeRequest object in Reltio to completedCheck Action and do (FOR NEW_HCP):Accept: NEW_HCP acceptDCRCompose Entity and setValidationStatus = partialValidated (if partial flag in POST method)ValidationStatus = validated (if not partial)Set ValidationChangeDate to current dateGet ChangeRequest From Reltio with ExternalInfoGet HCP id from ExternalInfoGet current Entity state from ReltioPrepare Country from current EntityGet Workplace data from Reltio entity and enrich the Workplaces HCO objects from Reltio using GET operation – retrieve dataupdateHCP method inputHCP with ValidationStatus/ValidationChangeDate/CountryAffiliatedHCOs from Reltio (Workplaces that were retrieved from ChangeRequest info)Execute NewHCPDCRService#updateHCP LOGIC:Common updateHCP object method that updates HCP in Reltio and closes the DCRUsed in NEW_HCP.acceptDCR/rejectDCR/updateHCO method andGet ChangeRequest From Reltio with ExternalInfoGet HCP id from ExternalInfoGet the current Entity state from ReltioPrepare Country from current EntitySet EVR crosswalkSet ValidationStatus (validated) and ValidationChangeDate (current date) if missing / If not get from requestIf input AffiliatedHCOs exists (only when Workplaces are in request)mapAndCreateHCO (create HCOs in Reltio)execute modifyAffiliationStatusThis method 
checks if in Reltio all Workplaces were created and compares it to the list of Workplaces in the ChangeRequest input objectset validated or notvalidated statuses on Workplace depending on what was found in ReltioThe result of these 2 methods are Workplaces created in Reltio with the ValidationStatus parameterCreate HCP with affiliated Workplaces (optionally) in Reltio – execute complex updateHCP method -> now data is created in ReltioRemove changes from ChangeRequests from Reltio – because changes were applied manually and the ChangeRequest was only a container for changes, we need to clear this to not apply it one more time.Apply ChangeRequest in Reltio – CLOSEDCheck the merge entities parameter and merge entities.Reject: NEW_HCP rejectDCRCompose Entity and setValidationStatus = notvalidatedSet ValidationChangeDate to the current dateupdateHCP method inputHCP with ValidationStatus/ValidationChangeDate/CountryExecute NewHCPDCRService#updateHCPUpdateAffiliation: NEW_HCP updateAffiliations logic:(input Entity object from Client)N/A for NEW_HCPUpdateHCO: NEW_HCP updateHCO logic:(input Entity object from Client)N/A for NEW_HCPUpdateHCP: NEW_HCP updateHCP:What is the difference between acceptDCR and updateHCP?In accept we can set ValidationStatus to validated or partialValidated and we get all Workplaces from ReltioIn updateHCP we receive the HCPObject from the client together with the DCR Id. 
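The accept/reject status handling described above boils down to a small decision table. A hedged Python sketch follows (the function name and shape are assumptions; the action names and ValidationStatus values are quoted from this page):

```python
# Which ValidationStatus the DCR response handler would set for a NEW_HCP
# request: accept -> validated (or partialValidated when the partial flag is
# set), reject -> notvalidated. The update*/merge actions take the status from
# the client payload instead, modelled here as None.
def resolve_validation_status(action, partial=False):
    if action == "accept":
        return "partialValidated" if partial else "validated"
    if action == "reject":
        return "notvalidated"
    if action in ("updateHCP", "updateHCO", "updateAffiliations", "merge", "mergeUpdate"):
        return None  # status driven by the client-supplied entity object
    raise ValueError(f"unknown DCR action: {action}")
```

In every accept/reject/update path the ChangeRequest is ultimately closed; only the source of the applied changes differs.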
We can apply changes generated by the Client, not related to the ChangeRequest object that is currently in ReltioAt the end in both cases we close and accept the ChangeRequest(input Entity object from Client)Execute NewHCPDCRService#updateHCP method (described above)Check Action and do (FOR NEW_HCO):Accept: NEW_HCO acceptDCRN/A – only user can use this by HCP (updateHCP operation)Reject: NEW_HCO rejectDCRExecute _reject DCR – Change Request is REJECTED in ReltioUpdateAffiliation: NEW_HCO updateAffiliations logic:N/A for this requestUpdateHCO: NEW_HCO updateHCO logic:Get ChangeRequest From Reltio with ExternalInfoGet HCP id from ExternalInfoGet current Entity state from ReltioPrepare Country from current EntityGet List of Entities from Client request and execute the:COMPANY.mdm.manager.service.dcr.NewWorkplaceDCRService#updateAffiliationsLogic: (response = true)logic described aboveTrue logic activates the following:Create HCO 1 outside of DCR – object created in ReltioCreate HCO 2 outside of DCR - object created in ReltioThen affiliations are made and an object created in Reltio (HCP with DCR id in Reltio with affiliations to already created objects in Reltio (HCO1 and HCO2) but the HCP still in DCR)UpdateHCP: NEW_HCO updateHCP logic:N/A for HCO dcrCheck Action and do (FOR NEW_WORKPLACE):Accept: NEW_WORKPLACE acceptDCRGet ChangeRequest From Reltio with ExternalInfoGet HCP id from ExternalInfoGet current Entity state from ReltioPrepare Country from current EntityGet List of Workplaces from the Change Request HCP entityCOMPANY.mdm.manager.service.dcr.NewWorkplaceDCRService#updateAffiliationsLogic: (response = true)logic described aboveTrue logic activates the following:Create HCO 1 outside of DCR – object created in ReltioCreate HCO 2 outside of DCR - object created in ReltioThen affiliations are made and an object created in Reltio (HCP with DCR id in Reltio with affiliations to already created objects in Reltio (HCO1 and HCO2) but the HCP still in DCR)Apply 
ChangeRequest in Reltio - ACCEPTEDReject: NEW_WORKPLACE rejectDCRApply Reltio Change Request with creation of only the HCP object in ReltioUpdateAffiliation: NEW_WORKPLACE updateAffiliations logic:Same as acceptDCR but the Workplaces list is received from the Client requestUpdateHCO: NEW_WORKPLACE updateHCO logicN/AUpdateHCP: NEW_WORKPLACE updateHCP logic:N/Aupdate the PfDataChangeRequest object in Reltio to closedTriggersTrigger actionComponentActionDefault timeOperation linkDetailsREST callManager: POST/PATCH /hcpcreate specific objects in MDM systemAPI synchronous requests - realtimeCreate/Update HCP/HCO/MCOInitializes the DCR requestKafka Request DCRManager: Push Kafka DCR eventpush Kafka DCR EventKafka asynchronous event - realtimeDCR IQVIA flowPush DCR event to DCR processorKafka Request DCRDCRServiceRoute: Poll Kafka DCR eventConsumes Kafka DCR eventsKafka asynchronous event - realtimeDCR IQVIA flowPolls/Consumes DCR events and processes themRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/acceptupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to accept DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/updateHCPupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to update HCP through DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/updateHCOupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to update HCO through DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/updateAffiliationsupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to update HCO to HCO affiliations through DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/rejectupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to reject DCRRest call - DCR responseManager:DCRResponseRoute POST 
/dcrResponse/{id}/mergeupdates DCR by API (accept/reject e.tc.)API synchronous requests - realtimeDCR IQVIA flowAPI to merge DCR HCP entitiesDependent componentsComponentUsageManagersearch entities in MDM systemsAPI Gatewayproxy REST and secure accessReltioReltio MDM systemManagerOld legacy DCR processor" + }, + { + "title": "DCR COMPANY flow", + "pageID": "284800570", + "pageLink": "/display/GMDM/DCR+COMPANY+flow", + "content": "DescriptionTBD Flow diagram (drafts)StepsTBDTriggersTrigger actionComponentActionDefault timeDependent componentsComponentUsage" + }, + { + "title": "Model Mapping (IQVIA<->COMPANY)", + "pageID": "284800575", + "pageLink": "/pages/viewpage.action?pageId=284800575", + "content": "DescriptionThe interface is used to map MDM Entities between IQIVIA and COMPANY model.Flow diagram-MappingAddress ↔ Addresses attribute mappingIQIVIA MODEL ATTRIBUTE [Address]COMPANY MODEL ATTRIBUTE [Addresses]AddressPremiseAddressesPremiseAddressBuildingAddressesBuildingAddressVerificationStatusAddressesVerificationStatusAddressStateProvinceAddressesStateProvinceAddressCountryAddressesCountryAddressAddressLine1AddressesAddressLine1AddressAddressLine2AddressesAddressLine2AddressAVCAddressesAVCAddressCityAddressesCityAddressNeighborhoodAddressesNeighborhoodAddressStreetAddressesStreetAddressGeolocationLatitudeAddressesLatitudeAddressGeolocationLongitudeAddressesLongitudeAddressGeolocationGeoAccuracyAddressesGeoAccuracyAddressZipZip4AddressesZip4AddressZipZip5AddressesZip5AddressZipPostalCodeAddressesPOBoxPhone attribute mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEPhoneLineTypePhoneLineTypePhoneLocalNumberPhoneLocalNumberPhoneNumberPhoneNumberPhoneFormatMaskPhoneFormatMaskPhoneGeoCountryPhoneGeoCountryPhoneDigitCountPhoneDigitCountPhoneCountryCodePhoneCountryCodePhoneGeoAreaPhoneGeoAreaPhoneFormattedNumberPhoneFormattedNumberPhoneAreaCodePhoneAreaCodePhoneValidationStatusPhoneValidationStatusPhoneTypeIMSPhoneTypePhoneActivePhonePrivacyOptOutEmail attribute 
mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEEmailEmailEmailDomainEmailDomainEmailDomainTypeEmailDomainTypeEmailValidationStatusEmailValidationStatusEmailTypeIMSEmailTypeEmailActiveEmailPrivacyOptOutEmailUsernameEmailSourceSourceNameHCO mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTECountryCountryNameNameTypeCodeTypeCodeSubTypeCodeSubTypeCodeCMSCoveredForTeachingCMSCoveredForTeachingCommentersCommentersCommHospCommHospDescriptionDescriptionFiscalFiscalGPOMembershipGPOMembershipHealthSystemNameHealthSystemNameNumInPatientsNumInPatientsResidentProgramResidentProgramTotalLicenseBedsTotalLicenseBedsTotalSurgeriesTotalSurgeriesVADODVADODAcademicAcademicKeyFinancialFiguresOverviewSalesRevenueUnitOfSizeKeyFinancialFiguresOverviewSalesRevenueUnitOfSizeClassofTradeNSpecialtyClassofTradeNSpecialtyClassofTradeNClassificationClassofTradeNClassificationIdentifiersIDIdentifiersIDIdentifiersTypeIdentifiersTypeSourceNameOriginalSourceNameNumOutPatientsOutPatientsNumbersStatusValidationStatusUpdateDateSourceUpdateDateWebsiteURLWebsiteWebsiteURLOtherNames-OtherNamesName-Type (constant: OTHER_NAMES)OfficialName-OtherNamesName-Type (constant: OFFICIAL_NAME)Address*Addresses*Phone*Phone*HCP mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEDESCRIPTIONCountryCountryDoBDoBFirstNameFirstNamecase: (IQVIA -> COMPANY), if IQIVIA(FirstName) is empty then IQIVIA(Name) is used as COMPANY(FirstName) mapping resultLastNameLastNamecase: (IQVIA -> COMPANY), if IQIVIA(LastName) is empty then IQIVIA(Name) is used as COMPANY(LastName) mapping 
resultNameNameNickNameNickNameGenderGenderPrefferedLanguagePrefferedLanguagePrefixPrefixSubTypeCodeSubTypeCodeTitleTitleTypeCodeTypeCodePresentEmploymentPresentEmploymentCertificatesCertificatesLicenseLicenseIdentifiersIDIdentifiersIDIdentifiersTypeIdentifiersTypeUpdateDateSourceUpdateDateSourceNameSourceValidationSourceNameValidationChangeDateSourceValidationChangeDateValidationStatusSourceValidationStatusSpeakerSpeakerLevelSpeakerLevelSpeakerSpeakerTypeSpeakerTypeSpeakerSpeakerStatusSpeakerStatusSpeakerIsSpeakerIsSpeakerDPPresenceChannelCodeDigitalPresenceChannelCodeMETHOD PARAMContactAffiliationscase: (IQVIA -> COMPANY), param workplaces is converted to HCO and added to ContactAffiliationsMETHOD PARAMContactAffiliationscase: (IQVIA -> COMPANY), param main workplaces are converted to HCO and added to ContactAffiliationsWorkplaceMETHOD PARAMcase: (COMPANY → IQIVIA), param workplaces is converted to HCO and assigned to WorkplaceMainWorkplaceMETHOD PARAMcase: (COMPANY → IQIVIA),  param main workplaces are converted to HCO and assigned to MainWorkplaceAddress*Addresses*Phone*Phone*Email*Email*TriggersTrigger actionComponentActionDefault timeMethod invocationHCPModelConverter.classtoCOMPANYModel(EntityKt  iqiviaModel, List workplaces, List mainWorkplaces, List addresses)realtimeMethod invocationHCPModelConverter.classtoCOMPANYModel(EntityKt  iqiviaModel, List workplaces, List mainWorkplaces)realtimeMethod invocationHCPModelConverter.classtoIqiviaModel(EntityKt  COMPANYModel, List workplaces, List mainWorkplaces)realtimeMethod invocationHCOModelConverter.classtoCOMPANYModel(EntityKt iqiviaModel)realtimeMethod invocationHCOModelConverter.classtoIqiviaModel(EntityKt  COMPANYModel)realtimeDependent componentsComponentUsagedata-modelMapper uses models to convert between them" + }, + { + "title": "User Profile (China user)", + "pageID": "284800562", + "pageLink": "/pages/viewpage.action?pageId=284800562", + "content": "DescriptionUser profile got new attributes used in V2 
API.AttributeDescriptionsearchConfigHcpApiconfig search entity service for HCP API - contains HCO/MAIN_HCO search entity type configurationsearchConfigHcoApiconfig search entity service for HCO APIsearcherTypetype of searcher implementationavailable values: [UriEntitySearch/CrosswalkEntitySearch/AttributesEntitySearch]attributesattribute names used in AttributesEntitySearchtriggerTypeV2 HCP/HCO complex API trigger configuration - action executed when there are missing entities in requestavailable values: [REJECT/IGNORE/DCR/CREATE]crosswalkGeneratorConfigauto-create entity crosswalk - if missing in requestcrosswalkGeneratorTypetype of crosswalk generator, available values: [UUID]typeauto-generated crosswalk type valuesourceTableauto-generated crosswalk source table valuesourceModelsource model of entity provided by user for V2 HCP/HCO complex,available values: [COMPANY,IQIVIA] Flow diagramTBDStepsTBDTriggersTrigger actionComponentActionDefault timeDependent componentsComponentUsage" }, { "title": "User", "pageID": "284811104", "pageLink": "/display/GMDM/User", "content": "The user is configured with a profile that is shared between all MDM services. Configuration is provided via yaml files and loaded at boot time. To use the profile in any application, import the com.COMPANY.mdm.user.UserConfiguration configuration from the mdm-user module. 
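As an illustration of the yaml-based profile just described, a minimal and entirely hypothetical profile entry could look like the fragment below; the attribute names follow the User profile configuration table, while the top-level users key and every value are placeholders, not the real configuration:

```yaml
# Hypothetical user profile entry - all values are placeholders
users:
  - name: example-client
    description: Example API client
    token: "<secret-token>"
    roles: [HCP_CREATE, HCP_UPDATE]
    countries: [CN]
    sources: [GRV]
    defaultClient: reltio
    defaultCountry: CN
    trim: true
    guardrailsEnabled: false
```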
This operation will allow you to use the UserService class, which is used to retrieve users.User profile configurationattributedescriptionnameuser namedescriptionuser descriptiontokentoken used for authenticationgetEntityUsesMongoCacheretrieve entity from mongo cache in get entity operationlookupsUseMongoCacheretrieve lookups from mongo cache in LookupServicetrimtrimming entities/relationships in response to the clientguardrailsEnabledcheck if contributor provider crosswalk exists with data provider crosswalkrolesuser permissionscountriesuser allowed countriessourcesuser allowed crosswalksdefaultClientdefault mdm client namevalidationRulesForValidateEntityServicevalidation rules configurationbatchesuser allowed batches configurationdefaultCountryuser default country, used in api-router, when country is not provided in requestoverrideZonesuser country-zone configuration that overwrites default api-router behaviorkafkauser kafka configuration, used in kafka management servicereconciliationTargetsreconciliation targets, used in event resend service" }, { "title": "Country Cluster", "pageID": "234715057", "pageLink": "/display/GMDM/Country+Cluster", "content": "General assumptionsMDM HUB will be populating country cluster information.Initially, only the default cluster country will be sent. In the future, other clusters can be calculated and distributed to downstream clients.In the current phase, the default clustering model is based on OneKey country clustering.Changes are backward compatible for downstream systems if they are not interested in consuming the cluster information.defaultCountryCluster is an optional attribute. In case of lack of mapping, it will not be included in JSON.Example of mapping: CountrycountryClusterAndorra (AD)France (FR)Monaco (MC)France (FR)Changes in MDM HUB1. 
Enrichment of Kafka events with extra parameter defaultCountryClusterIt will be calculated based on a new config table that maps countries to cluster countriesconfiguration table must be implemented on MDM Publisher sideIt can be used in routing rules in filtering events based on defaultCountryCluster2. Add a new column COUNTRY_CLUSTER representing the default country cluster in views:ENTITIES, HCO, HCP, ENTITY_UPDATE_DATES, MDM_ENTITY_CROSSWALKSAdd country cluster config table 3. Handling cluster country sent by PforceRx in DCR process in a transparent wayIf a new entity then the country will be set based on the address country.If an entity exists then the country will be set based on the existing country in ReltioChange in the event model{  "eventType": "HCP_CHANGED",  "eventTime": 1514976138977,  "countryCode": "MC",  "defaultCountryCluster": "FR",   "entitiesURIs": ["entities/ysCkGNx"  ] ,  "targetEntity":  {  "uri": "entities/ytY3wd9",  "type": "configuration/entityTypes/HCP",Changes on client-sideMULEMULE must map defaultCountryCluster to country sent to PforceRx in the GRV pipeline.ODSODS ETL process must use column cluster_country instead of country while reading data from Snowflake" }, { "title": "Create/Update HCP/HCO/MCO", "pageID": "164470018", "pageLink": "/pages/viewpage.action?pageId=164470018", "content": "DescriptionThe REST interfaces exposed through the MDM Manager component are used by clients to update or create HCP/HCO/MCO objects. The update process is supported by all connected MDMs – Reltio and Nucleus360 with some limitations. At this moment Reltio MDM is fully supported for entity types: HCP, HCO, MCO. Nucleus360 supports only the HCP update process. The decision which MDM should be selected to process the update request is controlled by configuration. The configuration map defines the country assignment to the MDM which stores that country's data. 
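The country-to-MDM routing just described can be sketched as a simple lookup. In this Python sketch the map entries and the function are illustrative assumptions (the real assignment lives in configuration); only the Nucleus360 HCP-only limitation is taken from the description above:

```python
# Sketch of the configuration map that assigns each country to the MDM system
# holding its data. Entries are placeholders, not the real configuration.
COUNTRY_TO_MDM = {"CN": "RELTIO", "US": "RELTIO", "JP": "NUCLEUS360"}

def select_mdm(country, entity_type):
    mdm = COUNTRY_TO_MDM.get(country)
    if mdm is None:
        raise ValueError(f"no MDM configured for country {country}")
    # Nucleus360 supports only the HCP update process (see description above)
    if mdm == "NUCLEUS360" and entity_type != "HCP":
        raise ValueError("Nucleus360 supports only HCP updates")
    return mdm
```

MDM Manager performs the equivalent lookup before forwarding each update request.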
Based on this map, MDM Manager selects the correct MDM system to forward the update request.The difference between Create and Update operations is the additional API request during the update operation. During the update, an entity is retrieved from the MDM by the crosswalk value for validation purposes. Diagrams 1 and 2 present the standard flow. On diagrams 3, 4, 5, 6 additional logic is optional and activated once the specific condition or attribute is provided. The diagrams below present a sequence of steps in processing client calls.Update 2023-09:To increase Update HCP/HCO/MCO performance, the logic was slightly altered:ContributorProvider crosswalk is now looked up in the MDM Hub Cache Databaseif the entity is not found by this crosswalk, a fallback lookup is performed using the MDM APIafter confirming that the ContributorProvider crosswalk exists in MDM, add "partialOverride" to the request and continue processing with the Create HCP/HCO/MCO logicFlow diagram1Create HCP/HCO/MCO2 Update HCP/HCO/MCO3 (additional optional logic) Create/Update HCO with ParentHCO 4 (additional optional logic) Create/Update HCP with AffiliatedHCO&Relation5 (additional optional logic) Create/Update HCO with ParentHCO 6 (additional optional logic) Create/Update HCP with source crosswalk replace StepsThe client sends an HTTP request to the MDM Manager endpoint.Kong API Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks user permissions to call the createEntity (HCP/HCO/MCO) operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with creating the specific object and returns the created object in MDM to the Client.During partialUpdate, the entity is retrieved from the MDM before the update.Additional logic will be activated in the following cases:3 - during HCO update the parentHCO attribute is delivered in the request4 - during HCP create/update affiliations are delivered in the request5 - during 
HCP/HCO creation, based on the configuration, specific sources are enriched with cached Relation objects and this object is injected into the main Entity as the reference attribute6 - during HCP create/update, when conditions are met, the source crosswalk is replaced from MAPP to MAPP_ATTENDEE.TriggersTrigger actionComponentActionDefault timeREST callManager: POST/PATCH /hco /hcp /mco create specific objects in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagercreate/update Entities in MDM systemsAPI Gatewayproxy REST and secure accessReltioReltio MDM systemNucleusNucleus MDM system" }, { "title": "Create/Update Relations", "pageID": "164469796", "pageLink": "/pages/viewpage.action?pageId=164469796", "content": "DescriptionThe operation creates or updates a Relation; MDM Manager manages the relations in the Reltio MDM system. The user can update a specific relation using a crosswalk to match, or create a new object using unique crosswalks and information about the start and end objects. The detailed process flow is shown below.Flow diagramCreate/Update RelationStepsThe client sends HTTP requests to the MDM Manager endpoint.Kong Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks the user's permissions to call the createRelation/updateRelation operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with the create/update operation.OPTIONALLY: after a successful update (ResponseStatus != failed), the relations are cached in MongoDB; the relations are then reused in the ReferenceAttributeEnrichment Service (currently configured for the GBLUS ONEKEY Affiliations). 
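The optional caching step just described can be sketched as follows (an illustrative, in-memory stand-in for the MongoDB collection; the class and field names such as RelationCache, startObject and refAttributes are assumptions, not the actual service API):

```python
# Illustrative sketch of the optional relation-caching step: after a
# successful relation update the relation is cached, and it is later
# reused to re-attach affiliations to an HCP/HCO as reference attributes.
# All names here are assumptions; the real service persists to MongoDB.

class RelationCache:
    def __init__(self):
        self._by_start = {}

    def store(self, relation):
        # Cache only successful updates (ResponseStatus != failed).
        if relation.get("responseStatus") != "failed":
            self._by_start.setdefault(relation["startObject"], []).append(relation)

    def enrich(self, entity):
        # Inject cached relations into the entity as reference attributes
        # so they are not lost during a subsequent create/update.
        entity.setdefault("refAttributes", []).extend(
            self._by_start.get(entity["uri"], []))
        return entity

cache = RelationCache()
cache.store({"startObject": "entities/1", "endObject": "entities/9",
             "type": "HCPtoHCO", "responseStatus": "ok"})
enriched = cache.enrich({"uri": "entities/1"})
```
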
This is required to enrich the HCP/HCO objects with these relations during the update; this prevents losing reference attributes during the HCP create operation.OPTIONALLY: the PATCH operation adds the PARTIAL_OVERRIDE header to Reltio, switching the request to the partial update operation.TriggersTrigger actionComponentActionDefault timeREST callManager: POST/PATCH /relations creates or updates the Relations in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagercreates or updates the Relations in MDM system" }, { "title": "Create/Update/Delete tags", "pageID": "172295228", "pageLink": "/pages/viewpage.action?pageId=172295228", "content": "The REST interfaces exposed through the MDM Manager component are used by clients to update, delete or create tags assigned to entity objects. The difference between create and update is that tags are added, and if the option returnObjects is set to true, all previously added and new tags will be returned. The Delete action removes one tag.The diagrams below present a sequence of steps in processing client calls.Flow diagramCreate tagUpdate tagDelete tagStepsThe client sends an HTTP request to the MDM Manager endpoint.Kong API Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks the user's permissions to call the createEntityTags operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with creating tags for the entity and returns the created tags in MDM to the Client.TriggersTrigger actionComponentActionDefault timeREST callManager: POST/PATCH/DELETE /entityTags create specific objects in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagercreate/update/delete Entity Tags in MDM systemsAPI Gatewayproxy REST and secure accessReltioReltio MDM system" }, { "title": "DCR flows", "pageID": "415205424", "pageLink": "/display/GMDM/DCR+flows", "content": 
"\n\n\n\nOverviewDCR (Data Change Request) process helps to improve existing data in source systems. A proposal for change is created by source systems as a DCR object (sometimes also called a VR - Validation Request), which is usually routed by MDM HUB to DS (Data Stewards) either in Reltio or in third-party validators (OneKey, Veeva OpenData). The response is provided in two forms: a response for the specific DCR - metadata; a profile data update as a direct effect of DCR processing - payload.General DCR process flow High level solution architecture for DCR flowSource: Lucid\n\n\n\n\n\nSolution for OneKey (OK)\n\n\n\nSolution for Veeva OpenData (VOD)\n\n\n\n\n\nArchitecture highlightsActors involved: PforceRX, Reltio, HUB, OneKeyKey components: DCR Service 2 (second version) for AMER, EMEA, APAC, US tenantsProcess details:DCRs are created directly by PforceRx using DCR's HUB API. PforceRx checks for DCR status updates every 24h → finds out which DCRs have been updated (since the last check 24h ago) and then pulls details from each one with /dcr/_status. Integration with OneKey is realized by APIs - DCRs are created with /vr/submit and their status is verified every 8h with /vr/trace. Data profile updates (payload) are delivered via CSV and S3 and ETLed (VOD batch) to Reltio with COMPANY's help. DCRRegistry & DCRRegistryVeeva collections are used in Mongo for tracking purposes\n\n\n\nArchitecture highlightsActors involved: Data Stewards in Reltio, HUB, Veeva OpenData (VOD)Key components: DCR Service 2 (second version) for AMER, EMEA, APAC, US tenantsProcess details:DCRs are created by Data Stewards (DSRs) in Reltio via Suggest / Send to 3rd Party Validation - input for DSRs is provided by reports from PforceRx. Communication with Veeva via S3<>SFTP and synchronization GMTF jobs. 
DCRs are sent and received in batches every 24h. DCR metadata is exchanged via multiple ZIPped CSV files. Data profile updates (payload) are delivered via CSV and S3 and ETLed (VOD batch) to Reltio with COMPANY's help. DCRRegistry & DCRRegistryONEKEY collections are used in Mongo for tracking purposes\n\n\n\n\n\nSolution for IQVIA Highlander (HL) \n\n\n\nSolution for OneKey on GBLUS - sources ICEU, Engage, GRV\n\n\n\n\n\nArchitecture highlightsActors involved: Veeva on behalf of PforceRX, Reltio, HUB, IQVIA wrapperKey components: DCR Service (first version) for GBLUS tenantProcess details:DCRs are created by Veeva sending CSV requests - based on information acquired from PforceRx. Integration HUB <> Veeva → via files and S3<>SFTP. HUB confirms DCR creation by returning file reports back to Veeva. Integration HUB <> IQVIA wrapper → via files and S3. HUB is responsible for translation of the Veeva DCR CSV format to the IQVIA CSV wrapper format, which then creates DCRs in Reltio. Data Stewards approve or reject the DCRs in Reltio, which updates data profiles accordingly. 
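The Veeva-to-IQVIA translation responsibility mentioned above can be sketched like this (the column names here are hypothetical; the real mappings are defined in the field-mapping spreadsheets referenced on the HL DCR page):

```python
import csv
import io

# Hypothetical sketch of translating a Veeva DCR CSV into the IQVIA CSV
# wrapper format. FIELD_MAP and all column names are assumptions for
# illustration only; the real mapping comes from the DID spreadsheets.
FIELD_MAP = {
    "first_name__v": "FIRST_NAME",
    "last_name__v": "LAST_NAME",
    "dcr_type__v": "REQUEST_TYPE",
}

def translate(veeva_csv: str) -> str:
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(FIELD_MAP.values()))
    writer.writeheader()
    for row in csv.DictReader(io.StringIO(veeva_csv)):
        # Keep only mapped columns and rename them for the wrapper file.
        writer.writerow({FIELD_MAP[k]: v for k, v in row.items() if k in FIELD_MAP})
    return out.getvalue()

wrapper_csv = translate("first_name__v,last_name__v,dcr_type__v\nAnn,Lee,NEW_HCP\n")
```
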
PforceRx receives updates about changes in Reltio. The DCRRequest collection is used in Mongo for tracking purposes\n\n\n\nArchitecture highlights (draft)Actors involved: HUB, IQVIA wrapperKey components: DCR Service (first version) for GBLUS tenantProcess details:POST events from sources are captured - some of them are translated to direct DCRs, some of them are gathered and then pushed via flat files to be transformed into DCRs to OneKey \n\n\n" }, { "title": "DCR generation process (China DCR)", "pageID": "164470008", "pageLink": "/pages/viewpage.action?pageId=164470008", "content": "The gateway supports the following DCR types:NewHCP – created when a new HCP is registered in Reltio and requires external validationNewHCOL1 – created when HCO Level 1 is not found in ReltioNewHCOL2 – created when HCO Level 2 is not found in ReltioMultiAffil – created when a profile has multiple affiliations DCR generation processes are handled in two steps:During HCP modification – if initial activation criteria are met, then a DCR request is generated and published to the KAFKA -gw-dcr-requests topic.In the next step, the internal Camel route DCRServiceRoute reads the requests generated from the topic and processes them as follows:checks if the time specified by delayPrcInSeconds elapsed since request generation – it makes sure that the Reltio batch match process has finished and newly inserted profiles have merged with the existing ones;checks if the entity that caused DCR generation still exists;checks the full activation criteria (table below) on the latest state of the target entity - if the criteria are not met then the request is closed;creates the DCR in Reltio;updates external info;creates a COMPANYDataChangeRequest entity in Reltio for tracking and exporting purposes.Created DCRs are exported by the Informatica ETL process managed by IQVIA. The DCR applying process (reject/approve actions) is executed through the MDM HUB DCR response API by an external app managed by the MDE team.The table below presents DCR activation 
criteria handled by the system.Table 9. DCR activation criteria (columns: NewHCP / MultiAffiliation / NewHCOL2 / NewHCOL1):Country in: CN / CN / CN / CN. Source in: GRV / GRV, MDE, FACE, EVR, CN3RDPARTY / GRV, FACE, CN3RDPARTY / GRV, FACE, CN3RDPARTY. ValidationStatus in: pending, partial-validated (or, if merged: OV: notvalidated, GRV non-OV: pending/partial-validated) / validated, pending / validated, pending / validated, pending. SpeakerStatus in: enabled, null / enabled, null / enabled, null / enabled, null. Workplaces count: >1 (MultiAffiliation only). Hospital found: true / true / false / true. Department found: true / true / false. Similar DCR created in the past: false / false / false / false.Update: December 2021NewHCP DCR is now created if ValidationStatus is pending or partial-validated.NewHCP DCR is also created if the OV ValidationStatus is notvalidated, but the most-recently updated GRV crosswalk provides a non-OV ValidationStatus of pending or partial-validated - in case the HCP gets merged into another entity upon creation/modification.DCR request processing history is now available in Kibana via Transaction Log - dashboard API Calls, transaction type "CreateDCRRoute"DCR response processing history (DCR approve/reject flow) is now available in Kibana via Transaction Log - dashboard API Calls, transaction type "DCRResponse"" }, { "title": "HL DCR [Decommissioned April 2025]", "pageID": "164470085", "pageLink": "/pages/viewpage.action?pageId=164470085", "content": "ContactsVendorContactPforceRXDL-PForceRx-SUPPORT@COMPANY.comIQVIA (DCR Wrapper)COMPANY-MDM-Support@iqvia.com As a part of the Highlander project, the DCR processing flow was created which realizes the following scenarios:Update HCP account details i.e. specialty, address, name (different sources of elements),Add new HCP account with primary affiliation to an existing organization,Add new HCP account with a new business account,Update HCP and add affiliation to a new HCO,Update HCP account details and remove existing details i.e. 
birth date, national id, …,Update HCP account and add new non primary affiliation to an existing organization,Update HCP account and add new primary affiliation to an existing organization,Update HCP account inactivate primary affiliation. Person account has more than 1 affiliation,Update HCP account inactivate non primary affiliation. Person account has more than 1 affiliation,Inactivate HCP account,Update HCP and add a private address,Update HCP and update existing private address,Update HCP and inactivate a private address,Update HCO details i.e. address, name (different sources of elements),Add new HCO account,Update HCO and remove details,Inactivate HCO account,Update HCO address,Update HCO and add new address,Update HCO and inactivate address,Update HCP's existing affiliation.The above cases have been aggregated into six generic types in the internal HUB model:NEW_HCP_GENERIC - represents cases when a new HCP object is created with or without affiliation to an HCO,UPDATE_HCP_GENERIC - aggregates cases when an existing HCP object is changed,DELETE_HCP_GENERIC - represents the case when an HCP is deactivated,NEW_HCO_GENERIC - aggregates scenarios when a new HCO object is created with or without affiliations to a parent HCO,UPDATE_HCO_GENERIC - represents cases when an existing HCO object is changed,DELETE_HCO_GENERIC - represents the case when an HCO is deactivated.General Process OverviewProcess steps:Veeva uploads the DCR request file to the FTP location,the PforceRx Channel component downloads the DCR request file,PforceRx Channel validates and maps each DCR request to the internal model,PforceRx Channel sends the request to DCR Service,DCR Service processes the request: validating, enriching and mapping to the Iqvia DCR Wrapper,PforceRx Channel prepares the report file containing the technical status of DCR processing - at this time, the report will contain only requests which don't pass the validation,a scheduled process in DCR Service prepares the Wrapper requests file and uploads this to S3 
location.DCR Wrapper processes the file: creating DCRs in Reltio or rejecting the request due to errors. After that the response file is published to the s3 location,DCR Service downloads the response and updates the DCR statuses,a scheduled process in PforceRx Channel gets the DCR requests and prepares the next technical report - at this time the report has the technical status which comes from DCR Wrapper,DCRs that were created by DCR Wrapper are reviewed by Data Stewards. A DCR can be accepted or rejected,after accepting or rejecting a DCR, Reltio publishes a message about this event,DCR Service consumes the message and updates the DCR status,PforceRx Channel gets the DCR data to prepare a response file. The response file contains the final status of DCR processing in Reltio.Veeva DCR request file specificationThe specification is available at the following location:https://COMPANY-my.sharepoint.com/:x:/r/personal/chinj2_COMPANY_com/Documents/Mig%20In-Prog/Highlander/PMO/09%20Integration/LATAM%20Reltio%20DCR/DCR_Reltio_T144_Field_Mapping_Reltio.xlsxDCR Wrapper request file specificationThe specification is available at the following link:https://COMPANY.sharepoint.com/:x:/r/sites/HLDCR/Shared%20Documents/ReltioCloudMDM_LATAM_Highlander_DCR_DID_COMPANY__DEVMapping_v2.1.xlsx" }, { "title": "OK DCR flows (GBLUS)", "pageID": "164469877", "pageLink": "/pages/viewpage.action?pageId=164469877", "content": "DescriptionThe process is responsible for creating DCRs in Reltio and starting the Change Requests Workflow for singleton entities created in Reltio. During this process, communication to the IQVIA OneKey VR API is established. The SubmitVR operation is executed to create a new Validation Request. The TraceVR operation is executed to check the status of the VR in OneKey. All DCRs are saved in a dedicated collection in the HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. 
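A rough sketch of how such a tracking record can move through its life cycle (the NEW/SENT/FAILED/ACCEPTED/REJECTED statuses appear in the flows on this and the following sub-pages; the dict shape and helper functions are illustrative, not the actual Mongo schema):

```python
# Illustrative DCR tracking record and its status transitions, modeled on
# the statuses used in these flows (NEW on creation, SENT after a
# successful /vr/submit, then a closing status after /vr/trace).
from datetime import datetime, timezone

ALLOWED = {
    "NEW": {"SENT", "FAILED"},
    "SENT": {"ACCEPTED", "REJECTED", "FAILED"},
}

def new_dcr(entity_uri):
    return {"entityUri": entity_uri, "status": "NEW",
            "created": datetime.now(timezone.utc)}

def transition(dcr, status):
    if status not in ALLOWED.get(dcr["status"], set()):
        raise ValueError(f"illegal transition {dcr['status']} -> {status}")
    dcr["status"] = status
    return dcr

dcr = new_dcr("entities/123")
transition(dcr, "SENT")      # after a successful submitVR call
transition(dcr, "ACCEPTED")  # after traceVR confirms the validation
```
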
Some changes can be suggested by the DS using the "Suggest" operation in Reltio and the "Send to Third Party Validation" button; the process "Data Steward OK Validation Request" processes these changes and sends them to the OneKey service. The process is divided into 4 sections:Submit Validation RequestTrace Validation RequestData Steward ResponseData Steward OK Validation RequestThe below diagram presents an overview of the entire process. Detailed descriptions are available in the separate subpages.Flow diagramModel diagramStepsSubmitVRThe process of submitting a VR is triggered by the Reltio events. The process aggregates events in a time window and once the window is closed the processing is started.During SubmitVR, process checks are executed and the getMatches operation in Reltio is invoked to verify potential matches for the singleton entities. Once all checks pass, a new submitVR request is created in OneKey and the DCR is saved in Reltio and in the Mongo Cache.TraceVRThe process of tracing VRs is triggered every N hours on the Mongo DCR cache collection.For each DCR the traceVR operation is executed in OneKey to verify the current status of the specific validation request.Once the checks are correct the DCR is updated in Reltio and in the Mongo Cache.Data Steward ResponseThe process is responsible for gathering changes on Change Requests objects from Reltio; the process only accepts events without the ThirdPartyValidation flag. Based on the received change invoked by the Data Steward, the DCR is updated in Reltio and in the Mongo Cache.Data Steward OK Validation RequestThe process is responsible for processing changes on Change Requests objects from Reltio; the process only accepts events with the ThirdPartyValidation flag. This event is generated after the DS clicks the "Send to Third Party Validation" button in Reltio. The DS is "Suggesting" changes on the specified profile; these changes are next sent to HUB with the DCR event. 
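The flag-based split between the two Data Steward processes described above can be sketched as a small routing predicate (the event dict shape is an assumption; the real events are Reltio Change Request events on the internal Kafka topics):

```python
# Sketch of routing Change Request events between the two processes:
# events WITH the ThirdPartyValidation flag go to "Data Steward OK
# Validation Request", events WITHOUT it go to "Data Steward Response".
# The event dict shape is an assumption for illustration.

def route(event):
    if event.get("thirdPartyValidation"):
        return "DataStewardOKValidationRequest"
    return "DataStewardResponse"

r1 = route({"type": "CHANGE_REQUEST_CHANGED", "thirdPartyValidation": True})
r2 = route({"type": "CHANGE_REQUEST_CHANGED"})
```
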
The changes are not visible in Reltio; it is just a container that keeps the changes.HUB retrieves the "Preview" state from Reltio and calculates the changes that will be sent to the OneKey WebService using the submitVR operation.After a successful submitVR response, HUB closes/rejects the existing DCR in Reltio. The _reject operation has to be invoked on the current DCR in Reltio because the changes should not be applied to the profile. The changes are now being validated in the OneKey system, and appropriate steps will be taken in the next phase (export changed data to Reltio or reject the suggestion).TriggersDescribed in the separate sub-pages for each process.Dependent componentsDescribed in the separate sub-pages for each process." }, { "title": "Data Steward OK Validation Request", "pageID": "172306908", "pageLink": "/display/GMDM/Data+Steward+OK+Validation+Request", "content": "DescriptionThe process handles the DS-suggested changes based on the Change Request events received from Reltio (publishing) that are marked with the ThirdPartyValidation flag. The "suggested" changes are retrieved using the "preview" method and sent to IQVIA OneKey or Veeva OpenData for validation. After a successful submitVR response, HUB closes/rejects the existing DCR in Reltio and additionally creates a new DCR object with a relation to the entity in Reltio for tracking and status purposes. 
Because of the ONEKEY interface limitation, removal of attributes is sent to IQVIA as a comment.Flow diagramStepsEvent publisher publishes full enriched events to $env-internal-[onekeyvr|thirdparty]-ds-requests-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_CREATED("CHANGE_REQUEST_CREATED")Only events with ExternalInfo and the ThirdPartyValidation flag set to true and the Change Request status equal to AWAITING_REVIEW are accepted in this process; otherwise, the event is rejected and processing ends.HUB DCR Cache is checked: if any ReltioDCR requests exist and are not in a FAILED status, then processing goes to the next step.The DCR request that contains targetChangeRequest is enriched with the current Entity data using HUB Cache.Veeva specific: The entity is checked; if no VOD crosswalk exists, then "golden profile" parameters should be used with the below logic.The entity is checked; if an active [ONEKEY|VOD] crosswalk exists the following steps are executed:The suggested state of the entity is retrieved from Reltio using the getEntityWithChangeRequests operation (parameters - entityUri and the changeRequestId from the DCR event). The Current Entity and the Preview Entity are compared using the following rules: (the full attributes that are part of the comparing process are described here)Simple attributes (like FirstName/LastName):Values are compared using the equals method. If differences are found the suggested value is taken. If no differences are found: for mandatory, the current value is taken; for optional, the none value is taken (null).Complex attributes (like Specialties/Addresses):Whole nested attributes are matched using Reltio "uri" attribute keys.If there is a new Specialty/Address, the new suggested nested attribute is taken.Veeva specific: If there is a new Specialty/Addresses/Phone/Email/Medical degree*/HCP Focus area*, the new suggested nested attribute is taken. 
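The simple-attribute rule above can be sketched as a small helper (illustrative only; the real service compares Reltio attribute structures, not bare values):

```python
# Sketch of the simple-attribute comparison rule: the suggested value is
# taken when it differs from the current one; when there is no difference,
# mandatory attributes keep the current value and optional ones yield None.

def compare_simple(current, suggested, mandatory):
    if current != suggested:
        return suggested
    return current if mandatory else None

first = compare_simple("Jon", "John", mandatory=False)    # differs: suggested wins
last = compare_simple("Smith", "Smith", mandatory=True)   # same: current kept
email = compare_simple("a@x.com", "a@x.com", mandatory=False)  # same: omitted
```
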
Since Veeva uses a flat structure for these attributes, we need to calculate the specialty attribute number (like specialty_5__v) to use when sending the request. Attribute number = count of existing attributes + 1.If there is no new Specialty/Address and there is a change in an existing attribute, the suggested nested change is taken. If there are multiple suggested changes, the one with the highest Rank is taken.If there are no changes: for mandatory, the current nested attribute that is connected with the ONEKEY crosswalk is taken; for optional, the none nested attribute is taken (no need to send).Contact Affiliations / OtherHCOtoHCOAffiliation:If there are no changes, return the current list.If there is a new Contact Affiliation with an ONEKEY crosswalk, add it to the current list.Additional checks:If there are changes associated with another source (different than the [ONEKEY|VOD]), then these changes are ignored and the VR is saved in Reltio with a comment listing what attributes were ignored e.g.: "Attributes: [YoB: 1956], [Email: engagetest123@test.com] ignored due to update on non-[onekey|VOD] attribute."If an attribute associated with the [ONEKEY|VOD] source is removed, a comment specifying what should be removed on the [ONEKEY|VOD] side is generated and sent to [ONEKEY|VOD], e.g.: "Please remove attributes: [Address: 10648 Savannah Plantation Ct, 32832, Orlando, United States]."A DCRRequest object is created in Mongo for the flow state recording and generation of the new unique DCR ID for validation requests and data tracing.DCR cache attributesValues for IQVIAValues for OKValues for Veeva (R1)typeOK_VRPFORCERX_DCRRELTIO_SUGGESTstatusDCRRequestStatusDetails (DCRRequestStatus.NEW, currentDate)createdByonekey-dcr-serviceUser which creates DCR via Suggest button in ReltioUser which creates DCR via Suggest button in ReltiodatenowSendTo3PartyValidationtrue (flag that indicates the DCR objects created by this process)Calculated changes are mapped to the OneKey submitVR Request and are submitted using 
API REST method POST /vr/submit.Veeva specific:  submitting DCR request to Veeva requires creation of ZIPed CSV files with agreed structure and placed on S3 bucketIf the submission is successful then:DCRRequest.status is updated to SENT with [OK|VOD] request and response details DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMapping for OneKeyMapping for VeevaDCRIDOK VR Reqeust Id (cegedimRequestEid)ID assigned by MDM HUB EntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"SENT"Commentsoptionally commentsSentDatecurrent timeSendTo3PartyValidationtrueOtherwise (FAILED)DCRRequest.status is updated to FAILED with OK request and exception response details DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"FAILED"CommentsONEKEY service failed [exception details]SentDatecurrent timeSendTo3PartyValidationtrueThe current DCR object in Reltio is closed using the _reject operation - POST - /changeRequests//_rejectOtherwise, If ONEKEY crosswalk does not exist, or the ONEKEY crosswalk is soft-deleted, or entity is EndDated: the following steps are executed:DCRRequest object is created in Mongo for the flow state recording and generation of the new unique DCR ID for validation requests and data tracing.DCR cache attributesvaluestypeDCRType.OK_VRstatusDCRRequestStatusDetails (DCRRequestStatus.NEW, currentDate)created byonekey-dcr-servicedatenowSendTo3PartyValidationtrue (flag that indicates the DCR objects created by this process)DCRRequest.status is updated to 
FAILED and comment "No OK crosswalk available"DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"REJECTED"CommentsNo ONEKEY crosswalk availableCreatedByMDM HUBSentDatecurrent timeSendTo3PartyValidationtrueThe current DCR object in Reltio is closed using the _reject operation - POST - /changeRequests//_rejectEND ONEKEY Comparator (suggested changes)HCPReltio AttributeONEKEY attributemandatory typeattribute typeFirstNameindividual.firstNameoptionalsimple valueLastNameindividual.lastNamemandatorysimple valueCountryisoCod2mandatorysimple valueGenderindividual.genderCodeoptionalsimple lookupPrefixindividual.prefixNameCodeoptionalsimple lookupTitleindividual.titleCodeoptionalsimple lookupMiddleNameindividual.middleNameoptionalsimple valueYoBindividual.birthYearoptionalsimple valueDobindividual.birthDayoptionalsimple valueTypeCodeindividual.typeCodeoptionalsimple lookupPreferredLanguageindividual.languageEidoptionalsimple valueWebsiteURLindividual.websiteoptionalsimple valueIdentifier value 1individial.externalId1optionalsimple valueIdentifier value 2individial.externalId2optionalsimple valueAddresses[]address.countryaddress.cityaddress.addressLine1address.addressLine2address.Zip5mandatorycomplex (nested)Specialities[]individual.speciality1 / 2 / 3optionalcomplex (nested)Phone[]individual.phoneoptionalcomplex (nested)Email[]individual.emailoptionalcomplex (nested)Contact Affiliations[]workplace.usualNameworkplace.officialNameworkplace.workplaceEidoptionalContact AffiliationONEKEY crosswalkindividual.individualEidmandatoryIDHCOReltio AttributeONEKEY attributemandatory typeattribute typeNameworkplace.usualNameworkplace.officialNameoptionalsimple 
valueCountryisoCod2mandatorysimple valueOtherNames.Nameworkplace.usualName2optionalcomplex (nested)TypeCodeworkplace.typeCodeoptionalsimple lookupWebisteWebsiteURLworkplace.websiteoptionalcomplex (nested)Addresses[]address.countryaddress.cityaddress.addressLine1address.addressLine2address.Zip5mandatorycomplex (nested)Specialities[]workplace.speciality1 / 2 / 3optionalcomplex (nested)Phone[] (!FAX)workplace.telephoneoptionalcomplex (nested)Phone[] (FAX)workplace.faxoptionalcomplex (nested)Email[]workplace.emailoptionalcomplex (nested)ONEKEY crosswalkworkplace.workplaceEidmandatoryIDTriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-onekey-dcr-service:ChangeRequestStreamprocess publisher full change request events in the stream that contain ThirdPartyValidation flagrealtime: events stream processing Dependent componentsComponentUsageOK DCR ServiceMain component with flow implementationVeeva DCR ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsHub StoreDCR and Entities Cache " + }, + { + "title": "Data Steward Response", + "pageID": "164469841", + "pageLink": "/display/GMDM/Data+Steward+Response", + "content": "DescriptionThe process updates the DCR's based on the Change Request events received from Reltio(publishing). 
Based on the Data Steward decision, the state attribute contains the relevant information to update the DCR status.Flow diagramStepsEvent publisher publishes simple events to $env-internal-[onekeyvr|veeva]-change-requests-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_REMOVED("CHANGE_REQUEST_REMOVED")Only the events without the ThirdPartyValidation flag are accepted; otherwise, the event is rejected and the process is ended.Events are processed in the Stream and based on the targetChangeRequest.state attribute a decision is made.If the state is APPLIED or REJECTED, the DCR is retrieved from the cache based on the changeRequestURI.If the DCR exists in Cache the status in Reltio is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: APPLIED → ACCEPTEDstate: REJECTED → REJECTEDOtherwise, the events are rejected and the transaction is ended.TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-onekey-dcr-service:OneKeyResponseStreammdm-veeva-dcr-service:veevaResponseStreamprocess publisher full change request events in streamrealtime: events stream processing Dependent componentsComponentUsageOK DCR ServiceMain component with flow implementationVeeva DCR ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsHub StoreDCR and Entities Cache " }, { "title": "Submit Validation Request", "pageID": "164469875", "pageLink": "/display/GMDM/Submit+Validation+Request", "content": "DescriptionThe process of submitting new validation requests to the OneKey service based on the Reltio change events aggregated in time windows. 
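The window-based aggregation in this description can be sketched as follows (a minimal sketch that keeps only the last event per entity when the window closes; the per-entity grouping is an assumption made for illustration, and the recommended window length on this page is 4 hours):

```python
# Sketch of closing an aggregation time window: events collected during
# the window are reduced so that only the last event per entity survives.
# (Per-entity grouping is an assumption made for this illustration.)

def close_window(events):
    latest = {}
    for event in events:               # events arrive in time order
        latest[event["entityUri"]] = event
    return list(latest.values())

survivors = close_window([
    {"entityUri": "entities/1", "type": "HCP_CREATED"},
    {"entityUri": "entities/1", "type": "HCP_CHANGED"},
    {"entityUri": "entities/2", "type": "ENTITY_MATCHES_CHANGED"},
])
```
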
During this process, new DCRs are created in Reltio.Flow diagramStepsEvent publisher publishes simple events to $env-internal-onekeyvr-in including HCP_*, HCO_*, ENTITY_MATCHES_CHANGED Events are aggregated in a time window (recommended the window length 4 hours) and the last event is returned to the process after the window is closed.Simple events are enriched with the Entity data using HUB CacheThen, the following checks are executedcheck if at least one crosswalk create date is equal or above for a given source name and cut off date specified in configuration - section submitVR/crosswalkDecisionTablescheck if entity attribute values match specified in configurationcheck if there is no valid DCR created for the entity  check if the entity is activecheck if the OK crosswalk doesn't exist after the full entity retrieval from the HUB cachematch category is not 99GetMatches operation from Reltio returns 0 potential matchesIf any check is negative then the process is aborted.DCRRequest object is created in Mongo for the flow state recording and generation of the new unique DCR ID for validation request and data tracing.The entity is mapped to OK VR Request and it's submitted using API REST method POST /vr/submit.If the submission is successful then:DCRRequest.status is updated to SENT with OK request and response details DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus""OPEN"VRStatusDetail"SENT"CreatedByMDM HUBSentDatecurrent timeOtherwise FAILED status is recorded in DCRRequest with an OK error response.DCRRequest.status is updated to FAILED with OK request and exception response details DCR entity is created in Reltio and the relation between the processed entity and the DCR 
entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMappingDCRIDOK VR Request Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"FAILED"CommentsONEKEY service failed [exception details]CreatedByMDM HUBSentDatecurrent timeTriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-onekey-dcr-service:OneKeyStreamprocess publisher simple events in streamevents stream processing with 4h time window events aggregationOUT API requestone-key-client:OneKeyIntegrationService.submitValidationsubmit VR request to OneKeyinvokes API request for each accepted eventDependent componentsComponentUsageOK DCR ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerReltio Adapter for getMatches and create operationsOneKey AdapterSubmits Validation RequestHub StoreDCR and Entities Cache MappingsReltio → OK mapping file: onkey_mappings.xlsxOK mandatory / required fields: VR - Business Fields Requirements(COMPANY).xlsxOneKey Documentation" }, { "title": "Trace Validation Request", "pageID": "164469983", "pageLink": "/display/GMDM/Trace+Validation+Request", "content": "DescriptionThe process of tracing the VR changes based on the OneKey VR changes. During this process, the HUB DCR Cache is queried every hour for SENT DCRs and the VR status is checked using the OneKey web service. After verification, the DCR is updated in Reltio or a new Workflow is started in Reltio for the Data Steward manual validation. 
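The hourly trace decision described above can be sketched as follows (only the status constants come from this page; the function shape and return labels are assumptions):

```python
# Sketch of the per-request decision made after a /vr/trace call: pending
# requests are postponed, definitive negative statuses close the DCR as
# rejected, and found statuses proceed to the crosswalk check (accepted,
# or Data Steward review when no OK crosswalk exists).
PENDING = {"REQUEST_PENDING_OKE", "REQUEST_PENDING_JMS", "REQUEST_PROCESSED"}
REJECTED = {"VAS_NOT_FOUND", "VAS_INCOHERENT_REQUEST", "VAS_DUPLICATE_PROCESS"}
FOUND = {"VAS_FOUND", "VAS_FOUND_BUT_INVALID"}

def trace_decision(process_status, response_status):
    if process_status in PENDING:
        return "POSTPONE"            # check again on the next hourly run
    if response_status in REJECTED:
        return "CLOSE_REJECTED"
    if response_status in FOUND:
        return "CHECK_CROSSWALK"
    return "POSTPONE"

d1 = trace_decision("REQUEST_PENDING_OKE", None)
d2 = trace_decision("REQUEST_RESPONDED", "VAS_NOT_FOUND")
d3 = trace_decision("RESPONSE_SENT", "VAS_FOUND")
```
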
Flow diagramStepsEvery N hours OK VR requests with status SENT are queried in the DCRRequests store.For each open request, its status is checked in OK using REST API method /vr/traceThe first check is the VR.rsp.status attribute, checking if the status is SUCCESSNext, if the process status (VR.rsp.results.processStatus) is REQUEST_PENDING_OKE | REQUEST_PENDING_JMS | REQUEST_PROCESSED or the OK data export date (VR.rsp.results.trace6CegedimOkcExportDate) is less than 24 hours old then the processing of the request is postponed to the next checkexportDate or processStatus are optional and can be null.The process goes to the next step only if processStatus is REQUEST_RESPONDED | RESPONSE_SENTThe process is postponed to the next check only if trace6CegedimOkcExportDate is not null and is less than 24 hours oldIf the processStatus is validated and VR.rsp.results.responseStatus is VAS_NOT_FOUND | VAS_INCOHERENT_REQUEST | VAS_DUPLICATE_PROCESS then the DCR is closed with status REJECTEDDCR entity attributesMappingVRStatus"CLOSED"VRStatusDetail"REJECTED"ReceivedDatecurrent timeCommentsOK.responseCommentBefore the next two steps, the current Entity status is retrieved from HUB Cache. This is required to check if the entity was merged with an OK entity. If responseStatus is VAS_FOUND | VAS_FOUND_BUT_INVALID and an OK crosswalk whose value equals the OK validated id (individualEidValidated or workplaceEidValidated) exists in the Reltio entity then the DCR is closed with status ACCEPTED.DCR entity attributesMappingVRStatus"CLOSED"VRStatusDetail"ACCEPTED"ReceivedDatecurrent timeCommentsOK.responseComment If responseStatus is VAS_FOUND | VAS_FOUND_BUT_INVALID but the OK crosswalk doesn't exist in Reltio then a Reltio DCR Request is created and a workflow task is triggered for Data Steward review. The DCR status entity is updated with DS_ACTION_REQUIRED status. 
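The trace gating described in the steps above (postpone while OneKey is still pending, or while the OKC export is younger than 24 hours) can be sketched like this; class and method names are illustrative, not the HUB code:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the trace checks described above: a VR is examined further only when
// processStatus says OneKey has responded AND the OKC export date (if present)
// is at least 24h old; otherwise the check is postponed to the next run.
public class TraceGate {
    enum Decision { POSTPONE, CONTINUE }

    static Decision gate(String processStatus, Instant okcExportDate, Instant now) {
        boolean responded = "REQUEST_RESPONDED".equals(processStatus)
                || "RESPONSE_SENT".equals(processStatus);
        if (!responded) return Decision.POSTPONE;              // still pending in OneKey
        if (okcExportDate != null
                && okcExportDate.isAfter(now.minus(Duration.ofHours(24))))
            return Decision.POSTPONE;                          // export too recent
        return Decision.CONTINUE;                              // go on to responseStatus
    }
}
```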
DCR entity attributesMappingVRStatus""OPEN"VRStatusDetail"DS_ACTION_REQUIRED "ReceivedDatecurrent timeCommentsOK.responseCommentGET /changeRequests operation is invoked to get a new change request ID and start a new workflowPOST /workflow/_initiate operation is invoked to init new Workflow in ReltioWorkflow attributesMappingchangeRequest.uriChangeRequest Reltio URIchangeRequest.changesEntity URIcommentindividualEidValidated or workplaceEidValidatedPOST /entities?changeRequestId= - operation is invoked to update change request Entity container with DCR Status to Closed, this change is only visible in Reltio once DS accepts the DCR. Body attributesMappingattributes"DCRRequests": [ { "value": { "VRStatus": [ { "value": "CLOSED" } ] }, "refEntity": { "crosswalks": [ { "type": "configuration/sources/DCR", "value": "$requestId", "dataProvider": false, "contributorProvider": true }, { "type": "configuration/sources/DCR", "value": "$requestId_REF", "dataProvider": true, "contributorProvider": false } ] }, "refRelation": { "crosswalks": [ { "type": "configuration/sources/DCR", "value": "$requestId_REF" } ] } }]crosswalks"crosswalks": [ { "type": "configuration/sources/", "value": "", "dataProvider": false, "contributorProvider": true, "deleteDate": "" }, { "type": "configuration/sources/DCR", "value": "$requestId_CR", "dataProvider": true, "contributorProvider": false, "deleteDate": "" }]TriggersTrigger actionComponentActionDefault timeIN Timer (cron)mdm-onekey-dcr-service:TraceVRServicequery mongo to get all SENT DCR's related to OK_VR processevery hourOUT API requestone-key-client:OneKeyIntegrationService.traceValidationtrace VR request to OneKeyinvokes API request for each DCRDependent componentsComponentUsageOK DCR ServiceMain component with flow implementationManagerReltio Adapter for GET /changeRequests and POST /workflow/_initiate operations OneKey AdapterTraceValidation RequestHub StoreDCR and Entities Cache " + }, + { + "title": "PforceRx DCR flows", + "pageID": 
"209949183", + "pageLink": "/display/GMDM/PforceRx+DCR+flows", + "content": "DescriptionMDM HUB exposes a REST API to create and check the status of DCRs. The process is responsible for creating DCRs in Reltio and starting the Change Request Workflow for DCRs created in Reltio, or for creating the DCRs (submitVR operation) in ONEKEY. DCR requests can be routed to an external MDM HUB instance handling the requested country. The action is transparent to the caller. During this process, the communication to the IQVIA OneKey VR API / Reltio API is established. The routing decision depends on the market, operation type, or changed profile attributes.Reltio API: createEntity (with ChangeRequest) operation is executed to create a completely new entity in a new Change Request in Reltio. attributesUpdate (with ChangeRequest) operation is executed after calculation of the specific changes on complex or simple attributes on an existing entity - this also creates a new Change Request. Start Workflow operation is requested at the end; this starts the Workflow for the DCR in Reltio so the change requests appear in the Reltio Inbox for Data Steward review.IQVIA API: SubmitVR operation is executed to create a new Validation Request. The TraceVR operation is executed to check the status of the VR in OneKey.All DCRs are saved in a dedicated collection in the HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. The DCR statuses are updated by consuming events generated by Reltio or by a periodic query of open DCRs in OneKey.The Data Steward can decide to route a DCR to IQVIA as well - some changes can be suggested by the DS using the "Suggest" operation in Reltio and the "Send to Third Party Validation" button; the process "Data Steward OK Validation Request" processes these changes and sends them to the OneKey service. The below diagram presents an overview of the entire process. 
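The routing decision described above (market, operation type, changed attributes choosing between Reltio, OneKey and Veeva) can be sketched as a first-match decision table. This is an illustrative Java sketch; the `Rule` fields follow the configuration attributes named in the text, but the class and the sample rules are my own, not the real HUB config:

```java
import java.util.List;

// Minimal sketch of the decision-table routing described above. Every rule field
// is optional; a null field matches anything. The first matching rule wins, and
// the fallback target here (Reltio DS review) is an assumption for illustration.
public class DcrRouter {
    record Rule(String country, String operationType, String affectedObjects, String target) {
        boolean matches(String c, String op, String obj) {
            return (country == null || country.equals(c))
                    && (operationType == null || operationType.equals(op))
                    && (affectedObjects == null || affectedObjects.equals(obj));
        }
    }

    static String route(List<Rule> table, String country, String op, String obj) {
        return table.stream()
                .filter(r -> r.matches(country, op, obj))
                .map(Rule::target)
                .findFirst()
                .orElse("Reltio"); // assumed fallback: keep the DCR with the Reltio DS
    }
}
```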
Detailed descriptions are available in the separated subpages.API doc URL: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-dev/swagger-ui/index.htmlFlow diagramDCR Service High-Level ArchitectureDCR HUB Logical ArchitectureModel diagramFlows:Create DCRThe client call API Post /dcr method and pass the request in JSON format to MDM HUB DCR serviceThe request is validated against the following rules:mandatory fields are setreference object HCP,HCO are available in Reltioreferenced attributes like specialties, addresses are in the changed objectThe service evaluates the target system based on country, operation type (create, update), changed attributes. The process is controlled by the decision table stored in the config.The DCR is created in the target system through the APIThe result is stored in the registry. DCR information entity is created in Reltio for tracking.The status with created DCR object ids are returned in response to the ClientGet DCR statusThe client calls GET /dcr/_status methodThe DCR service queries DCR registry in Mongo and returns the status to the Client.There are processes updating dcr status in the registry:DCR change events are generated by Reltio when DCR is accepted or rejected by DS. Events are processed by the service.Reltio: process DCR Change EventsDCR change events are generated by Reltio when DCR is accepted or rejected by DS. Events are processed by the service.OneKey: process DCR Change EventsDCR change events are generated by the OneKey service when DCR is accepted or rejected by DS. 
Events are processed by the service.OneKey: generate DCR Change Events (traceVR)Every x configured hours the OneKey status method is queried to get status for open validation requests.Reltio: create DCR method - directdirect API method that creates DCR in Reltio (contains mapping and logic description)OneKey: create DCR method (submitVR) - directdirect API method that creates DCR in OneKey - executes the submitVR operation (contains mapping and logic description)TriggersDescribed in the separated sub-pages for each process.Dependent componentsDescribed in the separated sub-pages for each process." + }, + { + "title": "Create DCR", + "pageID": "209949185", + "pageLink": "/display/GMDM/Create+DCR", + "content": "DescriptionThe process creates change requests received from PforceRx Client and sends the DCR to the specified target service - Reltio, OneKey or Veeva OpenData (VOD). DCR is created in the system and then processed by the data stewards. The status is asynchronously updated by the HUB processes, Client represents the DCR using a unique extDCRRequestId value. Using this value Client can check the status of the DCR (Get DCR status). Flow diagramSource: LucidSource: LucidDCR Service component perspective StepsClients execute the API POST /dcr requestKong receives requests and handles authenticationIf the authentication succeeds the request is forwarded to the dcr-service-2 component,DCR Service checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executed:Parse and validate the dcr request. 
The validation logic checks the following: Check if the list of DCRRequests contains unique extDCRRequestId values.Duplicate requests will be rejected with the error message - "Found duplicated request(s)"For each DCRRequest in the input list execute the following checks:Users can define the following numbers of entities in the Request:at least one entity has to be defined, otherwise the request will be rejected with an error message - "No entities found in the request"single HCPsingle HCOsingle HCP with single HCOtwo HCOsCheck if the main reference objects exist in Reltio for the update and delete actionsHCP.refId or HCO.refId, the user has to specify one of:CrosswalkTargetObjectId - then the entity is retrieved from Reltio using the get entity by crosswalk operationEntityURITargetObjectId - then the entity is retrieved from Reltio using the get entity by uri operationCOMPANYCustomerIdTargetObjectId - then the entity is retrieved from Reltio using the search operation by the COMPANYGlobalCustomerIDAttributes validation:Simple attributes - like firstName/lastName etc.for the update action on the main object:if the input parameter is defined with an empty value - "" - this will result in the removal of the target attributeif the input parameter is defined with a non-empty value - this will result in the update of the target attributeNested attributes - like Specialties/Addresses etc.for each attribute, the user has to define the refId to uniquely identify the attributeFor action "update" - if the refId is not found in the target object the request will be rejected with a detailed error message For action "insert" - the refId is not required - a new reference attribute will be added to the target objectChanges validation:If the validation detected 0 changes (when comparing the applied changes with the target entity) - the request is rejected with an error message - "No changes detected"Evaluate dcr service (based on the decision table config)The following decision table is defined to choose 
the target serviceLIST OF the following combination of attributes:attributedescriptionuserName the user name that executes the requestsourceNamethe source name of the Main objectcountrythe county defined in the requestoperationTypethe operation type for the Main object{ insert, update, delete }affectedAttributesthe list of attributes that the user is changingaffectedObjects{ HCP, HCO, HCP_HCO }RESULT →  TargetType {Reltio, OneKey, Veeva}Each attribute in the configuration is optional. The decision table is making the validation based on the input request and the main object- the main object is HCP, if the HCP is empty then the decision table is checking HCO. The result of the decision table is the TargetType, the routing to the Reltio MDM system, OneKey or Veeva service. Execute target service (reltio/onekey/veeva)Reltio: create DCR method - directOneKey: create DCR method (submitVR) - directVeeva: create DCR method (storeVR)Create DCR in Reltio and save DCR in DCR Registry If the submission is successful then: DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)for "create" and "delete" operation the Relation have to be created between objectsif this is just the "insert" operation the Relation will be created after the acceptance of the Change Request in Reltio - Reltio: process DCR Change EventsDCR entity attributes once sent to OneKeyDCR entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"SENT_TO_OK"CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: com.COMPANY.mdm.api.dcr2.HCPDCR entity attributes once sent to VeevaDCR entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity 
URIVRStatus"OPEN"VRStatusDetail"SENT_TO_VEEVA"CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: com.COMPANY.mdm.api.dcr2.HCPDCR entity attributes once sent to Reltio → action is passed to DS and workflow is started. DCR entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"DS_ACTION_REQUIRED"CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: com.COMPANY.mdm.api.dcr2.HCPMongo Update: DCRRequest.status is updated to SENT with OneKey or Veeva request and response details, or to DS_ACTION_REQUIRED with all Reltio detailsOtherwise FAILED status is recorded in DCRRequest with a detailed error message.Mongo Update: DCRRequest.status is updated to FAILED with all required attributes, request, and exception response details Initialize Workflow in Reltio (only for requests whose TargetType is Reltio)POST /workflow/_initiate operation is invoked to initialize a new Workflow in ReltioWorkflow attributesMappingchangeRequest.uriChangeRequest Reltio URIchangeRequest.changesEntity URIThen the Auto close logic is invoked to evaluate whether the DCR request meets the conditions to be auto accepted or auto rejected. The logic is based on the decision table PreCloseConfig. If DCRRequest.country is contained in PreCloseConfig.acceptCountries or PreCloseConfig.rejectCountries then the DCR is accepted or rejected respectively. return DCRResponse to Client - During the flow, a DCRResponse may be returned to the Client with the specific errorCode or requestStatus. 
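The auto-close step based on PreCloseConfig can be sketched as a simple country lookup. The field names follow the text (acceptCountries, rejectCountries); the class itself and the fallback status are illustrative, not the HUB implementation:

```java
import java.util.Set;

// Sketch of the PreClose logic described above: after the workflow is started,
// a DCR whose country is in acceptCountries is auto-accepted (PRE_ACCEPTED) and
// one in rejectCountries auto-rejected (PRE_REJECTED); otherwise it stays with
// the Data Steward. Assumed helper, for illustration only.
public class PreClose {
    static String decide(String country, Set<String> acceptCountries, Set<String> rejectCountries) {
        if (acceptCountries.contains(country)) return "PRE_ACCEPTED";
        if (rejectCountries.contains(country)) return "PRE_REJECTED";
        return "DS_ACTION_REQUIRED"; // leave the DCR to the Data Steward
    }
}
```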
The description for all response codes is presented on this page: Get DCR statusTriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcrcreate DCRs in the Reltio, OneKey or Veeva systemAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow implementationOK DCR ServiceOneKey Adapter - API operationsVeeva DCR ServiceVeeva Adapter - API operations and S3/SFTP communication ManagerReltio Adapter - API operationsHub StoreDCR and Entities Cache " + }, + { + "title": "DCR state change", + "pageID": "218438617", + "pageLink": "/display/GMDM/DCR+state+change", + "content": "DescriptionThe following diagram represents the DCR state changes. DCR object stat is saved in HUB and in Reltio DCR entity object. The state of the DCR is changed based on the Reltio/IQVIA/Veeva Data Steward action.Flow diagramStepsDCR is created (OPEN)  - Create DCRDCR is sent to Reltio, OneKey or VeevaWhen sent to ReltioPre Close logic is invoked to auto accept (PRE_ACCEPT) or auto reject (PRE_REJECT) DCRReltio Data Steward process the DCR - Reltio: process DCR Change EventsOneKey Data Steward process the DCR - OneKey: process DCR Change EventsVeeva Data Steward process the DCR - Veeva: process DCR Change EventsData Steward DCR status change perspectiveTransaction LogThere are the following main assumptions regarding the transaction log in DCR service: Main transaction The user sends to the DCR service list of the DCR Requests and receives the list of the DCR ResponsesTransaction service generates the transaction ID for the input request - this is used as the correlation ID for each separated DCR Request in the listTransaction service save:METADATAmain transaction IDuserNameextDCRRequestIds (list of all) BODYthe DCR Requests list and the DCR Response ListState change transactionDCR object state may change depending on the DS decision, for each state change (represented as a green box in the above diagram) the transaction is 
saved with the following attributes:Transaction METADATAmain transaction IDextDCRRequestIddcrRequestIdReltio:VRStatusVRStatusDetailHUB:DCRRequestStatusDetailsoptionally:errorMessageerrorCodeTransaction BODY:Input EventLog appenders:Kafka Transaction appender - saves whole events(metadata+body) to Kafka - data presented in the Kibana Dashboard Simple Transaction logger - saves the transactions details to the file in the following format:{ID}    {extDCRRequestId}   {dcrRequestId}   {VRStatus}   {VRStatusDetail}   {DCRRequestStatusDetails}   {errorCode}   {errorMessage}TriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcrcreate DCRs in the Reltio system or in OneKeyAPI synchronous requests - realtimeIN Events incoming dcr-service-2:DCRReltioResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing IN Events incoming dcr-service-2:DCROneKeyResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing IN Events incoming dcr-service-2:DCRVeevaResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing Dependent componentsComponentUsageDCR ServiceMain component with flow implementationOK DCR ServiceOneKey Adapter  - API operationsVeeva DCR ServiceVeeva Adapter  - API operationsManagerReltio Adapter  - API operationsHub StoreDCR and Entities Cache " + }, + { + "title": "Get DCR status", + "pageID": "209949187", + "pageLink": "/display/GMDM/Get+DCR+status", + "content": "DescriptionThe client creates DCRs in Reltio, OneKey or Veeva OpenData using the Create DCR operation. The status is then asynchronously updated in the DCR Registry. The operation retrieves the current status of the DCRs that the updated date is between 'updateFrom' and 'updateTo' input parameters. 
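The status lookup described here can be sketched as a paginated query over the DCR registry. This is an illustrative in-memory Java version (the real service queries Mongo); the `DcrEntry` record is an assumed type, and the limit cap of 50 mirrors the documented maximum:

```java
import java.time.Instant;
import java.util.Comparator;
import java.util.List;

// Illustrative in-memory version of the GET /dcr/_status lookup: filter the DCR
// registry by status change date between updateFrom/updateTo (inclusive), then
// page with limit/offset. Types and sorting are assumptions for the sketch.
public class StatusQuery {
    record DcrEntry(String extDCRRequestId, Instant changeDate) {}

    static List<DcrEntry> query(List<DcrEntry> registry,
                                Instant updateFrom, Instant updateTo,
                                int limit, int offset) {
        return registry.stream()
                .filter(d -> !d.changeDate().isBefore(updateFrom)
                        && !d.changeDate().isAfter(updateTo))
                .sorted(Comparator.comparing(DcrEntry::changeDate))
                .skip(offset)
                .limit(Math.min(limit, 50)) // enforce the documented max of 50
                .toList();
    }
}
```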
PforceRx first asks what DCRs have been changed since last time they checked (usually 24h) and then iterate for each DCR they get detailed info.Flow diagram,Source: LucidDependent flows:The DCRRegistry is enriched by the DCR events that are generated by Reltio - the flow description is here - Reltio: process DCR Change EventsThe DCRRegistry is enriched by the DCR events generated in OneKey DCR service component - after submitVR operation is invoked to ONEKEY, each DCR is traced asynchronously in this process - OneKey: process DCR Change EventsThe DCRRegistry is enriched by the DCR events generated in Veeva OpenData DCR service component - after submitVR operation is invoked to VEEVA, each DCR is traced asynchronously in this process - Veeva: process DCR Change EventsStepsStatusThere are the following request statuses that users may receive during Create DCR operation or during checking the updated status using GET /dcr/_status operation described below:RequestStatusDCRStatus Internal Cache statusDescriptionREQUEST_ACCEPTEDCREATEDSENT_TO_OKDCR was sent to the ONEKEY system for validation and pending the processing by Data Steward in the systemREQUEST_ACCEPTEDCREATEDSENT_TO_VEEVADCR was sent to the VEEVA system for validation and pending the processing by Data Steward in the systemREQUEST_ACCEPTEDCREATEDDS_ACTION_REQUIREDDCR is pending Data Steward validation in Reltio, waiting for approval or rejectionREQUEST_ACCEPTEDCREATEDOK_NOT_FOUNDUsed when ONEKEY profile was not found after X retriesREQUEST_ACCEPTEDCREATEDVEEVA_NOT_FOUNDUsed when VEEVA profile was not found after X retriesREQUEST_ACCEPTEDCREATEDWAITING_FOR_ETL_DATA_LOADUsed when waiting for actual data profile load from 3rd Party to appear in ReltioREQUEST_ACCEPTEDACCEPTEDACCEPTEDData Steward accepted the DCR, changes were appliedREQUEST_ACCEPTEDACCEPTEDPRE_ACCEPTEDPreClose logic was invoked and automatically accepted DCR according to decision table in PreCloseConfigREQUEST_REJECTEDREJECTED REJECTEDData 
Steward rejected the changes presented in the Change RequestREQUEST_REJECTEDREJECTED PRE_REJECTEDPreClose logic was invoked and automatically rejected DCR according to decision table in PreCloseConfigREQUEST_FAILED-FAILEDDCR requests failed due to: validation error/ unexpected error e.t.d - details in the errorCode and errorMessageError codes:There are the following classes of exception that users may receive during Create DCR operation:ClasserrorCodeDescriptionHTTP code1DUPLICATE_REQUESTrequest rejected - extDCRRequestId  is registered - this is a duplicate request4032NO_CHANGES_DETECTEDentities are the same (request is the same) - no changes4003VALIDATION_ERRORref object does not exist (not able to find HCP/HCO target object4043VALIDATION_ERRORref attribute does not exist - not able to find nested attribute in the target object4003VALIDATION_ERRORwrong number of HCP/HCO entities in the input request400Clients execute the API GET/dcr/_status requestKong receives requests and handles authenticationIf the authentication succeeds the request is forwarded to the dcr-service-2 component,DCR Service checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executedQuery on mongo is executed to get all DCRs matching input parameters:updateFrom (date-time) - DCR last update from - DCRRequestDetails.status.changeDateupdateTo (date-time) - DCR last update to - DCRRequestDetails.status.changeDatelimit (int) the maximum number of results returned through API - the recommended value is 25. The max value for a single request is 50.offset(int) - result offset - the parameter used to query through results that exceeded the limit. Resulted values are aggregated and returned to the Client.The client receives the List body.TriggersTrigger actionComponentActionDefault timeREST callDCR Service: GET/dcr/_statusget status of created DCRs. 
Limit the results using query parameters like dates and offsetAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub StoreDCR and Entities Cache " + }, + { + "title": "OneKey: create DCR method (submitVR) - direct", + "pageID": "209949294", + "pageLink": "/display/GMDM/OneKey%3A+create+DCR+method+%28submitVR%29+-+direct", + "content": "DescriptionRest API method exposed in the OK DCR Service component responsible for submitting the VR to OneKeyFlow diagramStepsReceive the API requestValidate - check if the onekey crosswalk exists once there is an update on the profile, otherwise reject the requestThe DCR is mapped to OK VR Request and it's submitted using API REST method POST /vr/submit. (mapping described below)If the submission is successful then:DCRRequesti updated to SENT_TO_OK with OK request and response details. DCRRegistryONEKEY collection in saved for tracing purposes. The process that reads and check ONEKEY VRs is described here: OneKey: generate DCR Change Events (traceVR)Otherwise FAILED status is recorded and the response is returned with an OK error responseMappingVR - Business Fields Requirements_UK.xlsx - file that contains VR UK requirements and mapping to IQVIA modelHUBONEKEYattributesattributescodesmandatoryattributesvaluesHCOYentityTypeWORKPLACEYvalidation.clientRequestIdHUB_GENERATED_IDYvalidation.processQYvalidation.requestDate1970-01-01T00:00ZYvalidation.callDate1970-01-01T00:00ZattributesYvalidation.requestProcessIextDCRCommentvalidation.requestCommentcountryYisoCod2reference EntitycrosswalkONEKEYworkplace.workplaceEidnameworkplace.usualNameworkplace.officialNameotherHCOAffiliationsparentUsualNameworkplace.parentUsualNamesubTypeCodeCOTFacilityType(TET.W.*)workplace.typeCodetypeCodeno value in 
PFORCERXHCOSubType(LEX.W.*)workplace.activityLocationCodeaddressessourceAddressIdN/AaddressTypeN/AaddressLine1address.longLabeladdressLine2address.longLabel2addressLine3N/AstateProvinceAddressState(DPT.W.*)address.countyCodecityYaddress.cityzipaddress.longPostalCodecountryYaddress.countryrankget address with rank=1 emailstypeN/Aemailworkplace.emailrankget email with rank=1 otherHCOAffiliationstypeN/Arankget affiliation with rank=1 reference EntityotherHCOAffiliations reference entity onekeyID ONEKEYworkplace.parentWorkplaceEidphonestypecontains FAXnumberworkplace.telephonerankget phone with rank=1 typenot contains FAXnumberworkplace.faxrankget phone with rank=1 HCPYentityTypeACTIVITYYvalidation.clientRequestIdHUB_GENERATED_IDYvalidation.processQYvalidation.requestDate1970-01-01T00:00ZYvalidation.callDate1970-01-01T00:00ZattributesYvalidation.requestProcessIextDCRCommentvalidation.requestCommentcountryYisoCod2reference EntitycrosswalkONEKEYindividual.individualEidfirstNameindividual.firstNamelastNameYindividual.lastNamemiddleNameindividual.middleNametypeCodeN/AsubTypeCodeHCPSubTypeCode(TYP..*)individual.typeCodetitleHCPTitle(TIT.*)individual.titleCodeprefixHCPPrefix(APP.*)individual.prefixNameCodesuffixN/AgenderGender(.*)individual.genderCodespecialtiestypeCodeHCPSpecialty(SP.W.*)individual.speciality1typeN/Arankget speciality with rank=1 typeCodeHCPSpecialty(SP.W.*)individual.speciality2typeN/Arankget speciality with rank=2 typeCodeHCPSpecialty(SP.W.*)individual.speciality3typeN/Arankget speciality with rank=3 addressessourceAddressIdN/AaddressTypeN/AaddressLine1address.longLabeladdressLine2address.longLabel2addressLine3N/AstateProvinceAddressState(DPT.W.*)address.countyCodecityYaddress.cityzipaddress.longPostalCodecountryYaddress.countryrankget address with rank=1 identifierstypeN/AidN/AphonestypeN/Anumberindividual.mobilePhonerankget phone with rank=1 emailstypeN/Aemailindividual.emailrankget phone with rank=1 contactAffiliationsno value in 
PFORCERXtypeRoleType(TIH.W.*)activity.roleprimaryN/Arankget affiliation with rank=1 contactAffiliations reference EntitycrosswalksONEKEYworkplace.workplaceEidHCP & HCOYentityTypeACTIVITYFor HCP full mapping check the HCP section aboveYvalidation.clientRequestIdHUB_GENERATED_IDFor HCO full mapping check the HCO section aboveYvalidation.processQYvalidation.requestDate1970-01-01T00:00ZYvalidation.callDate1970-01-01T00:00ZattributesYvalidation.requestProcessIextDCRCommentvalidation.requestCommentcountryYisoCod2addressesIf the HCO address exists map to ONEKEY addressaddress (mapping HCO)elseIf the HCP address exists map to ONEKEY addressaddress (mapping HCP)contactAffiliationsno value in PFORCERXtypeRoleType(TIH.W.*)activity.roleprimaryN/Arankget affiliation with rank=1 TriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcrcreate DCRs in the ONEKEYAPI synchronous requests - realtimeDependent componentsComponentUsageDCR Service 2Main component with flow implementationHub StoreDCR and Entities Cache " + }, + { + "title": "OneKey: generate DCR Change Events (traceVR)", + "pageID": "209950500", + "pageLink": "/pages/viewpage.action?pageId=209950500", + "content": "DescriptionThis process is triggered after the DCR was routed to Onekey based on the decision table configuration. The process of tracing the VR changes is based on the OneKey VR changes. During this process HUB, DCR Cache is triggered every hour for SENT DCR's and check VR status using OneKey web service. After verification, the DCR Change event is generated. 
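The mapping this trace job applies from the OneKey responseStatus to a generated event can be sketched as follows; the helper is illustrative, but the status values come straight from the flow description:

```java
// Sketch of the status mapping described in this flow: terminal OneKey response
// statuses produce a REJECTED or ACCEPTED DCR Change event; any other value
// leaves the VR open for the next hourly run. Assumed helper, for illustration.
public class TraceOutcome {
    static String toEventStatus(String responseStatus) {
        return switch (responseStatus) {
            case "VAS_NOT_FOUND", "VAS_INCOHERENT_REQUEST", "VAS_DUPLICATE_PROCESS" -> "REJECTED";
            case "VAS_FOUND", "VAS_FOUND_BUT_INVALID" -> "ACCEPTED";
            default -> null; // not terminal yet: re-check on the next cron run
        };
    }
}
```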
The DCR event is processed in the OneKey: process DCR Change Events flow and the DCR is updated in Reltio with Accepted or Rejected status.Flow diagramStepsEvery N hours OK VR requests with status SENT are queried in the DCRRegistryONEKEY store.For each open request, its status is checked in OK using REST API method /vr/traceThe first check is the VR.rsp.status attribute, checking if the status is SUCCESSNext, if the process status (VR.rsp.results.processStatus) is REQUEST_PENDING_OKE | REQUEST_PENDING_JMS | REQUEST_PROCESSED or the OK data export date (VR.rsp.results.trace6CegedimOkcExportDate) is less than 24 hours old then the processing of the request is postponed to the next checkexportDate or processStatus are optional and can be null.The process goes to the next step only if processStatus is REQUEST_RESPONDED | RESPONSE_SENTThe process is postponed to the next check only if trace6CegedimOkcExportDate is not null and is less than 24 hours oldIf the processStatus is validated and VR.rsp.results.responseStatus is VAS_NOT_FOUND | VAS_INCOHERENT_REQUEST | VAS_DUPLICATE_PROCESS then a OneKeyDCREvent is generated with status REJECTEDOneKeyChangeRequest attributesMappingvrStatus"CLOSED"vrStatusDetail"REJECTED"traceResponseReceivedDatecurrent timeoneKeyCommentOK.responseCommentNext, if responseStatus is VAS_FOUND | VAS_FOUND_BUT_INVALID then a OneKeyDCREvent is generated with status ACCEPTED. (now the new ONEKEY profile will be loaded to Reltio using the ETL data load. 
The OneKey: process DCR Change Events flow processes these events and checks Reltio to see if the ONEKEY profile is created and COMPANYCustomerGlobalId is assigned; this process will wait until the ONEKEY profile is in Reltio, so the client receives the ACCEPTED DCR only after this condition is met) DCR entity attributesMappingvrStatus"CLOSED"vrStatusDetail"ACCEPTED"traceResponseReceivedDatecurrent timeoneKeyCommentOK.responseComment \\nONEKEY ID = individualEidValidated or workplaceEidValidatedevents are published to the $env-internal-onekey-dcr-change-events-in topicEvent Modeldata class OneKeyDCREvent(val eventType: String? = null, val eventTime: Long? = null, val eventPublishingTime: Long? = null, val countryCode: String? = null, val dcrId: String? = null, val targetChangeRequest: OneKeyChangeRequest,)data class OneKeyChangeRequest( val vrStatus : String? = null, val vrStatusDetail : String? = null, val oneKeyComment : String? = null, val individualEidValidated : String? = null, val workplaceEidValidated : String? = null, val vrTraceRequest : String? = null, val vrTraceResponse : String? = null,)TriggersTrigger actionComponentActionDefault timeIN Timer (cron)dcr-service:TraceVRServicequery mongo to get all SENT DCRs related to the PFORCERX processevery hourOUT Eventsdcr-service:TraceVRServicegenerate the OneKeyDCREventevery hourDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub StoreDCR and Entities Cache " + }, + { + "title": "OneKey: process DCR Change Events", + "pageID": "209949303", + "pageLink": "/display/GMDM/OneKey%3A+process+DCR+Change+Events", + "content": "\n\n\n\nDescriptionThe process updates the DCRs based on the Change Request events received from [ONEKEY|VOD] (after the trace VR method result). Based on the [IQVIA|VEEVA] Data Steward decision, the state attribute contains the relevant information to update the DCR status. 
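The ACCEPTED branch of this flow (close only once the validated profile is visible in Reltio, otherwise re-queue and mark OK_NOT_FOUND) can be sketched like this; the `Outcome` record and handler name are assumptions for illustration:

```java
// Sketch of the ACCEPTED-branch decision described in this flow: the DCR is
// closed only once the validated [ONEKEY|VOD] profile exists in Reltio;
// otherwise the event is regenerated with a new timestamp and the registry
// status becomes OK_NOT_FOUND until the ETL load catches up. Illustrative only.
public class AcceptedHandler {
    record Outcome(String registryStatus, boolean requeue) {}

    static Outcome onAccepted(boolean profileCrosswalkInReltio) {
        if (profileCrosswalkInReltio)
            return new Outcome("ACCEPTED", false);   // close DCR, return COMPANYGlobalCustomerId
        return new Outcome("OK_NOT_FOUND", true);    // wait for the ETL load, retry later
    }
}
```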
During this process also the comments created by IQVIA DS are retrieved and the relationship (optional step) between the DCR object and the newly created entity is created. DCR status is accepted only after the [ONEKEY|VOD] profile is created in Reltio, only then the Client will receive the ACCEPTED status. The process is checking Reltio with delay and retries if the ETL load is still in progress waiting for [ONEKEY|VOD] profile. Flow diagram\n\n\n\n\n\nOneKey variant\n\n\n\nVeeva variant: \n\n\n\n\n\nStepsOneKey: generate DCR Change Events (traceVR) publishes simple events to $env-internal-onekey-dcr-change-events-in: DCR_CHANGEDVeeva specific: Veeva: generate DCR Change Events (traceVR) publishes simple events to $env-internal-veeva-dcr-change-events-in: DCR_CHANGEDEvents are aggregated in a time window (recommended the window length 24 hours) and the last event is returned to the process after the window is closed.Events are processed in the Stream and based on the OneKeyDCREvent.OneKeyChangeRequest.vrStatus | VeevaDCREvent.VeevaChangeRequestDetails.vrStatus attribute decision is madeDCR is retrieved from the cache based on the _id of the DCRIf the event state is ACCEPTEDGet Reltio entity COMPANYCustomerID by [ONEKEY|VOD] crosswalkIf such crosswalk entity exists in Reltio:COMPANYGlobalCustomerId is saved in Registry and will be returned to the Client During the process, the optional check is triggered - create the relation between the DCR object and newly created entitiesif DCRRegistry contain an empty list of entityUris, or some of the newly created entity is not present in the list, the Relation between this object and the DCR has to be createdDCR entity is updated in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk. 
type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)Newly created entity uris should be retrieved by the individualEidValidated or workplaceEidValidated (it may be both) attributes from the events that represent the HCP or HCO crosswalks.The status in Reltio and in Mongo is updatedDCR entity attributesMapping for OneKeyMapping for VeevaVRStatusCLOSEDVRStatusDetailstate: ACCEPTEDCommentsONEKEY comments ({VR.rsp.responseComments})ONEKEY ID = individualEidValidated or workplaceEidValidatedVEEVA comments = VR.rsp.responseCommentsVEEVA ID = entityUrisCOMPANYGlobalCustomerIdThis is required in ACCEPTED status If the [ONEKEY|VOD] does not exist in ReltioRegenerate the Event with a new timestamp to the input topic so it will be processed in the following hoursUpdate the Reltio DCR statusDCR entity attributesMappingVRStatusOPENVRStatusDetailACCEPTEDupdate the Mongo status to the OK_NOT_FOUND | VEEVA_NOT_FOUND and increase the "retryCounter" attributeIf the event state is REJECTEDIf a Reltio DS has already seen this request, REJECT the DCR and end the flow (if the initial target type is Reltio)The status in Reltio and in Mongo is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: REJECTEDComments[ONEKEY|VOD] comments ({VR.rsp.responseComments})If this is based on the routing table and it was never sent to the Reltio DS, then create the DCR workflow and send this to the Reltio DS. Add the information comment that this was Rejected by the OneKey, so now the Reltio DS has to decide if this should be REJECTED or APPLIED in Reltio. Add the comment that it is not possible to execute the sendTo3PartyValidation button in this case. 
Steps:Check if the initial target type is [ONEKEY|VOD]Use the DCR Request that was initially received from PforceRx and is a Domain Model request (after validation) Send the DCR to Reltio; the service returns one of the following responses:ACCEPTED (change request accepted by Reltio)update the status to DS_ACTION_REQUIERED and in the comment add the following: "This DCR was REJECTED by the [ONEKEY|VOD] Data Steward with the following comment: <[ONEKEY|VOD] reject comment>. Please review this DCR in Reltio and APPLY or REJECT. It is not possible to execute the sendTo3PartyValidation button in this case"initialize a new Workflow in Reltio with the comment.save the data in the DCR entity status in Reltio and update the Mongo DCR Registry with the workflow ID and other attributes that were used in this Flow.REJECTED (failure or error response from Reltio)CLOSE the DCR with the information that the DCR was REJECTED by the [ONEKEY|VOD] and Reltio also REJECTED the DCR. Add the error message from both systems in the comment. 
TriggersTrigger actionComponentActionDefault timeIN Events incoming dcr-service-2:DCROneKeyResponseStreamdcr-service-2:DCRVeevaResponseStream ($env-internal-veeva-dcr-change-events-in)process the publisher's full change request events in the streamrealtime: events stream processing Dependent componentsComponentUsageDCR Service 2Main component with flow implementationManagerReltio Adapter  - API operationsPublisherEvents publisher generates incoming eventsHub StoreDCR and Entities Cache \n\n\n" + }, + { + "title": "Reltio: create DCR method - direct", + "pageID": "209949292", + "pageLink": "/display/GMDM/Reltio%3A+create+DCR+method+-+direct", + "content": "DescriptionRest API method exposed in the Manager component responsible for submitting the Change Request to ReltioFlow diagramStepsReceive the DCR request generated by the DCR Service 2 componentDepending on the Action, execute the method in the Manager component:insert - Execute the standard Create/Update HCP/HCO/MCO operation with the additional changeRequest.id parameterupdate - Execute the Update Attributes operation with the additional changeRequest.id parameterthe combination of IGNORE_ATTRIBUTE &amp; INSERT_ATTRIBUTE when updating an existing attribute in Reltiothe INSERT_ATTRIBUTE when adding a new attribute to Reltiodelete - Execute the Update Attribute operation with the additional changeRequest.id parameterthe UPDATE_END_DATE on the entity to inactivate this profileBased on the Reltio response the DCR Response is returned:REQUEST_ACCEPTED - Reltio processed the request successfully REQUEST_FAILED - Reltio returned an exception; the Client will receive the detailed description in the errorMessageTriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcr2Create change Requests in ReltioAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub StoreDCR and Entities Cache " + }, + { + "title": "Reltio: process DCR Change Events", + "pageID": "209949300", + 
"pageLink": "/display/GMDM/Reltio%3A+process+DCR+Change+Events", + "content": "DescriptionThe process updates the DCRs based on the Change Request events received from Reltio (publishing). Based on the Data Steward decision, the state attribute contains the information relevant to update the DCR status. During this process, the comments created by the DS are also retrieved and the relationship (optional step) between the DCR object and the newly created entity is created.Flow diagramStepsThe Event publisher publishes simple events to $env-internal-reltio-dcr-change-events-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_REMOVED("CHANGE_REQUEST_REMOVED")When the events do not contain the ThirdPartyValidation flag, it means that the DS APPLIED or REJECTED the DCR, and the following logic is appliedEvents are processed in the Stream and a decision is made based on the targetChangeRequest.state attributeIf the state is APPLIED or REJECTED, the DCR is retrieved from the cache based on the changeRequestURIIf the DCR exists in the Cache, the status in Reltio is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: APPLIED → ACCEPTEDstate: REJECTED → REJECTEDOtherwise, the events are rejected and the transaction is endedThe COMPANYCustomerGlobalId is retrieved for newly created entities in Reltio based on the main entity URI.During the process, the optional check is triggered - create the relation between the DCR object and newly created entitiesif DCRRegistry contains an empty list of entityUris, or some of the newly created entities are not present in the list, the Relation between this object and the DCR has to be createdThe DCR entity is updated in Reltio along with the relation between the processed entity and the DCR entityReltio source name (crosswalk. 
type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)The comments added by the DataSteward during the processing of the Change request are retrieved using the following operation:GET /tasks?objectURI=entities/The processInstanceComments is retrieved from the response and added to DCRRegistry.changeRequestComment Otherwise, when the events contain the ThirdPartyValidation flag, it means that the DS decided to send the DCR to IQVIA or VEEVA for validation, and the following logic is applied:If the current targetType is ONEKEY | VEEVAREJECT the DCR and add the comment on the DCR in Reltio that "DCR was already processed by [ONEKEY|VEEVA] Data Stewards, REJECT because it is not allowed to send this DCR one more time to [IQVIA|VEEVA]"If the current targetType is Reltio, it means that we can send this DCR to [IQVIA|VEEVA] for validation Use the DCR Request that was initially received from PforceRx and is a Domain Model request (after validation)Execute the POST /dcr method in the [ONEKEY|VEEVA] DCR Service; the service returns one of the following responses:ACCEPTED - update the status to [SENT_TO_OK|SENT_TO_VEEVA]REJECTED - it means that some unexpected exception occurred in [ONEKEY|VEEVA], or the request was rejected by [ONEKEY|VEEVA], or the ONEKEY crosswalk does not exist in Reltio, and the [ONEKEY|VEEVA] service rejected this requestVeeva specific: When the VOD crosswalk does not exist in Reltio, the current version of the profile is sent to Veeva for validation, independently of the initial changes which were incorporated within the DCRTriggersTrigger actionComponentActionDefault timeIN Events incoming dcr-service-2:DCRReltioResponseStreamprocess the publisher's full change request events in the streamrealtime: events stream processing Dependent componentsComponentUsageDCR ServiceDCR Service 2Main component with flow implementationManagerReltio Adapter  - API operationsPublisherEvents publisher generates incoming eventsHub StoreDCR and Entities Cache " + }, + { + "title": "Reltio: 
Profiles created by DCR", + "pageID": "510266969", + "pageLink": "/display/GMDM/Reltio%3A+Profiles+created+by+DCR", + "content": "DCR typeApproval/Reject Record visibility in MDMCrosswalk TypeCrosswalk ValueSourceDCR create for HCP/HCOApproved by OneKey/VODHCP/HCO created in MDMONEKEY|VODonekey id ONEKEY|VODApproved by DSRHCP/HCO created in MDMSystem source name from DCR (KOL_OneView, PforceRx, etc)DCR IDSystem source name from DCR (KOL_OneView, PforceRx, etc)DCR edit for HCP/HCOApproved by OneKey/VODHCP/HCO requested attribute updated in MDMONEKEY|VODONEKEY|VODApproved by DSRHCP/HCO requested attribute updated in MDMReltioentity uriReltioDCR edit for HCPaddress/HCO addressApproved by OneKey/VODNew address created in MDM, existing address marked as inactiveONEKEY|VODONEKEY|VODApproved by DSRNew address created in MDM, existing address marked as inactiveReltioentity uriReltio" + }, + { + "title": "Veeva DCR flows", + "pageID": "379332475", + "pageLink": "/display/GMDM/Veeva+DCR+flows", + "content": "DescriptionThe process is responsible for creating DCRs which are stored (Store VR) to be further transferred and processed by Veeva. Changes can be suggested by the DS using "Suggest" operation in Reltio and "Send to Third Party Validation" button. All DCRs are saved in the dedicated collection in HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. During this process, the communication to Veeva Opendata is established via S3/SFTP communication. SubmitVR operation is executed to create a new ZIP files with DCR requests spread across multiple CSV files. The TraceVR operation is executed to check if Veeva responded to initial DCR Requests via ZIP file placed Inbound S3 dir. The process is divided into 3 sections:Create DCR request - VeevaSubmit DCR Request - VeevaTrace Validation Request - VeevaThe below diagram presents an overview of the entire process. 
Detailed descriptions are available in the separate subpages.Business process diagram for R1 phaseFlow diagramStepsCreateVRThe process of saving DCR requests in the Mongo Cache after being triggered by DCR Service 2.DCR request information is translated to Veeva's model and stored in a dedicated collection for Veeva DCRs.SubmitVRThe process of submitting VRs stored in the Mongo Cache to Veeva's SFTP via an S3 bucket. The process aggregates events stored in the Mongo Cache since the last submit.A new ZIP is created with CSV files containing DCR requests for Veeva. The ZIP is placed in the outbound dir in the S3 bucket, which is further synchronized to Veeva's SFTP. Each DCR is updated with the ZIP file name which was used to transfer the request to Veeva.TraceVRThe process of tracing VRs is triggered every hour by the Spring Scheduler.The inbound S3 bucket is searched for ZIP files with CSVs containing DCR responses from Veeva. There are multiple dirs in the S3 buckets, each for a specific group of countries (currently CN and APAC).Parts of the DCR responses are spread across multiple files. The combined information is then processed.Finally, information about the DCR is updated in the Mongo Cache and events are produced to a dedicated topic for DCR Service 2 for further processing.TriggersDCR Service 2 is triggered via /dcr API calls which result from Data Steward actions (R1 phase) → &#34;Suggests 3rd party validation&#34;, which pushes the DCR from Reltio to HUB.Dependent componentsDescribed in the separate sub-pages for each process.Design document for HUB development Design → VeevaOpenData-implementation.docxReltio HUB-VOD mapping → VeevaOpenDataAPACDataDictionary.xlsxVOD model description (v4) → Veeva_OpenData_APAC_Data_Dictionary v4.xlsx" + }, + { + "title": "Create DCR request - Veeva", + "pageID": "386814533", + "pageLink": "/display/GMDM/Create+DCR+request+-+Veeva", + "content": "DescriptionThe process of creating new DCR requests to Veeva OpenData. 
During this process, new DCRs are created in the DCRregistryVeeva mongo collection.Flow diagramStepsThe service is called via the Rest APIThe input request is validated. If the request is invalid - return a response with status REJECTEDTransform the input request to the Veeva DCR modeltranslate lookup codes to Veeva source codesfill the Veeva DCR model with the input request valuesSave the DCR request to the DCRRegistryVeeva mongo collection with status NEWMappingsDCR domain model→ VOD mapping file: VeevaOpenDataAPACDataDictionary-mmor-mapping.xlsxVeeva integration guide" + }, + { + "title": "Submit DCR Request - Veeva", + "pageID": "379333348", + "pageLink": "/display/GMDM/Submit+DCR+Request+-+Veeva", + "content": "DescriptionThe process of submitting new validation requests to the Veeva OpenData service via the VeevaAdapter (communication with S3/SFTP) based on the DCRRegistryVeeva mongo collection. During this process, new DCRs are created in the VOD system.Flow diagramStepsVeeva DCR service flow:Every N hours, Veeva DCR requests with status NEW are queried in the DCRRegistryVeeva store.DCRs are grouped by countryFor each country:merge the Veeva DCR requests - create one zip file for each countryupload the zip file to the S3 locationupdate the DCR status to SENT if the upload status is successfulDCR entity attributesMappingDCRIDVeeva VR Request IdVRStatus&#34;OPEN&#34;VRStatusDetail&#34;SENT&#34;CreatedByMDM HUBSentDatecurrent timeSFTP integration service flow:Every N hours, grab all zip files from the S3 locationsUpload the files to the corresponding SFTP serverTriggersTrigger actionComponentActionDefault timeSpring schedulermdm-veeva-dcr-service:VeevaDCRRequestSenderprepare ZIP files for the VOD systemCalled every specified intervalDependent componentsComponentUsageVeeva adapterUpload DCR request to s3 location" + }, + { + "title": "Trace Validation Request - Veeva", + "pageID": "379333358", + "pageLink": "/display/GMDM/Trace+Validation+Request+-+Veeva", + "content": "DescriptionThe process of tracing the VR changes based on the Veeva VR responses. 
During this process, the HUB DCRRegistryVeeva Cache is queried every hour for SENT DCRs and the VR status is checked using the Veeva Adapter (s3/SFTP integration). After verification, a DCR event is sent to the DCR Service 2 Veeva response stream.Flow diagramStepsEvery N hours, get all Veeva DCR responses using the Veeva AdapterFor each response:check if the status is terminal - (CHANGE_ACCEPTED, CHANGE_PARTIAL, CHANGE_REJECTED, CHANGE_CANCELLED)if not - go to the next responsequery the DCRregistryVeeva mongo collection for a DCR with the given key and SENT statusget the Veeva ID (vid__v) from the response filegenerate a Veeva DCR change eventupdate the DCR status in the DCRRegistryVeeva mongo collectionresolution is CHANGE_ACCEPTED, CHANGE_PARTIALDCR entity attributesMappingVRStatus&#34;CLOSED&#34;VRStatusDetail&#34;ACCEPTED&#34;ResponseTimeveeva response completed dateCommentsveeva response resolution notesresolution is CHANGE_REJECTED, CHANGE_CANCELLEDDCR entity attributesMappingVRStatus&#34;CLOSED&#34;VRStatusDetail&#34;REJECTED&#34;ResponseTimeveeva response completed dateCommentsveeva response resolution notesTriggersTrigger actionComponentActionDefault timeIN Spring schedulermdm-veeva-dcr-service:VeevaDCRRequestTracestart the trace validation request processevery hourOUT Kafka topicmdm-dcr-service-2:VeevaResponseStreamupdate DCR status in Reltio, create relationsinvokes the Kafka producer for each veeva DCR responseDependent componentsComponentUsageDCR Service 2Process response event" + }, + { + "title": "Veeva: create DCR method (storeVR)", + "pageID": "379332642", + "pageLink": "/pages/viewpage.action?pageId=379332642", + "content": "DescriptionRest API method exposed in the Veeva DCR Service component, responsible for creating new DCR requests specific to Veeva OpenData (VOD) and storing them in a dedicated collection for further submission. Since VOD enables communication only via S3/SFTP, a dedicated mechanism is required to actually trigger CSV/ZIP file creation and file placement in the outbound directory. 
A periodic call to the Submit VR method will be scheduled once a day (with cron), which will in the end call the VeevaAdapter with the createChangeRequest method.Flow diagramStepsReceive the API requestValidate the initial requestcheck if the Veeva crosswalk exists when there is an update on the profileotherwise it's required to prepare a DCR to create a new Veeva profileIf any formal attribute is missing or incorrect: skip the requestThen the DCR is mapped to a Veeva Request by invoking the mapper between the HUB DCR → VEEVA model For mapping purposes the below mapping table should be used If there is no proper LOV mapping between HUB and Veeva, the default fallback should be set to a question mark → ?  Once a proper request has been created, it should be stored as a VeevaVRDetails entry in the dedicated DCRRegistryVeeva collection, to be ready to be actually sent via the Submit VR job and for future tracing purposesPrepare the return response for the initial API request with the below logicGenerate a success response after a successful mongo insert →  generateResponse(dcrRequest, RequestStatus.REQUEST_ACCEPTED, null, null)Generate an error response on validation failure or exception →  generateResponse(dcrRequest, RequestStatus.REQUEST_FAILED, getErrorDetails(), null);Mapping HUB DCR → Veeva model The below table does not contain all the attributes which are new in Reltio; only the most important ones are mentioned there.The file STTM Stats_SG_HK_v3.xlsx contains the full mapping requirements from Veeva OpenData to the Reltio data model. 
It does contain full data mapping which should be covered in target DCR process for VOD.ReltioHUBVEEVAAttribute PathDetailsDCR Request pathDetailsFile NameField NameRequired for Add Request?Required for Change Request?DescriptionReference (RDM/LOV)NOTEHCON/AMongo Generated ID for this DCR | Kafka KEYonce mapping from HUB Domain DCRRequest take this from DCRRequestD.dcrRequestId: String, // HUB DCR request id - Mongo ID - required in ONEKEY servicechange_requestdcr_keyYYCustomer's internal identifier for this requestChange Requests comments extDCRCommentchange_requestdescriptionYYRequester free-text comments explaining the DCRtargetChangeRequest.createdBycreatedBychange_requestcreated_byYYFor requestor identificationN/Aif new objects - ADD, if veeva ID CHANGEchange_requestchange_request_typeYYADD_REQUEST or CHANGE_REQUESTN/Adepends on suggested changes (check use-cases)main entity object type HCP or HCOchange_requestentity_typeYNHCP or HCOEntityTypeN/AMongo Generated ID for this DCR | Kafka KEYchange_request_hcodcr_keyYYCustomer's internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities.HCO.updateCrosswalk.type (Reltio)entities.HCO.updateCrosswalk.value (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_hcoentity_keyYYCustomer's internal HCO identifierCrosswalks - VEEVA crosswalkwhen update on VEEVAentities.HCO.updateCrosswalk.type (VEEVA)entities.HCO.updateCrosswalk.value (VEEVA ID)change_request_hcovid__vYNVeeva ID of existing HCO to update; if blank, the request will be interpreted as an add requestconfiguration/entityTypes/HCO/attributes/OtherNames/attributes/Namefirst elementTODO - add new attributechange_request_hcoalternate_name_1__vYN????change_request_hcobusiness_type__vYNHCOBusinessTypeTO BE CONFIRMEDconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityTypeHCO.subTypeCodechange_request_hcpmajor_class_of_trade__vNNCOTFacilityTypeIn PforceRx - Account Type, more info: \n MR-9512\n 
-\n Getting issue details...\n STATUS\n configuration/entityTypes/HCO/attributes/Namenamechange_request_hcocorporate_name__vNYconfiguration/entityTypes/HCO/attributes/TotalLicenseBedsTODO - add new attributechange_request_hcocount_beds__vNYconfiguration/entityTypes/HCO/attributes/Email/attributes/Emailemail with rank 1emailschange_request_hcoemail_1__vNNconfiguration/entityTypes/HCO/attributes/Email/attributes/Emailemail with rank 2change_request_hcoemail_2__vNNconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.FAX with best rankphoneschange_request_hcofax_1__vNNconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.FAX with worst rankchange_request_hcofax_2__vNNconfiguration/entityTypes/HCO/attributes/StatusDetailTODO - add new attributechange_request_hcohco_status__vNNHCOStatusconfiguration/entityTypes/HCO/attributes/TypeCodetypecodechange_request_hcohco_type__vNNHCOTypeconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with best rankphoneschange_request_hcophone_1__vNNconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with worst rankchange_request_hcophone_2__vNNconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with worst rankchange_request_hcophone_3__vNNconfiguration/entityTypes/HCO/attributes/CountryDCRRequest.countrychange_request_hcoprimary_country__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtyelements from COT 
specialtieschange_request_hcospecialty_1__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_10__vNNSpecialityconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_2__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_3__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_4__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_5__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_6__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_7__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_8__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_9__vNNconfiguration/entityTypes/HCO/attributes/Website/attributes/WebsiteURLfirst elementwebsiteURLchange_request_hcoURL_1__vNNconfiguration/entityTypes/HCO/attributes/Website/attributes/WebsiteURLN/AN/Achange_request_hcoURL_2__vNNHCP N/AMongo Generated ID for this DCR | Kafka KEYchange_request_hcpdcr_keyYYCustomer's internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities.HCO.updateCrosswalk.type (Reltio)entities.HCO.updateCrosswalk.value (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_hcpentity_keyYYCustomer's internal HCP identifierconfiguration/entityTypes/HCP/attributes/CountryDCRRequest.countrychange_request_hcpprimary_country__vYYCrosswalks - VEEVA crosswalkwhen update on VEEVAentities.HCO.updateCrosswalk.type (VEEVA)entities.HCO.updateCrosswalk.value (VEEVA 
ID)change_request_hcpvid__vNYconfiguration/entityTypes/HCP/attributes/FirstNamefirstNamechange_request_hcpfirst_name__vYNconfiguration/entityTypes/HCP/attributes/MiddlemiddleNamechange_request_hcpmiddle_name__vNNconfiguration/entityTypes/HCP/attributes/LastNamelastNamechange_request_hcplast_name__vYNconfiguration/entityTypes/HCP/attributes/NicknameTODO - add new attributechange_request_hcpnickname__vNNconfiguration/entityTypes/HCP/attributes/Prefixprefixchange_request_hcpprefix__vNNHCPPrefixconfiguration/entityTypes/HCP/attributes/SuffixNamesuffixchange_request_hcpsuffix__vNNconfiguration/entityTypes/HCP/attributes/Titletitlechange_request_hcpprofessional_title__vNNHCPProfessionalTitleconfiguration/entityTypes/HCP/attributes/SubTypeCodesubTypeCodechange_request_hcphcp_type__vYNHCPTypeconfiguration/entityTypes/HCP/attributes/StatusDetailTODO - add new attributechange_request_hcphcp_status__vNNHCPStatusconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/FirstNameTODO - add new attributechange_request_hcpalternate_first_name__vNNconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/LastNameTODO - add new attributechange_request_hcpalternate_last_name__vNNconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleNameTODO - add new attributechange_request_hcpalternate_middle_name__vNN??TODO - add new attributechange_request_hcpfamily_full_name__vNNTO BE CONFRIMEDconfiguration/entityTypes/HCP/attributes/DoBbirthYearchange_request_hcpbirth_year__vNNconfiguration/entityTypes/HCP/attributes/Credential/attributes/Credentialby rank 1TODO - add new attributechange_request_hcpcredentials_1__vNNTO BE CONFIRMEDconfiguration/entityTypes/HCP/attributes/Credential/attributes/Credential2TODO - add new attributechange_request_hcpcredentials_2__vNNIn reltio there is attribute but not usedconfiguration/entityTypes/HCP/attributes/Credential/attributes/Credential3TODO - add new attributechange_request_hcpcredentials_3__vNN                            
"uri": "configuration/entityTypes/HCP/attributes/Credential/attributes/Credential",configuration/entityTypes/HCP/attributes/Credential/attributes/Credential4TODO - add new attributechange_request_hcpcredentials_4__vNN                            "lookupCode": "rdm/lookupTypes/Credential",configuration/entityTypes/HCP/attributes/Credential/attributes/Credential5TODO - add new attributechange_request_hcpcredentials_5__vNNHCPCredentials                            "skipInDataAccess": false??TODO - add new attributechange_request_hcpfellow__vNNBooleanReferenceTO BE CONFRIMEDconfiguration/entityTypes/HCP/attributes/Gendergenderchange_request_hcpgender__vNNHCPGender?? Education ??TODO - add new attributechange_request_hcpeducation_level__vNNHCPEducationLevelTO BE CONFRIMEDconfiguration/entityTypes/HCP/attributes/Education/attributes/SchoolNameTODO - add new attributechange_request_hcpgrad_school__vNNconfiguration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduationTODO - add new attributechange_request_hcpgrad_year__vNN??change_request_hcphcp_focus_area_10__vNNTO BE CONFRIMED??change_request_hcphcp_focus_area_1__vNN??change_request_hcphcp_focus_area_2__vNN??change_request_hcphcp_focus_area_3__vNN??change_request_hcphcp_focus_area_4__vNN??change_request_hcphcp_focus_area_5__vNN??change_request_hcphcp_focus_area_6__vNN??change_request_hcphcp_focus_area_7__vNN??change_request_hcphcp_focus_area_8__vNN??change_request_hcphcp_focus_area_9__vNNHCPFocusArea??change_request_hcpmedical_degree_1__vNNTO BE CONFRIMED??change_request_hcpmedical_degree_2__vNNHCPMedicalDegreeconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyby rank from 1 to 
100specialtieschange_request_hcpspecialty_1__vYNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_10__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_2__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_3__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_4__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_5__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_6__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_7__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_8__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_9__vNNSpecialtyconfiguration/entityTypes/HCP/attributes/WebsiteURLTODO - add new attributechange_request_hcpURL_1__vNNADDRESSMongo Generated ID for this DCR | Kafka KEYchange_request_addressdcr_keyYYCustomer's internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities.HCP OR HCO.updateCrosswalk.type (Reltio)entities.HCP OR HCO.updateCrosswalk.value (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_addressentity_keyYYCustomer's internal HCO/HCP identifierattributes/Addresses/attributes/COMPANYAddressIDaddress.refIdchange_request_addressaddress_keyYYCustomer's internal address 
identifierattributes/Addresses/attributes/AddressLine1addressLine1change_request_addressaddress_line_1__vYNattributes/Addresses/attributes/AddressLine2addressLine2change_request_addressaddress_line_2__vNNattributes/Addresses/attributes/AddressLine3addressLine3change_request_addressaddress_line_3__vNNN/AN/AAchange_request_addressaddress_status__vNNAddressStatusattributes/Addresses/attributes/AddressTypeaddressTypechange_request_addressaddress_type__vYNAddressTypeattributes/Addresses/attributes/StateProvincestateProvincechange_request_addressadministrative_area__vYNAddressAdminAreaattributes/Addresses/attributes/Countrycountrychange_request_addresscountry__vYNattributes/Addresses/attributes/Citycitychange_request_addresslocality__vYYattributes/Addresses/attributes/Zip5zipchange_request_addresspostal_code__vYNattributes/Addresses/attributes/Source/attributes/SourceNameattributes/Addresses/attributes/Source/attributes/SourceAddressIDwhen VEEVA map VEEVA ID to sourceAddressIdchange_request_addressvid__vNYmap fromrelationTypes/OtherHCOtoHCOAffiliationsor relationTypes/ContactAffiliationsThis will be HCP.ContactAffiliation or HCO.OtherHcoToHCO affiliationMongo Generated ID for this DCR | Kafka KEYchange_request_parenthcodcr_keyYYCustomer's internal identifier for this requestHCO.otherHCOAffiliations.relationUriorHCP.contactAffiliations.relationUri (from Domain model)information about Reltio Relation IDchange_request_parenthcoparenthco_keyYYCustomer's internal identifier for this relationshipRELATION IDKEY entity_key from HCP or HCO (start object)change_request_parenthcochild_entity_keyYYChild Identifier in the HCO/HCP fileSTART OBJECT IDendObject entity uri mapped to refId.EntityURITargetObjectIdKEY entity_key from HCP or HCO (end object, by affiliation)change_request_parenthcoparent_entity_keyYYParent identifier in the HCO fileEND OBJECT IDchanges in Domain model mappingmap Reltion.Source.SourceName - VEEVAmap Relation.Source.SourceValue - VEEVA IDadd to Domain modelmap 
if relation is from VEEVA ID change_request_parenthcovid__vNYstart object entity type change_request_parenthcoentity_type__vYNattributes/RelationType/attributes/PrimaryAffiliationif is primaryTODO - add new attribute to otherHcoToHCOchange_request_parenthcois_primary_relationship__vNNBooleanReferenceHCO_HCO or HCP_HCOchange_request_parenthcohierarchy_type__vRelationHierarchyTypeattributes/RelationType/attributes/RelationshipDescriptiontype from affiliationbased on ContactAffliation or OtherHCOToHCO affiliationI think it will be 14-Emploted for HCP_HCOand 4-Manages for HCO_HCObut maybe we can map from affiliation.typechange_request_parenthcorelationship_type__vYNRelationTypeMongo collectionAll DCRs initiated by the dcr-service-2 API and to be sent to Veeva will be stored in Mongo in new collection DCRRegistryVeeva. The idea is to gather all DCRs requested by the client through the day and schedule ‘SubmitVR’ process that will communicate with Veeva adapter.Typical use case: Client requests 3 DCRs during the daySubmitVR contains the schedule that gathers all DCRs with NEW status created during the day and using VeevaAdapter to push requests to S3/SFTP.In this store we are going to keep both types of DCRs:\ninitiated by PforceRX - PFORCERX_DCR("PforceRxDCR")\ninitiated by Reltio SubmitVR - SENDTO3PART_DCR("ReltioSuggestedAndSendTo3PartyDCR");\nStore class idea:_id – this is the same ID that was assigned to DCR in dcr-service-2 VeevaVRDetails\n@Document("DCRRegistryVEEVA")\n@JsonIgnoreProperties(ignoreUnknown = true)\n@JsonInclude(JsonInclude.Include.NON_NULL)\ndata class VeevaVRDetails(\n    @JsonProperty("_id")\n    @Id\n    val id: String? = null,\n    val type: DCRType,\n    val status: DCRRequestStatusDetails,\n    val createdBy: String? = null,\n    val createTime: ZonedDateTime? = null,\n    val endTime: ZonedDateTime? = null,\n    val veevaRequestTime: ZonedDateTime? = null,\n    val veevaResponseTime: ZonedDateTime? 
= null,\n    val veevaRequestFileName: String? = null,\n    val veevaResponseFileName: String? = null,\n    val veevaResponseFileTime: ZonedDateTime? = null,\n    val country: String? = null,\n    val source: String? = null,\n    val extDCRComment: String? = null, // external DCR Comment (client comment)\n    val trackingDetails: List = mutableListOf(),\n\n    RAW FILE LINES mapped from DCRRequestD to Veeva model\n    val veevaRequest:\n            val change_request_csv: String,\n            val change_request_hcp_csv: String,\n            val change_request_hco_csv: List,\n            val change_request_address_csv: List,\n            val change_request_parenthco_csv: List,\n\n    RAW FILE LINES mapped from Veeva Response model\n    val veevaResponse:\n            val change_request_response_csv: String,\n            val change_request_response_hcp_csv: String,\n            val change_request_response_hco_csv: List,\n            val change_request_response_address_csv: List,\n            val change_request_response_parenthco_csv: List\n)\nMapping Reltio canonical codes → Veeva source codesThere are a couple of steps performed to find out a mapping from a canonical code in Reltio to a source code understood by VOD. The steps below are performed (in this order) until a code is found. Veeva Defaults Configuration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/defaultsThe purpose of this logic is to select one of possibly multiple source codes on the VOD end for a single code on the COMPANY side (1:N). The other scenario is when there is no actual source code for a canonical code on the VOD end (1:0), however this is usually covered by the fallback code logic.There are a couple of files, each containing source codes for a specific attribute. 
The ones related to HCO.Specialty and HCP.Specialty have logic which selects the proper code.Usually they are constructed as a three-column CSV: Country, Canonical Code, Source CodeFor a specific Country we're looking for the Canonical code and then we're sending the Source code as it is (no trim required)Examples: IN;SP.PD;PD → PD source code will be sent to VODRDM lookups with RegExpThe main logic which is used to find out the proper source code for a canonical code. We're using codes configured in RDM, however the mongo collection LookupValues is used. For a specific canonical code (code) we look for sourceMappings with source = VOD. Often the country is embedded within the source code so we're applying regexpConfig (more in the Veeva Fallback section) to extract the specific source code for a particular country.Veeva FallbackConfiguration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/fallbackAvailable for a couple of attributes: hco-cot-facility-type.csvCOTFacilityTypehco-specialty.csvCOTSpecialtyhco-type-code.csvHCOTypehcp-specialty.csvHCPSpecialtyhcp-title.csvHCPTitlehcp-type-code.csvHCPSubTypeCodeUsually files are constructed as a one-column CSV, however the logic for extracting the source code may be differentThe source code is extracted using a RegExp for each parameter. Check application.yml for the mdm-veeva-dcr-server component - mdm-inboud-services > mdm-veeva-dcr-service/src/main/resources/application.yml to find out the proper line and extract the code sent to VOD.Example value for hco-specialty-type.csv: IN_?Regexp value for HCP.specialty: regexpConfig > HCPSpecialty: ^COUNTRY_(.+)$Source code sent to VOD for the India country: "?" (only a question mark without the country prefix)TriggersTrigger actionComponentActionDefault timeREST callmdm-veeva-dcr-service: POST /dcr → veevaDCRService.createChangeRequest(request)Creates the DCR and stores it in the collection without actually sending it to Veeva. 
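The two-step mapping order described above (per-country defaults CSV first, then the RDM lookup with a regexpConfig pattern) can be sketched roughly as follows. All function and data-structure names here are hypothetical; only the `IN;SP.PD;PD` example row and the `^COUNTRY_(.+)$` pattern come from this page:

```python
import re

# Hypothetical sketch of the code-mapping order described above:
# 1) per-country defaults CSV (Country;Canonical;Source),
# 2) RDM sourceMappings with a regexp to strip the embedded country prefix.
def map_to_vod_code(country, canonical, defaults_rows, rdm_source_codes, regexp):
    # Step 1: defaults file, e.g. the row "IN;SP.PD;PD" maps SP.PD -> PD for India
    for row in defaults_rows:
        c, canon, source = row.split(";")
        if c == country and canon == canonical:
            return source  # sent as-is, no trim required
    # Step 2: RDM sourceMappings (source = VOD); the country is often embedded
    # in the source code, so extract it with the configured regexp.
    pattern = re.compile(regexp.replace("COUNTRY", country))
    for code in rdm_source_codes.get(canonical, []):
        m = pattern.match(code)
        if m:
            return m.group(1)
    return None

# Usage with the documented example pattern ^COUNTRY_(.+)$ for HCPSpecialty:
print(map_to_vod_code("IN", "SP.PD", ["IN;SP.PD;PD"], {}, r"^COUNTRY_(.+)$"))  # PD
```

This is a sketch under stated assumptions, not the mdm-veeva-dcr-service implementation; in the real service the defaults and fallback files live in mdm-config-registry and the RDM codes come from the LookupValues collection.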
API synchronous requests - realtimeDependent componentsComponentUsageDCR Service 2Main component with flow implementationHub StoreDCR and Entities Cache " + }, + { + "title": "Veeva: create DCR method (submitVR)", + "pageID": "386796763", + "pageLink": "/pages/viewpage.action?pageId=386796763", + "content": "DescriptionGather all stored DCR entities in DCRRegistryVeeva collection (status = NEW) and sends them via S3/SFTP to Veeva OpenData (VOD). This method triggers CSV/ZIP file creation and file placement in outbound directory. This method is triggered from cron which invokes VeevaDCRRequestSender.sendDCRs() from the Veeva DCR Service Flow diagramStepsReceive the API request via scheduled trigger, usually every 24h (senderConfiguration.schedulerConfig.fixedDelay) at specific time of day (senderConfiguration.schedulerConfig.initDelay)All DCR entities (VeevaVRDetails) with status NEW are being retrieved from DCRRegistryVeeva collection Then VeevaCreateChangeRequest object is created which aggregates all CSV content which should be placed in actual CSV files. 
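A minimal sketch of the gathering-and-packing idea on this page: collect DCRs with status NEW, group them per country, aggregate their CSV lines and pack them into one ZIP per country's outbound directory. All names and the single-file layout are assumptions, not the actual VeevaCreateChangeRequest implementation:

```python
import io
import zipfile
from collections import defaultdict

# Hypothetical sketch: group NEW DCRs by country and build one ZIP per
# country, ready to be dropped into that country's outbound S3/SFTP directory.
def build_country_zips(dcrs):
    by_country = defaultdict(list)
    for dcr in dcrs:
        if dcr["status"] == "NEW":          # only unsent DCRs are gathered
            by_country[dcr["country"]].append(dcr)
    zips = {}
    for country, items in by_country.items():
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as zf:
            # aggregate the raw CSV lines kept on each stored DCR entity
            lines = "\n".join(d["change_request_csv"] for d in items)
            zf.writestr("change_request.csv", lines)
        zips[country] = buf.getvalue()
    return zips

zips = build_country_zips([
    {"status": "NEW", "country": "IN", "change_request_csv": "dcr1;..."},
    {"status": "SENT_TO_VEEVA", "country": "IN", "change_request_csv": "dcr0;..."},
])
print(sorted(zips))  # ['IN']
```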
Each object contains only DCRs specific for a countryEach country has its own S3/SFTP directory structure as well as a dedicated SFTP server instanceOnce CSV files are created with header and content, they are packed into a single ZIP fileFinally the ZIP file is placed in the outbound S3 directoryIf the file was placed successfully - then VeevaChangeRequestACK status = SUCCESSotherwise - then VeevaChangeRequestACK status = FAILURE and the process endsFinally, the status of the VeevaVRDetails entity in the DCRRegistryVeeva collection is updated and set to SENT_TO_VEEVATriggersTrigger actionComponentActionDefault timeTimer (cron)mdm-veeva-dcr-service: VeevaDCRRequestSender.sendDCRs()Takes all unsent entities (status = NEW) from the Veeva collection and actually puts the file on the S3/SFTP directory via veevaAdapter.createDCRsUsually every 24h (senderConfiguration.schedulerConfig.fixedDelay) at a specific time of day (senderConfiguration.schedulerConfig.initDelay)Dependent componentsComponentUsageDCR Service 2Main component with flow implementationHub StoreDCR and Entities Cache " }, { "title": "Veeva: generate DCR Change Events (traceVR)", "pageID": "379329922", "pageLink": "/pages/viewpage.action?pageId=379329922", "content": "DescriptionThe process is responsible for gathering DCR responses from Veeva OpenData (VOD). Responses are provided via CSV/ZIP files placed on the S3/SFTP server in inbound directories which are specific for each country. During this process files should be retrieved, mapped from the VOD to the HUB DCR model and published to a Kafka topic to be properly processed by DCR Service 2, Veeva: process DCR Change Events.Flow diagramSource: LucidStepsThe method is triggered via cron, usually every 24h (traceConfiguration.schedulerConfig.fixedDelay) at a specific time of day (traceConfiguration.schedulerConfig.initDelay)For each country, each inbound directory is scanned for ZIP filesEach ZIP file (_DCR_Response_.zip) should be unpacked and processed. A bunch of CSV files should be extracted. 
Specifically:change_request_response.csv → it's a manifest file with general information in specific columnsdcr_key → ID of the DCR which was established during DCR request creation entity_key → ID of the entity in Reltio, the same one we provided during DCR request creationentity_type → type of entity (HCO, HCP) which is being modified via this DCRresolution → has information whether the DCR was accepted or rejected. The full list of values is below.resolution valueDescriptionCHANGE_PENDINGThis change is still processing and hasn't been resolvedCHANGE_ACCEPTEDThis change has been accepted without modificationCHANGE_PARTIALThis change has been accepted with additional changes made by the steward, or some parts of the change request have been rejectedCHANGE_REJECTEDThis change has been rejected in its entiretyCHANGE_CANCELLEDThis change has been cancelledchange_request_type change_request_type valueDescriptionADD_REQUESTthe DCR created a new profile in VOD with a new vid__v (Veeva id)CHANGE_REQUESTan update of an existing profile in VOD with an existing and already known vid__v (Veeva id)change_request_hcp_response.csv - contains information about DCRs related to HCPchange_request_hco_response.csv - contains information about DCRs related to HCOchange_request_address_response.csv - contains information about DCRs related to addresses which are related to a specific HCP or HCOchange_request_parenthco_response.csv - contains information about DCRs which correspond to relations between HCP and HCO, and HCO and HCOThe file with the log: _DCR_Request_Job_Log.csv can be skipped. It does not contain any useful information to be processed automaticallyFor all DCR responses from VOD, the corresponding DCR entity (VeevaVRDetails) should be selected from the DCRRegistryVeeva collection. 
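The resolution handling described on this page (only final resolutions update the stored DCR and produce an event; CHANGE_PENDING is skipped) can be sketched as a small lookup. The tuple layout mirrors the status table on this page; the function name is hypothetical:

```python
# Sketch of the resolution mapping described on this page: for final
# resolutions return (Mongo VeevaVRDetails.status, event vrStatus, event
# vrStatusDetail); for CHANGE_PENDING return None and send no event.
FINAL_RESOLUTIONS = {
    "CHANGE_ACCEPTED":  ("ACCEPTED", "CLOSED", "ACCEPTED"),
    "CHANGE_PARTIAL":   ("ACCEPTED", "CLOSED", "ACCEPTED"),
    "CHANGE_REJECTED":  ("REJECTED", "CLOSED", "REJECTED"),
    "CHANGE_CANCELLED": ("REJECTED", "CLOSED", "REJECTED"),
}

def handle_resolution(resolution):
    """Return the status triple, or None to skip (no event to DCR-service-2)."""
    return FINAL_RESOLUTIONS.get(resolution)

print(handle_resolution("CHANGE_PARTIAL"))  # ('ACCEPTED', 'CLOSED', 'ACCEPTED')
print(handle_resolution("CHANGE_PENDING"))  # None
```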
In general, specific response files are not that important (VOD profile updates will be ingested to the HUB via the ETL channel) however when new profiles are created (change_request_response.csv.change_request_type = ADD_REQUEST) we need to extract their Veeva ID. We need to deep dive into change_request_hcp_response.csv or change_request_hco_response.csv to find vid__v (Veeva ID) for the specific dcr_key This new Veeva ID should be stored in VeevaDCREvent.vrDetails.veevaHCPIdsIt should be further used as a crosswalk value in Reltio:entities.HCO.updateCrosswalk.type (VEEVA)entities.HCO.updateCrosswalk.value (VEEVA ID)Once data has been properly mapped from the Veeva to the HUB DCR model, a new VeevaDCREvent entity should be created and published to the dedicated Kafka topic $env-internal-veeva-dcr-change-events-inPlease be advised, when the resolution status is not final (the final statuses are CHANGE_ACCEPTED, CHANGE_REJECTED, CHANGE_CANCELLED, CHANGE_PARTIAL) we should not send an event to DCR-service-2Then, for each successfully processed DCR, the entity (VeevaVRDetails) in the Mongo DCRRegistryVeeva collection should be updated Veeva CSV: resolutionMongo: DCRRegistryVeeva Entity: VeevaVRDetails.status: DCRRequestStatusDetailsTopic: $env-internal-veeva-dcr-change-events-inEvent: VeevaDCREvent.vrDetails.vrStatusTopic: $env-internal-veeva-dcr-change-events-inEvent: VeevaDCREvent.vrDetails.vrStatusDetailCHANGE_PENDINGstatus should not be updated at all (stays as SENT)do not send events to DCR-service-2 do not send events to DCR-service-2 CHANGE_ACCEPTEDACCEPTEDCLOSEDACCEPTEDCHANGE_PARTIALACCEPTEDCLOSEDACCEPTEDresolutionNotes / veevaComment should contain more information about what was rejected by VEEVA DSCHANGE_REJECTEDREJECTEDCLOSEDREJECTEDCHANGE_CANCELLEDREJECTEDCLOSEDREJECTEDOnce files are processed, the ZIP file should be moved from the inbound to the archive directoryEvent VeevaDCREvent Model\ndata class VeevaDCREvent (val eventType: String? = null,\n                          val eventTime: Long? 
= null,\n                          val eventPublishingTime: Long? = null,\n                          val countryCode: String? = null,\n                          val dcrId: String? = null,\n                          val vrDetails: VeevaChangeRequestDetails)\n\ndata class VeevaChangeRequestDetails (\n    val vrStatus: String? = null, // HUB codes\n    val vrStatusDetail: String? = null, // HUB codes\n    val veevaComment: String? = null,\n    val veevaHCPIds: List? = null,\n    val veevaHCOIds: List? = null)\nTriggersTrigger actionComponentActionDefault timeIN Timer (cron)mdm-veeva-dcr-service: VeevaDCRRequestTrace.traceDCRs()gets DCR responses from the S3/SFTP directory, extracts CSV files from the ZIP file and publishes events to the kafka topicusually every 6h (traceConfiguration.schedulerConfig.fixedDelay) at a specific time of day (traceConfiguration.schedulerConfig.initDelay)OUT Events on Kafka Topicmdm-veeva-dcr-service: VeevaDCRRequestTrace.traceDCRs()$env-internal-veeva-dcr-change-events-inVeevaDCREvent event published to the topic to be consumed by DCR Service 2usually every 6h (traceConfiguration.schedulerConfig.fixedDelay) at a specific time of day (traceConfiguration.schedulerConfig.initDelay)Dependent componentsComponentUsageDCR Service 2Main component with flow implementationHub StoreDCR and Entities Cache " }, { "title": "ETL Batches", "pageID": "164470046", "pageLink": "/display/GMDM/ETL+Batches", "content": "DescriptionThe process is responsible for managing the batch instances/stages and loading data received from the ETL channel to the MDM system. The Batch service is a complex component that contains predefined JOBS and a Batch Workflow configuration that uses the JOBS implementations; using asynchronous communication with Kafka topics it updates data in the MDM system and gathers the acknowledgment events. 
Mongo cache stores the BatchInstances with corresponding stages and EntityProcessStatus objects that contain metadata information about loaded objects.The below diagram presents an overview of the entire process. Detailed descriptions are available in the separate subpages.Flow diagramModel diagramStepsThe client is able to create a new instance of the batch using - Batch Controller: creating and updating batch instance flowOnce the batch instance is created the client is able to load the data using - Bulk Service: loading bulk data flowDuring data load, the following process startsSending JOB - sends data received from the REST API to the Kafka Stage topicsProcessing JOB - checks the status for the specific load and whether all ACKs were receivedSoftDeleting JOB - an optional job that is triggered at the end of a batch that was configured to use full file load - this starts the delta detection process and soft-deletes the objectsACK Collector - a streaming process that gathers events and updates the Cache with the MDM response statusFor support purposes an additional Clear Cache operation is exposedTriggersDescribed in the separate sub-pages for each process.Dependent componentsComponentUsageBatch ServiceMain component with flow implementationManagerAsynchronous events processingHub StoreDatastore and cache" }, { "title": "ACK Collector", "pageID": "164469774", "pageLink": "/display/GMDM/ACK+Collector", "content": "DescriptionThe flow processes the ACK response messages and updates the cache. Based on these responses the Processing flow is checking the Cache status and is blocking the workflow until all responses are received. This process updates the "status" attribute with the MDM system response and the "updateDateMDM" with the corresponding update timestamp. 
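The per-message cache update performed by this flow can be sketched as below. The attribute names mirror the list in this flow's description; the function shape and dict-based cache entry are assumptions for illustration:

```python
from datetime import datetime, timezone

# Hypothetical sketch of the ACK Collector cache update: copy the MDM
# response fields onto the cached EntityProcessStatus record.
def apply_ack(cache_entry, ack):
    cache_entry["status"] = ack["status"]               # MDM response status
    cache_entry["updateDateMDM"] = datetime.now(timezone.utc)  # ACK receipt time
    cache_entry["entityId"] = ack.get("entityId")       # URI assigned by MDM
    if ack["status"] == "failed":                       # optional error details
        cache_entry["errorCode"] = ack.get("errorCode")
        cache_entry["errorMessage"] = ack.get("errorMessage")
    return cache_entry

entry = apply_ack({}, {"status": "created", "entityId": "entities/123"})
print(entry["status"])  # created
```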
Flow diagramStepsThe Manager publishes ACK responses to the Batch ACK queue for each object processed through the batch-serviceThe ACK Collector processes the events in streaming mode and updates the status in the cache. The following attributes are updated:status - MDM status that the HUB received after the entity/relationship object was created/updated/soft-deletedupdateDateMDM - timestamp once the ACK was receivedentityId - corresponding entity/relation URI that is given by the MDM systemerrorCode - optional MDM error code once the status is failederrorMessage - optional MDM error message that contains a detailed description once the status is failed. TriggersTrigger actionComponentActionDefault timeIN Events incoming batch-service:AckProcessorupdates the cache based on the ACK responserealtimeDependent componentsComponentUsageBatch ServiceThe main componentManagerAsync route with ACK responsesHub StoreCache" }, { "title": "Batch Controller: creating and updating batch instance", "pageID": "164469788", "pageLink": "/display/GMDM/Batch+Controller%3A+creating+and+updating+batch+instance", "content": "DescriptionThe batch controller is responsible for managing the Batch Instances. The service allows creating a new batch instance for a specific Batch, creating a new Stage in the batch and updating the stage with statistics. The Batch controller component manages the batch instances and validates the requests. Only authorized users are allowed to manage specific batches or stages. Additionally, it is not possible to START multiple instances of the same batch at one time. Once the batch is started the Client should load the data and at the end complete the current batch instance. Once the user creates a new batch instance a new unique ID is assigned; in subsequent requests the user has to use this ID to update the workflow. By default, once the batch instance is created all stages are initialized with status PENDING. 
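The batch-instance creation described here (a unique ID assigned, every configured stage starting in PENDING) can be sketched minimally; the record shape and function name are assumptions, not the Batch Controller's actual model:

```python
import uuid

# Hypothetical sketch: create a batch instance with a unique ID and all
# configured stages initialized to PENDING, as described on this page.
def create_batch_instance(batch_name, stage_names):
    return {
        "id": str(uuid.uuid4()),                          # unique instance ID
        "batchName": batch_name,
        "stages": {name: "PENDING" for name in stage_names},
    }

inst = create_batch_instance("ONEKEY_DE",
                             ["HCPLoading", "HCOLoading", "RelationLoading"])
print(inst["stages"]["HCPLoading"])  # PENDING
```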
The Batch controller also manages the dependent stages and marks the whole batch as COMPLETED at the end. Flow diagramStepsThe first step that the User has to make is the initialization of the new Batch Instance; during this operation the process starts and a new unique ID is assigned.Using the Unique ID and an available Stage name the user is able to start the STAGE (by design users have access only to the first "Loading" stage, but this can be changed in the configuration if required). In this request, the Body objects may be empty. It will cause the initialization of this specific STAGE - changed to STARTED.At that moment the user is able to load data - the description is available in the next flow - Bulk Service: loading bulk dataAfter data loading the User has to complete the STAGE. In this request, the Body objects have to be delivered. In the request, the User provides the statistics about the load or optionally errors.if there are errors during loading - BatchStageStatus = FAILEDif the load ended with success - BatchStageStatus = COMPLETEDIn the end, the user should trigger the GET batch instance details operation and wait for the Batch completion (after the Loading stage all dependent stages are started)To get more details about the next internal steps check:Processing JOBSending JOBSoftDeleting JOBACK CollectorTriggersTrigger actionComponentActionDefault timeAPI requestbatch-service.RestBatchControllerRouteUser initializes the new batch instance, updates the STAGE, saves the statistics, and completes the corresponding STAGE.User is able to get batch instance details and wait for the load completionuser API request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch Instances Cache" }, { "title": "Batches registry", "pageID": "234695693", "pageLink": "/display/GMDM/Batches+registry", "content": "There is a list of batches configured from 
01.02.2022.ONEKEYTenantCountrySource NameBatch NameStageDetailsEMEAAlgeriaONEKEYONEKEY_DZHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)TunisiaONEKEYONEKEY_TNHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)MoroccoONEKEYONEKEY_MAHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)GermanyONEKEYONEKEY_DEHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)France, AD, MCONEKEYONEKEY_FRHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)France (DOMTOM) = RE,MQ,GP,PF,YT,GF,PM,WF,MU,NCONEKEYONEKEY_PFHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)ItalyONEKEYONEKEY_ITHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)SpainONEKEYONEKEY_ESHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)Turkey ONEKEYONEKEY_TRHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)Denmark (Plus Faroe Islands and Greenland)ONEKEYONEKEY_DKHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t 
need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)PortugalONEKEYONEKEY_PTHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)RussiaONEKEYONEKEY_RUHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)APACAustraliaONEKEYONEKEY_AUHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)New ZealandONEKEYONEKEY_NZHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)South KoreaONEKEYONEKEY_KRHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)AMERCanadaONEKEYONEKEY_CAHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)BrazilONEKEYONEKEY_BRHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)MexicoONEKEYONEKEY_MXHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)Argentina/UruguayONEKEYONEKEY_ARHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)PFORCE_RXTenantCountrySource NameBatch 
NameStageDetailsAMERBrazilPFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)MexicoArgentina/UruguayCanadaAPACJapan PFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)Australia /New ZealandIndiaSouth KoreaEMEASaudi ArabiaPFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)GermanyFranceItalySpainRussiaTurkey DenmarkPortugalGRVTenantCountrySource NameBatch NameStageEMEAGRGRVGRVHCPLoadingITFRESRUTRSADKGLFOPTAMERCAGRVGRVHCPLoadingBRMXARAPACAUGRVGRVHCPLoadingNZINJPKRGCPTenantCountrySource NameBatch NameStageEMEAGRGCPGCPHCPLoadingITFRESRUTRSADKGLFOPTAMERCAGCPGCPHCPLoadingBRMXARAPACAUGCPGCPHCPLoadingNZINJPKRENGAGETenantCountrySource NameBatch NameStageAMERCAENGAGEENGAGEHCPLoadingHCOLoadingRelationLoading" + }, + { + "title": "Bulk Service: loading bulk data", + "pageID": "164469786", + "pageLink": "/display/GMDM/Bulk+Service%3A+loading+bulk+data", + "content": "DescriptionThe bulk service is responsible for loading the bundled data using REST API as the input and Kafka stage topics as the output. This process is strictly connected to the Batch Controller: creating and updating batch instance flow, which means that the Client should first initialize the new batch instance and stage. Using API requests data is loaded to the next processing stages. Flow diagramStepsThe batch controller part is described in the Batch Controller: creating and updating batch instance flow.After the User starts the Loading stage it is now possible to load the data. 
(Loading STAGE part on the diagram)Depending on the batch workflow configuration it is possible to load entities or relationsPOST /entities - create entities in MDMPATCH /entities - update entities in MDM; in that case, the partialOverride option is usedPOST /relations - create relations in MDMPATCH /tags - add tags to objects in MDMDELETE /tags - remove tags from objects in MDMPOST /entities/_merge - merges 2 entities in MDMPOST /entities/_unmerge - unmerges entity B from entity A in MDMAdditionally, based on the configuration, there is a limit on the number of objects in one call - by default the user is allowed to send a list of 25 objects in one API call.The response is the HTTP 200 code with an empty body.The API Loading stage is a synchronous operation; the rest of the process uses the Kafka Topics and all data is shared with the MDM system in an asynchronous way. After Loading all data through the specific STAGE, the Client should complete the STAGE; this will trigger the next processing steps described on the ETL Batch sub-pages. TriggersTrigger actionComponentActionDefault timeAPI requestbatch-service.RestBulkControllerRouteClients send the data to the bulk service.user API request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch Instances Cache" }, { "title": "Clear Cache", "pageID": "164469784", "pageLink": "/display/GMDM/Clear+Cache", "content": "DescriptionThis flow is used to clear the mongo cache (removes records from batchEntityProcessStatus) for a specified batch name, object type and entity type. 
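The filter that the Clear Cache flow builds over batchEntityProcessStatus can be sketched as below. The collection and parameter names follow this page; the exact query shape and the stored country field name are assumptions:

```python
# Hypothetical sketch of the Clear Cache delete filter: batchName,
# objectType (ENTITY or RELATION), entityType, plus an optional
# comma-separated country list as accepted by the endpoint.
def build_clear_cache_filter(batch_name, object_type, entity_type, countries=None):
    q = {
        "batchName": batch_name,
        "objectType": object_type,
        "entityType": entity_type,
    }
    if countries:  # e.g. "GB,IE,FR,PT,DK" from the countries query parameter
        q["country"] = {"$in": countries.split(",")}
    return q
    # with pymongo this would feed, e.g.:
    # db.batchEntityProcessStatus.delete_many(q).deleted_count

q = build_clear_cache_filter("ONEKEY_FR", "ENTITY",
                             "configuration/entityTypes/HCP", "GB,IE,FR")
print(q["country"])  # {'$in': ['GB', 'IE', 'FR']}
```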
An optional list of countries (comma-separated) allows filtering by countries.Flow diagramStepsthe client sends the request to the batch controller with specified parameters like batchName, objectType and entityType example: {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCache?objectType=RELATION&entityType=configuration/relationTypes/ContactAffiliationsexample: {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCache?objectType=ENTITY&entityType=configuration/entityTypes/HCP&countries=GB,IE,FR,PT,DKthe service checks if the client is allowed to do this action - has the appropriate role CLEAR_CACHE_BATCH the service processes the client request and executes a mongo query with the specified parametersthe service returns the number of removed records.TriggersTrigger actionComponentActionDefault timeAPI Requestbatch-service.RestBatchControllerRouteExternal client calls request to clear the cacheuser API request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch entities/relations cache" }, { "title": "Clear Cache by croswalks", "pageID": "282663410", "pageLink": "/display/GMDM/Clear+Cache+by+croswalks", "content": "DescriptionThis flow is used to clear the mongo cache (removes records from batchEntityProcessStatus) for a specified batch name, sourceId type and/or valueFlow diagramStepsthe client sends the request to the batch controller with specified parameters like batchName, sourceId type and/or valueexample: PATCH {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCachebody: \n{\n "sourceId": [\n {\n "type": "ABC",\n "value": "TEST:123"\n },\n {\n "type": "DEF"\n },\n {\n "value": "TEST:456"\n }\n ]\n}\nthe service checks if the client is allowed to do this action - has the appropriate role CLEAR_CACHE_BATCH the service processes the client request and executes a mongo query with the specified parametersthe service returns the number of removed records.TriggersTrigger actionComponentActionDefault timeAPI 
Requestbatch-service.RestBatchControllerRouteExternal client calls request to clear the cacheuser API request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch entities/relations cache" }, { "title": "PATCH Operation", "pageID": "355371021", "pageLink": "/display/GMDM/PATCH+Operation", "content": "DescriptionThe Entity PATCH (UpdateHCP/UpdateHCO/UpdateMCO) operation differs slightly from the standard POST (CreateHCP/CreateHCO/CreateMCO) operation:the PATCH operation includes contributor crosswalk verification - MDM is searched to make sure that the updated entity exists (to prevent creation of singleton profiles)the PATCH operation uses Reltio's partialOverride parameter. It allows sending only a portion of attributes (usually only the ones that have changed since the last load). Existing attribute values that have not been provided in the request will not be wiped from MDM.AlgorithmThe PATCH operation logic consists of the following steps:For each entity in the bundle (depending on the configuration, usually around 50 requests):Find the contributor crosswalk - if the contributor crosswalk cannot be determined, throw an exceptionSearch all the contributor crosswalks in the MDM Hub Cache - single search requestsFilter results - assign each found entity to the corresponding crosswalkIf no entity is found for a crosswalk - perform a fallback search by crosswalk using the MDM APIFor every entity where the contributor crosswalk was not found in the above steps, generate a "Not Found" message.For remaining entities, perform the CreateHCP/CreateHCO/CreateMCO operation.Merge the response from CreateHCP/CreateHCO/CreateMCO with the "Not Found" messages in the correct order, return." }, { "title": "Processing JOB", "pageID": "164469780", "pageLink": "/display/GMDM/Processing+JOB", "content": "DescriptionThe flow checks the Cache using a poller that executes the query every minute. 
During this processing, the count is decreasing until it reaches 0. The following query is used to check the count of objects that were not delivered. The process ends if the query returns 0 objects - it means that we received an ACK for each object and it is possible to go to the next dependent stage. "{'batchName': ?0 ,'sendDateMDM':{ $gt: ?1 }, '$or':[ {'updateDateMDM':{ $lt: ?1 } }, { 'updateDateMDM':{ $exists : false } } ] }"Using the Mongo query it is possible to find which objects are still not processed. In that case, the user should provide batchName == "currently loading batch" and use the date that is the batch start date. Flow diagramStepsThe process starts once the activation criteria are successful, which means that the dependent JOB is COMPLETED.Using the trigger mechanism data is polled from the Cache and counted.If the number of processed entities is equal to 0 the process endselse the process is triggered again after the configured number of minutes. If this is the last stage in the current batch workflow statistics are calculated. 
(it means that there may be multiple processing jobs in one workflow, but only the last one calculates all gathered statistics)The LAST stage will always contain the following statistics: Each statistic is divided into 3 sections using the "/" separator1 - entities or relations depending on the loaded object2 - object type, it can be HCO/HCP/MCO or any relationType loaded3 - name{entities | relations}/{object type}/receivedCount - number of objects received {entities | relations}/{object type}/skippedCount - number of objects skipped because of delta detection{entities | relations}/{object type}/failedCount - number of objects that got "failed" status from MDM{entities | relations}/{object type}/updatedCount - number of objects that got "updated" status from MDM{entities | relations}/{object type}/createdCount - number of objects that got "created" status from MDM{entities | relations}/{object type}/notFoundCount - number of objects that got "notFound" status from MDM (may occur when using the partialOverride operation){entities | relations}/{object type}/deletedCount - number of objects that got "deleted" status from MDM (may occur once an object is endDated in MDM and the update targeted an already deleted entity){entities | relations}/{object type}/softDeletedCount - number of objects removed by the SoftDeleting JOB - used only during full file loads.Example statistics:TriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. 
Triggered by the Scheduler mechanismbatch-service:ProcessingJobQueries mongo and checks the number of objects that are not yet processed.every 60 secondsDependent componentsComponentUsageBatch ServiceThe main component with the Processing JOB implementationHub StoreThe cache that stores all information about the loaded objects" }, { "title": "Sending JOB", "pageID": "164469778", "pageLink": "/display/GMDM/Sending+JOB", "content": "DescriptionThe JOB is responsible for sending the data from the Stage Kafka topics to the manager component. During this process data is checked, the checksum is calculated and compared to the previous state, so only the changes are applied to MDM. The Cache - Batch data store - contains multiple metadata attributes like sourceIngestionDate - the time when this entity was most recently shared by the Client, and the ACK response status (create/update/failed) The checksum calculation is skipped for the "failed" objects. It means there is no need to clear the cache for the failed objects, the user just needs to reload the data. The JOB is triggered once the previous dependent job is completed or is started. There are two modes of dependency between the Loading STAGE and the Sending STAGE(hard) dependentStages - the Sending stage will start once the previous dependent JOB is COMPLETEDsoftDependentStages - the Sending stage will start in parallel to the Loading stage. It means that all loaded data will be immediately sent to Reltio. The purpose of the hard dependency is the case when the user has to load HCP/HCO and Relations objects. The sending of relations has to start after the HCP and HCO load is COMPLETED. The process finishes once the Batch stage queue is empty for 1 minute (no new events are in the queue).The following query is used to retrieve the processing object from the cache. 
Where the batchName is the corresponding Batch Instance, and sourceId is the information about the loaded source crosswalk.{'batchName': ?0, 'sourceId.type': ?1, 'sourceId.value': ?2, 'sourceId.sourceTable': ?3 }Flow diagramStepsThe process starts once the activation criteria are successful, which means that the (hard) dependent JOB is COMPLETED or the soft dependent JOB is STARTED.All entities or relations are polled from the stage topicif objects exist on the topic, for each:the current state is retrieved from Batch Cache; if this is a new one, the object is initialized with all required attributes and checksumthe checksum is calculated (for failed status the checksum calculation is skipped)the sourceIngestionDate is updated to the current date (required to track the object and generate soft-deletes when the entity was not received)updateDate,sendDateMDM attributes are updated and the "deleted" flag is set to falseonce no new objects are on the stage topic the process is finished. The STAGE is updated with COMPLETED status.TriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. Triggered by the Scheduler mechanismbatch-service:SendingJobGets entries from the stage topic, saves data in mongo and creates/updates profiles using a Kafka producer (asynchronous channel)once the dependent JOB is completedDependent componentsComponentUsageBatch ServiceThe main component with the Sending JOB implementationHub StoreThe cache that stores all information about the loaded objects" + }, + { + "title": "SoftDeleting JOB", + "pageID": "164469776", + "pageLink": "/display/GMDM/SoftDeleting+JOB", + "content": "DescriptionThis JOB is responsible for the soft-delete process for the full file loads. Batches that are configured with this JOB always have to deliver the full set of data. The process is triggered at the end of the workflow and soft-deletes objects in the MDM system. 
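The checksum-based delta detection used by the Sending JOB above can be sketched as follows. This is a minimal illustration only, with hypothetical field names, assuming an MD5 over the serialized payload; it is not the actual batch-service implementation:

```python
import hashlib
import json
from typing import Optional


def payload_checksum(record: dict) -> str:
    """MD5 over a canonical (sorted-key) JSON serialization of the payload."""
    return hashlib.md5(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()


def should_send(record: dict, cached: Optional[dict]) -> bool:
    """Delta detection sketch: send only when the payload checksum changed.
    The checksum comparison is skipped for records whose last MDM status was
    'failed', so reloading the data resends them without clearing the cache."""
    if cached is None:                    # first load of this crosswalk
        return True
    if cached.get("status") == "failed":  # always retry failed objects
        return True
    return payload_checksum(record) != cached.get("checksum")
```

In this sketch, unchanged objects are skipped (counted as skippedCount) and only new, changed, or previously failed objects are forwarded to the manager component.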
The following query is used to check how many objects are going to be removed and also to get all these objects and send the soft-delete requests. {'batchName': ?0, 'deleted': false, 'objectType': 'ENTITY OR RELATION', 'sourceIngestionDate':{ $lt: ?1 } }Once the object is soft-deleted, the "deleted" flag is changed to "true".Using the mongo query it is possible to check which objects were soft-deleted by this process. In that case, the Administrator should provide batchName="currently loading batch" and the deleted parameter="true".The process removes all objects that were not delivered in the current load, which means that the "SourceIngestionDate" is lower than the "BatchStartDate".It may occur that the number of objects to soft-delete exceeds the limit; in that case, the process is aborted and the Administrator should verify what objects are blocked and notify the client. The production limit is a maximum of 10000 objects in one load.Flow diagramSteps The process starts once the activation criteria are successful, which means that the dependent JOB is COMPLETED.Using a query, in the first step the process counts the number of entities to be soft-deletedIf the limit is exceeded the process is aborted and the status with reason is saved in Cache. The limit is a safety switch in case we get a corrupted file (empty or partial). It prevents deleting all MDM profiles in such cases.in the "RelationsUnseenDeletion" STAGE the following information is saved:statistics:maxDeletesLimit - currently configured limitentitiesUnseenResultCount - number of entities that the process indicated to soft-deleteerrors:errorCode - 400 errorMessage - Entities delete limit exceeded, aborting soft delete sending.example:Else the Cache is queried and the returned objects are sent to the Manager for removalIn the loop, all objects are queried from Cache and the data is sent to the corresponding Kafka topic. 
During this operation, the cache is updated and an MDMRequest is prepared.MDMRequest:entityTypecountryCrosswalktypevaluedeleteDate - current timestampCache attributes to update:updateDate = current time - cache object update timedeleteDateMDM = current time - date that contains the delete date of the corresponding objectsendDateMDM = current time - date that contains the time when the profile was sent to MDMdeleted = true - flag indicating that the profile was soft-deleted2023-07 Update: Set Soft-Delete Limit by CountryDeletingJob now allows additional configuration:\ndeletingJob:\n "TestDeletesPerCountryBatch":\n "EntitiesUnseenDeletion":\n maxDeletesLimit: 20\n queryBatchSize: 5\n reltioRequestTopic: "local-internal-async-all-testbatch"\n reltioResponseTopic: "local-internal-async-all-testbatch-ack"\n maxDeletesLimitPerCountry:\n enabled: true\n overrides:\n CA: 10\n BR: 30\nIf maxDeletesLimitPerCountry.enabled == true (default false):the soft-deletes limit in maxDeletesLimit is applied per country. The number of records to delete is fetched from Cache for each country, and if any of the countries exceeds the limit, the batch is failed with an appropriate error message.the soft-deletes limit can be changed for each country using the maxDeletesLimitPerCountry.overrides map. If a country is not present in the overrides, the default value from maxDeletesLimit is consideredTriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. 
Triggered by the Scheduler mechanismbatch-service:AbstractDeletingJob (DeletingJob/DeletingRelationJob)Queries mongo and soft-deletes profiles using a Kafka producer (asynchronous channel)once the dependent JOB is completedDependent componentsComponentUsageBatch ServiceThe main component with the SoftDeleting JOB implementationManagerAsynchronous channel Hub StoreThe cache that stores all information about the loaded objects" + }, + { + "title": "Event filtering and routing rules", + "pageID": "164470034", + "pageLink": "/display/GMDM/Event+filtering+and+routing+rules", + "content": "At various stages of processing, events can be filtered based on some configurable criteria. This helps to lessen the load on the Hub and client systems, as well as simplifies processing on the client side by avoiding the types of events that are of no interest to the target application. There are three places where event filtering is applied:Reltio Subscriber – filters events based on their (Reltio-defined) typeNucleus Subscriber – filters out duplicate events, based on event type and entityUriEvent Publisher – filters events based on their contentEvent type filteringEach event received from the SQS queue has a "type" attribute. Reltio Subscriber has an "allowedEventTypes" configuration parameter (in the application.yml config file) that lists event types which are processed by the application. Currently, the complete list of supported types is:ENTITY_CREATEDENTITY_REMOVEDENTITY_CHANGEDENTITY_LOST_MERGEENTITIES_MERGEDENTITIES_SPLITTEDAn event that does not match this list is ignored, and a "Message skipped" entry is added to the log file.Please keep in mind that while it is easy to remove an event type from this list in order to ignore it, adding a new event type is a whole different story – it might not be possible without changes to the application source code.Duplicate detection (Nucleus)There's an in-memory cache maintained that stores entityUri and type of an event previously sent for that uri. 
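A minimal sketch of such a deduplication cache, assuming it is keyed by entityUri and remembers only the last event type sent (the class and method names are illustrative, not the Nucleus Subscriber's actual API):

```python
class DuplicateEventFilter:
    """In-memory dedup sketch: remembers the last event type sent per
    entityUri and drops repeats; clear() would be called after each
    successfully processed zip file."""

    def __init__(self):
        self._seen = {}  # entityUri -> last event type sent

    def is_duplicate(self, entity_uri: str, event_type: str) -> bool:
        if self._seen.get(entity_uri) == event_type:
            return True          # same event type already sent for this uri
        self._seen[entity_uri] = event_type
        return False

    def clear(self):
        """Reset the cache after the whole zip file is processed."""
        self._seen.clear()
```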
This allows duplicate detection. The cache is cleared after successful processing of the whole zip file.Entity data-based filteringThe Event Publisher component receives events from an internal Kafka topic. After fetching the current Entity state from Reltio (via MDM Integration Gateway) it imposes a few additional filtering rules based on the fetched data. Those rules are:Filtering based on the Country that the entity belongs to. This is based on the value of the ISO country code, extracted from the Country attribute of an entity. The list of allowed codes is maintained as the "activeCountries" parameter in the application.yml config file.Filtering based on Entity type. This is controlled by the "allowedEntityTypes" configuration parameter, which currently lists two values: "HCP" and "HCO". Those values are matched against the "entityType" attribute of the Entity (the prefix "configuration/entityTypes/" is added automatically, so it does not need to be included in the configuration file)Filtering out events that have an empty "targetEntity" attribute – such events are considered outdated, plus they lack some mandatory information that would normally be extracted from targetEntity, such as originating country and source system. They are filtered out because the Hub would not be able to process them correctly anyway.Filtering out events that have a value mismatch between the "entitiesURIs" attribute of an event and the "uri" attribute of targetEntity – for all event types except HCP_LOST_MERGE and HCO_LOST_MERGE. A URI mismatch may arise when the EventPublisher is processing events with a significant delay (e.g. 
due to downtime, or when reprocessing events) – the Event Publisher might be processing an HCP_CHANGED (HCO_CHANGED) event for an Entity that was merged with another Entity since then, so the HCP_CHANGED event is considered outdated, and an HCP_LOST_MERGE event is expected for the same Entity.This filter is controlled by the eventRouter.filterMismatchedURIs configuration parameter, which takes Boolean values (yes/no, true/false)Filtering out events based on timestamps. When an HCP_CHANGED or HCO_CHANGED event arrives with an "eventTime" timestamp older than the "updatedTime" of the targetEntity, it is assumed that another change for the same entity has already happened and that another event is waiting in the queue to be processed. By ignoring the current event, the Event Publisher ensures that only the most recent change is forwarded to client systems.This filter is controlled by the eventRouter.filterOutdatedChanges configuration parameter, which can take Boolean values (yes/no, true/false)Event routingThe Publishing Hub supports multiple client systems subscribing for Entity change events. Since those clients might be interested in different subsets of events, the event routing mechanism was created to allow configurable, content-based routing of the events to specific client systems. Routing mechanics consist of three main parts:Kafka topics – each client system can have one or more dedicated topics where events of interest for that system are publishedMetadata extraction – as one of the processing steps, there are some pieces of information extracted from the Event and related Entity and put in the processing context (as headers), so they can be easily accessed.Configurable routing rules – Event Publisher's configuration file contains a whole section for defining rules that uses the Groovy scripting language and the metadata.Available metadata is described in the table below.Table 10. 
"full" means Event Sourcing mode, with full targetEntity data. "simple" is just an event with basic data, without targetEntityeventSubtypeStringHCP_CREATED, HCP_CHANGED, ….event.eventTypeFor the full list of available event subtypes is specified in MDM Publishing Hub Streaming Interface document.countryStringCN FRevent.targetEntity.attributes .Country.lookupCodeCountry of origin for the EntityeventSourceArray of String["OK", "GRV"]event. targetEntity.crosswalks.typeArray containing names of all the source systems as defined by Reltio crosswalksmdmSourceString["RELTIO", NUCLEUS"]NoneSystem of origin for the Entity.selfMergeBooleantrue, falseNoneIs the event "self-merge"? Enables filtering out merges on the fly.Routing rules configuration is found in eventRouter.routingRules section of application.yml configuration file. Here's an example of such rule: Elements of this configuration are described below.id – unique identifier of the ruleselector – snippet of Groovy code, which should return true or false depending on whether or not message should be forwarded to the destination.destination – name of the topic that message should be sent to.Selector syntax can include, among the others, the elements listed in the table below.Table 11. Selector syntaxElementExampleDescriptioncomparison operators==, !=, <, >Standard Groovy syntaxboolean operators&&,set operatorsin, intersectMessage headersexchange.in.headers.countrySee Table 10 for list of available headers. "exchange.in.headers" is the standard prefix that must be used do access themFull syntax reference can be found in Apache Camel documentation: http://camel.apache.org/groovy.html . 
The limitation here is that the whole snippet should return a single boolean value.The destination name can be literal, but it can also reference any of the message headers from Table 10, with the following syntax: " + }, + { + "title": "FLEX COV Flows", + "pageID": "172301002", + "pageLink": "/display/GMDM/FLEX+COV+Flows", + "content": "" + }, + { + "title": "Address rank callback", + "pageID": "164470175", + "pageLink": "/display/GMDM/Address+rank+callback", + "content": "The Address Rank Callback is used only in the FLEX COV environment to update the Rank attribute on Addresses. This process sends the callback to Reltio only when the specific source exists on the profile. The Rank is then used by the Business Team or Data Stewards in Reltio or by the downstream FLEX system. The Address Rank Callback is always triggered when the getEntity operation is invoked. The purpose of this process is to synchronize Reltio with the correct address rank sort order.Currently the functionality is configured only for the US Trade Instance. Below is the diagram outlining the whole process. Process steps description:Event Publisher receives events from an internal Kafka topic and calls the MDM Gateway API to retrieve the latest state of the Entity from Reltio.The Event Publisher internal user is authorized in MDM Manager to check source, country and appropriate access roles. MDM Manager invokes the get entity operation in Reltio. The returned JSON is then passed to the Address Rank sort process, so the client will always get the entity with addresses in sorted rank order, but only when this feature is activated in the configuration.When the Address Rank Sort process is activated, each address in the entity is sorted. In this case the "AddressRank" and "BestRecord" attributes are set. When AddressRank is equal to "1", the BestRecord attribute will always have the value "1".When the Address Rank Callback process is activated, the relation operation is invoked in Reltio. The Relation Request object contains a Relation object for each sorted address. 
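The rank-sort step described above can be sketched as follows. The sort key used here (active addresses first, then most recently updated) and the attribute names on the address dictionaries are illustrative assumptions, not the actual COV sorting logic; only the AddressRank/BestRecord behavior — BestRecord is "1" exactly for rank 1 — is taken from the description:

```python
def rank_addresses(addresses):
    """Sort addresses and assign AddressRank/BestRecord attributes.
    Sort key is an assumption for illustration: active first, then
    most recently updated. BestRecord is '1' only for the top-ranked
    address, per the description above."""
    ordered = sorted(
        addresses,
        key=lambda a: (not a.get("active", False), -a.get("updatedTime", 0)),
    )
    for rank, addr in enumerate(ordered, start=1):
        addr["AddressRank"] = str(rank)
        addr["BestRecord"] = "1" if rank == 1 else "0"
    return ordered
```

Because the callback is sent only when the computed order differs from the previously stored one, a sketch like this would be paired with a comparison against the last known rank order before invoking the relation operation.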
Each Relation will be created with the "AddrCalc" source, where the start object is the current entity id and the end object is the id of the Location entity. In that case a relation between the entity and the Location is created with additional rank attributes. There is no need to send multiple callback requests every time the get entity operation is invoked, so the Callback operation is invoked only when the address rank sort order has changed.Entity data is stored in the MongoDB NOSQL database, for later use in Simple mode (publication of events that contain only the entityURI and require the client to retrieve the full Entity via REST API).For every Reltio event there are two Publishing Hub events created: one in Simple mode and one in Event Sourcing (full) mode. Based on metadata, and Routing Rules provided as a part of the application configuration, the list of the target destinations for those events is created. An event is sent to all matched destinations." + }, + { + "title": "DEA Flow", + "pageID": "164470009", + "pageLink": "/display/GMDM/DEA+Flow", + "content": "This flow processes DEA files published by the GIS Team to an S3 Bucket. Flow steps are presented on the sequence diagram below.  Process steps description:DEA files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for DEA files.The Batch Channel component monitors the S3 location and processes the files uploaded to it.The folder structure for DEA is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.Files downloaded from S3 are processed in streaming mode. The processing of the file can be started before the full download of the file. 
This solution speeds up processing of big files, because there is no need to wait until the file is fully downloaded.The DEA file load Start Time is saved for the specific load – as loadStartDate.Each line in the file is parsed in the Batch Channel component and mapped to a dedicated DEA object. The DEA file is saved in Fixed Width Data Format; one DEA record is saved in one line of the file, so there is no need to use a record aggregator. Each line has a specified length, and each column has specified start and end positions in the row.The BatchContext is downloaded from MongoDB for each DEA record. This context contains the DEA crosswalk ID, the line from the file, the MD5 checksum, the last modification date, and the delete flag. When the BatchContext is empty it means that this DEA record is initially created – such an object is sent to the Kafka Topic. When the BatchContext is not empty, the MD5 from the source DEA file is compared to the MD5 from the BatchContext (mongo). If the MD5 checksums are equal – the object is skipped; otherwise – the object is sent to the Kafka Topic. For each modified object, lastModificationDate is updated in Mongo – it is required to detect deleted records as the final step.Only when the record's MD5 checksum has changed will the DEA record be published to the Kafka topic dedicated to events for DEA records. They will be processed by the MDM Manager component. The first step is an authorization check to verify if this event was produced by the Batch Channel component with the appropriate source name, country and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section.TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. 
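Fixed-width parsing of the kind used for DEA records — each column defined by a start and end position in the line — can be sketched as below. The column names and positions are hypothetical; the real DEA layout is defined in the Batch Channel configuration, not here:

```python
# Hypothetical fixed-width layout: (name, start, end) slices are
# illustrative only; the actual DEA column positions are defined elsewhere.
DEA_COLUMNS = [
    ("deaNumber", 0, 9),
    ("businessName", 9, 49),
    ("state", 49, 51),
]


def parse_fixed_width(line: str, columns=DEA_COLUMNS) -> dict:
    """Slice one fixed-width line into named fields, trimming padding."""
    return {name: line[start:end].strip() for name, start, end in columns}
```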
Additionally each log is saved in MongoDB to create a full report from the current load and to correlate the record flow between the Batch Channel and MDM Manager components.After the DEA file is successfully processed, the DEA delete record processor is started. From the Mongo Database each record with lastModificationDate less than loadStartDate and the delete flag equal to false is downloaded. When the result count is greater than 1000, the delete record processor is stopped – it is a protective feature in case of a wrong file upload, which could generate multiple unexpected DEA profile deletions. Otherwise, when the result count is less than 1000, each record from MongoDB is parsed and sent to the Kafka Topic with the deleteDate attribute on the crosswalk. Then they will be processed by the MDM Manager component. The first step is an authorization check to verify if this event was produced by the Batch Channel component with the appropriate source name, country and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section. Profiles created with the deleteDate attribute on the crosswalk are soft deleted in Reltio.Finally the DEA file is moved to the archive subtree in the S3 bucket." + }, + { + "title": "FLEX Flow", + "pageID": "164470035", + "pageLink": "/display/GMDM/FLEX+Flow", + "content": "This flow processes FLEX files published by the Flex Team to an S3 Bucket. Flow steps are presented on the sequence diagram below. 
Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully downloaded.Each line in file is parsed in Batch Channel component and mapped to the dedicated FLEX object. FLEX file is saved in CSV Data Format, in that case one FLEX record is saved in one line in the file so there is no need to use record aggregator. The first line in the file is always the header line with column names, each next line is the FLEX records with "," (comma character) delimiter. The most complex thing in FLEX mapping is Identifiers mapping. When Flex records contain "GROUP_KEY" ("Address Key") attribute it means that Identifiers saved in "Other Active IDs" will be added to FlexID.Identifiers nested attributes. "Other Active IDs" is one line string with key value pairs separated by "," (comma character), and key-value delimiter ":" (colon character). Additionally for each type of customer Flex identifier is always saved in FlexID section.FLEX record will be published to Kafka topic dedicated for events for FLEX records. They will be processed by MDM Manager component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for HCO creation is stared. The full description of this process is in HCO Post section.TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between Batch Channel and MDM Manager components.After FLEX file is successfully processed, it is moved to archive subtree in S3 bucket." 
+ }, + { + "title": "HIN Flow", + "pageID": "164469995", + "pageLink": "/display/GMDM/HIN+Flow", + "content": "This flow processes HIN files published by the HIN Team to an S3 Bucket. Flow steps are presented on the sequence diagram below. Process steps description:HIN files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for HIN files.The Batch Channel component monitors the S3 location and processes the files uploaded to it.The folder structure for HIN is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.Files downloaded from S3 are processed in streaming mode. The processing of the file can be started before the full download of the file. This solution speeds up processing of big files, because there is no need to wait until the file is fully downloaded.Each line in the file is parsed in the Batch Channel component and mapped to a dedicated HIN object. The HIN file is saved in Fixed Width Data Format; one HIN record is saved in one line of the file, so there is no need to use a record aggregator. Each line has a specified length, and each column has specified start and end positions in the row.The HIN record will be published to the Kafka topic dedicated to events for HIN records. They will be processed by the MDM Manager component. The first step is an authorization check to verify if this event was produced by the Batch Channel component with the appropriate source name, country and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section.TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. 
Additionally each log is saved in MongoDB to create a full report from the current load and to correlate the record flow between the Batch Channel and MDM Manager components.After the HIN file is successfully processed, it is moved to the archive subtree in the S3 bucket." + }, + { + "title": "SAP Flow", + "pageID": "164469997", + "pageLink": "/display/GMDM/SAP+Flow", + "content": "This flow processes SAP files published by the GIS system to an S3 Bucket. Flow steps are presented on the sequence diagram below. Process steps description:SAP files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for SAP files.The Batch Channel component monitors the S3 location and processes the files uploaded to it.Important note: To facilitate fault tolerance the Batch Channel component will be deployed as multiple instances on different machines. However, to avoid conflicts, such as processing the same file twice, only one instance is allowed to do the processing at any given time. This is implemented via the standard Apache Camel mechanism of Route Policy, which is backed by the Zookeeper distributed key-value store. When a new file is picked up by a Batch Channel instance, the first processing step is to create a key in Zookeeper, acting as a lock. Only one instance will succeed in creating the key, therefore only one instance will be allowed to proceed.The folder structure for SAP is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.Files downloaded from S3 are processed in streaming mode. The processing of the file can be started before the full download of the file. This solution speeds up processing of big files, because there is no need to wait until the file is fully downloaded.Each line in the file is parsed in the Batch Channel component and mapped to a dedicated SAP object. 
In the case of SAP files, where one SAP record is saved in multiple lines of the file, there is a need to use the SAPRecordAggregator. This class reads each line of the SAP file and aggregates the lines to create a full SAP record. Each line starts with a Record Type character; the separator for SAP is "~" (tilde character). Only lines that start with the following characters are parsed and make up a full SAP record:1 – Header4 – Sales OrganizationE – LicenseC – NotesWhen the header line is parsed, the Account Type attribute is checked. Only SAP records with the "Z031" type are filtered and posted to Reltio.The BatchContext is downloaded from MongoDB for each SAP record. This context contains the Start Date for the SAP and 340B Identifiers. When the BatchContext is empty, the current timestamp is saved for each of the Identifiers; otherwise the start date for the identifiers is changed to the one saved in the Mongo cache. This Start Date must always be overwritten with the initial dates from the mongo cache.The aggregated SAP record will be published to the Kafka topic dedicated to events for SAP records. They will be processed by the MDM Manager component. The first step is an authorization check to verify if this event was produced by the Batch Channel component with the appropriate source name, country and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO POST section.TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. Additionally each log is saved in MongoDB to create a full report from the current load and to correlate the record flow between the Batch Channel and MDM Manager components.After the SAP file is successfully processed, it is moved to the archive subtree in the S3 bucket."
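The line aggregation described above — collecting "1"/"4"/"E"/"C" lines into one record, with "~" as the field separator — can be sketched as follows. The record shape (a header plus a list of detail lines) is an illustrative assumption, not the actual SAPRecordAggregator output:

```python
def aggregate_sap_records(lines):
    """Group lines into full SAP records. A '1' (header) line starts a new
    record; '4' (sales org), 'E' (license) and 'C' (notes) lines attach to
    the current record; all other record types are ignored. Fields within a
    line are '~'-separated."""
    records, current = [], None
    for line in lines:
        record_type = line[:1]
        if record_type == "1":            # header starts a new record
            current = {"header": line.split("~"), "details": []}
            records.append(current)
        elif record_type in ("4", "E", "C") and current is not None:
            current["details"].append(line.split("~"))
    return records
```

A subsequent filtering step would keep only records whose header carries the "Z031" Account Type before posting to Reltio.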
+ }, + { + "title": "US overview", + "pageID": "164470019", + "pageLink": "/display/GMDM/US+overview", + "content": "" + }, + { + "title": "Generic Batch", + "pageID": "164469994", + "pageLink": "/display/GMDM/Generic+Batch", + "content": "The generic batch offers the functionality of configuring processes of HCP/HCO data loading from text files (CSV) into MDM.The loading processes are defined in the configuration, without the need for changes in the implementation.Description of the processDefinition of single data flow Configuration (definition) of each data flow contains:Data flow name Definition of data files. Each file is described by: File name patternMappings for each column Columns in file definition are described by: Column index and name Column type (string, date, number, fixed value)Attribute of the entity to which the value from the column is mappedConditional mapping parametersAmazon S3 resources and local temporary directory configurationAmazon S3 input directory Amazon S3 archive directory Local temporary directory Kafka topic names for sending asynchronous requests Mongo database connection parameters (common for all flow definitions) Currently defined data flows:Flow nameCountrySource systemInput files (with names required after preprocessing stage)Detailed columns to entity attribute mapping fileTH HCPTHCICRhcpEntitiesfileNamePattern: '(TH_Contact_In)+(\.(?i)(txt))$'hcpAddressesfileNamePattern: '(TH_Contact_Address_In_JOINED)+(\.(?i)(txt))$'hcpSpecialtiesfileNamePattern: '(TH_Contact_Speciality_In)+(\.(?i)(txt))$'mdm-gateway\batch-channel\src\main\resources\flows.ymlSA HCPSALocalMDMhcpEntitiesfileNamePattern: '(KSA_HCPs)+(\.(?i)(csv))$'mdm-gateway\batch-channel\src\main\resources\flows.yml" + }, + { + "title": "Get Entity", + "pageID": "164470021", + "pageLink": "/display/GMDM/Get+Entity", + "content": "DescriptionOperation getEntity of MDM Manager fetches the current state of the OV from the MongoDB store.The detailed process flow is shown 
below.Flow diagramGet EntityStepsClient sends an HTTP request to the MDM Manager endpoint.Kong Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks the user's permissions to call the getEntity operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with searching for the specified entity by id.MDM Manager checks the user profile configuration for the getEntity operation to determine whether to return results based on the MongoDB state or call Reltio directly.For clients configured to use MongoDB – if the entity is found, then its status is checked. For entities with LOST_MERGE status the parentEntityId attribute is used to fetch and return the parent Entity instead. This is in line with default Reltio behavior, since MDM Manager is supposed to mirror Reltio.TriggersTrigger actionComponentActionDefault timeREST callManager: GET /entity/{entityId}get specific objects from the MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagerget Entities in MDM systems" + }, + { + "title": "GRV & GCP events processing", + "pageID": "164470032", + "pageLink": "/pages/viewpage.action?pageId=164470032", + "content": "ContactsVendorContactMAP/DEG API supportMatej.Dolanc@COMPANY.comThis flow processes events from GRV and GCP systems distributed through Event Hub. Processing is split into three stages. Since each stage is implemented as a separate Apache Camel route and separated from other stages by a persistent message store (Kafka), it is possible to turn each stage on/off separately using the Admin Console.SQS subscriptionThe first processing stage receives data published by Event Hub from Amazon SQS queues, as shown on the diagram below.Figure 5. 
First processing stageProcess steps description:Data changes in GRV and GCP are captured by Event Hub and distributed to MAP Channel components using SQS queues with names:eh-out-reltio-gcp-update-eh-out-reltio-gcp-batch-update-eh-out-reltio-grv-update-Events pulled from the SQS queue are published to a Kafka topic as a way of persisting them (allowing reprocessing) and to allow event prioritization and throughput control to Reltio. The following topics are used:-gw-internal-gcp-events-raw-gw-internal-grv-events-rawTo ensure correct ordering of messages in Kafka, a custom message key is generated. It is a concatenation of the market code and the unique Contact/User id.Once the message is published to Kafka, it is confirmed in SQS and deleted from the queue.Enrichment with DEG dataFigure 6. Second processing stageThe second processing stage is focused on getting data from the DEG system. The control flow is presented below.Process steps description:MAPChannel receives events from the Kafka topic on which they were published in the previous stage.MAPChannel filters events based on country activation criteria – events coming from non-activated countries are skipped. The list of active countries is controlled by a configuration parameter, separately for each source (GRV, GCP);Next, MapChannel calls the DEG REST services (INT2.1 or INT 2.2, depending on whether it is a GRV or GCP event) to get detailed information about the changed record. DEG always returns the current state of GRV and GCP records.Data from DEG is published to a Kafka topic (again, as a way of persisting it and separating processing stages). The topics used are:-gw-internal-gcp-events-deg-gw-internal-grv-events-degAgain, a custom message key (a concatenation of the market code and the unique Contact/User id) is used.Creating HCP entitiesThe last processing stage involves mapping data to the Reltio format and calling the MDM Gateway API to create HCP entities in Reltio. A process overview is shown below.Figure 7. 
Third processing stageProcess steps description:MAPChannel receives events from the Kafka topic on which they were published in the previous stage.MAPChannel filters events based on country activation criteria; events coming from countries that are not activated are skipped. A list of active countries is controlled by a configuration parameter, separately for each source (GRV, GCP) – this is exactly the same parameter as in the previous stage.MapChannel maps data from GCP/GRV to HCP:EMEA mappingGLOBAL mappingThe validation status of the mapped HCP is checked – if it matches a configurable list of inactive statuses, the deleteCrosswalk operation is called on MDM Manager. As a result, entity data originating from GCP/GRV is deleted from Reltio.Otherwise, Map Channel calls the REST operation POST /hcp on MDM Manager (INT4.1) to create or replace the HCP profile in Reltio. MDM Manager handles the complexity of the update process in Reltio.Processing events from multiple sources and prioritizationAs mentioned in previous sections, there are three different SQS queues that are populated with events by Event Hub. Each of them is processed by a separate Camel Route, allowing for some flexibility and for prioritizing one queue above the others. This can be accomplished by altering the consumer configuration found in the application.yml file. The relevant section of that file is shown below. The queue eh-out-reltio-gcp-batch-update-dev has 15 consumers (and therefore 15 processing threads), while the two remaining queues have only 5 consumers each. This allows faster processing of GCP Batch events.The same principle applies to further stages of the processing, which use Kafka endpoints. Again, there is a configuration section dedicated to each of the internal Kafka topics that allows tuning the pace of processing. 
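The custom message key used for ordering in this flow (a concatenation of market code and unique Contact/User id) could be sketched as follows; the '-' separator and uppercase normalisation are assumptions, since the documentation only states that the key is a concatenation of the two values:

```python
def make_message_key(market_code: str, record_id: str) -> str:
    """Build the Kafka message key so that all events for the same
    Contact/User in the same market share a key, land on the same
    partition, and therefore keep their relative order.

    The '-' separator and uppercasing are illustrative assumptions."""
    return f"{market_code.upper()}-{record_id}"
```

Because Kafka derives the partition from the key, two updates to the same record always land on the same partition and are consumed in order.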
" + }, + { + "title": "HUB UI User Guide", + "pageID": "302701919", + "pageLink": "/display/GMDM/HUB+UI+User+Guide", + "content": "This page contains the complete user guide related to the HUB UI.Please check the sub-pages to get details about the HUB UI and usage.Start with Main Page - HUB Status - main pageA handful of information that may be helpful when you are using HUB UI:UI URL: https://api-emea-prod-gbl-mdm-hub.COMPANY.com/ui-emea-prod/ (there is no need to know all URLs, click one, and in the top right corner you can easily switch between tenants).How to connect to UI and gain access to all features - UI Connect Guide(INTERNAL USAGE only by HUB Admins) UI role names and standards - Add new role and add users to the UIIf you want to add any new features to the HUB UI please send your suggestions to the HUB Team: DL-ATP_MDMHUB_SUPPORT@COMPANY.com" + }, + { + "title": "HUB Admin", + "pageID": "302701923", + "pageLink": "/display/GMDM/HUB+Admin", + "content": "All the subpages contain the user guide - how to use the hub admin tools.To gain access to the selected operation please read - UI Connect Guide" + }, + { + "title": "1. 
Kafka Offset", "pageID": "302703128", "pageLink": "/display/GMDM/1.+Kafka+Offset", "content": "DescriptionThis tab is available to a user with the MODIFY_KAFKA_OFFSET management role.Allows you to reset the offset for the selected topic and group.Kafka ConsumerPlease turn off your Kafka Consumer before executing this operation; it is not possible to manage an ACTIVE consumer groupRequired parametersGroup ID - the Kafka Consumer group that is connected to the topicTopic - the Kafka topic name that the user wants to manageDetailsThe offset parameter can take one of the following values:earliest - reset the consumer group to the beginning of the Kafka topic - use this to read all events one more timelatest - reset the consumer group to the end of the Kafka topic - use this to skip all events and set the consumer group at the end of the topic.shift by - allows moving the consumer group by a specific amount of events. A negative number (e.g. -1000) shifts the consumer group by 1000 events to the left - meaning you will get 1000 more events. A positive number (e.g. 1000) shifts the consumer group by 1000 events to the right - meaning you will get 1000 fewer events. Use case - you want to read 1000 events: first reset the offset to latest - LAG will be 0; then shift by (-1000) - LAG will be 1000 eventsdate - allows setting the consumer group to a specific date, useful when you want to read events since a specific day. View" }, { "title": "10. Jobs Manager", "pageID": "337846274", "pageLink": "/display/GMDM/10.+Jobs+Manager", "content": "DescriptionThis page is available to users that scheduled the JOBAllows you to check the current status of an asynchronous operation Required parametersJob Type - choose a JOB to check the statusDetailsThe page shows the statuses of jobs for each operation.Click the Job Type and select the business operation.In the table below all the jobs for all users in your AD group are displayed. 
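The 'shift by' arithmetic from the Kafka Offset tab can be illustrated with a small sketch; the function names and the clamping to the partition's valid offset range are assumptions, not the HUB implementation:

```python
def shifted_offset(current: int, shift: int, earliest: int, latest: int) -> int:
    """Apply a 'shift by' reset: a negative shift moves left (re-reads
    events), a positive shift moves right (skips events). The result is
    clamped to the partition's valid range (an assumption)."""
    return max(earliest, min(latest, current + shift))

def lag(latest: int, committed: int) -> int:
    """LAG = events published to the partition but not yet consumed."""
    return latest - committed
```

Replaying the documented use case: resetting to latest gives LAG 0, then shifting by -1000 gives LAG 1000.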
You can track the jobs and download the reports here.Click the Refresh view button to refresh the pageClick the icon to download the report.View" }, { "title": "2. Partials", "pageID": "302703134", "pageLink": "/display/GMDM/2.+Partials", "content": "DescriptionThis tab is available to the user with the LIST_PARTIALS role to manage the precallback service. It allows you to download a list of partials - these are events for which a change to Reltio has been detected and whose sending to output topics has been suspended. The operation allows you to specify the limit of returned records and to sort them by the time of their occurrence.HUB ADMINUsed only internally by MDM HUB ADMINSRequired parametersN/A - by default, you will get all partial entities.DetailsReturn timestamp instead - mark as true to get a date format instead of the duration of the partial in minutesReturn epoch millis - mark as true to get an EPOCH timestamp instead of the date formatLimit - put a number to limit the number of resultsSort - change the sort orderView" }, { "title": "3. HUB Reconciliation", "pageID": "302703130", "pageLink": "/display/GMDM/3.+HUB+Reconciliation", "content": "DescriptionThis tab is available to the user with the reconciliation service management roles - RECONCILE and RECONCILE_COMPLEXThe operation accepts a list of identifiers for which it is to be performed. It allows you to trigger a reconciliation task for a selected type of object:relationsentitiespartialsDivided into 2 sections:TOP - Simple JOBS - a simple query where the input is the entity URIBOTTOM - Complex jobs - a complex query that schedules an Airflow JOB.Simple JOBS:Required parametersN/A - by default generate CHANGE events and skip the entity when it is in REMOVE/INACTIVE/LOST_MERGE state. In that case, we only push CHANGE events. 
DetailsParameterDefault valueDescriptionforcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.push lost mergefalseReconcile event with LOST_MERGE statuspush inactivatedfalseReconcile event with INACTIVE statuspush removedfalseReconcile event with REMOVE statusViewComplex JOBS:Required parametersCountries - list the countries for which you want to generate CHANGE events. DetailsSimpleParameterDefault valueDescriptionforcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.CountriesN/Alist of countries, e.g.: CA, MXSourcesN/Acrosswalk names for which you want to generate the events.Object TypeENTITYgenerates events from ENTITY or RELATION objectsEntity Typedepends on Object TypeCan be for ENTITY: HCP/HCO/MCO/DCRCan be for RELATION: input text in which you specify the relation, e.g.: OtherHCOToHCOBatch limitN/Alimit the number of events - useful for testing purposesComplexParameterDefault valueDescriptionforcefalseSend an event to output topics even when a partial update is detectedEntity QueryN/AEnter the MATCH query to get Mongo results and generate events, e.g.: { "status": "ACTIVE", "sources": "ONEKEY", "country": "gb" }Entities limitN/Alimit the number of events - useful for testing purposesRelation QueryN/AEnter the MATCH query to get Mongo results and generate events, e.g.: { "status": "ACTIVE", "sources": "ONEKEY", "country": "gb" }Relation limitN/Alimit the number of events - useful for testing purposesView" }, { "title": "4. Kafka Republish Events", "pageID": "302703132", "pageLink": "/display/GMDM/4.+Kafka+Republish+Events", "content": "DescriptionThis page is available to users with the publisher manager roles - RESEND_KAFKA_EVENT and RESEND_KAFKA_EVENT_COMPLEXAllows you to resend events to output topics. It can be used in two modes: simple and complex.The operation will trigger a JOB with the selected parameters. 
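The MATCH queries used by the complex reconciliation and republish jobs can be assembled programmatically; a minimal sketch, assuming the field names from the documented example query ({ "status": "ACTIVE", "sources": "ONEKEY", "country": "gb" }) and a hypothetical build_match_query helper:

```python
def build_match_query(countries=None, sources=None, status="ACTIVE"):
    """Assemble a MongoDB match filter for a complex job.

    Field names ('status', 'country', 'sources') follow the example
    query in the documentation; the builder itself is illustrative.
    Country codes are lowercased, matching the 'gb' example."""
    query = {"status": status}
    if countries:
        query["country"] = {"$in": [c.lower() for c in countries]}
    if sources:
        query["sources"] = {"$in": list(sources)}
    return query
```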
In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Simple modeRequired parametersCountries - list the countries for which you want to generate CHANGE events. DetailsIn this mode, the user specifies values for defined parameters:ParameterDefault valueDescriptionSelect moderepublish CHANGE eventsnote:when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then it will generate LOST_MERGED events, DELETED - then it will generate REMOVED events, INACTIVE - then it will generate INACTIVATED events.when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then it will generate LOST_MERGED events, DELETED - then it will generate REMOVED events, INACTIVE - then it will generate INACTIVATED events.The difference between these 2 modes is that one generates CHANGE events and the other CREATE events (depending on whether this is IDL generation or not)CountriestrueList of countries for which the task will be performedSourcesfalseList of sources for which the task will be performedObject typetrueObject type for which the operation will be performed, available values: Entity, RelationReconciliation targettrueOutput kafka topic namelimittrueLimit of generated eventsmodification time fromfalseEvents with a modification date greater than this will be generatedmodification time tofalseEvents with a modification date less than this will be generatedViewComplex modeRequired parametersEntities query or Relation queryDetailsIn this mode, the user defines the Mongo query that will be used to generate eventsParameterRequiredDescriptionSelect moderepublish CHANGE eventsnote:when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are ACTIVE, and will check if the 
entity is LOST_MERGE - then it will generate LOST_MERGED events, DELETED - then it will generate REMOVED events, INACTIVE - then it will generate INACTIVATED events.when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then it will generate LOST_MERGED events, DELETED - then it will generate REMOVED events, INACTIVE - then it will generate INACTIVATED events.The difference between these 2 modes is that one generates CHANGE events and the other CREATE events (depending on whether this is IDL generation or not)Entities querytrueResend entities Mongo queryEntities limitfalseResend entities limitRelation querytrueResend relations Mongo queryRelations limittrueResend relations limitReconciliation targettrueOutput kafka topic nameView" }, { "title": "5. Reltio Reindex", "pageID": "337846264", "pageLink": "/display/GMDM/5.+Reltio+Reindex", "content": "DescriptionThis page is available to users with the reltio reindex role - REINDEX_ENTITIESAllows you to schedule a Reltio Reindex JOB. It can be used in two modes: query and file.The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Required parametersSpecify Countries in query mode or a file with entity uris in file mode. 
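The status-to-event-type rules repeated in both republish modes reduce to a small lookup; a sketch with an assumed function name (the status and event names themselves come from the documentation):

```python
def republish_event_type(status: str, mode: str = "CHANGE") -> str:
    """Pick the event type a republish job emits for one entity.

    ACTIVE entities get the selected mode (CHANGE or CREATE); the other
    statuses override the mode, as described in the mode notes."""
    overrides = {
        "LOST_MERGE": "LOST_MERGED",
        "DELETED": "REMOVED",
        "INACTIVE": "INACTIVATED",
    }
    return overrides.get(status, mode)
```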
Detailsquery ParameterDescriptionCountriesList of countries for which the task will be performedSourcesList of sources for which the task will be performedEntity typeObject type for which the operation will be performed, available values: HCP/HCO/MCO/DCRBatch limitAdd if you want to limit the reindex to a specific number - helpful for testing purposesfileInput fileFile format: CSV Encoding: UTF-8Column headers: - N/AInput file example:entities/E0pV5Xmentities/1CsgdXN4entities/2O5RmRiViewReltio Reindex details:HUB executes the Reltio Reindex API with the following default parameters:ParameterAPI Parameter nameDefault ValueReltio detailed descriptionUI detailsEntity typeentityTypeN/AIf provided, the task restricts the reindexing scope to Entities of the specified type.The user can specify the EntityType in the search API and the URI list will be generated. There is no need to pass this to the Reltio API because we are using the generated URI listSkip entities countskipEntitiesCount0If provided, sets the number of Entities which are skipped during reindexing.-Entities limitentitiesLimitinfinityIf provided, sets the maximum number of Entities that are reindexed-Updated sinceupdatedSinceN/ATimestamp in Unix format. If this parameter is provided, then only entities with a greater or equal timestamp are reindexed. This is a good way to limit the reindexing to newer records.-Update entitiesupdateEntitiestrueIf set to true, initiates an update for Search, Match tables, History. If set to false, then no rematching, no history changes, only ES structures are updated.If set to true (default), in addition to refreshing the ElasticSearch index, the task also updates history, match tables, and the analytics layer (RI). This ensures that all indexes and supporting structures are as up-to-date as possible. As explained above, however, triggering all these activities may decrease the overall performance level of the database system for business work, and overwhelm the event streaming channels. 
If set to false, the task updates ElasticSearch data only. It does not perform rematching or update history or analytics. These other activities can be performed at different times to spread out the performance impact.-Check crosswalk consistencycheckCrosswalksConsistencyfalseIf true, this will start a task to check if all crosswalks are unique before reindexing data. Please note, if the entitiesLimit or distributed parameters have any value other than the default, this parameter will be unavailableSpecify true to reindex each Entity, whether it has changed or not. This operation ensures that each Entity in the database is processed. Reltio does not recommend this option – it decreases the performance of the reindex task dramatically, and may overload the server, which will interfere with all database operations.-URI listentityUrisgenerated list of URIs from UIOne or more entity URIs (separated by a comma) that you would like to process. For example: entities/, entities/.Reltio suggests using 50-100K URIs in one API request; this is a Reltio limitation. Our process splits the input into 100K-URI files if required. Based on the input file size, one JOB on the HUB end may produce multiple Reltio tasks.The UI generates the list of URIs from a Mongo query, or we run the reindex with the input filesIgnore streaming eventsforceIgnoreInStreamingfalseIf set to true, no streaming events will be generated until after the reindex job has completed.-DistributeddistributedfalseIf set to true, the task runs in distributed mode, which is a good way to take advantage of a networked or clustered computing environment to spread the performance demands of reindexing over several nodes. -Job parts counttaskPartsCountN/A due to distributed=falseDefault value: 2The number of tasks which are created for distributed reindexing. Each task reindexes its own subset of Entities. Each task may be executed on a different API node, so that all tasks can run in parallel. 
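The 100K-URI split applied to entityUris requests can be sketched as a simple chunking step (the function name and list-based signature are assumptions; only the batch-size limit comes from the documentation):

```python
def split_uri_batches(uris, batch_size=100_000):
    """Split an entity URI list into batches no larger than Reltio's
    suggested per-request limit; each batch would be submitted as one
    reindex request, so one HUB JOB may produce several Reltio tasks."""
    return [uris[i:i + batch_size] for i in range(0, len(uris), batch_size)]
```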
Recommended value: the number of API nodes which can execute the tasks. Note: This parameter is used only in distributed mode (distributed=true); otherwise, it is ignored.-More details in Reltio docs:https://docs.reltio.com/en/explore/get-going-with-apis-and-rocs-utilities/reltio-rest-apis/engage-apis/tasks-api/reindex-data-taskhttps://docs.reltio.com/en/explore/get-your-bearings-in-reltio/console/tenant-management-applications/tenant-management/jobs/creating-a-reindex-data-job" }, { "title": "6. Merge/Unmerge entities", "pageID": "337846268", "pageLink": "/pages/viewpage.action?pageId=337846268", "content": "DescriptionThis page is available to users with the merge/unmerge role - MERGE_UNMERGE_ENTITIESAllows you to schedule a Merge/Unmerge JOB. It can be used in two modes: merge or unmerge.The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Required parametersfile with profiles to be merged or unmerged in the selected formatDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch merge & unmergeView" }, { "title": "7. Update Identifiers", "pageID": "337846270", "pageLink": "/display/GMDM/7.+Update+Identifiers", "content": "DescriptionThis page is available to users with the update identifiers role - UPDATE_IDENTIFIERSAllows you to schedule an update identifiers JOB.The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Required parametersfile with profiles to be updated in the selected formatDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch update identifiersView" }, { "title": "8. 
Clear Cache", "pageID": "337846272", "pageLink": "/display/GMDM/8.+Clear+Cache", "content": "DescriptionThis page is available to users with the ETL clear cache role - CLEAR_CACHE_BATCHThe cache is related to the Direct Channel ETL jobs:Docs: ETL Batch Channel and ETL BatchesAllows you to clear the ETL checksum cache. It can be used in three modes: query, by_source, or file.The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Query modeRequired parametersBatch name - specify a batch name for which you want to clear the cacheObject type - ENTITY or RELATIONEntity type - e.g. configuration/relationTypes/Employment or configuration/entityTypes/HCPDetailsParameterDescriptionBatch nameSpecify a batch on which the clear cache will be triggeredObject typeENTITY or RELATIONEntity typeIf object type is ENTITY then e.g:configuration/entityTypes/HCOconfiguration/entityTypes/HCPIf object type is RELATION then e.g.:configuration/relationTypes/ContactAffiliationsconfiguration/relationTypes/EmploymentCountryAdd a country if required to limit the clear cache queryViewby_source modeRequired parametersBatch name - specify a batch name for which you want to clear the cacheSource - crosswalk type and valueDetailsSpecify a batch name and click add a source to specify new crosswalks that you want to remove from the cache.Viewfile modeRequired parametersBatch name - specify a batch name for which you want to clear the cachefile with crosswalks to be cleared in the ETL cache in the selected format for the specified batchDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch clear ETL data load cacheView" }, { "title": "9. 
Restore Raw Data", "pageID": "356650113", "pageLink": "/display/GMDM/9.+Restore+Raw+Data", "content": "DescriptionThis page is available to users with the restore data role - RESTOREThe raw data contains data sent to MDM HUB:Docs: Restore raw dataAllows you to restore raw (source) data on the selected environmentThe operation will trigger an asynchronous job with the selected parameters.Restore entitiesRequired parametersSource environment - restore data from another environment, e.g. from QA to the DEV environment; the default is the currently logged-in environmentEntity type - restore data only for the specified entity type: HCP, HCO, MCOOptional parametersCountries - restore data only for the specified entity country, e.g.: GB, IE, BRSources - restore data only for the specified entity source, e.g.: GRV, ONEKEYDate Time - restore data created after the specified date timeViewRestore relationsRequired parametersSource environment - restore data from another environment, e.g. from QA to the DEV environment; the default is the currently logged-in environmentOptional parametersCountries - restore data only for the specified entity country, e.g.: GB, IE, BRSources - restore data only for the specified entity source, e.g.: GRV, ONEKEYRelation types - restore data only for the specified relation type, e.g.: configuration/relationTypes/OtherHCOtoHCOAffiliationsDate Time - restore data created after the specified date timeView" }, { "title": "HUB Status - main page", "pageID": "333155175", "pageLink": "/display/GMDM/HUB+Status+-+main+page", "content": "DescriptionThe UI is divided into the following sections:MENUContains links to Ingestion Services ConfigurationIngestion Services TesterHUB AdminHEADERShows the current tenant name, click to quickly change the tenant to a different one.Shows the logged-in user name. Click to log out. 
FOOTERLink to User GuideLink to Connect GuideLink to the whole HUB documentationLink to the Get Help pageCurrently deployed versionClick to get the details about the CHANGELOGon PROD - released versionon NON-PROD - snapshot version - the Changelog contains unreleased changes that will be deployed in the upcoming release to PROD.The HUB Status dashboard is divided into the following sections:On this page you can check HUB processing status / kafka topics LAGs / API availability / Snowflake DataMart refresh. API (related to the Direct Channel)API Availability - status related to the HUB API (all APIs exposed by HUB, e.g. based on EMEA PROD - EMEA PROD Services)Reltio READ operations performance and latency - for example, GET Entity operations (every operation that gets data from Reltio)Reltio WRITE operations performance and latency - for example, POST/PATCH Entity operations (every operation that changes data in Reltio)Batches (related to the ETL Batch Channel)Currently running batches and duration of completed batches.Currently running batches may cause data load and impact event processing visible in the dashboards below (inbound and outbound)Event Processing Shows information about events that we are processing:Inbound - all updates made by HUB on profiles in Reltioshows the ETA based on the:ETL Batch Channel (loading and processing events into HUB from ETL)Direct Channel processing:loading ETL data to Reltioloading Rankings/Callbacks/HcoNames (all updates on profiles in Reltio)Outbound - streaming channel processing (related to the Streaming channel)shows the ETA based on the:Streaming channel - all events processing starting from the Reltio SQS queue, events currently being processed by HUB Streaming channel microservices.DataMart (related to the Snowflake MDM Data Mart)The time when the last REGIONAL and GLOBAL Snowflake data marts were refreshed.Shows the number of events that are still being processed by HUB microservices and are not yet consumed by the Snowflake Connector. 
" + }, + { + "title": "Ingestion Services Configuration", + "pageID": "302701936", + "pageLink": "/display/GMDM/Ingestion+Services+Configuration", + "content": "DescriptionThis page shows configuration related to theData Quality checksSource Match CategorizationCleansing & FormattingAuto-FillsMinimum Viable Profile Check. Noise listsIdentifier noise listDuplicate identifier config.Choose a filter to switch between different entity types and use input boxes to filter results.Available filters:FilterDescriptionEntity TypeHCP/HCO/MCO - choose an entity type that you want to review and click SearchCategoryPick to limit the result and review only selected rulesCountryType a country code to limit the number of rules related to the specific countrySource Type a source to limit the number of rules related to the specific sourceQueryOpen Text filed -helps to limit the number of results when searching for specific attributes. Example case - put the "firstname" and click Search to get all rules that modify/use FirstName attribute.Audit filedComparison typeDateUse a combination of these 3 attributes to find rules created before or after a specific date. Or to get rules modified after a specific date. Click on the:Noise List ConfigID Noise ConfigDuplicate ID ConfigAnd get detailed information about current rules for specific type.NOTE: remember to change entity type and click Search to view rules for different entity types.                                                                                  " + }, + { + "title": "Ingestion Services Tester", + "pageID": "302701950", + "pageLink": "/display/GMDM/Ingestion+Services+Tester", + "content": "DescriptionThis site allows you to test quality service. The user can select the input entity using the 'upload' button, paste the content of the entity into the editor or drag it. After clicking the 'test' button, the entity will be sent to the quality service. After processing, the result will appear in the right window. 
The user can choose two modes of presenting the result - the whole entity or the difference. In the second mode, only changes made by the quality service will be displayed. After clicking the 'validation result' button, a dialog box will be displayed with information on which rules were applied during the operation of the service for the selected entity.Quality service tester editorValidation summaryHere you can check which rules were "triggered" and check the rule in the Ingestion Services Configuration using the Rule name.Search by text using an attribute or the "triggered" keyword to get all triggered rules." }, { "title": "Incremantal batch", "pageID": "164470033", "pageLink": "/display/GMDM/Incremantal+batch", "content": "The diagram below presents the generic structure of the batch flow. Data sources will have their own instances of the flow configured.The flow consists of the following stages: Flow triggering is done by Airflow based on a schedule suited to a source data delivery time. The source data files are downloaded from an AWS S3 bucket managed by MDM HUB and they are preprocessed. The preprocessing is done using standard Unix tools run by Airflow as docker containers, and it is specific to particular source requirements. The goal of the stage is preparing data for the mapping stage by cleaning and formatting. Source data are mapped to the Reltio data model using Generic Mapper – a custom Java component that uses flexible mapping rules expressed as metadata configuration. The component produces HCO/HCP/relation update events and publishes them to dedicated Kafka topics. Each flow uses its own topic to control access and prevent uncontrolled data modification in Reltio by a source (the topic name is mapped to client privileges in HUB Gateway). The mapper generates update events in an order that reflects Reltio object dependencies. 
First, Main HCO events are generated, then child HCO events, and at the end HCP events. MDM Gateway receives the update events, validates them, calls the respective Reltio API to update profiles in Reltio, and sends acknowledgement events (ACK) to a response topic containing the statuses of processing the update events. The events are processed in parallel. The number of threads depends on the number of Kafka consumers configured in the Gateway. The Generic Mapper component receives the ACKs and sends events for the next Reltio object, or, if all events are processed, it generates a report from the load. At the end of the process, the input files and the load report are copied to an archive location in S3. Generic MapperGeneric Mapper is a component that converts source data into documents in the unified format required by the Reltio API. The component is flexible enough to support incremental batches as well as full snapshots of data. Handling a new type of data source is (in most cases) a matter of creating a new configuration that consists of stage and metadata parts. The first one defines details of so-called "stages", i.e.: HCO, HCP, etc. The latter contains all mapping rules defining how to transform source data into attribute path/value form. Once data are transformed into the mentioned form it is easy to store them, merge them or do any other operation (including Reltio document creation) in the same way for all types of sources. This simple idea makes Generic Mapper a very powerful tool that can be extended in many ways. A stage is a logical group of steps that as a whole processes a single type of Reltio document, i.e.: an HCO entity. At the beginning of each stage the component reads source data and generates attribute changes (events) and then stores them in an output file. It is worth noting that many data sources can be configured. Once the output file is produced it is sorted. The above logic can be called phase 1 of a stage. Until now no database has been used. 
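Phase 1's transformation of source records into attribute path/value events can be sketched as follows; the rule format (a plain source-column to attribute-path map) is a heavy simplification of the real metadata configuration, and all names are assumptions:

```python
def to_attribute_events(record, mapping):
    """Apply mapping rules to one source record, producing
    (attribute path, value) change events in the unified form that
    later phases can sort, aggregate, and merge per Reltio document."""
    events = []
    for source_field, attr_path in mapping.items():
        value = record.get(source_field)
        if value not in (None, ""):  # empty source values produce no event
            events.append((attr_path, value))
    return events
```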
In phase 2, the sorted file is read and events are aggregated into groups in such a way that each element of a group refers to the same Reltio document. Next, all lookups are resolved against a database, merged with the previous version of the document attributes, and persisted. Then, the Reltio document (JSON) is created and sent to Kafka. The stage is finished when all acks from the gateway are collected. Under the hood each stage is a sequence of jobs: a job (i.e.: the one for sorting a file) can be started only if its direct predecessor finished with success. Stages can be configured to run in parallel and depend on each other. Load reports At runtime Generic Mapper collects various types of data that give insight into the DAG state and load statistics. The HTML report is written to disk each time the status of any job changes. The report consists of three panels: Summary, Metrics and DAG. The summary panel contains details of all jobs within the DAG that was created for the current execution (load). The DAG panel shows relationships between jobs in the form of a graph. The metrics panel presents details of a load. Each metric key is prefixed by a stage name. Document processed or Document sent: number of Reltio documents processed with success. In the latter case the document was additionally sent to MDM Gateway. Document not sent due to its deleted status: number of documents not processed because their status is marked as deleted (only for initDeletedLoadEnabled set to false, otherwise a document is processed anyway) Document not sent due to lack of delta: number of documents not processed because no change was discovered (only for deltaDetectionEnabled set to true, otherwise a document is processed anyway) MDMRequest creation error: number of documents not sent due to a problem with building the MDMRequest object. 
This may happen if source data are not complete, i.e.: only specializations without root object attributes were delivered Lookup error: number of documents not processed due to problems with finding referenced data in a database. Record filtered out: number of records filtered out during the attribute change generation step. By default no record is filtered out; this may be changed via the mapping configuration. Invalid record error: number of invalid records" }, { "title": "Kafka offset modification", "pageID": "273695178", "pageLink": "/display/GMDM/Kafka+offset+modification", "content": "DescriptionThe REST interface exposed through the MDM Manager component is used by clients to modify the Kafka offset.During the update, we check access to the groupId and the specific topic.Diagram 1 presents the flow and Kafka communication during offset modification.The diagrams below present the sequence of steps in processing client calls.Flow diagramStepsThe client sends an HTTP request to the MDM Manager endpoint.Kong API Gateway receives the request and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks user permissions to call the kafka offset modification operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with the offset modification.Offset modification cases:latest: to the latest offsetearliest: to the earliest offsetto date: to the offset based on a specified timestamp (used to retrieve the earliest offset whose timestamp is greater than or equal to the given timestamp in the corresponding partition, timestamp – in milliseconds)If you want to shift the offset by a specific number of messages, you can use the "shift" attribute and specify a positive or negative number of messages to shift (the offset is calculated in memory based on the "offset + shift" properties)TriggersTrigger actionComponentActionDefault timeREST callManager: POST /kafka/offsetmodify kafka offsetAPI synchronous requests - 
realtimeRequestResponse{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "latest"}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 2        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "earliest"}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 0        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "2022-12-15T08:15:02Z"}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 1        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "latest",    "partition": 4}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 4,            "offset": 2        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "2022-12-15T08:15:02Z",    "shift": 5}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 6        }    ]}Dependent componentsComponentUsageManagercreate update Entities in MDM systemsAPI Gatewayproxy REST and secure access"
  },
  {
    "title": "LOV read",
    "pageID": "164469998",
    "pageLink": "/display/GMDM/LOV+read",
    "content": "The flow is triggered by the API GET /lookup call. It retrieves LOV data from the HUB store. 
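The offset modification cases shown in the request/response examples above (earliest, latest, a timestamp-resolved offset, and a shift) can be sketched as plain resolution logic. This is an illustrative model only; the function and parameter names are assumptions, not the MDM Manager implementation:

```python
def resolve_offset(earliest, latest, spec=None, current=None, shift=0):
    """Illustrative model of the documented offset resolution rules.

    spec:    "earliest", "latest", or an offset already resolved from a
             timestamp (the service resolves timestamps via Kafka itself).
    current: the consumer group's current offset (the base when no spec
             is given, as with the admin API's shiftBy case).
    shift:   applied in memory as "offset + shift".
    """
    if spec == "earliest":
        base = earliest
    elif spec == "latest":
        base = latest
    elif spec is not None:
        base = spec          # offset already resolved from a timestamp
    else:
        base = current       # shift relative to the current position
    return base + shift
```

With the numbers from the examples above: earliest resolves to 0, latest to 2, and a timestamp resolved to offset 1 combined with shift 5 yields 6.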
Process steps description:Client sends HTTP request to MDM Manager endpoint.Kong Gateway receives request and handles authenticationIf the authentication succeeds, the request is forwarded to MDM Manager componentMDM Manager checks user permissions to call getEntity operation and the correctness of the requestMDM Manager checks user profile configuration for lookup operation to determine whether to return results based on MongoDB state, or call Reltio directly.Request parameters are used to dynamically generate a query. This query is executed in findByCriteria method.Query results are returned to the client" + }, + { + "title": "LOV update process (Nucleus)", + "pageID": "164469999", + "pageLink": "/pages/viewpage.action?pageId=164469999", + "content": "\nProcess steps description:\n\n\tNucleus Subscriber monitors AWS S3 location where CCV files are uploaded.\n\tWhen a new file is found, it is downloaded and processed. Single CCV zip file contains multiple *.exp files, which contain different parts of LOV – header, description, references to values from external systems.\n\tEach *.exp file is processed line by line, with Dictionary change events generated for each line. These events are published to a Kafka topic from where the Event Publisher component receives them.\n\tAfter CCV file is processed completely, it is moved to archive subtree in S3 bucket folder structure.\n\tWhen Dictionary change event is received in Event Publisher the current state of LOV is first fetched from Mongo database. 
New data from the event is then merged with that state and the result is saved back in Mongo.\n\n\n\nAdditional remarks:\n\n\tCorrectness is ensured by the fact that LOV id is used as Kafka partitioning key, guaranteeing that events related to the same LOV are processed sequentially by the same thread.\n\tDictionary change events are considered internal to MDM Publishing Hub – they are not forwarded to client systems subscribing to Entity change events.\n\n"
  },
  {
    "title": "LOV update processes (Reltio)",
    "pageID": "164469992",
    "pageLink": "/pages/viewpage.action?pageId=164469992",
    "content": "\n Figure 18. Updating LOVs from ReltioLOV update processes are triggered by a timer at regular, configurable intervals. Their purpose is to synchronize dictionary values from Reltio. Below is the diagram outlining the whole process.\n\nProcess steps description:\n\n\tSynchronization processes are triggered at regular intervals.\n\tReltio Subscriber calls MDM Gateway lookups API to retrieve the first batch of LOV data\n\tFetched data is inserted into the Mongo database. Existing records are updated\n\n\n\nThe second and third steps are repeated in a loop until there is no more LOV data remaining."
  },
  {
    "title": "MDM Admin Flows",
    "pageID": "302683297",
    "pageLink": "/display/GMDM/MDM+Admin+Flows",
    "content": ""
  },
  {
    "title": "Kafka Offset",
    "pageID": "302684674",
    "pageLink": "/display/GMDM/Kafka+Offset",
    "content": "Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Kafka/kafkaOffsetModificationAPI allows offset manipulation for a consumergroup-topic pair. Offsets can be set to earliest/latest/timestamp, or adjusted (shifted) by a numeric value.An important point to mention is that in many cases offsets do not map one-to-one to messages - shifting the offset on a topic back by 100 may result in receiving 90 extra messages. 
This is due to compaction and retention - Kafka may mark an offset as removed, but it still remains for the sake of continuity.Example 1Environment is EMEA DEV. User wants to consume the last 100 messages from his topic again. He is using topic "emea-dev-out-full-test-topic-1" and consumer-group "emea-dev-consumergroup-1".User has disabled the consumer - Kafka will not allow offset manipulation if the topic/consumergroup is being used.He sent the below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\nBody:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "shiftBy": -100\n}\nUpon re-enabling the consumer, the last 100 events were re-consumed.Example 2User wants to consume all available messages from the topic again.User has disabled the consumer and sent the below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\nBody:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "offset": "earliest"\n}\nUpon re-enabling the consumer, all events from the topic were available for consumption again."
  },
  {
    "title": "Partial List",
    "pageID": "302683607",
    "pageLink": "/display/GMDM/Partial+List",
    "content": "Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Precallback%20Service/reconcilePartials_1API calls Precallback Service's internal API and returns a list of events stuck in partial state (more information here). The list can be limited and sorted. Partial age can be displayed in one of the below formats:HH:mm:ss.fff duration(default)YYYY-MM-DDThh:mm:ss.sss timestampepoch timestamp.ExampleUser has noticed an alert being triggered for GBLUS DEV, informing about events in partial state. 
To investigate the situation, he sends the following request:\nGET https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/precallback/partials?absolute=true\nResponse:\n{\n "entities/1sgqoyCR": "2023-02-09T11:42:06.523Z",\n "entities/1eUqpXVe": "2023-02-01T12:39:57.345Z",\n "entities/2ZlDTE2U": "2023-02-09T11:40:30.950Z",\n "entities/2J1YiLW9": "2023-02-09T11:41:45.092Z",\n "entities/1KgPnkhY": "2023-02-01T12:39:58.594Z",\n "entities/1YpLnUIR": "2023-02-01T12:40:06.661Z"\n}\nHe realized that it is difficult to quickly tell the age of each partial based on timestamps. He removed the absolute flag from the request:\nGET https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/precallback/partials\nResponse:\n{\n "entities/1sgqoyCR": "27:26:56.228",\n "entities/1eUqpXVe": "218:29:05.406",\n "entities/2ZlDTE2U": "27:28:31.801",\n "entities/2J1YiLW9": "27:27:17.659",\n "entities/1KgPnkhY": "218:29:04.157",\n "entities/1YpLnUIR": "218:28:56.090"\n}\nThree partials have been stuck for more than 200 hours. The other three - for over 27 hours."
  },
  {
    "title": "Reconciliation",
    "pageID": "302683312",
    "pageLink": "/display/GMDM/Reconciliation",
    "content": "EntitiesSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileEntitiesAPI accepts a JSON list of entity URIs. URIs not beginning with "entities/" are filtered out. 
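The URI prefix filtering just described can be sketched as follows. This is a minimal illustration of the documented behaviour; the helper name is an assumption, not the actual service code:

```python
def filter_by_prefix(uris, prefix="entities/"):
    # URIs not beginning with the expected prefix are dropped, e.g.
    # "relations/..." entries sent to the entities endpoint.
    return [uri for uri in uris if uri.startswith(prefix)]
```

Applied to the example request bodies below, this is why `relations/101LIzcm` never produces an event through the entities endpoint, and why the relations endpoint ignores the four `entities/...` URIs.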
For each URI it:Checks entityType (HCP/HCO/MCO) in MongoChecks status (ACTIVE/LOST_MERGE/INACTIVE/DELETED) in MongoIf entity is ACTIVE, it generates a *_CHANGED event and sends it to the ${env}-internal-reltio-events to be enriched by the Entity EnricherIf entity has status other than ACTIVE:If entity has status LOST_MERGE and pushLostMerge parameter is true, generate a *_LOST_MERGE event.If entity has status INACTIVE and pushInactived parameter is true, generate a *_INACTIVATED event.If entity has status DELETED and pushRemoved parameter is true, generate a *_REMOVED event.*An additional parameter, force, may be used. When set to true, the event will proceed to the EventPublisher even if rejected by Precallbacks.ExampleUser wants to reconcile 4 entities, which have different data in Snowflake/Mongo than in Reltio:entities/108dNvgB is ACTIVEentities/10VLBsCl is LOST_MERGEentities/10bH3nze is INACTIVEentities/1065AHEA is DELETEDrelations/101LIzcm was mistakenly added to the listThe below request is sent (GBL DEV):\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/entities\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n "entities/1065AHEA": "false - Record with DELETED status in cache",\n "entities/10VLBsCl": "false - Record with LOST_MERGE status in cache",\n "entities/108dNvgB": "true",\n "relations/101LIzcm": "false"\n}\nOnly one event was generated: HCP_CHANGED for entities/108dNvgB.User decided that he also needs an HCP_LOST_MERGE event for entities/10VLBsCl. 
He sent the same request with pushLostMerge flag:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/entities?pushLostMerge=true\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n "entities/1065AHEA": "false - Record with DELETED status in cache",\n "entities/10VLBsCl": "true",\n "entities/108dNvgB": "true",\n "relations/101LIzcm": "false"\n}\nThis time, two events have been generated:HCP_CHANGED for entities/108dNvgBHCP_LOST_MERGE for entities/10VLBsClRelationsSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileRelationsAPI works the same way as for Entities, but this time URIs not beginning with "relations/" are filtered out.ExampleUser sent the same request as in previous example (GBL DEV):\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/relations\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false",\n "entities/1065AHEA": "false",\n "entities/10VLBsCl": "false",\n "entities/108dNvgB": "false",\n "relations/101LIzcm": "false - Record with DELETED status in cache"\n}\nFirst 4 URIs have been filtered out due to unexpected prefix. 
Event for relations/101LIzcm has not been generated, because this relation has DELETED status in cache.Same request has been sent with pushRemoved flag:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/relations?pushRemoved=true\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false",\n "entities/1065AHEA": "false",\n "entities/10VLBsCl": "false",\n "entities/108dNvgB": "false",\n "relations/101LIzcm": "true"\n}\nA single event has been generated: RELATIONSHIP_REMOVED for relations/101LIzcm.PartialsSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcilePartialsPartials Reconciliation API works the same way that Entities Reconciliation does, but it automatically fetches the current list of entities stuck in partial state using Partial List API.Partials Reconciliation API also handles push and force flags. Additionally, partials can be filtered by age, using partialAge parameter with one of following values: NONE (default), MINUTE, HOUR, DAY.ExampleUser wants to reload entities stuck in partial state in GBL DEV. Prometheus alert informs him that there are plenty, but he remembers that there is currently an ongoing data load, which may cause many temporary partials.User decides that he should use the partialAge parameter with value DAY, to only reload the entities which have been stuck for a longer while, and not generate unnecessary additional traffic.He sends the following request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/partials?partialAge=DAY\nBody: -\nFlow fetches a full list of partials from Precallback Service API and filters out the ones stuck for less than a day. It then executes the Entities Reconciliation with this list. 
Response:\n{\n "entities/1yHHKEZ7": "true",\n "entities/2EHamZr3": "true",\n "entities/2EyP0kYM": "true",\n "entities/21QU96KG": "true",\n "entities/2BmHQMCn": "true"\n}\n5 HCP/HCO_CHANGED events have been generated as a result."
  },
  {
    "title": "Resend Events",
    "pageID": "302684685",
    "pageLink": "/display/GMDM/Resend+Events",
    "content": "API triggers an Airflow DAG. The DAG:Runs a query on MongoDB and generates a list of entity/relation URIs.Using Event Publisher's /resendLastEvent API, it produces outbound events for the user-provided reconciliationTarget.Resend - SimpleSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/resendEventWhen using the Simple API, the user does not actually write the Mongo query - they instead fill in the blanks.Required parameters are:country filter,objectType (entity, relation)reconciliationTarget - this is configured for each routing rule in Event Publisher and, according to MDM Hub's support practices, should be equal to the topic name,event limit - number.Optionally, objects can be filtered by:source,modification time.ExampleEnvironment is EMEA DEV. User wants to generate 300 entity events (HCP_CHANGED or HCO_CHANGED) for Poland, source CRMMI. His outbound topic is emea-dev-out-full-user-all.He sends the request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend\nBody:\n{\n "countries": [\n "pl"\n ],\n "sources": [\n "CRMMI"\n ],\n "objectType": "ENTITY",\n "limit": 300,\n "reconciliationTarget": "emea-dev-out-full-user-all"\n}\nResponse:\n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:26:22.283902+00:00",\n "execution_date": "2023-02-13T14:26:22.283902+00:00",\n "state": "queued"\n}\nA new Airflow DAG run was started. The dag_run_id field contains this run's unique ID. 
Below request can be sent to fetch current status of this DAG run:\nGET https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/status/manual__2023-02-13T14:26:22.283902+00:00\nResponse:\n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:26:22.283902+00:00",\n "execution_date": "2023-02-13T14:26:22.283902+00:00",\n "state": "running"\n}\nAfter the DAG has finished, 300 HCP_CHANGED/HCO_CHANGED events will have been generated to the emea-dev-out-full-user-all topic.Resend - ComplexSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/resendEventComplexFor Complex API, user writes their own Mongo query.Required parameters are:either entitiesQuery or relationsQuery - depending on object type and collection to be queried,reconciliationTarget.Optionally, resulting objects can be limited (separate fields for each query).ExampleAs in previous example, user wants to generate 300 events for Poland, source CRMMI. Output topic is emea-dev-out-full-user-all.This time, he sends the following request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/complex\nBody:\n{\n "entitiesQuery": "{ 'country': 'pl', 'sources': 'CRMMI' }",\n "relationsQuery": null,\n "reconciliationTarget": "emea-dev-out-full-user-all",\n "limitEntities": 300,\n "limitRelations": null\n}\nResponse:\n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:57:11.543256+00:00",\n "execution_date": "2023-02-13T14:57:11.543256+00:00",\n "state": "queued"\n}\nResend - StatusSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/getStatusAs described in previous examples, this API returns current status of DAG run. Request url parameter must be equal to dag_run_id. 
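Since resend runs are asynchronous, a client would typically poll the status endpoint described above until the DAG run reaches a terminal state. A minimal sketch, with the HTTP call injected as a plain fetch_state function (the helper names and the polling defaults are assumptions):

```python
import time

# DAG run states that will not change any further.
TERMINAL_STATES = {"success", "failed"}

def wait_for_dag(fetch_state, dag_run_id, poll_seconds=5, max_polls=120):
    """Poll fetch_state(dag_run_id) until a terminal state is seen.

    fetch_state stands in for GET .../events/resend/status/{dag_run_id}
    and is expected to return the "state" field of the response.
    """
    for _ in range(max_polls):
        state = fetch_state(dag_run_id)
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_seconds)
    raise TimeoutError(f"DAG run {dag_run_id} did not finish in time")
```

Injecting the fetch function keeps the sketch testable without network access; in real use it would wrap an authenticated HTTP GET.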
Possible statuses are: queued, success, running, failed."
  },
  {
    "title": "Internals",
    "pageID": "164470109",
    "pageLink": "/display/GMDM/Internals",
    "content": ""
  },
  {
    "title": "Archive",
    "pageID": "333152415",
    "pageLink": "/display/GMDM/Archive",
    "content": ""
  },
  {
    "title": "APM performance tests",
    "pageID": "333152417",
    "pageLink": "/display/GMDM/APM+performance+tests",
    "content": "Performance tests were executed using the JMeter tool on the CI/CD server.Test scenario:Create HCPSmall entityMedium size entityBig entityGet previously created entityTests were performed by 4 parallel users in a loop for 60 min.Test results:The decrease in component efficiency is not more than 3%The increase in the load on the nodes is not more than 5%(within the measurement error)"
  },
  {
    "title": "Client integration specifics",
    "pageID": "492493127",
    "pageLink": "/display/GMDM/Client+integration+specifics",
    "content": ""
  },
  {
    "title": "Saudi Arabia integration with IQVIA",
    "pageID": "492493129",
    "pageLink": "/display/GMDM/Saudi+Arabia+integration+with+IQVIA",
    "content": "The below design was confirmed with Alain and Eleni during the 14.01.2025 meeting. 
Concept of such solution was earlier approved by AJ.Source: Lucid" + }, + { + "title": "Components providers - AWS S3, networking, etc...", + "pageID": "273702388", + "pageLink": "/pages/viewpage.action?pageId=273702388", + "content": "TenantProviderReltioAWS accounts IDsIAM usersIAM rolesS3 bucketsNetwork (subnets, VPCe)Application IDEMEA NPRODPDCS - Kubernetes in IoDCOMPANYAirflow (S3) - 211782433747Snowflake (S3) - 211782433747Reltio (S3) -  211782433747AWS (PDCS) - 330470878083Airflow (S3)- arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3Snowflake (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-emea-eks-worker-NodeInstanceRole-1OG6IFX6DO8B9Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu-w1-nprod-mdmhub Snowflake - pfe-atp-eu-w1-nprod-mdmhubReltio - pfe-atp-eu-w1-nprod-mdmhubVPCvpc-0c55bf38e97950aa5Subnetssubnet-067425933ced0e77f (●●●●●●●●●●●●●●)subnet-0e485098a41ac03ca (●●●●●●●●●●●●●●)SC3028977EMEA PRODAirflow (S3) - 211782433747Snowflake (S3) - 211782433747Reltio (S3) -  211782433747AWS (PDCS) - 330470878083S3 backup bucket - 604526422050Airflow (S3) - arn:aws:iam::211782433747:user/SRVC-MDMCDI-PRODSnowflake (S3) - arn:aws:iam::211782433747:user/SRVC-MDMCDI-PRODReltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-emea-eks-worker-n-NodeInstanceRole-11OT3ADBULAGCReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu-w1-prod-mdmhubSnowflake - pfe-atp-eu-w1-prod-mdmhubReltio - pfe-atp-eu-w1-prod-mdmhubBackups - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811VPCvpc-0c55bf38e97950aa5Subnetssubnet-067425933ced0e77f (●●●●●●●●●●●●●●)subnet-0e485098a41ac03ca 
(●●●●●●●●●●●●●●)SC3211836AMER NPRODPDCS - Kubernetes in IoDCOMPANYAirflow (S3) - 555316523483Snowflake (S3)-  555316523483Reltio (S3) -  555316523483AWS (PDCS) - 330470878083Airflow (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFTSnowflake (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFTReltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODNode Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-amer-eks-worker-NodeInstanceRole-1X8MZ6QZQD5V7Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubnprodamrasp100762Snowflake - gblmdmhubnprodamrasp100762Reltio - gblmdmhubnprodamrasp100762VPCvpc-0aedf14e7c9f0c024Subnetssubnet-0dec853f7c9e507dd (10.9.0.0/18)subnet-07743203751be58b9 (10.9.64.0/18)SC3028977AMER PRODAirflow (S3) - 604526422050Snowflake (S3)- 604526422050Reltio (S3) -  555316523483AWS (PDCS) - 330470878083Backup bucket (S3) - 604526422050Airflow (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFTSnowflake (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFTReltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODNode Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-amer-eks-worker-n-NodeInstanceRole-1KA6LWUDBA3OIReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubprodamrasp101478Snowflake - gblmdmhubprodamrasp101478Reltio - gblmdmhubprodamrasp101478Backups - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808VPCvpc-0aedf14e7c9f0c024Subnetssubnet-0dec853f7c9e507dd (10.9.0.0/18)subnet-07743203751be58b9 (10.9.64.0/18)SC3211836APAC NPRODPDCS - Kubernetes in IoDCOMPANYAirflow (S3) - 555316523483Snowflake (S3) - 555316523483Reltio (S3) -  555316523483AWS (PDCS) - 3304708780831.Airflow - (S3) - arn:aws:iam::555316523483:user/svc_atp_aps1_mdmetl_nprod_rw_s32. Snowflake (S3) - arn:aws:iam::555316523483:user/svc_atp_aps1_mdmetl_nprod_rw_s33. 
Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODNode Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-apac-eks-worker-NodeInstanceRole-1053BVM6D7I2LReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - globalmdmnprodaspasp202202171347Snowflake - globalmdmnprodaspasp202202171347Reltio - globalmdmnprodaspasp202202171347VPCvpc-0d4b6d3f77ac3a877Subnetssubnet-018f9a3c441b24c2b (●●●●●●●●●●●●●●●)subnet-06e1183e436d67f29 (●●●●●●●●●●●●●●●)SC3028977APAC PRODAirflow (S3) -Snowflake (S3) - Reltio -  555316523483AWS (PDCS) - 330470878083S3 backup bucket 6045264220501.Airflow - (S3) -  arn:aws:iam::604526422050:user/svc_atp_aps1_mdmetl_prod_rw_s32. Snowflake (S3) - arn:aws:iam::604526422050:user/svc_atp_aps1_mdmetl_prod_rw_s33. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODNode Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-apac-eks-worker-n-NodeInstanceRole-1NMGPUSYG7H8QReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - globalmdmprodaspasp202202171415Snowflake - globalmdmprodaspasp202202171415Reltio - globalmdmprodaspasp202202171415Backups - pfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502VPCvpc-0d4b6d3f77ac3a877Subnetssubnet-018f9a3c441b24c2b (●●●●●●●●●●●●●●●)subnet-06e1183e436d67f29 (●●●●●●●●●●●●●●●)SC3211836GBLUS NPRODPDCS - Kubernetes in IoDCOMPANYAirflow (S3) - 555316523483Snowflake (S3) - 555316523483Reltio (S3) -  555316523483AWS (PDCS) - 330470878083Airflow (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFTSnowflake (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFTReltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubnprodamrasp100762Snowflake - gblmdmhubnprodamrasp100762Reltio - gblmdmhubnprodamrasp100762Same as AMER NPRODSC3028977GBLUS PRODAirflow (S3) - 604526422050Snowflake - 
604526422050Reltio (S3) -  AWS (PDCS) - 330470878083S3 backup bucket - 604526422050Airflow (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFTSnowflake (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFTReltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubprodamrasp101478Snowflake - gblmdmhubprodamrasp101478Reltio - gblmdmhubprodamrasp101478Backups - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808Same as AMER  PRODSC3211836GBL NPRODPDCS - Kubernetes in IoDIQVIAAirflow (S3) -Snowflake (S3) - 211782433747Reltio (S3) -  AWS (PDCS) - 3304708780831.Airflow (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s32. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s33. Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu-w1-nprod-mdmhubSnowflake - pfe-atp-eu-w1-nprod-mdmhubReltio - pfe-atp-eu-w1-nprod-mdmhubSame as EMEA NPRODSC3028977GBL PRODAirflow (S3) -Snowflake (S3) - 211782433747Reltio (S3) -  AWS (PDCS) - 330470878083S3 backup bucket - 6045264220501.Airflow (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s32. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s33. 
Reltio (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s3 ???Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-baiaes-eu-w1-projectSnowflake - pfe-baiaes-eu-w1-projectReltio - pfe-baiaes-eu-w1-projectBackups - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811Same as EMEA PRODSC3211836FLEX NPRODCloudBroker - EC2IQVIAAirflow (S3) -Reltio (S3) - Airflow - mdmnprodamrasp22124Reltio - mdmnprodamrasp22124FLEX PRODAirflow (S3) - Reltio (S3) - Airflow - mdmprodamrasp42095Reltio - mdmprodamrasp42095ProxyRapid - EC2N/AAWS EC2 - 432817204314MonitoringCloudBroker - EC2N/AAWS EC2 - 604526422050AWS S3 - 604526422050Thanos (S3) - arn:aws:iam::604526422050:user/SRVC-gblmdmhubNode Instance Role: arn:aws:iam::604526422050:role/PFE-ATP-MDMHUB-MONITORING-BACKUP-ROLE-01Grafana Backup - pfe-atp-us-e1-prod-mdmhub-grafanaamrasp20240315101601Thanos - pfe-atp-us-e1-prod-mdmhub-monitoringamrasp20240208135314Jenkins buildFLEX AirflowCloudBroker - EC2N/AVPC:Jenkins vpc-12aa056a" + }, + { + "title": "Configuration", + "pageID": "164470110", + "pageLink": "/display/GMDM/Configuration", + "content": "\nAll runtime configuration is stored in GitHub repository and changes are monitored using GIT history. Sensitive data is encrypted by Ansible Vault using AES256 algorithm and decrypted only during automatic deployment managed by Continuous Delivery process in Jenkins. 
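For illustration, a sensitive variable encrypted with Ansible Vault is stored in the inventory in roughly this shape. The variable name is taken from the Kafka variables documented below in this runbook, but the ciphertext is a placeholder, not actual repository content:

```yaml
# Placeholder sketch - real names and ciphertext live in the encrypted inventory.
hub_broker_keystore_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          31326561...(ciphertext omitted)
```

Such values are produced with `ansible-vault encrypt_string` and are decrypted only at deploy time, when Jenkins supplies the vault password.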
" + }, + { + "title": "●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1587199]", + "pageID": "164470111", + "pageLink": "/pages/viewpage.action?pageId=164470111", + "content": "\nConfiguration for all environments is placed in mdm-reltio-handler-env/inventory branch.\nAvailable environments:\n\n\tdev/qa/stage/uat/test\n\t\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\n\t\n\tprod\n\t\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\n\t\n\n\n\nIn order to separate variables for each service, we created the following groups:\n\n\t[gw-services]\n\t[hub-services]\n\t[kong]\n\t[mongo]\n\t[kafka]\n\n" + }, + { + "title": "Kafka", + "pageID": "164470104", + "pageLink": "/display/GMDM/Kafka", + "content": "\nKafka deployment procedures\n\n\tinstall_hub_broker.yml – this procedure is created to deploy kafka/zookeeper on environments other than PROD.\n\tinstall_hub_broker_cluster.yml – this procedure is created to deploy kafka/zookeeper on PROD environment.\n\n\n\nKafka variables\nProduction Kafka cluster requires the following variables:\n\n\tGlobally:\n\t\n\t\thub_broker_truststore_file/password – kafka server truststore file name and password\n\t\thub_broker_keystore_file/password – kafka keystore file name and password\n\t\thub_broker_admin_user/password – kafka admin user name and password\n\t\thub_broker_jaas_config_file – kafka jaas config file with Server auth(kafka) and Client auth(zookeeper)\n\t\tkafka_environment_KAFKA_ZOOKEEPER_CONNECT – list of zookeeper services required by kafka to enable cluster connection.\n\t\tzoo_users – zookeeper is deployed with server auth, this map contains admin user and password.\n\t\tzoo_servers - list of zookeeper servers, each host has to have unique id [1/2/3]\n\t\tkafka_extra_hosts – list of kafka hosts, these lines will be added to /etc/hosts file on each kafka docker container\n\t\n\t\n\tVariables per host – unique values.\n\t\n\t\tzoo_myid – zookeeper server 
id\n\t\tkafka_environment_KAFKA_BROKER_ID – kafka broker id\n\t\tkafka_environment_KAFKA_ADVERTISED_PORT – kafka advertised port\n\t\tkafka_environment_KAFKA_ADVERTISED_HOST_NAME – kafka host name\n\t\tfirewalld_ports – kafka ports to open in firewalld service.\n\t\n\t\n\tDevelopment kafka instance requires the following variables:\n\t\n\t\thub_broker_truststore_file/password – kafka server truststore file name and password\n\t\thub_broker_keystore_file/password – kafka keystore file name and password\n\t\thub_broker_admin_user/password – kafka admin user name and password\n\t\thub_broker_jaas_config_file – kafka jaas config file with Server auth(kafka) and Client auth(zookeeper)\n\t\n\t\n\tAdditionally:\n\t\n\t\ttopics.yml – definitions of kafka topics\n\t\tusers.yml – definitions of kafka users\n\t\n\t\n\n" + }, + { + "title": "Kong", + "pageID": "164470105", + "pageLink": "/display/GMDM/Kong", + "content": "\nKong deployment procedures\n\n\tinstall_mdmgw_gateway.yml – this procedure is created to deploy kong/cassandra on all available environments.\n\tupdate_kong_api.yml – this procedure is created to manage kong api. Available kong components which can be managed are:\n\t\n\t\tconsumers\n\t\tapis\n\t\tcertificates\n\t\n\t\n\n\n\nKong variables\nCassandra memory parameters are controlled by:\n\n\tkong_database_max_heap_size: "512M" – overwrites Xms and Xmx parameters.\n\tkong_database_heap_newsize: "400M" – overwrites Xmn parameters\n\n\n\nKong required variables:\n\n\tinstall_base_dir – kong docker-compose.yml file deployment directory\n\tkong_cluster_main_host – this parameter defines if kong and Cassandra will be deployed in cluster mode. This parameter is declared on PROD environment and contains main CASSANDRA_BROADCAST_ADDRESS. On DEV environment this parameter is not defined.\n\n\n\nTo manage kong api through deployment procedure these maps are needed:\n\n\tkong_apis – defines kong apis. 
It is a list of kong apis with required parameters:\n\t\n\t\tkong_api_obj_name – kong api name (e.g. "gw-api")\n\t\tkong_api_obj_upstream_url – api upstream url (e.g. http://mdmgw_mdm-manager_1:8081)\n\t\tkong_api_obj_uris – api uri (e.g. /gw-api)\n\t\tkong_api_obj_methods – api methods (e.g. GET/POST/PATCH)\n\t\tkong_api_obj_plugins (required plugin is key-auth)\n\t\n\t\n\tkong_consumers – defines kong consumers. It is a list of kong consumers with required parameters:\n\t\n\t\tkong_consumer_obj_username – kong user name\n\t\tkong_consumer_obj_auth_creds – kong required credentials "key-auth"\n\t\t\n\t\t\tkey – dedicated key for kong user\n\t\t\n\t\t\n\t\n\t\n\t[optional] kong_certificates - defines kong certificates to enable ssl communication. It is a list of kong snis with key and cert files:\n\t\n\t\tkong_certificate_obj_snis – list of available snis\n\t\tkong_certificate_obj_cert – kong certificate file\n\t\tkong_certificate_obj_key – kong server key file\n\t\n\t\n\n"
  },
  {
    "title": "Mongo",
    "pageID": "164470004",
    "pageLink": "/display/GMDM/Mongo",
    "content": "\nMongo deployment procedures\n\n\tinstall_hub_db.yml – this procedure is created to deploy mongo on environments other than PROD.\n\tinstall_hub_mongo_cluster.yml – this procedure is created to deploy mongo cluster on PROD environment\n\n\n\nMongo variables\nProduction mongo cluster requires the following variables declared in /inventory/prod/group_vars/ all/all.yml file:\n\n\tmdm_mongo_base_dir – mongo base directory where shards/configs/routers will be deployed.\n\tmongo_first_run [True/False] - switch this variable to True for the first deployment of the mongo cluster.\n\trecreate_services [True/False] - if True all docker-compose files will be started with "up -d" parameter, which means all mongo services will be recreated. 
Run with True when there is a need to add new shard instance.\n\tregenerate_firewalld_config [True/False] - if True, all ports defined in "mongo_cluster" map will be added to firewall service.\n\tmongo_cluster – describes whole mongo cluster. On production environment there are 3 mongo instances:\n\t\n\t\tmongo_server_01 - each instance can define mongo shards/configs/routers with required variables: [id, instance_name, port, host]\n\t\tmongo_server_02\n\t\tmongo_server_03\n\t\n\t\n\n\n\nDevelopment mongo instance requires the following variables declared in /inventory/dev/group_vars/all/all.yml file:\n\n\thub_db_install_dir – mongo base directory\n\thub_db_name – mongo db XXXeltio db name\n\thub_db_user – mongo db XXXeltio user name\n\n" + }, + { + "title": "Services - hub_gateway", + "pageID": "164470005", + "pageLink": "/display/GMDM/Services+-+hub_gateway", + "content": "\nServices deployment procedures\nHub deployment procedure: \n\n\tinstall_mdmhub_services.yml\n\n\n\n \nGateway deployment procedure:\n\n\tinstall_mdmgw_services.yml\n\n\n\nServices variables\n[gw-services] - this group contains variables for map channel and mdm manager in the following two maps:\n\n\tmap_channel\n\tmdm_manager\n\n\n\n[hub-services] - this group contains variables for hub api, reltio subscriber and event publisher in the following maps:\n\n\tevent_publisher\n\thub_api\n\treltio_subscriber\n\n\n\nIt is possible to redefine JVM_OPTS or any other environment using these maps:\n\n\tmdm_manager_environments\n\t\n\t\te.g. "JVM_OPTS=-server -Xms128m -Xmx512m -Djava.security.auth.login.confi g=/opt/mdm-gw-manager/config/kafka_jaas.conf"\n\t\n\t\n\tmap_channel_environments\n\tconsole_environments\n\n" + }, + { + "title": "Data storage", + "pageID": "164470006", + "pageLink": "/display/GMDM/Data+storage", + "content": "\nPublishing Hub among other functions serves as data store, caching the latest state of each Entity fetched from Reltio MDM. 
This allows clients to take advantage of increased performance and high availability provided by MongoDB NoSQL database. " + }, + { + "title": "Data structures", + "pageID": "164470007", + "pageLink": "/display/GMDM/Data+structures", + "content": "\n Figure 21. Structure of Publishing HUB's databasesThe following diagram shows the structure of DB collections used by Publishing Hub.\n\nDetailed description:\n\n\tentityHistory – collection storing MDM Entities (HCP, HCO), along with some metadata for easier lookup/processing.\n\t\n\t\t_id – unique id of an Entity. Publishing Hub is reusing attribute "uri" from Reltio model (e.g. "entities/ipa1iKq")\n\t\tcountry – two-letter country code, in lowercase (e.g. "de")\n\t\tcreationDate – timestamp of record creation (i.e. insertion to Mongo)\n\t\tentity – the Reltio Entity\n\t\tentityType – type of the entity (e.g. "configuration/entityTypes/HCO")\n\t\tlastModificationDate – timestamp of last update of the record.\n\t\tmergedEntitiesUris – identifiers of child (merged) entities (for entities that "won" merge event in Reltio)\n\t\tparentEntityId – identifier of the parent entity (for entities in "LOST_MERGE" status)\n\t\tsources – array of source system codes (e.g. "OK", "GRV", "FACE")\n\t\tstatus – current status of the entity (one of: ACTIVE, DELETED, LOST_MERGE)\n\t\tmdmSource – name of the source MDM system, currently one of "RELTIO", "NUCLEUS"\n\t\n\t\n\tLookupValues – collection storing dictionary data from Reltio.\n\t\n\t\t_id – unique id of the record. This is generated as concatenation of "type" and "code" attributes from Reltio\n\t\tupdatedOn – timestamp of last update of the record in Mongo\n\t\tvalueUpdatedOn – timestamp of last update of LOV in Reltio (values in Mongo are updated every 24h, whether or not they are actually changed in Reltio, so this value represents the timestamp of actual data change, not timestamp of refresh action)\n\t\ttype – LookupValue type, as defined by Reltio, e.g. 
"configuration/lookupTypes/ IMS_LKUP_SPECIALTY"\n\t\tcode – LookupValue code, as defined by Reltio, e.g. SPEC\n\t\tcountries – list of countries this LookupValue is valid for\n\t\tmdmSource – name of the source MDM system, currently one of "RELTIO", "NUCLEUS"\n\t\tvalue – LookupValue (full JSON, in Reltio-defined format – even for Nucleus data)\n\t\n\t\n\n\n\nINSERT vs UPSERT\nTo speed up database operations Publishing Hub takes advantage of the MongoDB "upsert" flag of the db.collection.update() method. This allows the application to skip the potentially costly query checking if the entity already exists in the database. Instead, the update operation is called right away, ceding the responsibility of checking for entity existence to Mongo's internal mechanisms." + }, + { + "title": "Indexes", + "pageID": "164470001", + "pageLink": "/display/GMDM/Indexes", + "content": "\nAll of the fields in database collections are indexed, except complex documents (i.e. "entity" in entityHistory, "value" in LookupValues). Queries that do not use indexes (for example querying arbitrarily nested attributes of "entity") might suffer from poor performance. " + }, + { + "title": "DoR, AC, DoD", + "pageID": "294674667", + "pageLink": "/display/GMDM/DoR%2C+AC%2C+DoD", + "content": "" + }, + { + "title": "DoD - template", + "pageID": "294674670", + "pageLink": "/display/GMDM/DoD+-+template", + "content": "Requirements of the task that need to be met before closing:Ticket deployed to dev and qa environmentChange is documentedAC are met." + }, + { + "title": "DoR - template", + "pageID": "294674659", + "pageLink": "/display/GMDM/DoR+-+template", + "content": "Requirements of the task that need to be met before pushing to the Sprint:Fields in Jira ticket are filledFix versionEpic LinkComponent/sBusiness value is known and included in a ticket descriptionIf there is a deadline, it is understood and included in a ticket descriptionAcceptance Criteria are includedA ticket is estimated in Story Points." 
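The INSERT vs UPSERT strategy described for the entityHistory collection can be sketched as follows. This is a minimal illustration, not the Publishing Hub implementation: the `_id`/`creationDate`/`lastModificationDate` field names follow the collection description above, but the helper name and the exact update-document shape are assumptions.

```python
from datetime import datetime, timezone

def build_entity_upsert(entity: dict) -> tuple[dict, dict]:
    """Build the (filter, update) pair for an entityHistory upsert.

    _id reuses the Reltio "uri" attribute, as described above.
    """
    now = datetime.now(timezone.utc)
    filter_doc = {"_id": entity["uri"]}
    update_doc = {
        "$set": {
            "entity": entity,
            "lastModificationDate": now,
        },
        # creationDate is written only when the upsert inserts a new document
        "$setOnInsert": {"creationDate": now},
    }
    return filter_doc, update_doc

# With a live MongoDB connection the call would be (not executed here):
#   db.entityHistory.update_one(*build_entity_upsert(entity), upsert=True)
# No prior find() is needed; Mongo itself decides between insert and update.

f, u = build_entity_upsert({"uri": "entities/ipa1iKq",
                            "type": "configuration/entityTypes/HCO"})
print(f["_id"])  # → entities/ipa1iKq
```

The `upsert=True` flag is what removes the read-before-write round trip: the filter on `_id` either matches the existing document (update path) or seeds a new one (insert path).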
+ }, + { + "title": "Exponential Back Off", + "pageID": "164469928", + "pageLink": "/display/GMDM/Exponential+Back+Off", + "content": "A back-off mechanism that increases the back off period for each retry attempt. When the interval has reached the max interval, it is no longer increased. Retrying stops once the max elapsed time has been reached.Example: The default interval is 2000L ms, the default multiplier is 1.5, and the default max interval is 30000L. For 10 attempts the sequence will be as follows (request: back off ms): 1: 2000, 2: 3000, 3: 4500, 4: 6750, 5: 10125, 6: 15187, 7: 22780, 8: 30000, 9: 30000, 10: 30000. Note that the default max elapsed time is Long.MAX_VALUE. Use setMaxElapsedTime(long) to limit the maximum length of time that an instance should accumulate before returning BackOffExecution.STOP.Implementation based on spring-retry library." + }, + { + "title": "HUB UI", + "pageID": "294675912", + "pageLink": "/display/GMDM/HUB+UI", + "content": "DRAFT:TODO: Grafana dashboards through iframe - https://www.itpanther.com/embedding-grafana-in-iframe/" + }, + { + "title": "Integration Tests", + "pageID": "302681782", + "pageLink": "/display/GMDM/Integration+Tests", + "content": "Integration tests are divided into different categories. 
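The back-off sequence above (initial 2000 ms, multiplier 1.5, cap 30000 ms) can be reproduced with a few lines. This is a sketch of the arithmetic only, not the spring-retry implementation; truncating fractional milliseconds after each multiplication is an assumption made to match the tabulated values.

```python
def backoff_intervals(attempts, initial_ms=2000, multiplier=1.5, max_ms=30000):
    """Yield the back-off period for each retry attempt.

    The interval grows by `multiplier` per attempt (truncated to whole
    milliseconds) and is capped at `max_ms`, mirroring the defaults
    described above (2000L / 1.5 / 30000L).
    """
    interval = initial_ms
    for _ in range(attempts):
        yield min(interval, max_ms)
        interval = int(interval * multiplier)

print(list(backoff_intervals(10)))
# → [2000, 3000, 4500, 6750, 10125, 15187, 22780, 30000, 30000, 30000]
```

Once the cap is hit the interval stays flat, so only a max-elapsed-time limit (setMaxElapsedTime in spring-retry) ultimately stops the retries.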
These categories are used for different environments.Jenkins IT configuration: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/jenkins/k8s_int_test.groovy" + }, + { + "title": "Common Integration Test", + "pageID": "302681798", + "pageLink": "/display/GMDM/Common+Integration+Test", + "content": "Test classTest caseFlowCommonGetEntityTeststestGetEntityByUriCreate HCPGet HCP by URI and validatetestSearchEntityCreate HCPGet entities using filter (get by country code, first name and last name)Validate if entity existstestGetEntityByCrosswalkCreate HCPGet entity by crosswalk and validate if existstestGetEntitiesByUrisCreate HCPGet entity by uris and validate if existstestGetEntityCountryCreate HCPGet entity by country and validate if existstestGetEntityCountryOvCreate HCPAdd new countrySend update requestGet HCP's Country and validateMake ignored = true and ov = false on all countriesSend update requestGet HCP's Country and validateCreateHCPTestcreateHCPTestCreate HCPGet entity and validateCreateRelationTestcreateRelationTestCreate HCPCreate HCOCreate Relation between HCP and HCOGet Relation and validateDeleteCrosswalkTestdeleteCrosswalkTestCreate HCODelete crosswalk and validate status responseUpdateHCOTestupdateHCPTestCreate HCOGet created HCOUpdate HCO's nameValidate response statusGet HCO and validate if it is updatedUpdateHCPUsingReltioContributorProviderupdateHCPUsingReltioContributorProviderTrueAndDataProviderFalseCreate HCPGet created HCP and validateUpdate existing crosswalk and set contributorProvider to falseAdd new contributor provider crosswalkUpdate first nameSend update HCP requestValidate if it is updatedPublishingEventTesttest1_hcpCreate HCPWait for HCP_CREATED eventUpdate HCP first nameWait for HCP_CHANGED eventGet entity and validatetest2_hcpCreate HCPWait for HCP_CREATED eventUpdate HCP's last nameWait for HCP_CHANGED eventDelete crosswalkWait for HCP_REMOVED eventtest3_hcoCreate HCOWait for HCO_CREATED 
eventUpdate HCO's nameWait for HCO_CHANGED eventDelete crosswalkWait for HCO_REMOVED event" + }, + { + "title": "Integration Test For Iqvia Model", + "pageID": "302681788", + "pageLink": "/display/GMDM/Integration+Test+For+Iqvia+Model", + "content": "Test classTest caseFlowCRUDHCOAsynctestSend HCORequest to Kafka topicWait for created event and validateUpdate HCO's name and send HCORequest to Kafka topicWait for updated event and validateRemove entitiesCRUDHCOAsyncComplextestCreate Source HCOSend HCORequest with Source HCO to Kafka TopicWait for created event and validateCreate Source Department HCO - set Source HCO as Main HCOSend HCORequest with Source Department HCOWait for event and validateRemove entitiesCRUDHCPAsynctestSend HCPRequest to Kafka topicWait for created event and validateUpdate HCP's Last Name and send HCORequest to Kafka topicWait for updated event and validateRemove entitiesCRUDPostBulkAsynctestHCOSend EntitiesUpdateRequest with multiple HCO entities to Kafka topicWait for entities-create event with specific correlactionId headerValidate message payload and check if all entities are createdRemove entitiestestHCPSend EntitiesUpdateRequest with multiple HCP entities to Kafka topicWait for entities-create event with specific correlactionId headerValidate message payload and check if all entities are createdRemove entitiestestHCPRejectedSend EntitiesUpdateRequest with multiple incorrect HCP entities to Kafka topicWait for event with specific correlactionId headerCheck if all entities have ValidatioError and status is failedCreateRelationAsynctestCreateCreate HCOCreate HCPSend RelationRequest with Relation Activity between HCP and HCO to Kafka topicWait for event with specific correlactionId header and validate statustestCreateRelationsCreate HCOCreate HCP_1Create HCP_2 and validate responseCreate HCP_3 and validate responseCreate HCP_4 and validate responseCreate Activity Relations between HCP_1 → HCO, HCP_2 → HCO, HCP_3 → HCO, HCP_4 → HCOSend 
RelationRequest event with all relations to Kafka topicWait for event with specific correlactionId header and validate statusRemove entitiestestCraeteWithAddressCopyCreate HCOCreate HCPCreate Activity Relation between HCP and HCOSend RelationRequest event to Kafka topic with param copyAddressFromTarget = trueWait for event with specific correlactionId header and validate status is createdGet HCP and HCOValidate updated HCP - check if address exists and contains HcoName attributeRemove entitiestestDeactivateRelationCreate HCOCreate HCPCreate Activity Relation between HCP and HCO with PrimaryAffiliationIndicator = trueSend RelationRequest event to Kafka topicWait for event with specific correlactionId header and validate status is createdUpdate Relation - set delete date on nowSend RelationRequest event to Kafka topicWait for event with specific correlactionId header and validate status is deletedRemove entitiesHCOAsyncErrorsTestCasetestSend HCORequest to Kafka topic - create HCO with incorrect valuesWait for event with specific correlactionId header and validate status is failedHCPAsyncErrorsTestCasetestSend HCPRequest to Kafka topic - create HCP without permissionsWait for event with specific correlactionId header and validate status is failedUpdateRelationAsynctestCreate HCO and validate status createdCreate HCP with affiliatedHCO and validate status createdGet HCP and check if Workplace relation existsGet existing RelationPatch Relation - update ActEmail.Email attribute and validate if status is updatedGet Relation and validate if ActEmail list size is 1Add Country attribute to RelationSend RelationRequest event to Kafka topic with updated RelationWait for event with specific correlactionId header and validate status is updatedGet Relation and check if ActEmail and Country existAdd AffiliationStatus attribute to RelationSend RelationRequest event to Kafka topic with updated RelationWait for event with specific correlactionId header and validate status is 
updatedGet Relation and check if ActEmail, Country and AffiliationStatus existRemove entitiesBundlingTesttestSend multiple HCORequests to Kafka topic - create HCOsFor each request wait for event with status created and collect HCO's uriCheck if number of requests equals number of received eventsSend multiple HCPRequests to Kafka topic - create HCPsFor each request wait for event with status created and collect HCP's uriCheck if number of requests equals number of received eventsSend multiple RelationRequests to Kafka topic - create RelationFor each request wait for event with status created and collect Relation's uriCheck if number of requests equals number of received eventsSet delete date on now for every HCOSend multiple HCORequests to Kafka topicFor each request wait for event with status deletedSet delete date on now for every HCPSend multiple HCPRequests to Kafka topicFor each request wait for event with status deletedDCRResponseTestcreateAndAcceptDCRThenTryToAcceptAgainTestCreate Hospital HCOCreate Department HCOSet Hospital HCO as Department's Main HCOCreate HCP with Affiliated HCO as DepartmentCheck if DCR is createdAccept DCR and check if response is OKAccept DCR again and check if response is BAD_REQUESTRemove entitiescreateAndPartialAcceptThenConfirmNoLoopCreate Hospital HCOCreate Department HCOSet Hospital HCO as Department's Main HCOCreate HCP with Affiliated HCO as DepartmentCheck if DCR is createdPartial accept DCR and check if response is OKGet HCP entity and check if ValidationStatus attribute is "partialValidated"Check if DCR is not created - confirms that DCR creation does not loopRemove entitiescreateAndRejectDCRThenTryToRejectAgainTestCreate Hospital HCOCreate Department HCOSet Hospital HCO as Department's Main HCOCreate HCP with Affiliated HCO as DepartmentCheck if DCR is createdReject DCR and check if response is OKReject DCR again and check if response is BAD_REQUESTRemove entitiesDeriveHCPAddressesTestCasederivedHCPAddressesTestCreate HCP 
and validate responseCreate HCO Department with 1 Address and validate responseCreate HCO Hospital with 2 Addresses and validate responseCreate "Activity" Relation HCP → HCO Department and validate responseCreate "Has Health Care Role" Relation HCP → HCO Hospital and validate responseGet HCP and check if contains Hospital's AddressesUpdate HCO Hospital Address and validate responseGet HCP and check if contains updated Hospital's AddressesRemove HCO Hospital Address and validate responseGet HCP and check if contains Hospital's Addresses (without removed)Remove "Has Health Care Role" Relation HCP → HCO Hospital and validate responseGet HCP and check if Addresses are removedRemove entitiesEVRDCRUpdateHCPLUDTestCasetestCreate Hopsital HCOCreate Department HCOSet Hospital HCO as Department's Main HCOCreate HCP with Affiliated HCO as DepartmentGet Change requests and check that DCR was createdUpdate HCPValidationStatus = notvalidatedchange existing GRV crosswalk - set DataProvider = trueadd DCR crosswalk - EVR set ContributorProvider = trueadd another EVR crosswalk set DataProvider = trueSend update request and vadiate responseUpdate HCP (partial update)ValidationStatus = validatedRemove First and Last NameRemove crosswalksSend update request and validate responseGet HCP and validateCheck if the ValidationStatus & LUD (updateDate/singleAttributeUpdateDate) were refreshedRemove crosswalksExistingDepartmentAndHCPTestCasecreateHCP_HCPNotInPendingStatus_NoDCRCreate Hospital HCOCreate Department HCO with Hospital HCO as MainHCOCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = validatedGet HCP and validate attributesGet Change requests and check if the list is emptyRemove crosswalkscreateHCP_HCPIsInPendingStatus_HCPDCRCreatedCreate Hospital HCOCreate Department HCO with Hospital HCO as MainHCOCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = pendingGet HCP and validate attributesGet Change requests and check if there is one NEW_HCP 
change requestRemove crosswalkscreateHCP_HCPHasTwoWorkplaces_HCPAndWorkplaceDCRCreatedCreate Hospital HCOCreate Department1 HCO with Hospital HCO as MainHCOCreate Department2 HCO with Hospital HCO as MainHCOCreate HCP with affiliated HCO (Department1 HCO) and ValidationStatus = pendingGet HCP and validate attributeshas only one Workplace (Department1 HCO)Update HCP with affiliated HCO (Department2 HCO) and ValidationStatus = pendingGet HCP and validate attributeshas only one Workplace (Department2 HCO)Get Change requests and check if there is one NEW_HCP change requestRemove crosswalksNewHCODCRTestCasescreateHCP_DepartmentDoesNotExist_HCOL1DCRCreate Hospital HCOCreate Department HCO with Hospital HCO as MainHCOCreate HCP with affiliated HCO (Department HCO)Get HCP and validate attributesValidate Workplace and MainWorkplaceGet Change requests and check if the list is emptyRemove crosswalkscreateHCP_HospitalAndDepartmentDoesNotExist_HCOL1DCRCreate Department HCO with Hospital HCO (not created yet) as MainHCOCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = pendingGet HCP and validate attributesGet HCO Department and validate attributesGet Change requests and check if there is one NEW_HCO_L2 change requestRemove crosswalksNewHCPDCRTestCasecreateHCPTestCreate HCO HospitalCreate HCO DepartmentCreate HCP with affiliated HCO (Department HCO)Get HCP and validate Workplace and MainWorkplaceRemove crosswalkscreateHCPPendingTestCreate HCO HospitalCreate HCO DepartmentCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = pendingValidate HCP responseValidate if DCR is createdRemove crosswalkscreateHCPNotValidatedTestCreate HCO HospitalCreate HCO DepartmentCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = notvalidatedValidate HCP responseValidate if DCR is createdRemove crosswalkscreateHCPNotValidatedMergedIntoNotValidatedTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate 
HCO DepartmentCreate HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = notvalidatedValidate HCP responseValidate if DCR is not createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCO DepartmentCreate HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = pendingValidate HCP responseValidate if DCR is createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedWithAnotherGRVNotValidatedTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)Create HCO DepartmentCreate HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = pendingValidate if DCR is createdRemove crosswalkscreateHCPNotValidatedMergedIntoNotValidatedWithAnotherGRVNotValidatedTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)Create HCO DepartmentCreate HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = notvalidatedValidate if DCR is not createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedWithGRVAsUpdateTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)Create HCO DepartmentCreate HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = notvalidatedGet HCP and validate corsswalk GRV count == 3Validate if DCR is not createdUpdate HCP_3 set code = pendingValidate if DCR is createdRemove crosswalksPfDataChangeRequestLiveCycleTesttestCreate HCO HospitalCreate HCO Department with parent HCO HospitalCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = pendingCheck if DCR existCheck if PfDataChangeRequest existAccpet DCRCheck that HCP ValidationStatus == validatedCheck that PfDataChangeRequest is 
closedRemove crosswalksResponseInfoTestTestCreate HCO HospitalCreate HCO Department with parent HCO HospitalCreate HCP_1 with affiliated HCO (Department HCO) and ValidationStatus = pendingCreate HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = pendingCheck that DCR_1 existCheck that DCR_2 existCheck that PfDataChangeRequest existRespond for DCR_1 - update HCP with merged urischange First Nameset ValidationStatus = validatedGet HCP and check if ValidationStatus is validatedCheck if PfDataChangeRequest is closed and validate ResponseInfoRespond for DCR_2 - accept and validate messageCheck if PfDataChangeRequest is closed and validate ResponseInfoCheck that DCR_2 does not existRemove crosswalksRevalidateNewHCPDCRTestCasetestCreate Parent HCO and validate responseCreate Department HCO with Parent HCO and validate responseCreate HCP with affiliated HCO (Department HCO), ValidationStatus = pending and validate responseCheck that DCR existCheck that PfDataChangeRequest existRespond to DCR - acceptCheck that HCP has ValidationStatus = validatedSend revalidate event to Kafka topicCheck that new DCR was createdChecking that previous PfDataChangeRequest has ResponseStatus=acceptCheck that new PfDataChangeRequest existCheck that HCP has ValidationStatus = pendingRemove crosswalksStandarNonExistingDepartmentTestCasecreateNewHCPTestCreate Hospital HCOCreate HCP with a new affiliated HCO (Department HCO with Hospital HCO as MainHCO)Get HCP and validate attributes (Workplace and MainWorkplace)UpdateHCPPhonestestCreate HCP and validate responseUpdate Phone and send patchHCP requestValidate response status is OKRemove crosswalksGetEntityTeststestGetEntityByUriCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get HCP by uri and validate attributesRemove crosswalkstestSearchEntityCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get entites using filter - HCP by country, first name and last nameValidate if entity 
existsRemove crosswalkstestSearchEntityWithoutCountryFilterCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get by crosswalk HCO_1 and check if existsGet by crosswalk HCO_2 and check if existsGet entities using filter - HCO by country and (HCO_1 name or HCO_2 name)Validate if both HCO existsRemove crosswalkstestGetEntityByCrosswalkCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get HCP by crosswalkValidate if HCP existsRemove crosswalkstestGetEntitiesByUrisCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get HCP by uriValidate if HCP existsRemove crosswalkstestGetEntityCountryCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get HCP's countryValidate responseRemove crosswalkstestGetEntityCountryOvCreate HCP with ValidationStatus = validated, affiliatedHcos (HCO_1, HCO_2) and Country = BrazilUpdate HCPupdate existing crosswalk - set ContributorProvider = trueadd new crosswalk as DataProviderset Country ignored = trueupdate Country - set to ChinaGet HCP's Country and validatecheck value == BR-Brazilcheck ov == trueUpdate HCP - make ignored=true, ov=false on all countriesGet HCP's Country and validatelookupCode == BRRemove crosswalksMergeUnmergeHCPTestcreateHCP1andHCP2_checkMerge_checkUnmerge_APICreate HCP_1 and validate responseCreate HCP_2 and validate responseMerge HCP_1 with HCP_2Get HCP_1 after merge and validate attributesGet HCP_2 after merge and validate attributesUnmerge HCP_1 and HCP_2Get HCP_1 after unmerge and validate attributesGet HCP_2 after unmerge and validate attributesUnmerge HCP_1 and HCP_2 - validate if response code is BAD_REQUESTMerge HCP_1 and NOT_EXISTING_URI - validate if response code is NOT_FOUNDRemove crosswalksHCPMatcherTestCasetestPositiveMatchCreate 2 identical HCP objectsCheck that objects matchtestNegativeMatchCreate 2 different HCP objectsCheck that objects do not matchGetEntitiesTesttestGetHCPsGet entities with filter: 
country = BR and entityType = HCPValidate responseAll entities are HCPAt least one entity has WorkplacetestGetHCOsGet entities with filter: country = BR and entityType = HCOValidate responseAll entities are HCOGetEntityUSTestcreateHCPTestCreate HCP and validate responseGet HCP and check if existsRemove crosswalks" + }, + { + "title": "Integration Test For COMPANY Model", + "pageID": "302681792", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model", + "content": "Test classTest caseFlowAttributeSetterTestTestAttributeSetterCreate HCP with TypeCode attributeGet entity and validate if has autofilled attributesUpdate TypeCode field: send "None" as attribute valueUpdate HCP requestGet entity and validate autofilled attributes by DQ rulesUpdate TypeCode fieldUpdate HCP requestGet entity and validate autofilled attributes by DQ rulesUpdate TypeCode fieldUpdate HCP requestGet entity and validate autofilled NON-HCP valueSet HCP's crosswalk delete dateUpdate and validate if delete date has been setBatchControllerTestmanageBatchInstance_checkPermissionsWithLimitationCreate batch instanceCreate batch stageValidate response code: 403 and message: Cannot access the processor which has been protectedGet batch instance with incorrect nameValidate response code: 403 and message: Batch 'testBatchNotAdded' is not allowed. 
Update batch stage with existing stage nameUpdate batch stage with limited userValidate response code: 403 and message: Stage '' is not allowed.Update batch stage with an unauthorized stage nameValidate response code: 403 and message: Stage '' passed in Body is not allowed.createBatchInstanceCreate batch instance and validateComplete stage 1 and start stage 2Validate stagesComplete stage 2Start stage 3Validate all 3 stagesComplete stage 3 and finish batchGet batch instance and validateTestBatchBundlingErrorQueueTesttestBatchWorkflowTestCreate batch instanceGet errors and check if there are no errorsCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend relations to RELATION_LOADING stageFinish RELATION_LOADING stageCheck sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relations were processedGet batch instance and validate completion statusValidate expected errorsResubmit errorsValidate expected errorsValidate if all errors were resubmittedTestBatchBundlingTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were 
processedSend relations to RELATION_LOADING stageFinish RELATION_LOADING stageCheck sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relatons were processedGet batch instance and validate completion statusGet Relations by crosswalk and validateTestBatchHCOBulkTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validateTestBatchHCOTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statustestBatchWorkflowTest_CheckFAILonLoadJobCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageUpdate batch stage status: FAILEDGet batch instance and validatetestBatchWorkflowTest_SendEntities_Update_and_MD5SkipCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageGet batch instance and validate completion statusGet entities by crosswalk and validate create statusCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stage (skip 2 entities - MD5 check sum changed)Finish HCO_LOADING stageGet batch instance and validate completion statusGet entities by crosswalk and validate update statustestBatchWorkflowTest_SendEntities_Update_and_DeletesProcessingCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities 
were sent to ReltioCheck processing job status - validate if all entities were processedCheck deleting job status - validate if all entities were sendCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete status-- second runCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stage (skip 2 entities - delete in post processing)Finish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedCheck deleting job status - validate if all entities were sendCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete status-- third runCreate batch instance for checking activationCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedCheck deleting job status - validate if all entities were sendCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete statusTestBatchHCPErrorQueueTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCP_LOADINGGet errors and check if there is no errorsSend entites to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet errors and validate if exists exceptedResubmit errorsGet errors and validate if all were resubmitedTestBatchHCPPartialOverwriteTesttestBatchWorkflowTestCreate HCPCreate batch instanceCreate batch stage: HCP_LOADINGSend entites to HCP_LOADING stage with update last nameFinish 
HCP_LOADING stageCheck sender job status - validate if all entities are created in mongoCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validateTestBatchHCPSoftDependentTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCP_LOADINGCheck Sender job status - SOFT DEPENDENT Send entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusTestBatchHCPTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCP_LOADINGSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusTestBatchMergeTesttestBatchWorkflowTestCreate 4 x HCP and validate response statusGet entities and validate if they are createdCreate batch instanceCreate batch stage: MERGE_ENTITIES_LOADINGSend merge entities objects (Reltio, Onekey)Finish MERGE_ENTITIES_LOADING stageCheck sender job status - validate if all tags are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if tags are visible in Reltio)Create batch instanceCreate batch stage: MERGE_ENTITIES_LOADINGSend unmerge entities objects (Reltio, Onekey)Finish MERGE_ENTITIES_LOADING stageCheck sender job status - validate if all tags are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusTestBatchPatchHCPPartialOverwriteTestCreate batch instanceCreate batch stage: 
HCP_LOADINGCreate HCP entity with crosswalk's delete date set to nowSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusCreate batch instanceCreate batch stage: HCP_LOADINGSend entities PATCH to HCP_LOADING stage with empty crosswalk's delete date and missing first and last nameFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate if they are updatedTestBatchRelationTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend relations to RELATION_LOADING stageFinish RELATION_LOADING stageCheck sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relations were processedGet batch instance and validate completion statusTestBatchTAGSTesttestBatchWorkflowTestCreate HCPGet HCP and check that there are no tagsCreate batch instanceCreate batch stage: TAGS_LOADINGSend request: Append entity tags objectsFinish TAGS_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusCreate 
batch instanceCreate batch stage: TAGS_LOADING - DELETESend request: Delete entity tags objectsCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate update statusGet entity and check if tags are removed from ReltioCOMPANYGlobalCustomerIdSearchOnLostMergeEntitiesTesttestCreate first HCP and validate response statusCreate second HCP and validate response statusCreate third HCP and validate response statusMerge HCP2 with HCP3 and validate response statusMerge HCP2 with HCP1 and validate response statusGet entities: filter by COMPANYGlobalCustomerID and HCP1UriValidate if existsGet entities: filter by COMPANYGlobalCustomerID and HCP2UriValidate if existsGet entities: filter by COMPANYGlobalCustomerID and HCP3UriValidate if existsCOMPANYGlobalCustomerIdTesttestCreate HCP_1 with RX_AUDIT crosswalkWait for HCP_CREATED eventCreate HCP_2 with GRV crosswalkWait for HCP_CREATED eventMerge both HCPs with RX_AUDIT being winnerWait for HCP_MERGE, HCP_LOST_MERGE and HCP_CHANGED eventsGet entities by uri and validate. 
Check if merge succeeded and resulting profile has winner COMPANYId.Update HCP_1: set delete date on RX_AUDIT crosswalkCheck if entity's COMPANYID has not changed after soft deleting the crosswalkGet HCP_1 and validate COMPANYGlobalCustomerID after soft deleting crosswalkRemove HCP_1 by crosswalkRemove HCP_2 by crosswalktestWithDeleteDateCreate HCP_1 with crosswalk delete dateWait for HCP_CREATED eventCreate HCP_2Wait for HCP_CREATED eventMerge both HCPsWait for HCP_MERGE, HCP_LOST_MERGE and HCP_CHANGED eventsCheck if merge succeeded and resulting profile has winner COMPANYId.Remove HCP_1 by crosswalkRemove HCP_2 by crosswalkRelationEventChecksumTesttestCreate HCP and validate statusGet HCP and validate if existsCreate HCO and validate statusCreate Employment Relation between HCP and HCO - validate response statusWait for RELATIONSHIP_CREATED event and validateFind Relation by id and keep checksumUpdate Relation title attribute and validate responseWait for RELATIONSHIP_CHANGED eventValidate if checksum has changedDelete HCO crosswalk and validateDelete HCP crosswalk and validateDelete Relation crosswalk and validateCreateChangeRequestTestcreateChangeRequestTestCreate Change RequestCreate HCPGet HCP and validateUpdate HCP's First Name with dcrId from Change RequestInit Change Request and validate response is not nullDelete Change RequestDelete HCP's crosswalkAttributesEnricherNoCachedTesttestCreateFailedRelationNoCacheCreate HCOCreate HCPCreate Relation with missing attributes - validate response status is failedSearch Relation in mongo and check that it does not existAttributesEnricherTesttestCreateCreate HCP and validateCreate HCO and validateCreate Relation and validateGet HCP and validate if ProviderAffiliations attribute existsUpdate HCP's Last NameGet HCP and validate if ProviderAffiliations attribute existsCheck if Last Name is updatedRemove HCP, HCO and Relation by 
crosswalkAttributesEnricherWithDeleteDateOnRelationTesttestCreateAndUpdateRelationWithDeleteDateCreate HCP and validateCreate HCO and validateCreate Relation and validateGet HCP and validate if ProviderAffiliations attribute existsUpdate HCP's Last NameGet HCP and validate if ProviderAffiliations attribute existsCheck if Last Name is updatedSet Relation's crosswalk delete date to now and updateUpdate HCP's Last NameGet HCP and validate that ProviderAffiliations attribute does not existCheck if Last Name is updatedSend update Relation request and check status is deletedAttributesEnricherWithMultipleEndObjectstestCreateWithMultipleEndObjectsCreate HCO_1Create HCO_2Create HCPCreate Relation between HCP and HCO_1Create Relation between HCP and HCO_2Get HCP and validate if ProviderAffiliations attribute existsUpdate HCP's Last NameGet HCP and validate that ProviderAffiliations attribute existsRemove all entitiesUpdateEntityAttributeTestshouldUpdateIdentifierCreate HCP and validateUpdate HCP's attribute: insert identifier and validateUpdate HCP's attribute: update identifier and validateUpdate HCP's attribute: merge identifier and validateUpdate HCP's attribute: replace identifier and validateUpdate HCP's attribute: delete identifier and validateRemove all entities by crosswalkCreateEntityTestcreateAndUpdateEntityTestCreate DCR entityGet entity and validateUpdate DCR ID attributeValidate updated entityGet matches entities and validate that response is not nullRemove entityCreateHCPWithoutCOMPANYAddressIdcreateHCPTestCreate HCPGet HCP and validate fieldsGet generatedId from Mongo cache collection keyIdRegistryValidate if created HCP's address has COMPANYAddressIDCheck if COMPANYAddressID equals generatedIdRemove entityGetMatchesTestcreateHCPTestCreate HCP_1Create HCP_2 with similar attributes and valuesGet matches for HCP_1Check if matches size >= 0TranslateLookupsTesttranslateLookupTestSend get translate lookups request: Type=AddressStatus, 
canonicalCode=A,sourceName=ONEKEYAssert response is not nullDelayRankActivationTesttestCreate HCO_ACREATE HCO_B1CREATE HCO_B2CREATE HCO_B3CREATE RELATION B1 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)CREATE RELATION B2 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)CREATE RELATION B3 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 3 for B1.AUPDATE RANK event exists with Rank = 2 for B2.ACheck PUBLISHED events:B3 - RELATIONSHIP_CREATED event exists with Rank = 1B1 - RELATIONSHIP_CHANGED event exists with Rank = 3B2 - RELATIONSHIP_CHANGED event exists with Rank = 2Check order of events:B1 - RELATIONSHIP_CHANGED and B2 - RELATIONSHIP_CHANGED are after UPDATE eventsCREATE HCO_B4CREATE RELATION B4 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: GRV)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for B4.ACheck PUBLISHED events:B4 - RELATIONSHIP_CHANGED event exists with Rank = 4Check order of events:B4 - RELATIONSHIP_CHANGED is after UPDATE eventsCREATE HCO_B5CREATE RELATION B5 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.FPA, source: ONEKEY)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for B1.AUPDATE RANK event exists with Rank = 3 for B2.AUPDATE RANK event exists with Rank = 2 for B3.AUPDATE RANK event exists with Rank = 5 for B4.ACheck PUBLISHED events:B1 - RELATIONSHIP_CHANGED event exists with Rank = 4B2 - RELATIONSHIP_CHANGED event exists with Rank = 3B3 - RELATIONSHIP_CHANGED event exists with Rank = 2B4 - RELATIONSHIP_CHANGED event exists with Rank = 5B5 - RELATIONSHIP_CREATED event exists with Rank = 1Check order of events:All published RELATIONSHIP_CHANGED are after UPDATE_RANK eventsSet deleteDate on B1.ACheck UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for B4.ACheck PUBLISHED events:B4 - RELATIONSHIP_CHANGED event exists with Rank 
= 4Check order of events:Published RELATIONSHIP_CHANGED is after UPDATE_RANK eventGet B2.A relation and check Rank = 3Get B3.A relation and check Rank = 2Get B4.A relation and check Rank = 4Get B5.A relation and check Rank = 1Clear dataRawDataTestshouldRestoreHCPCreate HCP entityDelete HCP by crosswalkSearch entity by name - expected not foundRestore HCP entitySearch entity by nameClear datashouldRestoreHCOCreate HCO entityDelete HCO by crosswalkSearch entity by name - expected not foundRestore HCO entitySearch entity by nameClear datashouldRestoreRelationCreate HCP entityCreate HCO entityCreate relation from HCP to HCODelete relation by crosswalkGet relation by crosswalk - expected not foundRestore relationGet relation by crosswalkClear dataTestBatchUpdateAttributesTesttestBatchWorkFlowTestCreate 2 x HCP and validate response statusGet entities and validate if they are createdTest Insert IdentifiersCreate batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if inserted identifiers are visible in Reltio)Test Update IdentifiersCreate batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if updated identifiers are visible in Reltio)Test Merge IdentifiersCreate batch instanceCreate batch stage: 
UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if merged identifiers are visible in Reltio)Test Replace IdentifiersCreate batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if replaced identifiers are visible in Reltio)Test Delete IdentifiersCreate batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if deleted identifiers are visible in Reltio)Remove all entities by crosswalk and all batch instances by id" + }, + { + "title": "Integration Test For COMPANY Model China", + "pageID": "302681804", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+China", + "content": "Test classTest caseFlowChinaComplexEventCaseshouldCreateHCPAndConnectWithAffiliatedHCOByNameCreate HCO (AffiliatedHCO) and validate responseGet entities with filter by HCO's Name and entityTypeValidate if existsCreate HCP (V2Complex method)with not existing MainHCOwith 
affiliatedHCO and existing HCO's NameGet HCP and validateCheck if affiliatedHCO Uri equals created HCO uri (Workplace)Remove entitiesshouldCreateHCPAndMainHCOCreate HCO (AffiliatedHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO - set uri from previously created HCOwith MainHCO without uriGet HCP and validateCheck if affiliatedHCO Uri equals created HCO uri (Workplace)Validate Workplace attributesRemove entitiesshouldCreateHCPAndAffiliatedHCOCreate HCO (MainHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO without uri (not existing HCO)with MainHCO - set objectURI from previously created Main HCOGet HCP and validateCheck if MainHCO Uri equals created HCO uri (MainWorkplace)Validate MainWorkplace attributesRemove entitiesshouldCreateHCPAndConnectWithAffiliationsCreate HCO (MainHCO) and validate responseCreate HCO (AffiliatedHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO - set uri from previously created Affiliated HCOwith MainHCO - set objectURI from previously created Main HCOGet HCP and validateCheck if affiliatedHCO Uri equals created HCO uri (Workplace)Check if MainHCO Uri equals created HCO uri (MainWorkplace)Validate Workplace and MainWorkplace attributesRemove entitiesshouldCreateHCPAndAffiliationsCreate HCP (V2Complex method)without AffiliatedHCO uriwithout MainHCO objectURIGet HCP and validateCheck if Workplace is created and has correct attributesCheck if MainWorkplace is created and has correct attributesValidate Workplace and MainWorkplace attributesRemove entitiesChinaSimpleEventCaseshouldPublishCreateHCPInIqiviaModelCreate HCP in COMPANYModel (V2Simple method)Validate responseGet HCP entity and validate attributesWait for Kafka output eventValidate eventValidate attributes and check if event is in IqiviaModelRemove entitiesChinaMergeEntityTestCreate HCP_1 (V2Complex method) and validate responseCreate HCP_2 (V2Complex method) and validate responseMerge entities HCP_1 and 
HCP_2Get HCP by HCP_1 uri and check if existsWait for Kafka event on merge response topicValidate Kafka eventRemove entitiesChinaWorkplaceValidationEntityTestshouldValidateMainHCOCreate HCP (V2Complex method)with 2 affiliatedHCO which do not existwith 1 MainHCO which does not existGet HCP entity and check if it existsWait for Kafka event on response topicValidate Kafka eventValidate MainWorkplace (1 exists)Validate Workplaces (2 exists)Validate MainHCO (1 exists)Assert MainWorkplace equals MainHCORemove entities"
  },
  {
    "title": "Integration Test For COMPANY Model DCR2Service",
    "pageID": "302681794",
    "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+DCR2Service",
    "content": "Test classTest caseFlowDCR2ServiceTestshouldCreateHCPTestCreate HCO and validate responseCreate DCR request (hcp-create)Send Apply Change requestGet DCR status and validateValidate created entityRemove entitiesshouldUpdateHCPChangePrimarySpecialtyTestCreate HCPCreate DCR request: update HCP Primary SpecialityValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldCreateHCOTestCreate DCR Request (hco-create) and validate responseApply Change requestGet DCR status and validateGet HCO and validateGet DCR and validateRemove all entitiesshouldUpdateHCPChangePrimaryAffiliationTestCreate HCO_1 and validate responseCreate HCO_2 and validate responseCreate HCP with affiliations and validate responseGet HCO_1 and save COMPANYGlobalCustomerIdGet HCP and save COMPANYGlobalCustomerIdGet entities - search by HCO_1's COMPANYGlobalCustomerId and check if existsGet entities - search by HCP's COMPANYGlobalCustomerId and check if existsCreate DCR Request and validate response: update HCP primary affiliationApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCPIgnoreRelationCreate HCO_1 and validate responseCreate HCO_2 and validate responseCreate HCP with 
affiliations and validate responseGet HCO_1 and save COMPANYGlobalCustomerIdGet HCP and save COMPANYGlobalCustomerIdGet entities - search by HCO_1's COMPANYGlobalCustomerId and check if existsGet entities - search by HCP's COMPANYGlobalCustomerId and check if existsCreate DCR Request and validate response: ignore affiliationApply Change requestGet DCR status and validateWait for RELATIONSHIP_CHANGED eventWait for RELATIONSHIP_INACTIVATED eventGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCPAddPrimaryAffiliationTestCreate HCO and validate responseCreate HCP and validate responseCreate DCR Request: HCP update added new primary affiliationValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCOAddAffiliationTestCreate HCO_1 and validateCreate HCO_2 and validateCreate DCR Request: update HCO add other affiliation (OtherHCOtoHCOAffiliations)Validate DCR responseApply Change requestGet DCR status and validateGet HCO's connections (OtherHCOtoHCOAffiliations) and validateGet DCR and validateRemove all entitiesshouldInactivateHCPCreate HCP and validate responseCreate DCR Request: Inactivate HCPValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCPAddPrivateAddressCreate HCP and validate responseCreate DCR Request: update HCP - add private addressValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCPAddAffiliationToNewHCOCreate HCO and validate responseCreate HCP and validate responseCreate DCR Request: update HCP - add affiliation to new HCOValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet HCO entity by crosswalk and save uriGet DCR and validateRemove all entitiesshouldReturnValidationErrorCreate DCR request with unknown entityUriValidate DCR response 
and check if REQUEST_FAILEDshouldCreateHCPOneKeyCreate HCP and validate responseCreate DCR Request: create OneKey HCPValidate DCR responseGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldCreateHCPOneKeySpecialityMappingCreate HCP and validate responseCreate DCR Request: create OneKey HCP with speciality valueValidate DCR responseGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldCreateHCPOneKeyRedirectToReltioCreate HCP and validate responseCreate DCR Request: create OneKey HCP with speciality value "not found key"Validate DCR responseApply Change RequestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldCreateHCOOneKeyCreate HCO and validate responseCreate DCR Request: create OneKey HCOValidate DCR responseGet DCR status and validateGet HCO and validateGet DCR and validateRemove all entitiesshouldReturnMissingDataExceptionCreate DCR Request with missing dataValidate DCR response: status = REQUEST_REJECTED and response has correct messageshouldReturnForbiddenAccessExceptionCreate DCR Request with forbidden access dataValidate DCR response: status = REQUEST_FAILED and response has correct messageshouldReturnInternalServerErrorCreate DCR Request with internal server error dataValidate DCR response: status = REQUEST_FAILED and response has correct message"
  },
  {
    "title": "Integration Test For COMPANY Model Region AMER",
    "pageID": "302681796",
    "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+AMER",
    "content": "Test classTest caseFlowMicroBrickTestshouldCalculateMicroBricksCreate HCP and validate responseWait for event on ChangeLog topic with specified countryGet HCP entity and validate MicroBrickUpdate HCP with new zip codes and validate responseWait for event on ChangeLog topic with specified countryGet HCP entity and validate MicroBrickDelete entitiesValidateHCPTestvalidateHCPTestCreate HCP and validate response 
statusCreate validation request with valid paramsAssert if response is ok and validation status is "Valid"validateHCPTestNotValidCreate HCP and validate response statusCreate validation request with not valid paramsAssert if response is ok and validation status is "NotValid"validateHCPLookupTestCreate HCP with "Speciality" attribute and validate response statusCreate lookup validation request with "Speciality" attributeAssert if response is ok and validation status is "Valid"" + }, + { + "title": "Integration Test For COMPANY Model Region EMEA", + "pageID": "347655258", + "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+EMEA", + "content": "Test classTest caseFlowAutofillTypeCodeTestshouldProcessNonPrescriberCreate HCP entityValidate type code value is Non-Prescriber on output topicInactivate HCP entityValidate type code value is Non-Prescriber on history inactive topicDelete entityshouldProcessPrescriberCreate HCP entityValidate type code value is Prescriber on output topicInactivate HCP entityValidate type code value is Prescriber on history inactive topicDelete entityshouldProcessMergeCreate first HCP entityValidate type code is Prescriber on output topicCreate second HCP entityValidate type code is Non-Prescriber on output topicMerge entitiesValidate type code is Prescriber on output topicInactivate first entityValidate type code is Non-PrescriberDelete second entity crosswalkValidate entity has end date on output topicValidate type code value is Prescriber on output topicDelete entityshouldNotUpdateTypeCodeCreate HCP entity with correct type code valueValidate there is no type code value provided by HUB technical source on output topicDelete entityshouldProcessLookupErrorsCreate HCP entity with invalid sub type code and speciality valuesValidate type code value is concatenation of sub type code and speciality values on output topicInactivate HCP entityValidate type code value is concatenation of sub type code and speciality values on 
history inactive topicDelete entity"
  },
  {
    "title": "Integration Test For COMPANY Model Region US",
    "pageID": "302681784",
    "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+US",
    "content": "Test classTest caseFlowCRUDMCOAsynctestSend MCORequest to Kafka topicWait for created eventValidate created MCOUpdate MCO's nameSend MCORequest to Kafka topicWait for updated eventValidate updated entityDelete all entitiesTestBatchMCOTesttestBatchWorkflowTestCreate batch instance: testBatchCreate MCO_LOADING stageSend MCO entities to MCO_LOADING stageFinish MCO_LOADING stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entities are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdRemove all entitiestestBatchWorkflowTest_SendEntities_Update_and_MD5SkipCreate batch instance: testBatchCreate MCO_LOADING stageSend MCO entities to MCO_LOADING stageFinish MCO_LOADING stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entities are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdCreate batch instance: testBatchCreate MCO_LOADING stageSend MCO entities to MCO_LOADING stage (skip 2 entities MD5 checksum changed)Finish MCO_LOADING stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entities are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdRemove all entitiesMCOBundlingTesttestSend multiple MCORequest to Kafka topicWait for created event for every MCORequestCheck if number of received events equals number of sent requestsSet crosswalk's delete date 
to now for every requestSend all updated MCORequests to Kafka topicWait for deleted event for every MCORequestEntityEventChecksumTesttestCreate HCPWait for HCP_CREATED eventGet created HCP by uri and check if existsFind by id created HCP in mongo and save "checksum"Update HCP's attribute and send requestWait for HCP_CHANGED eventFind by id created HCP in mongo and saveCheck if old checksum is different than current checksumRemove HCPWait for HCP_REMOVED eventEntityEventsTesttestCreate MCOWait for ENTITY_CREATED eventUpdate MCOWait for ENTITY_CHANGED eventRemove MCOWait for ENTITY_REMOVED eventHCPEventsMergeTesttestCreate HCP_1 and validate responseWait for HCP_CREATED eventGet HCP_1 and validate attributesCreate HCP_2 and validate responseGet HCP_2 and validate attributesMerge HCP_1 and HCP_2Wait for HCP_MERGED eventGet HCP_2 and validate attributesDelete HCP_1 crosswalkWait for HCP_CHANGED event and validate HCP_URIDelete HCP_1 and HCP_2 crosswalksWait for HCP_REMOVED eventDelete HCP_2 crosswalkHCPEventsNotTrimmedMergeTesttestCreate HCP_1 and validate responseWait for HCP_CREATED eventGet HCP_1 and validate attributesCreate HCP_2 and validate responseGet HCP_2 and validate attributesMerge HCP_1 and HCP_2Wait for HCP_MERGED event and validate attributesGet HCP_2 and validate attributesDelete HCP_1 crosswalkWait for HCP_CHANGED event and validate HCP_URIDelete HCP_1 and HCP_2 crosswalksWait for HCP_REMOVED eventDelete HCP_2 crosswalkMCOEventsTesttestCreate MCO and validate responseWait for MCO_CREATED event and validate urisUpdate MCO's name and validate responseWait for MCO_CHANGED event and validate urisDelete MCO's crosswalk and validate response statusWait for MCO_REMOVED event and validate urisRemove entitiesPotentialMatchLinkCleanerTestCreate HCO: Start FLEXGet HCO and validateCreate HCO: End ONEKEYGet HCO and validateGet matches by Start FLEX HCO entityIdValidate matchesGet not matches by Start FLEX HCO entityIdValidate - not match does not existGet Start FLEX 
HCO from mongo entityMatchesHistory collectionValidate matches from mongoCreate DerivedAffiliation - relation between FLEX and HCOGet matches by Start FLEX HCO entityIdCheck that there are no matchesGet not matches by Start FLEX HCO entityIdValidate not matches responseRemove all entitiesUpdateMCOTesttest1_createMCOTestCreate MCO and validate responseGet MCO by uri and validateRemove entitiestest2_updateMCOTestCreate MCO and validate responseUpdate MCO's nameGet MCO by uri and validateRemove entitiestest3_createMCOBatchTestCreate multiple MCOs using postBatchMCOValidate responseRemove entitiesUpdateUsageFlagsTesttest1_updateUsageFlagsCreate HCP and validate responseGet entities using filter (Country & Uri) and validate if HCP existsGet entities using filter (Uri) and validate if HCP existsUpdate usage flags and validate responseGet entity and validate updated usage flagstest2_updateUsageFlagsCreate HCO and validate responseGet entities using filter (Country & Uri) and validate if HCO existsGet entities using filter (Uri) and validate if HCO existsUpdate usage flags and validate responseGet entity and validate updated usage flagstest3_updateUsageFlagsCreate HCO with 2 addresses (COMPANYAddressId=3001 and 3002) and validate responseGet entities using filter (Country & Uri) and validate if HCO existsGet entities using filter (Uri) and validate if HCO existsUpdate usage flags (COMPANYAddressId = 3002, action=set) and validate responseUpdate usage flags (COMPANYAddressId = 3001, action=set) and validate responseGet entity and validate updated usage flagsRemove usage flag and validate responseGet entity and validate updated usage flagsClear usage flag and validate responseGet entity and validate updated usage flags "
  },
  {
    "title": "MDM Factory",
    "pageID": "164470002",
    "pageLink": "/display/GMDM/MDM+Factory",
    "content": "\nMDM Client Factory was implemented in the MDM manager to select a specific MDM Client (Reltio/Nucleus) based on a client selector configuration. 
The factory allows registering multiple MDM Clients at runtime and choosing one based on country. To register the factory, the following example configuration needs to be defined:\n\n\tclientDecisionTable\n\n\n\nBased on this configuration, a specific request will be processed by Reltio or Nucleus. Each selector has to define a default view for a specific client. For example, 'ReltioAllSelector' has a definition of a default and a PforceRx view, which corresponds to two factory clients with different user names for Reltio.\n\n\n\tmdmFactoryConfig\n\n\n\nThis map contains the MDM Factory Clients. Each client has a specific unique name and a configuration with URL, username, ●●●●●●●●●●●● other specific values defined for a Client. This unique name is used in the decision table to choose a factory client based on the country in the request.\n "
  },
  {
    "title": "Mulesoft integration",
    "pageID": "447577227",
    "pageLink": "/display/GMDM/Mulesoft+integration",
    "content": "DescriptionThe Mulesoft platform is an integration portal used to integrate clients from inside and outside of the COMPANY network with the MDM Hub. Mule integrationAPI Endpoints/search/hcp : The operation allows searching for HCPs in a country with multiple filter criteria.MDM compiles the final data for a Profile (Golden Profile) when the data for it is requested./search/hco: The operation allows searching for HCOs in a country with multiple filter criteria./hcp : The API allows management of HCPs in MDM. (Get, Create, Update)/hco : The API allows management of HCOs in MDM. (Get, Create, Update)/lookups : This operation allows fetching the list of values configured in MDM/subscriptions/hcp : This operation allows 'subscribing to' multiple HCP Profiles in a single request. The subscription is done by allowing a source to create a 'crosswalk' of the source system on the profile. It also allows the source system to insert all data that the source system has for the respective profile in MDM while subscribing. 
The request specification is the same as /hcp POST, but it expects an array of profiles. The subscription works in conjunction with Kafka events that are triggered from MDM for any 'subscribed' profiles that are modified by any other source system./entities/{countryType} : This operation allows querying MDM Reltio directly for Entities with custom Filter criteria. It allows deciding whether the response needs to be formatted or whether the data is required without formatting - as it is provided by MDM./batch/hcp : This resource allows management of multiple HCPs in MDM at a time. (Create, Update)/batch/hco : This resource allows management of multiple HCOs in MDM at a time. (Create, Update)/search/connection : This resource allows viewing the relationships an object (HCP, HCO) has one level in a selected direction (up, down, both).MuleSoft API Catalog:Requests routing on Mule sideThe values below can change. Please check the source: MDM Tenant URL Configuration - AIS Application Integration Solutions Mule - ConfluenceAPI Country MappingTenantDevTest 
(QA)StageProdUSUSUSUSUSEMEAUK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,MEUK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,MEUK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,MEUK,GB,IE,AE,AO,BF,BH,BI,BJ,BW,CD,CF,CG,CI,CM,CV,DJ,DZ,EG,ET,GA,GH,GM,GN,GQ,GW,IQ,IR,JO,KE,KW,LB,LR,LS,LY,MA,MG,ML,MR,MU,MW,NA,NG,OM,QA,RW,SA,SD,SL,SN,SY,SZ,TD,TG,TN,TZ,UG,YE,ZA,ZM,ZW,FR,DE,IT,ES,AD,BL,GF,GP,MC,MF,MQ,NC,PF,PM,RE,TF,WF,YT,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,ILAMERCA,BR,AR,UY,MX,CL,CO,PE,BO,ECCA,BR,AR,UY,MX,CL,CO,PE,BO,ECCA,BR,AR,UY,MX,CL,CO,PE,BO,ECCA,BR,AR,UY,MXAPACAU,NZ,IN,KR,JP,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN,BD,NP,LK,MNAU,NZ,IN,KR,JP,HK,ID,MY,PK,PH,SG,TW,TH, VN,MO,BN,NP,LK,MNKR,JP,AU,NZ,IN,HK,ID,MY,PK,PH,SG,TW,TH, VN,MO,BN,NP,LK,MNKR,JP,AU,NZ,IN,HK,ID,MY,PK,PH,SG,TW,TH, VN,MO,BNEXUS (IQVIA)Everything elseEverything elseEverything elseEverything elseAPI URLsMuleSoft MDM HCP Reltio API URLsEnvironmentCloud APIGround 
APIDevhttps://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-devhttp://mule4api-comm-amer-dev.COMPANY.com/mdm-hcp-reltio-v1/Testhttps://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-tst/http://mule4api-comm-amer-tst.COMPANY.com/mdm-hcp-reltio-v1Stagehttps://muleapic-amer-stg.COMPANY.com/mdm-hcp-reltio-dlb-v1-stghttp://mule4api-comm-amer-stg.COMPANY.com/mdm-hcp-reltio-v1Prodhttps://muleapic-amer.COMPANY.com/mdm-hcp-reltio-dlb-v1http://mule4api-comm-amer.COMPANY.com/mdm-hcp-reltio-v1IntegrationsIntegrations can be found under below url:MDM - AIS Application Integration Solutions Mule - ConfluenceMule documentation referenceSolution Profiles/MDM https://confluence.COMPANY.com/display/AAISM/MDMMDM HCP Reltio APIhttps://confluence.COMPANY.com/display/AAISM/MDM+HCP+Reltio+APIMDM Tenant URL Configurationhttps://confluence.COMPANY.com/display/AAISM/MDM+Tenant+URL+ConfigurationUsing OAuth2 for API AuthenticationDescribed how to use OAuth2How to use an APIDescribed how to request access to API and how to use itConsumer On-boardingDescribed consumer onboarding process" + }, + { + "title": "Multi view", + "pageID": "164470089", + "pageLink": "/display/GMDM/Multi+view", + "content": "\nDuring getEntity or getRelation operation "ViewAdapterService" is activated. This feature contains two steps:\n\n\tAdapt\n\n\n\nBased on the following map each entity will be checked before return:\n\nThis means that for PforceRx view, only entities with source CRMMI will be returned. Otherwise getEntity or getRelation operations will return "404" EntityNotFound exception. \nWhen entity can be returned with success the next step is started: \n\n\tFilter\n\n\n\nEach entity is filtered based on attribute Uris list provided in crosswalks.attribute list.\nThe process will take each attribute from entity and will check if this attribute exists in restricted for specific source crosswalk attribute list. When this attribute is not on restricted list, then it will be removed from entity. 
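The Filter step just described can be sketched as follows. The function and field names here are illustrative assumptions, not the actual ViewAdapterService code — only the behavior (keep an attribute when it appears on the per-source allowed list, drop it otherwise) is taken from this page:

```python
# Sketch of the Filter step: keep only those attributes whose names appear on
# the allowed list gathered for the source's crosswalk; all others are removed.
def filter_entity_attributes(entity, allowed_attribute_names):
    """Return a copy of the entity with attributes not on the allowed list removed."""
    filtered = dict(entity)
    filtered["attributes"] = {
        name: values
        for name, values in entity.get("attributes", {}).items()
        if name in allowed_attribute_names
    }
    return filtered

entity = {
    "uri": "entities/0TWPf9d",
    "attributes": {
        "FirstName": [{"value": "John"}],
        "LastName": [{"value": "Doe"}],
        "InternalScore": [{"value": "7"}],  # hypothetical attribute, not on the list
    },
}
allowed = {"FirstName", "LastName"}  # e.g. attributes permitted for the CRMMI source
print(sorted(filter_entity_attributes(entity, allowed)["attributes"]))  # -> ['FirstName', 'LastName']
```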
This way we receive the entity for the specific view with only the attributes allowed for that source.\nThe MDM publishing HUB has an additional configuration for the multi view process. When an entity with a specific country matches the configuration, the getEntity operation is invoked with country and view name parameters. Then the MDM gateway Factory is activated, and the entity is returned from a specific Reltio instance and saved in a mongo collection suffixed with the view name.\n \nWith this configuration, entities from the BR country will be saved in the entityHistory and entityHistory_PforceRx mongo collections. In the view collection, entities will be adapted and filtered by the View Adapter Service. " }, { "title": "Playbook", "pageID": "218437749", "pageLink": "/display/GMDM/Playbook", "content": "The document describes how to request access to different sources. " }, { "title": "Issues list", "pageID": "218441145", "pageLink": "/display/GMDM/Issues+list", "content": "" }, { "title": "Add a user to a new group.", "pageID": "218438493", "pageLink": "/pages/viewpage.action?pageId=218438493", "content": "To create a request you need to use this link: https://requestmanager1.COMPANY.com/Group/Then choose as follows:Then search for a group and click 'Request access':As the last step, choose the 'View Cart' button and submit your request. " }, { "title": "Snowflake new schema/group/role creation", "pageID": "218437752", "pageLink": "/pages/viewpage.action?pageId=218437752", "content": "1. Connect with: https://digitalondemand.COMPANY.com/2. Click the 'Get Support' button.3. Then click that one:4. And as a next step:5. Now you are on the ticket creation page. The most important thing is to place the proper queue name in the detailed description field. For example, the queue name for Snowflake issues looks like this: gbl-atp-commercial snowflake domain admin. I recommend placing it on the first line. Then the request text is required.6. 
Here is a typical request for a new schema:gbl-atp-commercial snowflake domain adminHello,\nI'd like to ask you to create a new schema and new roles on the Snowflake side.\nNew schema name: PTE_SL\nEnvironments: DEV, QA, STG, PROD, details below:\nDEV\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_DEV_DB\nQA\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_QA_DB\nSTG\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_STG_DB\nPROD\t\nSnowflake instance: https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_PROD_DB\n\nAdd new roles with names (one for each environment): COMM_GBL_MDM_DMART_[Dev/QA/STG/Prod]_PTE_ROLE\nwith read-only access on Customer_SL & PTE_SL\nand\nadd roles with full access to the new schema with names (one for each environment): COMM_GBL_MDM_DMART_[Dev/QA/STG/Prod]_DEVOPS_ROLE - like in the customer_sl schema7. If you are requesting a new role too - like in the example above - you need to request that this role be added to AD. In this case you need to provide primary and secondary owner details for all groups to be created. You can send primary and secondary owner data or write that the ownership should be set as in another existing role. 8. 
Ticket example: https://digitalondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=RF3490743 " }, { "title": "AWS ELB NLB configuration request", "pageID": "218440089", "pageLink": "/display/GMDM/AWS+ELB+NLB+configuration+request", "content": "To create a ticket use this link: http://btondemand.COMPANY.com/Please follow this link if you want to know all the specific steps and click: Snowflake new schema/group/role creationRemember to add the proper queue name!In the request please attach the full list of general information:VPCELB TypeHealth ChecksAllowed incoming traffic fromThen please add the specific ELB NLB information FOR EACH NLB ELB you requested - even if the information is the same and obvious:ListenerTarget Group No of ELBTypeEnvironmentELB Health CheckTarget Group additional information: e.g.: 1 Target group with 3 servers:portWhere to add a Listener: e.g.: Listener to be added in ELB #Listener NameSecurity Group informationAdditional information: e.g.: IP ●●●●●●●●●●●● mdm-event-handler (Prod) should be able to access this ELBTicket example: http://btod.COMPANY.com/My-Tickets/Ticket-Details?ticket=IM40983303E.g. request text:VPC: Public\nELB Type: Network Load Balancer\nHealth Checks: Passive\nAllowed incoming traffic from:\n●●●●●●●●●●●● mdm-event-handler (Prod)\n\n1. API\nListener:\napi-emea-prod-gbl-mdm-hub-ext.COMPANY.com:8443\n\nTarget Group:\neuw1z2pl116.COMPANY.com:8443\neuw1z1pl117.COMPANY.com:8443\neuw1z2pl118.COMPANY.com:8443\n\n2. 
KAFKA\n\n2.1\nListener:\nkafka-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl116.COMPANY.com:9095\neuw1z1pl117.COMPANY.com:9095\neuw1z2pl118.COMPANY.com:9095\n\n2.2\nListener:\nkafka-b1-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl116.COMPANY.com:9095\n\n2.3\nListener:\nkafka-b2-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z1pl117.COMPANY.com:9095\n\n2.4\nListener:\nkafka-b3-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl118.COMPANY.com:9095\n\nGBL-BTI-EXT HOSTING AWS CLOUD" + }, + { + "title": "To open a traffic between hosts", + "pageID": "218441143", + "pageLink": "/display/GMDM/To+open+a+traffic+between+hosts", + "content": "To create a ticket using this link: http://btondemand.COMPANY.com/Please follow this link if you want to know all the specific steps and click: Snowflake new schema/group/role creationRemember to add a proper queue name!In a request please attached the full list of general information:SourceIP rangeIP range....Targets - remember to add each targets instancesTarget1NameCnameAddressPortTarget2........Example ticket: http://btod.COMPANY.com/My-Tickets/Ticket-Details?ticket=IM41240161Example request text:Source:\n1. IP range: ●●●●●●●●●●●●●\n2. 
IP range: ●●●●●●●●●●●●●\n\nTarget1:\nLoadBalancer:\ngbl-mdm-hub-us-prod.COMPANY.com canonical name = internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com.\nName: internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com\nAddress: ●●●●●●●●●●●●●●\nName: internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com\nAddress: ●●●●●●●●●●●●●●\nTarget port: 443\n\nTarget2:\nhosts:\namraelp00007848.COMPANY.com(●●●●●●●●●●●●●●)\namraelp00007849.COMPANY.com(●●●●●●●●●●●●●)\namraelp00007871.COMPANY.com(●●●●●●●●●●●●●●)\ntarget port: 8443" + }, + { + "title": "Support information with queue and DL names", + "pageID": "218438484", + "pageLink": "/display/GMDM/Support+information+with+queue+and+DL+names", + "content": "There are a few places when you can send your request:https://digitalondemand.COMPANY.com/getsupporthttps://requestmanager.COMPANY.com/Caution! When we are adding a new client to our architecture there is a MUST to get from him a support queue.Support queuesSystem/component/area nameDedicated queueSupport DLAdditional notesRapid, Digital Labs, GCP etcGBL-EPS-CLOUD OPS FULL SUPPORTEPS-CloudOps@COMPANY.comAWS Global, EMEA environmentsIOD AWS TeamGBL-BTI-IOD AWS FULL SUPPORTEPS-CloudOps@COMPANY.com (same as EPS, not a mistake)Rotating AWS keys, AWS GBL US, AWS FLEX USIODGBL-BTI-IOD FULL OS SUPPORT (VMC)VMware CloudFLEX TeamGBL-F&BO-MAST AMM SUPPORTDL-CBK-MAST@COMPANY.comData, file transfer issues in US FLEX environmentsSAP Interface Team (FLEX)GBL-SS SAP SALES ORDER MGMTQueries regarding SAP FLEX input filesSAP Master Date Team (FLEX)Dianna.OConnell@COMPANY.comQueries regarding data in SAP FLEXNetwork TeamGBL-NETWORK DDIAll domain and DNS changesFirewall TeamGBL-NETWORK ECSGBL-NETWORK-SCS@COMPANY.com"Big" firewall changesSnowflakeGBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMINMDM Hub - non-prodGBL-ADL-ATP GLOBAL MDM - HUB DEVOPSDL-ATP_MDMHUB_SUPPORT@COMPANY.comMDM Hub - prodGBL-ADL-ATP GLOBAL MDM - HUB 
DEVOPSDL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.comPDKSGBL-BAP-Kubernetes Service L2PDCSOps@COMPANY.comPDKS Kubernetes cluster, ie. new MDM Hub Amer NPRODGo to http://containers.COMPANY.com/ "PDKS Get Help" for details.PDKS Engineering TeamGBL-BTI-SYSTEMS ENGINEERING BTCSDL-PDCS-ADMIN@COMPANY.comPDKS Kubernetes - For Environment provisioning/modification issues with CloudBrokerage/IODAMER/APAC/EMEA/GBLUS Reltio - COMPANYGBL-ADL-ATP GLOBAL MDM - RELTIODL-ADL-ATP-GLOBAL_MDM_RELTIO@COMPANY.comTeam responsible for Reltio and ETL batch loads.GBL/USFLEX Reltio - IQVIAGBL-MDM APP SUPPORTCOMPANY-MDM-Support@iqvia.comDL-Global-MDM-Support@COMPANY.comReltio consultingN/ASumit Singh - reltio consulting (NO support)sumit.singh@reltio.comSumit.Singh@COMPANY.comIt is no support, we can use that contact on technical issues level (API implementation etc) Reltio UI with data accesuse request manager: https://requestmanager.COMPANY.com/Reltio Commercial MDM - GBLUSReltio Customer MDM - GBLPing FederateDL-CIT-PXEDOperations@COMPANY.comPing Federate/OAuth2 supportMAPP NavigatorGBL-FBO-MAPP NAVIGATOR HYPERCAREDL-BTAMS-MAPP-Navigator@COMPANY.com (rarely respond)MAPP Nav issuesHarmony BitbucketGBL-CBT-GBI HARMONY SERVICESDL-GBI-Harmony-Support@COMPANY.comConfluence page:ATP Harmony Service SDConfluence, JiraGBL-DA-DEVSECOPS TOOLS SUPPORTDL-SESRM-ATLASSIAN-SUPPORT ArtifactoryGBL-SESRM-ARTIFACTORY SUPPORTDL-SESRM-ARTIFACTORY-SUPPORT@COMPANY.comMule integration team supportDL-AIS Mule Integration Support DL-AIS-Mule-Integration-Support@COMPANY.comUsed to integrate with mule proxy VOD DCRLaurie.Koudstaal@COMPANY.comPOC if Veeva did not send an input file for the VOD DCR process for 24 hoursExample: there is a description how to request with https://digitalondemand.COMPANY.com/for a ticket assigned to one of groups above. 
Snowflake new schema/group/role creation" + }, + { + "title": "Global Clients", + "pageID": "310963401", + "pageLink": "/display/GMDM/Global+Clients", + "content": "ClientContactCICRProbably AmishADTSDL-BTAMS-ENGAGE-PLUS@COMPANY.comEASIENGAGEESAMPLESSomya.Jain@COMPANY.com;Vijay.Bablani@COMPANY.com;Lori.Reynolds@COMPANY.comGANTGangadhar.Nadpolla@COMPANY.comGRACECory.Arthus@COMPANY.comGRVvikas.verma@COMPANY.com; Luther Chris ; Matej.Dolanc@COMPANY.comJOShweta.Kulkarni@COMPANY.comMAPDL-BT-Production-Engineering@COMPANY.com; Matej.Dolanc@COMPANY.comMAPPDL-BTAMS-MAPP-Navigator@COMPANY.com; Rajesh.K.Chengalpathy@COMPANY.comMEDICDL-F&BO-MEDIC@COMPANY.comMULEDL-AIS-Mule-Integration-Support@COMPANY.comAmish.Adhvaryu@COMPANY.comODSDL-GBI-PFORCERX_ODS_Support@COMPANY.comONEMEDMarsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.comPFORCEOLChristopher.Fani@COMPANY.comVEEVA_FIELDPFORCERXNagaJayakiran.Nagumothu@COMPANY.com;dl-pforcerx-support@COMPANY.comPTRSSagar.Bodala@COMPANY.com;bhushan.shanbhag@COMPANY.comJAPAN DWHDL-GDM-ServiceOps-Commercial_APAC@COMPANY.com DL-ATP-SERVICEOPS-JPN-DATALAKE@COMPANY.comCHINAChen, Yong ; QianRu.Zhou@COMPANY.comKOL_ONEVIEWDL-SFA-INF_Support_PforceOL@COMPANY.comSolanki,Hardik (US - Mumbai)Yagnamurthy, Maanasa (US - Hyderabad) NEXUS SriVeerendra.Chode@COMPANY.com;DL-Acc-GBICC-Team@COMPANY.comIMPROMPTUPRAWDOPODOBNIE AMISHCDWNarayanan, Abhilash Balan, Sakthi Raman, Krishnan ICUEBrahma, Bagmita Solanki, Hardik Tikyani, Devesh EVENTHUBSNOWFLAKEClientContactC360DL-C360_Support@COMPANY.comPT&EDL-PTE-Batch-Team@COMPANY.com>;  Drabold, Erich DQ_OPSmarkus.henriksson@COMPANY.com;dl-atp-dq-ops@COMPANY.comaccentureDL-Acc-GBICC-Team@COMPANY.comBig bossesPratap.Deshmukh@COMPANY.comMikhail.Komarov@COMPANY.comRafael.Aviles@COMPANY.com" + }, + { + "title": "How to login to Service Manager", + "pageID": "218448126", + "pageLink": "/display/GMDM/How+to+login+to+Service+Manager", + "content": "How to add a user to Service Manager toolChoose link: 
https://smweb.COMPANY.com/SCAccountRequest.aspx#/searchFind yourselfClick "Next >>"Choose the proper role: Service desk analyst – and click „Needs training”Once you have successfully completed your training, choose the groups to which you want to be added:GBL-ADL-ATP GLOBAL MDM - HUB DEVOPSYou do it here:Please remember that when you click “Add selected group to cart” there is a second approval step – click “SUBMIT”.When permissions are granted, you can explore Service Manager here: https://sma.COMPANY.com/sm/index.do" }, { "title": "How to Escalate btondemand Ticket Priority", "pageID": "218448925", "pageLink": "/display/GMDM/How+to+Escalate+btondemand+Ticket+Priority", "content": "Below is a copy of: AWS Rapid Support → How to Escalate Ticket PriorityHow to Escalate Ticket PriorityTickets will be opened as low priority by default and response time will align to the restoration and resolution times listed in the SLA below. If your request priority needs to be changed, follow these instructions:Use the Chat function at BT On Demand (or call the Service Desk at 1-877-733-4357)Select Get SupportSelect "Click here to continue without selecting a ticket option."Select ChatProvide the existing ticket number you already openedAsk that the ticket Priority be raised to Medium, High or Critical based on the issue and utilize one of the following key phrases to help set priority:Issue is Affecting Production ApplicationProduct Quality is being impactedBatch is unable to proceedLife safety or physical security is impactedDevelopment work stopped awaiting resolution" }, { "title": "How to get AWS Account ID", "pageID": "218453784", "pageLink": "/display/GMDM/How+to+get+AWS+Account+ID", "content": "MDM Hub components are deployed in different AWS Accounts. In a ticket support process, you might be asked about the AWS Account ID of the host, load balancer, or other resources. 
You can get it quickly in at least two ways described below.Using AWS ConsoleIn AWS Console: http://awsprodv2.COMPANY.com/ (How to access AWS Console) you can find the Account ID in any resource's Amazon Resource Name (ARN).Using curlSSH to a host and run this curl command, same for all AWS accounts:[ec2-user@euw1z2pl116 ~]$ curl http://169.254.169.254/latest/dynamic/instance-identity/document{"accountId" : "432817204314","architecture" : "x86_64","availabilityZone" : "eu-west-1b","billingProducts" : null,"devpayProductCodes" : null,"marketplaceProductCodes" : null,"imageId" : "ami-05c4f918537788bab","instanceId" : "i-030e29a6e5aa27e38","instanceType" : "r5.2xlarge","kernelId" : null,"pendingTime" : "2021-12-21T06:07:12Z","privateIp" : "10.90.98.178","ramdiskId" : null,"region" : "eu-west-1","version" : "2017-09-30"}" + }, + { + "title": "How to push Docker image to artifactory.COMPANY.com", + "pageID": "218458682", + "pageLink": "/display/GMDM/How+to+push+Docker+image+to+artifactory.COMPANY.com", + "content": "I am using the AKHQ image as an example.Login to artifactory.COMPANY.comLog in with COMPANY credentials: https://artifactory.COMPANY.com/artifactory/Generate Identity Token: https://artifactory.COMPANY.com/ui/admin/artifactory/user_profileUse COMPANY username and generated Identity Token in "docker login artifactory.COMPANY.com"marek@CF-19CHU8:~$ docker login artifactory.COMPANY.comAuthenticating with existing credentials...Login SucceededPull, tag, and pushmarek@CF-19CHU8:~$ docker pull tchiotludo/akhq:0.14.10.14.1: Pulling from tchiotludo/akhq...Digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9cStatus: Downloaded newer image for tchiotludo/akhq:0.14.1docker.io/tchiotludo/akhq:0.14.1marek@CF-19CHU8:~$ docker tag tchiotludo/akhq:0.14.1 artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1marek@CF-19CHU8:~$ docker push artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1The push refers to repository 
[artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq]0.14.1: digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9c size: 1577And that's all, you can now use this image from artifactory.COMPANY.com!" + }, + { + "title": "Emergency contact list", + "pageID": "218459579", + "pageLink": "/display/GMDM/Emergency+contact+list", + "content": "In case of emergency please inform the person from the list attached to each environment.EMEA:Varganin, A.J. ; Trivedi, Nishith ; Austin, John ; Simon, Veronica ; Adhvaryu, Amish ; Kothandaraman, Sathyanarayanan ; Dolanc, Matej ; Kunchithapatham, Bhavanya ; Bhowmick, Aditya GBL:TO-DOGBL US:TO-DOEMEA:TO-DOAMER:TO-DO" + }, + { + "title": "How to handle issues reported to DL", + "pageID": "294665000", + "pageLink": "/display/GMDM/How+to+handle+issues+reported+to+DL", + "content": "Create a ticket in JiraName: "DL: {{ email title }}"Epic: BAUFix Version(s): BAUUse below template:MDM Hub Issue Response Template.oftReplace all the red placeholders. 
Fill in the table where you can, based on original email.Respond to the email, requesting additional details if any of the table rows could not be filled in.Update the ticket:Copy/Paste the filled tableAdjust the priority based on the "Business impact details" row" + }, + { + "title": "Sample estimation for jira tickets", + "pageID": "415215566", + "pageLink": "/display/GMDM/Sample+estimation+for+jira+tickets", + "content": "1https://jira.COMPANY.com/browse/MR-8591(Disable keycloak by default)https://jira.COMPANY.com/browse/MR-8544(Investigate server git hooks in BitBucket)https://jira.COMPANY.com/browse/MR-8508(Lack of changelog when build from master)https://jira.COMPANY.com/browse/MR-8506(pvc-autoresizer deployment on PRODs)https://jira.COMPANY.com/browse/MR-8502(Dashboards adjustments)2https://jira.COMPANY.com/browse/MR-8649 (Move kong-mdm-external-oauth-plugin to mdm-utils repo)https://jira.COMPANY.com/browse/MR-8585 (Alert about not ready ScaledObject)https://jira.COMPANY.com/browse/MR-8539 (Reduce number of stored Cadvisor metrics and labels)https://jira.COMPANY.com/browse/MR-8531 (Old monitoring host decomissioning)https://jira.COMPANY.com/browse/MR-8375 (Quality Gateway: deploy publisher changes to PRODs)https://jira.COMPANY.com/browse/MR-8359 (Write article to describe Airflow upgrade procedure)https://jira.COMPANY.com/browse/MR-8166 (Fluentd - improve deployment time and downtime)https://jira.COMPANY.com/browse/MR-8128 (Turn on compression in reconciliation service)3https://jira.COMPANY.com/browse/MR-8543 (POC: Create local git hook with secrets verification)https://jira.COMPANY.com/browse/MR-8503 (Replace hardcoded rate intervals)https://jira.COMPANY.com/browse/MR-8370 (Investigate and plan fix for different version of monitoring CRDs)https://jira.COMPANY.com/browse/MR-8245 (Fluentbit: deploy NPRODs)https://jira.COMPANY.com/browse/MR-7926 (Move jenkins agents containers definition to inbound-services repo)5https://jira.COMPANY.com/browse/MR-8334 
(Implement integration with Grafana)https://jira.COMPANY.com/browse/MR-7720 (Logstash - configuration creation and deployment)https://jira.COMPANY.com/browse/MR-7417 (Grafana dashboards backup process)https://jira.COMPANY.com/browse/MR-7075 (POC: Store transaction logs for 6 months)8https://jira.COMPANY.com/browse/MR-8258 (Implement integration with Kibana)https://jira.COMPANY.com/browse/MR-6285 (Prepare Kafka upgrade plan to version 3.3.2)https://jira.COMPANY.com/browse/MR-5981 (Process analysis)https://jira.COMPANY.com/browse/MR-5694 (Implement Reltio mock)https://jira.COMPANY.com/browse/MR-5835 (Mongo backup process: implement backup process)" + }, + { + "title": "FAQ - Frequently Asked Questions", + "pageID": "415217275", + "pageLink": "/display/GMDM/FAQ+-+Frequently+Asked+Questions", + "content": "" + }, + { + "title": "API", + "pageID": "415217277", + "pageLink": "/display/GMDM/API", + "content": "Is there an MDM Hub API Documentation?Of course - it is available for each component:Manager/API Router: https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-configBatch Service: https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-prod/swagger-ui/index.html?configUrl=/api-batch-spec-emea-prod/v3/api-docs/swagger-configDCR Service: https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-prod/swagger-ui/index.htmlWhat is the difference between /api-emea-prod and /api-gw-emea-prod API endpoints?Both of these endpoints are leading to different API Components:/api-emea-prod is the API Router endpoint/api-gw-emea-prod is the Manager endpointBoth of these Components' APIs can be used in similar way. 
The main difference is:API Router allows routing DCR Requests to the DCR component: /api-emea-prod/dcr endpoint leads to the DCR Service API.API Router allows routing HCP/HCO Search requests to other Global MDM tenants, based on the search query filter's Country parameter.Example 1: We are trying to find HCPs named "John" in the US market. We can only use the EMEA HUB API:Sending an HTTP request:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-emea-prod/entities?filter=equals(type, 'configuration/entityTypes/HCP') and equals(attributes.Country, 'US') and equals(attributes.FirstName, 'John')returns nothing, because we are using the /api-gw-emea-prod/* endpoint - the Manager. It is connected directly to the EMEA PROD Reltio, which does not contain the US market.Sending an HTTP request:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-emea-prod/entities?filter=equals(type, 'configuration/entityTypes/HCP') and equals(attributes.Country, 'US') and equals(attributes.FirstName, 'John')routes the search to the GBLUS PROD Reltio, and returns results from there.Example 2: We are trying to find HCPs named "John" in the US, GB, IE and AU markets. We can only use the EMEA HUB API:Sending an HTTP request:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-emea-prod/entities?filter=equals(type, 'configuration/entityTypes/HCP') and in(attributes.Country, 'US,GB,IE,AU') and equals(attributes.FirstName, 'John')searches for American, British, Irish or Australian HCPs in the EMEA PROD Reltio. 
Only Ireland is available in this tenant, so it returns results, but only limited to this market.Sending an HTTP request:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-emea-prod/entities?filter=equals(type, 'configuration/entityTypes/HCP') and in(attributes.Country, 'US,GB,IE,AU') and equals(attributes.FirstName, 'John')splits the search into three separate searches:- search for American HCPs in the GBLUS PROD Reltio- search for British or Irish HCPs in the EMEA PROD Reltio- search for Australian HCPs in the APAC PROD Reltioand returns aggregated results.What is the difference between /api-emea-prod and /ext-api-emea-prod API endpoints?These endpoints use different Authentication methods:when using /api-emea-prod you are using API Key authentication. Your requests must contain the apikey header with the secret that you received from the Hub Support Team.when using /ext-api-emea-prod you are using OAuth2 authentication. You must fetch your token from the COMPANY PingFederate and send it in your request's Authorization: Bearer header.It is recommended that all API Users use OAuth2 and the /ext-api-emea-prod endpoint, leaving Key Auth for support and debugging purposes.When should I use a GET Entity operation, when should I use a SEARCH Entity operation?There are two main ways of fetching an HCP/HCO JSON using the HUB API:GET Entity:Sending GET /entities/{Reltio ID}It is the simplest and cheapest operation. Use it when you know the exact Reltio ID of the entity you want to find.SEARCH Entity:Sending GET /entities?filter=equals()...It allows finding one or more profiles by their attributes' values. 
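A filter string like the ones used in the search examples on this page can be assembled with a small helper. This is an illustrative sketch — only the filter grammar (equals/in, attributes.Country, entity type URIs) is taken from this page; the helper itself is hypothetical:

```python
# Build a Reltio-style search filter string from an entity type, a country
# (or list of countries), and attribute equality conditions.
def build_filter(entity_type, country=None, countries=None, **attribute_equals):
    parts = [f"equals(type, 'configuration/entityTypes/{entity_type}')"]
    if country:
        parts.append(f"equals(attributes.Country, '{country}')")
    elif countries:
        parts.append(f"in(attributes.Country, '{','.join(countries)}')")
    for name, value in attribute_equals.items():
        parts.append(f"equals(attributes.{name}, '{value}')")
    return " and ".join(parts)

# Reproduces Example 1 from this page:
print(build_filter("HCP", country="US", FirstName="John"))
# -> equals(type, 'configuration/entityTypes/HCP') and equals(attributes.Country, 'US') and equals(attributes.FirstName, 'John')

# Reproduces the multi-market filter from Example 2:
print(build_filter("HCP", countries=["US", "GB", "IE", "AU"], FirstName="John"))
```

The resulting string goes into the `filter` query parameter of the GET /entities request, as shown in the examples above.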
Use it when you do not know the exact Reltio ID or do not know how many results you expect.Read more about Search filters here: https://docs.reltio.com/en/explore/get-going-with-apis-and-rocs-utilities/reltio-rest-apis/model-apis/entities-api/get-entity/filtering-entitiesThe two requests below correspond to each other:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-emea-prod/entities/0TWPf9dGET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-emea-prod/entities?filter=equals(uri, 'entities/0TWPf9d')Although both are quick, Hub recommends only using the first one to find an entity by URI:GET Entity gets passed to Reltio as-is and results are returned straight away.SEARCH Entity gets analyzed on the Hub side first. If the search filter does not specify a country (a required parameter!), a full list of allowed countries is fetched from the API User's configuration and, as a result, the request may end up being sent to every single Reltio tenant.What is the difference between POST and PATCH /hcp, /hco, /entities operations?The key difference is:If we POST a record (crosswalk + attributes) to Hub, it is created in Reltio straight away:if the crosswalk already existed in Reltio, it gets overwrittenif the record already existed in Reltio, the attributes get completely overwritten:attribute values that did not exist in Reltio before, now are addedattributes that had different values in Reltio before, now are updatedattribute values that were present in Reltio before, but did not exist in the POSTed record, now are removedIf we PATCH a record (crosswalk + attributes) to Hub:we check whether this crosswalk already exists in Reltio. 
If it does not, we return an HTTP Bad Request error response.If the record already existed in Reltio, only the PATCHed subset of attributes is updated:attribute values that did not exist in Reltio before, now are addedattributes that had different values in Reltio before, now are updatedattribute values that were present in Reltio before, but did not exist in the PATCHed record, are left untouchedPOST should be used if we are sending the full JSON - crosswalk + all attributes.PATCH should be used if we are only sending incremental changes to a pre-existing profile." + }, + { + "title": "Merging Into Existing Entities", + "pageID": "462075948", + "pageLink": "/display/GMDM/Merging+Into+Existing+Entities", + "content": "Can I post a profile and merge it to one already existing in MDM?Yes, there are 3 ways you can do that:Merge-On-The-FlyContributor MergeManual MergeMerge-On-The-Fly - DetailsMerge-on-the-fly is a Reltio mechanism using matchGroups configuration. MatchGroups contain lists of requirements that two entities must pass in order to be merged. There are two types of matchGroups: "suspect" and "automatic". 
Suspects merely display as potential matches in Reltio UI, but Automatic groups trigger automatic merges of the objects.Example of an HCP automatic matchGroup from Reltio's configuration (EMEA PROD):\n {\n "uri": "configuration/entityTypes/HCP/matchGroups/ExctONEKEYID",\n "label": "(iii) Auto Rule - Exact Source Unique Identifier(ReferBack ID)",\n "type": "automatic",\n "useOvOnly": "true",\n "rule": {\n "and": {\n "exact": [\n "configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID",\n "configuration/entityTypes/HCP/attributes/Country"\n ],\n "in": [\n {\n "values": [\n "OneKey ID"\n ],\n "uri": "configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type"\n },\n {\n "values": [\n "ONEKEY"\n ],\n "uri": "configuration/entityTypes/HCP/attributes/OriginalSourceName"\n },\n {\n "values": [\n "Yes"\n ],\n "uri": "configuration/entityTypes/HCP/attributes/Identifiers/attributes/Trust"\n }\n ]\n }\n },\n "scoreStandalone": 100,\n "scoreIncremental": 0\n \nAbove example merges two entities having same Country attribute and same Identifier of type "OneKey ID". Identifier must have the Trusted flag and the OriginalSourceName must be "ONEKEY".When posting a record to MDM, matchGroups are evaluated. If an automatic matchGroup is matched, Reltio will perform a Merge-On-The-Fly, adding the posted crosswalk to an existing profile.Contributor Merge - DetailsWhen posting an object to Reltio, we can use its Crosswalk contributorProvider/dataProvider mechanism to bind posted crosswalk to an existing one.If we know that a crosswalk exists in MDM, we can add it to the crosswalks array with contributorProvider=true and dataProvider=false flags. Crosswalk marked like that serves as an indicator of an object to bind to.The other crosswalk must have the flags set the other way around: contributorProvider=false and dataProvider=true. 
This is the crosswalk that will de facto provide the attributes and be considered for the Hub's ingestion rules.Example - we are sending data with an MAPP crosswalk and binding that crosswalk to the existing ONEKEY crosswalk:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n "contributorProvider": false,\n "dataProvider": true\n },\n {\n "type": "configuration/sources/ONEKEY",\n "value": "WESR04566503",\n "contributorProvider": true,\n "dataProvider": false\n }\n ]\n }\n}\nEvery MDM record also has a crosswalk of type "Reltio" and value equal to Reltio ID. We can use that to bind our record to the entity:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n "contributorProvider": false,\n "dataProvider": true\n },\n {\n "type": "configuration/sources/Reltio",\n "value": "00TnuTu",\n "contributorProvider": true,\n "dataProvider": false\n }\n ]\n }\n}\nThis approach has a downside: crosswalks are bound, so they cannot be unmerged later on.Manual Merge - DetailsLast approach is simply creating a record in Reltio and straight away merging it with another.Let's use the previous example. 
First, we are simply posting the MAPP data:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147"\n }\n ]\n }\n}\nResponse:\n{\n "uri": "entities/0zu5sHM",\n "status": "created",\n "errorCode": null,\n "errorMessage": null,\n "COMPANYGlobalCustomerID": "04-131155084",\n "crosswalk": {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n "updateDate": 1728043082037,\n "deleteDate": ""\n }\n}\nWe can now use the URI from the response to merge the new record into the existing one:\nPOST /entities/0zu5sHM/_merge?uri=00TnuTu\n" + }, + { + "title": "Quality rules", + "pageID": "164470090", + "pageLink": "/display/GMDM/Quality+rules", + "content": "The quality engine is responsible for preprocessing an Entity when a specific precondition is met. The engine is started in the following cases: a REST operation (POST/PATCH) on the /hco endpoint on MDM Manager; a REST operation (POST/PATCH) on the /hcp endpoint on MDM Manager. When the validationOn parameter is set to true, the first step in HCP/HCO request processing is quality engine validation. The MDM Manager configuration should contain the following quality rules: hcpQualityRulesConfigs, hcoQualityRulesConfigs, hcpAffiliatedHCOsQualityRulesConfigs. These properties accept a list of yaml files. Each file has to be added in the environment repository in /config_files//mdm_manager/config/.*quality-rules.yaml. Then each of these files has to be added to these variables in inventory //group_vars/gw-services/mdm_manager.yml. 
For HCP request processing, files are loaded in the following order: hcpQualityRulesConfigs, hcpAffiliatedHCOsQualityRulesConfigs. For HCO request processing, files are loaded only from the following configuration: hcoQualityRulesConfigs. It is good practice to divide files into common logic and country-specific logic. For example, HCP Quality Rules file names should have the following structure: hcp/hcp/affiliatedhco | common/country-* | quality-rules.yaml, e.g. hcp-common-quality-rules.yaml, hcp-country-china-quality-rules.yaml. A quality rules yaml file is a set of rules which will be applied on an Entity. Each rule should have the following yaml structure: preconditions: match – the condition is met when the attribute matches the pattern or string value provided in the values list, e.g.; source – the condition is met when the crosswalk type ends with the values provided in the list, e.g.; default – the (empty)/default value for a precondition is "True". The preconditions section in the yaml file is not required. check: mandatory – this type of check evaluates whether the attribute is mandatory; when the check is correctly evaluated, the action will be performed, e.g.; mandatoryGroup – this check passes when all attributes provided in the list are not empty, e.g.; mandatoryArray – this check passes when the array provided in the list contains at least the minimum number of values, e.g. action: When the precondition and check are properly evaluated, a specific action can be invoked on entity attributes. clean – this action replaces attribute values which match the specific pattern with the value from the replacement parameter, e.g.; reject – this action rejects the entity when the precondition is met, e.g.; remove – based on the mandatoryGroup attributes list, this action removes these attributes from the entity, e.g.; set – this action sets the value provided in the parameter on the specific attribute, e.g.; modify – this action sets the value on the specific attribute based on attributes in the entity. 
To reference an entity's attributes, use curly braces {}. This rule adds a country prefix for each element in the specialties array, e.g. chineseNamesToEnglish – this action translates the attribute from the source (Chinese) to the target attribute (English), e.g. addressDigest – this action computes an MD5 digest based on Address attributes and creates a Crosswalk for the MD5 digest, e.g. autofillSourceName - this action adds SourceName to the given attribute if it does not exist: action: type: autofillSourceName attribute: Addresses. The logic of the quality engine rule check is as follows: the precondition is checked (if the precondition section is not defined, the default value is True); then the check is evaluated on the specified Entity (if the check section is not defined, by default the action will be executed without check evaluation); if the check returns attributes to process, the action is executed. Quality rules DOC: " + }, + { + "title": "Relation replacer", + "pageID": "164470095", + "pageLink": "/display/GMDM/Relation+replacer", + "content": "After the getRelation operation is invoked, the "Relation Replacer" feature can be activated on the returned relation entity object. When an entity is merged, Reltio sometimes does not replace the objectUri id with the new updated value. This process detects such a situation and replaces objectUri with the correct URI from the crosswalk. The relation replacer process operates under the following conditions: Relation replacer checks the EndObject and StartObject sections. When objectUri is different from each entity id in the crosswalks section, objectUri is replaced with the entity id from the crosswalks. When crosswalks contain multiple entries and the crosswalks list contains different entity uris, the relation replacer process ends with the following warning: "Object has more than one possible uri to replace" – it is not possible to decide which entity should be pointed to as StartObject or EndObject after the merge." 
+ }, + { + "title": "SMTP server", + "pageID": "387170360", + "pageLink": "/display/GMDM/SMTP+server", + "content": "Access to the SMTP server is granted for each region separately: AMER - Destination Host: amersmtp.COMPANY.com, Destination SMTP Port: 25, Authentication: NONE; EMEA - Destination Host: emeasmtp.COMPANY.com, Destination SMTP Port: 25, Authentication: NONE; APAC - Destination Host: apacsmtp.COMPANY.com, Destination SMTP Port: 25, Authentication: NONE. To request access to the SMTP server, you need to fill in the SMTP relay registration form through the http://ecmi.COMPANY.com portal." + }, + { + "title": "Airflow", + "pageID": "218432163", + "pageLink": "/display/GMDM/Airflow", + "content": "" + }, + { + "title": "Overview", + "pageID": "218432165", + "pageLink": "/display/GMDM/Overview", + "content": "ConfigurationAirflow is deployed on a kubernetes cluster using the official airflow helm chart: Github repository, Documentation, Airflow Dockerfile. Main airflow chart adjustments (creating pvc's, k8s jobs, etc.) are located in the components repository. Environment-specific configuration is located in the cluster configuration repository. DeploymentLocal deploymentAirflow can be easily deployed on a local kubernetes cluster for testing purposes. All you have to do is: If deployment is performed on a windows machine, please make sure that install.sh, encrypt.sh, decrypt.sh and .config files have unix line endings. Otherwise it will cause deployment errors. Edit the .config file to enable airflow deployment (and any other component you want). To enable a component it needs to have an assigned value greater than 0\nenable_airflow=1\nRun the ./install.sh file located in the main helm directory\n./install.sh\nEnvironment deploymentEnvironment deployment should be performed with great care. If deployment is performed on a windows machine, please make sure that install.sh, encrypt.sh, decrypt.sh and .config files have unix line endings. 
Otherwise it will cause deployment errors. Environment deployment can be performed after connecting the local machine to the remote kubernetes cluster. Prepare the airflow configuration in the cluster env repository. Adjust the .config file to update airflow (and any other service you want)\nenable_airflow=1\nRun the ./install.sh script to update the kubernetes cluster. Check if all airflow pods are working correctly. Helm chart configurationYou can find the available configuration described in the values.yaml file in the airflow github repository. Helm chart adjustmentsIn addition to the base airflow kubernetes resources, the following are created: Kubernetes job used to create additional users; Persistent volume claim for airflow dags data (for each prod/nonprod tenant); Secrets from .Values.secrets; Webserver ingress. Definitions: helm templates. Dags deploymentDags are deployed using the ansible playbook: install_mdmgw_airflow_services_k8s.yml. The playbook uses the kubectl command to work with airflow pods. You can run this playbook locally: To modify the list of dags that should be deployed during the playbook run, adjust the airflow_components list, e.g.\nairflow_components:\n - lookup_values_export_to_s3\nRun the playbook (adjust the environment), e.g.\nansible-playbook install_mdmgw_airflow_services.yml -i inventory/emea_dev/inventory\nOr with the jenkins job:https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/" + }, + { + "title": "Airflow DAGs", + "pageID": "164470169", + "pageLink": "/display/GMDM/Airflow+DAGs", + "content": "" + }, + { + "title": "●●●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1589274]", + "pageID": "310943460", + "pageLink": "/pages/viewpage.action?pageId=310943460", + "content": "DescriptionDag used to prepare data from the FLEX(US) tenant to be loaded into the GBLUS tenant. The S3 kafka connector on the FLEX environment uploads files every day to an s3 bucket as multiple small files. This dag takes those multiple files and concatenates them into one. 
The ETL team downloads this concatenated file from the s3 bucket and uploads it into the GBLUS tenant via the batch service.Examplehttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=concat_s3_files_gblus_prod" + }, + { + "title": "active_hcp_ids_report", + "pageID": "310939877", + "pageLink": "/display/GMDM/active_hcp_ids_report", + "content": "DescriptionGenerates a report of active hcp's from defined countries.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=active_hcp_ids_report_emea_prodStepsCreate a mongo collection from a query on the entity_history collection; Export the collection to excel format; Export the report to an s3 directory" + }, + { + "title": "China reports", + "pageID": "310939879", + "pageLink": "/display/GMDM/China+reports", + "content": "DescriptionSet of dags that produce china reports on the gbl environment that are later sent via email: Single reports are generated by executing the defined queries on mongo, then extracts are published on s3. Then the main dags download the exports from s3 and send an email with all reports.Main dag example:Report generating dag example:Dags listDags executed every day:china_generate_reports_gbl_prod - main dag that triggers the restchina_affiliation_status_report_gbl_prodchina_dcr_statistics_report_gbl_prodchina_hcp_by_source_report_gbl_prodchina_import_and_gen_dcr_statistics_report_gbl_prodchina_import_and_gen_merge_report_gbl_prodchina_merge_report_gbl_prodDags executed weekly:china_monthly_generate_reports_gbl_prod - main dag that triggers the rest china_monthly_hcp_by_channel_report_gbl_prodchina_monthly_hcp_by_city_type_report_gbl_prodchina_monthly_hcp_by_department_report_gbl_prodchina_monthly_hcp_by_gender_report_gbl_prodchina_monthly_hcp_by_hospital_class_report_gbl_prodchina_monthly_hcp_by_province_report_gbl_prodchina_monthly_hcp_by_source_report_gbl_prodchina_monthly_hcp_by_SubTypeCode_report_gbl_prodchina_total_entities_report_gbl_prod" + }, + { + "title": "clear_batch_service_cache", + "pageID": "333156979", + 
"pageLink": "/display/GMDM/clear_batch_service_cache", + "content": "DescriptionThis dag is used to clear batch-service cache(mongo batchEntityProcessStatus collection). It deletes all records specified in csv file for specified batchName.To clear cache batch-service batchController/{batch_name}/_clearCache endpoint is used.Dag used by mdmhub hub-ui.Input parameters:batchNamefileName\n{\n "fileName": "inputFile.csv",\n "batchName": "testBatchTAGS"\n}\nMain stepsDownload input file from s3 directorySplits the file so that is has maximum of $partSize recordsExecutes request to batch-service batchController/{batch_name}/_clearCacheMove input file to s3 archive directoryDeletes temporary workspace from pvcprint report with information how many records have been deleted \n{'removedRecords': 1}\n\nExamplehttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com/graph?dag_id=clear_batch_service_cache_amer_dev&root=" + }, + { + "title": "distribute_nucleus_extract", + "pageID": "310939886", + "pageLink": "/display/GMDM/distribute_nucleus_extract", + "content": "DEPRECATEDDescriptionDistributes extracts that are sent by nucleus to s3 directory between multiple directories for the respective countries that are later used by inc_batch_* dagsInput and output directories are configured in dags configuration file:Dag:https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=distribute_nucleus_extract_gbl_prod&root=" + }, + { + "title": "export_merges_from_reltio_to_s3", + "pageID": "310939888", + "pageLink": "/display/GMDM/export_merges_from_reltio_to_s3", + "content": "DescriptionDag used to schedule Reltio merges export, adjust file format and then uload file to s3 snowflake directory.Steps:Clearing workspace after previous runCalculating time range for incremental loads. For full exports(eg. export_merges_from_reltio_to_s3_full_emea_prod) this step sets start and end date as None. This way full extract is produced. 
For incremental loads, start and end dates are calculated using the last_days_count variable; Scheduling the reltio export; Waiting for the reltio export file (s3 sensor); Postprocessing the file; Uploading the file to the snowflake directory.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=export_merges_from_reltio_to_s3_full_emea_prod" + }, + { + "title": "get_rx_audit_files", + "pageID": "310943418", + "pageLink": "/display/GMDM/get_rx_audit_files", + "content": "DescriptionDownloads rx_audit files from: SFTP server (external); s3 directory (internal - constant). Files are then uploaded to the defined s3 directory that is later used by the inc_batch_rx_audit dag.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=inc_batch_rx_audit_gbl_prodUseful linksRX_AUDIT" + }, + { + "title": "historical_inactive", + "pageID": "310943421", + "pageLink": "/display/GMDM/historical_inactive", + "content": "DescriptionDag used to implement the history inactive process.Steps:Download a csv file with crosswalks of entities to recreate; Recreate the entities and upload them to an s3 directory as a json file; Trigger the snowflake stored procedure.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=historical_inactive_emea_prodReferenceSnowflake: History Inactive" + }, + { + "title": "hldcr_reconciliation", + "pageID": "310943423", + "pageLink": "/display/GMDM/hldcr_reconciliation", + "content": "DescriptionThe HL DCR flow occasionally blocked some VRs' statuses from being sent to PforceRx in an outbound file, because the Hub had not received an event from Reltio informing about the Change Request resolution. The exact event expected is CHANGE_REQUEST_CHANGED. To prevent the above, the HLDCR Reconciliation process runs regularly, doing the following steps: Query the MongoDB store (Collection DCRRequests) for VRs in CREATED status. 
Export the result as a list. For each VR from the list, generate a CHANGE_REQUEST_CHANGED event and post it to Kafka. Further processing is as usual - the DCR Service enriches the event with the current changeRequest state. If the changeRequest has been resolved, it updates the status in MongoDB.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hldcr_reconciliation_gbl_prod" + }, + { + "title": "HUB Reconciliation process", + "pageID": "164470182", + "pageLink": "/display/GMDM/HUB+Reconciliation+process", + "content": "The reconciliation process was created to synchronize Reltio with HUB. Reltio sometimes does not generate events; these events are therefore not consumed by HUB from the SQS queue, and the HUB platform becomes out of sync with Reltio data. External Clients do not receive the required changes, which causes multiple systems to become inconsistent. This process was designed to solve the problem. The fully automated reconciliation process generates these missing events. These events are then sent to the inbound Kafka topic; the HUB platform processes these events, updates the mongo collection and routes the events to the external Clients' topics.AirflowThe following diagram presents the reconciliation process steps:This directed acyclic diagram presents the steps that are taken to compare Reltio and HUB and produce the missing events. The diagram is divided into the following sections:Initialization and Reltio Data preparation - in this section the process invokes the Reltio export and uploads the full export to mongo.clean_dirs_before_init, init_dirs, timestamp – these 3 tasks are responsible for the directory structure preparation required in the further steps and the timestamp capture required for the reconciliation process. Reltio and HUB data change over time and the export is made at a specific point in time. We need to ensure that during the comparison only entities that were changed before the Reltio Export are compared. 
This requirement guarantees that only correct events are generated and consistent data is compared. entities_export – the task invokes the Reltio Export API and triggers the export job in Reltio. sensor_s3_reltio_file – this task is an S3 bucket sensor. Because the Reltio export job is an asynchronous task running in the background, the file sensor checks the S3 location ‘hub_reconciliation//RELTIO/inbound/’ and waits for the export. When the success criteria are met, the process exits with success. The timeout for this job is set to 24 hours, the poke interval is set to 10 minutes. download_reltio_s3_file, unzip_reltio_export, mongo_import_json_array, generate_mongo_indexes – these 4 tasks are invoked after successful export generation. The zip is downloaded and extracted to a JSON file, then this file is uploaded to a mongo collection. The generate_mongo_indexes task is responsible for generating mongo indexes in the newly uploaded collection. The indexes are created to optimize performance. archive_flex_s3_file_name – after a successful mongo import, the Reltio export is archived for future reference. HUB validation - Reltio ↔ HUB comparison - the main comparison and event generation logic is invoked in this SUB DAG. The details are described in the section below. Events generation - after data comparison, generated events are sent to the selected Kafka topic. Then standard event processing begins. The details are described in the HUB documentation. Please check the following documents to find more details: Entity change events processing (Reltio); Event filtering and routing rules; Processing events on client side. HUB validation - Reltio ↔ HUB comparisonThis directed acyclic diagram (SUB DAG) presents the steps that are taken to compare HUB and Reltio data in both directions. Because the Reltio data is already uploaded and the HUB (“entityHistory”) collection is always available, we can immediately start the comparison process. 
mongo_find_reltio_hub_differnces - this process compares Reltio data to HUB data. A Mongo aggregation pipeline matches the entities from the Reltio export to HUB profiles located in the mongo collection by entity URI (ID). All Reltio profiles that are not present in the HUB collection are marked as missing. All attributes in Reltio are compared to HUB profile attributes - when a difference is found, it means that the profile is out of sync and a new event should be generated. Based on these changes the HCP_CHANGED or HCO_CHANGED events are generated. When the profile is missing, the HCP_CREATED or HCO_CREATED events are generated. mongo_find_hub_reltio_differnces - this process compares HUB entities to Reltio data. The process is designed to find only missing entities in Reltio; based on these changes the HCP_REMOVED or HCO_REMOVED events are generated. A Mongo aggregation pipeline matches the entities from the HUB mongo collection to Reltio profiles by entity URI (ID). All HUB profiles that are not present in the Reltio export data are marked as missing for future reference. mongo_generate_hub_events_differences - this task is related to the automated reconciliation process. The full process is described in this paragraph.Configuration and schedulingThe process can be started in Airflow on demand. The configuration for this process is stored in the MDM Environment configuration repository. The following section is responsible for the HUB Reconciliation process activation on the selected environment:\nactive_dags:\n gbl_dev:\n - hub_reconciliation.py\nThe file is available in "inventory/scheduler/group_vars/all/all.yml". To activate the Reconciliation process on a new environment, the new environment should be added to the "active_dags" map. Then the "ansible-playbook install_airflow_dags.yml" needs to be invoked. After this, the new process is ready for use in Airflow. 
Reconciliation processTo synchronize Reltio with HUB, and therefore synchronize profiles in Reltio with external Clients, a fully automated process is started after the full HUB<->Reltio comparison; this is the "mongo_generate_hub_events_differences" task. The automated reconciliation process generates events. These events are then sent to the inbound Kafka topic; the HUB platform processes these events, updates the mongo collection and routes the events to the flex topic. The following diagram presents the reconciliation steps:The automated reconciliation process generates events:The following events are generated during this process:HCO_CHANGED / HCP_CHANGED - in this case, Reltio has not generated an ENTITY_CHANGED event for the entity. Based on the Reltio to HUB comparison, when the comparison result contains ATTRIBUTE_VALUE_MISSING or ATTRIBUTE_VALUE_DIFFERENT for the entity, the event is generated. The events are aggregated based on URI, so only one change event for the selected entity is generated. HCO_CREATED / HCP_CREATED - in this case, Reltio has not generated an ENTITY_CREATED event for the entity. Based on the Reltio to HUB comparison, when the comparison result contains an ENTITY_MISSING difference, the create event is generated. It means that Reltio contains the entity and this entity is missing from the HUB mongo collection, so there is a need to generate and send the missing CREATED events. HCO_REMOVED - in this case, Reltio has not generated an ENTITY_REMOVED event for the entity. Based on the HUB to Reltio comparison, when the comparison result contains an ENTITY_MISSING difference, the delete event is generated. 
It means that the HUB cache contains an additional entity that was deactivated/removed from the Reltio system, so there is a need to generate and send the missing REMOVED events. HCO_MERGED and HCO_LOST_MERGE - in this case, Reltio has not generated an ENTITY_MERGED event for the winner entity and an ENTITY_LOST_MERGE event for the loser entity. These events are generated based on the extracted Reltio data and the HUB mongo cache. Entities from the source Reltio data are matched by crosswalk value with the EntityHistory Mongo data. When the Reltio entity URI does not match the Mongo Entity URI, and Reltio does not contain the entity present in Mongo that was matched by crosswalk value, it means that this entity was merged in Reltio. Then MERGED and LOST_MERGE events are generated for these entities. 2. Next, Event Publisher receives events from the internal Kafka topic and calls the MDM Gateway API to retrieve the latest state of the Entity from Reltio. The Entity data in JSON is added to the event to form a full event. For REMOVED events, where Entity data is by definition not available in Reltio at the time of the event, Event Publisher fetches the cached Entity data from the Mongo database instead. 3. Event Publisher extracts the metadata from the Entity (type, country of origin, source system). 4. Entity data is stored in the MongoDB database for later use. 5. For every Reltio event, two Publishing Hub events are created: one in Simple mode and one in Event Sourcing (full) mode. Based on the metadata, and the Routing Rules provided as a part of the application configuration, the list of the target destinations for those events is created. The event is sent to all matched destinations to the target topic (-out-full-) when the event type is full or (-out-simple-) when the event type is simple. 
" + }, + { + "title": "HUB Reconciliation Process V2", + "pageID": "164470184", + "pageLink": "/display/GMDM/HUB+Reconciliation+Process+V2", + "content": "Hub reconciliation process is starting from downloading reconciliation.properties file with following information:reconciliationType - reconciliation type - possible values: FULL_RECONCILIATION or PARTIAL_RECONCILIATION (since last run)eventType - event type - it is used in in generating events for kafka - possible values: FULL or CROSSWALK_ONLYreconcileEntities - if set to true entities will be reconciliatedreconcileRelations - if set to true relations will be reconciliatedreconcileMergeTree - if set to true mergeTree will be reconciliatedSets hub reconciliation properties in the processIf reconcileEntities is set to true that process for reconciliate entities is started Process gets last timestamp when entities was lately exported Entities export is triggered from Reltio - this step is done by groovy script Process is checking if export is finished by verifing if the file SUCCESS with manifest.json exists on S3 folder /us//inboud/hub/hub_reconciliation/entities/inbound/entities_export_ In this step process is setting timestamp for future reconciliation of entities - it is set in airflow variables this step is responsible for checking which entities has been changed and generate events for changed entitiesfirstly we get export file from S3 folder /us//inboud/hub/hub_reconciliation/entities/inbound/entities_export_we unzip the file in bash scriptfor the unzipped file we there are two optionsif we useChecksum than calculateChecksum groovy script is executed which calculates checksum for exported entities and generates ReconciliationEvent only with checksumif we don't useChecksum than ReconciliationEvent is generated with whole entityin the last step we send those generated events to specified kafka topics Events from topic will be processed by reconciliation serviceReconciliation service is checking basing on 
checksum change(s), whether a PublisherEvent should be generated: it compares the checksum, if it exists, from the ReconciliationEvent with the one that we have in the entityHistory table; it compares entity objects from the ReconciliationEvent with the one that we have in mongo in the entityHistory table if the checksum is absent - objects on both sides are normalized before the compare process; it compares SimpleCrosswalkOnlyEntity objects if the CROSSWALK_ONLY reconciliation event type is chosen - then the export folder on S3 is moved from inbound to the archive folder. 4. If reconcileRelations is set to true, the process to reconcile relations is started: The process gets the last timestamp when relations were last exported. The relations export is triggered from Reltio - this step is done by a groovy script. The process checks if the export is finished by verifying whether the SUCCESS file with manifest.json exists in the S3 folder /us//inboud/hub/hub_reconciliation/relations/inbound/relations_export_ In this step the process sets the timestamp for future reconciliation of relations - it is set in airflow variables. This step is responsible for checking which relations have been changed and generating events for the changed relations: first we get the export file from the S3 folder /us//inboud/hub/hub_reconciliation/relations/inbound/relations_export_ then we unzip the file in a bash script. For the unzipped file there are two options: if we useChecksum, then the calculateChecksum groovy script is executed, which calculates a checksum for the exported relations and generates a ReconciliationEvent with the checksum only; if we don't useChecksum, then the ReconciliationEvent is generated with the whole relation. In the last step we send those generated events to the specified kafka topic. Events from the topic will be processed by the reconciliation service. The reconciliation service checks, based on the checksum change/object changes, whether a PublisherEvent should be generated: it compares the checksum, if it exists, from the ReconciliationEvent with the one that we have in mongo in the entityRelation table; it compares relation objects 
from the ReconciliationEvent with the one that we have in mongo in the entityRelation table if the checksum is absent - objects on both sides are normalized before the compare process; it compares SimpleCrosswalkOnlyRelation objects if the CROSSWALK_ONLY reconciliation event type is chosen - then the export folder on S3 is moved from inbound to the archive folder. 5. If reconcileMergeTree is set to true, the process to reconcile the merge tree is started: The process gets the last timestamp when the merge tree was last exported. The merge tree export is triggered from Reltio - this step is done by a groovy script. The process checks if the export is finished by verifying whether the SUCCESS file with manifest.json exists in the S3 folder /us//inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_ In this step the process sets the timestamp for future reconciliation of the merge tree - it is set in airflow variables. This step is responsible for checking which merge tree objects have been changed and generating events for the changed merge tree objects: first we get the export file from the S3 folder /us//inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_ then we unzip the file in a bash script. For the unzipped file there are two options: if we useChecksum, then the calculateChecksum groovy script is executed, which creates a ReconciliationMergeEvent with the uri of the main object and the list of the losers' uris; if we don't useChecksum, then the ReconciliationEvent is generated with the whole merge tree object. In the last step we send those generated events to the specified kafka topic. Events from the topic will be processed by the reconciliation service. The reconciliation service sends merge and lost_merge PublisherEvents for the winner and every loser - then the export folder on S3 is moved from inbound to the archive folder." + }, + { + "title": "import_merges_from_reltio", + "pageID": "310943426", + "pageLink": "/display/GMDM/import_merges_from_reltio", + "content": "DescriptionSchedules the reltio merges export and imports it into mongo. This dag is scheduled by china_import_and_gen_merge_report and the data 
imported into mongo are used by china_merge_report to generate the china report files. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=import_merges_from_reltio_gbl_prod&root=&num_runs=25&base_date=2023-04-06T00%3A05%3A20Z"
  },
  {
    "title": "import_pfdcr_from_reltio",
    "pageID": "310943428",
    "pageLink": "/display/GMDM/import_pfdcr_from_reltio",
    "content": "Description: Schedules the reltio entities export, downloads it from s3, makes small changes in the export and imports it into mongo. This dag is scheduled by china_import_and_gen_dcr_statistics_report and the data imported into mongo is used by china_dcr_statistics_report to generate the china report files. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=import_pfdcr_from_reltio_gbl_prod"
  },
  {
    "title": "inc_batch",
    "pageID": "310943432",
    "pageLink": "/display/GMDM/inc_batch",
    "content": "Description: Process used to load idl files stored on s3 into Reltio. This dag is based on the mdmhub inc_batch_channel component. Steps: Create a batch instance in mongo using the batch-service /batchController endpoint; Download idl files from the s3 directory; Extract compressed archives; Preprocess files (e.g. dos2unix); Run the inc_batch_channel component; Archive input files and reports. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=inc_batch_sap_gbl_prod"
  },
  {
    "title": "Initial events generation process",
    "pageID": "164470083",
    "pageLink": "/display/GMDM/Initial+events+generation+process",
    "content": "Newly connected clients don't have knowledge about entities which were created in MDM before their connection. Due to this, the initial event loading process was designed. The process loads events about already existing entities to the client's kafka topic. 
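A minimal sketch of this initial load idea (plain Python; the entity records and field names are simplified stand-ins for the real Mongo query and Kafka producers used by the DAG described below): entities modified after the last run marker get one event per configured target topic.

```python
# Sketch: generate initial events for entities modified after the last
# run marker. Dicts stand in for Mongo documents and Kafka producers.
def generate_initial_events(entities, last_timestamp, target_topics):
    events = []
    for entity in entities:
        # only entities changed since the previous run are emitted
        if entity['lastModificationDate'] > last_timestamp:
            for target in target_topics:
                events.append({
                    'topic': target['topic'],
                    'eventKind': target['eventKind'],
                    'entityId': entity['id'],
                })
    return events

entities = [
    {'id': 'e1', 'lastModificationDate': 100},
    {'id': 'e2', 'lastModificationDate': 5},
]
topics = [
    {'topic': 'dev-out-simple-int_test', 'eventKind': 'simple'},
    {'topic': 'dev-out-full-int_test', 'eventKind': 'full'},
]
events = generate_initial_events(entities, 50, topics)
# only e1 is newer than the marker, one event per configured topic
```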
Thanks to this, the new client is synced with MDM. Airflow: The process was implemented as an Airflow DAG. Process steps: prepareWorkingDir - prepares the directory structure required for the process; getLastTimestamp - gets the time marker of the last process execution. This marker is used to determine which events have been sent by the previously running process. If the process is run for the first time, the marker always has the value 0; getTimestamp - gets the current time marker; generatesEvents - generates the events file based on the current Mongo state. Data used to prepare event messages is selected based on the condition entity.lastModificationDate > lastTimestamp; divEventsByEventKind - divides the events file based on event kind: simple or full; loadFullEvents* - a group of steps that populate full events to a specific topic. The number of these steps is based on the number of topics specified in the configuration; loadSimpleEvents* - similar to the above, these steps populate simple events to a specific topic. The number of these steps is based on the number of topics specified in the configuration; setLastTimestamp - saves the current time marker. It will be used in the next process execution as the last time marker. Configuration and scheduling: The process can be started on demand. The process's configuration is stored in the MDM Environment configuration repository. To enable the process on a specific environment: It should match the template "generate_events_for_[client name]" and be added to the list "airflow_components" which is defined in the "inventory/[env name]/group_vars/gw-airflow-services/all.yml" file; Create a configuration file in "inventory/[env name]/group_vars/gw-airflow-services/generate_events_for_[client name].yml" with content as below: The process configuration\n---\n\ngenerate_events_for_test_name: "generate_events_for_test" #Process name. 
It has to be the same as in the "airflow_components" list available in all.yml\ngenerate_events_for_test_base_dir: "{{ install_base_dir }}/{{ generate_events_for_test_name }}"\ngenerate_events_for_test:\n dag: #Airflow's DAG configuration section\n template: "generate_events.py" #do not change\n variables:\n DOCKER_URL: "tcp://euw1z1dl039.COMPANY.com:2376" #do not change\n dataDir: "{{ generate_events_for_test_base_dir }}/data" #do not change\n configDir: "{{ generate_events_for_test_base_dir }}/config" #do not change\n logDir: "{{ generate_events_for_test_base_dir }}/log" #do not change\n tmpDir: "{{ generate_events_for_test_base_dir }}/tmp" #do not change\n user:\n id: "7000" #do not change\n name: "mdm" #do not change\n groupId: "1002" #do not change\n groupName: "docker" #do not change\n mongo: #mongo configuration properties\n host: "localhost"\n port: "27017"\n user: "mdm_gw"\n password: "{{ secret_generate_events_for_test.dag.variables.mongo.password }}" #password is taken from the secret.yml file\n authDB: "reltio"\n kafka: #kafka configuration properties\n username: "hub"\n password: "{{ secret_generate_events_for_test.dag.variables.kafka.password }}" #password is taken from the secret.yml file\n servers: "10.192.71.136:9094"\n properties:\n "security.protocol": SASL_SSL\n "sasl.mechanism": PLAIN\n "ssl.truststore.location": /opt/kafka_utils/config/kafka_truststore.jks\n "ssl.truststore.password": "{{ secret_generate_events_for_test.dag.variables.kafka.properties.sslTruststorePassword }}" #password is taken from the secret.yml file\n "ssl.endpoint.identification.algorithm": ""\n countries: #Events will be generated only for below countries\n - CR\n - BR\n targetTopics: #Target topics list. It is an array of pairs: topic name and event kind. 
Only the simple and full event kinds are allowed.\n - topic: dev-out-simple-int_test\n eventKind: simple\n - topic: dev-out-full-int_test\n eventKind: full\n\n...\nThen the playbook install_mdmgw_services.yml needs to be invoked to update the runtime configuration."
  },
  {
    "title": "lookup_values_export_to_s3",
    "pageID": "310943435",
    "pageLink": "/display/GMDM/lookup_values_export_to_s3",
    "content": "Description: Process used to extract lookup values from mongo and upload them to s3. The file from s3 is then pulled into snowflake. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=lookup_values_export_to_s3_gbl_prod"
  },
  {
    "title": "MAPP IDL Export process",
    "pageID": "164470173",
    "pageLink": "/display/GMDM/MAPP+IDL+Export+process",
    "content": "Description: Process used to generate an excel file with the entities export. The export is based on two mongo collections: lookupValues and entityHistory. The excel files are then uploaded into an s3 directory. The excel files are used in the MAPP Review process on the gbl_prod environment. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=mapp_idl_excel_template_gbl_prod"
  },
  {
    "title": "mapp_update_idl_export_config",
    "pageID": "310943437",
    "pageLink": "/display/GMDM/mapp_update_idl_export_config",
    "content": "Description: Process used to update the configuration of the mapp_idl_excel_template dags stored in mongo. The configuration is stored in the mappExportConfig collection and consists of configuration information and the crosswalk order for each country. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=mapp_update_idl_export_config_gbl_prod"
  },
  {
    "title": "merge_unmerge_entities",
    "pageID": "310943439",
    "pageLink": "/display/GMDM/merge_unmerge_entities",
    "content": "Description: This dag implements the batch merge & unmerge process. It downloads a file from s3 with the list of documents to merge or unmerge and then processes the documents. To process the documents, the batch-service is used. 
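The first stage of this dag can be sketched in a few lines of Python. This is an illustration only: the line format (operation,winner_uri,loser_uri) is hypothetical, the real input file layout is defined by the batch service.

```python
# Sketch: split an input file's records into merge and unmerge operations
# before sending them to the batch service. The CSV-like line format here
# is a hypothetical illustration, not the real file specification.
def parse_merge_file(lines):
    merges, unmerges = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        operation, winner, loser = line.split(',')
        record = {'winner': winner, 'loser': loser}
        if operation == 'merge':
            merges.append(record)
        elif operation == 'unmerge':
            unmerges.append(record)
        else:
            raise ValueError('unknown operation: ' + operation)
    return merges, unmerges

lines = [
    'merge,entities/a1,entities/b2',
    'unmerge,entities/c3,entities/d4',
]
merges, unmerges = parse_merge_file(lines)
```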
After the documents are processed, a report is generated and transferred to the s3 directory. Flow: Batch service batch creation; Downloading the source file from s3; Input file conversion to unix format; File processing: Records are sent to the batch service using the /bulkService endpoint. After all entities are sent, the Loading stage is closed and statistics are written to the stage statistics; Waiting for the batch to be completed: records sent to the batch service are then transferred to the manager internal topic and processed by the manager, which sends requests to Reltio. When all events are processed, the batch processing stage is closed, which causes the whole batch to be completed; The report is generated using the batchEntittyProcessStatus mongo collection and saved in a temporary report collection; The report is exported and saved in the s3 bucket together with the input file; The input directory is cleared; The tmp report mongo collection is dropped. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=merge_unmerge_entities_emea_prod"
  },
  {
    "title": "micro_bricks_reload",
    "pageID": "310943463",
    "pageLink": "/display/GMDM/micro_bricks_reload",
    "content": "Description: The dag extracts data from a snowflake table that contains microbricks exceptions. The data is then committed to a git repository from where it will be pulled by consul and loaded into the mdmhub components. If the microbricks mapping file has changed since the last dag run, then we'll wait for the mapping reload and copy events from the {{ env_name }}-internal-microbricks-changelog-events topic into {{ env_name }}-internal-microbricks-changelog-reload-events. Example: https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=micro_bricks_reload_amer_prod"
  },
  {
    "title": "move_ods_",
    "pageID": "310943441",
    "pageLink": "/pages/viewpage.action?pageId=310943441",
    "content": "Description: The dag copies files from external source s3 buckets and uploads them to our internal s3 bucket to the desired location. 
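The core of such a copy dag is a key translation from the source bucket layout to the target location. A minimal sketch under stated assumptions - the prefixes below are hypothetical examples, and the real dag copies between s3 buckets (e.g. with boto3) rather than rewriting strings:

```python
# Sketch: map a source object key from an external bucket to the target
# location in the internal bucket. Prefixes are hypothetical examples.
def target_key(source_key, source_prefix, target_prefix):
    if not source_key.startswith(source_prefix):
        raise ValueError('unexpected key: ' + source_key)
    # keep the file name / remainder, swap only the prefix
    return target_prefix + source_key[len(source_prefix):]

key = target_key(
    'exports/ods_eu/file1.csv.gz',
    source_prefix='exports/ods_eu/',
    target_prefix='inbound/ods_eu/',
)
```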
This data is later used in the inc_batch_* dags. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=move_ods_eu_export_gbl_prod"
  },
  {
    "title": "rdm_errors_report",
    "pageID": "310943445",
    "pageLink": "/display/GMDM/rdm_errors_report",
    "content": "DEPRECATED. Description: This dag generates a report with all rdm errors from the ErrorLogs collection and publishes it to an s3 bucket. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=rdm_errors_report_gbl_prod"
  },
  {
    "title": "reconcile_entities",
    "pageID": "337846202",
    "pageLink": "/display/GMDM/reconcile_entities",
    "content": "Details: Process allowing to export data from mongo based on a query and either generate a https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileEntities request for each package, or generate a flat file from the exported entities and push it to the Kafka reltio-events topic. Steps: Pull config from the request, e.g. {'entitiesQuery': {'country': {'$in': ['FR']}, 'sources': {'$in': ['ONEKEY']}}}; Drop the mongo collections used in the previous run; Generate the list of entities and/or relations to reconcile using the provided query; Trigger the /reconciliation/entities and/or /reconciliation/relations endpoint for all entities and relations from the list from the previous step. 
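The per-package request generation above boils down to chunking the exported uri list. A minimal sketch, assuming an arbitrary example package size (the real size is configuration-dependent):

```python
# Sketch: group entity uris into fixed-size packages, one reconciliation
# request per package. package_size=2 is an arbitrary example value.
def into_packages(uris, package_size):
    return [uris[i:i + package_size] for i in range(0, len(uris), package_size)]

uris = ['entities/%d' % n for n in range(5)]
packages = into_packages(uris, package_size=2)
# 5 uris with package_size=2 yield packages of 2, 2 and 1 uris
```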
This will cause a Reltio event to be generated and sent to Hub processing. Example: https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/tree?dag_id=reconcile_entities_emea_dev&root="
  },
  {
    "title": "reconciliation_ptrs",
    "pageID": "310943447",
    "pageLink": "/display/GMDM/reconciliation_ptrs",
    "content": "DEPRECATED. Details: Process allowing to reconcile events for the ptrs source. Logic: Reconciliation process. Steps: Download the input file with checksums from the s3 directory; Drop the mongo collections used in the previous run; Import the input file into the mongo reconciliation_ptrs collection and prepare the output collection reconciliationRecords_ptrs; Trigger the /resendLastEvent publisher endpoint to resend the event for each entity from the input file whose checksum differs. This will cause an event to be generated to the ptrs output topic. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=reconciliation_ptrs_emea_prod"
  },
  {
    "title": "reconciliation_snowflake",
    "pageID": "310943449",
    "pageLink": "/display/GMDM/reconciliation_snowflake",
    "content": "Details: Process allowing to reconcile events for the snowflake topic. Logic: Reconciliation process. Steps: Download the input file with entity checksums from the s3 directory; Drop the mongo collections used in the previous run; Import the input file into the mongo reconciliation_snowflake collection and prepare the output collection reconciliationRecords_snowflake; Trigger the /resendLastEvent publisher endpoint to resend the event for each entity from the input file whose checksum differs. 
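The "resend where the checksum differs" decision can be sketched as a comparison of two checksum maps; plain dicts stand in here for the input file and the mongo collection, and an entity missing on the stored side also counts as differing:

```python
# Sketch: decide which entities need their last event resent by comparing
# checksums from the input file with those stored in mongo. Plain dicts
# stand in for the file and the mongo collection.
def entities_to_resend(input_checksums, stored_checksums):
    return sorted(
        entity
        for entity, checksum in input_checksums.items()
        if stored_checksums.get(entity) != checksum
    )

input_checksums = {'e1': 'aaa', 'e2': 'bbb', 'e3': 'ccc'}
stored_checksums = {'e1': 'aaa', 'e2': 'xxx'}
resend = entities_to_resend(input_checksums, stored_checksums)
# e2 differs and e3 is missing on the stored side, so both are resent
```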
This will cause an event to be generated to the snowflake topic and consumed by the snowflake kafka connector. Example: https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=reconciliation_ptrs_emea_prod"
  },
  {
    "title": "Kubernetes",
    "pageID": "218693740",
    "pageLink": "/display/GMDM/Kubernetes",
    "content": ""
  },
  {
    "title": "Platform Overview",
    "pageID": "218452673",
    "pageLink": "/display/GMDM/Platform+Overview",
    "content": "In the latest physical architecture, MDM HUB services are deployed in Kubernetes clusters managed by the COMPANY Digital Kubernetes Service (PDKS). There are non-prod and prod clusters for each region: AMER, EMEA, APAC. Architecture: The picture below presents the layout of HUB services in a Kubernetes cluster managed by PDKS. Nodes: There are two groups of nodes: Static, stateful nodes that have Portworx storage configured, dedicated to running backend stateful services - Instance Type: r5.2xlarge, Node labels: mdmhub.COMPANY.com/node-type=static; Dynamic nodes - dedicated to stateless services that are dynamically scaled - Instance Type: m5.2xlarge, Node labels: mdmhub.COMPANY.com/node-type=dynamic. Storage: The Portworx storage appliance is used to manage the persistent volumes required by stateful components. Configuration: Default storage class: pwx-repl2-sc; Replication: 2. Operators: MDM HUB uses K8s operators to manage applications like: MongoDB - Mongo Community operator 0.6.2; Kafka - Strimzi 0.27.x; ElasticSearch - Elasticsearch operator 1.9.0; Prometheus - Prometheus operator 8.7.3. Monitoring: Clusters are monitored by a local Prometheus service integrated with the central Prometheus and Grafana services. For details go to the monitoring section. Logging: All logs from HUB components are sent to the Elastic service and can be discovered via the Kibana UI. For details go to the Kibana dashboard section. 
Backend components: MongoDB 4.2.6; Kafka 2.8.1; ElasticSearch 7.13.1; Prometheus 2.15.2. Scaling: TO BE Implementation: Kubernetes objects are implemented using helm - the package manager for Kubernetes. There are several modules that, connected together, make up the MDMHUB application: operators - delivers a set of operators used to manage the backend components of MDMHUB: Mongo operator, Kafka operator, Elasticsearch operator, Kong operator and Prometheus operator; consul - delivers the consul server instance, user management tools and git2consul - the tool used to synchronize the consul key-value registry with a git repository; airflow - deploys an instance of the Airflow server; eck - using the Elasticsearch operator creates the EFK stack - Kibana, Elasticsearch and Fluentd; kafka - installs the Kafka server; kafka-resources - installs Kafka topics, Kafka connector instances, managed users and ACLs; kong - using the Kong operators installs a Kong server; kong-resources - delivers the basic Kong configuration: users, plugins etc; mongo - installs the mongo server instance, configures users and their permissions; monitoring - installs the Prometheus server and the exporters used to monitor resources, components and endpoints; migration - a set of tools supporting the migration from the old (ec2 based) environments to the new Kubernetes infrastructure; mdmhub - delivers the MDMHUB components, their configuration and dependencies. All the above modules are stored in the application source code as part of the helm module. Configuration: The runtime configuration is stored in the mdm-hub-cluster-env repository. The configuration has the following structure: [region]/ - MDMHUB region eg: emea, amer, apac    nprod|prod/ -  cluster class. 
nprod or prod values are possible,        namespaces/ - logical spaces where MDMHUB components are deployed            monitoring/ - configuration of the prometheus stack                service-monitors/                values.yaml - namespace level variables            [region]-dev/ - specific configuration for the dev env eg.: kafka topics, hub components configuration                config_files/ - MDMHUB components configuration files                    all|mdm-manager|batch-service|.../                values.yaml - variables specific for the dev env.                kafka-topics.yaml - kafka topic configuration            [region]-qa/ - specific configuration for the qa env                config_files/                    all|mdm-manager|batch-service|.../            [region]-stage/ - specific configuration for the stage env                config_files/                    all|mdm-manager|batch-service|.../                values.yaml                kafka-topics.yaml            [region]-prod/ - specific configuration for the prod env                config_files/                    all|mdm-manager|batch-service|.../                values.yaml                kafka-topics.yaml            [region]-backend/ - backend services configuration: EFK stack, Kafka, Mongo etc.                eck-config/ #eck specific files                values.yaml            kong/ - configuration of the Kong proxy                values.yaml            airflow/ - configuration of the Airflow scheduler                values.yaml        users/ #users configuration            mdm_test_user.yaml            callback_service_user.yaml            ...        values.yaml #cluster level variables        secrets.yaml #cluster level sensitive data    values.yaml #region level variables values.yaml #values common for all environments and clusters install.sh #implementation of the deployment procedure The application is deployed by the install.sh script. 
The script does this in the following steps: Decrypt sensitive data: passwords, certificates, tokens, etc; Prepare the order of values and secrets precedence (the last listed variables override all other variables): common values for all environments, region values, cluster variables, users values, namespace values; Download the helm package; Do some package customization if required; Install the helm package to the selected cluster. Deployment: Build Job: mdm-hub-inbound-services/feature/kubernates Deploy: All Kubernetes deployment jobs. AMER: Deploy backend: Kong, Kafka, mongoDB, EFK, Consul, Airflow, Prometheus; Deploy MDM HUB. Administration: Administration tasks and standard operating procedures were described here."
  },
  {
    "title": "Migration guide",
    "pageID": "218452659",
    "pageLink": "/display/GMDM/Migration+guide",
    "content": "Phase 0. Validate configuration: validate that all configuration was moved correctly - compare application.yml files, check the topic name prefix (on the k8s env the prefix has 2 parts), check the Reltio configuration etc; Check that reading events from sqs is disabled on k8s - reltio-subscriber; Check that reading events from the MAP sqs is disabled on k8s - map-channel; Check that event-publisher is configured to publish events to the old kafka server - all client topics (*-out-*) without snowflake. Check that network traffic is opened: from the old servers to the new REST api endpoint; from the k8s cluster to the old kafka; from the k8s cluster to the old REST API endpoint. Make a mongo dump of the data collections from mongo - remember the start date and time: find the mongo-migration-* pod and run a shell on it: cd /opt/mongo_utils/data; mkdir data; cd data; nohup dumpData.sh & The start date is shown in the first line of the log file: head -1 nohup.out #example output → [Mon Jul  4 12:09:32 UTC 2022] Dumping all collections without: entityHistory, entityMatchesHistory, entityRelations and LookupValues from source database mongo. Validate the output of the dump tool by: cd /opt/mongo_utils/data/data && tail -f nohup.out Restore the dumped collections in the new 
mongo instance: cd /opt/mongo_utils/data/data; mv nohup.out nohup.out.dump; nohup mongorestore.sh dump/ & tail -f nohup.out #validate the output. Validate the target database and check that only the entityHistory, entityMatchesHistory, entityRelations and LookupValues collections were copied from the source. If there are more collections than mentioned, you can delete them. Create a new consumer group ${new_env}-event-publisher for the sync-event-publisher component on topic ${old_env}-internal-reltio-proc-events located on the old Kafka instance. Set the offset to the start date and time of the mongo dump - do this with the command line client because Akhq has a problem with this action. Configure and run sync-event-publisher - it is responsible for the synchronization of the mongo DB with the old environment. The component has to be connected with the old Kafka and Manager, and the routing rules list has to be empty. Phase 1 (External clients are still connected to the old endpoints of the rest services and kafka): Check whether something is waiting for processing on the kafka topics and whether there are active batches in the batch service; If there is data on the kafka topics, stop the subscriber and wait until all data in enricher, callback and publisher is processed. Check it by monitoring the input topics of these components; Wait until all data is processed by the snowflake connector; Disable the Jenkins jobs; Stop the outbound (mdmhub) components; Stop the inbound (mdmgw) components; Disable all Airflow DAGs assigned to the migrated environment; Turn off the snowflake connector at the old environment; Turn off sync-event-publisher on the k8s environment; Run the Mongo Migration Tool to copy the mongo databases - copy only the collections with caches, the data collections were synced before (mongodump + sync-event-publisher). Before starting, check the collections in the old mongo instance. 
You can delete all temporary collections lookup_values_export_to_s3_*, reconciliation_* etc. #dumping: cd /opt/mongo_utils/data; mkdir non_data; cd non_data; nohup dumpNonData.sh & tail -f nohup.out #validate the output #restoring: nohup mongorestore.sh dump/ & tail -f nohup.out #validate the output Enable the reltio subscriber on K8s - check the SQS credentials and turn on the SQS route; Enable processing events on the MAP sqs queues - if map-channel exists on the migrated environment; Reconfigure Kong: forward all incoming traffic to the new instance of MDMHUB; include rules for API paths from: MR-3140; Delete all oauth and key-auth plugins https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-plugin; it might be required to remove routes when the ansible playbook throws a duplication error https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-route Start the Snowflake connector located at the k8s cluster; Turn on the components (without sync-event-publisher) on the k8s environment; Change the api url and secret (manager apikey) in the snowflake deployment configuration (Ansible); Change the api key in the dependent api routers; Install the Kibana dashboards; Add mappings to Monstache; Add the transaction topics to fluentd. Phase 2 (Environment running in K8s): Run the Kibana Migration Tool to copy indexes - after migration; Run Kafka Mirror Maker to copy all data from the old output topics to the new ones. Phase 2 (All external clients confirmed that they switched their applications to the new endpoints): Wait until all clients are switched to the new endpoints. Phase 3 (All environments are migrated to kubernetes): Stop the old mongo instance; Stop fluentd and kibana; Stop Kafka Mirror Maker; Stop kafka and kong at the old environment; Decommission the old environment hosts. To remember after migration: Review CPU requests on k8s https://pdcs-som1d.COMPANY.com/c/c-57wsz/monitoring + Resource management for components - done; MongoDB on k8s has only 1 instance; Kong API delete plugin - 
https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-plugin; K8s: add the consul-server service to ingress - the consul ui already exposes the API https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1/kv/; Consul UI redirect doesn't work due to consul being stubborn about using the /ui path. Decision: skip this, send the client the new consul address; Fix the issue with the MDMHUB manager and batch-service oauth user being duplicated in mappings - done; Verify whether mdm hub components are using the external api address and switch to the internal k8s service address - checked, confirmed nothing is using external addresses; Check whether Portworx requires setting affinity rules to run only on 3 nodes; akhq - disable the default k8s token automount - done"
  },
  {
    "title": "PDKS Cluster tests",
    "pageID": "228917568",
    "pageLink": "/display/GMDM/PDKS+Cluster+tests",
    "content": "Assumptions. Addresses used in tests: API: https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-amer-dev/actuator/health/; Kafka; Consul. K8s resources: 3 static EC2 nodes; CPU reserved >67%; RAM reserved >67%; 0-4 dynamic EC2 nodes in an Auto Scaling Group, scaled based on load; Each MDM Hub app deployed in 1 replica, so no redundancy. Failover tests. Expected results: No downtime of the API and all services exposed to clients. Scenario: One EKS node down. Force a node drain with the timeout and grace period set to a low 10 seconds. Results: One EKS node down: API was unavailable for ~1 or ~3 minutes. 
Unavailability was handled correctly by Kong by sending HTTP 500 responses. Static node resources were reserved at more than 67%, so draining 1 of 3 nodes caused scaling up of the dynamic nodes. Every time, K8s managed to start a new pod and heal all services. There was no need for manual operational work to fix anything. Conclusions: The test was partially successful; Failover worked; API downtime was short; No operational work was required; To remove the risk of service unavailability: increase the number of MDM Hub instances; To reduce the time of service unavailability: test whether reducing the Readiness time of a Pod to less than 60s could work. Scale tests. Expected results: EKS node scaling up and down should be automatic based on cluster capacity. Scenarios: Scale pods up to overcome the capacity of the static ASG, then scale down. Results: The scale up and down test was carried out while doing the failover tests. When 1 of 3 static nodes became unavailable, the ASG scaled up the number of dynamic instances - first to 1 and then to 2. After the static node was once again operational, the ASG scaled the dynamic nodes down to 0. Conclusions"
  },
  {
    "title": "Portworx - storage administration guide",
    "pageID": "218458438",
    "pageLink": "/display/GMDM/Portworx+-+storage+administration+guide",
    "content": "Outdated: Portworx is no longer used in MDM Hub Kubernetes clusters. Portworx, what is it? A commercial product, validated storage solution and a standard for PDKS Kubernetes clusters. It uses AWS EBS volumes, adds replication and provides a k8s storage class as a result. It can then be used just as any k8s storage by defining a PVC. What problem does it solve? How to: use Portworx storage: Configure a Persistent Volume Claim to use one of the Portworx Storage Classes configured on K8s. 2 classes are available: pwx-repl2-sc - storage has 2 replicas - use on non-prod; pwx-repl3-sc - storage has 3 replicas. Extend volumes: In Helm just change the PVC requested size and deploy the changes to a cluster with a Jenkins job. No other action should be required. 
Example change: MR-3124 change persistent volumes claims. Check status, statistics and alerts: TBD. One of the tools should provide volume status and statistics: https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&from=now-1h&to=now https://amrdrml472.COMPANY.com:9443/login https://us2.app.sysdig.com/api/saml/COMPANY?product=SDC Responsibilities: Who is responsible for what is described in the table below. In short: if any change in the Portworx setup is required, create a support ticket to a queue found on the Support information with queues names page. Additional documentation: PDCS Kubernetes Storage Management Platform Standards (if the link doesn't work, go to http://containers.COMPANY.com/ and search in the "PDKS Docs" section for "WTST-0299 PDCS Kubernetes Storage Management Platform Standards"); Kubernetes Portworx storage class documentation; Portworx on Kubernetes docs"
  },
  {
    "title": "Resource management for components",
    "pageID": "218444330",
    "pageLink": "/display/GMDM/Resource+management+for+components",
    "content": "Outdated: MDM Hub component resources are managed automatically by the Vertical Pod Autoscaler - the table below is no longer applicable. K8s resource requests vs limits. Quotes on how to understand Kubernetes resource limits: requests is a guarantee, limits is an obligation (Galo Navarro). When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. 
This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate. (How Pods with resource requests are scheduled) MDM Hub resource configuration per component. IMPORTANT: the table is outdated. The current CPU and memory configuration is in the mdm-hub-cluster-env git repository. Values are CPU request/limit [m] and memory request/limit [Mi]: mdm-callback-service 200/4000, 1600/2560; mdm-hub-reltio-subscriber 200/1000, 400/640; mdm-hub-event-publisher 200/2000, 800/1280; mdm-hub-entity-enricher 200/2000, 800/1280; mdm-api-router 200/4000, 800/1280; mdm-manager 200/4000, 1000/2000; mdm-reconciliation-service 200/4000, 1600/2560; mdm-batch-service 200/2000, 800/1280; Kafka 500/4000, 10000 (Xmx 3GB)/20000; Zookeeper 200/1000, 256/512; akhq 100/500, 256/512; kafka-connect 500/2000, 1000/2000; MongoDB 500/4000, 20000/32000; MongoDB agent 200/400, 200/500; Elasticsearch 500/2000, 8000/20000; Kibana 100/2000, 1024/1536; Airflow - scheduler 200/700, 512/2048; Airflow - webserver 200/700, 256/1024; Airflow - postgresql 250/-, 256/-; Airflow - statsd 200/500, 256/512; Consul 100/500, 256/512; git2consul 100/500, 256/512; Kong 100/2000, 512/2048; Prometheus 200/1000, 1536/3072. Legend: requires tuning; proposal; deployed. Useful links - links helpful when talking about k8s resource management: Resource Management for Pods and Containers; How Pods with resource requests are scheduled; Sizing Kubernetes pods for JVM apps without fearing the OOM Killer; MDM Hub Kubernetes cluster configuration git repository"
  },
  {
    "title": "Standards and rules",
    "pageID": "218435163",
    "pageLink": "/display/GMDM/Standards+and+rules",
    "content": "K8s Limit definition: Limit size for CPU has to be defined in "m" (milliCPU), ram in "Mi" (mebibytes) and storage in "Gi" (gibibytes). You can find more details about resource limits at https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ GB vs GiB: What’s the Difference Between Gigabytes and Gibibytes? At its most basic level, one GB is defined as 1000³ (1,000,000,000) bytes and one GiB as 1024³ (1,073,741,824) bytes. That means one GB equals 0.93 GiB. 
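The GB-to-GiB conversion above can be checked directly with a couple of lines of Python:

```python
# Check: one gigabyte (10**9 bytes) expressed in gibibytes (2**30 bytes).
GB = 10 ** 9
GiB = 2 ** 30
ratio = GB / GiB
# ratio is about 0.93, i.e. 1 GB is roughly 0.93 GiB
```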
Source: https://massive.io/blog/gb-vs-gib-whats-the-difference/ To check the current resource configuration, see: Resource management for components. Docker: To protect our images from changes to remote images coming from remote registries such as https://hub.docker.com/, before using a remote image as a base image in the implementation you have to publish it in our private registry http://artifactory.COMPANY.com/mdmhub-docker-dev. Kafka objects naming standards. Kafka topics - Name template: <$envName>-$-$ Topic Types: in - topics for producing events by external systems; out - topics for consuming events by external systems; internal - topics used by HUB services. Consumer Groups - Name template: <$envName>-<$componentName>-[$processName]. Standardized environment names: amer-dev, emea-qa, gblus-stage, gbl-prod etc. Standardized component names: batch-service, callback-service, mdm-manager, event-publisher, api-router, reconciliation-service, reltio-subscriber"
  },
  {
    "title": "Technical details",
    "pageID": "218440550",
    "pageLink": "/display/GMDM/Technical+details",
    "content": "Network: Subnet name / Subnet mask / Region / Details: subnet-07743203751be58b9 10.9.64.0/18 amer; subnet-0dec853f7c9e507dd 10.9.0.0/18 amer; subnet-018f9a3c441b24c2b ●●●●●●●●●●●●●●● apac; subnet-06e1183e436d67f29 10.116.176.0/20 apac; subnet-0e485098a41ac03ca 10.90.144.0/20 emea; subnet-067425933ced0e77f 10.90.128.0/20 emea"
  },
  {
    "title": "SOPs",
    "pageID": "228923665",
    "pageLink": "/display/GMDM/SOPs",
    "content": "Standard operation procedures are available here."
  },
  {
    "title": "Downstream system migration guide",
    "pageID": "218452663",
    "pageLink": "/display/GMDM/Downstream+system+migration+guide",
    "content": "This chapter describes the steps that you have to take if you want to switch your application to the new MDM HUB instance. Direct channel (Rest services): If you use the direct channel to communicate with MDM HUB, the only thing that you need to do is change the API endpoint addresses. 
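In other words, the switch is a pure address substitution. A minimal sketch: the old/new pair below is one row from the endpoint table in this chapter (GBLUS DEV, Manager API); everything after the base path is kept as-is.

```python
# Sketch: rewrite an old MDM HUB endpoint to its new substitute by
# prefix replacement. The mapping holds one pair from this chapter's
# endpoint table (GBLUS DEV Manager API).
OLD_TO_NEW = {
    'https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-ext':
        'https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-dev',
}

def migrate_endpoint(url):
    for old, new in OLD_TO_NEW.items():
        if url.startswith(old):
            # swap the base path, keep the rest of the request path
            return new + url[len(old):]
    return url  # unknown endpoints are left unchanged

migrated = migrate_endpoint(
    'https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-ext/entities')
```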
The authentication mechanism, based on OAuth served by Ping Federate, stays unchanged. Please remember that network traffic between your services and MDM HUB most likely has to be opened before switching your application to the new HUB endpoints.The following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with MDM HUB has to use the new endpoints.EnvironmentOld endpointNew endpointAffected clientsDescriptionGBLUS DEV/QA/STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1ETLConsulGBLUS DEVhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-devCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager APIGBLUS DEVhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-devETLBatch APIGBLUS QAhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qaCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager APIGBLUS QAhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qaETL,Batch APIGBLUS STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/stage-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-stageCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager APIGBLUS STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/stage-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-stageETL,Batch APIGBLUS PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/v1https://consul-amer-prod-gbl-mdm-hub.COMPANY.com/v1ETLConsulGBLUS PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/prod-exthttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-prodCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager APIGBLUS 
PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/prod-batch-exthttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-prodETLBatch APIEMEA DEV/QA/STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/v1https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/v1ETLConsulEMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-devMULE, GRV, PforceRx, JORouter APIEMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-devManager APIEMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-devETLBatch APIEMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-qaMULE, GRV, PforceRx, JORouter APIEMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-qaManager APIEMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-qaETLBatch APIEMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-stageMULE, GRV, PforceRx, JORouter APIEMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-stageManager APIEMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-stageETLBatch APIEMEA PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/v1https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/v1ETLConsulEMEA PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-emea-prodMULE, GRV, PforceRxRouter APIEMEA 
PRODhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/prod-ext/gwhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prodManager APIEMEA PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/prod-batch-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-prodBatch APIGBL DEVhttps://mdm-reltio-proxy.COMPANY.com:8443/dev-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-devMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD,Manager APIGBL QA (MAPP)https://mdm-reltio-proxy.COMPANY.com:8443/mapp-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-qaMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD,Manager APIGBL STAGEhttps://mdm-reltio-proxy.COMPANY.com:8443/stage-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-stageMULE, GRV, JO, KOL_ONEVIEW, MEDIC, ONEMED, PTRS, VEEVA_FIELDManager APIGBL PRODhttps://mdm-gateway.COMPANY.com/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-prodMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELDManager APIGBL PRODhttps://mdm-gateway-int.COMPANY.com/gw-apihttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-prodCHINAManager APIEXTERNAL GBL DEVhttps://mdm-reltio-proxy.COMPANY.com:8443/dev-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-devMAP, GANT, MAPPManager APIEXTERNAL GBL QA (MAPP)https://mdm-reltio-proxy.COMPANY.com:8443/mapp-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-qaMAP, GANT, MAPPManager APIEXTERNAL GBL STAGEhttps://mdm-reltio-proxy.COMPANY.com:8443/stage-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-stageMAP, GANT, MAPPManager APIEXTERNAL GBL PRODhttps://mdm-gateway.COMPANY.com/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-prodMAP, GANT, MAPPManager APIEXTERNAL EMEA 
DEVhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/dev-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-devMAP, GANT, MAPPRouter APIEXTERNAL EMEA QAhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/qa-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-qaMAP, GANT, MAPPRouter APIEXTERNAL EMEA STAGEhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/stage-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-stageMAP, GANT, MAPPRouter APIEXTERNAL EMEA PRODhttps://api-emea-prod-gbl-mdm-hub-ext.COMPANY.com:8443/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-prodMAP, GANT, MAPPRouter APIStreaming channel (Kafka)Switching to a new environment requires configuration changes on your side:Change the Kafka broker address,Change the JAAS configuration - in the new architecture, we decided to change the JAAS authentication mechanism to SCRAM. To be sure that you are using the right authentication, you have to change a few parameters in the Kafka connection:a. the JAAS login config file, whose path is specified in the "java.security.auth.login.config" Java property. It should look like below:KafkaClient {  org.apache.kafka.common.security.scram.ScramLoginModule required username="" ●●●●●●●●●●●●●●●●●●●>";};b. change the value of the "sasl.mechanism" property to "SCRAM-SHA-512"c. if you configure the JAAS login using the "sasl.jaas.config" property, you have to change its value to "org.apache.kafka.common.security.scram.ScramLoginModule required username="" ●●●●●●●●●●●●●●●●●●●>";"You should receive the new credentials (username and password) in the email about changing Kafka endpoints. Otherwise, contact our support team to get the proper username and ●●●●●●●●●●●●●●●.The following table presents old endpoints and their substitutes in the new environment. 
Everyone who wants to connect with MDMHUB has to use new endpoints.EnvironmentOld endpointNew endpointAffected clientsDescriptionGBLUS DEV/QA/STAGEamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094ENGAGE, KOL_ONEVIEW, GRV, ICUE, MULEKafkaGBLUS PRODamraelp00007848.COMPANY.com:9094,amraelp00007849.COMPANY.com:9094,amraelp00007871.COMPANY.com:9094kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094ENGAGE, KOL_ONEVIEW, GRV, ICUE, MULEKafkaEMEA DEV/QA/STAGEeuw1z2dl112.COMPANY.com:9094mdm-reltio-proxy.COMPANY.com:9094 (external)kafka-b1-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MAP (external), PforceRx, MULEKafkaEMEA PRODeuw1z2pl116.COMPANY.com:9094,euw1z1pl117.COMPANY.com:9094,euw1z2pl118.COMPANY.com:9094kafka-b1-emea-prod-gbl-mdm-hub.COMPANY.com:9094,kafka-b2-emea-prod-gbl-mdm-hub.COMPANY.com:9094,kafka-b3-emea-prod-gbl-mdm-hub.COMPANY.com:9094kafka-b1-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095,kafka-b2-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095,kafka-b3-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095 (external)kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MAP (external), PforceRx, MULEKafkaGBL DEV/QA/STAGEeuw1z1dl037.COMPANY.com:9094mdm-reltio-proxy.COMPANY.com:9094 (external)kafka-b1-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MAP (external), China, KOL_ONEVIEW, PTRS, PTE, ENGAGE, MAPP,KafkaGBL PRODeuw1z1pl017.COMPANY.com:9094,euw1z1pl021.COMPANY.com:9094,euw1z1pl022.COMPANY.com:9094mdm-broker-p1.COMPANY.com:9094,mdm-broker-p2.COMPANY.com:9094,mdm-broker-p3.COMPANY.com:9094 (external)kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MAP (external), China, KOL_ONEVIEW, PTRS, ENGAGE, MAPP,KafkaEXTERNAL GBL DEV/QA/STAGEData Mart (Snowflake)There are no changes required if you use Snowflake to get MDMHUB data." 
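The SCRAM-related client properties described in the streaming-channel section can be sketched as follows (a minimal sketch, not a value list from this guide: the helper name, the security.protocol setting and the placeholder credentials are assumptions — use the credentials from the endpoint-change email):

```python
# Sketch of a Kafka client configuration for the SCRAM change described
# above. Username/password are placeholders; security.protocol=SASL_SSL
# is an assumption, not something this guide specifies.
def scram_client_config(bootstrap_servers, username, password):
    jaas = (
        'org.apache.kafka.common.security.scram.ScramLoginModule required '
        f'username="{username}" password="{password}";'
    )
    return {
        "bootstrap.servers": bootstrap_servers,
        "security.protocol": "SASL_SSL",    # assumed; confirm with support
        "sasl.mechanism": "SCRAM-SHA-512",  # value required by the guide
        "sasl.jaas.config": jaas,           # replaces the old JAAS setting
    }

cfg = scram_client_config(
    "kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094",
    "<username>", "<password>",
)
print(cfg["sasl.mechanism"])  # SCRAM-SHA-512
```

The same key/value pairs can be written into a properties file for JVM clients instead of being built in code; only the "sasl.mechanism" and JAAS values are mandated by the migration.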
  },
  {
    "title": "MDM HUB Log Management",
    "pageID": "164470115",
    "pageLink": "/display/GMDM/MDM+HUB+Log+Management",
    "content": "MDM HUB has a built-in log management solution that allows tracing data going through the system (incoming and outgoing events).It improves:TraceabilityAbility to trace input and output dataCompliance requirementsSecurityAny user activity is recordedThreat protection and discoveryMonitoringOutages & performance bottlenecks detectionAnalytics Metrics & trends in real-timeAnomalies detectionThe solution is based on the EFK stack:Elasticsearch - provides storage, indexing and search capabilitiesFluentd - ships, transforms and loads logsKibana - provides UI for usersThe solution is presented in the picture below: HUB microservices generate log events and place them on Kafka monitoring topics.Fluentd processes events from the topics and stores them in Elasticsearch. Kibana presents the data to users."
  },
  {
    "title": "EFK Environments",
    "pageID": "164470092",
    "pageLink": "/display/GMDM/EFK+Environments",
    "content": ""
  },
  {
    "title": "Elastic Cloud on Kubernetes in MDM HUB",
    "pageID": "284787486",
    "pageLink": "/display/GMDM/Elastic+Cloud+on+Kubernetes+in+MDM+HUB",
    "content": "OverviewAfter migrating from on-premise solutions to the Kubernetes platform, we started to use Elastic Cloud on Kubernetes (ECK).https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html With ECK we can streamline critical operations, such as:Setting up hot-warm-cold architectures.Providing lifecycle policies for logs and transactions, and snapshots of obsolete/older/less useful data.Creating dashboards visualising data of MDM HUB core processes.Logs, transactions and mongo collectionsWe split all the data entering the Elastic Stack cluster into the different categories listed as follows:1. 
MDM HUB services logsFor forwarding MDM HUB service logs we use Fluent Bit, used as a sidecar/agent container inside the mdmhub service pod.The sidecar/agent sends data directly to a backend service on the Kubernetes cluster.2. Backend logs and transactionsFor forwarding backend logs and transactions we use Fluentd as a forwarder and aggregator, a lightweight pod instance deployed on the edge.In case of Elasticsearch unavailability, a secondary output is defined on S3 storage so as not to miss any data coming from the services.3. MongoDB collectionsIn this scenario we decided to use Monstache, a sync daemon written in Go that continuously indexes MongoDB collections into Elasticsearch.We use it to mirror Reltio data gathered in MongoDB collections in Elasticsearch as a backup and a source for Kibana dashboard visualisations.Data streamsMDM HUB services and backend logs and transactions are managed by the Data streams mechanism.A data stream lets us store append-only time series data (logs/transactions) across multiple indices while giving a single named resource for requests.https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.htmlIndex lifecycle policies and snapshots managementIndex templates, index lifecycle policies and snapshots for index management are entirely covered by the Elasticsearch built-in mechanisms.Description of the index lifecycle, divided into phases:Index rollover - logs and transactions are stored in hot tiersIndex rollover - logs and transactions are moved to the delete phaseSnapshot - logs and transactions deleted from Elasticsearch are snapshotted on an S3 bucketSnapshot - logs and transactions are deleted from the S3 bucket - the index is no longer availableAll snapshotted indices may be restored and recreated on Elasticsearch anytime.Maximum sizes and ages for the index rollovers and snapshots are included in the following tables:Non PROD environmentstypeindex rollover hot phaseindex rollover delete phasesnapshot phase MDM HUB logsage: 7dsize: 
100gbage: 30dage: 180dBackend logsage: 7dsize: 100gbage: 30dage: 180dKafka transactionsage: 7dsize: 25gbage: 30dage: 180dPROD environmentstypeindex rollover hot phaseindex rollover delete phasesnapshot phase MDM HUB logsage: 7dsize: 100gbage: 90dage: 365dBackend logsage: 7dsize: 100gbage: 90dage: 365dKafka transactionsage: 7dsize: 25gbage: 180dage: 365dAdditionally, we execute a full snapshot policy on a daily basis. It is responsible for incrementally storing all the Elasticsearch indexes on S3 buckets as a backup. Snapshots locationsenvironmentS3 bucketpathEMEA NPRODpfe-atp-eu-w1-nprod-mdmhubemea/archive/elastic/fullEMEA PRODpfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811emea/archive/elastic/fullAMER NPRODgblmdmhubnprodamrasp100762amer/archive/elastic/fullAMER PRODpfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808amer/archive/elastic/fullAPAC NPRODglobalmdmnprodaspasp202202171347apac/archive/elastic/fullAPAC PRODpfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502apac/archive/elastic/fullMongoDB collections data are stored in Elasticsearch permanently; they are not covered by the index lifecycle processes.Kibana dashboardsKibana Dashboard Overview"
  },
  {
    "title": "Kibana Dashboards",
    "pageID": "164470093",
    "pageLink": "/display/GMDM/Kibana+Dashboards",
    "content": ""
  },
  {
    "title": "Tracing areas",
    "pageID": "164470094",
    "pageLink": "/display/GMDM/Tracing+areas",
    "content": "Log data are generated in the following actions:API calls request timestampoperation namerequest payloadresponse statusMDM events timestampmdm nameevent typeevent payload"
  },
  {
    "title": "MDM HUB Monitoring",
    "pageID": "164470106",
    "pageLink": "/display/GMDM/MDM+HUB+Monitoring",
    "content": ""
  },
  {
    "title": "AKHQ",
    "pageID": "164470020",
    "pageLink": "/display/GMDM/AKHQ",
    "content": "AKHQ (https://github.com/tchiotludo/akhq) is a tool for browsing, changing and monitoring Kafka's 
instances.https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com/https://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/https://akhq-apac-prod-gbl-mdm-hub.COMPANY.com/"
  },
  {
    "title": "Grafana & Kibana",
    "pageID": "228933027",
    "pageLink": "/pages/viewpage.action?pageId=228933027",
    "content": "KIBANAUS PROD https://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibanaUser: kibana_dashboard_viewUS NONPROD https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibanaUser: kibana_dashboard_view=====GBL PROD https://kibana-emea-prod-gbl-mdm-hub.COMPANY.comGBL NONPROD https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com=====EMEA PROD https://kibana-emea-prod-gbl-mdm-hub.COMPANY.comEMEA NONPROD https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com=====GBLUS PROD https://kibana-amer-prod-gbl-mdm-hub.COMPANY.comGBLUS NONPROD https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com=====AMER PROD https://kibana-amer-prod-gbl-mdm-hub.COMPANY.comAMER NONPROD https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com=====APAC PROD https://kibana-apac-prod-gbl-mdm-hub.COMPANY.comAPAC NONPROD https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.comGRAFANAhttps://grafana-mdm-monitoring.COMPANY.comKeePass - download thisKibana-k8s.kdbxThe password to the KeePass file is sent in a separate email to improve the security of credential delivery.To get access, you only need to download KeePass version 2.50 (https://keepass.info/download.html) and log in to the file with the password that was sent to you.After you do, you will see a screen like this:Then just click the title you are interested in, and a window opens:Here you have the user name and the proper link; when you click the 3 dots (red square) you will get the password."
  },
  {
    "title": "Grafana Dashboard Overview",
    "pageID": "164470208",
    "pageLink": "/display/GMDM/Grafana+Dashboard+Overview",
    "content": "MDM HUB's Grafana is deployed on the MONITORING host and is available under the following URL:https://grafana-mdm-monitoring.COMPANY.comAll the dashboards are built using Prometheus metrics."
  },
  {
    "title": "Alerts Monitoring PROD&NON_PROD",
    "pageID": "163917772",
    "pageLink": "/pages/viewpage.action?pageId=163917772",
    "content": "PROD: https://mdm-monitoring.COMPANY.com/grafana/d/5h4gLmemz/alerts-monitoring-prodNON PROD: https://mdm-monitoring.COMPANY.com/grafana/d/COVgYieiz/alerts-monitoring-non_prodThe Dashboard contains firing alerts and the statuses of the last Airflow DAG runs for GBL (left side) and US FLEX (right side):a., e. number of alerts firingb., f. turns red when one or more DAG JOBS have failedc., g. alerts currently firingd., h. table containing all the DAGs and their run count for each of the statuses"
  },
  {
    "title": "AWS SQS",
    "pageID": "163917788",
    "pageLink": "/display/GMDM/AWS+SQS",
    "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/CI4RLieik/aws-sqsThe dashboard describes the SQS queue used in Reltio→MDM HUB communication.The dashboard is divided into the following sections:a. Approximate number of messages - how many messages are currently waiting in the queueb. Approximate number of messages delayed - how many messages are waiting to be added to the queuec. Approximate number of messages invisible - how many messages are neither timed out nor deleted"
  },
  {
    "title": "Docker Monitoring",
    "pageID": "163917797",
    "pageLink": "/display/GMDM/Docker+Monitoring",
    "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoringThis dashboard describes the Docker containers running on the hosts in each environment. 
Switch the currently viewed environment/host using the variables at the top of the dashboard ("env", "host").The dashboard is divided into the following sections:a. Running containers - how many containers are currently running on this hostb. Total Memory Usagec. Total CPU Usaged. CPU Usage - over time CPU use per containere. Memory Usage - over time Memory use per containerf. Network Rx - received bytes per container over timeg. Network Tx - transmitted bytes per container over time"
  },
  {
    "title": "Host Statistics",
    "pageID": "163917801",
    "pageLink": "/display/GMDM/Host+Statistics",
    "content": "\n\n\n\nDashboard: https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statisticsDashboard template source: https://grafana.com/grafana/dashboards/1860This dashboard describes various statistics related to hosts' resource usage. It uses metrics from the node_exporter. You can change the currently viewed environment and host using variables at the top of the dashboard.\n\n\n\n\n\nBasic CPU / Mem / Disk Gaugea. CPU Busyb. Used RAM Memoryc. Used SWAP - hard disk memory used for swappingd. Used Root FSe. CPU System Load (1m avg)f. CPU System Load (5m avg)\n\n\n\n\n\nBasic CPU / Mem / Disk Infoa. CPU Coresb. Total RAMc. Total SWAPd. Total RootFSe. System Load (1m avg)f. Uptime - time since last restart\n\n\n\n\n\nBasic CPU / Mem Grapha. CPU Basic - CPU state %b. Memory Basic - memory (SWAP + RAM) use\n\n\n\n\n\nBasic Net / Disk Infoa. Network Traffic Basic - network traffic in bytes per interfaceb. Disk Space Used Basic - disk usage per mount\n\n\n\n\n\nCPU Memory Net Diska. CPU - percentage use per status/operationb. Memory Stack - use per status/operationc. Network Traffic - detailed network traffic in bytes per interface. Negative values correspond to transmitted bytes, positive to received.d. Disk Space Used - disk usage per mounte. Disk IOps - disk operations per partition. Negative values correspond to write operations, positive - read operations.f. 
I/O Usage Read / Write - bytes read (positive)/written (negative) per partitiong. I/O Usage Times - time of I/O operations in seconds per partition\n\n\n\n\n\nEtc.As the dashboard template is a publicly available project, the panels/graphs are sufficiently described and do not require further explanation.\n\n\n"
  },
  {
    "title": "HUB Batch Performance",
    "pageID": "163917855",
    "pageLink": "/display/GMDM/HUB+Batch+Performance",
    "content": "\n\n\n\nDashboard: https://mdm-monitoring.COMPANY.com/grafana/d/gz0X6rkMk/hub-batch-performance\n\n\n\n\n\na. Batch loading rateb. Batch loading latencyc. Batch sending rated. Batch sending latencye. Batch processing rate - batch processing in ops/sf. Batch processing latency - batch processing time in secondsg. Batch loading max gauge - max loading time in secondsh. Batch sending max gauge - max sending time in secondsi. Batch processing max gauge - max processing time in seconds\n\n\n"
  },
  {
    "title": "HUB Overview Dashboard",
    "pageID": "163917867",
    "pageLink": "/display/GMDM/HUB+Overview+Dashboard",
    "content": "\n\n\n\nDashboard: https://mdm-monitoring.COMPANY.com/grafana/d/OfVgLm6ik/hub-overviewThis dashboard contains information about Kafka topics/consumer groups in HUB - downstream from Reltio.\n\n\n\n\n\na. Lag by Consumer Group - lag on each INBOUND consumer groupb. Message consume per minute - messages consumed by each INBOUND consumer groupc. Message in per minute - inbound messages count by each INBOUND topicd. Lag by Consumer Group - lag on each OUTBOUND consumer groupe. Message consume per minute - messages consumed by each OUTBOUND consumer groupf. Message in per minute - inbound messages count by each OUTBOUND topicg. Lag by Consumer Group - lag on each INTERNAL BATCH consumer grouph. Message consume per minute - messages consumed by each INTERNAL BATCH consumer groupi. 
Message in per minute - inbound messages count by each INTERNAL BATCH topic\n\n\n" + }, + { + "title": "HUB Performance", + "pageID": "163917830", + "pageLink": "/display/GMDM/HUB+Performance", + "content": "\n\n\n\nDashboard: https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance\n\n\n\n\n\nAPI Performancea. Read Rate - API Read operations in 5/10/15min rateb. Read Latency - API Read operations latency in seconds for 50/75/99th percentile of requests. Consists of Reltio response time, processing time and total timec. Write Rate - API Write operations in 5/10/15min rated. Write Latency - API Write operations latency in seconds for 50/75/99th percentile of requests per each API operation\n\n\n\n\n\nPublishing Performancea. Event Preprocessing Total Rate - Publisher's preprocessed events 5/10/15min rate divided for entity/relation eventsb. Event Preprocessing Total Latency - preprocessing time in seconds for 50/75/99th percentile of events\n\n\n\n\n\nSubscribing Performancea. MDM Events Subscribing Rate - Subscriber's events rateb. MDM Events Subscribing Latency - Subscriber's event processing (passing downstream) rate\n\n\n" + }, + { + "title": "JMX Overview", + "pageID": "163917876", + "pageLink": "/display/GMDM/JMX+Overview", + "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overviewThis dashboard organizes and displays data extracted from each component by a JMX exporter - related to this component's resource usage. You can switch currently viewed environment/component/node using variables on the top of the dashboard.a. Memoryb. Total RAMc. Used SWAPd. Total SWAPe. CPU System Load(1m avg)f. CPU System Load(5m avg)g. CPU Coresh. CPU Usagei. Memory Heap/NonHeapj. Memory Pool Usedk. Threads usedl. Class loadingm. Open File Descriptorsn. GC time / 1 min. rate - Garbage Collector time rate/mino. 
GC count - Garbage Collector operations count" + }, + { + "title": "Kafka Overview", + "pageID": "163917904", + "pageLink": "/display/GMDM/Kafka+Overview", + "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/YNIRYmeik/kafka-overviewThis dashboard describes Kafka's per node resource usage.a. CPU Usageb. JVM Memory Usedc. Time spent in GCd. Messages in Per Topice. Bytes in Per Topicf. Bytes Out Per Topic" + }, + { + "title": "Kafka Overview - Total", + "pageID": "163917913", + "pageLink": "/display/GMDM/Kafka+Overview+-+Total", + "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/W6OysZ5Zz/kafka-overview-totalThis dashboard describes Kafka's total (all node summary) resource usage per environment.a. CPU Usageb. JVM Memory Usedc. Time spent in GCd. Messages ratee. Bytes in Ratef. Bytes Out Rate" + }, + { + "title": "Kafka Topics Overview", + "pageID": "163917920", + "pageLink": "/display/GMDM/Kafka+Topics+Overview", + "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overviewThis dashboard describes Kafka topics and consumer groups in each environment.a. Topics purge ETA in hours - approximate time it should take for each consumer group to process all the events on their topicb. Lag by Consumer Groupc. Message in per minute - per topicd. Message consume per minute - per consumer groupe. Message in per second - per topic" + }, + { + "title": "Kong Dashboard", + "pageID": "163917927", + "pageLink": "/display/GMDM/Kong+Dashboard", + "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kongThis dashboard describes the Kong component statistics.a. Total requests per secondb. DB reachabilityc. Requests per serviced. Requests by HTTP status codee. Total Bandwidthf. Egress per service (All) - traffic exiting the MDM network in bytesg. Ingress per service (All) - traffic entering the MDM network in bytesh. 
Kong Proxy Latency across all services - split by 90/95/99th percentilei. Kong Proxy Latency per service (All) - split by 90/95/99th percentilej. Request Time across all services - split by 90/95/99th percentilek. Request Time per service (All) - split by 90/95/99th percentilel. Upstream Time across all services - split by 90/95/99th percentilem. Upstream Time per service (All) - split by 90/95/99th percentileo. Nginx connection statep. Total Connectionsq. Handled Connectionsr. Accepted Connections"
  },
  {
    "title": "MongoDB",
    "pageID": "163917945",
    "pageLink": "/display/GMDM/MongoDB",
    "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodba. Query Operationsb. Document Operationsc. Document Query Executord. Member Healthe. Member Statef. Replica Query Operationsg. Uptimeh. Available Connectionsi. Open Connectionsj. Oplog Sizek. Memoryl. Network I/Om. Oplog Lagn. Disk I/O Utilizationo. Disk Reads Completedp. Disk Writes Completed"
  },
  {
    "title": "Snowflake Tasks",
    "pageID": "163917954",
    "pageLink": "/display/GMDM/Snowflake+Tasks",
    "content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/358IxM_Mz/snowflake-tasksThis dashboard describes tasks running on each Snowflake instance.Please keep in mind that the metrics supporting this dashboard are scraped rarely (every 8h on nprod, every 2h on prod), so check the Time since last scrape gauge when reviewing the results.a. Time since last scrape - time since the metrics were last scraped - it marks dashboard freshnessb. Last Task Runs - table contains:task's name,date&time of last recorded run,visualisation of how long ago the last run was,state of last run,duration of last run (processing time)c. 
Processing time - visualizes how the processing time of each task changed over time"
  },
  {
    "title": "Kibana Dashboard Overview",
    "pageID": "164469839",
    "pageLink": "/display/GMDM/Kibana+Dashboard+Overview",
    "content": ""
  },
  {
    "title": "API Calls Dashboard",
    "pageID": "164469837",
    "pageLink": "/display/GMDM/API+Calls+Dashboard",
    "content": "The dashboard contains a summary of MDM Gateway API calls in the chosen time range.Use it to:find a certain API call by entity/timestamp/username,check which host this request was sent to,check request processing time etc.The dashboard is divided into the following sections:a. Total requests count - how many requests have been logged in this time range (or passed the filter if that's the case)b. Controls - allows the user to filter requests based on username and operationc. Requests by operation - how many requests have been sent per each operationd. Average response time - how long the response time was on average per each actione. Request per client - how many requests have been sent per each clientf. Response status - how many requests have resulted in each statusg. Top 10 processing times - summary of the 10 requests that have been processed the longest in this time range. Contains transaction ID, related entity URI, operation type and duration in ms.h. Logs - summary of all the logged requests"
  },
  {
    "title": "Batch Loads Dashboard",
    "pageID": "164469855",
    "pageLink": "/display/GMDM/Batch+Loads+Dashboard",
    "content": "The dashboard contains information about files processed by the Batch Channel component.Use this dashboard to:check whether the files were delivered on schedule,check processing time,verify that the files have been processed correctly.The dashboard is divided into the following sections:a. File by type - summary of how many files of each type were delivered in this time range.b. 
File load status count - visualisation of how many entities were extracted from each file type and what was the result of their processing.c. File load count - visualisation of loaded files in this time range. Use it to verify that the files have been delivered on schedule.d. File load summary - summary of the processing of each loaded file. e. Response status load summary - summary of the processing result for each file type."
  },
  {
    "title": "HL DCR Dashboard",
    "pageID": "164469753",
    "pageLink": "/display/GMDM/HL+DCR+Dashboard",
    "content": "This dashboard contains information related to the HL DCR flow (DCR Service).Use it to:track issues related to the HL DCR flow.The dashboard is divided into the following sections:a. DCR Status - summary of how many DCRs have each of the statusesb. Reltio DCR Stats - summary of how many DCRs that have been processed and sent to Reltio have each of the statusesc. DCRRequestProcessing report - list of DCR reports generated in this time ranged. DCR Current state - list of DCRs and their current statuses"
  },
  {
    "title": "HUB Events Dashboard",
    "pageID": "164469849",
    "pageLink": "/display/GMDM/HUB+Events+Dashboard",
    "content": "The dashboard contains information about the Publisher component - events sent to clients or internal components (e.g. Callback Service).Use it to:track issues related to the Publisher's event processing (filtering/publishing),find information about the Publisher's event processing time,find potential issues with events not being published from one topic or being constantly skipped etc.The dashboard is divided into the following sections:a. Count - how many events have been processed by the Publisher in this time rangeb. Event count - visualisation of how many events have been processed over timec. Simple events in time - visualisation of how many simple events have been processed (published) over time per each outbound topicd. 
Skipped events in time - visualisation of how many events have been skipped (filtered) for each reason over time\ne. Full events in time - visualisation of how many full events have been published over time per topic\nf. Processing time - visualisation of how long the processing of entity/relation events took\ng. Events by country - summary of how many events were related to each country\nh. Event types - summary of how many events were of each type\ni. Full events by Topics - visualisation of how many full events of each type were published on each of the topics\nj. Simple events by Topics - visualisation of how many simple events of each type were published on each of the topics\nk. Publisher Logs - list containing all the useful information extracted from the Publisher logs for each event. Use it to track issues related to the Publisher's event processing." + }, + { + "title": "HUB Store Dashboard", + "pageID": "164469853", + "pageLink": "/display/GMDM/HUB+Store+Dashboard", + "content": "Summary of all entities in the MDM in this environment. Contains summary information about entity count, countries and sources. The dashboard is divided into the following sections:\na. Entities count - how many entities there are currently in MDM\nb. Entities modification count - how many entity modifications (create/update/delete) there were over time\nc. Status - summary of how many entities have each of the statuses\nd. Type - summary of how many entities are HCO (Health Care Organization) or HCP (Health Care Professional)\ne. MDM - summary of how many MDM entities are in Reltio/Nucleus\nf. Entities country - visualisation of country to entity count\ng. Entities source - visualisation of source to entity count\nh. Entities by country source type - visualisation of how many entities there are from each country with each source\ni. World Map - visualisation of how many entities there are from each country\nj. 
Source/Country Heat Map - another visualisation of the Country-Source distribution" + }, + { + "title": "MDM Events Dashboard", + "pageID": "164469851", + "pageLink": "/display/GMDM/MDM+Events+Dashboard", + "content": "This dashboard contains information extracted from the Subscriber component.\nUse it to:\nconfirm that a certain event was received from Reltio/Nucleus,\ncheck the consume time.\nThe dashboard is divided into the following sections:\na. Total events count - how many events have been received and published to an internal topic in this time range\nb. Event types - visualisation of how many processed events were of each type\nc. Event count - visualisation of how many events were processed over time\nd. Event destinations - visualisation of how many events have been passed to each of the internal topics over time\ne. Average consume time - visualisation of how long it took to process/pass received events over time\nf. Subscriber Logs - list containing all the useful information extracted from the Subscriber logs. Use it to track potential issues" + }, + { + "title": "Profile Updates Dashboard", + "pageID": "164469751", + "pageLink": "/display/GMDM/Profile+Updates+Dashboard", + "content": "This dashboard contains information about HCO/HCP profile updates via the MDM Gateway.\nUse it to:\ncheck how many updates have been processed,\ncheck processing results (statuses),\ntrack an issue related to the Gateway components.\nNote that the Gateway is not only used by the external vendors, but also by HUB's components (Callback Service).\nThe dashboard is divided into the following sections:\na. Count - how many profile updates have been logged in this time period\nb. Updates by status - how many updates have each of the statuses\nc. Updates count - visualisation of how many updates were received by the Gateway over time\nd. Updates by country source status - visualisation of how many updates there were for each country, from each source and with each status\ne. 
Updates by source - summary of how many profile updates there were from each source\nf. Updates by country source status - another visualisation of how many updates there were for each country, source and status\ng. World Map - visualisation of how many updates there were on profiles from each of the countries\nh. Gateway Logs - list containing all the useful information extracted from the Gateway components' logs. Use it to track issues related to the MDM Gateway" + }, + { + "title": "Reconciliation metrics Dashboard", + "pageID": "310964632", + "pageLink": "/display/GMDM/Reconciliation+metrics+Dashboard", + "content": "The Reconciliation Metrics Dashboard shows reasons why an MDM object (entity or relation) was reconciled.\nUse it to:\nCheck how many records were reconciled,\nFind the reasons for reconciliation.\nCurrently, the dashboard can show the following reasons:\nreconciliation.lookupcode.error - new lookup error was added. Caused by changes in RDM\nreconciliation.lookupcode.changed - lookup code was changed. Caused by changes in RDM\nreconciliation.updatedtime.changed - entity updateTime changed\nreconciliation.description.changed - Any description attribute changed. 
Checks attribute path for .*[Dd]escription.*\nreconciliation.stateprovince.changed - Addresses, Stateprovince value changed\nreconciliation.workplace.changed - Workplace changed\nreconciliation.rank.changed - /attributes/Rank changed\nreconciliation.relation.objectlabel.changed - /startObject/label or /endObject/label changed\nreconciliation.object.missed - Object was removed\nreconciliation.object.added - Object was added\nreconciliation.specialities.changed - Specialities changed (added/removed/replaced)\nreconciliation.specialities.label.changed - Specialities label changed (added/removed/replaced)\nreconciliation.mainhco.changed - /attributes/MainHCO changed (added/removed/replaced)\nreconciliation.address.changed - Any field under Address changed (added/removed/replaced)\nreconciliation.refentity.changed - Any reference entity changed ('^/attributes/.*refEntity.+$' - added/removed/replaced)\nreconciliation.refrelation.changed - Any reference relation changed ('^/attributes/.*refRelation.+$' - added/removed/replaced)\nreconciliation.crosswwalk.attributeslist.change - Crosswalk attributes changed (added/removed/replaced)\nreconciliation.directionallabel.changed - directionalLabel changed (added/removed/replaced)\nreconciliation.value.changed - Any attribute changed (added/removed/replaced)\nreconciliation.other.reason - Non-classified reason - other cases\nThe dashboard consists of a few diagrams:\n{ENV NAME} Reconciliation reasons - shows the most frequent reasons for reconciliation,\nNumber by country - general number of reconciliation reasons divided by countries,\nNumber by types - shows the general number of reconciliation reasons grouped by MDM object type,\nReason list - reconciliation reasons with the number of their occurrences,\n{ENV NAME} Reconciliation metrics - detail view that shows data generated by the Reconciliation Metrics flow. The data has detailed information about what exactly changed on a specific MDM object." 
+ }, + { + "title": "Prometheus Alerts", + "pageID": "164470107", + "pageLink": "/display/GMDM/Prometheus+Alerts", + "content": "DashboardsThere are 2 dashboards available for problems overview: KarmaGrafana - Alerts Monitoring DashboardAlertsENVNameAlertCause (Expression)TimeSeverityAction to be takenALLMDMhigh_load> 30 load130mwarningDetect why load is increasing. Decrease number of threads on components or turn off some of them.ALLMDMhigh_load> 30 load12hcriticalDetect why load is increasing. Decrease number of threads on components or turn off some of them.ALLMDMmemory_usage>  90% used1hcriticalDetect the component which is causing high memory usage and restart it.ALLMDMdisk_usage< 10% free2mhighRemove or archive old component logs.ALLMDMdisk_usage<  5% free2mcriticalRemove or archive old component logs.ALLMDMkong_processor_usage> 120% CPU used by container10mhighCheck the Kong containerALLMDMcpu_usage> 90% CPU used1hcriticalDetect the cause of high CPU use and take appropriate measuresALLMDMsnowflake_task_not_successful_nprodLast Snowflake task run has state other than "SUCCEEDED"1mhighInvestigate whether the task failed or was skipped, and what caused it.Metric value returned by the alert corresponds to the task state:0 - FAILED1 - SUCCEEDED2 - SCHEDULED3 - SKIPPEDALLMDMsnowflake_task_not_successful_prodLast Snowflake task run has state other than "SUCCEEDED"1mhighInvestigate whether the task failed or was skipped, and what caused it.Metric value returned by the alert corresponds to the task state:0 - FAILED1 - SUCCEEDED2 - SCHEDULED3 - SKIPPEDALLMDMsnowflake_task_not_started_24hSnowflake task has not started in the last 24h (+ 8h scrape time)1mhighInvestigate why the task was not scheduled/did not start.ALLMDMreltio_response_timeReltio response time to entities/get requests is >= 3 sec for 99th percentile20mhighNotify the Reltio Team.NON PRODMDMservice_downup{env!~".*_prod"} == 020mwarningDetect the not working component and start it.NON 
PRODMDMkafka_streams_client_statekafka streams client state != 21mhighCheck and restart the Callback Service.NON PRODKongkong_database_downKong DB unreachable20mwarningCheck the Kong DB component.NON PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' logs.NON PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck Kong's port availability.NON PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the Kong component.NON PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for issues.NON PRODKongkong_http_401_status_rateHTTP 401 > 30%20mwarningCheck Kong logs. Notify the authorities in case of suspected break-in attempts.GBL NON PRODKafkainternal_reltio_events_lag_dev> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_relations_events_lag_dev> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_events_lag_stage> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_relations_events_lag_stage> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_events_lag_qa> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_relations_events_lag_qa> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkakafka_jvm_heap_memory_increasing> 1000MB memory use predicted in 5 hours20mhighCheck if Kafka is rebalancing. Check the Event Publisher.GBL NON PRODKafkafluentd_dev_kafka_consumer_group_members0 EFK consumergroup members30mhighCheck Fluentd logs. Restart Fluentd.GBLUS NON PRODKafkainternal_reltio_events_lag_gblus_dev> 500 00040minfoCheck why lag is increasing. Restart the Event Publisher.GBLUS NON PRODKafkainternal_reltio_events_lag_gblus_qa> 500 00040minfoCheck why lag is increasing. 
Restart the Event Publisher.GBLUS NON PRODKafkainternal_reltio_events_lag_gblus_stage> 500 00040minfoCheck why lag is increasing. Restart the Event Publisher.GBLUS NON PRODKafkakafka_jvm_heap_memory_increasing> 3100MB memory use predicted in 5 hours20mhighCheck if Kafka is rebalancing. Check the Event Publisher.GBLUS NON PRODKafkafluentd_gblus_dev_kafka_consumer_group_members0 EFK consumergroup members30mhighCheck Fluentd logs. Restart Fluentd.GBL PRODMDMservice_downcount(up{env=~"gbl_prod"} == 0) by (env,component) == 15mhighDetect the not working component and start it.GBL PRODMDMservice_downcount(up{env=~"gbl_prod"} == 0) by (env,component) > 15mcriticalDetect the not working component and start it.GBL PRODMDMservice_down_kafka_connect0 Kafka Connect Exporters up in the environment5mcriticalCheck and start the Kafka Connect Exporter.GBL PRODMDMservice_downOne or more Kafka Connect instances down5mcriticalCheck and start the Kafka Connect.GBL PRODMDMdcr_stuck_on_prepared_statusDCR has been PREPARED for 1h1hhighDCR has not been processed downstream. Notify IQVIA.GBL PRODMDMdcr_processing_failureDCR processing failed in the last 24 hoursCheck DCR Service, Wrapper logs.GBL PRODCron Jobsmongo_automated_script_not_startedMongo Cron Job has not started1hhighCheck the MongoDB.GBL PRODKongkong_database_downKong DB unreachable20mwarningCheck the Kong DB component.GBL PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' logs.GBL PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck Kong's port availability.GBL PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the Kong component.GBL PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for issues.GBL PRODKongkong_http_401_status_rateHTTP 401 > 30%10mwarningCheck Kong logs. 
Notify the authorities in case of suspected break-in attempts.GBL PRODKafkainternal_reltio_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL PRODKafkainternal_reltio_relations_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL PRODKafkaprod-out-full-snowflake-all_no_consumersprod-out-full-snowflake-all has lag and has not been consumed for 2 hours1mhighCheck and restart the Kafka Connect Snowflake component.GBL PRODKafkainternal_gw_gcp_events_deg_lag_prod> 50 00030minfoCheck the Map Channel component.GBL PRODKafkainternal_gw_gcp_events_raw_lag_prod> 50 00030minfoCheck the Map Channel component.GBL PRODKafkainternal_gw_grv_events_deg_lag_prod> 50 00030minfoCheck the Map Channel component.GBL PRODKafkainternal_gw_grv_events_deg_lag_prod> 50 00030minfoCheck the Map Channel component.GBL PRODKafkaforwarder_mapp_prod_kafka_consumer_group_membersforwarder_mapp_prod consumer group has 0 members30mcriticalCheck the MAPP Events Forwarder.GBL PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group members have decreased (still > 20)15minfoCheck the Gateway components.GBL PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group members have decreased (still > 10)15mhighCheck the Gateway components.GBL PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group has 0 members15mcriticalCheck the Gateway components.GBL PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 100)15minfoCheck the Hub components.GBL PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 50)15minfoCheck the Hub components.GBL PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group has 0 members15minfoCheck the Hub components.GBL PRODKafkakafka_jvm_heap_memory_increasing> 2100MB memory use on node 1 predicted in 5 hours20mhighCheck if Kafka is rebalancing. 
Check the Event Publisher.GBL PRODKafkakafka_jvm_heap_memory_increasing> 2000MB memory use on nodes 2&3 predicted in 5 hours20mhighCheck if Kafka is rebalancing. Check the Event Publisher.GBL PRODKafkafluentd_prod_kafka_consumer_group_membersFluentd consumergroup has 0 members30mhighCheck and restart Fluentd.US PRODMDMservice_downBatch Channel is not running5mcriticalStart the Batch ChannelUS PRODMDMservice_down1 component is not running5mhighDetect the not working component and start it.US PRODMDMservice_down>1 component is not running5mcriticalDetect the not working components and start them.US PRODCron Jobsarchiver_not_startedArchiver has not started in 24 hours1hhighCheck the Archiver.US PRODKafkainternal_reltio_events_lag_us_prod> 500 0005mhighCheck why lag is increasing. Restart the Event Publisher.US PRODKafkainternal_reltio_events_lag_us_prod> 1 000 0005mcriticalCheck why lag is increasing. Restart the Event Publisher.US PRODKafkahin_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart the Batch Channel.US PRODKafkaflex_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart the Batch Channel.US PRODKafkasap_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart the Batch Channel.US PRODKafkadea_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart the Batch Channel.US PRODKafkaigate_prod_hco_create_kafka_consumer_group_members>= 30 < 40 and lag > 100015minfoCheck why the number of consumers is decreasing. Restart the Batch Channel.US PRODKafkaigate_prod_hco_create_kafka_consumer_group_members>= 10 < 30 and lag > 100015mhighCheck why the number of consumers is decreasing. Restart the Batch Channel.US PRODKafkaigate_prod_hco_create_kafka_consumer_group_members== 0 and lag > 100015mcriticalCheck why the number of consumers is decreasing. 
Restart the Batch Channel.US PRODKafkahub_prod_kafka_consumer_group_members>= 30 < 45 and lag > 100015minfoCheck why the number of consumers is decreasing. Restart the Event Publisher.US PRODKafkahub_prod_kafka_consumer_group_members>= 10 < 30 and lag > 100015mhighCheck why the number of consumers is decreasing. Restart the Event Publisher.US PRODKafkahub_prod_kafka_consumer_group_members== 0 and lag > 100015mcriticalCheck why the number of consumers is decreasing. Restart the Event Publisher.US PRODKafkafluentd_prod_kafka_consumer_group_membersEFK consumer group has 0 members30mhighCheck and restart Fluentd.US PRODKafkaflex_prod_kafka_consumer_group_membersFLEX Kafka Connector has 0 consumers10mcriticalNotify the FLEX TeamGBLUS PRODMDMservice_downcount(up{env=~"gblus_prod"} == 0) by (env,component) == 15mhighDetect the not working component and start it.GBLUS PRODMDMservice_downcount(up{env=~"gblus_prod"} == 0) by (env,component) > 15mcriticalDetect the not working component and start it.GBLUS PRODKongkong_database_downKong DB unreachable20mwarningCheck the Kong DB component.GBLUS PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' logs.GBLUS PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck Kong's port availability.GBLUS PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the Kong component.GBLUS PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for issues.GBLUS PRODKongkong_http_401_status_rateHTTP 401 > 30%10mwarningCheck Kong logs. Notify the authorities in case of suspected break-in attempts.GBLUS PRODKafkainternal_reltio_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. 
Restart the Event Publisher.GBLUS PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group members have decreased (still > 20)15minfoCheck the Gateway components.GBLUS PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group members have decreased (still > 10)15mhighCheck the Gateway components.GBLUS PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group has 0 members15mcriticalCheck the Gateway components.GBLUS PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 20)15minfoCheck the Hub components.GBLUS PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 10)15mhighCheck the Hub components.GBLUS PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group has 0 members15mcriticalCheck the Hub components.GBLUS PRODKafkabatch_service_prod_kafka_consumer_group_membersbatch_service_prod consumer group has 0 members15mcriticalCheck the Batch Service component.GBLUS PRODKafkabatch_service_prod_ack_kafka_consumer_group_membersbatch_service_prod_ack consumer group has 0 members15mcriticalCheck the Batch Service component.GBLUS PRODKafkafluentd_gblus_prod_kafka_consumer_group_membersEFK consumer group has 0 members30mhighCheck Fluentd. Restart if necessary.GBLUS PRODKafkakafka_jvm_heap_memory_increasing> 3100MB memory use predicted in 5 hours20mhighCheck if Kafka is rebalancing. Check the Event Publisher." 
+ }, + { + "title": "Security", + "pageID": "164470097", + "pageLink": "/display/GMDM/Security", + "content": "\nThe following aspects supporting security are implemented in the solution:\n\n\tAll server nodes are in the COMPANY VPN.\n\tExternal endpoints (Kafka, KONG API) are exposed to cloud services (MAP, Appian) through the AWS ELB.\n\tEach endpoint has secured transport established using TLS 1.2 – see the Transport section.\n\tOnly authenticated clients can access MDM services.\n\tAccess to resources is controlled by a built-in authorization process.\n\tEvery API call is logged in the access log. It is a standard Nginx access log format.\n\n" + }, + { + "title": "Authentication", + "pageID": "164470075", + "pageLink": "/display/GMDM/Authentication", + "content": "\nAPI Authentication\nAPI authentication is provided by KONG. There are three methods supported:\n\n\tOAuth2 internal\n\tOAuth2 external – Ping Federate (recommended)\n\tAPI key\n\n\n\nThe OAuth2 method is recommended, especially for cloud services. The gateway uses the Client Credentials grant type variant of OAuth2. The method is supported by the KONG OAuth2 plugin. Client secrets are managed by Kong and stored in the Cassandra configuration database.\nAPI key authentication is a deprecated method; its usage should be avoided for new services. Keys are unique, randomly generated, 32 characters long and managed by Kong Gateway – please see the Kong Gateway documentation for details." + }, + { + "title": "Authorization", + "pageID": "164470078", + "pageLink": "/display/GMDM/Authorization", + "content": "\nRest APIs\nAccess to exposed services is controlled with the following algorithm:\n\n\tThe REST channel component reads the user authorization configuration based on the X-Consumer-Username header passed by KONG.\n\tThe authorization configuration contains:\n\t\n\t\tList of roles the user can access. 
Roles express the operations/logic the user can execute.\n\t\tList of countries the user can read or write.\n\t\tList of source systems (related to crosswalk type) that data can come from.\n\t\n\t\n\tOperation level authorization – the system checks if the user can execute an operation.\n\tData level authorization – the system checks if the user can read or modify entities:\n\t\n\t\tDuring a read operation by crosswalk – it is checked if the country attribute value is on the allowed country list, otherwise the system throws an access forbidden error.\n\t\tDuring a search operation, the filter is modified (restrictions on the country attribute are added) to exclude countries the user has no access to.\n\t\tDuring a write operation, the system validates if the country attribute and crosswalk type are authorized.\n\n\nTable 12. Role definitions\n \n\n\nRole name\nDescription\n\n\nPOST_HCP\nAllows user to create a new HCP entity\n\n\nPATCH_HCP\nAllows user to update HCP entity\n\n\nPOST_HCO\nAllows user to create a new HCO entity\n\n\nPATCH_HCO\nAllows user to update HCO entity\n\n\nGET_ENTITY\nAllows user to get data of a single Entity, specified by ID\n\n\nSEARCH_ENTITY\nAllows user to search for Entities by search criteria\n\n\nRESPONSE_DCR\nAllows user to send DCR response to Gateway\n\n\nDELETE_CROSSWALK\nAllows user to delete a crosswalk, effectively removing one datasource from an Entity\n\n\nGET_LOV\nAllows user to get dictionary data (LookupValues)\n\n\n\nSample authorization configuration for a user:\n \nKafka\nKAFKA resources are protected by the ACL mechanism; clients are granted permission to read only from topics dedicated to them. The complexity of Kafka ACLs is hidden behind Ansible – permissions are defined in a YAML file, in the following format:\n \nThe type and description of each parameter is specified in the table below.\n\n\nTable 13. 
Topic configuration parameters\n \n \n\n\nParameter\nType\nDescription \n\n\nname\nString\nTopic name\n\n\npartitions\nInteger\nNumber of partitions to create\n\n\nreplicas\nInteger\nReplication factor for partitions\n\n\nproducers\nList of String\nList of usernames that are allowed to publish messages to this topic\n\n\nconsumers\nMap of String, String\nConsumers that are allowed to consume from this topic. Map entries are in the format "username":"consumer_group_id"\n\n\n\n\t\n\t\n" + }, + { + "title": "KONG external OAuth2 plugin", + "pageID": "164470072", + "pageLink": "/display/GMDM/KONG+external+OAuth2+plugin", + "content": "\nTo integrate with the Ping Federate token validation process, an external KONG plugin was implemented. Source code and instructions for installation and configuration of a local environment were published on GitHub. \nCheck the https://github.com/COMPANY/mdm-gateway/tree/kong/mdm-external-oauth-plugin readme file for more information.\nThe role of the plugin: \nValidate access tokens sent by developers using a third-party OAuth 2.0 Authorization Server (RFC 7662). The flow of the plugin, and the request and response from PingFederate, have to be compatible with the RFC 7662 specification. To get more information about this specification check https://tools.ietf.org/html/rfc7662 .\nThe plugin assumes that the Consumer already has an access token that will be validated against a third-party OAuth 2.0 server – Ping Federate. 
\nFlow of the plugin:\n\n\tClient invokes the Gateway API providing a token generated from the PING API\n\tKONG plugin introspects this token\n\t\n\t\tif the token is active, the plugin will fill the X-Consumer-Username header\n\t\tif the token is not active, access to the specific URI will be forbidden\n\t\n\t\n\n\n\n \nExample External Plugin configuration:\n \n\nTo define an mdm-external-oauth plugin the following parameters have to be defined:\n\n\tintrospection_url – URL of the Ping Federate API with access to introspect OAuth2 tokens\n\tauthorization_value – username and ●●●●●●●●●●●●●●●● to "Basic " format which is authorized to invoke the introspect API.\n\thide_credentials – if true, the token provided in the request will be removed from the request after validation, for additional security.\n\tusers_map – this map contains a comma-separated list of values. The first value is the user name defined in Ping Federate; the second value, separated by a colon, is the user name defined in the mdm-manager application. This map is used to correctly map and validate tokens received in requests. Additionally, when PingFederate introspects a token, it returns the username. This username is mapped onto an existing user in mdm-manager, so there is no need to define additional users in mdm-manager – it is enough to fill the users_map configuration with appropriate values.\n\n\n\nKAFKA authentication\nKafka access is protected using the SASL framework. Clients are required to specify user and ●●●●●●●●●●● the configuration. Credentials are sent over TLS transport." 
+ }, + { + "title": "Transport", + "pageID": "164470076", + "pageLink": "/display/GMDM/Transport", + "content": "\nCommunication between the KONG API Gateway and external systems is secured by setting up an encrypted connection with the following specifications:\n\n\tCiphersuites: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\n\tVersions: TLSv1.2\n\tTLS curves: prime256v1, secp384r1, secp521r1\n\tCertificate type: ECDSA\n\tCertificate curve: prime256v1, secp384r1, secp521r1\n\tCertificate signature: sha256WithRSAEncryption, ecdsa-with-SHA256, ecdsa-with-SHA384, ecdsa-with-SHA512\n\tRSA key size: 2048 (if not ecdsa)\n\tDH Parameter size: None (disabled entirely)\n\tECDH Parameter size: 256\n\tHSTS: max-age=15768000\n\tCertificate switching: None\n\n\n\n" + }, + { + "title": "User management", + "pageID": "164470079", + "pageLink": "/display/GMDM/User+management", + "content": "\nUser accounts are managed by the respective components of the Gateway and Hub. \nAPI Users\nThese are managed by Kong Gateway and stored in the Cassandra database. There are two ways of adding a new user to the Kong configuration:\n\n\tUsing the configuration repository and Ansible\n\n\n\nThe Ansible tool, which is used to deploy MDM Integration Services, has a plugin that supports Kong user management. User configuration is kept in YAML configuration files (passwords being encrypted using built-in AES-256 encryption). Adding a new user requires adding the following section to the appropriate configuration file:\n \n\n\tDirectly, using the Kong REST API\n\n\n\nThis method requires access to the COMPANY VPN and to the machine that hosts the MDM Integration Services, since the REST endpoints are only bound to "localhost", and not exposed to the outside world. 
The URL of the endpoint is:\n It can be accessed via the cURL command-line tool. To list all the users that are currently defined use the following command:\n \nTo create a new user:\n To set an API Key for the user:\n A new API key will be automatically generated by Kong and returned in the response.\nTo create OAuth2 credentials use the following call instead:\n client_id and client_secret are login credentials, redirect_uri should point to the HUB API endpoint. Please see the Kong Gateway documentation for details.\n\nKAFKA users\nKafka users are managed by brokers. The authentication method used is the Java Authentication and Authorization Service (JAAS) with the PlainLogin module. User configuration is stored inside the kafka_server_jaas.conf file, which is present in each broker. The file has the following structure:\n \nThe properties "username" and "password" define the credentials used to secure inter-broker communication. Properties in the format "user_" are the actual definitions of users. So, adding a new user named "bob" would require the addition of the following property to the kafka_server_jaas.conf file:\n\n \n\nCAUTION! Since the JAAS configuration file is only read on Kafka broker startup, adding a new user requires a restart of all brokers. In a multi-broker environment this can be achieved by restarting one broker at a time, which should be transparent for end users, given Kafka's fault-tolerance capabilities. This limitation might be overcome in future versions by using an external user store or a custom login module, instead of PlainLoginModule.\nThe process of adding this entry and distributing the kafka_server_jaas.conf file is automated with Ansible: usernames and ●●●●●●●●●●●● kept in a YAML configuration file, encrypted using Ansible Vault (with AES encryption). 
For operational purposes there might be some administration/technical accounts created using standard Mongo command-line tools, as described in the MongoDB documentation." + }, + { + "title": "SOP HUB", + "pageID": "164470101", + "pageLink": "/display/GMDM/SOP+HUB", + "content": "" + }, + { + "title": "Hub Configuration", + "pageID": "302705379", + "pageLink": "/display/GMDM/Hub+Configuration", + "content": "" + }, + { + "title": "APM:", + "pageID": "302703254", + "pageLink": "/pages/viewpage.action?pageId=302703254", + "content": "" + }, + { + "title": "Setup APM integration in Kibana", + "pageID": "302703256", + "pageLink": "/display/GMDM/Setup+APM+integration+in+Kibana", + "content": "To set up the APM integration in Kibana you need to deploy a fleet server first. To do so, enable it in the mdm-hub-cluster-env repository (e.g. in emea/nprod/namespaces/emea-backend/values.yaml).\nAfter deploying it, open the Kibana UI and go to Fleet.\nVerify that fleet-server is properly configured:\nGo to Observability - APM\nClick Add the APM Integration\nClick Add Elastic APM\nChange host to 0.0.0.0:8200\nIn section 2 choose Existing hosts and choose the desired agent policy (Fleet server on ECK policy)\nSave changes\nAfter configuring your service to connect to apm-server, it should be visible in Observability.APM" + }, + { + "title": "Consul:", + "pageID": "302705585", + "pageLink": "/pages/viewpage.action?pageId=302705585", + "content": "" + }, + { + "title": "Updating Dictionary", + "pageID": "164470212", + "pageLink": "/display/GMDM/Updating+Dictionary", + "content": "To update a dictionary from Excel:\nConvert the Excel file to CSV format\nChange EOL to Unix\nPut the file in the appropriate path in the mdm-config-registry repository in config-ext\nCheck the Updating ETL Dictionaries in Consul page for the appropriate Consul UI URL (You need to have a security token set in the ACL section)" + }, + { + "title": "Updating ETL Dictionaries in Consul", + "pageID": "164470102", + "pageLink": "/display/GMDM/Updating+ETL+Dictionaries+in+Consul", + 
"content": "Configuration repository has dedicated directories that store dictionaries used by the ETL engine during loading data with batch service. The content of directories is published in Consul. The table shows the dir name and consul's key under which data in posted:Dir nameConsul keyconfig-ext/dev_gblushttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_gblus/config-ext/qa_gblushttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_gblus/config-ext/prod_gblushttps://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_gblus/config-ext/dev_emeahttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_emea/config-ext/qa_emeahttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_emea/config-ext/stage_emeahttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/stage_emea/config-ext/prod_emeahttps://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_emea/config-ext/dev_apachttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_apac/config-ext/qa_apachttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_apac/config-ext/stage_apachttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/stage_apac/config-ext/prod_apachttps://consul-apac-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_apac/To update Consul values you have to:Make changes in the desired directory and push them to the master git branch,git2consul will synchronize the git repo to Consul Please be advised that proper SecretId token is required to access key/value path you desire. Especially important for AMER/GBLUS directories. " + }, + { + "title": "Environment Setup:", + "pageID": "164470244", + "pageLink": "/pages/viewpage.action?pageId=164470244", + "content": "" + }, + { + "title": "Configuration (amer k8s)", + "pageID": "228917406", + "pageLink": "/pages/viewpage.action?pageId=228917406", + "content": "Configuration steps:Configure mongo permissions for users mdm_batch_service, mdmhub, and mdmgw. 
Add permissions to the database schema related to the new environment:---users:  mdm_batch_service:    mongo:      databases:        reltio_amer-dev:          roles:            - "readWrite"        reltio_[tenant-env]:             - "readWrite"2. Add a directory with environment configuration files in amer/nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.3. Change file [tenant-env]/values.yaml:Change the value of the "env" property,Change the value of the "logging_index" property,Change the address of the oauth service - "kong_plugins.mdm_external_oauth.introspection_url" property. Use the value from the table below:\nEnv class → oAuth introspection URL\nDEV → https://devfederate.COMPANY.com/as/introspect.oauth2\nQA → https://devfederate.COMPANY.com/as/introspect.oauth2\nSTAGE → https://stgfederate.COMPANY.com/as/introspect.oauth2\nPROD → https://prodfederate.COMPANY.com/as/introspect.oauth2\n4. Change file [tenant-env]/kafka-topics.yaml by changing the prefix of topic names.5. Add a kafka connect instance for the newly added environment - add the configuration section to the kafkaConnect property located in amer/nprod/namespaces/amer-backend/values.yaml5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key.passphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key\n6. 
Configure Consul (amer/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):Add repository to git2consul - property git2consul.repos,Add policies - property consul_acl.policies,And policy binding - property consul_acl.tokens.mdmetl-token.policiesAdd secrets - git2consul.repos.[tenant-env].credentials.username: and git2consul.repos.[tenant-env].credentials.passwordCreate proper branch in mdm-hub-env-config repo, like in an example: config/dev_amer - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse?at=refs%2Fheads%2Fconfig%2Fdev_amer7. Modify components configuration:Change [tenant-env]/config_files/all/config/application.yamlchange "env" property,change "mdmConfig.baseURL" property,change "mdmConfig.rdmURL" property,change "mdmConfig.workflow.url" property,Change [tenant-env]/config_files/event-publisher/config/application.yaml:Change "local_env" propertyChange [tenant-env]/config_files/reltio-subscriber/config/application.yaml:Change "sqs" properties according to Reltio configuration,check and confirm if secrets for this component needn't be changed - changing of sqs queue could cause changing of AWS credentials - verify with Reltio's tenant configuration,Change [tenant-env]/config_files/mdm-manager/config/application.yaml:Change "mdmAsyncAPI.principalMappings" according the correct topic names.COMPANY Reltio tenants details for the above properties:8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change fluentd.kafka.topics list.9. 
Monitoringa) Add additional service monitor to amer/nprod/namespaces/monitoring/service-monitors.yaml configuration file:- namespace: [tenant-env]  name: sm-[tenant-env]-services  selector:    matchLabels:      prometheus: [tenant-env]-services  endpoints:    - port: prometheus      interval: 30s      scrapeTimeout: 30s    - port: prometheus-fluent-bit      path: "/api/v1/metrics/prometheus"      interval: 30s      scrapeTimeout: 30sb) Add Snowflake database details to amer/nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:jdbcExporters: amer-dev: db: url: "jdbc:snowflake://amerdev01.us-east-1.privatelink.snowflakecomputing.com/?db=COMM_AMER_MDM_DMART_DEV_DB&role=COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE&warehouse=COMM_MDM_DMART_WH" username: "[ USERNAME ]"Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yamljdbcExporters: amer-dev: db: password: "[ ●●●●●●●●●●●10. Run Jenkins job responsible for deploying backend services - to apply mongo and fluentd changes.11. Connect to mongodb server and create scheme reltio_[tenant-env].11.1 Create collections and indexes in the newly added schemas: Intellishelldb.createCollection("entityHistory") db.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});db.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});db.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});db.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});db.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});db.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});db.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});db.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  
"idx_crosswalks_t_asc"});db.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});db.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});db.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});db.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});db.createCollection("entityRelations")db.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});db.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});db.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});db.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});db.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});db.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});db.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});db.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});db.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   db.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   db.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   db.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"}); db.createCollection("LookupValues")db.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});db.LookupValues.createIndex({countries: 1},  
{background: true, name:  "idx_countries"});db.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});db.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});db.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});db.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});db.createCollection("ErrorLogs")db.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});db.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});db.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});db.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});db.createCollection("batchEntityProcessStatus")db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});db.createCollection("batchInstance")db.createCollection("relationCache")db.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});db.createCollection("DCRRequests")db.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});db.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  
"idx_entityURIStatusNameFind_SubmitVR"});db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});db.createCollection("entityMatchesHistory")db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});db.createCollection("DCRRegistry")db.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});db.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});db.createCollection("sequenceCounters")db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) //NOTE!!!! replace text [sequence start count] with value from below tableRegionSeq start numberemea5000000000amer6000000000apac700000000012. Run Jenkins job to deploy kafka resources and mdmhub components for the new environment.13. Create paths on S3 bucket required by Snowflake and Airflow's DAGs.14. Configure Kibana:Add index patterns,Configure retention,Add dashboards.15. Configure basic Airflow DAGs (ansible directory):export_merges_from_reltio_to_s3_full,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_snowflake.16. Deploy DAGs (NOTE: check if your kubectl is configured to communicate with the cluster you wanted to change):ansible-playbook install_mdmgw_airflow_services_k8s.yml -i inventory/[tenant-env]/inventory17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. 
Verification points\nCheck Reltio's configuration - get the reltio tenant configuration:Check if you are able to execute Reltio's operations using the credentials of the service user,Check if streaming processing is enabled - streamingConfig.messaging.destinations.enabled = true, streamingConfig.streamingEnabled=true, streamingConfig.streamingAPIEnabled=true,Check if cassandra export is configured - exportConfig.smartExport.secondaryDsEnabled = false.Check Kafka:Check if you are able to connect to the kafka server using a command-line client running from your local machine.Check Mongo:Users mdmgw, mdmhub and mdm_batch_service - permissions for the newly added database (readWrite),Indexes,Verify if the correct start value is set for sequence COMPANYAddressIDSeq - collection sequenceCounters _id = COMPANYAddressIDSeq.Check MDMHUB API:Check mdm-manager API with apikey authentication by executing one of the read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. 
An empty response is also possible when there is no HCP data in Reltio,Run the same operation using oAuth2 authentication - remember that the manager url is different,Check mdm-manager API with apikey authentication by executing a write operation:curl --location --request POST '{{ manager_url }}/hcp' \\--header 'apikey: {{ api_key }}' \\--header 'Content-Type: application/json' \\--data-raw '{  "hcp" : {    "type" : "configuration/entityTypes/HCP",    "attributes" : {      "Country" : [ {        "value" : "{{ country }}"      } ],      "FirstName" : [ {        "value" : "Verification Test MDMHUB"      } ],      "LastName" : [ {        "value" : "Verification Test MDMHUB"      } ]    },    "crosswalks" : [ {      "type" : "configuration/sources/{{ source }}",      "value" : "verification_test_mdmhub"    } ]  }}'Replace all placeholders in the above request using the correct values for the configured environment. The response should return HTTP code 200 and a URI of the created object. After verification, delete the created object by running: curl --location --request DELETE '{{ manager_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'Run the same operations using oAuth2 authentication - remember that the mdm manager url is different,Verify api-router API with apikey authentication using a search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. 
An empty response is also possible when there is no HCP data in Reltio,Check api-router API with apikey authentication by executing a write operation:curl --location --request POST '{{ api_router_url }}/hcp' \\--header 'apikey: {{ api_key }}' \\--header 'Content-Type: application/json' \\--data-raw '{  "hcp" : {    "type" : "configuration/entityTypes/HCP",    "attributes" : {      "Country" : [ {        "value" : "{{ country }}"      } ],      "FirstName" : [ {        "value" : "Verification Test MDMHUB"      } ],      "LastName" : [ {        "value" : "Verification Test MDMHUB"      } ]    },    "crosswalks" : [ {      "type" : "configuration/sources/{{ source }}",      "value" : "verification_test_mdmhub"    } ]  }}'Replace all placeholders in the above request using the correct values for the configured environment. The response should return HTTP code 200 and a URI of the created object. After verification, delete the created object by running: curl --location --request DELETE '{{ api_router_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'Run the same operations using oAuth2 authentication - remember that the api router url is different,Check batch service API with apikey authentication by executing the following operation: GET {{ batch_service_url }}/batchController/NA/instances/NA. The request should return a 403 HTTP Code and body:{    "code": "403",    "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."}The request doesn't create any batch.Run the same operation using oAuth2 authentication - remember that the batch service url is different,Verify component logs of mdm-manager, api-router and batch-service. Focus on errors and kafka records - rebalancing, authorization problems, topic existence warnings etc.MDMHUB streaming services:Check logs of reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. 
Verify that there are no errors or kafka warnings related to rebalancing, authorization problems, topic existence warnings etc,Verify if the lookup refresh process is working properly - check the existence of the mongo collection LookupValues. It should have data,Airflow:Check if DAGs are enabled and have a defined schedule,Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.Wait for them to finish and validate the results.Snowflake:Check snowflake connector logs,Check if tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist in the LANDING schema and have data,Verify if the mdm-hub-snowflake-dm package is deployed,What else?Monitoring:Check grafana dashboards:HUB Performance,Kafka Topics Overview,Host Statistics,JMX Overview,Kong,MongoDB.Check Kibana index patterns:{{env}}-internal-batch-efk-transactions*,{{env}}-internal-gw-efk-transactions*,{{env}}-internal-publisher-efk-transactions*,{{env}}-internal-subscriber-efk-transactions*,{{env}}-mdmhub,Check Kibana dashboards:{{env}} API calls,{{env}} Batch Instances,{{env}} Batch loads,{{env}} Error Logs Overview,{{env}} Error Logs RDM,{{env}} HUB Store,{{env}} HUB events,{{env}} MDM Events,{{env}} Profile Updates,Check alerts - How?" 
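The batch-service verification call above can be scripted as a quick status-code check; this is a sketch with template placeholders ({{ batch_service_url }}, {{ api_key }}) that must be substituted with environment-specific values before running:

```
curl --silent --output /dev/null --write-out '%{http_code}\n' \
  --header 'apikey: {{ api_key }}' \
  '{{ batch_service_url }}/batchController/NA/instances/NA'
# Expected output: 403 (Batch 'NA' is not allowed); no batch instance is created.
```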
+ }, + { + "title": "Configuration (amer prod k8s)", + "pageID": "234691394", + "pageLink": "/pages/viewpage.action?pageId=234691394", + "content": "Configuration steps:Copy mdm-hub-cluster-env/amer/nprod directory into mdm-hub-cluster-env/amer/nprod directory.Replace ...CertificatesGenerate private-keys, CSRs and request Kong certificate (kong/config_files/certs).\nmarek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-amer-prod-gbl-mdm-hub.COMPANY.com.key -out api-amer-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n.....+++++\n.....................................................+++++\nwriting new private key to 'api-amer-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []: api-amer-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge ●●●●●●●●●●●●\nAn optional company name []:\nGenerate private-keys, CSRs and request Kafka certificate (apac-backend/secret.yaml\nmarek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-amer-prod-gbl-mdm-hub.COMPANY.com.key -out kafka-amer-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..........................+++++\n.....+++++\nwriting new private key to 'kafka-amer-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:kafka-amer-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge ●●●●●●●●●●●●\nAn optional company name []:\nBELOW IS AMER NPROD COPY WE USE AS A REFERENCEConfiguration steps:Configure mongo permissions for users mdm_batch_service, mdmhub, and mdmgw. Add permissions to database schema related to new environment:---users:  mdm_batch_service:    mongo:      databases:        reltio_amer-dev:          roles:            - "readWrite"        reltio_[tenant-env]:             - "readWrite"2. 
Add directory with environment configuration files in amer/nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.3. Change file [tenant-env]/values.yaml:Change the value of "env" property,Change the value of "logging_index" property,Change the address of oauth service - "kong_plugins.mdm_external_oauth.introspection_url" property. Use value from below table:Env classoAuth introspection URLDEVhttps://devfederate.COMPANY.com/as/introspect.oauth2QAhttps://devfederate.COMPANY.com/as/introspect.oauth2STAGEhttps://stgfederate.COMPANY.com/as/introspect.oauth2PRODhttps://prodfederate.COMPANY.com/as/introspect.oauth24. Change file [tenant-env]/kafka-topics.yaml by changing the prefix of topic names.5. Add kafka connect instance for newly added environment - add the configuration section to kafkaConnect property located in amer/nprod/namespaces/amer-backend/values.yaml5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key.passphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key6. Configure Consul (amer/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):Add repository to git2consul - property git2consul.repos,Add policies - property consul_acl.policies,And policy binding - property consul_acl.tokens.mdmetl-token.policiesAdd secrets - git2consul.repos.[tenant-env].credentials.username: and git2consul.repos.[tenant-env].credentials.passwordCreate proper branch in mdm-hub-env-config repo, like in an example: config/dev_amer - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse?at=refs%2Fheads%2Fconfig%2Fdev_amer7. 
Modify components configuration:Change [tenant-env]/config_files/all/config/application.yamlchange "env" property,change "mdmConfig.baseURL" property,change "mdmConfig.rdmURL" property,change "mdmConfig.workflow.url" property,Change [tenant-env]/config_files/event-publisher/config/application.yaml:Change "local_env" propertyChange [tenant-env]/config_files/reltio-subscriber/config/application.yaml:Change "sqs" properties according to Reltio configuration,check and confirm if secrets for this component needn't be changed - changing of sqs queue could cause changing of AWS credentials - verify with Reltio's tenant configuration,Change [tenant-env]/config_files/mdm-manager/config/application.yaml:Change "mdmAsyncAPI.principalMappings" according the correct topic names.COMPANY Reltio tenants details for the above properties:8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change fluentd.kafka.topics list.9. Monitoringa) Add additional service monitor to amer/nprod/namespaces/monitoring/service-monitors.yaml configuration file:- namespace: [tenant-env]  name: sm-[tenant-env]-services  selector:    matchLabels:      prometheus: [tenant-env]-services  endpoints:    - port: prometheus      interval: 30s      scrapeTimeout: 30s    - port: prometheus-fluent-bit      path: "/api/v1/metrics/prometheus"      interval: 30s      scrapeTimeout: 30sb) Add Snowflake database details to amer/nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:jdbcExporters: amer-dev: db: url: "jdbc:snowflake://amerdev01.us-east-1.privatelink.snowflakecomputing.com/?db=COMM_AMER_MDM_DMART_DEV_DB&role=COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE&warehouse=COMM_MDM_DMART_WH" username: "[ USERNAME ]"Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yamljdbcExporters: amer-dev: db: password: "[ ●●●●●●●●●●●10. Run Jenkins job responsible for deploying backend services - to apply mongo and fluentd changes.11. 
Connect to mongodb server and create scheme reltio_[tenant-env].11.1 Create collections and indexes in the newly added schemas: Intellishelldb.createCollection("entityHistory") db.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});db.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});db.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});db.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});db.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});db.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});db.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});db.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});db.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});db.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});db.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});db.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});db.createCollection("entityRelations")db.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});db.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});db.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});db.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});db.entityRelations.createIndex({creationDate: -1},  {background: true, name:  
"idx_creationDate"});db.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});db.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});db.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});db.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   db.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   db.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   db.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"}); db.createCollection("LookupValues")db.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});db.LookupValues.createIndex({countries: 1},  {background: true, name:  "idx_countries"});db.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});db.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});db.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});db.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});db.createCollection("ErrorLogs")db.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});db.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});db.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});db.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});db.createCollection("batchEntityProcessStatus")db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  
"idx_findByBatchNameAndSourceId"});db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});db.createCollection("batchInstance")db.createCollection("relationCache")db.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});db.createCollection("DCRRequests")db.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});db.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});db.createCollection("entityMatchesHistory")db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});db.createCollection("DCRRegistry")db.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});db.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});db.createCollection("sequenceCounters")db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", 
sequence: NumberLong([sequence start number])}) //NOTE!!!! replace the text [sequence start number] with the value from the table below\nRegion → Seq start number\nemea → 5000000000\namer → 6000000000\napac → 7000000000\n12. Run the Jenkins job to deploy kafka resources and mdmhub components for the new environment.13. Create paths on the S3 bucket required by Snowflake and Airflow's DAGs.14. Configure Kibana:Add index patterns,Configure retention,Add dashboards.15. Configure basic Airflow DAGs (ansible directory):export_merges_from_reltio_to_s3_full,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_snowflake.16. Deploy DAGs (NOTE: check if your kubectl is configured to communicate with the cluster you want to change):ansible-playbook install_mdmgw_airflow_services_k8s.yml -i inventory/[tenant-env]/inventory17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. Verification points\nCheck Reltio's configuration - get the reltio tenant configuration:Check if you are able to execute Reltio's operations using the credentials of the service user,Check if streaming processing is enabled - streamingConfig.messaging.destinations.enabled = true, streamingConfig.streamingEnabled=true, streamingConfig.streamingAPIEnabled=true,Check if cassandra export is configured - exportConfig.smartExport.secondaryDsEnabled = false.Check Mongo:Users mdmgw, mdmhub and mdm_batch_service - permissions for the newly added database (readWrite),Indexes,Verify if the correct start value is set for sequence COMPANYAddressIDSeq - collection sequenceCounters _id = COMPANYAddressIDSeq.Check MDMHUB API:Check mdm-manager API with apikey authentication by executing one of the read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. 
The empty response is also possible in the case when there is no HCP data in Reltio,Run the same operation using oAuth2 authentication - remember that the manager url is different,Verify api-router API with apikey authentication using a search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible in the case when there is no HCP data in Reltio,Run the same operation using oAuth2 authentication - remember that the api router url is different,Check batch service API with apikey authentication by executing the following operation GET {{ batch_service_url }}/batchController/NA/instances/NA. The request should return HTTP code 403 and body:{    "code": "403",    "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."}The request doesn't create any batch.Run the same operation using oAuth2 authentication - remember that the batch service url is different,Verify component logs: mdm-manager, api-router and batch-service. Focus on errors and kafka records - rebalancing, authorization problems, topic existence warnings etc.MDMHUB streaming services:Check logs of reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. Verify that there are no errors or kafka warnings related to rebalancing, authorization problems, topic existence warnings etc,Verify if the lookup refresh process is working properly - check existence of mongo collection LookupValues. 
It should have data,Airflow:Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.Wait for them to finish and validate the results.Snowflake:Check snowflake connector logs,Check if tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist at the LANDING schema and have data,Verify if mdm-hub-snowflake-dm package is deployed,What else?Monitoring:Check grafana dashboards:HUB Performance,Kafka Topics Overview,Host Statistics,JMX Overview,Kong,MongoDB.Check Kibana index patterns:{{env}}-internal-batch-efk-transactions*,{{env}}-internal-gw-efk-transactions*,{{env}}-internal-publisher-efk-transactions*,{{env}}-internal-subscriber-efk-transactions*,{{env}}-mdmhub,Check Kibana dashboards:{{env}} API calls,{{env}} Batch Instances,{{env}} Batch loads,{{env}} Error Logs Overview,{{env}} Error Logs RDM,{{env}} HUB Store,{{env}} HUB events,{{env}} MDM Events,{{env}} Profile Updates,Check alerts - How?"
  },
  {
    "title": "Configuration (apac k8s)",
    "pageID": "228933487",
    "pageLink": "/pages/viewpage.action?pageId=228933487",
    "content": "Installation of a new APAC non-prod cluster based on the AMER non-prod configuration.Copy mdm-hub-cluster-env/amer directory into mdm-hub-cluster-env/apac directory.Change dir names from "amer" to "apac".Replace everything in files in apac directory: "amer"→"apac".CertificatesGenerate private-keys, CSRs and request Kong certificate (kong/config_files/certs).\nanuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-apac-nprod-gbl-mdm-hub.COMPANY.com.key -out api-apac-nprod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..................+++++\n.........................+++++\nwriting new private key to 'api-apac-nprod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished 
Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:api-apac-nprod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge ●●●●●●●●●●●●\nAn optional company name []:\nSAN:DNS Name=api-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=www.api-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kibana-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=prometheus-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=grafana-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=elastic-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=consul-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=akhq-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=airflow-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=mongo-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=mdm-log-management-apac-nonprod.COMPANY.comDNS Name=gbl-mdm-hub-apac-nprod.COMPANY.comPlace private-key and signed certificate in kong/config_files/certs. 
Git-ignore them and encrypt them into .encrypt files.Generate private-keys, CSRs and request Kafka certificate (apac-backend/secret.yaml)\nanuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key -out kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n................................................................+++++\n.......................................+++++\nwriting new private key to 'kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []:kafka-apac-nprod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge ●●●●●●●●●●●●\nAn optional company name []:\nSAN:DNS Name=kafka-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b2-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b3-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b4-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b5-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b6-apac-nprod-gbl-mdm-hub.COMPANY.comAfter receiving the certificate, encode it with base64 and paste into apac-backend/secrets.yaml:  -> secrets.mdm-kafka-external-listener-cert.listener.key  -> secrets.mdm-kafka-external-listener-cert.listener.crt  (*) Since this is a new environment, remove everything under "migration" key in apac-backend/values.yaml.Replace all user_passwords in apac/nprod/secrets.yaml. for each ●●●●●●●●●●●●●●●●● a new, 32-char one and globally replace it in all apac configs.Go through apac-dev/config_files one by one and adjust settings such as: Reltio, SQS etc.(*) Change Kafka topics and consumergroup names to fit naming standards. 
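The certificate-encoding step above can be sketched as a small shell helper (illustrative only; the helper name is made up here, and -w0 assumes GNU coreutils' base64, which keeps the output on a single line for pasting into apac-backend/secrets.yaml):

```shell
# Hypothetical helper: print the single-line base64 form of a key or
# certificate file, as expected under
# secrets.mdm-kafka-external-listener-cert.listener.key / .crt
# in apac-backend/secrets.yaml.
encode_for_secrets() {
  base64 -w0 "$1"   # GNU base64; on macOS use: base64 -b 0 < file
  echo              # trailing newline so the shell prompt stays clean
}
# Example with the file produced by the CSR step above:
# encode_for_secrets kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key
```

The same helper can be reused for both the listener key and the signed certificate before pasting the values into secrets.yaml.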
This is a one-time activity and does not need to be repeated if next environments will be built based on APAC config.Export amer-nprod CRDs into yaml file and import it in apac-nprod:\n$ kubectx atp-mdmhub-nprod-amer\n$ kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml\n$ kubectx atp-mdmhub-nprod-apac\n$ kubectl apply -f ~/crd-definitions-amer.yaml\nCreate config dirs for git2consul (mdm-hub-env-config):\n$ git checkout config/dev_amer\n$ git pull\n$ git branch config/dev_apac\n$ git checkout config/dev_apac\n$ git push origin config/dev_apac\nRepeat for qa and stage.Install operators:\n$ ./install.sh -l operators -r apac -c nprod -e apac-dev -v 3.9.4\nInstall backend:\n$ ./install.sh -l backend -r apac -c nprod -e apac-dev -v 3.9.4\nLog into mongodb (use port forward if there is no connection to kong: run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to mongo on localhost:27017). Run below script:\ndb.createCollection("entityHistory") \ndb.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\ndb.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\ndb.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\ndb.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\ndb.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\ndb.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\ndb.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\ndb.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\ndb.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\ndb.entityHistory.createIndex({mdmSource: -1}, {background: true, name: 
"idx_mdmSource"});\ndb.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\ndb.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});\n\ndb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});\n\ndb.createCollection("entityRelations")\ndb.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\ndb.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\ndb.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\ndb.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\ndb.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\ndb.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\ndb.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\ndb.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\ndb.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \ndb.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \ndb.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \ndb.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n \ndb.createCollection("LookupValues")\ndb.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\ndb.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});\ndb.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\ndb.LookupValues.createIndex({type: 1}, {background: true, name: 
"idx_type"});\ndb.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});\ndb.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\ndb.createCollection("ErrorLogs")\ndb.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\ndb.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\ndb.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\ndb.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\ndb.createCollection("batchEntityProcessStatus")\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\ndb.createCollection("batchInstance")\n\ndb.createCollection("relationCache")\ndb.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\ndb.createCollection("DCRRequests")\ndb.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\ndb.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\ndb.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: 
"idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.createCollection("entityMatchesHistory")\ndb.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\ndb.createCollection("DCRRegistry")\ndb.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\ndb.createCollection("sequenceCounters")\ndb.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong(7000000000)}) // NOTE: 7000000000 is APAC-specific\nLog into Kibana. Export dashboards/indices from AMER and import them in APAC.Install mdmhub:\n$ ./install.sh -l mdmhub -r apac -c nprod -e apac-dev -v 3.9.4\nTickets:DNS names ticket:Ticket queue: GBL-NETWORK DDITitle: Add domains to DNSDescription:Hi Team,\n\nPlease add below domains:\n\napi-apac-nprod-gbl-mdm-hub.COMPANY.com\nkibana-apac-nprod-gbl-mdm-hub.COMPANY.com\nprometheus-apac-nprod-gbl-mdm-hub.COMPANY.com\ngrafana-apac-nprod-gbl-mdm-hub.COMPANY.com\nelastic-apac-nprod-gbl-mdm-hub.COMPANY.com\nconsul-apac-nprod-gbl-mdm-hub.COMPANY.com\nakhq-apac-nprod-gbl-mdm-hub.COMPANY.com\nairflow-apac-nprod-gbl-mdm-hub.COMPANY.com\nmongo-apac-nprod-gbl-mdm-hub.COMPANY.com\nmdm-log-management-apac-nonprod.COMPANY.com\ngbl-mdm-hub-apac-nprod.COMPANY.com\n\nas CNAMEs of our ELB:\na81322116787943bf80a29940dbc2891-00e7418d9be731b0.elb.ap-southeast-1.amazonaws.comAlso, please add one CNAME for each one of below ELBs:\n\nCNAME: kafka-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: 
a7ba438d7068b4a799d29d3d408b0932-1e39235cdff6d511.elb.ap-southeast-1.amazonaws.com\n\nCNAME: kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a72bbc64327cb4ee4b35ae5abeefbb26-4c392c106b29b6e5.elb.us-east-1.amazonaws.com\n\nCNAME: kafka-b2-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a7fdb6117b2184096915aed31732110b-91c5ac7fb0968710.elb.us-east-1.amazonaws.com\n\nCNAME: kafka-b3-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a99220323cc684bcaa5e29c198777e13-ddf5ddbf36fe3025.elb.us-east-1.amazonaws.comBest Regards,PiotrMDM HubFirewall whitelistingTicket queue: GBL-NETWORK ECSTitle: Firewall exceptions for new BoldMoves PDKS clusterDescription:Hi Team,\n\nPlease open all traffic listed in attached Excel sheet.\nIn case this is not the queue where I should request Firewall changes, kindly point me in the right direction.\n\nBest Regards,\nPiotr\nMDM HubAttached excel:SourceSource IPDestinationDestination IPPortMDM Hub monitoring (euw1z1pl046.COMPANY.com)CI/CD server (sonar-gbicomcloud.COMPANY.com)10.90.98.0/24pdcs-apa1p.COMPANY.com-443MDM Hub monitoring (euw1z1pl046.COMPANY.com)CI/CD server (sonar-gbicomcloud.COMPANY.com)EMEA NPROD MDM Hub10.90.98.0/24APAC NPROD - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●4439094Global NPROD MDM Hub10.90.96.0/24APAC NPROD - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●443APAC NPROD - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Global NPROD MDM Hub10.90.96.0/248443APAC NPROD - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EMEA NPROD MDM Hub10.90.98.0/248443Integration tests:In mdm-hub-env-config prepare inventory/kube_dev_apac (copy kube_dev_amer and adjust variables)run "prepare_int_tests" playbook:\n$ ansible-playbook prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"\nin mdm-hub-inbound-services confirm test resources (citrus properties) for mdm-integration-tests have been replaced and run two Gradle 
tasks:-mdm-gateway/mdm-interation-tests/Tasks/verification/commonIntegrationTests-mdm-gateway/mdm-interation-tests/Tasks/verification/integrationTestsForCOMPANYModel" + }, + { + "title": "Configuration (apac prod k8s)", + "pageID": "234699630", + "pageLink": "/pages/viewpage.action?pageId=234699630", + "content": "Installation of new APAC prod cluster basing on AMER prod configuration.Copy mdm-hub-cluster-env/amer/prod directory into mdm-hub-cluster-env/apac directory.Change dir names from "amer" to "apac" - apac-backend, apac-prodReplace everything in files in apac directory: "amer"→"apac".CertificatesGenerate private-keys, CSRs and request Kong certificate (kong/config_files/certs).\nanuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-apac-prod-gbl-mdm-hub.COMPANY.com.key -out api-apac-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..................+++++\n.........................+++++\nwriting new private key to 'api-apac-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []:api-apac-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge ●●●●●●●●●●●●\nAn optional company name []:\nSAN:DNS Name=api-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=www.api-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kibana-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=prometheus-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=grafana-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=elastic-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=consul-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=akhq-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=airflow-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=mongo-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=mdm-log-management-apac-noprod.COMPANY.comDNS Name=gbl-mdm-hub-apac-prod.COMPANY.comPlace private-key and signed certificate in kong/config_files/certs. Git-ignore them and encrypt them into .encrypt files.Generate private-keys, CSRs and request Kafka certificate (apac-backend/secret.yaml)\nanuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-apac-prod-gbl-mdm-hub.COMPANY.com.key -out kafka-apac-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n................................................................+++++\n.......................................+++++\nwriting new private key to 'kafka-apac-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY 
Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:kafka-apac-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge ●●●●●●●●●●●●\nAn optional company name []:\nSAN:DNS Name=kafka-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b1-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b2-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b3-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b4-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b5-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b6-apac-prod-gbl-mdm-hub.COMPANY.comAfter receiving the certificate, encode it with base64 and paste into apac-backend/secrets.yaml:  -> secrets.mdm-kafka-external-listener-cert.listener.key  -> secrets.mdm-kafka-external-listener-cert.listener.crt Raise a ticket via Request Manager (*) Since this is a new environment, remove everything under "migration" key in apac-backend/values.yaml.Replace all user_passwords in apac/prod/secrets.yaml. for each ●●●●●●●●●●●●●●●●● a new, 40-char one and globally replace it in all apac configs.Go through apac-dev/config_files one by one and adjust settings such as: Reltio, SQS etc.(*) Change Kafka topics and consumergroup names to fit naming standards. 
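The password-replacement step above can be sketched as follows (an illustrative one-liner, not part of the runbook; openssl rand -hex 20 emits exactly 40 hexadecimal characters, matching the 40-char requirement for apac/prod/secrets.yaml):

```shell
# Generate one candidate 40-character password (hex alphabet).
# Re-run once per user_password entry and paste the value into
# apac/prod/secrets.yaml; widen the alphabet if policy requires it.
new_password=$(openssl rand -hex 20)
echo "$new_password"
```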
This is a one-time activity and does not need to be repeated if subsequent environments are built based on the APAC config.Export amer-prod CRDs into a yaml file and import it in apac-prod:\n$ kubectx atp-mdmhub-prod-amer\n$ kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml\n$ kubectx atp-mdmhub-prod-apac\n$ kubectl apply -f ~/crd-definitions-amer.yaml\nCreate config dirs for git2consul (mdm-hub-env-config):\n$ git checkout config/dev_amer\n$ git pull\n$ git branch config/dev_apac\n$ git checkout config/dev_apac\n$ git push origin config/dev_apac\nRepeat for qa and stage.Install operators:\n$ ./install.sh -l operators -r apac -c prod -e apac-dev -v 3.9.4\nInstall backend:\n$ ./install.sh -l backend -r apac -c prod -e apac-dev -v 3.9.4\n1 Log into mongodb (use port forward if there is no connection to kong: run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to mongo on localhost:27017) or retrieve the ip address from the ELB of the kong service and add it to the Windows hosts file as a DNS name (example. 
●●●●●●●●●●●● mongo-amer-prod-gbl-mdm-hub.COMPANY.com) and connect to mongo on mongo-amer-prod-gbl-mdm-hub.COMPANY.com:270172 Run below script:\ndb.createCollection("entityHistory") \ndb.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\ndb.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\ndb.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\ndb.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\ndb.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\ndb.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\ndb.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\ndb.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\ndb.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\ndb.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\ndb.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\ndb.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});\n\ndb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});\n\ndb.createCollection("entityRelations")\ndb.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\ndb.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\ndb.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\ndb.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\ndb.entityRelations.createIndex({creationDate: -1}, {background: true, name: 
"idx_creationDate"});\ndb.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\ndb.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\ndb.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\ndb.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \ndb.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \ndb.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \ndb.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n \ndb.createCollection("LookupValues")\ndb.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\ndb.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});\ndb.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\ndb.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});\ndb.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});\ndb.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\ndb.createCollection("ErrorLogs")\ndb.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\ndb.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\ndb.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\ndb.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\ndb.createCollection("batchEntityProcessStatus")\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: 
"idx_findByBatchNameAndSourceId"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\ndb.createCollection("batchInstance")\n\ndb.createCollection("relationCache")\ndb.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\ndb.createCollection("DCRRequests")\ndb.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\ndb.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\ndb.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.createCollection("entityMatchesHistory")\ndb.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\ndb.createCollection("DCRRegistry")\ndb.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\ndb.createCollection("sequenceCounters")\ndb.sequenceCounters.insertOne({_id: 
"COMPANYAddressIDSeq", sequence: NumberLong(7000000000)}) // NOTE: 7000000000 is APAC-specific\nRegionSeq start numberamer6000000000apac7000000000emea5000000000Log into Kibana. Export dashboards/indices from AMER and import them in APAC.Use the following playbook:- change values in  ansible repository:inventory/jenkins/group_vars/all/all.yml → #CHNG- run playbook:  ansible-playbook install_kibana_objects.yml -i inventory/jenkins/inventory --vault-password-file=../vault -vInstall mdmhub:\n$ ./install.sh -l mdmhub -r apac -c prod -e apac-dev -v 3.9.4\nTickets:DNS names ticket:Ticket queue: GBL-NETWORK DDITitle: Add domains to DNSDescription:Hi Team,Please add below domains:api-apac-prod-gbl-mdm-hub.COMPANY.comkibana-apac-prod-gbl-mdm-hub.COMPANY.comprometheus-apac-prod-gbl-mdm-hub.COMPANY.comgrafana-apac-prod-gbl-mdm-hub.COMPANY.comelastic-apac-prod-gbl-mdm-hub.COMPANY.comconsul-apac-prod-gbl-mdm-hub.COMPANY.comakhq-apac-prod-gbl-mdm-hub.COMPANY.comairflow-apac-prod-gbl-mdm-hub.COMPANY.commongo-apac-prod-gbl-mdm-hub.COMPANY.commdm-log-management-apac-noprod.COMPANY.comgbl-mdm-hub-apac-prod.COMPANY.comas CNAMEs of our ELB:a2349e1a042d14c0691f14cf0db75910-14dc3724296a3d4e.elb.ap-southeast-1.amazonaws.comAlso, please add one CNAME for each one of below ELBs:CNAME: kafka-apac-prod-gbl-mdm-hub.COMPANY.comELB: a40444d2dc7b243b08b40e702105979e-28d24a897d699626.elb.ap-southeast-1.amazonaws.comCNAME: kafka-b1-apac-prod-gbl-mdm-hub.COMPANY.comELB: adadc7f02bf9a4ac585f4fba6870d0ae-be80c1c734ef18a3.elb.ap-southeast-1.amazonaws.comCNAME: kafka-b2-apac-prod-gbl-mdm-hub.COMPANY.comELB: a6c81c4fcba6c42f884c2511b5c5183d-d80b70b1ac791ce9.elb.ap-southeast-1.amazonaws.comCNAME: kafka-b3-apac-prod-gbl-mdm-hub.COMPANY.comELB: a8b88854568314cb5b01a9073e1f1515-0b589be04ea6a31b.elb.ap-southeast-1.amazonaws.comBest Regards,Kacper UrbanskiMDMHUBGBL-NETWORK DDIFirewall whitelistingTicket queue: GBL-NETWORK ECSTitle: Firewall exceptions for new BoldMoves PDKS clusterDescription:Hi 
Team,\n\nPlease open all traffic listed in attached Excel sheet.\nIn case this is not the queue where I should request Firewall changes, kindly point me in the right direction.\n\nBest Regards,\nPiotr\nMDM HubAttached excel:SourceSource IPDestinationDestination IPPortMDM Hub monitoring (euw1z1pl046.COMPANY.com)CI/CD server (sonar-gbicomcloud.COMPANY.com)10.90.98.0/24pdcs-apa1p.COMPANY.com-443MDM Hub monitoring (euw1z1pl046.COMPANY.com)CI/CD server (sonar-gbicomcloud.COMPANY.com)EMEA prod MDM Hub10.90.98.0/24APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●4439094Global prod MDM Hub10.90.96.0/24APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●443APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Global prod MDM Hub10.90.96.0/248443APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EMEA prod MDM Hub10.90.98.0/248443Integration tests:In mdm-hub-env-config prepare inventory/kube_dev_apac (copy kube_dev_amer and adjust variables)run "prepare_int_tests" playbook:\n$ ansible-playbook prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"\nin mdm-hub-inbound-services confirm test resources (citrus properties) for mdm-integration-tests have been replaced and run two Gradle tasks:-mdm-gateway/mdm-interation-tests/Tasks/verification/commonIntegrationTests-mdm-gateway/mdm-interation-tests/Tasks/verification/integrationTestsForCOMPANYModel" + }, + { + "title": "Configuration (emea)", + "pageID": "218444982", + "pageLink": "/pages/viewpage.action?pageId=218444982", + "content": "Setup Mongo Indexes and Collections:EntityHistorydb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});DCR Service 2 Indexes:DCR Service 2 Indexes\ndb.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\ndb.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: 
"idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n" }, { "title": "Configuration (gblus prod)", "pageID": "164470081", "pageLink": "/pages/viewpage.action?pageId=164470081", "content": "Config file: gblmdm-hub-us-spec_v05.xlsxAWS ResourcesResource NameResource TypeSpecificationAWS RegionAWS Availability ZoneDependent onDescriptionComponentsHUBGWInterfaceGBL MDM US HUB Prod Data Svr1 - amraelp00007844EC2r5.2xlargeus-east-1bEBS APP DATA MDM PROD SVR1EBS DOCKER DATA MDM PROD SVR1- Mongo - data redundancy and high availability   primary, secondary, tertiary need to be hosted on separate servers and zones - high availability if one zone is offline- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM US HUB Prod Data Svr2 - amraelp00007870EC2r5.2xlargeus-east-1eEBS APP DATA MDM PROD SVR2EBS DOCKER DATA MDM PROD SVR2- Mongo - data redundancy and high availability   primary, secondary, tertiary need to be hosted on separate servers and zones - high availability if one zone is offline- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM US HUB Prod Data Svr3 - amraelp00007847EC2r5.2xlargeus-east-1bEBS APP DATA MDM PROD SVR3EBS DOCKER DATA MDM PROD SVR3- Mongo - data redundancy and high availability   primary, secondary, tertiary need to be hosted on separate servers and zones - high availability if one zone is offline- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750GB - /app/ - docker 
applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM US HUB Prod Svc Svr1 - amraelp00007848EC2r5.2xlargeus-east-1bEBS APP SVC MDM PROD SVR1EBS DOCKER SVC MDM PROD SVR1- Kafka and zookeeper - Kong and Cassandra    Cassandra replication factor set to 3 – Kong proxy high availability     Load balancer for Kong API- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 450GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundGBL MDM US HUB Prod Svc Svr2 - amraelp00007849EC2r5.2xlargeus-east-1bEBS APP SVC MDM PROD SVR2EBS DOCKER SVC MDM PROD SVR2- Kafka and zookeeper - Kong and Cassandra    Cassandra replication factor set to 3 – Kong proxy high availability     Load balancer for Kong API- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 450GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundGBL MDM US HUB Prod Svc Svr3 - amraelp00007871EC2r5.2xlargeus-east-1eEBS APP SVC MDM PROD SVR3EBS DOCKER SVC MDM PROD SVR3- Kafka and zookeeper - Kong and Cassandra    Cassandra replication factor set to 3 – Kong proxy high availability     Load balancer for Kong API- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 450GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundEBS APP DATA MDM Prod Svr1EBS750 GB XFSus-east-1bmount to /app on GBL MDM US HUB Prod Data Svr1 - amraelp00007844EBS APP DATA MDM Prod Svr2EBS750 GB XFSus-east-1emount to /app on GBL MDM US HUB Prod Data Svr2 - amraelp00007870EBS APP DATA MDM Prod Svr3EBS750 GB XFSus-east-1bmount to /app on GBL MDM US HUB Prod Data Svr3 - amraelp00007847EBS DOCKER DATA MDM Prod Svr1EBS50 GB XFSus-east-1bmount to docker devicemapper on GBL 
MDM US HUB Prod Data Svr1 - amraelp00007844EBS DOCKER DATA MDM Prod Svr2EBS50 GB XFSus-east-1emount to docker devicemapper on GBL MDM US HUB Prod Data Svr2 - amraelp00007870EBS DOCKER DATA MDM Prod Svr3EBS50 GB XFSus-east-1bmount to docker devicemapper on GBL MDM US HUB Prod Data Svr3 - amraelp00007847EBS APP SVC MDM Prod Svr1EBS450 GB XFSus-east-1bmount to /app on GBL MDM US HUB Prod Svc Svr1 - amraelp00007848EBS APP SVC MDM Prod Svr2EBS450 GB XFSus-east-1bmount to /app on GBL MDM US HUB Prod Svc Svr2 - amraelp00007849EBS APP SVC MDM Prod Svr3EBS450 GB XFSus-east-1emount to /app on GBL MDM US HUB Prod Svc Svr3 - amraelp00007871EBS DOCKER SVC MDM Prod Svr1EBS50 GB XFSus-east-1bmount to docker devicemapper on GBL MDM US HUB Prod Svc Svr1 - amraelp00007848EBS DOCKER SVC MDM Prod Svr2EBS50 GB XFSus-east-1bmount to docker devicemapper on GBL MDM US HUB Prod Svc Svr2 - amraelp00007849EBS DOCKER SVC MDM Prod Svr3EBS50 GB XFSus-east-1emount to docker devicemapper on GBL MDM US HUB Prod Svc Svr3 - amraelp00007871GBLMDMHUB US S3 Bucketgblmdmhubprodamrasp101478S3us-east-1Load BalancerELBELBGBL MDM US HUB Prod Svc Svr1GBL MDM US HUB Prod Svc Svr2GBL MDM US HUB Prod Svc Svr3MAP 443 - 8443 (only HTTPS) - ssl offloading on KONGDomain: gbl-mdm-hub-us-prod.COMPANY.comNAME:  PFE-CLB-ATP-MDMHUB-US-PROD-001DNS Name : internal-PFE-CLB-ATP-MDMHUB-US-PROD-001-146249044.us-east-1.elb.amazonaws.comSSL cert for domain gbl-mdm-hub-us-prod.COMPANY.comCertificateDomain : domain gbl-mdm-hub-us-prod.COMPANY.comDNS RecordDNSAddress: gbl-mdm-hub-us-prod.COMPANY.com -> Load BalancerRolesNameTypePrivilegesMember ofDescriptionRequests IDProvided accessUNIX-universal-awscbsdev-mdmhub-us-prod-computers-UUnix Computer ROLEAccess to hosts:GBL MDM US HUB Prod Data Svr1GBL MDM US HUB Prod Data Svr2GBL MDM US HUB Prod Data Svr3GBL MDM US HUB Prod Svc Svr1GBL MDM US HUB Prod Svc Svr2GBL MDM US HUB Prod Svc Svr3Computer role including all MDM servers-UNIX-GBLMDMHUB-US-PROD-ADMINUser Role- dzdo root - 
access to docker- access to docker-engine (systemctl) – restart, stop, start docker engineUNIX-GBLMDMHUB-US-PROD-U  Admin role to manage all resource on servers-KUCR - 20200519090759337WARECP - 20200519083956229GENDEL - 20200519094636480MORAWM03 - 20200519084328245PIASEM - 20200519095309490UNIX-GBLMDMHUB-US-PROD-HUBROLEUser Role- Read only for logs- dzdo docker ps * - list docker container- dzdo docker logs * - check docker container logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-PROD-U  role without root access, read only for logs and check docker status. It will be used by monitoring-UNIX-GBLMDMHUB-US-PROD-SEROLEUser Role- dzdo docker * UNIX-GBLMDMHUB-US-PROD-U  service role - it will be used to run microservices  from Jenkins CD pipeline-Service Account - GBL32452299imdmuspr mdmhubuspr - 20200519095543524UNIX-GBLMDMHUB-US-PROD-UUser Role- Read only for logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-PROD-U  -Ports - Security Group PFE-SG-GBLMDMHUB-US-APP-PROD-001 Port ApplicationWhitelisted8443Kong (API proxy)ALL from COMPANY VPN7000Cassandra (Kong DB)  - inter-node communicationALL from COMPANY VPN7001Cassandra (Kong DB) - inter-node communicationALL from COMPANY VPN9042Cassandra (Kong DB)  - client portALL from COMPANY VPN9094Kafka - SASL_SSL protocolALL from COMPANY VPN9093Kafka - SSL protocolALL from COMPANY VPN9092KAFKA  - Inter-broker communication   ALL from COMPANY VPN2181ZookeeperALL from COMPANY VPN2888Zookeeper - intercommunicationALL from COMPANY VPN3888Zookeeper - intercommunicationALL from COMPANY VPN27017MongoALL from COMPANY VPN9999HawtIO - administration consoleALL from COMPANY VPN9200ElasticsearchALL from COMPANY VPN9300Elasticsearch TCP - cluster communication portALL from COMPANY VPN5601KibanaALL from COMPANY VPN9100 - 9125Prometheus exportersALL from COMPANY VPN9542Kong exporterALL from COMPANY VPN2376Docker encrypted communication with the daemonALL from COMPANY 
VPNDocumentationService Account ( Jenkins / server access )http://btondemand.COMPANY.com/solution/160303162657677NSA - UNIX - user access to Servers:http://btondemand.COMPANY.com/solution/131014104610578InstructionsHow to add user access to UNIX-GBLMDMHUB-US-PROD-ADMINlog in to http://btondemand.COMPANY.com/search NSA - UNIXuser access to Servers - http://btondemand.COMPANY.com/solution/131014104610578go to Request Manager -> Request Catalog Search NSAChoose NSA-UNIX NSA Requests for Unix.ContinueFill Formula Add user access details formulaAccount Type-NSA-UNIXName-Morawski, MikolajAD Username-MORAWM03User Domain-EMEARequestID-20200310100151888Request Details BelowRoleName: YesDescription:requestorCommentsList:Hi Team,I created the request to add account (EMEA/MORAWM03) to the ADMIN role on the following servers:amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871Role name: UNIX-GBLMDMHUB-US-PROD-ADMIN-U -> member of: UNIX-universal-awscbsdev-mdmhub-us-prod-computers-U (UNIX-GBLMDMHUB-US-PROD-U)Could you please verify if I provided all required information?Regards,MikolajaccessToSpecificServerList_roleLst_2: NobusinessJustificationList:MDM HUB Team access toGBL MDM US HUB Prod Data Svr1 - amraelp00007844GBL MDM US HUB Prod Data Svr2 - amraelp00007870GBL MDM US HUB Prod Data Svr3 - amraelp00007847GBL MDM US HUB Prod Svc Svr1 - amraelp00007848GBL MDM US HUB Prod Svc Svr2 - amraelp00007849GBL MDM US HUB Prod Svc Svr3 - amraelp00007871regarding Fletcher projectserverLocationList: Not ApplicablenisDomainOtherList: OtherroleGroupAccount_roleLst_6: Add to Role Group(s)roleGroupNameList: UNIX-GBLMDMHUB-US-PROD-ADMIN-UaccountPrivilegeList_roleLst_7: Add PrivilegesaccountList_roleLst_8: UNIX group membershipunixGroupNameList: UNIX-GBLMDMHUB-US-PROD-ADMIN-USubmit requestHow to add/create new Service Account with access to UNIX-GBLMDMHUB-US-PROD-SEROLEService Account NameUNIX group namedetailsBTOnDemandLessons Learned 
mdmusprmdmhubusprService Account Name has to contain max 8 charactersGBL32452299iRE Requires Additional Information (GBL32099918i).msglog in to http://btondemand.COMPANY.com/search NSA - UNIXuser access to Servers - http://btondemand.COMPANY.com/solution/131014104610578go to Request Manager -> Request Catalog Search NSAChoose NSA-UNIX NSA Requests for Unix.ContinueFill FormulaNo -> LegacyYesExistingLegacyamraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871N/AOtherTo manage the Service account and Software for the MDM HUBIt will be used to run microservices from Jenkins CD pipelinePrimary: VARGAA08Secondary: TIRUMS05Service AccountService Account Name: UNIX group namePROD:mdmuspr mdmhubuspr - Service Account Name has to contain max 8 charactersMDM HUB Service Account access (related to Docker microservices and Jenkins CD) forGBL MDM US HUB Prod Data Svr1 - amraelp00007844GBL MDM US HUB Prod Data Svr2 - amraelp00007870GBL MDM US HUB Prod Data Svr3 - amraelp00007847GBL MDM US HUB Prod Svc Svr1 - amraelp00007848GBL MDM US HUB Prod Svc Svr2 - amraelp00007849GBL MDM US HUB Prod Svc Svr3 - amraelp00007871regarding Fletcher projectHi Team,I am trying to create the request to create the Service Account for the following servers. 
amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871I want to provide the privileges for this Service Account:Role name: UNIX-GBLMDMHUB-US-PROD-SEROLE-U -> member of: UNIX-GBLMDMHUB-US-PROD-U  -> UNIX-universal-awscbsdev-mdmhub-us-prod-computers-U- docker * - folder access read/writeComputer role related: UNIX-universal-awscbsdev-mdmhub-us-prod-computers-UCould you please verify if I provided all the required information and this Request is correct?Regards,MikolajHome DIR: /app/mdmusprHow to open ports / create new Security Group - PFE-SG-GBLMDMHUB-US-APP-PROD-001http://btondemand.COMPANY.com/solution/120906165824277To create a new security group:Create server Security Group and Open Ports on  SC queue Name: GBL-BTI-IOD AWS FULL SUPPORTlog in to http://btondemand.COMPANY.com/ go to Get Support Search for queue: GBL-BTI-IOD AWS FULL SUPPORTSubmit Request to this queue:RequestHi Team,Could you please create a new security group and assign it to these servers.GBL MDM US HUB Prod Data Svr1 - amraelp00007844.COMPANY.comGBL MDM US HUB Prod Data Svr2 - amraelp00007870.COMPANY.comGBL MDM US HUB Prod Data Svr3 - amraelp00007847.COMPANY.comGBL MDM US HUB Prod Svc Svr1 - amraelp00007848.COMPANY.comGBL MDM US HUB Prod Svc Svr2 - amraelp00007849.COMPANY.comGBL MDM US HUB Prod Svc Svr3 - amraelp00007871.COMPANY.comPlease add the following owners:Primary: VARGAA08Secondary: TIRUMS05(please let me know if approval is required)New Security group Requested: PFE-SG-GBLMDMHUB-US-APP-PROD-001Please Open the following ports:Port Application Whitelisted8443 Kong (API proxy) ALL from COMPANY VPN7000 Cassandra (Kong DB) - inter-node communication ALL from COMPANY VPN7001 Cassandra (Kong DB) - inter-node communication ALL from COMPANY VPN9042 Cassandra (Kong DB) - client port ALL from COMPANY VPN9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN9093 Kafka - SSL protocol ALL from COMPANY VPN9092 KAFKA - Inter-broker communication ALL from COMPANY 
VPN2181 Zookeeper ALL from COMPANY VPN2888 Zookeeper - intercommunication ALL from COMPANY VPN3888 Zookeeper - intercommunication ALL from COMPANY VPN27017 Mongo ALL from COMPANY VPN9999 HawtIO - administration console ALL from COMPANY VPN9200 Elasticsearch ALL from COMPANY VPN9300 Elasticsearch TCP - cluster communication port ALL from COMPANY VPN5601 Kibana ALL from COMPANY VPN9100 - 9125 Prometheus exporters ALL from COMPANY VPN9542 Kong exporter ALL from COMPANY VPN2376 Docker encrypted communication with the daemon ALL from COMPANY VPNApply this group to the following servers:amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871Regards,MikolajThis will create a new Security Grouphttp://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32141041iThen these security groups have to be assigned to servers through the IOD portal by the Servers Owner.To open new ports:log in to http://btondemand.COMPANY.com/ go to Get Support Search for queue: GBL-BTI-IOD AWS FULL SUPPORTSubmit Request to this queue:RequestHi,Could you please modify the below security group and open the following port.PROD security group:Security group: PFE-SG-GBLMDMHUB-US-APP-PROD-001Port: 2376(this port is related to Docker for encrypted communication with the daemon)The host related to this:amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871Regards,MikolajCertificates ConfigurationKafka GO TO:How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias kafka.gbl-mdm-hub-us-prod.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=mdm_gbl_us_hub, C=US"keytool -certreq -alias kafka.gbl-mdm-hub-us-prod.COMPANY.com -file kafka.gbl-mdm-hub-us-prod.COMPANY.com.csr -keystore 
server.keystore.jksSAN:gbl-mdm-hub-us-prod.COMPANY.comamraelp00007848.COMPANY.com●●●●●●●●●●●●●●amraelp00007849.COMPANY.com●●●●●●●●●●●●●amraelp00007871.COMPANY.com●●●●●●●●●●●●●●Create guest_user for KAFKA - "CN=kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US":GO TO: How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_user.keystore.jks -dname "CN=kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US"keytool -certreq -alias guest_user -file kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com.csr -keystore guest_user.keystore.jksKongopenssl req -nodes -newkey rsa:2048 -sha256 -keyout gbl-mdm-hub-us-prod.key -out gbl-mdm-hub-us-prod.csrSubject Alternative Namesgbl-mdm-hub-us-prod.COMPANY.comamraelp00007848.COMPANY.com●●●●●●●●●●●●●●amraelp00007849.COMPANY.com●●●●●●●●●●●●●amraelp00007871.COMPANY.com●●●●●●●●●●●●●●EFKPROD_GBL_USopenssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-log-management-gbl-us-prod.key -out mdm-log-management-gbl-us-prod.csr mdm-log-management-gbl-us-prod.COMPANY.comSubject Alternative Names mdm-log-management-gbl-us-prod.COMPANY.comgbl-mdm-hub-us-prod.COMPANY.comamraelp00007844.COMPANY.com●●●●●●●●●●●●●●amraelp00007870.COMPANY.com●●●●●●●●●●●●●●amraelp00007847.COMPANY.com●●●●●●●●●●●●●esnode1openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode1-gbl-us-prod.key -out mdm-esnode1-gbl-us-prod.csr mdm-esnode1-gbl-us-prod.COMPANY.com - Elasticsearch esnode1Subject Alternative Names mdm-esnode1-gbl-us-prod.COMPANY.comgbl-mdm-hub-us-prod.COMPANY.comamraelp00007844.COMPANY.com●●●●●●●●●●●●●●esnode2openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode2-gbl-us-prod.key -out mdm-esnode2-gbl-us-prod.csr mdm-esnode2-gbl-us-prod.COMPANY.com - Elasticsearch esnode2Subject Alternative Names 
mdm-esnode2-gbl-us-prod.COMPANY.comgbl-mdm-hub-us-prod.COMPANY.comamraelp00007870.COMPANY.com●●●●●●●●●●●●●●esnode3openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode3-gbl-us-prod.key -out mdm-esnode3-gbl-us-prod.csr mdm-esnode3-gbl-us-prod.COMPANY.com - Elasticsearch esnode3Subject Alternative Names mdm-esnode3-gbl-us-prod.COMPANY.comgbl-mdm-hub-us-prod.COMPANY.comamraelp00007847.COMPANY.com●●●●●●●●●●●●●Domain Configuration:Example request: GBL30514754i "Register domains "mdm-log-management*"log in to http://btondemand.COMPANY.com/getsupportWhat can we help you with? - Search for "Network Team Ticket"Select the most relevant topic - "DNS Request"Submit a ticket to this queue.Ticket Details: - GBL32508266iRequestHi,Could you please register the following domains:ADD the below DNS entry:========================mdm-log-management-gbl-us-prod.COMPANY.com              Alias Record to                             amraelp00007847.COMPANY.com[●●●●●●●●●●●●●]Kind regards,MikolajRequest DNSHi,Could you please register the following domains:ADD the below DNS entry for the ELB: PFE-CLB-ATP-MDMHUB-US-PROD-001:========================gbl-mdm-hub-us-prod.COMPANY.com              Alias Record to                             DNS Name : internal-PFE-CLB-ATP-MDMHUB-US-PROD-001-146249044.us-east-1.elb.amazonaws.comReferenced ELB creation ticket: GBL32561307iKind regards,MikolajEnvironment InstallationDISC:server1 amraelp00007844    APP DISC: nvme1n1   DOCKER DISC: nvme2n1server2 amraelp00007870   APP DISC: nvme2n1   DOCKER DISC: nvme1n1server3 amraelp00007847   APP DISC: nvme2n1   DOCKER DISC: nvme1n1server4 amraelp00007848   APP1 DISC: nvme2n1   APP2 DISC: nvme3n1   DOCKER DISC: nvme1n1server5 amraelp00007849   APP1 DISC: nvme2n1   APP2 DISC: nvme3n1    DOCKER DISC: nvme1nserver6 amraelp00007871   APP1 DISC: nvme2n1   APP2 DISC: nvme3n1    DOCKER DISC: nvme1n1Pre:umount /var/lib/dockerlvremove /dev/datavg/varlibdockervgreduce datavg /dev/nvme1n1vi /etc/fstabRM - 
/dev/mapper/datavg-varlibdocker /var/lib/docker ext4 defaults 1 2rmdir /var/lib/ -> dockermkdir /app/dockerln -s /app/docker /var/lib/dockerStart docker service after prepare_env_airflow_certs playbook run is completedClear content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" to use daemon.json fileAnsible:ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007844.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007870.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007847.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007848.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server5 
--vault-password-file=~/vault-password-fileCN_NAME=amraelp00007849.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007871.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●Docker Version:amraelp00007844:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007870:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007847:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007848:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007849:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007871:root:[05:00 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1Configure Registry Login (registry-gbicomcloud.COMPANY.com):ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server6 
--vault-password-file=~/vault-password-fileRegistry (manual config):  Copy certs: /etc/docker/certs.d/registry-gbicomcloud.COMPANY.com/ from (mdm-reltio-handler-env\\ssl_certs\\registry)  docker login registry-gbicomcloud.COMPANY.com (login on service account too)  user/pass: mdm/**** (check mdm-reltio-handler-env\\group_vars\\all\\secret.yml)Playbooks installation order:Install node_exporter (run on user with root access - systemctl node_exporter installation): ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-fileInstall Kafka ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall Kafka TOPICS: ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --limit kafka1 --vault-password-file=~/vault-password-fileInstall Mongo ansible-playbook install_hub_mongo_rs_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall Kong ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileUpdate KONG Config ansible-playbook 
update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-fileVerification: openssl s_client -connect amraelp00007848.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer openssl s_client -connect amraelp00007849.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer openssl s_client -connect amraelp00007871.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cerInstall EFK ansible-playbook install_efk_stack.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall Prometheus services: mongo_exporter: ansible-playbook install_prometheus_mongo_exporter.yml -i inventory/prod_gblus/inventory --limit mongo3_exporter --vault-password-file=~/vault-password-file cadvisor: ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file sqs_exporter: ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-fileInstall 
Consul ansible-playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file# After operation get SecretID from consul container. On the container execute the following command:$ consul acl bootstrapand copy it as mgmt_token to consul secrets.ymlAfter install consul step run update consul playbook with proper mgmt_token (secret.yml) in every execution for each node.Update Consul ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul2 --vault-password-file=~/vault-password-file -v ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul3 --vault-password-file=~/vault-password-file -vSetup Mongo Indexes and Collections:Create Collections and Indexes\nCreate Collections and Indexes:\n entityHistory\n\n db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n db.entityHistory.createIndex({entityChecksum: -1}, 
{background: true, name: "idx_entityChecksum"});\n db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"}); \n \n \n \n\n entityRelations\n db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\n db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \n db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n\n\n\n LookupValues\n db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\n db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});\n db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\n db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});\n db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});\n db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\n\n ErrorLogs\n 
db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\tbatchEntityProcessStatus\n \tdb.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});\n\t db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\n\t\tdb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\n\t\tdb.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\n\n batchInstance\n\t\t- create collection\n\n\trelationCache\n\t\tdb.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\n DCRRequests\n db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\n db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n \n entityMatchesHistory \n db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\n Connect ENV with Prometheus:Prometheus config\nnode_exporter\n - targets:\n - "amraelp00007844.COMPANY.com:9100"\n - 
"amraelp00007870.COMPANY.com:9100"\n - "amraelp00007847.COMPANY.com:9100"\n - "amraelp00007848.COMPANY.com:9100"\n - "amraelp00007849.COMPANY.com:9100"\n - "amraelp00007871.COMPANY.com:9100"\n labels:\n env: gblus_prod\n component: node\n \n\nkafka\n - targets:\n - "amraelp00007848.COMPANY.com:9101"\n labels:\n env: gblus_prod\n node: 1\n component: kafka\n - targets:\n - "amraelp00007849.COMPANY.com:9101"\n labels:\n env: gblus_prod\n node: 2\n component: kafka\n - targets:\n - "amraelp00007871.COMPANY.com:9101"\n labels:\n env: gblus_prod\n node: 3\n component: kafka\n \n \nkafka_exporter\n - targets:\n - "amraelp00007848.COMPANY.com:9102"\n labels:\n trade: gblus\n node: 1\n component: kafka\n env: gblus_prod\n - targets:\n - "amraelp00007849.COMPANY.com:9102"\n labels:\n trade: gblus\n node: 2\n component: kafka\n env: gblus_prod\n - targets:\n - "amraelp00007871.COMPANY.com:9102"\n labels:\n trade: gblus\n node: 3\n component: kafka\n env: gblus_prod \n \n \nComponents:\n jmx_manager\n - targets:\n - "amraelp00007848.COMPANY.com:9104"\n labels:\n env: gblus_prod\n node: 1\n component: manager\n - targets:\n - "amraelp00007849.COMPANY.com:9104"\n labels:\n env: gblus_prod\n node: 2\n component: manager\n - targets:\n - "amraelp00007871.COMPANY.com:9104"\n labels:\n env: gblus_prod\n node: 3\n component: manager \n \n jmx_event_publisher\n - targets:\n - "amraelp00007848.COMPANY.com:9106"\n labels:\n env: gblus_prod\n node: 1\n component: publisher\n - targets:\n - "amraelp00007849.COMPANY.com:9106"\n labels:\n env: gblus_prod\n node: 2\n component: publisher\n - targets:\n - "amraelp00007871.COMPANY.com:9106"\n labels:\n env: gblus_prod\n node: 3\n component: publisher\n \n jmx_reltio_subscriber\n - targets:\n - "amraelp00007848.COMPANY.com:9105"\n labels:\n env: gblus_prod\n node: 1\n component: subscriber\n - targets:\n - "amraelp00007849.COMPANY.com:9105"\n labels:\n env: gblus_prod\n node: 2\n component: subscriber\n - targets:\n - 
"amraelp00007871.COMPANY.com:9105"\n labels:\n env: gblus_prod\n node: 3\n component: subscriber\n \n jmx_batch_service\n - targets:\n - "amraelp00007848.COMPANY.com:9107"\n labels:\n env: gblus_prod\n node: 1\n component: batch_service\n - targets:\n - "amraelp00007849.COMPANY.com:9107"\n labels:\n env: gblus_prod\n node: 2\n component: batch_service\n - targets:\n - "amraelp00007871.COMPANY.com:9107"\n labels:\n env: gblus_prod\n node: 3\n component: batch_service\n \n batch_service_actuator\n - targets:\n - "amraelp00007848.COMPANY.com:9116"\n labels:\n env: gblus_prod\n node: 1\n component: batch_service\n - targets:\n - "amraelp00007849.COMPANY.com:9116"\n labels:\n env: gblus_prod\n node: 2\n component: batch_service\n - targets:\n - "amraelp00007871.COMPANY.com:9116"\n labels:\n env: gblus_prod\n node: 3\n component: batch_service\n \n \nsqs_exporter \n - targets:\n - "amraelp00007871.COMPANY.com:9122"\n labels:\n env: gblus_prod\n component: sqs_exporter\n\n \n \ncadvisor\n \n - targets:\n - "amraelp00007844.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 1\n component: cadvisor_exporter\n - targets:\n - "amraelp00007870.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 2\n component: cadvisor_exporter \n - targets:\n - "amraelp00007847.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 3\n component: cadvisor_exporter \n - targets:\n - "amraelp00007848.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 4\n component: cadvisor_exporter \n - targets:\n - "amraelp00007849.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 5\n component: cadvisor_exporter \n - targets:\n - "amraelp00007871.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 6\n component: cadvisor_exporter \n \n \nmongodb_exporter\n \n - targets:\n - "amraelp00007847.COMPANY.com:9120"\n labels:\n env: gblus_prod\n component: mongodb_exporter\n \n \nkong_exporter\n - targets:\n - "amraelp00007848.COMPANY.com:9542"\n labels:\n env: gblus_prod\n node: 1\n component: 
kong_exporter\n - targets:\n - "amraelp00007849.COMPANY.com:9542"\n labels:\n env: gblus_prod\n node: 2\n component: kong_exporter\n - targets:\n - "amraelp00007871.COMPANY.com:9542"\n labels:\n env: gblus_prod\n node: 3\n component: kong_exporter\n"
  },
  {
    "title": "Configuration (gblus)",
    "pageID": "164470073",
    "pageLink": "/pages/viewpage.action?pageId=164470073",
    "content": "Config file: gblmdm-hub-us-spec_v04.xlsxAWS ResourcesResource NameResource TypeSpecificationAWS RegionAWS Availability ZoneDependent onDescriptionComponentsHUBGWInterfaceGBL MDM US HUB nProd Svr1 amraelp00007334PFE-AWS-MULTI-AZ-DEV-us-east-1EC2r5.2xlargeus-east-1bEBS APP DATA MDM NPROD SVR1EBS DOCKER DATA MDM NPROD SVR1- Mongo -  no data redundancy for nProd- Disks:     Mount 50G - docker installation directory    Mount 1000GB - /app/ - docker applications local storageOS: Red Hat Enterprise Linux Server release 7.3 (Maipo)mongoEFKHUBoutboundGBL MDM US HUB nProd Svr2 amraelp00007335PFE-AWS-MULTI-AZ-DEV-us-east-1EC2r5.2xlargeus-east-1bEBS APP DATA MDM NPROD SVR2EBS DOCKER DATA MDM NPROD SVR2- Kafka and zookeeper - Kong and Cassandra- Disks:     Mount 50G - docker installation directory    Mount 500GB - /app/ - docker applications local storageOS: Red Hat Enterprise Linux Server release 7.3 (Maipo)KafkaZookeeperKongCassandraGWinboundEBS APP DATA MDM nProd Svr1EBS1000 GB XFSus-east-1bmount to /app on amraelp00007334EBS APP DATA MDM nProd Svr2EBS500 GB XFSus-east-1bmount to /app on amraelp00007335EBS DOCKER DATA MDM nProd Svr1EBS50 GB XFSus-east-1bmount to docker devicemapper on amraelp00007334EBS DOCKER DATA MDM nProd Svr2EBS50 GB XFSus-east-1bmount to docker devicemapper on amraelp00007335GBLMDMHUB US S3 Bucketgblmdmhubnprodamrasp100762S3us-east-1SSL cert for domain gbl-mdm-hub-us-nprod.COMPANY.comCertificateDomain : domain gbl-mdm-hub-us-nprod.COMPANY.comDNS RecordDNSAddress: gbl-mdm-hub-us-nprod.COMPANY.comRolesNameTypePrivilegesMember ofDescriptionRequests IDProvided 
accessUNIX-IoD-global-mdmhub-us-nprod-computers-UUnix Computer ROLEAccess to hosts: GBL MDM US HUB nProd Svr1GBL MDM US HUB nProd Svr2Computer role including all MDM serversUNIX-GBLMDMHUB-US-NPROD-ADMIN-UUser Role- dzdo root - access to docker- access to docker-engine (systemctl) – restart, stop, start docker engineUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UAdmin role to manage all resource on serversNSA-UNIX: 20200303065003900KUCR - GBL32099554iWARECP - GENDEL - GBL32134727iMORAWM03 - GBL32097468iUNIX-GBLMDMHUB-US-NPROD-HUBROLE-UUser Role- Read only for logs- dzdo docker ps * - list docker container- dzdo docker logs * - check docker container logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-Urole without root access, read only for logs and check docker status. It will be used by monitoringNSA-UNIX: 20200303065731900UNIX-GBLMDMHUB-US-NPROD-SEROLE-UUser Role- dzdo docker * UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-Uservice role - it will be used to run microservices  from Jenkins CD pipelineNSA-UNIX: 20200303070216948Service Account - GBL32099918imdmusnprUNIX-GBLMDMHUB-US-NPROD-READONLYUser Role- Read only for logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UNSA-UNIX: 20200303070544951Ports - Security Group PFE-SG-GBLMDMHUB-US-APP-NPROD-001 Port ApplicationWhitelisted8443Kong (API proxy)ALL from COMPANY VPN9094Kafka - SASL_SSL protocolALL from COMPANY VPN9093Kafka - SSL protocolALL from COMPANY VPN2181ZookeeperALL from COMPANY VPN27017MongoALL from COMPANY VPN9999HawtIO - administration consoleALL from COMPANY VPN9200ElasticsearchALL from COMPANY VPN5601KibanaALL from COMPANY VPN9100 - 9125Prometheus exportersALL from COMPANY VPN9542Kong exporterALL from COMPANY VPN2376Docker encrypted communication with the daemonALL from COMPANY VPNOpen ports between Jenkins and AirflowRequest to Przemek.Puchajda@COMPANY.com and Mateusz.Szewczyk@COMPANY.com - this is required to open ports between WBS<>IOD 
blocked traffic ( the requests take some time to finish so request at the beginning) A connection is required from euw1z1dl039.COMPANY.com (●●●●●●●●●●●●●)                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 2376. This connection is between airflow and docker host to run gblus DAGs.                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 22. This connection is between airflow and docker host to run gblus DAGs.      2. A connection is required from the Jenkins instance (gbinexuscd01 - ●●●●●●●●●●●●●).                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 22. This connection is between Jenkins and the target host required for code deployment purposes.DocumentationService Account ( Jenkins / server access )http://btondemand.COMPANY.com/solution/160303162657677NSA - UNIX - user access to Servers:http://btondemand.COMPANY.com/solution/131014104610578InstructionsHow to add user access to UNIX-GBLMDMHUB-US-NPROD-ADMIN-Ulog in to http://btondemand.COMPANY.com/search NSA - UNIXuser access to Servers - http://btondemand.COMPANY.com/solution/131014104610578go to Request Manager -> Request Catalog Search NSAChoose NSA-UNIX NSA Requests for Unix.ContinueFill Formula Add user access details formualAccount Type-NSA-UNIXName-Morawski, MikolajAD Username-MORAWM03User Domain-EMEARequestID-20200310100151888Request Details BelowRoleName: YesDescription:requestorCommentsList: Hi Team,I created the request to add account (EMEAMORAWM03 to the ADMIN role on the following servers:amraelp00007334amraelp00007335Role name: UNIX-GBLMDMHUB-US-NPROD-ADMIN-U -> member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U -> NSA-UNIX: 20200303065003900Could you please verify if I provided all required information?Regards,MikolajaccessToSpecificServerList_roleLst_2: NobusinessJustificationList: MDM HUB Team access toGBL MDM US HUB nProd Svr1 (amraelp00007334) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM US HUB nProd Svr2 (amraelp00007335) - 
PFE-AWS-MULTI-AZ-DEV-us-east-1regarding Fletcher projectserverLocationList: Not ApplicablenisDomainOtherList: OtherroleGroupAccount_roleLst_6: Add to Role Group(s)roleGroupNameList: UNIX-GBLMDMHUB-US-NPROD-ADMIN-UaccountPrivilegeList_roleLst_7: Add PrivilegesaccountList_roleLst_8: UNIX group membershipunixGroupNameList: UNIX-GBLMDMHUB-US-NPROD-ADMIN-USubmit requestHow to add/create new Service Account with access to UNIX-GBLMDMHUB-US-NPROD-SEROLE-UService Account NameUNIX group namedetailsBTOnDemandLessons Learned mdmusnprmdmhubusnprService Account Name has to contain max 8 charactersGBL32099918iRE Requires Additional Information (GBL32099918i).msglog in to http://btondemand.COMPANY.com/search NSA - UNIXuser access to Servers - http://btondemand.COMPANY.com/solution/131014104610578go to Request Manager -> Request Catalog Search NSAChoose NSA-UNIX NSA Requests for Unix.ContinueFill FormulaNo -> LegacyYesExistingLegacyamraelp00007334amraelp00007335N/AOtherTo manage the Service account and Software for the MDM HUBIt will be used to run microservices from Jenkins CD pipelinePrimary: VARGAA08Secondary: TIRUMS05Service AccountService Account Name: UNIX group nameNPROD:mdmusnpr mdmhubusnpr - Service Account Name have to contain 8 charactersMDM HUB Service Account access (related to Docker microservices and Jenkins CD) forGBL MDM US HUB nProd Svr1 (amraelp00007334) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM US HUB nProd Svr2 (amraelp00007335) - PFE-AWS-MULTI-AZ-DEV-us-east-1regarding Fletcher projectHi Team,I am trying to create the request to create the Service Account for the following two servers. 
amraelp00007334amraelp00007335I want to provide the privileges for this Service Account:Role name: UNIX-GBLMDMHUB-US-NPROD-SEROLE-U -> member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U -> NSA-UNIX: 20200303070216948- dzdo docker * - folder access read/writeComputer role related: UNIX-IoD-global-mdmhub-us-nprod-computers-UCould you please verify if I provided all required information and this Request is correct?Regards,MikolajHow to open ports / create new Security Group - PFE-SG-GBLMDMHUB-US-APP-NPROD-001http://btondemand.COMPANY.com/solution/120906165824277To create a new security group:Create server Security Group and Open Ports on  SC queue Name: GBL-BTI-IOD AWS FULL SUPPORTlog in to http://btondemand.COMPANY.com/ go to Get Support Search for queue: GBL-BTI-IOD AWS FULL SUPPORTSubmit Request to this queue:RequestHi Team,Could you please create a new security group and assign it with two servers.GBL MDM US HUB nProd Svr1 (amraelp00007334) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM US HUB nProd Svr2 (amraelp00007335) - PFE-AWS-MULTI-AZ-DEV-us-east-1Please add the following owners:Primary: VARGAA08Secondary: TIRUMS05(please let me know if approval is required)New Security group Requested: PFE-SG-GBLMDMHUB-US-APP-NPROD-001Please Open the following ports:Port  Application Whitelisted8443 Kong (API proxy) ALL from COMPANY VPN9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN9093 Kafka - SASL_SSL protocol ALL from COMPANY VPN2181 Zookeeper ALL from COMPANY VPN 27017 Mongo ALL from COMPANY VPN9999 HawtIO - administration console ALL from COMPANY VPN9200 Elasticsearch ALL from COMPANY VPN5601 Kibana ALL from COMPANY VPN9100 - 9125 Prometheus exporters ALL from COMPANY VPNApply this group to the following servers:amraelp00007334amraelp00007335Regards,MikolajThis will create a new Security Grouphttp://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32141041iThen these security groups have to be assigned to servers through the IOD portal by the Servers Owner.To 
open new ports:log in to http://btondemand.COMPANY.com/ go to Get Support Search for queue: GBL-BTI-IOD AWS FULL SUPPORTSubmit Request to this queue:RequestHi,Could you please modify the below security group and open the following port.NONPROD security group:Security group: PFE-SG-GBLMDMHUB-US-APP-NPROD-001Port: 2376(this port is related to Docker for encrypted communication with the daemon)The host related to this:amraelp00007334amraelp00007335Regards,MikolajCertificates ConfigurationKafka - GBL32139266i  GO TO:How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=mdm_gbl_us_hub, C=US"keytool -certreq -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -file kafka.gbl-mdm-hub-us-nprod.COMPANY.com.csr -keystore server.keystore.jksSAN:gbl-mdm-hub-us-nprod.COMPANY.comamraelp00007334.COMPANY.com●●●●●●●●●●●●amraelp00007335.COMPANY.com●●●●●●●●●●●●Create guest_user for KAFKA - "CN=kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US":GO TO: How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_user.keystore.jks -dname "CN=kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US"keytool -certreq -alias guest_user -file kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com.csr -keystore guest_user.keystore.jksKong - GBL32144418iopenssl req -nodes -newkey rsa:2048 -sha256 -keyout gbl-mdm-hub-us-nprod.key -out gbl-mdm-hub-us-nprod.csrSubject Alternative Namesgbl-mdm-hub-us-nprod.COMPANY.comamraelp00007334.COMPANY.com●●●●●●●●●●●●amraelp00007335.COMPANY.com●●●●●●●●●●●●EFK - GBL32139762i  , GBL32144243iopenssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-log-management-gbl-us-nonprod.key -out mdm-log-management-gbl-us-nonprod.csr 
mdm-log-management-gbl-us-nonprod.COMPANY.comSubject Alternative Names mdm-log-management-gbl-us-nonprod.COMPANY.comgbl-mdm-hub-us-nprod.COMPANY.comamraelp00007334.COMPANY.com●●●●●●●●●●●●amraelp00007335.COMPANY.com●●●●●●●●●●●●openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode1-gbl-us-nonprod.key -out mdm-esnode1-gbl-us-nonprod.csr mdm-esnode1-gbl-us-nonprod.COMPANY.com - ElasticsearchSubject Alternative Names mdm-esnode1-gbl-us-nonprod.COMPANY.comgbl-mdm-hub-us-nprod.COMPANY.comamraelp00007334.COMPANY.com●●●●●●●●●●●●amraelp00007335.COMPANY.com●●●●●●●●●●●●Domain Configuration:Example request: GBL30514754i "Register domains "mdm-log-management*"log in to http://btondemand.COMPANY.com/getsupportWhat can we help you with? - Search for "Network Team Ticket"Select the most relevant topic - "DNS Request"Submit a ticket to this queue.Ticket Details:RequestHi,Could you please register the following domains:ADD the below DNS entry:========================mdm-log-management-gbl-us-nonprod.COMPANY.com              Alias Record to                             amraelp00007334.COMPANY.com[●●●●●●●●●●●●]gbl-mdm-hub-us-nprod.COMPANY.com                                        Alias Record to                             amraelp00007335.COMPANY.com[●●●●●●●●●●●●]Kind regards,MikolajEnvironment InstallationPre:rmdir /var/lib/ -> dockerln -s /app/docker /var/lib/dockerumount /var/lib/dockerlvremove /dev/datavg/varlibdockervgreduce datavg /dev/nvme1n1Clear content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" to use daemon.json fileAnsible:ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook 
prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-filecopy daemon_docker_tls_overlay.json.j2 to /etc/docker/daemon.jsonFIX using - https://stackoverflow.com/questions/44052054/unable-to-start-docker-after-configuring-hosts-in-daemon-json$ sudo cp /lib/systemd/system/docker.service /etc/systemd/system/\n$ sudo sed -i 's/\\ -H\\ fd:\\/\\///g' /etc/systemd/system/docker.service\n$ sudo systemctl daemon-reload\n$ sudo service docker restartDocker Version:amraelp00007334:root:[10:10 AM]:/app> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007335:root:[10:04 AM]:/app> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1[root@amraelp00008810 docker]# docker --versionDocker version 19.03.13-ce, build 4484c46Configure Registry Login (registry-gbicomcloud.COMPANY.com):ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●●● root accessansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●● service accountansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus3 
--vault-password-file=~/vault-password-fileRegistry (manual config):  Copy certs: /etc/docker/certs.d/registry-gbicomcloud.COMPANY.com/ from (mdm-reltio-handler-env\\ssl_certs\\registry)  docker login registry-gbicomcloud.COMPANY.com (login on service account too)  user/pass: mdm/**** (check mdm-reltio-handler-env\\group_vars\\all\\secret.yml)Playbooks installation order:Install node_exporter:    ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-fileInstall Kafka  ansible-playbook install_hub_broker.yml -i inventory/dev_gblus/inventory --limit broker --vault-password-file=~/vault-password-fileInstall Mongo   ansible-playbook install_hub_db.yml -i inventory/dev_gblus/inventory --limit mongo --vault-password-file=~/vault-password-fileInstall Kong   ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-fileUpdate KONG Config (IT NEEDS TO BE UPDATED ON EACH ENV (DEV, QA, STAGE)!!)  
ansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file  Verification:    openssl s_client -connect amraelp00007335.COMPANY.com:8443 -servername gbl-mdm-hub-us-nprod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cerInstall EFK  ansible-playbook install_efk_stack.yml -i inventory/dev_gblus/inventory --limit efk --vault-password-file=~/vault-password-fileInstall FLUENTD Forwarder (without this docker logging may not work and docker commands will be blocked)  ansible-playbook install_fluentd_forwarder.yml -i inventory/dev_gblus/inventory --limit docker-services --vault-password-file=~/vault-password-fileInstall Prometheus services:  mongo_exporter:    ansible-playbook install_prometheus_mongo_exporter.yml -i inventory/dev_gblus/inventory --limit mongo_exporter1 --vault-password-file=~/vault-password-file  cadvisor:    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file  sqs_exporter:     ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    ansible-playbook install_prometheus_stack.yml -i inventory/stage_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    ansible-playbook install_prometheus_stack.yml -i inventory/qa_gblus/inventory --limit prometheus1 --vault-password-fileInstall Consul ansible-playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file# After operation get SecretID from consul container. 
On the container execute the following command:$ consul acl bootstrapand copy it as mgmt_token to consul secrets.ymlAfter install consul step run update consul playbookUpdate Consul ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v Setup Mongo Indexes and Collections:Create Collections and Indexes\nCreate Collections and Indexes:\n entityHistory\n\n db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\n db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"}); \n \n \n \n\n entityRelations\n db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\n db.entityRelations.createIndex({status: -1}, 
{background: true, name: "idx_status"});\n db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \n db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n\n\n\n LookupValues\n db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\n db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});\n db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\n db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});\n db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});\n db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\n\n ErrorLogs\n db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\tbatchEntityProcessStatus\n db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: 
"idx_findByBatchNameAndSourceId"});\n db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\n db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\n db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\n batchInstance\n\t\t- create collection\n\n\trelationCache\n\t\tdb.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\n DCRRequests\n db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\n db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n \n entityMatchesHistory \n db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\n\n Connect ENV with Prometheus:Update config -  ansible-playbook install_prometheus_configuration.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-filePrometheus config\nnode_exporter\n - targets:\n - "amraelp00007334.COMPANY.com:9100"\n - "amraelp00007335.COMPANY.com:9100"\n labels:\n env: gblus_dev\n component: node\n\n\nkafka\n - targets:\n - "amraelp00007335.COMPANY.com:9101"\n labels:\n env: gblus_dev\n node: 1 \n component: kafka\n \n \nkafka_exporter\n\n - targets:\n - "amraelp00007335.COMPANY.com:9102"\n labels:\n trade: gblus\n node: 1\n component: kafka\n env: gblus_dev \n\n\nComponents:\n 
jmx_manager\n - targets:\n - "amraelp00007335.COMPANY.com:9104"\n labels:\n env: gblus_dev\n node: 1\n component: manager\n - targets:\n - "amraelp00007335.COMPANY.com:9108"\n labels:\n env: gblus_qa\n node: 1\n component: manager\n - targets:\n - "amraelp00007335.COMPANY.com:9112"\n labels:\n env: gblus_stage\n node: 1\n component: manager \n jmx_event_publisher\n - targets:\n - "amraelp00007334.COMPANY.com:9106"\n labels:\n env: gblus_dev\n node: 1\n component: publisher \n - targets:\n - "amraelp00007334.COMPANY.com:9110"\n labels:\n env: gblus_qa\n node: 1\n component: publisher \n - targets:\n - "amraelp00007334.COMPANY.com:9104"\n labels:\n env: gblus_stage\n node: 1\n component: publisher \n jmx_reltio_subscriber\n - targets:\n - "amraelp00007334.COMPANY.com:9105"\n labels:\n env: gblus_dev\n node: 1\n component: subscriber\n - targets:\n - "amraelp00007334.COMPANY.com:9109"\n labels:\n env: gblus_qa\n node: 1\n component: subscriber\n - targets:\n - "amraelp00007334.COMPANY.com:9113"\n labels:\n env: gblus_stage\n node: 1\n component: subscriber\n jmx_batch_service\n - targets:\n - "amraelp00007335.COMPANY.com:9107"\n labels:\n env: gblus_dev\n node: 1\n component: batch_service\n - targets:\n - "amraelp00007335.COMPANY.com:9111"\n labels:\n env: gblus_qa\n node: 1\n component: batch_service\n - targets:\n - "amraelp00007335.COMPANY.com:9115"\n labels:\n env: gblus_stage\n node: 1\n component: batch_service\n\nsqs_exporter \n - targets:\n - "amraelp00007334.COMPANY.com:9122"\n labels:\n env: gblus_dev\n component: sqs_exporter\n - targets:\n - "amraelp00007334.COMPANY.com:9123"\n labels:\n env: gblus_qa\n component: sqs_exporter\n - targets:\n - "amraelp00007334.COMPANY.com:9124"\n labels:\n env: gblus_stage\n component: sqs_exporter\n\n\ncadvisor\n\n - targets:\n - "amraelp00007334.COMPANY.com:9103"\n labels:\n env: gblus_dev\n node: 1\n component: cadvisor_exporter\n - targets:\n - "amraelp00007335.COMPANY.com:9103"\n labels:\n env: gblus_dev\n node: 2\n 
component: cadvisor_exporter \n\n\n \nmongodb_exporter\n\n - targets:\n - "amraelp00007334.COMPANY.com:9120"\n labels:\n env: gblus_dev\n component: mongodb_exporter\n \n\nkong_exporter\n - targets:\n - "amraelp00007335.COMPANY.com:9542"\n labels:\n env: gblus_dev\n component: kong_exporter\n" + }, + { + "title": "Getting access to PDKS Rancher and Kubernetes clusters", + "pageID": "259433725", + "pageLink": "/display/GMDM/Getting+access+to+PDKS+Rancher+and+Kubernetes+clusters", + "content": "Go to https://requestmanager.COMPANY.com/#/Search nsa-unix and select first link (NSA-UNIX)You will see the form for requesting an access which should be fulfilled like on an example below: Do you need to be added to any Role Groups? YESDo you need privileged access to specific Servers in a Role Group? NOPlease provide the Server Location: Not applicableNIS Domain: Other Add to Role Group(s) UNIX-GBLMDMHUB-US-PROD-ADMIN-U or UNIX-GBLMDMHUB-US-NPROD-ADMIN-U (depends on an environment)Please provide information about Account Privileges: Add Privileges  Please choose the Type of Privilege to Add: UNIX group membershipPlease provide the UNIX Group Name:  UNIX-GBLMDMHUB-US-PROD-COMPUTERS-U or UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UPlease provide a brief Business Justification:For prod:atp-mdmhub-prod-ameratp-mdmhub-prod-emeaatp-mdmhub-prod-apacPDKS EKS clusters regarding project BoldMove.For nprod:atp-mdmhub-nprod-ameratp-mdmhub-nprod-emeaatp-mdmhub-nprod-apacPDKS EKS clusters regarding project BoldMove.Comments or Special Instructions:  I am creating this request to have an access to Global MDM HUB prod clusters. 
" + }, + { + "title": "UI:", + "pageID": "308256633", + "pageLink": "/pages/viewpage.action?pageId=308256633", + "content": "" + }, + { + "title": "Add new role and add users to the UI", + "pageID": "308256635", + "pageLink": "/display/GMDM/Add+new+role+and+add+users+to+the+UI", + "content": "MDM HUB UI roles standards:Here is the role standard that has to be used to get access to the UI by specific users:EnvironmentsNON-PRODPRODDEVQASTAGEPRODGBL****EMEA****AMER****APAC****GBLUS****ALL****Use the 'ALL' keyword with connection to the 'NON-PROD' and 'PROD' - using this approach will produce only 2 roles for the system.Role Schema:______ - COMM - ALL or GBL/AMER/EMEA e.t.c (recommendation is ALL) - MDMHUB  - UI  - PROD / NON-PROD  or specific based on a table above HUB_ADMIN / PTRS e.t.c Important: name has to be in sync with HUB configuration users in e.g http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users    ROLEexample roles:HUB ADMIN → COMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLE - HUB UI group for hub-admin users - access to all clusters, and non-prod environments.HUB ADMIN → COMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE - HUB UI group for hub-admin users - access to all clusters, and prod environments.PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and prod environments.The system is the user name used in HUB. 
All users related to the specific system can have access to the specific role.For example, if someone from the PTRS system wants to have access to the UI, how to process such request:Add user to existing UI roleGo to https://requestmanager1.COMPANY.com/Group/Default.aspxsearch a group:If a role is found in search results you can check current members or request a new memberadd a new user:savego to Cart https://requestmanager1.COMPANY.com/group/Review.aspxand submit the request.If the role does not exist:First, create a new role:click Create a NEW Security Grouphttps://requestmanager1.COMPANY.com/group/Create.aspx?type=secregion -EMEAname - the name of a group primary owner - AJsecondary owner  - Mikołaj MorawskiDescription - e.g. HUB UI group for hub-admin users - access to all clusters, and prod environments.now you can add users to this groupSecond, configure roles and access to the user in HUB:Important: name has to be in sync with HUB configuration users in http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users Users can have access to the following roles and APIs:https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.htmlUSER and ADMIN roles:MODIFY_KAFKA_OFFSET             - "/kafka/offset" allows modifying offset on specific Kafka topics related to the systemRESEND_KAFKA_EVENT               - "/jobs/hub/resend_events" - resend events to a specific topicUPDATE_IDENTIFIERS                 -   "/jobs/hub/update_identifiers" - starts update identifiers flowMERGE_UNMERGE_ENTITIES         - "/jobs/hub/merge_unmerge_entities" - starts merge unmerge flow REINDEX_ENTITIES                         - "/jobs/mdm/reindex_entities" - executes Reltio Reindex APICLEAR_CACHE_BATCH                  - "/jobs/hub/clear_batch_cache" - executes clear ETL batch cache operationHUB ADMIN roles:RESEND_KAFKA_EVENT_COMPLEX    - "/jobs/hub/resend_events" - resend events to a specific topic using complex API 
 RECONCILE                - "/jobs/hub/reconciliation_entities" - regenerates events to HUB using simple API - starts JOBRECONCILE_COMPLEX        - "/jobs/hub/reconciliation_entities_complex" - regenerates events to HUB using complex API - starts the jobLIST_PARTIALS                    - "/precallback/partials") - list or resubmit partials that stuck in the queueAdd roles and topics to the user:.e.g: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users/ptrs.yamlPut "kafka" section with specific kafka topics:Add mdm admin section with specific roles and access to topics:e.g.     mdm_admin:      reconciliationTargets:        - emea-dev-out-full-ptrs-eu        - emea-dev-out-full-ptrs-global2        - emea-qa-out-full-ptrs-eu        - emea-qa-out-full-ptrs-global2        - emea-stag-out-full-ptrs-eu        - emea-stag-out-full-ptrs-global2        - gbl-dev-out-full-ptrs        - gbl-dev-out-full-ptrs-eu        - gbl-dev-out-full-ptrs-porind        - gbl-qa-out-full-ptrs-eu        - gbl-stage-out-full-ptrs        - gbl-stage-out-full-ptrs-eu        - gbl-stage-out-full-ptrs-porind      sources:        - ALL      countries:        - ALL      roles: &roles        - MODIFY_KAFKA_OFFSET        - RESEND_KAFKA_EVENT      kafka: *kafkaREMEMBER TO ADD: Add mdm_auth  section  this  will  start  the  UI  access.Without this section the UI will not show HUB Admin tools! mdm_auth: roles: *rolesThe mdm_auth section and roles there will cause the user will only see 2 pages in UI - in that case, MODIFY OFFSET and RESET_KAFKA_EVENTSWhen the roles and users are configured on the HUB end go to the first step and add selected users to the selected roles.Starting from this time any new e.g. PTRS user can be added to the COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE and will be able to log in to UI and see the pages and use API through UI." 
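The role-naming schema described above is mechanical enough to sketch in code. Below is a minimal illustration of building a role name from its parts; the helper name `build_ui_role` and the keyword sets are assumptions for this sketch, not part of HUB itself:

```python
# Hypothetical helper illustrating the UI role-name schema:
# COMM_<CLUSTER>_MDMHUB_UI_<ENV>_<SYSTEM>_ROLE
CLUSTERS = {"ALL", "GBL", "EMEA", "AMER", "APAC", "GBLUS"}  # assumed valid cluster keywords
ENVS = {"NON-PROD", "PROD", "DEV", "QA", "STAGE"}           # assumed valid environment keywords


def build_ui_role(cluster: str, env: str, system: str) -> str:
    """Build a HUB UI role name; raise ValueError on an unknown keyword."""
    if cluster not in CLUSTERS:
        raise ValueError(f"unknown cluster: {cluster}")
    if env not in ENVS:
        raise ValueError(f"unknown environment: {env}")
    return f"COMM_{cluster}_MDMHUB_UI_{env}_{system}_ROLE"


print(build_ui_role("ALL", "NON-PROD", "PTRS"))
# → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE  (matches the PTRS example role above)
```

Using the 'ALL' keyword for cluster, as recommended above, keeps the number of groups per system down to two (one per PROD/NON-PROD).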
+ }, + { + "title": "Current users and roles", + "pageID": "347636361", + "pageLink": "/display/GMDM/Current+users+and+roles", + "content": "EnvironmentClientClusterRoleCOMPANY UsersHUB internal userNON-PRODMDMHUBALLCOMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLEALL HUB Team Members +Andrew.J.Varganin@COMPANY.comNishith.Trivedi@COMPANY.come.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/users/hub_admin.yamllPRODMDMHUBALLCOMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE    ALL HUB Team Members+Andrew.J.Varganin@COMPANY.comNishith.Trivedi@COMPANY.come.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/users/hub_admin.yamlNON-PRODMDMETLALLCOMM_ALL_MDMHUB_UI_NON-PROD_MDMETL_ADMIN_ROLEAnurag.Choudhary@COMPANY.comShikha@COMPANY.comRaghav.Gupta@COMPANY.comKhushboo.Bharti@COMPANY.comManisha.Kansal@COMPANY.comAjit.Tiwari@COMPANY.comSayak.Acharya@COMPANY.comJeevitha.R@COMPANY.comPriya.Suthar@COMPANY.comJoymalya.Bhattacharya@COMPANY.comChinthamani.Kalebu@COMPANY.comArindam.Roy2@COMPANY.comNarendraSingh.Chouhan@COMPANY.comAdrita.Sarkar@COMPANY.comManish.Panda@COMPANY.comMeghana.Das@COMPANY.comHanae.Laroussi@COMPANY.comSomil.Sethi@COMPANY.comShivani.Jha@COMPANY.comPradnya.Raikar@COMPANY.comKOMAL.MANTRI@COMPANY.comAbsar.Ahsan@COMPANY.comAsmita.Datta@COMPANY.come.g. 
http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/users/mdmetl_admin.yamlPRODMDMETLALLCOMM_ALL_MDMHUB_UI_PROD_MDMETL_ADMIN_ROLEAnurag.Choudhary@COMPANY.comShikha@COMPANY.comRaghav.Gupta@COMPANY.comKhushboo.Bharti@COMPANY.comManisha.Kansal@COMPANY.comAjit.Tiwari@COMPANY.comSayak.Acharya@COMPANY.comJeevitha.R@COMPANY.comPriya.Suthar@COMPANY.comJoymalya.Bhattacharya@COMPANY.comChinthamani.Kalebu@COMPANY.comArindam.Roy2@COMPANY.comNarendraSingh.Chouhan@COMPANY.comManish.Panda@COMPANY.comMeghana.Das@COMPANY.comHanae.Laroussi@COMPANY.comSomil.Sethi@COMPANY.comShivani.Jha@COMPANY.comPradnya.Raikar@COMPANY.comKOMAL.MANTRI@COMPANY.comAsmita.Datta@COMPANY.come.g. https://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/users/mdmetl_admin.yamlNON-PRODPTRSALLCOMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLEsagar.bodala@COMPANY.comAishwarya.Shrivastava@COMPANY.comTanika.Das@COMPANY.comRishabh.Singh@COMPANY.comBhushan.Shanbhag@COMPANY.comHasibul.Mallik@COMPANY.comAbhinavMishra.Mishra@COMPANY.comAsmita.Mishra@COMPANY.comPrema.NayagiGS@COMPANY.come.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users/ptrs.yamlPRODPTRSALLCOMM_ALL_MDMHUB_UI_PROD_PTRS_ROLEsagar.bodala@COMPANY.comAishwarya.Shrivastava@COMPANY.comTanika.Das@COMPANY.comRishabh.Singh@COMPANY.comBhushan.Shanbhag@COMPANY.comHasibul.Mallik@COMPANY.comAbhinavMishra.Mishra@COMPANY.comAsmita.Mishra@COMPANY.comPrema.NayagiGS@COMPANY.come.g. 
http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/users/ptrs.yamlNON-PRODCOMPANYALLCOMM_ALL_MDMHUB_UI_NON-PROD_COMPANY_ROLEnavaneel.ghosh@COMPANY.comhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1707/diff#amer/nprod/users/COMPANY.ymlPRODCOMPANYALLCOMM_ALL_MDMHUB_UI_PROD_COMPANY_ROLEnavaneel.ghosh@COMPANY.comhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1707/diff#amer/nprod/users/COMPANY.yml" + }, + { + "title": "SSO and roles", + "pageID": "322564881", + "pageLink": "/display/GMDM/SSO+and+roles", + "content": "To log in to the UI dashboard you have to be on the COMPANY network. SSO authorization is done via SAML, using COMPANY PingFederate.Auth flowSSO loginSAML login roleAfter successful authentication with SAML we receive roles from Active Directory (Group Manager - distribution list)Then we decode the roles using the following regexp:COMM_(?[A-Z]+)_MDMHUB_UI_(?NON-PROD|PROD)_(?.+)_ROLEWhen a role matches the environment and tenant, we obtain the roles by looking up the system in the user configuration.Backend AD groupsServiceNPROD GroupPROD GroupDescriptionKibanaCOMM_ALL_MDMHUB_KIBANA_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KIBANA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KIBANA_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_KIBANA_PROD_VIEWER_ROLEGrafanaCOMM_ALL_MDMHUB_GRAFANA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_GRAFANA_PROD_VIEWER_ROLEAkhqCOMM_ALL_MDMHUB_KAFKA_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KAFKA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KAFKA_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_KAFKA_PROD_VIEWER_ROLEMonitoringCOMM_ALL_MDMHUB_ALL_NON-PROD_MON_ROLECOMM_ALL_MDMHUB_ALL_PROD_MON_ROLEThis group aggregates users that are responsible for monitoring of MDMHUB AirflowCOMM_ALL_MDMHUB_AIRFLOW_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_AIRFLOW_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_AIRFLOW_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_AIRFLOW_PROD_VIEWER_ROLE" + }, + { + "title": "UI Connect Guide", + 
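The role-decoding step above can be sketched with Python's `re` module. Note that the named groups in the regexp lost their names in the export; the names `cluster`, `env` and `system` used below are assumptions reconstructed from the role schema, not the original group names:

```python
import re

# Assumed reconstruction of the role-decoding regexp from the SSO page above;
# the group names (cluster/env/system) are guesses -- the originals were lost in export.
ROLE_RE = re.compile(
    r"COMM_(?P<cluster>[A-Z]+)_MDMHUB_UI_(?P<env>NON-PROD|PROD)_(?P<system>.+)_ROLE"
)


def decode_role(ad_group: str):
    """Return {'cluster': ..., 'env': ..., 'system': ...} for a matching AD group, else None."""
    m = ROLE_RE.fullmatch(ad_group)
    return m.groupdict() if m else None


print(decode_role("COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE"))
# → {'cluster': 'ALL', 'env': 'NON-PROD', 'system': 'PTRS'}
```

Groups that do not follow the UI schema (for example the Kibana/Grafana backend groups listed above) simply fail to match and are ignored by this decoder.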
"pageID": "322540727", + "pageLink": "/display/GMDM/UI+Connect+Guide", + "content": "Log in to UI and switch TenantsTo log in to UI please use the following link: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-devLog in to UI using your COMPANY credentials:There is no need to know each UI address, you can easily switch between Tenants using the following link (available on the TOP RIGHT corner in UI near the USERNAME):What pages are available with the default VIEW roleBy default, you are logged in with the default VIEW role, the following pages are available:HUB StatusYou can use the HUB Dashboard main page that contains HUB platform status: Event processing details, Snowflake refresh time, started batches and ETA to load data to Reltio or get Events from Reltio.Ingestion Services ConfigurationThis page contains the documentation related to the Data Quality checks, Source Match Categorization, Cleansing & Formatting, Auto-Fills, and Minimum Viable Profile Checks.You can choose a filter to switch between different entity types and use input boxes to filter results.You can use the 'Category' filter to include the operations that you are interested inYou can use the 'Query' filter and put any text to find what you are looking for (e.g. 'prefix' to find rules with prefix word)You can use the 'Date' filter to find rules created or updated after a specific time - now using this filter you can easily find the rules added after data reload and reload data one more time to reflect changes. 
This page contains also documentation related to duplicate identifiers and noise lists.You can choose a  filter to switch between different entity types and use input boxes to filter resultsIngestion Services TesterThis page contains the JSON tester, input JSON and click the 'Test' button to check the output JSON with all rules appliedClick the 'Difference' to get only changed sectionsClick the 'Validation result' to get the rules that were executed.More details here: HUB UI User GuideWhat operations are available in the UIAs a user, you can request access to the technical operations in HUB. The details on how to access more operations are described in the section below.Here you will get to know the different UI operations and what can be done using these operations:HUB Admin allows to:Kafka OffsetTechnical operationOn this page user can modify Kafka offset on specific consumer groupSystem/User that wants to have access to this page will be allowed to maintain the consumer group offset, change to:latestearliestspecific date timeshift by a specific number of events.HUB ReconciliationTechnical operationUsed internally by HUB Team.This operation allows us to mimic Reltio events generation - this operation generates the events to the input HUB topic so that we can reprocess the events.You can use this page and generates events by:provide an input array with entity/relation URIsorprovide the query and select the source/market that you want to reprocess.Kafka Republish EventsTechnical operationThis operation can be used to generate events for your Kafka topicUse case - you are consuming data from HUB and you want to test something on non-prod environments and consume events for a specific market one more time. 
You want to receive 1000 events for France market for your testing.You can use this page and generates events for the target topic:Specify the Countries/Sources/Limits/Dates and Target Reconciliation topic - as a result, you will receive the events.Reltio ReindexTechnical operationThis operation executes the Reltio Reindexing operationYou can use this page and generates events by:provide the query and select the source/market that you want to reprocess.orprovide the input file with entity/relation URIs, that will be sent to Reltio API.Merge/Unmerge EntitiesBusiness operationThis operation consumes the input file and executes the merge/unmerge operations in ReltioMore details about the file and process are described here: Batch merge & unmergeUpdate IdentifiersBusiness operationThis operation consumes the input file and executes the merge/unmerge operations in ReltioMore details about the file and process are described here: Batch update identifiersClear CacheBusiness operationClear ETL Batch CacheMore details about the file and process are described here: Batch clear ETL data load cacheHow to request additional access to new operationsPlease send the following email to the HUB DL: DL-ATP_MDMHUB_SUPPORT@COMPANY.comSubject:HUB UI - Access request for Body:Please provide the access / update the existing access for to HUB Admin operations.IDDetailsComments:1Action neededAdd user to the HUB UIEdit user in the HUB UI (please provide the existing group name)2TenantGBL, EMEA, AMER, GBLUS, APAC/ALLTenant - more details in EnvironmentsBy default please select ALL Tenants, but if you need access only to a specified one please select.3Environments PROD / NON-PROD  or specific: DEV/QA/STAGE/PRODBy default please select PROD / NON-PROD environments, but if you need access only to a specified one please select.4Permissions rangeChoose the operation:Kafka OffsetHUB ReconciliationKafka Republish EventsReltio ReindexMerge/Unmerge EntitiesUpdate IdentifiersClear Cache5COMPANY 
TeamETL/COMPANY or DSR or Change Management e.t.c8Business justificationNeeds access to execute merge unmerge operation in EMEA/AMER/APAC PROD Reltio9Point of contactIf you are from the system please provide the DL email and system details.7Sourcesrequired in Events/Reindex/Reconciliation operations3Countriesrequired in Events/Reindex/Reconciliation operationsThe request will be processed after Andrew.J.Varganin@COMPANY.com approval. In the response, you will receive the Group Name. Please use this for future reference.e.g. PTRS system roles used in the PTRS system to manage UI operations.   PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.   PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and prod environments.HUB Team will use the following SOP to add you to a selected role: Add a new role and add users to the UIGet HelpIn case of any questions, the GetHelp page or full HUB documentation is available here (UI page footer):GetHelpWelcome to the Global MDM Home!" + }, + { + "title": "Users:", + "pageID": "302705550", + "pageLink": "/pages/viewpage.action?pageId=302705550", + "content": "" + }, + { + "title": "Add Direct API User to HUB", + "pageID": "273694347", + "pageLink": "/display/GMDM/Add+Direct+API+User+to+HUB", + "content": "To add a new user to MDM HUB direct API a few steps must be done. 
This document describes what activities must be fulfilled and who is responsible for them.Create PingFederate user - client's responsibility  If the client's authentication method is oauth2 then there is a need to create a PingFederate user.To add a user you must have a Ping Federate user created: How to Request PingFederate (PXED) External OAuth 2.0 Account Caution: If the authentication method is key auth then the HUB Team generates it and sends it in a secure way to the client.Send a request to MDM HUB that contains all necessary data - client's responsibility Send a request to create a new user with direct API access to HUB Team: dl-atp_mdmhub_support@COMPANY.comThe request must contain the following:1Action needed2PingFederate username3Countries4Tenant5Environments6Permissions range7Sources8Business justification9Point of contact10GatewayDescriptionAction needed – this is where you decide if you want to create a new user or modify an existing one.PingFederate username – you need to create a user on the PingFederate side. Its username is crucial to authenticate on the HUB side. If you do not have a PingFederate user please check: https://confluence.COMPANY.com/display/GMDM/How+to+request+PingFederate+%28PXED%29+external+OAuth+2.0+accountCountries – the list of countries to which access will be grantedTenant – a tenant or list of tenants where the user will be created. Please note that if you have a connection from the open internet only EMEA is possible. If you have a local application split by Reltio Region it is recommended to request a local tenant. If you have a global solution you can call EMEA and your requests will be routed by HUB.Environments – list of environment instances – DEV/QA/STG/PRODPermissions range – do you need write or read/write access? To which entities do you need access? 
HCO/HCP/MCOSources – to which sources do you need to have access?Business justification – please describeWhy do you have a connection with HUB?Why the user must be created/modified?What’s the project name?Who’s the project manager?Point of contact – please add a DL group name - in case of any issues connected with that userWhich API you want to call: EMEA, AMER, APAC,etcPrepare new user on MDM HUB side - HUB Team Responsibility Store clients' request in dedicated confluence space: ClientsIn the COMPANY tenants, there is a need to connect the new user with API Router directly.Change API router configuration, and add a new user with:user PingFederate name or when the user uses key auth add API key to secrets.yamlsourcescountriesrolesChange Manager configuration, addsourcescountriesChange DCR service configuration - if applicabledcrServiceConfig-  initTrackingDetailsStatus, initTrackingDetail, dcrTyperoles - CREATE_DCR, GET_DCRYou need to check how the request will be routed. If there is a  need to make a routing configuration, follow these steps:change API Router configuration by adding new countries to proper tenantschange Manager configuration in destinated tenant by addingsourcescountries" + }, + { + "title": "Add External User to MDM Hub", + "pageID": "164470196", + "pageLink": "/display/GMDM/Add+External+User+to+MDM+Hub", + "content": "Kong configurationFirstly You need to have users logins from Ping Federate for every envGo folder inventory/{{ kong_env }}/group_vars/kong_v1 in repository mdm-hub-env-configFind section PLUGINS in file kong_{{ env }}.yml and then rule with name mdm-external-oauthin this section find "users_map"add there new entry with following rule:\n- ":"\nchange False to True in create_or_update setting for this rule\ncreate_or_update: True\nRepeat this steps( a-c ) for every environment {{ env }} you want to apply changes to(e.g., dev, qa, stage){{ kong_env }} - environment on which kong instance is deployed{{ env }} - environment on which 
MDM Hub instance is deployedkong_envenvdevdev, mapp, stageprodproddev_gblusdev_gblus, qa_gblus, stage_gblusprod_gblusprod_gblusdev_usdev_usprod_usprod_usGo to folder inventory/{{ env }}/group_vars/gw-servicesIn file gw_users.yml add section with new user after last added user, specify roles and sources needed for this user. E.g.,User configuration\n- name: ""\n description: ""\n defaultClient: "ReltioAll"\n getEntityUsesMongoCache: yes\n lookupsUseMongoCache: yes\n roles:\n - \n countries:\n - US\n sources: \n\t- \nRepeat this step for every environment {{ env }} you want to apply changes to( e.g., dev, qa, stage)After configuration changes You need to update kong using following commandfor nonprod gblus envsGBLUS NPROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/ansible.secret\nfor prod gblus envGBLUS PROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/ansible.secret\nfor nprod gbl envsGBL NPROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/dev/inventory --vault-password-file=~/ansible.secret\nfor prod gbl envGBL PROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/prod/inventory --vault-password-file=~/ansible.secret\nfor nprod US envUS NPROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/dev_us/inventory --vault-password-file=~/ansible.secret\nfor prod USUS PROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/prod_us/inventory --vault-password-file=~/ansible.secret\nTroubleshootingIn case when there will be a problem with deploying You need to set create_or_update as True also for route and manager service.Ansible secretTo use this script You need to have ansible.secret file created in your home directory or adjust vault-password-file if needed.Another option is to change --vault-password-file to --ask-vault and 
provide ansible vault during the runtime.Before commiting changes find all occurrences where You set create_or_update to true and change it again to:\ncreate_or_update: False\nThen commit changesRedeploy gateway services on all modified envs. Before deploying please verify if there is no batch running in progressJenkins job to deploy gateway services:https://jenkins-gbicomcloud.COMPANY.com/job/mdm-gateway/" + }, + { + "title": "Add new Batch to HUB", + "pageID": "310944945", + "pageLink": "/display/GMDM/Add+new+Batch+to+HUB", + "content": "To add a new batch to MDM HUB  a few steps must be done. That document describes what activities must be fulfilled and who is responsible for them.Check source and country configurationThe first step is to check if DQ rules and SMC are configured for the new source. Repository: mdm-config-registry; Path: \\config-hub\\\\mdm-manager\\quality-service\\quality-rules\\If not you have to immediately send an email to a person that requested a new batch. This condition is usually performed on a separate task as prerequisite to adding the batch configuration."This is a new source. You have to send DQ and SMC requirements for a new source to A.J. and Eleni. Based on it a new HUB requirement deck will be prepared. When we received it the task can be planned. Until that time the task is blocked." The same exercise has to be made when we get requirements for a new country.Authorization and authenticationClients use mdmetl batch service user to populate data to Reltio. 
There are no changes needed.Send a request to MDM HUB that contains all necessary data - client's responsibility Send a request to create a new batch to HUB Team: dl-atp_mdmhub_support@COMPANY.comThe request must contain the following:subject arealist of stages HCP/HCO/Affiliationsdata sourcecountries listsource namebatch namefile typefull/incrementalfrequencybusiness justificationsingle point of contact on client sidePrepare new batch on MDM HUB side - HUB Team Responsibility Repository: mdm-hub-cluster-envChanges on manager levelIn mdmetl.yaml the configuration must be extended with:Path: \\\\\\users\\mdmetl.yamlNew sourcesNew countriesAdd new batch with stages to batch_service, example:batch_service: defaultClient: "ReltioAll" description: "MDMETL Informatica IICS User - BATCH loader" batches: "ONEKEY": <- new batch name - "HCPLoading" <- new stage - "HCOLoading" <- new stage - "RelationLoading" <- new stageIn the MDM manager config, if the batch includes the RelationLoading stage then add to the refAttributesEnricher configuration relationType: ProviderAffiliationsrelationType: ContactAffiliationsrelationType: ACOAffiliationsNew sourcesNew countriesChanges on batch-service levelBased on the stages that are being added, there is a need to change the batch-service configuration.Path: \\\\\\namespaces\\\\config_files\\batch-service\\config\\application.ymlAdd configuration in BatchWorkflows, example:- batchName: "PFORCERX_ODS" batchDescription: "PFORCERX_ODS - HCO, HCP, Relation entities loading" stages: - stageName: "HCOLoading" - stageName: "HCOSending" softDependentStages: [ "HCOLoading" ] processingJobName: "SendingJob" - stageName: "HCOProcessing" dependentStages: [ "HCOSending" ] processingJobName: "ProcessingJob" # -------------------------------- - stageName: "HCPLoading" - stageName: "HCPSending" softDependentStages: [ "HCPLoading" ] processingJobName: "SendingJob" - stageName: "HCPProcessing" dependentStages: [ "HCPSending" ] processingJobName: "ProcessingJob" # 
------------------ - stageName: "RelationLoading" - stageName: "RelationSending" dependentStages: [ "HCOProcessing", "HCPProcessing" ] softDependentStages: [ "RelationLoading" ] processingJobName: "SendingJob" - stageName: "RelationProcessing" dependentStages: [ "RelationSending" ] processingJobName: "ProcessingJob"If the batch is a full load then two additional stages must be configured; their purpose is to allow deleting profiles:- stageName: "EntitiesUnseenDeletion" dependentStages: [ "HCOProcessing" ] processingJobName: "DeletingJob"- stageName: "HCODeletesProcessing" dependentStages: [ "EntitiesUnseenDeletion" ] processingJobName: "ProcessingJob"2. Add configuration to bulkConfiguration, example:"PFORCERX_ODS": HCOLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-hco" maxInFlightRequest: 5 HCPLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-hcp" maxInFlightRequest: 5 RelationLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-rel" maxInFlightRequest: 5All new dedicated topics must be configured. There is a need to add configuration in kafka-topics.yml, example:emea-prod-internal-batch-pulse-kam-hco: partitions: 6 replicas: 3 3. 
Add configuration in sendingJob, example:PFORCERX_ODS: HCOSending: source: topic: "${env}-internal-batch-pforcerx-ods-hco" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack" HCPSending: source: topic: "${env}-internal-batch-pforcerx-ods-hcp" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack" RelationSending: source: topic: "${env}-internal-batch-pforcerx-ods-rel" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"4. If a batch is full load then deletingJob must be configured, for example:PULSE_KAM: EntitiesUnseenDeletion: maxDeletesLimit: 10000 queryBatchSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioResponseTopic: "${env}-internal-async-all-mdmetl-user-ack"" + }, + { + "title": "How to Request PingFederate (PXED) External OAuth 2.0 Account", + "pageID": "263491721", + "pageLink": "/display/GMDM/How+to+Request+PingFederate+%28PXED%29+External+OAuth+2.0+Account", + "content": "This instruction describes the Client steps that should be triggered to create the PingFederate account. Referring to security requirements HUB should only know the details about the UserName created by the PXED Team. HUB is not requesting external accounts, passwords and all the details are shared only with the Client. The client is sharing the user name to HUB and only after the User name is configured Client will gain the access to HUB resources. Contact Persons:Varganin, A.J. / DL-ATP_MDMHUB_SUPPORT@COMPANY.com - All details related to VCAS Reference number,CMDB ID (Production Deployment),IPRM Solution profile number and other details. 
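The `dependentStages`/`softDependentStages` lists in the batchWorkflows examples above form a dependency graph between stages. As a minimal sketch (assuming, for this illustration only, that hard and soft dependencies both act as ordering edges), a valid stage execution order can be resolved with Python's standard-library `graphlib`:

```python
from graphlib import TopologicalSorter

# Stage graph for the PFORCERX_ODS batchWorkflows example above; each stage
# maps to the stages that must run (or at least start) before it.
stages = {
    "HCOLoading": [],
    "HCOSending": ["HCOLoading"],
    "HCOProcessing": ["HCOSending"],
    "HCPLoading": [],
    "HCPSending": ["HCPLoading"],
    "HCPProcessing": ["HCPSending"],
    "RelationLoading": [],
    "RelationSending": ["HCOProcessing", "HCPProcessing", "RelationLoading"],
    "RelationProcessing": ["RelationSending"],
}

# static_order() yields the stages in an order that respects every dependency.
order = list(TopologicalSorter(stages).static_order())
print(order[-1])
# → RelationProcessing  (relations are processed only after both entity pipelines finish)
```

This mirrors why the RelationSending stage in the config lists both HCOProcessing and HCPProcessing as hard dependencies: relation records reference entities that must already exist in Reltio.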
PingFederate (PXED) - DL-CIT-PXED Operations ; Zhang, Christine Details required to fulfill the PXED request are in this doc:User Name standard: -MDM_clientSteps:Go to https://requestmanager.COMPANY.com/#/In Search For Application type: PXED Pick - Application enablement with enterprise authentication services (PXED, LDAP and/or SSO)Fulfill the request and send.Wait for the user name and passwordAfter confirmation share the Client Id with HUB and wait for the grant of access. Do not share the password. EXAMPLE: For the Reference Example request send for PFORCEOL user:Request TicketGBL32702829iTicket IDNameVarganin, Andrew JosephRequested user nameAD UsernameVARGAA08Requested user IdUser DomainAMERRegion (AMER/EMEA/APAC/US...)Request ID20200717112252425request IDHosting locationExternalHosting location of the Client services: (External or  Internal COMPANY Network)VCAS Reference numberV...VCAS Reference numberData FeedNo, API/Servicesflow - requests send to HUB API then - API/ServicesApplication access methodsWeb BrowserType of access for the Client application - (Intranet/Web Browser e.t.c) Application User baseCOMPANY colleaguesContractorsApplication User baseApplication access devicesLaptop/DesktopTablets (iPad/Android/Windows)Application access devicesApplication Access LocationsInternetLocation (External - Internet / Internal - Intranet)Application NameRequested application name that requires new accountCMDB ID (Production Deployment)SC....CMDB ID (Production Deployment)IPRM Solution profile number....IPRM Solution profile numberNumber of users for the application...Number of users for the applicationConcurrent Users....Concurrent UsersCommentsApplication-to-Application Integration using NSA (Non-Standard Service Account.)  
PTRS will use REST APIs to authenticate to and access COMPANY Global MDM Services.This application will access MDM API Services (MDM_client) and will need OAuth2 account (KOL-MDM_client) for access to those APIs/Servicesfull description of requested account and integrationApplication ScopeAll UsersApplication ScopeReferenced tickets (only for example / reference purposes):https://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32702829ihttps://requestmanager.COMPANY.com/#/request/20201208091510997" + }, + { + "title": "Hub Operations", + "pageID": "302705582", + "pageLink": "/display/GMDM/Hub+Operations", + "content": "" + }, + { + "title": "Airflow:", + "pageID": "164470119", + "pageLink": "/pages/viewpage.action?pageId=164470119", + "content": "" + }, + { + "title": "Checking that Process Ends Correctly", + "pageID": "164470118", + "pageLink": "/display/GMDM/Checking+that+Process+Ends+Correctly", + "content": "To check that a process ended without any issues you need to log in to Prometheus and check the Alerts Monitoring PROD dashboard. You have to check the rows in the GBL PROD Airflow DAG's Status panel. If you can see red rows (like on the screenshot below) it means that some issues occurred:Details of the issues are available in Airflow." + }, + { + "title": "Common Problems", + "pageID": "164470117", + "pageLink": "/display/GMDM/Common+Problems", + "content": "Failed task getEarliestUploadedFileWhile reviewing a failed DAG you may notice that the task getEarliestUploadedFile has a failed state. In the task's logs you can see a line like this:[2020-03-19 18:44:07,082] {{docker_operator.py:252}} INFO - Unable to find the earliest uploaded file. S3 directory is empty?The issue occurs because getEarliestUploadedFile was not able to download the export file. In this case you need to check the S3 location and verify that the correct export file was uploaded to a valid location." 
+ }, + { + "title": "Deploy Airflow Components", + "pageID": "164470010", + "pageLink": "/display/GMDM/Deploy+Airflow+Components", + "content": "The deployment procedure is implemented as an ansible playbook. The source code is stored in the MDM Environment configuration repository. The runnable file is available under the path:  https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/install_mdmgw_airflow_services.yml and can be run by the command: ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventory  Deployment has the following steps: Creating the directory structure on the execution host, Templating configuration files and transferring those to the config location, Creating DAGs, variables and connections in Apache Airflow, Restarting the Airflow instance to apply configuration changes. After successful deployment the dag and configuration changes should be available to trigger in the Airflow UI. " + }, + { + "title": "Deploying DAGs", + "pageID": "164469947", + "pageLink": "/display/GMDM/Deploying+DAGs", + "content": "To deploy a newly created DAG or configuration changes you have to run the deployment procedure implemented as the ansible playbook install_mdmgw_airflow_services.yml:ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventoryIf you have access to Jenkins you can also use the Jenkins jobs: https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/. Each environment has its own deploy job. Once you choose the right job you have to:1 Click the button "Build Now": 2 After a few seconds the stage icon "Choose dags to deploy" will become active and wait for you to choose the DAG to deploy:3 Choose the DAG you want to deploy and approve your decision.After this the job will deploy all the changes you made to the Airflow server." 
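The deploy invocation described above can be sketched as a one-liner; the environment name gblus_prod below is an assumed example standing in for the [env name] placeholder.

```shell
# Sketch of the Airflow-components deploy command described above.
# ENV_NAME is an assumed example; substitute the real environment name.
ENV_NAME="gblus_prod"
CMD="ansible-playbook install_mdmgw_airflow_services.yml -i inventory/${ENV_NAME}/inventory"
echo "$CMD"
```

The same playbook is reused for every environment; only the inventory path changes.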
+ }, + { + "title": "Error Grabbing Grapes - hub_reconciliation_v2", + "pageID": "218438556", + "pageLink": "/display/GMDM/Error+Grabbing+Grapes+-+hub_reconciliation_v2", + "content": "In the hub_reconciliation_v2 airflow dag, during the stage entities_generate_hub_reconciliation_events a grape error might occur:\norg.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:\nGeneral error during conversion: Error grabbing Grapes\n(...)\nCause:That could be caused by connectivity/configuration issues.Workaround:For this dag, dependencies are mounted in the container. The mounted directory is located on the airflow server at path: /app/airflow/{{ env_name }}/hub_reconciliation_v2/tmp/.groovy/grapes/To solve this problem copy the libs from a working dag. E.g. hub_reconciliation_v2_gblus_prod \namraelp00007847.COMPANY.com/app/airflow/gblus_prod/hub_reconciliation_v2/tmp/.groovy/grapes\n" + }, + { + "title": "Batches (Batch Service):", + "pageID": "302705680", + "pageLink": "/pages/viewpage.action?pageId=302705680", + "content": "" + }, + { + "title": "Adding a New Batch", + "pageID": "164469956", + "pageLink": "/display/GMDM/Adding+a+New+Batch", + "content": "1. Add batch to batch_service.yml in the following sections- add batch info to section batchWorkflows - add based on one already defined- add bulk configuration- add to sendingJob- add to deletingJob if needed2. Add source and user for batch to batch_service_users.yml- add for user mdmetl_nprod the appropriate source and batch3. Add user to:for GBL / GBLUS - /inventory//group_vars/gw-services/gw_users.ymlfor EMEA / AMER / APAC - /config_files/manager/config/users- for appropriate source, country and roles4. Add topic to bundle section in manager/config/application.yml 5. Add kafka topicsWe use kafka manager to add new topics which can be found under directory /inventory//group_vars/kafka/manager/topics.ymlFirst set create_or_update to True; after the topics are created change it back to False7. 
Create topics and redeploy services by using Jenkinshttps://jenkins-gbicomcloud.COMPANY.com/job/mdm-gateway/8. Redeploy gateway on others envs qa, stage, prod only if there is no batch running - check it in mongo on batchInstance collection using following query: {"status" : "STARTED"}9. Ask if new source should be added to dq rules" + }, + { + "title": "Cache Address ID Clear (Remove Duplicates) Process", + "pageID": "163917838", + "pageLink": "/display/GMDM/Cache+Address+ID+Clear+%28Remove+Duplicates%29+Process", + "content": "This process is similar to the Cache Address ID Update Process . So the user should load the file to mongo and process it with the following steps: Download the files that were indicated by the user and apply on a specific environment (sometimes only STAGE and sometimes all envs)For example - 3 files - /us/prod/inbound/cdw/one-time-feeds/other/Merge these file to one file - Duplicate_Address_Ids_.txtProceed with the script.sh based on the Cache Address ID Update ProcessGenerated Extract load to the removeIdsFromkeyIdRegistry collectionmongoimport --host=localhost:27017 --username=admin --password=zuMMQvMl7vlkZ9XhXGRZWoqM8ux9d08f7BIpoHb --authenticationDatabase=admin --db=reltio_stage --collection=removeIdsFromkeyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_Duplicate_Address_Ids_16042021.txt --mode=insertCLEAR keyIdRegistrydocker exec -it mongo_mongo_1 bashcd /data/configdbNPROD - nohup mongo duplicate_address_ids_clear.js &PROD   - nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p --authenticationDatabase reltio_prod duplicate_address_ids_clear.js &FOR REFERENCE SCRIPT:\nCLEAR keyIdRegistry\n db = db.getSiblingDB('reltio_dev')\n db.auth("mdm_hub", "")\n \n db = db.getSiblingDB('reltio_prod')\n db.auth("mdm_hub", "")\n\n\n\n 
print("START")\n var start = new Date().getTime();\n\n\n var cursor = db.getCollection("removeIdsFromkeyIdRegistry").aggregate( \n [\n \n ], \n { \n "allowDiskUse" : false\n }\n )\n \n cursor.forEach(function (doc){\n db.getCollection("keyIdRegistry").remove({"_id": doc._id});\n });\n\n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration + " ms")\n print("END")\n\n\n nohup mongo duplicate_address_ids_clear.js &\n\n nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p --authenticationDatabase reltio_prod duplicate_address_ids_clear.js &\nCLEAR batchEntityProcessStatus checksumsdocker exec -it mongo_mongo_1 bashcd /data/configdbNPROD - nohup mongo unset_checsum_duplicate_address_ids_clear.js &PROD   - nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p --authenticationDatabase reltio_prod unset_checsum_duplicate_address_ids_clear.js &FOR REFERENCE SCRIPT\nCLEAR batchEntityProcessStatus\n\n db = db.getSiblingDB('reltio_dev')\n db.auth("mdm_hub", "")\n \n db = db.getSiblingDB('reltio_prod')\n db.auth("mdm_hub", "")\n\n\n print("START")\n var start = new Date().getTime();\n var cursor = db.getCollection("removeIdsFromkeyIdRegistry").aggregate( \n [\n ], \n { \n "allowDiskUse" : false\n }\n )\n \n cursor.forEach(function (doc){\n var key = doc.key \n var arrVars = key.split("/");\n \n var type = "configuration/sources/"+arrVars[0]\n var value = arrVars[3];\n \n print(type + " " + value)\n \n var result = db.getCollection("batchEntityProcessStatus").update(\n { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },\n { $set: { "checksum": "" } },\n { multi: true}\n )\n \n printjson(result);\n \n });\n \n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration 
+ " ms")\n print("END")\n\n nohup mongo unset_checsum_duplicate_address_ids_clear.js &\n \n nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p --authenticationDatabase reltio_prod unset_checsum_duplicate_address_ids_clear.js &\nVerify nohup outputCheck few rows and verify if these rows do not exist in the KeyIdRegistry collectionCheck few profiles and verify if the checksum was cleared in the BatchEntityProcessStatus collectionISSUE - for the ONEKEY profiles there is a difference between the generated cache and the corresponding profile.ISSUE - for the GRV profiles there is a difference between the generated cache and the corresponding profile. - check the crosswalks values in COMPANY_ADDRESS_ID_EXTRACT_PAC_files - should be e.g. 00002b9b-f327-456c-959c-fd5b04ed04b8ISSUE - for the ENGAGE 1.0 profiles there is a difference between the generated cache and the corresponding profile.  check the crosswalks values in COMPANY_ADDRESS_ID_EXTRACT_ENG_ files - should be e.g 00002b9b-f327-456c-959c-fd5b04ed04b8Please check the following example:CUST_SYSTEM,CUST_TYPE,SRC_ADDR_ID,SRC_CUST_ID,SRC_CUST_ID_TYPE,PFZ_ADDR_ID,PFZ_CUST_ID,SRC_SYS,MDM_SRC_SYS,EXTRACT_DTPROBLEM : HCPM,HCP,0000407429,8091473,HCE,38357661,1374316,HCPS,HCPS,2021-04-15OK            : HCPM,HCP,a012K000022cqBoQAI,0012K00001lCEyYQAW,HCP,109525669,178336284,VVA,VVA,2021-04-15For VVA the crosswalk is equal to the 001A000001VgOEVIA3 and it is easy to match with the ICUE profile and clear the cache for ONEKEY the generated row is equal to the - COMPANYAddressIDSeq|ONEKEY/HCP/HCE/8091473/0000407429,ONEKEY/HCP/HCE/8091473/0000407429,COMPANYAddressIDSeq,38357661,com.COMPANY.mdm.generator.db.KeyIdRegistryThe 8091473 is not a crosswalk so to remove the checksum from the BatchEntityProcessStatus collection there is a need to find the profile in Reltio - crosswalk si WUSM01113231 - and clear the cache in the 
BatchEntityProcessStatus collection.In my example, there was only one crosswalk. So it was easy to find this profile. For multiple profiles, there is a need to find the solution. ( I think we need to ask CDW to provide the file for ONEKEY with an additional crosswalk column, so we will be able to match the crosswalk with the Key and clear the checksum)    Solution: once we receive ONEKEY KeyIdRegstriy Update file ask COMPANY Team to generate crosswalks ids - simple CSV fileThe file received from CDW does not contain crosswalks id, only COMPANYAddressIds - example input - https://gblmdmhubprodamrasp101478.s3.amazonaws.com/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txtAsk DT Team and download CSV fileLoad the file to TMP collection in Mongo e.g. - AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511Execute the following:\nCLEAR batchEntityProcessStatus based on crosswalks ID list \n\n db = db.getSiblingDB('reltio_dev')\n db.auth("mdm_hub", "")\n \n db = db.getSiblingDB('reltio_prod')\n db.auth("mdm_hub", "")\n\n\n print("START")\n var start = new Date().getTime();\n var cursor = db.getCollection("AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511").aggregate( \n [\n ], \n { \n "allowDiskUse" : false\n }\n )\n \n cursor.forEach(function (doc){\n \n var type = "configuration/sources/ONEKEY";\n var value = doc.COMPANYcustid_individualeid;\n \n print(type + " " + value)\n \n var result = db.getCollection("batchEntityProcessStatus").update(\n { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },\n { $set: { "checksum": "" } },\n { multi: true}\n )\n \n printjson(result);\n \n });\n \n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration + " ms")\n print("END")\n" + }, + { + "title": "Changelog of removed duplicates", + "pageID": "172294537", + "pageLink": "/display/GMDM/Changelog+of+removed+duplicates", + 
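The cache-clearing scripts above derive the sourceId for batchEntityProcessStatus from a registry key by splitting it on "/": the source system is the first segment and the crosswalk value is the fourth. A minimal Python sketch of that mapping, assuming the key layout shown in the ONEKEY example (the helper name is illustrative, not part of the HUB codebase):

```python
def source_id_from_key(key: str) -> dict:
    # Mirrors the Mongo shell logic above: take the source system from the
    # first path segment and the crosswalk value from the fourth.
    parts = key.split("/")
    return {"type": "configuration/sources/" + parts[0], "value": parts[3]}

# The ONEKEY example from the page: value 8091473 is what gets matched
# against batchEntityProcessStatus.sourceId.value to clear the checksum.
print(source_id_from_key("ONEKEY/HCP/HCE/8091473/0000407429"))
```

As the page notes, for ONEKEY this fourth segment is not a crosswalk, which is why the extra crosswalk mapping file is needed before the checksum can be cleared.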
"content": "01.02.2021 - DROP keys          Duplicate_Address_Ids.txt         nohup ./script.sh inbound/Duplicate_Address_Ids.txt > EXTRACT_Duplicate_Address_Ids.txt &19.04.2021 - DROP keys STAGE GBLUS          Duplicate_Address_Ids_16042021.txt - 11 380 - 1 ONEKEY, ICUE, CENTRIS          nohup ./script.sh inbound/Duplicate_Address_Ids_16042021.txt > EXTRACT_Duplicate_Address_Ids_16042021.txt &17.05.2021 - DROP STAGE GBLUS          Duplicate_Address_Ids_17052021.txt - 25121 - 1 ONEKEY          nohup ./script.sh inbound/Duplicate_Address_Ids_17052021.txt > EXTRACT_Duplicate_Address_Ids_17052021.txt25.06.2021 - DROP STAGE GBLUS          Duplicate_Address_Ids_17052021.txt - 71509, 2 ONEKEY         nohup ./script.sh inbound/Duplicate_Address_Ids_25062021.txt > EXTRACT_Duplicate_Address_Ids_25062021.txt &12.07.2021 - DROP PROD GBLUS          Duplicate_Address_Ids_12072021.txt - 4550 Duplicate_Address_Ids_12072021.txt - us/prod/inbound/cdw/one-time-feeds/Address-DeDup/FileSet-3/         nohup ./script.sh inbound/Duplicate_Address_Ids_12072021.txt > EXTRACT_Duplicate_Address_Ids_12072021.txt & " + }, + { + "title": "Cache Address ID Update Process", + "pageID": "164469955", + "pageLink": "/display/GMDM/Cache+Address+ID+Update+Process", + "content": "1. Log using S3 browser to production bucket gblmdmhubprodamrasp101478 and go to dir /us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ and check last update dates2. Log using mdmusnpr service user to server amraelp00007334.COMPANY.com using ssh3. Sync files from S3 using below commanddocker run -u 27519996:24670575 -e "AWS_ACCESS_KEY_ID=" -e "AWS_SECRET_ACCESS_KEY=" -e "AWS_DEFAULT_REGION=us-east-1" -v /app/mdmusnpr/AddressID/inbound:/src:z mesosphere/aws-cli s3 sync s3://gblmdmhubprodamrasp101478/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ /src4. After syncing check new files with those two commads replacing new_file_name with name of the file which was updated. 
Check in script file that SRC_SYS and MDM_SRC_SYS exists, if not something is wrong and probably script needs to be updated ask the person who asked for address id updatecut -d',' -f8 | sort | uniqcut -d',' -f9 | sort | uniq5. Remove old extracts from /app/mdmusnpr/AddressIDrm EXTRACT_6. Run script which will prepare data for mongonohup ./script.sh inbound/ > EXTRACT_ &Wait until processing in foreground finishes. Check after some time using below command:ps ax | grep scriptIf process is marked as done You can continue with next file or if there is no more files You can proceed to next step.7. Log in using Your user to the server amraelp00007334.COMPANY.com and change to root8. Go to /app/mongo/config and remove old extractsrm EXTRACT_9. Go to /app/mdmusnpr/AddressID and copy new extracts to mongocp EXTRACT_ /app/mongo/config/10. Run mongo shelldocker exec -it mongo_mongo_1 bashcd /data/configdb11. Execute following command for each non prod env and for every new extract file - reltio_dev, reltio_qa, reltio_stagemongoimport --host=localhost:27017 --username=admin --password= --authenticationDatabase=admin --db= --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_ --mode=upsertWrite into changelog the number of records that were updated - it should be equal on all envs.12. If needed and requested update production using following commandmongoimport --host=mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 --username=admin --password= --authenticationDatabase=admin --db=reltio_prod --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_ --mode=upsert13. Verify number of entries from input file with updated records number in mongo14. Update changelog15. 
Respond to email that update is done16. Force merge will be generated - there will be mail about this.17. Download force merge delta from S3 using S3 browser and change name to merge__1.csvbucket: gblmdmhubprodamrasp101478path: us/prod/inbound/HcpmForceMerge/ForceMergeDelta18. Upload file merge__1.csv tobucket: gblmdmhubprodamrasp101478path: us/prod/inbound/hub/merge_unmerge_entities/input/19. Trigger dag https://mdm-monitoring.COMPANY.com/airflow/tree?dag_id=merge_unmerge_entities_gblus_prod_gblus20. After dag is finished login using S3 Browser bucket: gblmdmhubprodamrasp101478path: us/prod/inbound/hub/merge_unmerge_entities/output/_so for date 17/5/2021 and time 12:11: 39, the file looks like this:          us/prod/inbound/hub/merge_unmerge_entities/output/20210517_121139and download result file, check for failed merge and send it in response to email about force merge" + }, + { + "title": "Changelog of updated", + "pageID": "164469954", + "pageLink": "/display/GMDM/Changelog+of+updated", + "content": "20.11.2020 - Loading NEW files:GRV & ENGAGE 1.0nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt &IQVIA_RXnohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt &IQVIA_MCO & MILLIMAN & MMITnohup ./script.sh inbound/COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt > EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt &09.12.2020 - Loading new file: -> 46092714.12.2020 - Loading new file: PAC_ENG -> 820 document, CAPP-> 464583 document16.12.2020 - Loading MILLIMAN_MCO: 10504 document22.12.2020 - Loading CPMRTE: 15686 document, CAPP: 1287, PAC_ENG: 1340, VVA: 11927070, IMS: 343, HCO i SAP problem, CENTRIS: 41496, hcps00: 421529.12.2020 - Loading PAC_ENG: 1260, CAPP: 141404.01.2021 - Loading PAC_ENG: 330, CAPP: 33808.01.2021 - Loading HCPS00: 321411.01.2021 - Loading PAC_ENG: 496, CAPP: 51218.01.2021 - Loading PAC_ENG: 616, CAPP: 79525.01.2021 - Loading PAC_ENG: 1009, 
CAPP: 93901.02.2021 - Loading PAC_ENG: 884, CAPP: 110608.02.2021 - Loading PAC_ENG: 576, CAPP: 39415.02.2021 - Loading PAC_ENG: 690, CAPP: 69617.02.2021 - Loading VVA: 1204836422.02.2021 - Loading PAC_ENG: 724, CAPP: 75701.03.2021 - Loading PAC_ENG: 906, CAPP: 96926.04.2021 - Loading PAC_ENG: 738, CAPP: 79511.05.2021 - Loading PAC_ENG: 589, CAPP: 62617.05.2021 - Loading PAC_ENG: 489, CAPP: 61317.05.2021 - Loading - us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txt                     Updated: 1171703 - customers updated - cleared cache in batchEntityProcessStatus collection for reload                     Updated: 1513734 - document(s) imported successfully in KeyIdRegistry18.05.2021 - STAGE only      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt - 43771 document(s) imported successfully      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt - 10076 document(s) imported successfully19.05.3021 -  Load 15 Files to PROD and clear cache. 
Load these files to DEV QA and STAGE      2972 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt &      19124366 May 19 07:11 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt &      3154666 May 17 11:41 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt &      221969 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt &      214430 May 17 11:41 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt &      163142 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt &      73236 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt &      6399709 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt &      60175 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt &      318915 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_ENG_20210511.txt > 
Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_ENG_20210511.txt &      13528 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt &      1360570 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt &      8135990 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt &      14583373 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt &      283564 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt &24.05.2021 - Loading PAC_ENG: Dev:1283, QA: 1283, Stage: 1509, Prod: 1283                                         CAPP: Dev: 1873, QA: 1392, Stage: 1873, Prod: 18731/6/2021 - Loading PAC_ENG: 379, CAPP: 4339/6/2021 - Loading PAC_ENG: 38, CAPP: 4714/6/2021 - Loading PAC_ENG: 83, CAPP: 10216/6/2021 - Loading COMPANY_ACCT: Prod: 236 28/06/2021 - Loading PAC_ENG: Dev:182, QA: 182, Stage: 182, Prod: 646, CAPP: Dev: 215, QA: 215, Stage: 215, Prod: 21502.07.2021     Load 11 Files to PROD and clear cache. 
Load these files to DEV QA and STAGE     nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt &5/7/2021 - Loading 
PAC_ENG: 39 , CAPP: 4416.07.2021     Load 1 VVA File to PROD and clear cache. Load this file to DEV QA and STAGE     nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt &20.07.2021     Load 1 VVA File to PROD and clear cache. Load this file to DEV QA and STAGE     nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt &GBLUS/Fletcher PROD GO-LIVE COMPANYAddressID sequence - PROD (MAX)139510034 + 5000000 = 144510034" + }, + { + "title": "Manual Cache Clear", + "pageID": "164470086", + "pageLink": "/display/GMDM/Manual+Cache+Clear", + "content": "Open Studio 3T and connect to appropriate Mongo DBOpen IntelliShellRun following query for appropriate source - replace with right name\ndb.getCollection("batchEntityProcessStatus").updateMany({"sourceId.type":"configuration/sources/"}, {$set: {"checksum" : ""}})\n" + }, + { + "title": "Data Quality", + "pageID": "492471763", + "pageLink": "/display/GMDM/Data+Quality", + "content": "" + }, + { + "title": "Quality Rules Deployment Process", + "pageID": "492471766", + "pageLink": "/display/GMDM/Quality+Rules+Deployment+Process", + "content": "Resource changingThe process regards modifying the resources related to data quality configuration that are stored in Consul and load by mdm-manager, mdm-onekey-dcr-service, precallback-service components in runtime. They are present in mdm-config-registry/config-hub location.When modifying data quality rules configuration present at mdm-config-registry/config-hub//mdm-manager/quality-service/quality-rules , the following rules should be applied:Each YAML file should be formatted in accordance with yamllint rules (See Yamllint validation rules)The attributes createdDate/modifiedDate were deleted from the rules configuration files. They will be automatically set for each rule during the deployment process. 
(See Deployment of changes)Adding more than one rule with the same value of the name attribute is not allowed.PR validationEvery PR to the mdm-config-registry repository is validated for correctness of YAML syntax (See Yamllint validation rules). Upon PR creation a job is triggered that checks the format of YAML files using yamllint. The job succeeds only when all the yaml files in the repository pass the yamllint test.PRs that did not pass validation should not be merged to master.Deployment of changesAll changes in mdm-config-registry/config-hub should be deployed to consul using JENKINS JOBS. A separate job exists for deploying changes done on each environment. E.g. the job deploy_config_amer_nprod_amer-dev is used to deploy all changes done on the AMER DEV environment (all changes under path mdm-config-registry/config/hub/dev_amer). Jobs allow deploying configuration from the master branch or PRs to the mdm-config-registry repo.The deployment job flow can be described by the following diagram:StepsClean workspace - wipes the workspace of all the files left from the previous job run.Checkout mdm-config-registry - this repository contains files with data quality configuration and yamllint rulesCheckout mdm-hub-cluster-env - this repository contains the script for assigning createdDate / modifiedDate attributes to quality rules and the ansible job for running this script and uploading files to consul.Validate yaml files - runs yamllint validation for every YAML file at mdm-config-registry/config-hub/ (See Yamllint validation rules)Get previous quality rules registry files - downloads the quality rules registry file produced after the previous successful run of a job. The file is responsible for storing information about modification dates and checksums of quality rules. The decision whether modification dates should be updated is made based on checksum changes. 
The registry file is a csv with the following headers:ID - ID for each quality rule in form of :CREATED_DATE - stores the createdDate attribute value for each ruleMODIFIED_DATE - stores the modifiedDate attribute value for each ruleCHECKSUM - stores the checksum computed for each ruleUpdate Quality Rules files - runs the ansible job responsible for:Running the script QualityRuleDatesManager.groovy - responsible for adjusting createdDate / modifiedDate for quality rules based on checksum changes and creating the new quality rules registry file.Updating changed quality rules files in the Consul kv store.Archive quality rules registry file - saves the new registry file in job artifacts.Algorithm of updating modification datesThe following algorithm is implemented in the QualityRuleDatesManager.groovy script. Its main goal is to update createdDate/modifiedDate when a new quality rule has been added or its definition has changed.Yamllint validation rulesTODO" + }, + { + "title": "DCRs:", + "pageID": "259432965", + "pageLink": "/pages/viewpage.action?pageId=259432965", + "content": "" + }, + { + "title": "DCR Service 2:", + "pageID": "302705607", + "pageLink": "/pages/viewpage.action?pageId=302705607", + "content": "" + }, + { + "title": "Reject pending VOD DCR - transfer to Data Stewards", + "pageID": "415993922", + "pageLink": "/display/GMDM/Reject+pending+VOD+DCR+-+transfer+to+Data+Stewards", + "content": "DescriptionThere's a DCR request which was sent to Veeva OpenData (VOD) by HUB; however, it hasn't been processed - we didn't receive information whether it should be ACCEPTED or REJECTED. 
This causes a couple of things:in RELTIO we have the DCR in status VR Status = OPEN and VR Detailed Status = SENTin Mongo in collection DCRRequest we have the DCR in status = SENT_TO_VEEVAin Mongo in collection DCRVeevaRequest we have the DCR in status = SENTalerts are raised in Prometheus/Karma since we should usually receive a response within a couple of daysGoalWe want to simulate a REJECT response from VOD which will make the DCR return to Reltio for further processing by Data Stewards. This may be realized in a couple of ways: Procedure #1 - (minutes to process) Populate an event to topic $env-internal-veeva-dcr-change-events-in which skips VeevaAdapter and simulates the response from VeevaAdapter to DCR Service 2 → see diagram for more details Veeva DCR flowsProcedure #2 - (hours to process) Create a DCR response ZIP file with a specific payload, which needs to be placed in a specific S3 location, which is further ingested by VeevaAdapterProcedure #1Step 1 - Adjust the below event template(optional) update eventTime to the current timestamp in milliseconds → use https://www.epochconverter.com/(optional) update countryCode to the one from the Request(required) update dcrId to the one you want the JSON event to populate\n{\n "eventType": "CHANGE_REJECTED",\n "eventTime": 1712573721000,\n "countryCode": "SG",\n "dcrId": "a51f229331b14800846503600c787083",\n "vrDetails": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "veevaComment": "MDM HUB: Simulated reject response to close DCR.",\n "veevaHCPIds": [],\n "veevaHCOIds": []\n }\n}\nStep 2 - Populate the event to topic $env-internal-veeva-dcr-change-events-in (for APAC-STAGE: apac-stage-internal-veeva-dcr-change-events-in). 
For this purpose use AKHQ (for APAC-STAGE: https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/login)Select topic $env-internal-veeva-dcr-change-events-in and use "Produce to Topic" button in bottom rightPaste event details, update Key by providing dcrId and press "Populate"After a couple of minutes two things should be in effect:DCR in Reltio should change its status from SENT_TO_VEEVA to DS Action RequiredMongoDB document in collection DCRRegistry will change its status to DS_ACTION_REQUIREDStep 3 - update MongoDB DCRRegistryVeeva collection Connect to Mongo with Studio 3T, find out document using "_id" in collection DCRRegistryVeeva and update its status to REJECTED and changeDate to current one.Document update\n{\n $set : {\n "status.name" : "REJECTED",\n "status.changeDate" : "2024-04-07T17:42:37.882195Z"\n }\n}\nStep 4 - check Reltio DCRCheck if DCR status has changed to "DS Action Required" and DCR Tracing details has been updated with simulated Veeva Reject response. " + }, + { + "title": "Close VOD DCR - override any status", + "pageID": "492489948", + "pageLink": "/display/GMDM/Close+VOD+DCR+-+override+any+status", + "content": "This SoP is almost identical to the one in Override VOD Accept to VOD Reject for VOD DCR with small updates:In Step 1, please also update target = VOD to target = Reltio. 
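The simulated reject event used in these procedures can be generated rather than hand-edited before pasting it into AKHQ. A minimal sketch, assuming the field layout of the event template shown above (the helper name build_reject_event is illustrative, not part of the HUB codebase):

```python
import json
import time


def build_reject_event(dcr_id: str, country_code: str) -> str:
    # Simulated VOD reject response; field names follow the event template
    # shown on this page. eventTime is the current epoch time in milliseconds.
    event = {
        "eventType": "CHANGE_REJECTED",
        "eventTime": int(time.time() * 1000),
        "countryCode": country_code,
        "dcrId": dcr_id,
        "vrDetails": {
            "vrStatus": "CLOSED",
            "vrStatusDetail": "REJECTED",
            "veevaComment": "MDM HUB: Simulated reject response to close DCR.",
            "veevaHCPIds": [],
            "veevaHCOIds": [],
        },
    }
    return json.dumps(event, indent=1)


print(build_reject_event("a51f229331b14800846503600c787083", "SG"))
```

The dcrId doubles as the Kafka message Key when producing to $env-internal-veeva-dcr-change-events-in.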
" 
  },
  {
    "title": "Override VOD Accept to VOD Reject for VOD DCR",
    "pageID": "490649621",
    "pageLink": "/display/GMDM/Override+VOD+Accept+to+VOD+Reject+for+VOD+DCR",
    "content": "DescriptionThere's a DCR request which was sent to Veeva OpenData (VOD) and mistakenly ACCEPTED, however business requires such a DCR to be Rejected and redirected to DSR for processing via Reltio Inbox.GoalWe want to:remove incorrect entries in DCR Tracking details - usually "Veeva Accepted" and "Waiting for ETL Data Load"simulate a REJECT response from VOD which will make the DCR return to Reltio for further processing by Data Stewards→ Populate an event to topic $env-internal-veeva-dcr-change-events-in which skips VeevaAdapter and simulates a response from VeevaAdapter to DCR Service 2 → see diagram for more details Veeva DCR flowsProcedureStep 0 - Assume the status is VOD_NOT_FOUNDSet retryCounter to 9999Wait for 12hStep 1 - Adjust the DCR document in MongoDB in the DCRRegistry collection (Studio3T)Remove incorrect DCR Tracking entries for your DCR (trackingDetails section) - usually nested attributes 3 and 4 in this sectionSet retryCounter to 0Set status.name to "SENT_TO_VEEVA"Step 2 - update MongoDB DCRRegistryVeeva collection Connect to Mongo with Studio 3T, find the document using "_id" in collection DCRRegistryVeeva and update its status to REJECTED and changeDate to the current one.Document update\n{\n $set : {\n "status.name" : "REJECTED",\n "status.changeDate" : "2024-04-07T17:42:37.882195Z"\n }\n}\nStep 3 - Adjust the below event template(optional) update eventTime to the current timestamp in milliseconds → use https://www.epochconverter.com/(optional) update countryCode to the one from the Request(required) update dcrId to the one you want JSON event to populate\n{\n "eventType": "CHANGE_REJECTED",\n "eventTime": 1712573721000,\n "countryCode": "SG",\n "dcrId": "a51f229331b14800846503600c787083",\n "vrDetails": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "veevaComment": "MDM HUB: Simulated reject 
response to close DCR.",\n "veevaHCPIds": [],\n "veevaHCOIds": []\n }\n}\nStep 4 - Populate the event to topic $env-internal-veeva-dcr-change-events-in (for APAC-STAGE: apac-stage-internal-veeva-dcr-change-events-in). For this purpose use AKHQ (for APAC-STAGE: https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/login)Select topic $env-internal-veeva-dcr-change-events-in and use the "Produce to Topic" button in the bottom rightPaste the event details, update the Key by providing dcrId and press "Populate"After a couple of minutes (it depends on the traceVR schedule - it may take up to 6h on PROD) two things should be in effect:DCR in Reltio should change its status from SENT_TO_VEEVA to DS Action RequiredMongoDB document in collection DCRRegistry will change its status to DS_ACTION_REQUIREDStep 5 - check Reltio DCRCheck if the DCR status has changed to "DS Action Required" and DCR Tracing details have been updated with the simulated Veeva Reject response. " 
  },
  {
    "title": "DCR escalation to Veeva Open Data (VOD)",
    "pageID": "430348063",
    "pageLink": "/pages/viewpage.action?pageId=430348063",
    "content": "Integration failIt occasionally happens that DCR response files from Veeva are not being delivered to the S3 bucket which is used for ingestion by HUB. VOD provides CSV/ZIP files every day, even when there's no actual payload related to DCRs - the files then contain only CSV headers. This disruption may be caused by two things: VOD didn't generate the DCR response and didn't place it on their SFTPGMFT's synchronization job responsible for moving files between SFTP and S3 stopped working Either way, we need to pinpoint which of the two is causing the problem.Troubleshooting It's usually good to check when the last synchronization took place.GMFT issueIf there is more than one file (usually this dir should be empty) in outbound directory /globalmdmprodaspasp202202171415/apac/prod/outbound/vod/APAC/DCR_request it means that the GMFT job does not push files from S3 to SFTP. 
The files which are properly processed by the GMFT job are copied to the Veeva SFTP and additionally moved to  /globalmdmprodaspasp202202171415/apac/prod/archive/vod/APAC/DCR_request.Veeva Open Data issueOnce you are sure it's not a GMFT issue, check the archive directory for the latest DCR response file: /globalmdmprodaspasp202202171415/apac/prod/archive/vod/APAC/DCR_response/globalmdmprodaspasp202202171415/apac/prod/archive/vod/CN/DCR_responseIf the latest file is older than 24h → there's an issue on the VOD side. Who to contact?SFTP, please contact DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com or directly barath.s@COMPANY.com, kothai.nayaki@COMPANY.com and CC: sabari.mahendran@COMPANY.comVeeva Open data(important one) create a ticket in smartsheet: https://app.smartsheet.com/sheets/pqmwRfRjCxRRCXgwRJf2629fGqrjfFpQ6fWPjfM1 → you may not have access to this file without a prior request to moneem.ahmed@veeva.comat the moment Irek has access to this file(optional) please contact laurie.koudstaal@COMPANY.com, quiterie.duco@veeva.com (and for escalation and PROD issues CC: vincent.pavan@veeva.com, moneem.ahmed@veeva.com and sabari.mahendran@COMPANY.com)" 
  },
  {
    "title": "DCR rejects from IQVIA due to missing RDM codes",
    "pageID": "475927691",
    "pageLink": "/display/GMDM/DCR+rejects+from+IQVIA+due+to+missing+RDM+codes",
    "content": "DescriptionSometimes our Clients receive the below error message when they are trying to send DCRs to OneKey. This request was not accepted by the IQVIA due to missing RDM code mapping and was redirected to Reltio Inbox. The reason is: 'Target lookup code not found for attribute: HCPSpecialty, country: CA, source value: SP.ONCM.'. This means that there is no equivalent of this code in IQVIA code mapping. Please contact MDM Hub DL-ATP_MDMHUB_SUPPORT@COMPANY.com asking to add this code and click "SendTo3Party" in Reltio after Hub's confirmation.WhyThis is caused when PforceRx tries to send a DCR with changes on an attribute with Lookup Values. 
On the HUB end we're trying to remap canonical codes from Reltio/RDM to source mapping values which are specific to OneKey and understood by them. Usually, for each canonical code there is a proper source code mapping. Please refer to the below screen (Mongo collection LookupValues). However, when there is no such mapping, like in the case below (no ONEKEY entry in sourceMappings), then we're dealing with the problem aboveFor more information about canonical code mapping and the flow to get the target code sent to OneKey or VOD, please refer to → Veeva: create DCR method (storeVR), section "Mapping Reltio canonical codes → Veeva source codes"HowWe should contact the people responsible for RDM code mappings (MDM COMPANY team) to find out the correct sourceMapping value for this specific canonical code for the specific country. In the end they will contact AJ to add it to RDM (usually every week)." 
  },
  {
    "title": "Defaults",
    "pageID": "284795409",
    "pageLink": "/display/GMDM/Defaults",
    "content": "DCR defaults map the source codes of the Reltio system to the codes in the OneKey or VOD (Veeva Open Data) system. They occur for specific types of attributes: HCPSpecialities, HCOSpecialities, HCPTypeCode, HCOTypeCode, HCPTitle, HCOFacilityType. The values are configured in the Consul system. 
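The failure condition described in the DCR-rejects page above (no ONEKEY entry in sourceMappings for a canonical code) can be checked mechanically before contacting the RDM team. This is a sketch only, assuming a LookupValues document shape where sourceMappings is keyed by target system - the real Mongo schema may differ:

```python
def missing_target_mapping(lookup_doc: dict, target: str = "ONEKEY") -> bool:
    """True when the canonical code has no mapping for the target system,
    i.e. the condition that makes IQVIA reject the DCR."""
    mappings = lookup_doc.get("sourceMappings") or {}
    return target not in mappings

# Hypothetical LookupValues document for illustration (mapping values are made up):
doc = {
    "canonicalCode": "SP.ONCM",
    "country": "CA",
    "sourceMappings": {"VEEVA": "18"},  # no ONEKEY entry -> the DCR would be rejected
}
```

Running `missing_target_mapping(doc)` on such a document flags exactly the "no ONEKEY entry in sourceMappings" case described above.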
To configure the values:  Sort the source (.xlsx) file: Divide the file into separate sheets for each attribute.Save the sheets in separate CSV format files - columns separated by semicolons.Paste the contents of the files into the appropriate files in the Consul configuration repository - mdm-config-registry:  - each environment has its own folder in the configuration repository  - files must have the header Country;CanonicalCode;DefaultFor more information about canonical code mapping and the flow to get the target code sent to OneKey or VOD, please refer to → Veeva: create DCR method (storeVR), section "Mapping Reltio canonical codes → Veeva source codes"" 
  },
  {
    "title": "Go-Live Readiness",
    "pageID": "273696220",
    "pageLink": "/display/GMDM/Go-Live+Readiness",
    "content": "Procedure:" 
  },
  {
    "title": "OneKey Crosswalk is Missing and IQVIA Returned Wrong ID in TraceVR Response",
    "pageID": "259432967",
    "pageLink": "/display/GMDM/OneKey+Crosswalk+is+Missing+and+IQVIA+Returned+Wrong+ID+in+TraceVR+Response",
    "content": "This SOP describes how to FIX the case when there is a DCR in OK_NOT_FOUND status and IQVIA changed the individualID from the wrong one to the correct one (due to human error)Example Case based on EMEA PROD: there is a DCR - 1fced0be830540a89c30f5d374754accstatus is OK_NOT_FOUNDmessage is Received ACCEPTED status from IQVIA, waiting for ONEKEY data load, missing crosswalks: WUKM00110951retryCounter reached 14 (7 days)IQVIA shared the following trace VR response at first and we closed the 
DCR:{"response.traceValidationRequestOutputFormatVersion":"1.8","response.status":"SUCCESS","response.resultSize":1,"response.totalNumberOfResults":1,"response.success":true,"response.results":[{"codBase":"WUK","cisHostNum":"4606","userEid":"04606","requestType":"Q","responseEntityType":"ENT_ACTIVITY","clientRequestId":"1fced0be830540a89c30f5d374754acc","cegedimRequestEid":"fbf706e175c847cb8f39a1873fc4daaf","customerRequest":null,"trace1ClientRequestDate":"2022-07-22T14:53:32Z","trace2CegedimOkcProcessDate":"2022-07-22T14:53:31Z","trace3CegedimOkeTransferDate":"2022-07-22T14:54:02Z","trace4CegedimOkeIntegrationDate":"2022-07-22T14:54:32Z","trace5CegedimDboResponseDate":"2022-07-28T07:27:34Z","trace6CegedimOkcExportDate":null,"requestComment":"FY1 Dr working in the stroke care unit at St Johns Hospital Livingston","responseComment":"HCP works at St Johns Hospital","individualEidSource":null,"individualEidValidated":"WUKM00110951","workplaceEidSource":"WUKH07885517","workplaceEidValidated":"WUKH07885517","activityEidSource":null,"activityEidValidated":"WUKM0011095101","addressEidSource":null,"addressEidValidated":"WUK00000092143","countryEid":"GB","processStatus":"REQUEST_RESPONDED","requestStatus":"VAS_FOUND","updateDate":"2022-07-28T07:56:45Z"}]}People involved in this topic:On Reltio side:On IQVIA side: After IQVIA check the TraceVR changed to:"response":{"traceValidationRequestOutputFormatVersion":1.8,"success":true,"status":"SUCCESS","totalNumberOfResults":1,"resultSize":1,"results":[{"activityEidSource":null,"activityEidValidated":"WUKM0011095501","addressEidSource":null,"addressEidValidated":"WUK00000092143","cegedimRequestEid":"fbf706e175c847cb8f39a1873fc4daaf","cisHostNum":"4606","clientRequestId":"1fced0be830540a89c30f5d374754acc","codBase":"WUK","countryEid":"GB","customerRequest":null,"individualEidSource":null,"individualEidValidated":"WUKM00110955","processStatus":"REQUEST_RESPONDED","requestComment":"FY1 Dr working in the stroke care unit at St Johns 
Hospital Livingston","requestEntityType":"ENT_ACTIVITY","requestFirstname":"Beth","requestLastname":"Mulloy","requestOrigin":"WS","requestProcess":"I","requestStatus":"VAS_FOUND","requestType":"Q","requestUsualWkpName":"Care of the Elderly Department","responseComment":"HCP works at St Johns Hospital","responseEntityType":"ENT_ACTIVITY","trace1ClientRequestDate":"2022-07-22T14:53:32Z","trace2CegedimOkcProcessDate":"2022-07-22T14:53:31Z","trace3CegedimOkeTransferDate":"2022-07-22T14:54:02Z","trace4CegedimOkeIntegrationDate":"2022-07-22T14:54:32Z","trace5CegedimDboResponseDate":"2022-07-28T07:27:34Z","trace6CegedimOkcExportDate":null,"lastResponseDate":"2022-07-28T07:43:40Z","updateDate":"2022-07-28T08:01:40Z","workplaceEidSource":"WUKH07885517","workplaceEidValidated":"WUKH07885517","userEid":"04606"}}the WUKM00110951 was changed to WUKM00110955This is blocking the DCRThe event that is constantly processing each 12h is in the emea-prod-internal-onekey-dcr-change-events-in The event was already generated so we need to overwrite it to fix the processingSTEPS:Go to https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/emea-prod-mdm-kafka/topic?search=dcr&show=HIDE_INTERNALFind the DCR by _id and get the latest event:Change the BodyFROM\n{\n "eventType": "DCR_CHANGED",\n "eventTime": 1658995201031,\n "eventPublishingTime": 1658995201031,\n "countryCode": "GB",\n "dcrId": "1fced0be830540a89c30f5d374754acc",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "ACCEPTED",\n "oneKeyComment": "ONEKEY response comment: HCP works at St Johns Hospital\\nONEKEY HCP ID: WUKM00110951\\nONEKEY HCO ID: WUKH07885517",\n "individualEidValidated": "WUKM00110951",\n "workplaceEidValidated": "WUKH07885517",\n "vrTraceRequest": "{\\"isoCod2\\":\\"GB\\",\\"validation.clientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\"}",\n "vrTraceResponse": 
"{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WUK\\",\\"cisHostNum\\":\\"4606\\",\\"userEid\\":\\"04606\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\",\\"cegedimRequestEid\\":\\"fbf706e175c847cb8f39a1873fc4daaf\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2022-07-22T14:53:32Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2022-07-22T14:53:31Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2022-07-22T14:54:02Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2022-07-22T14:54:32Z\\",\\"trace5CegedimDboResponseDate\\":\\"2022-07-28T07:27:34Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"FY1 Dr working in the stroke care unit at St Johns Hospital Livingston\\",\\"responseComment\\":\\"HCP works at St Johns Hospital\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WUKM00110951\\",\\"workplaceEidSource\\":\\"WUKH07885517\\",\\"workplaceEidValidated\\":\\"WUKH07885517\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WUKM0011095101\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WUK00000092143\\",\\"countryEid\\":\\"GB\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND\\",\\"updateDate\\":\\"2022-07-28T07:56:45Z\\"}]}"\n }\n}\nTO\n{\n "eventType": "DCR_CHANGED",\n "eventTime": 1658995201031,\n "eventPublishingTime": 1658995201031,\n "countryCode": "GB",\n "dcrId": "1fced0be830540a89c30f5d374754acc",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "ACCEPTED",\n "oneKeyComment": "ONEKEY response comment: HCP works at St Johns Hospital\\nONEKEY HCP ID: WUKM00110955\\nONEKEY HCO ID: WUKH07885517",\n "individualEidValidated": "WUKM00110955",\n "workplaceEidValidated": "WUKH07885517",\n 
"vrTraceRequest": "{\\"isoCod2\\":\\"GB\\",\\"validation.clientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\"}",\n "vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WUK\\",\\"cisHostNum\\":\\"4606\\",\\"userEid\\":\\"04606\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\",\\"cegedimRequestEid\\":\\"fbf706e175c847cb8f39a1873fc4daaf\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2022-07-22T14:53:32Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2022-07-22T14:53:31Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2022-07-22T14:54:02Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2022-07-22T14:54:32Z\\",\\"trace5CegedimDboResponseDate\\":\\"2022-07-28T07:27:34Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"FY1 Dr working in the stroke care unit at St Johns Hospital Livingston\\",\\"responseComment\\":\\"HCP works at St Johns Hospital\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WUKM00110955\\",\\"workplaceEidSource\\":\\"WUKH07885517\\",\\"workplaceEidValidated\\":\\"WUKH07885517\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WUKM0011095501\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WUK00000092143\\",\\"countryEid\\":\\"GB\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND\\",\\"updateDate\\":\\"2022-07-28T07:56:45Z\\"}]}"\n }\n}\nThe result is the replace in the individualEidValidated and all the places where ol ID existsPush the new event with new timestamp and same kafka key to the topicNew Case (2023-03-21)ONEKEY responded with ACCEPTED with ONEKEY ID but OneKey VR Trace response contains: "requestStatus": "VAS_FOUND_BUT_INVALID".DCR2 Service is checking every 12h if 
Onekey already provided the data to Reltio. We must manually close this DCR.Steps:In amer-prod-internal-onekey-dcr-change-events-in topic find the latest event for ID ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●.Change from:\n{\n\t"eventType": "DCR_CHANGED",\n\t"eventTime": 1677801600678,\n\t"eventPublishingTime": 1677801600678,\n\t"countryCode": "CA",\n\t"dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",\n\t"targetChangeRequest": {\n\t\t"vrStatus": "CLOSED",\n\t\t"vrStatusDetail": "ACCEPTED",\n\t\t"oneKeyComment": "ONEKEY response comment: Already Exists-Data Privacy\\nONEKEY HCP ID: WCAP00028176\\nONEKEY HCO ID: WCAH00052991",\n\t\t"individualEidValidated": "WCAP00028176",\n\t\t"workplaceEidValidated": "WCAH00052991",\n\t\t"vrTraceRequest": "{\\"isoCod2\\":\\"CA\\",\\"validation.clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\"}",\n\t\t"vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WCA\\",\\"cisHostNum\\":\\"7853\\",\\"userEid\\":\\"07853\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\",\\"cegedimRequestEid\\":\\"9d02f7547dbc4e659a9d230c91f96279\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2023-02-27T23:53:44Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2023-02-27T23:53:40Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2023-02-27T23:54:23Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2023-02-27T23:55:47Z\\",\\"trace5CegedimDboResponseDate\\":\\"2023-03-02T21:23:36Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":null,\\"responseComment\\":\\"Already Exists-Data 
Privacy\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WCAP00028176\\",\\"workplaceEidSource\\":\\"WCAH00052991\\",\\"workplaceEidValidated\\":\\"WCAH00052991\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WCAP0002817602\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WCA00000006206\\",\\"countryEid\\":\\"CA\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND_BUT_INVALID\\",\\"updateDate\\":\\"2023-03-02T21:37:16Z\\"}]}"\n\t}\n}\nTo:\n{\n\t"eventType": "DCR_CHANGED",\n\t"eventTime": 1677801600678,\n\t"eventPublishingTime": 1677801600678,\n\t"countryCode": "CA",\n\t"dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",\n\t"targetChangeRequest": {\n\t\t"vrStatus": "CLOSED",\n\t\t"vrStatusDetail": "REJECTED",\n\t\t"oneKeyComment": "ONEKEY response comment: Already Exists-Data Privacy\\nONEKEY HCP ID: WCAP00028176\\nONEKEY HCO ID: WCAH00052991",\n\t\t"individualEidValidated": "WCAP00028176",\n\t\t"workplaceEidValidated": "WCAH00052991",\n\t\t"vrTraceRequest": "{\\"isoCod2\\":\\"CA\\",\\"validation.clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\"}",\n\t\t"vrTraceResponse": 
"{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WCA\\",\\"cisHostNum\\":\\"7853\\",\\"userEid\\":\\"07853\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\",\\"cegedimRequestEid\\":\\"9d02f7547dbc4e659a9d230c91f96279\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2023-02-27T23:53:44Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2023-02-27T23:53:40Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2023-02-27T23:54:23Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2023-02-27T23:55:47Z\\",\\"trace5CegedimDboResponseDate\\":\\"2023-03-02T21:23:36Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":null,\\"responseComment\\":\\"Already Exists-Data Privacy\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WCAP00028176\\",\\"workplaceEidSource\\":\\"WCAH00052991\\",\\"workplaceEidValidated\\":\\"WCAH00052991\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WCAP0002817602\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WCA00000006206\\",\\"countryEid\\":\\"CA\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND_BUT_INVALID\\",\\"updateDate\\":\\"2023-03-02T21:37:16Z\\"}]}"\n\t}\n}\nand post it back to the topic. The DCR will be closed in 24h.New Case (2024-03-19)We need to force close/reject a couple of DCRs which cannot close themselves. They were sent to OneKey, but for some reason OK does not recognize them.  IQVIA has not generated the TraceVR response and we need to simulate it.  To break the TRACEVR process for these DCRs we need to manually change the Mongo Status to REJECTED. If we keep SENT we are going to ask IQVIA forever - TODO - describe this in SOPOpen Mongo and update DCRRegistryONEKEY for selected profiles. 
Change status to { "status.name" : "REJECTED" } Change details to "HUB manual update due to "Change from:To: Find the latest event for the chosen id and generate the event in the topic "-internal-onekey-dcr-change-events-in" which will change their status\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED", \n\n {\n "eventType": "DCR_CHANGED",\n "eventTime": ,\n "eventPublishingTime": ,\n "countryCode": "",\n "dcrId": "",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "oneKeyComment": "HUB manual update due to MR-",\n "individualEidValidated": null,\n "workplaceEidValidated": null,\n "vrTraceRequest": "{\\"isoCod2\\":\\"\\",\\"validation.clientRequestId\\":\\"\\"}",\n "vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"W\\",\\"cisHostNum\\":\\"4605\\",\\"userEid\\":\\"HUB\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"\\",\\"cegedimRequestEid\\":\\"\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2024-02-27T09:29:34Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2024-02-27T09:29:34Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2024-02-27T09:32:22Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2024-02-27T09:29:48Z\\",\\"trace5CegedimDboResponseDate\\":\\"2024-03-04T14:51:54Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"\\",\\"responseComment\\":\\"HUB manual update due to 
MR-\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":null,\\"workplaceEidSource\\":null,\\"workplaceEidValidated\\":null,\\"activityEidSource\\":null,\\"activityEidValidated\\":null,\\"addressEidSource\\":null,\\"addressEidValidated\\":null,\\"countryEid\\":\\"\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_NOT_FOUND\\",\\"updateDate\\":\\"2024-03-04T16:06:29Z\\"}]}"\n }\n}\n" 
  },
  {
    "title": "CHANGELOG",
    "pageID": "411338079",
    "pageLink": "/display/GMDM/CHANGELOG",
    "content": "List of DCRs:VR-00952672 = 163f209d24d94ea99bd7b47d9108366cVR-00952674 = dbd44964afba4bab84d50669b1ccbac3VR-00968353 = 07c363c5d3364090a2c0f6fdbbbca1ddRe COMPANY RE IM44066249 VR missing FR.msg" 
  },
  {
    "title": "Update DCRs with missing comments",
    "pageID": "425495306",
    "pageLink": "/display/GMDM/Update+DCRs+with+missing+comments",
    "content": "DescriptionDue to a temporary problem with our calls to the Reltio workflow API we had multiple DCRs with missing workflow comments. The symptoms of this error were: no changeRequestComment field in the DCRRegistry mongo collection and a lack of content in the Comment field in Reltio while viewing the DCR by entityUrl.We have created a solution allowing us to find deficient DCRs and update their comments in the database and Reltio.GoalWe want to find all deficient DCRs in a given environment and update their comments in DCRRegistry and Reltio.This can be accomplished by following the procedure described below.ProcedureStep 1 - Configure the solutionGo to the tools/dcr-update-workflow-comments module in the mdm-hub-inbound-services repository.Prepare the env configuration. Provide mongo.dbName and manager.url in the application.yaml file.Create a file named application-secrets.yaml. Copy the content from the application-secretsExample.yaml file and replace mock values with real ones appropriate to a given environment.Prepare the solution configuration. 
Provide the desired mode (find/repair) and DCR endTime time limits for the deficient DCRs search in application.yaml.Here is an example of the update-comments configuration.application.yaml\nupdate-comments:\n mode: find\n starting: 2024-04-01T10:00:00Z\n ending: 2024-05-15T10:00:00Z\nStep 2 - Find deficient DCRsRun the application using ApplicationServiceRunner.java in find mode with Spring profile: secrets.As a result, a dcrs.csv file will appear in the resources directory. It contains a list of DCRs to be updated in the next step. Those are DCRs that ended within the configuration time limits, with no changeRequestComment field in DCRRegistry and having a non-empty processInstanceId (that value is needed to retrieve workflow comments from Reltio). This list can be viewed and altered if there is a need to omit a specific DCR update.Step 3 - Repair the DCRsChange the update-comments.mode configuration to repair. Run the application exactly the same as in Step 2.As a result, a report.txt file will be created in the resources directory. It will contain a log for every DCR with its update status. If the update fails, it will contain the reason. In case of failed updates, the application can be run again after the needed adjustments to dcrs.csv." 
  },
  {
    "title": "GBLUS DCRs:",
    "pageID": "310966586",
    "pageLink": "/pages/viewpage.action?pageId=310966586",
    "content": "" 
  },
  {
    "title": "ICUE VRs manual load from file",
    "pageID": "310966588",
    "pageLink": "/display/GMDM/ICUE+VRs+manual+load+from+file",
    "content": "This SOP describes the manual load of selected ICUE DCRs to the GBLUS environment.Scope and issue description:On GBLUS PROD VRs(DCRs) are sent to IQVIA(ONEKEY) for validation using events. The process responsible for this is described on this page (OK DCR flows (GBLUS)). IQVIA receives the data based on singleton profiles. The current flow enables only GRV and ENGAGE. 
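The find-mode selection criteria described in the "Update DCRs with missing comments" procedure above (DCR ended within the window, no changeRequestComment, non-empty processInstanceId) translate to a Mongo filter roughly like the sketch below. Field names follow the page's description, but the exact DCRRegistry schema is assumed, not confirmed:

```python
from datetime import datetime

def deficient_dcr_filter(starting: datetime, ending: datetime) -> dict:
    """Build a DCRRegistry filter matching the find-mode criteria above:
    ended within [starting, ending], missing changeRequestComment,
    and carrying a non-empty processInstanceId."""
    return {
        "endTime": {"$gte": starting, "$lte": ending},
        "changeRequestComment": {"$exists": False},
        "processInstanceId": {"$exists": True, "$ne": None},
    }

# Usage with pymongo would be e.g.:
# coll.find(deficient_dcr_filter(datetime(2024, 4, 1), datetime(2024, 5, 15)))
```

This is only a way to spot-check the tool's output (dcrs.csv) against the database; the tool itself remains the source of truth.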
ICUE was disabled from the flow and requires manual work to load this to IQVIA due to a high number of ICUE standalone profiles created by this system in January/February 2023. More details related to the ICUE issue are here:ODP_ US IQVIA DRC_VR Request for 2023.msgDCR_Counts_GBLUS_PROD.xlsxSteps to add ICUE in the IQVIA validation process:Check if there are no loads on the GBLUS PROD environment:Check reltio-* topics and verify that there is no huge number of events per minute and that there is no LAG on the topics:Pick the input file from the client and after approval from Monica.Mulloy@COMPANY.com proceed with the changes:example email and input file:First batch_ Leftover ICUE VRs (27th Feb-31st March).msgGenerate the events for the VR topic- id: onekey_vr_dcrs_manual destination: "${env}-internal-onekeyvr-in"Reconciliation target ONEKEY_DCRS_MANUALuse the resendLastEvent operation in the publisher (generate CHANGES events)After all events are pushed to the topic, verify in AKHQ that the generated events are available on the desired topicWait for the events aggregation window closure (24h).Check if the VRs are visible in the DCRRequests mongo collection. 
createTime should be within the last 24h\n{ "entity.uri" : "entities/" }\n" + }, + { + "title": "HL DCR:", + "pageID": "302705613", + "pageLink": "/pages/viewpage.action?pageId=302705613", + "content": "" + }, + { + "title": "How do we answer to requests about DCRs?", + "pageID": "416002490", + "pageLink": "/pages/viewpage.action?pageId=416002490", + "content": "" + }, + { + "title": "EFK:", + "pageID": "284806852", + "pageLink": "/pages/viewpage.action?pageId=284806852", + "content": "" + }, + { + "title": "FLEX Environments - Elasticsearch Shard Limit", + "pageID": "513736765", + "pageLink": "/display/GMDM/FLEX+Environments+-+Elasticsearch+Shard+Limit", + "content": "AlertSometimes, below alert gets triggered:This means that Elasticsearch has allocated >80% of allowed number of shards (default 1000 max).Further DebuggingAlso, we can check directly on the EFK cluster what is the shard count:Log into Kibana and choose "Dev Tools" from the panel on the left:Use one of below API calls:To fetch current cluster status and number of active/unassigned shards (# of active shards + # of unassigned shards = # of allocated shards):GET _cluster/healthTo check the current assigned shards limit:GETSolution: Removing Old Shards/IndicesThis is the preferred solution. Old indices can be removed through Kibana.Log into Kibana and choose "Management" from the panel on the left:Choose "Index Management":Find and mark indices that can be removed. In my case, I searched for indices containing "2023" in their names:Click "Manage Indices" and "Delete Indices". 
Confirm:Solution: Increasing the LimitThis is not the preferred solution, as it is not advised to go beyond the default limit of 1000 shards per node - it can lead to worse performance/stability of the Elasticsearch cluster.TODO: extend this section when we need to increase the limit somewhere, use this article: https://www.elastic.co/guide/en/elasticsearch/reference/7.4/misc-cluster.html" 
  },
  {
    "title": "Kibana: How to Restore Data from Snapshots",
    "pageID": "284806856",
    "pageLink": "/display/GMDM/Kibana%3A+How+to+Restore+Data+from+Snapshots",
    "content": "NOTE: The time of restoring depends on the amount of data you want to restore. Before beginning the restoration, make sure that the Elastic cluster has sufficient storage to hold the restored data.To restore data from a snapshot, use the "Snapshot and Restore" page in Kibana. It is one of the pages available in the "Stack Management" section:Select the snapshot which contains the data you are interested in and click the Restore button:In the presented wizard set up the following options:Disable the option "All data streams and indices" and provide index patterns that match the index or data stream you want to restore:It is important to enable the option "Rename data streams and indices" and set "Capture pattern" to "(.+)" and "Replacement pattern" to "$1-restored-<idx>", where idx is 1, 2, 3, ..., n - it is required once we restore more than one snapshot from the same datastream. Otherwise, the restore operation will override current Elasticsearch objects and we will lose the data:The rest of the options on this page have to be disabled:Click the "Next" button to move to the "Index settings" page. Leave all options disabled and go to the next page.On the page "Review restore details" you can see the summary of the restore process settings. 
Validate them and click the "Restore snapshot" button to start restoring.You can track the restoration progress in the "Restore Status" section:When the data is no longer needed, it should be deleted:" 
  },
  {
    "title": "External proxy",
    "pageID": "379322691",
    "pageLink": "/display/GMDM/External+proxy",
    "content": "" 
  },
  {
    "title": "No downtime Kong restart/upgrade",
    "pageID": "379322693",
    "pageLink": "/pages/viewpage.action?pageId=379322693",
    "content": "This SOP describes how to perform a "no downtime" restart. Resourceshttp://awsprodv2.COMPANY.com/ - AWS consolehttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_kong.yml - ansible playbook SOPRemove one node instance from target groups (AWS console)Access the AWS console http://awsprodv2.COMPANY.com/. Log in using COMPANY SSOChoose Account: prod-dlp-wbs-rapid (432817204314). Role: WBS-EUW1-GBICC-ALLENV-RO-SSOChange region to Europe (Ireland - eu-west-1)Go to EC2 → Load Balancing → Target GroupsSearch for target group\n-prod-gbl-mdm\nThere should be 4 target groups visible. 1 for mdmhub api and 3 for KafkaRemove the first instance (EUW1Z2DL113) from all 4 target groups.Perform the below steps for all target groupsTo do so, open each target group, select the desired instance and choose 'deregister'. Now this instance should have 'Health status': 'Draining'. Next do the same operation for the other target groups.Do not remove two instances from the consumer group at the same time. It'll cause API unavailability.Also make sure to remove the same instance from all target groups.Wait for the instance to be removed from the target groupWait for the target groups to be adjusted. 
The deregistered instance should eventually be removed from the target groupAdditionally, you can check Kong logs directlyFirst instance: \nssh ec2-user@euw1z2dl113.COMPANY.com\ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to the external API\nSecond instance: \nssh ec2-user@euw1z2dl114.COMPANY.com\ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to the external API\nSome internal requests may still be visible, e.g. metricsPerform restart of Kong on the removed instance (Ansible playbook)Execute the ansible playbook from the 'ansible' directory of the mdm-hub-cluster-env repositoryFor the first instance:\nansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_01\nFor the second instance:\nansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_02\nMake sure that kong_01 is the same instance you've removed from the target group (check the ansible inventory)Re-add the removed instancePerform these steps for all target groupsSelect the target groupChoose 'Register targets'Filter instances to find the previously removed instance. Select it and choose 'Include as pending below'. Make sure that the correct port is chosenVerify the request below and select 'Register pending targets'The instance should be in 'Initial' state in the target groupWait for the instance to be properly added to the target groupWait for all instances to have 'Healthy' status instead of 'Initial'. 
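The ordering constraints of this SOP (drain one instance from every target group, restart it, re-register it, and only then touch the next instance) can be sketched as a plan generator. The instance names come from this page; the target-group names are illustrative placeholders:

```python
def rolling_restart_plan(instances, target_groups):
    """Yield (instance, action) steps; only one instance is ever out of service at a time."""
    for inst in instances:
        for tg in target_groups:
            yield inst, f"deregister from {tg}"   # wait for 'Draining' -> removed
        yield inst, "restart kong via ansible playbook"
        for tg in target_groups:
            yield inst, f"register in {tg}"       # wait for 'Initial' -> 'Healthy'

steps = list(rolling_restart_plan(
    ["EUW1Z2DL113", "EUW1Z2DL114"],
    ["mdmhub-api", "kafka-1", "kafka-2", "kafka-3"],  # placeholder target-group names
))
```

The generator never interleaves two instances, which is exactly the "do not remove two instances at the same time" rule above.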
Make sure everything works as expected (check Kong logs)Perform steps 1-5 for the second Kong instanceSecond instance: euw1z2dl114.COMPANY.comSecond Kong host (ansible inventory): kong_02" + }, + { + "title": "Full Environment Refresh - Reltio Clone", + "pageID": "386803861", + "pageLink": "/display/GMDM/Full+Environment+Refresh+-+Reltio+Clone", + "content": "" + }, + { + "title": "Full Environment Refresh", + "pageID": "386803864", + "pageLink": "/display/GMDM/Full+Environment+Refresh", + "content": "IntroductionThe steps below record the work done in January 2024 for the Reltio Data Clone between GBLUS PROD → STAGE and APAC PROD → STAGE.Environment refresh consists of:disabling MDM Hub componentsfull cleanup of existing STAGE data: Kafka and MongoDBidentifying and copying cache collections from PROD to STAGE MongoDBre-enabling MDM Hub componentsrunning the Hub Reconciliation DAGDisabling Services, Kafka CleanupComment out the EFK topics in the fluentd configuration:\nmdm-hub-cluster-env\\apac\\nprod\\namespaces\\apac-backend\\values.yaml\nDeploy apac-backend through Jenkins, to apply the fluentd changes:https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_backend_apac_nprod/(fluentd pods in the apac-backend namespace should recreate)Block the apac-stage mdmhub deployment job in Jenkins:https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/Notify the monitoring/support team that the environment is disabled (in case alerts are triggered or users inquire via emails)Use the Kubernetes & Helm command line tools to uninstall the mdmhub components and Kafka topics:use kubectx/kubectl to switch context to the apac-nprod cluster:use helm to uninstall the two releases below from the apac-nprod cluster (you can confirm release names by using the "$ helm list -A" command):\n$ helm uninstall mdmhub -n apac-stage\n$ helm uninstall kafka-resources-apac-stage -n apac-backend\nconfirm there
are no pods in the apac-stage namespace:list remaining Kafka topics (kubernetes kafkatopic resources) with the "apac-stage" prefix:manually remove all the remaining "apac-stage" prefixed topics. Note that it is expected that some topics remain - some of them have been created by Kafka Streams, for example.MongoDB CleanupLog into the APAC NPROD MongoDB through Studio 3T.Clear all the collections in the apac-stage database.Exceptions:"batchInstance" collection"quartz-" prefixed collections"shedLock" collectionWait until MongoDB cleans all these collections (could take a few hours):Log into the APAC PROD MongoDB through Studio 3T. You want to have both connections in the same session.Copy the collections below from APAC PROD (Ctrl+C):keyIdRegistryrelationCachesequenceCountersRight-click the APAC NPROD database "apac-stage" and choose "Paste Collections"A dialog will appear - use the options below for each collection:Collections Copy Mode: Append to existing target collectionDocuments Copy Mode: Overwrite documents with same _idCopy indices from the source collection: uncheckWait until all the collections are copied.Snowflake CleanupClean up the base tables:\nTRUNCATE TABLE CUSTOMER.ENTITIES;\nTRUNCATE TABLE CUSTOMER.RELATIONS;\nTRUNCATE TABLE CUSTOMER.LOV_DATA;\nTRUNCATE TABLE CUSTOMER.MATCHES;\nTRUNCATE TABLE CUSTOMER.MERGES;\nTRUNCATE TABLE CUSTOMER.HIST_INACTIVE_ENTITIES;\nRun the full materialization jobs:\nCALL CUSTOMER.MATERIALIZE_FULL_ALL('M', 'CUSTOMER');\nCALL CUSTOMER.HI_MATERIALIZE_FULL_ALL('CUSTOMER');\nCheck for any tables that haven't been cleaned properly:\nSELECT *\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;\nRun the materialization for those tables specifically, or run the queries prepared by the query below:\nSELECT 'TRUNCATE TABLE ' || TABLE_SCHEMA || '.' 
|| TABLE_NAME || ';'\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;\nRe-Enabling HubGet a confirmation that the Reltio data cloning process has finished.Re-enable the mdmhub apac-stage deployment job and perform a deployment of an adequate version.Uncomment the previously commented EFK transaction topic list (see: Disabling Services, Kafka Cleanup, step 1), deploy apac-backend. Fluentd pods in the apac-backend namespace should recreate.Wait for both deployments to finish (they should be performed one after another).Test the MDM Hub API - try sending a couple of GET requests to fetch some entities that exist in Reltio. Confirm that the result is correct and the requests are visible in Kibana (dashboard APAC-STAGE API Calls):(2025-05-19 Piotr: we no longer need to do this - Matches Enricher now deploys with minimum 1 pod in every environment) Run the command below in your local Kafka client environment.\nkafka-console-consumer.sh --bootstrap-server kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 --group apac-stage-matches-enricher --topic apac-stage-internal-reltio-matches-events --consumer.config client.sasl.properties\nThis needs to be done to create the consumer group, so that Keda can scale the deployment in the future.Running The Hub ReconciliationAfter confirming that the Hub is up and working correctly, navigate to APAC NPROD Airflow:https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/homeTrigger the hub_reconciliation_v2_apac_stage DAG:To minimize the chances of overfilling the Kafka storage, set the retention of the reconciliation metrics topics to an hour:Navigate to APAC NPROD AKHQ:https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/Find the topics below and navigate to their "Configs" 
tabs:apac-stage-internal-reconciliation-metrics-calculator-inhttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/apac-nprod-mdm-kafka/topic/apac-stage-internal-reconciliation-metrics-calculator-in/configsapac-stage-internal-reconciliation-metrics-efk-transactionshttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/apac-nprod-mdm-kafka/topic/apac-stage-internal-reconciliation-metrics-efk-transactions/configsFor each topic, find the config "retention.ms" (do not mistake it with "delete.retention.ms", which is responsible for compaction) and set it to 3600000. Apply changes.Monitor the DAG, event processing and Kafka/Elasticsearch storage.After the DAG finishes, disable reconciliation jobs (if reconciliations start uncontrollably before the data is fully restored, it will unnecessarily increase the workload):Manually disable the hub_reconciliation_v2_apac_stage DAG: https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/dags/hub_reconciliation_v2_apac_stage/gridManually disable the reconciliation_snowflake_apac_stage DAG: https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/dags/reconciliation_snowflake_apac_stage/gridAfter all reconciliation events are processed, the environment is ready to use. Compare entity/relation counts between Reltio-MongoDB-Snowflake to confirm that everything went well.Re-enable reconciliation jobs from 5." + }, + { + "title": "Full Environment Refresh - Legacy (Docker Environments)", + "pageID": "164470082", + "pageLink": "/pages/viewpage.action?pageId=164470082", + "content": "Steps to take when a Hub environment needs to be cleaned up or refreshed.1.PreparationAdd line ssl.endpoint.identification.algorithm= to client.sasl.properties in your kafka_client folder.Having done that go to the /bin folder and launch the command:$ ./consumer_groups_sasl.sh --describe --group | sortFor every consumer group in this environment. 
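Step 1 of the preparation above repeats the same describe command for every consumer group in the environment; a tiny sketch that builds those command lines (the group names here are hypothetical examples, not the real inventory):

```python
def describe_commands(groups):
    # One describe-and-sort invocation per consumer group, as in step 1 of the preparation.
    return [f"./consumer_groups_sasl.sh --describe --group {g} | sort" for g in groups]

cmds = describe_commands(["dev-client-a", "dev-client-b"])  # hypothetical group names
```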
This will list currently connected consumers.If there are external consumers connected they will prevent deletion of topics they're connected to. Contact people responsible for those consumers to disconnect them.2. Stop GW/Hub components: subscriber, publisher, manager, batch_channel$ docker stop 3. Double-check that consumer groups (internal and external) have been disconnected4. Delete all topics:a) Preparation:$ docker exec -it kafka_kafka_1 bash$ export KAFKA_OPTS=-Djava.security.auth.login.config=/ssl/kafka_server_jaas.conf$ kafka-topics.sh --zookeeper zookeeper:2181 --list | grep b) Deleting the topics:$ kafka-topics.sh --zookeeper zookeeper:2181 --delete --topic || true && \\kafka-topics.sh --zookeeper zookeeper:2181 --delete --topic || true&& \\kafka-topics.sh --zookeeper zookeeper:2181 --delete --topic   || true &&          (...) continue for all topics5. Check whether topics are deleted on disk and using $ ./topics.sh --list 6. Recreate the topics by launching the Ansible playbook with parameter create_or_update: True set for desired topics in topics.yml7. Cleanup MongoDB:Access the collections corresponding to the desired environment and choose option "Clear collections" on the following collections: "entityHistory","gateway_errors", "hub_errors", hub_reconcilliation.8. After confirming everything is ready (in case of environment refresh there has to be a notification from Reltio that it's ready) restart GW and Hub components9. Check component logs to confirm they started up and connected correctly." + }, + { + "title": "Hub Application:", + "pageID": "302706338", + "pageLink": "/pages/viewpage.action?pageId=302706338", + "content": "" + }, + { + "title": "Batch Channel: Importing MAPP's Extract", + "pageID": "164470063", + "pageLink": "/display/GMDM/Batch+Channel%3A+Importing+MAPP%27s+Extract", + "content": "To import MAPP's extract you have to:Have original extract (eg. 
original.csv) which was uploaded to the Teams channel,Open it in Excel and save as "CSV (Comma delimited) (*.csv)",Run the dos2unix tool on the file.Do steps 2 and 3 on the extract file (eg. changes.csv) received from MAPP's team,Compare the original file to the file with changes and select only the lines which were changed in the second file: ( head -1 changes.csv && diff original.csv changes.csv | grep '^>' | sed 's/^> //' ) > result.csvDivide the result file into smaller ones by running the splitFile.sh script: ./splitFile.sh  result.csv. The script will generate a set of files whose names will end with _{idx}.{extension} eg.: result_00.csv, result_01.csv, result_02.csv etc.Upload the result set of files to the s3 location: s3://pfe-baiaes-eu-w1-project/mdm/inbound/mapp/. This action will trigger the batch-channel component, which will start loading changes to MDM.splitFile.sh" + }, + { + "title": "Callback Service: How to Find Events Stuck in Partial State", + "pageID": "273681936", + "pageLink": "/display/GMDM/Callback+Service%3A+How+to+Find+Events+Stuck+in+Partial+State", + "content": "What is partial state?When an event gets processed by Callback Service, if any change is done at the precallback stage, the event will not be sent further to Event Publisher. It is expected that in a few seconds another event will come, signaling the change done by the precallback logic - this one gets passed to Publisher and downstream clients/Snowflake, provided that precallback detects no further need for a change.Sometimes the second event does not come - this is what we call a partial state. It means that the update event will not actually reach Snowflake and downstream clients. The PartialCounter functionality of CallbackService was implemented to monitor such behaviour.How to identify that an event is stuck in partial state?PartialCounter counts events which have not been passed down to Event Publisher (identified by Reltio URI) and exports this count as a Prometheus (Actuator) metric. 
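The PartialCounter idea described above can be modelled with a toy in-memory counter (class and method names here are illustrative, not the real CallbackService implementation):

```python
import time

class PartialCounter:
    """Toy model: track events held back at precallback, keyed by Reltio URI."""

    def __init__(self):
        self.partials = {}                        # uri -> first-seen timestamp

    def held_back(self, uri):                     # precallback changed something
        self.partials.setdefault(uri, time.time())

    def passed_down(self, uri):                   # follow-up event reached Event Publisher
        self.partials.pop(uri, None)

    def stuck(self, older_than_s=24 * 3600):      # what the 24h Prometheus alert looks for
        now = time.time()
        return [u for u, t in self.partials.items() if now - t > older_than_s]
```

An entry that is never cleared by a follow-up event is exactly an event stuck in partial state.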
Prometheus alert "callback_service_partial_stuck_24h" notifies us that an event has been stuck for more than 24 hours.How to find events stuck in partial state?Use the command below to fetch the list of currently stuck events as a JSON array (example for emea-dev). You will have to authenticate as mdm_test_user or mdm_admin:\n# curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/precallback/partials\nMore details can be found in the Swagger Documentation: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/What to do?Events identified as stuck in partial state should be reconciled." + }, + { + "title": "Integration Test - how to run tests locally from your computer to target environment", + "pageID": "337839648", + "pageLink": "/display/GMDM/Integration+Test+-+how+to+run+tests+locally+from+your+computer+to+target+environment", + "content": "Steps:First, choose the environment and go to the Jenkins integration tests directory:https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/based on APAC DEV:go to https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/choose the latest RUN and click Workspace on the leftClick on the /home/jenkins workspace linkGo to /code/mdm-integretion-tests/src/test/resources/ Download 3 filescitrus-application.propertieskafka_jaas.confkafka_truststore.jksEdit citrus-application.propertieschange the local K8s URLs to the real URLs and the local PATH. Leave the other variables as is. 
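The path-related edits above (pointing the JAAS config and truststore at your local checkout) can be sketched as a small helper. The two key names are taken from the example properties on this page; the target directory is whatever your local resources folder is:

```python
import ntpath  # Windows-style path handling, works on any host OS

PATH_KEYS = ("java.security.auth.login.config", "kafka.ssl.truststore.location")

def localize(props: dict, local_resources_dir: str) -> dict:
    """Rewrite the path-valued properties to point at a local resources directory."""
    out = dict(props)
    for key in PATH_KEYS:
        if key in out:
            out[key] = ntpath.join(local_resources_dir, ntpath.basename(out[key]))
    return out
```

All other properties (URLs, credentials, topic names) are left untouched, matching the "leave other variables as is" advice above.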
in that case, use the KeePass that contains all URLs:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/credentials.kdbxExample code that is adjusted to APAC DEVAPI URLs + local PATH to certsThis is just the example from APAC DEV that contains the C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\ path - replace this with your own code localization \ncitrus.spring.java.config=com.COMPANY.mdm.tests.config.SpringConfiguration\n\njava.security.auth.login.config=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_jaas.conf\n\nreltio.oauth.url=https://auth.reltio.com/\nreltio.oauth.basic=secret\nreltio.url=https://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgS\nreltio.username=svc-pfe-mdmhub\nreltio.password=secret\nreltio.apiKey=secret\nreltio.apiSecret=secret\n\nmongo.dbUrl=mongodb://admin:secret@mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017/reltio_apac-dev?authMechanism=SCRAM-SHA-256&authSource=admin\nmongo.url=mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017\nmongo.dbName=reltio_apac-dev\nmongo.username=mdmgw\nmongo.password=secret\n\ngateway.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-dev\ngateway.username=mdm_test_user\ngateway.apiKey=secret\n\nbatchService.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-apac-dev\nbatchService.username=mdm_test_user\nbatchService.apiKey=secret\nbatchService.limitedUsername=mdm_test_user_limited\nbatchService.limitedApiKey=secret\n\nmapchannel.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/dev-map-api\nmapchannel.username=mdm_test_user\nmapchannel.apiKey=secret\n\napiRouter.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-apac-dev\napiRouter.dcrReltioUserApiKey=secret\napiRouter.dcrOneKeyUserApiKey=secret\napiRouter.intTestUserApiKey=secret\napiRouter.dcrReltioUser=mdm_dcr2_test_reltio_user\napiRouter.dcrOneKeyUser=mdm_dcr2_test_onekey_user\napiRouter
.intTestUser=mdm_test_user\n\nadminService.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-dev\nadminService.intTestUserApiKey=secret\nadminService.intTestUser=mdm_test_user\n\ndeg.url=https://hcp-gateway-dev.eu.cloudhub.io/v1\ndeg.oAuth2Service=https://hcp-gateway-dev.eu.cloudhub.io/\ndeg.apiKey=secret\ndeg.apiSecret=secret\n\nkafka.brokers=kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094\nkafka.group=int_test_dev\nkafka.topic=apac-dev-out-simple-all-int-tests-all\nkafka.security.protocol=SASL_SSL\nkafka.sasl.mechanism=SCRAM-SHA-512\nkafka.ssl.truststore.location=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_truststore.jks\nkafka.ssl.truststore.password=secret\nkafka.receive.timeout=60000\nkafka.purgeEndpoints.timeout=100000\n...\n...\n...\nNow go to your local code checkout - mdm-hub-inbound-services\\mdm-integretion-testsCopy 3 files to the mdm-integretion-tests/src/test/resourcesSelect the test and click RUNEND - the result: You are running Jenkins integration tests from your local computer on target DEV environment. Now you can check logs locally and repeat. " + }, + { + "title": "Manager: Reload Entity - Fix COMPANYAddressID Using Reload Action", + "pageID": "229180577", + "pageLink": "/display/GMDM/Manager%3A+Reload+Entity+-+Fix+COMPANYAddressID+Using+Reload+Action", + "content": "Before starting check what DQ rules have -reload action on the list. 
Now it is SourceMatchCategory and COMPANYAddressId. Check here - - example dq rule. Update with the -reload operation to reload more DQ rules.Generate events using the script: script or script - fix SourceMatchCategory without ONEKEYthe script gets all ACTIVE entities with Addresses that have a missing COMPANYAddressIdwhose COMPANYAddressID is lower than the correct value for each env: emea 5000000000  amer 6000000000  apac 7000000000The script generates events, example:entities/lwBrc9K|{"targetEntity":{"entityURI":"entities/lwBrc9K","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/1350l3D6|{"targetEntity":{"entityURI":"entities/1350l3D6","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/1350kZNI|{"targetEntity":{"entityURI":"entities/1350kZNI","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/cPSKBB9|{"targetEntity":{"entityURI":"entities/cPSKBB9","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}Make a fix for COMPANYAddressID that is lower than the correct value for each envGo to the keyIdRegistry Mongo collectionfind all entries that have generatedId lower than emea 5000000000  amer 6000000000  apac 7000000000increase the generatedId by adding the correct value from the correct environment using the script - scriptGet the file and push it to the -internal-async-all-reload-entity topic./start_sasl_producer.sh -internal-async-all-reload-entityor using the input file  ./start_sasl_producer.sh -internal-async-all-reload-entity < reload_dev_emea_pack_entities.txt (a file that contains each JSON generated by the Mongo script, each row on a new line)How to run a script in Docker:example emea DEV:go to - svc-mdmnpr@euw1z2dl111docker exec -it mongo_mongo_1 bashcd  /data/configdbcreate the script - touch reload_entities_fix_COMPANYaddressid_hub.jsedit the header:db = 
db.getSiblingDB("")db.auth("mdm_hub", "")RUN: nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p --authenticationDatabase reltio_dev reload_entities_fix_COMPANYaddressid_hub.js &ORnohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p --authenticationDatabase reltio_dev reload_entities_fix_sourcematch_hub_DEV.js > smc_DEV_FIX.out 2>&1 &nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p --authenticationDatabase reltio_qa reload_entities_fix_sourcematch_hub_QA.js > smc_QA_FIX.out 2>&1 &nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p --authenticationDatabase reltio_stage reload_entities_fix_sourcematch_hub_STAGE.js > smc_STAGE_FIX.out 2>&1 &" + }, + { + "title": "Manager: Resubmitting Failed Records", + "pageID": "164470200", + "pageLink": "/display/GMDM/Manager%3A+Resubmitting+Failed+Records", + "content": "There is a new API in manager for getting/resubmitting/removing failed records from batches.1. Get failed records method - it returns a list of errors based on the provided criteriaPOST /errorsRequestList of FieldFilter objectsfield - the name of the field that is stored in errorqueueoperation - the operation that is used to create the query, possible options are: Equals, Is, Greater, Lowervalue - the value against which we compareii. Example:[        {            "field" : "HubAsyncBatchServiceBatchName",            "operation" : "Equals",            "value" : "testBatchBundle"        }    ]b. Responsei. List of Error objectsid - identifier of the error batchName - batch nameobjectType - object typebatchInstanceId - batch instance idkey - keyerrorClass - the name of the error class that happened during record submissionerrorMessage - the message of the error that happened during record submissionresubmitted - true/false - it tells if the error was resubmitted or notdeleted - true/false - it tells if the error was deleted or not during the remove API callii. 
Example:[    {        "id": "5fa93377e720a55f0bb68c99",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:b09b6085-28dc-451d-85b6-fe3ce2079446\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93378e720a55f0bb68ca6",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:25bfc672-9ba1-44a5-b3c1-d657de701d76\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93377e720a55f0bb68c9a",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:60067d46-07a6-4902-b9e8-1bf2acbc8a6e\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93377e720a55f0bb68c9b",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:e8d05d96-7aa3-4059-895e-ce20550d7ead\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa96ba300061d51e822854a",        "batchName": 
"testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "iN2LB3TiT3+Sd5dYemDGHg",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:973411ec-33d4-477e-a6ae-aca5a0875abb\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    }]2. Resubmit failed records - it takes a list of FieldFilter objects and returns the list of errors that were resubmitted - if an error was correctly resubmitted, its resubmitted flag is set to truePOST /errors/_resubmita.  Requesti. List of FieldFilter objectsb. Responsei. List of Error objects3. Remove failed records - it takes a list of FieldFilter objects that contains the criteria for removing error objects and returns the list of errors that were deleted - if an error was correctly deleted, its deleted flag is set to truePOST /errors/_removea.  Requesti. List of FieldFilter objectsb. Responsei. List of Error objects" + }, + { + "title": "Issues diagnosis", + "pageID": "438905271", + "pageLink": "/display/GMDM/Issues+diagnosis", + "content": "" + }, + { + "title": "API issues", + "pageID": "438905273", + "pageLink": "/display/GMDM/API+issues", + "content": "Symptomsat least one of the following alerts is active:kong_http_500_status_prod,kong_http_502_status_prod,kong_http_503_status_prod,kong3_http_500_status_prod,kong3_http_502_status_prod,kong3_http_503_status_prod,Clients report problems related to communication with our HTTP endpoints.ConfirmationTo confirm that a problem with the API is really occurring, you have to invoke some operation that is exposed by the HTTP interface. To do this you can use Postman or another tool that can run HTTP requests. 
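As a toy summary of the confirmation checks on this page, each component maps to one expected HTTP status (a sketch, not an exhaustive health model; note that for batch-service the 403 is the healthy answer):

```python
EXPECTED_STATUS = {
    "mdm-manager": 200,    # GET /entities?filter=... should return HCP objects
    "api-router": 200,     # same check through the API router
    "batch-service": 403,  # batch 'NA' is not allowed, so 403 is expected
}

def looks_healthy(component: str, status: int) -> bool:
    # Any other status (500/502/503 from the Kong alerts above) means further diagnosis.
    return EXPECTED_STATUS.get(component) == status
```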
Below you can find a few examples that describe how to check the API in components that expose it:mdm-manager:GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP') - The request should execute properly (HTTP status code 200) and return some HCP objects.api-router:GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP') - The request should execute properly (HTTP status code 200) and return some HCP objects.batch-service:GET {{ batch_service_url }}/batchController/NA/instances/NA - The request should return HTTP code 403 and body:{    "code": "403",    "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."}dcr-service2:TODOFinding the reasonThe diagram below presents the HTTP request processing flow with the engaged components:" + }, + { + "title": "Kafka:", + "pageID": "164470059", + "pageLink": "/pages/viewpage.action?pageId=164470059", + "content": "" + }, + { + "title": "Client Configuration", + "pageID": "243862610", + "pageLink": "/display/GMDM/Client+Configuration", + "content": "      1. InstallationTo install the Kafka client, binary version 2.8.1 should be downloaded and installed fromhttps://kafka.apache.org/downloads      2. The email from the MDMHUB TeamIn the email received from the MDMHUB support team you can find connection parameters like server address, topic name, group name, and the following files:client.sasl.properties – kafka consumer properties,kafka_client_jaas.conf – JAAS credentials required to authenticate with the Kafka server,kafka_truststore.jks – java truststore required to build the certification path of SSL connections.      3. 
Example command to test the client and configurationTo connect with Kafka using the command line client, save the delivered files on your disk and run the following command:export KAFKA_OPTS=-Djava.security.auth.login.config={ ●●●●●●●●●●●● Kafka_client_jaas.conf }kafka-console-consumer.sh --bootstrap-server { kafka server } --group { group } --topic { topic_name } --consumer.config { consumer config file eg. client.sasl.properties}For example, for amer dev:●●●●●●●●●●● in provided file: kafka_client_jaas.confKafka server: kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094Group: dev-muleTopic: dev-out-full-pforcerx-grv-allConsumer config is in the provided file: client.sasl.propertiesexport KAFKA_OPTS=-Djava.security.auth.login.config=kafka_client_jaas.confkafka-console-consumer.sh --bootstrap-server kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 --group dev-mule --topic dev-out-full-pforcerx-grv-all --consumer.config client.sasl.properties" + }, + { + "title": "Client Configuration in k8s", + "pageID": "284806978", + "pageLink": "/display/GMDM/Client+Configuration+in+k8s", + "content": "Each of the k8s clusters has a kafka-client pod installed. To find this pod you have to list all pods deployed in the *-backend namespace and select the pod whose name starts with kafka-client:\nkubectl get pods --namespace emea-backend  | grep kafka-client\nTo run commands on this pod you have to note its name and use it in the &quot;kubectl exec&quot; command:Using kubectl exec with kafka client\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- \nAs a command you can use any of the standard Kafka client scripts eg. kafka-consumer-groups.sh or one of the wrapper scripts which simplify the configuration of the standard scripts - broker and authentication configuration. They are the following scripts:consumer_groups.sh - it's a wrapper of kafka-consumer-groups,consumer_groups_delete.sh - it's also a wrapper of kafka-consumer-groups and can be used only to delete a consumer group. 
It has only one input argument - the consumer group name,reset_offsets.sh - it's also a wrapper of kafka-consumer-groups and can be used only to reset the offsets of a consumer group,start_consumer.sh - it's a wrapper of kafka-console-consumer,start_producer.sh - it's a wrapper of kafka-console-producer,topics.sh - it's a wrapper of kafka-topics.The kafka-client pod has another Kafka tool named kcat. To use this tool you have to run commands on the kafka-kcat container using the wrapper script kcat.sh:Running kcat.sh on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -c kafka-kcat -- kcat.sh\nNOTE: Remember that all wrapper scripts work with admin permissions.ExamplesDescribe the current offsets of a groupDescribe group dev_grv_pforcerx on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- consumer_groups.sh --describe --group dev_grv_pforcerx\nReset offset of group to earliestReset offset to earliest for group group1 and topic gbl-dev-internal-gw-efk-transactions on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- reset_offsets.sh --group group1 --to-earliest gbl-dev-internal-gw-efk-transactions\nConsume events from the beginning of a topic. It will produce output where each line has the following format: |Read topic gbl-dev-internal-gw-efk-transactions from beginning on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- start_consumer.sh gbl-dev-internal-gw-efk-transactions --from-beginning\nSend messages defined in a text file to Kafka topics. 
Each message in the file has to have the following format: |Send all messages from file file_with_messages.csv to topic gbl-dev-internal-gw-efk-transactions\nkubectl exec -i --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- start_producer.sh gbl-dev-internal-gw-efk-transactions < file_with_messages.csv\nDelete consumer group on topicDelete consumer group test on topic gbl-dev-internal-gw-efk-transactions on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- consumer_groups.sh --delete-offsets --group test gbl-dev-internal-gw-efk-transactions\nList topics and their partitions using kcatList topics info on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -c kafka-kcat -- kcat.sh -L\n" + }, + { + "title": "How to Add a New Consumer Group", + "pageID": "164470080", + "pageLink": "/display/GMDM/How+to+Add+a+New+Consumer+Group", + "content": "These instructions demonstrate how to add an additional consumer group to an existing topic.Open the file &quot;topics.yml&quot; located under mdm-reltio-handler-env\\inventory\\\\group_vars\\kafka and find the topic to be updated. In this example the new consumer group &quot;flex_dev_prj2&quot; was added to topic &quot;dev-out-full-flex-all&quot;.   2. Make sure the parameter &quot;create_or_update&quot; is set to True for the desired topic:   3.  Additionally, double-check that the parameter &quot;install_only_topics&quot; in the &quot;all.yml&quot; file is set to True:    4. Save the files after making the changes. Run ansible to update the configuration using the following command:  ansible-playbook install_hub_broker.yml -i inventory//inventory --limit broker1 --vault-password-file=~/vault-password-file   5. Double-check the ansible output to make sure the changes have been implemented correctly.   6. Change the &quot;create_or_update&quot; parameter in &quot;topics.yml&quot; back to False.   7. Save the file and upload the new configuration to git. 
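The edit-apply-revert cycle above (steps 1, 2 and 6) can be sketched as one function; the dict layout is illustrative, not the real topics.yml schema:

```python
def add_consumer_group(topics: dict, topic_name: str, group: str) -> dict:
    """Add a consumer group to a topic and flag the topic for the ansible run."""
    topic = topics[topic_name]
    if group not in topic["consumer_groups"]:
        topic["consumer_groups"].append(group)
    topic["create_or_update"] = True  # remember to set this back to False afterwards (step 6)
    return topics

# Topic and group names taken from the example on this page.
topics = {"dev-out-full-flex-all": {"consumer_groups": ["flex_dev"], "create_or_update": False}}
add_consumer_group(topics, "dev-out-full-flex-all", "flex_dev_prj2")
```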
" + }, + { + "title": "How to Generate JKS Keystore and Truststore", + "pageID": "164470062", + "pageLink": "/display/GMDM/How+to+Generate+JKS+Keystore+and+Truststore", + "content": "This instruction is based on the current GBL PROD Kafka keystore.jks and truststore.jks generation. Create a certificate pair using the keytool genkeypair command keytool -genkeypair -alias kafka.mdm-gateway.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.mdm-gateway.COMPANY.com, O=COMPANY, L=mdm_hub, C=US"  set the security password, set the same ●●●●●●●●●●●● the key passphraseNow create a certificate signing request (CSR) which has to be passed on to our external / third-party CA (Certificate Authority).keytool -certreq -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.csr -keystore server.keystore.jks Send the CSR file through the Request Manager:Log in to the BT On DemandGo to Request Manager.Click "Continue"Search for "Digital Certificates"Select the "Digital Certificates" Application and click "Continue"Click "Checkout"Select "COMPANY SSL Certificate - Internal Only" and fill:Copy CSR filefill SAN, e.g. from the GBL PROD Kafka: mdm-gateway.COMPANY.commdm-gateway-int.COMPANY.com●●●●●●●●●●●●●mdm-broker-p1.COMPANY.comEUW1Z1PL017.EUPWBS.COMeuw1z1pl017.COMPANY.com●●●●●●●●●●●●●mdm-broker-p2.COMPANY.comEUW1Z1PL021.EUPWBS.COMeuw1z1pl021.COMPANY.com●●●●●●●●●●●●●mdm-broker-p3.COMPANY.comEUW1Z1PL022.EUPWBS.COMeuw1z1pl022.COMPANY.comfill email addressselect "No" for additional SSL Cert request, ContinueSend the CSR request.When you receive the signed certificate, verify it:Check the Subject: CN and O should be filled just like in 1.a.Check the SAN: there should be the list of hosts from 3.g.ii.If the certificate is correct CONTINUE:Now we need to import these certificates into the server.keystore.jks keystore. 
Import the intermediate certificate first --> then the root certificate --> and then the signed cert.keytool -importcert -alias inter -file PBACA-G2.cer -keystore server.keystore.jkskeytool -importcert -alias root -file RootCA-G2.cer -keystore server.keystore.jkskeytool -importcert -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.cer -keystore server.keystore.jksAfter importing all three certificates you should see the "Certificate reply was installed in keystore" message.Now list the keystore and check if all the certificates were imported successfully.keytool -list -keystore server.keystore.jksYour keystore contains 3 entriesFor debugging start with the "-v" parameterLet's create a truststore now. Set the security ●●●●●●●●●● different from the keystorekeytool -import -file PBACA-G2.cer -alias inter -keystore server.truststore.jkskeytool -import -file RootCA-G2.cer -alias root -keystore server.truststore.jksCOMPANY Certificates:PBACA-G2.cer RootCA-G2.cer" + }, + { + "title": "Reset Consumergroup Offset", + "pageID": "243862614", + "pageLink": "/display/GMDM/Reset+Consumergroup+Offset", + "content": "To reset an offset on a Kafka topic you need to have the command line client configured. The tool that can do this action is kafka-consumer-groups.sh. You have to specify a few parameters which determine where you want to reset the offset:--topic - the topic name,--group - the consumer group name,and specify the offset value by providing one of the following parameters:1. --shift-byReset offsets by shifting the current offset by the provided number, which can be negative or positive:kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config {  client.sasl.properties } --reset-offsets --shift-by {  number from formula } --topic {  topic } --execute2. --to-datetimeSwitch which can be used to reset the offset from a datetime. 
Date should be in format 'YYYY-MM-DDTHH:mm:SS.sss'kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config {  client.sasl.properties } --reset-offsets --to-datetime 2022-02-02T00:00:00.000Z --topic {  topic } --execute3. --to-earliestSwitch which can be used to reset the offsets to the earliest (oldest) offset which is available in the topic.kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config {  client.sasl.properties } --reset-offsets --to-earliest --topic {  topic } --execute4. --to-latestSwitch which can be used to reset the offsets to the latest (the most recent) offset which is available in the topic.kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config {  client.sasl.properties } --reset-offsets --to-latest --topic {  topic } --executeExampleLet's assume that you want your consumer to have 10000 messages to read and the topic has 10 partitions. The first step is moving the current offset to the latest to make sure that there are no messages to read on the topic:kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config {  client.sasl.properties } --reset-offsets --to-latest --topic {  topic } --executeThen calculate the offset you need to shift to achieve the requested lag using the following formula:-1 * desired_lag / number_of_partitionsIn our example the result will be: -1 * 10000 / 10 = -1000. 
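The formula above can be checked with shell arithmetic (variable names are illustrative; the values are from the example):

```shell
# Compute the --shift-by value needed to leave `desired_lag` messages
# spread evenly across all partitions of the topic.
desired_lag=10000
number_of_partitions=10
shift_by=$(( -1 * desired_lag / number_of_partitions ))
echo "$shift_by"   # -1000
```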
Use this value in the command below:kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config {  client.sasl.properties } --reset-offsets --shift-by -1000 --topic {  topic } --execute" + }, + { + "title": "Kong gateway", + "pageID": "462065054", + "pageLink": "/display/GMDM/Kong+gateway", + "content": "" + }, + { + "title": "Kong gateway migration", + "pageID": "462065057", + "pageLink": "/display/GMDM/Kong+gateway+migration", + "content": "Installation procedureDeploy crds\n# Download package with crds to current directory\ntar -xzf crds_to_deploy.tar.gz\ncd crds_to_deploy/\nbase=$(pwd)\nBackup old crds\n# Switch to proper k8s context\nkubectx atp-mdmhub-nprod-apac\n\n# Get all crds from cluster and save them into file ${crd_name}_${env}.yaml\n# Args:\n# $1 = env\ncd $base\nmkdir old_apac_nprod\ncd old_apac_nprod\nget_crds.sh apac_nprod\n\n\nCreate new crds\ncd $base/new/splitted/\n# create new crds\nfor i in $(ls); do echo $i; kubectl create -f $i ; done\n# apply new crds\nfor i in $(ls); do echo $i; kubectl apply -f $i ; done\n# replace crds that were not properly installed \nfor i in   kic-crds.yaml01 kic-crds.yaml03 kic-crds.yaml05 kic-crds.yaml07 kic-crds.yaml10 kic-crds.yaml11; do echo $i ; kubectl replace -f $i; done\nApply new version of gatewayconfigurations \ncd $base/new\nkubectl replace -f gatewayconfiguration-new.yaml\nApply old version of kongingress\ncd $base/old\nkubectl replace -f kongingresses.configuration.konghq.com.yaml\n# Performing tests is advised to check if everything is workingDeploy operators with a version that has kong-gateway-operator (4.32.0 or newer)# Performing tests is advised to check if everything is workingMerge configurationhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1967/overviewDeploy backend (4.33.0-project-boldmove-SNAPSHOT or newer)# Performing tests is advised to check if everything is workingDeploy mdmhub components 
(4.33.0-project-boldmove-SNAPSHOT or newer)# Performing tests is advised to check if everything is workingTestsChecking all ingresses\n# Change /etc/hosts if dns's are not yet changed. To obtain all hosts that should be modified in /etc/hosts: \n# Switch to correct k8s context\n# k get ingresses -o custom-columns=host0:.spec.rules[0].host -A | tail -n +2 | sort | uniq | tr '\\n' ' '\n# To get dataplane svc: \n# k get svc -n kong -l gateway-operator.konghq.com/dataplane-service-type=ingress\nendpoints=$(kubectl get ingress -A -o custom-columns="NAME:.metadata.name,HOST:.spec.rules[0].host,PATH:.spec.rules[0].http.paths[0].path" | tail -n +2 | awk '{print "https://"$2":443"$3}')\nwhile IFS= read -r line; do echo -e "\\n\\n---- $line ----"; curl -k $line; done <<< $endpoints\nChecking plugins \nexport apikey="xxxxxxxxx"\nexport reltio_authorization="yyyyyyyyy"\nexport consul_token="zzzzzzzzzzz"\n\n\nkey-auth:\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev -H "apikey: $apikey"\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/2c9cf5a5 -H 'apikey: $apikey'\n\nmdm-external-oauth:\n curl --location --request POST 'https://devfederate.COMPANY.com/as/token.oauth2?grant_type=client_credentials' --header 'Content-Type: application/x-www-form-urlencoded' --header 'Origin: http://10.192.71.136:8000' --header "Authorization: Basic $reltio_authorization" | jq .access_token\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-dev/entities/2c9cf5a5 --header 'Authorization: Bearer access_token_from_previous_command'\n\ncorrelation-id:\n curl -v https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/2c9cf5a5 -H "apikey: $apikey" 2>&1 | grep hub-correlation-id \n\nbackend-auth:\n kibana-backend-auth:\n # Web browser \n    https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/\n\nsession:\n # Web browser \n   # Open debugger console in 
web browser and check if kong cookies are set\n\npre-function:\n k logs -n emea-backend -l app=consul -f --tail=0\n k exec -n airflow airflow-scheduler-0 -- curl -k http://http-mdmhub-kong-kong-proxy.kong.svc.cluster.local:80/v1/kv/dev?token=$consul_token\n\nopentelemetry:\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/testtest -H "apikey: $apikey"\n +\n # Web browser\n https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/apm/services/kong/overview?comparisonEnabled=true&environment=ENVIRONMENT_ALL&kuery=&latencyAggregationType=avg&offset=1d&rangeFrom=now-15h&rangeTo=now&serviceGroup=&transactionType=request\n\nprometheus:\n k exec -it dataplane-kong-knkcn-bjrc7-75bb85fc4c-2msfv -- /bin/bash\n curl localhost:8100/metrics\n\n\nCheck logsGateway operatorKong operatorOld kong pod - proxy and ingress controllerNew kong dataplaneNew kong controlPlaneStatus of new kong objects: DataplaneControlplaneGateway\nk get Gateway,dataplane,controlplane -n kong\nCheck services in old and new kong Old kong\nservices=$(k exec -n kong mdmhub-kong-kong-f548788cd-27ltl -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .\nNew kong\n services=$(k exec -n kong dataplane-kong-knkcn-bjrc7-5c9f596ff9-t94lf -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .\nReferenceKong operator configurationhttps://github.com/Kong/kong-operator/blob/main/deploy/crds/charts_v1alpha1_kong_cr.yamlKong gateway operator crd's referencehttps://docs.konghq.com/gateway-operator/latest/reference/custom-resources/#dataplanedeploymentoptionsget_crds.shcrds_to_deploy.tar.gz" + }, + { + "title": "MongoDB:", + "pageID": "164470061", + "pageLink": "/pages/viewpage.action?pageId=164470061", + "content": "" + }, + { + "title": "Mongo-SOP-001: Mongo Scripts", + "pageID": "164470056", + "pageLink": "/display/GMDM/Mongo-SOP-001%3A+Mongo+Scripts", + "content": "Create Mongo Indexes\nhub_errors\n db.hub_errors.createIndex({plannedResubmissionDate: 
-1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.hub_errors.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.hub_errors.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.hub_errors.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_errors\n db.gateway_errors.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.gateway_errors.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.gateway_errors.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.gateway_errors.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_transactions\n db.gateway_transactions.createIndex({transactionTS: -1}, {background: true, name: "idx_transactionTS_-1"});\n db.gateway_transactions.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n db.gateway_transactions.createIndex({requestId: -1}, {background: true, name: "idx_requestId_-1"});\n db.gateway_transactions.createIndex({username: -1}, {background: true, name: "idx_username_-1"});\n\n\nentityHistory\n db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n\n\nentityRelations\n db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityRelations.createIndex({entityType: -1}, {background: true, name: "idx_relationType"});\n db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n\n\n\n\n\n\nFind ACTIVE relations connected to inactive Entities\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityRelations").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t "status" : "ACTIVE"\n\t\t\t}\n\t\t},\n\n//\t\t// Stage 2\n//\t\t{\n//\t\t\t$limit: 1000\n//\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$lookup: // Equality Match\n\t\t\t{\n\t\t\t from: "entityHistory",\n\t\t\t localField: "relation.endObject.objectURI",\n\t\t\t foreignField: "_id",\n\t\t\t as: "matched_entity"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t "$or" : [\n\t\t\t {\n\t\t\t "matched_entity.status" : "INACTIVE"\n\t\t\t }, \n\t\t\t 
{\n\t\t\t "matched_entity.status" : "LOST_MERGE"\n\t\t\t },\n\t\t\t {\n\t\t\t "matched_entity.status" : "DELETED"\n\t\t\t } \n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t\t\t\t _id:"$matched_entity.status", \n\t\t\t\t\t\t count:{$sum:1}, \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n \t\nprintjson(result._batch) \t\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\nFix LOST_MERGE entities with wrong parentEntityId\nprint("START")\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityHistory").aggregate(\n // Pipeline\n [\n // Stage 1\n {\n $match: {\n "status" : "LOST_MERGE",\n "$and" : [\n {\n "$or" : [\n {\n "mdmSource" : "RELTIO"\n },\n {\n "mdmSource" : {\n "$exists" : false\n }\n }\n ]\n }\n ]\n }\n },\n\n // Stage 2\n {\n $graphLookup: {\n "from" : "entityHistory",\n "startWith" : "$_id",\n "connectFromField" : "parentEntityId",\n "connectToField" : "_id",\n "as" : "master",\n "maxDepth" : 10.0,\n "depthField" : "depthField"\n }\n },\n\n // Stage 3\n {\n $unwind: {\n "path" : "$master",\n "includeArrayIndex" : "arrayIndex",\n "preserveNullAndEmptyArrays" : false\n }\n },\n\n // Stage 4\n {\n $match: {\n "master.status" : {\n "$ne" : "LOST_MERGE"\n }\n }\n },\n\n // Stage 5\n {\n $redact: {\n "$cond" : {\n "if" : {\n "$ne" : [\n "$master._id",\n "$parentEntityId"\n ]\n },\n "then" : "$$KEEP",\n "else" : "$$PRUNE"\n }\n }\n },\n\n ]\n\n // Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\nresult.forEach(function(obj) {\n var id = obj._id;\n var masterId = obj.master._id;\n\n if( masterId !== undefined){\n\n print( id + " " + " " + obj.parentEntityId +" replaced to "+ masterId);\n var currentTime = new Date().getTime();\n\n var result = db.entityHistory.update( {"_id":id}, {$set: { "parentEntityId":masterId, "forceModificationDate": 
NumberLong(currentTime) } });\n printjson(result);\n }\n\n});\n\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\n\n\n\nFind entities based on the FILE with the crosswalks\ndb = db.getSiblingDB('reltio')\nvar file = cat('crosswalks.txt'); // read the crosswalks file\nvar crosswalk_ids = file.split('\\n'); // create an array of crosswalks\nfor (var i = 0, l = crosswalk_ids.length; i < l; i++){ // for every crosswalk search it in the entityHistory\n print("ID crosswalk: " + crosswalk_ids[i])\n var result = db.entityHistory.find({\n status: { $eq: "ACTIVE" },\n "entity.crosswalks.value": crosswalk_ids[i]\n }).projection({id:1, country:1})\n printjson(result.toArray());\n}\nFind ACTIVE entities with duplicated crosswalk - missing or wrong LOST_MERGE event\ndb.getCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/HCP" , mdmSource: "RELTIO", "lastModificationDate" : {\n\t\t\t "$gte" : NumberLong(1529966574477)\n\t\t\t } }\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$project: { _id: 0, "entity.crosswalks": 1,"entity.uri":2, "entity.updatedTime":3 }\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: "$entity.crosswalks"\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {_id:"$entity.crosswalks.value", count:{$sum:1}, entities:{$push: {uri:"$entity.uri", modificationTime:"$entity.updatedTime"}}}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$match: { count: { $gte: 2 } }\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$redact: {\n\t\t\t "$cond" : {\n\t\t\t "if" : {\n\t\t\t "$ne" : [\n\t\t\t "$entity.crosswalks.0.value", \n\t\t\t "$entity.crosswalks.1.value"\n\t\t\t ]\n\t\t\t }, \n\t\t\t "then" : "$$KEEP", \n\t\t\t "else" : "$$PRUNE"\n\t\t\t }\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n\nFix 
LOST_MERGE entities with missing entityType attribute\nprint("START")\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityHistory").aggregate(\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t "status" : "LOST_MERGE", \n\t\t\t "entityType" : {\n\t\t\t "$exists" : false\n\t\t\t }, \n\t\t\t "$and" : [\n\t\t\t {\n\t\t\t "$or" : [\n\t\t\t {\n\t\t\t "mdmSource" : "RELTIO"\n\t\t\t }, \n\t\t\t {\n\t\t\t "mdmSource" : {\n\t\t\t "$exists" : false\n\t\t\t }\n\t\t\t }\n\t\t\t ]\n\t\t\t }\n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$graphLookup: {\n\t\t\t "from" : "entityHistory", \n\t\t\t "startWith" : "$_id", \n\t\t\t "connectFromField" : "parentEntityId", \n\t\t\t "connectToField" : "_id", \n\t\t\t "as" : "master", \n\t\t\t "maxDepth" : 10.0, \n\t\t\t "depthField" : "depthField"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t "path" : "$master", \n\t\t\t "includeArrayIndex" : "arrayIndex", \n\t\t\t "preserveNullAndEmptyArrays" : false\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t "master.status" : {\n\t\t\t "$ne" : "LOST_MERGE"\n\t\t\t }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$redact: {\n\t\t\t "$cond" : {\n\t\t\t "if" : {\n\t\t\t "$eq" : [\n\t\t\t "$master._id", \n\t\t\t "$parentEntityId"\n\t\t\t ]\n\t\t\t }, \n\t\t\t "then" : "$$KEEP", \n\t\t\t "else" : "$$PRUNE"\n\t\t\t }\n\t\t\t}\n\t\t}\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n);\n\n\t\nresult.forEach(function(obj) {\n var id = obj._id;\n\n var masterEntityType = obj.master.entityType;\n\t\n\tif( masterEntityType !== undefined){\n if(obj.entityType == undefined){\n\t print("entityType is " + obj.entityType + " for " + id +", changing to "+ masterEntityType);\n\t var currentTime = new Date().getTime();\n\t\n var result = db.entityHistory.update( {"_id":id}, {$set: { "entityType":masterEntityType, "lastModificationDate": NumberLong(currentTime) } });\n printjson(result);\n 
}\n\t}\n\n});\n \t\n \t\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\nGenerate report from gateway_transaction (US)\ndb.getCollection("gateway_transactions").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t "$and" : [\n\t\t\t {\n\t\t\t "transactionTS" : {\n\t\t\t "$gte" : NumberLong(1551974500000)\n\t\t\t }, \n\t\t\t "username" : "dea_batch"\n\t\t\t }\n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$group: {\n\t\t\t _id:"$requestId", \n\t\t\t count: { $sum:1 },\n\t\t\t transactions: { $push : "$$ROOT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t path : "$transactions",\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$addFields: {\n\t\t\t \n\t\t\t "statusNumber": { \n\t\t\t $cond: { \n\t\t\t if: { \n\t\t\t $eq: ["$transactions.status", "failed"] \n\t\t\t }, \n\t\t\t then: 0, \n\t\t\t else: 1 \n\t\t\t }\n\t\t\t } \n\t\t\t \n\t\t\t \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$sort: {\n\t\t\t "transactions.requestId": 1, \n\t\t\t "statusNumber": -1,\n\t\t\t "transactions.transactionTS": -1 \n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$group: {\n\t\t\t _id:"$_id", \n\t\t\t transaction: { "$first": "$$CURRENT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 7\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "transaction.transactions.count": "$transaction.count" \n\t\t\t}\n\t\t},\n\n\t\t// Stage 8\n\t\t{\n\t\t\t$replaceRoot: {\n\t\t\t newRoot: "$transaction.transactions"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 9\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "file_raw_line": "$metadata.file_raw_line",\n\t\t\t "filename": "$metadata.filename"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 10\n\t\t{\n\t\t\t$project: {\n\t\t\t requestId : 1,\n\t\t\t count: 2,\n\t\t\t "filename": 3,\n\t\t\t uri: "$mdmUri",\n\t\t\t country: 5,\n\t\t\t source: 6,\n\t\t\t crosswalkId: 7,\n\t\t\t status: 8,\n\t\t\t timestamp: "$transactionTS",\n\t\t\t //"file_raw_line": 
10,\n\t\t\t\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n\nExport Config for Studio3T - format: 1 CURRENT_QUERY_RESULT 0 0 CSV 2 MAKE_NULL " false true true _id count country crosswalkId filename requestId source status timestamp uri false false false 0 false false false false Excel _id count country crosswalkId filename requestId source status timestamp uri FILE D:\\docs\\FLEX\\REPORT_transaction_log\\10_10_2018\\load_report.csv trueFind entities and GROUP BY country\n db.entityHistory.aggregate([\n {$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/HCP" } },\n {$project: { _id: 1, "country":1 } },\n {$group : {_id:"$country", count:{$sum:1},}},\n {$match: { count: { $gte: 2 } } },\n],{ allowDiskUse: true } )\nFind Entities where ALL/ANY of the crosswalks array objects has delete date set\n//https://stackoverflow.com/questions/43778747/check-if-a-field-exists-in-all-the-elements-of-an-array-in-mongodb-and-return-th?rq=1\n\n// find entities where ALL crosswalk array objects has delete date set (not + exists false)\ndb.entityHistory.find({\n entityType: "configuration/entityTypes/HCP",\n country: "br",\n status: "ACTIVE",\n "entity.crosswalks": { $not: { $elemMatch: { deleteDate: {$exists:false} } } }\n})\n\n// find entities where ANY OF crosswalk array objecst has delete date set\ndb.entityHistory.find({\n entityType: "configuration/entityTypes/HCP",\n country: "br",\n status: "ACTIVE",\n "entity.crosswalks": { $elemMatch: { deleteDate: {$exists:true} } }\n})\nExample of Multiple Update based on the search query\ndb.getCollection("entityHistory").update(\n { \n "status" : "LOST_MERGE", \n "entity" : {\n "$exists" : true\n }\n },\n { \n $set: { "lastModificationDate": NumberLong(1551433013000) }, \n $unset: {entity:""}\n },\n { multi: true }\n)\n\n\n\nGroup RDM exceptions and get details with sample entities ids\n// Stages that 
have been excluded from the aggregation pipeline query\n__3tsoftwarelabs_disabled_aggregation_stages = [\n\n\t{\n\t\t// Stage 2 - excluded\n\t\tstage: 2, source: {\n\t\t\t$limit: 1000\n\t\t}\n\t},\n]\n\ndb.getCollection("hub_errors").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t "exceptionClass" : "com.COMPANY.publishinghub.processing.RDMMissingEventForwardedException",\n\t\t\t "status" : "NEW"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$project: { \n\t\t\t "entityId":"$exchangeInHeaders.kafka[dot]KEY",\n\t\t\t "attributeName": "$exceptionDetails.attributeName",\n\t\t\t "attributeValue": "$exceptionDetails.attributeValue", \n\t\t\t "errorCode": "$exceptionDetails.errorCode"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {\n\t\t\t _id: { entityId:"$entityId", attributeValue: "$attributeValue",attributeName:"$attributeName"}, // can be grouped on multiple properties \n\t\t\t dups: { "$addToSet": "$_id" }, \n\t\t\t count: { "$sum": 1 } \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t //_id: { attributeValue: "$_id.attributeValue",attributeName:"$_id.attributeName"}, // can be grouped on multiple properties \n\t\t\t _id: { attributeName:"$_id.attributeName"}, // can be grouped on multiple properties \n\t\t\t entities: { "$addToSet": "$_id.entityId" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$project: {\n\t\t\t _id: 1,\n\t\t\t sample_entities: { $slice: [ "$entities", 10 ] }, \n\t\t\t affected_entities_count: { $size: "$entities" } \n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n\nMongo SIMPLE searches/filter/lengths/regexp examples\n// GET\ndb.entityHistory.find({})\n// GET random 20 entities\ndb.entityHistory.aggregate( \n [ \n { $match : { status : "ACTIVE" } },\n { \n $sample: {size: 20} \n }, \n {\n $project: {_id:1}\n },\n\n] )\n \n// entity get by 
ID\ndb.entityHistory.find({\n"_id":"entities/rOATtJD"\n})\n\n\ndb.entityHistory_PforceRx.find({\n _id: "entities/Tq4c32l"\n})\n\n// Specialities exists\ndb.entityHistory.find({\n "entity.attributes.Specialities": {\n $exists: true\n }\n}).limit(20)\n\n// Specialities size > 4\ndb.entityHistory.find({\n "entity.attributes.Specialities": {\n $exists: true\n },\n $and: [\n {$where: "this.entity.attributes.Specialities.length > 6"}, \n {$where: "this.sources.length >= 2"},\n ]\n\n})\n.limit(10)\n// only project ID\n.projection({id:1})\n\n\n// Address size > 4\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n $and: [\n {$where: "this.entity.attributes.Address.length > 4"}, \n {$where: "this.sources.length > 2"},\n ]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.value.Status.lookupCode": {\n $exists: true,\n $eq: "ACTV"\n },\n }, {\n "entity.attributes.Address.value.Status": 1\n })\n .limit(10)\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n $and: [\n {$where: "this.entity.attributes.Address.length >= 4"}, \n {$where: "this.sources.length >= 4"},\n ]\n\n})\n.limit(2)\n//.projection({id:1})\n// only project ID\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.value.BestRecord": {\n $exists: true\n }\n})\n.limit(2)\n// only project ID\n//.projection({id:1})\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.value.ValidationStatus": {\n $exists: true\n },\n "entityType":"configuration/entityTypes/HCO",\n $and: [{\n $where: "this.entity.attributes.Address.length > 4"\n \n }]\n })\n .limit(1)\n// only project ID\n//.projection({id:1})\n\n\n\n//SOURCE NAME\ndb.entityHistory.find({\n 
"entity.attributes.Address": {\n $exists: true\n },\n lastModificationDate: {\n $gt: 1534850405000\n }\n })\n .limit(10)\n// only project\n\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.objectURI": {\n $exists: false\n },\n }).limit(10)\n// only project\n\n\n// Phone exists\ndb.entityHistory.find({\n "entity.attributes.Phone": {\n $exists: true\n }\n}) .limit(1)\n\n//Specialities exists\ndb.entityHistory.find({\n "entity.attributes.Specialities": {\n $exists: true\n },\n country: "mx"\n}).limit(10)\n \n// Specialty Code\ndb.entityHistory.find({\n "entity.attributes.Specialities": {\n $exists: true\n },\n "entity.attributes.Specialities.value.Specialty.lookupCode": "WMX.TE",\n country: "mx"\n}).limit(1)\n \n// entity.attributes. Identifiers License exists\ndb.entityHistory.find({\n "entity.attributes.Identifiers": {\n $exists: true\n },\n country: "mx"\n}).limit(1)\n \n \n// Name of organization is empty\ndb.entityHistory.find({\n entityType: "configuration/entityTypes/HCO",\n "entity.attributes.Name": {\n $exists: false\n },\n // "parentEntityId": {\n // $exists: false\n // },\n country: "mx"\n}).limit(10)\n\n\n\n\n// RELATIONS\n// GET\ndb.entityRelations.find({})\n\n// entity get by ID startObjectID\ndb.entityRelations.find({\n startObjectId: "entities/14tDdkhy"\n})\n\ndb.entityRelations.find({\n endObjectId: "entities/14tDdkhy"\n})\n\n\ndb.entityRelations.find({\n _id: "relations/RJx9ZkM"\n})\n\ndb.entityRelations.find({\n "relation.attributes.ActPhone": {\n $exists: true\n }\n}).limit(1)\n\n\n\n// Address size > 4\ndb.entityRelations.find({\n "relation.attributes.Phone": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/HasAddress",\n //$and: [\n// {$where: "this.relation.attributes.Address.length > 3"}, \n //{$where: "this.sources.length >= 2"},\n //]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n\n\n// \ndb.entityRelations.find({\n 
"relation.crosswalks": {\n $exists: true\n },\n "relation.crosswalks.deleteDate": {\n $exists: true\n }\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\ndb.entityRelations.find({\n "relation.startObject": {\n $exists: true\n },\n "relation.startObject.objectURI": {\n $exists: false\n }\n\n})\n.limit(1)\n\n\n\n// merge finder\ndb.entityRelations.find({\n "relation.startObject": {\n $exists: true\n },\n "relation.endObject": {\n $exists: true\n },\n $and: [\n {$where: "this.relation.startObject.crosswalks.length > 2"}, \n {$where: "this.sources.length >= 1"},\n ]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n// merge finder\ndb.entityRelations.find({\n "relation.startObject": {\n $exists: true\n },\n "relation.endObject": {\n $exists: true\n },\n //"relation.startObject.crosswalks.0.uri": mb.regex.startsWith("relation.startObject.objectURI")\n "relation.startObject.crosswalks.0.uri": /^relation.startObject.objectURI.*$/i\n})\n.limit(2)\n\n\n\n\n\n// Phone - HasAddress\ndb.entityRelations.find({\n "relation.attributes.Phone": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/HasAddress",\n})\n.limit(10)\n\n// ActPhone - Activity\ndb.entityRelations.find({\n "relation.attributes.ActPhone": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/Activity",\n})\n\n\n// Identifiers - HasAddress\ndb.entityRelations.find({\n "relation.attributes.Identifiers": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/HasAddress",\n})\n.limit(10)\n\n\n// Identifiers - Activity\ndb.entityRelations.find({\n "relation.attributes.ActIdentifiers": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/Activity",\n})\n\n\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n }\n })\n// only project\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.uri": {\n $exists: false\n },\n 
"entity.attributes.Address.refRelation.objectURI": {\n $exists: true\n },\n })\n// only project\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.uri": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.objectURI": {\n $exists: false\n }\n })\n// only project\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.uri": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.objectURI": {\n $exists: true\n },\n })\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n lastModificationDate: {\n $gt: 1534850405000\n }\n })\n .limit(10)\n// only project\n\ndb.entityHistory.find({})\n// GET random 20 entities\n\n \n// entity get by ID\ndb.entityHistory.find({\n _id: "entities/Nzn07bq"\n})\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n $and: [\n {$where: "this.entity.attributes.Address.length >= 4"}, \n {$where: "this.sources.length >= 4"},\n ]\n\n})\n.limit(2)\n\n\n\n\nGet the EntityId and the Crosswalks Size - ifNull return 0 elements\ndb.getCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \t\n\t\t\t mdmSource: "RELTIO" \n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$limit: 1000\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "crosswalksSize": { $size: { "$ifNull": [ "$entity.crosswalks", [] ] } }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$project: {\n\t\t\t _id: 1,\n\t\t\t crosswalksSize:1 \n\t\t\t \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\nTMP Copy\n// COPY THIS SECTION \n" + }, + { + "title": "Mongo-SOP-002: Running mongo scripts remotely on k8s cluster", + "pageID": "284809016", + "pageLink": 
"/display/GMDM/Mongo-SOP-002%3A+Running+mongo+scripts+remotely+on+k8s+cluster", + "content": "Get the tool:Go to file http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/mongo/src/scripts/run_mongo_remote/run_mongo_remote.sh?at=refs%2Fheads%2Fproject%2Fboldmove in inbound-services repository.Download the file to your computer.The tool requires kubenetes installed and WSL (tested on WSL2) for working correctly.Usage guide:Available commands:./run_mongo_remote.sh --helpShows general help message for the script tool:./run_mongo_remote.sh exec Execute to run script remotely on pod agent on k8s script. Script will be copied from the given path on local machine to pod and then run on pod. To get details about accepted arguments run ./run_mongo_remote.sh exec --help./run_mongo_remote.sh get Execute to download script results from pod agent and save in given path on your local machine. To get details about accepted arguments run ./run_mongo_remote.sh get --helpExample flow:Save mongo script you want to run in file example_script.js (Script file has to have .js or .mongo extension for tool to run correctly)Run ./run_mongo_remote.sh exec example_script.js emea_dev to run your script on emea_dev environmentUpon complection the path where the script results were saved on pod agent will be returned (eg. /pod/path/result.txt)Run ./run_mongo_remote.sh get /pod/path/result.txt local/machine/path/example_script_result.txt emea_dev to save script results on your local machine.Tool editionThe tool was written using bashly - a bash framework for developing CLI applications.The tool source is available HERE. Edit files and generate singular output script based on guides available on bashly site.DO NOT EDIT run_mongo_remote.sh file MANUALLY (it may result in script not working correctly)." 
+ }, + { + "title": "Notifications:", + "pageID": "430347505", + "pageLink": "/pages/viewpage.action?pageId=430347505", + "content": "" + }, + { + "title": "Sending notification", + "pageID": "430347508", + "pageLink": "/display/GMDM/Sending+notification", + "content": "We send notifications to our clients in the case of the following events:Unplanned outage - MDMHUB is not available for our clients - REST API, Kafka or Snowflake doesn't work properly and clients are not able to connect. Currently, you have to send a notification in the case of the following events:kong_http_500_status_prodkong_http_502_status_prodkong_http_503_status_prodkong3_http_500_status_prodkong3_http_502_status_prodkong3_http_503_status_prodkafka_missing_all_brokers_prodPlanned outage - a maintenance window when we have to do some maintenance tasks that will cause temporary problems with access to MDMHUB endpoints,Update configuration - some MDMHUB endpoints are changed, e.g. the REST API URL, the Kafka address, etc.We always send a notification in the case of an unplanned outage to inform our clients and let them know that somebody on our side is working on the issue. Planned outage and update configuration are always planned activities that are confirmed with release management and scheduled for a specific time range.Notification LayoutYou send notifications using your COMPANY's email account.As CC, always set our DLs: DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com, DL-ATP_MDMHUB_SUPPORT@COMPANY.comAdd our clients as BCC according to the table below:Click here to expand Recipients list (the XLS above is easier to filter){"name":"MDM_Hub_notification_recipients.xlsx","type":"xlsx","pageID":"430347508"}On the above screen we can see a few placeholders:Notification type - must be one of: UNPLANNED OUTAGE, PLANNED OUTAGE or UPDATE CONFIGURATION,Environments - a list of MDMHUB environments related to the notification. It is very important to provide the region and specific environment type, e.g. 
AMER DEV/QA/STAGE, AMER NPRODs etc. It is good to provide links to documentation that describes the listed environments. Environment documentation can be found here,When - the date when the situation that the notification describes started occurring. In the case of an unplanned outage you have to provide the date when we noticed the failure. For the other situations it should be a time range determining when the activity will start and finish,Description - details that describe the situation, possible impacts and the expected time of resolution (if it is possible to determine). Some of the notification templates have a placeholder "" that should be filled in using the endpoint and endpoint_ext label values from the alert triggered in Karma. Thanks to this, customers will be able to recognize that the outage impacts their business.Notification templatesBelow you can find notification templates that you can get, fill and send to our clients:Generic template: notification.msgKafka issues: kafka.msgAPI issues: api.msg" + }, + { + "title": "COMPANYGlobalCustomerID:", + "pageID": "302706348", + "pageLink": "/pages/viewpage.action?pageId=302706348", + "content": "" + }, + { + "title": "Fix \"\" or null IDs - Fix Duplicates", + "pageID": "250675882", + "pageLink": "/pages/viewpage.action?pageId=250675882", + "content": "The following SOP describes how to fix "" or null COMPANYGlobalCustomerID values in Mongo and regenerate events in Snowflake.The SOP also contains the steps to fix duplicated values and regenerate events.Steps: Check empty or null: \n\t db = db.getSiblingDB("reltio_amer-prod");\n\t\tdb.getCollection("entityHistory").find(\n\t\t\t{\n\t\t\t\t"$or" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : ""\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : {\n\t\t\t\t\t\t\t"$exists" : false\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t"status" : {\n\t\t\t\t\t"$ne" : "DELETED"\n\t\t\t\t}\n\t\t\t}\n\t\t);\nMark all IDs for further event regeneration. 
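The empty-or-null check in the step above can also be expressed as a query document for a Python driver. A minimal sketch: only the filter mirrors the mongo shell query; the PyMongo usage in the comment (client, database, collection names) is illustrative.

```python
# Build the SOP's empty/null COMPANYGlobalCustomerID check as a PyMongo-style
# filter document. The filter mirrors the mongo shell query in the step above;
# the driver usage sketched in the trailing comment is an assumption.

def empty_or_null_filter() -> dict:
    """Match non-deleted entities with an empty or missing COMPANYGlobalCustomerID."""
    return {
        "$or": [
            {"COMPANYGlobalCustomerID": ""},
            {"COMPANYGlobalCustomerID": {"$exists": False}},
        ],
        "status": {"$ne": "DELETED"},
    }

# Illustrative usage (requires pymongo and access to the environment's mongo):
#   from pymongo import MongoClient
#   db = MongoClient("mongodb://localhost:27017")["reltio_amer-prod"]
#   for doc in db["entityHistory"].find(empty_or_null_filter()):
#       print(doc["_id"])  # mark these IDs for event regeneration
```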
Run the script in Studio 3T or as a K8s mongoScript - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/docker/mongo_utils/scripts/COMPANYglobalcustomerids_fix_empty_null_script.jsRun on K8s:log in to the correct cluster, backend namespace copy the script - kubectl cp  ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.jsrun - nohup mongo --host mongo/localhost:27017 -u admin -p --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &download the result - kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.outUsing the output, find all "TODO" lines and regenerate the correct eventsCheck duplicates:\n\t\t\t\t// Pipeline\n\t\t\t[\n\t\t\t\t// Stage 1\n\t\t\t\t{\n\t\t\t\t\t$group: {\n\t\t\t\t\t_id: {COMPANYID: "$COMPANYID"},\n\t\t\t\t\tuniqueIds: {$addToSet: "$_id"},\n\t\t\t\t\tcount: {$sum: 1}\n\t\t\t\t\t}\n\t\t\t\t},\n\n\t\t\t\t// Stage 2\n\t\t\t\t{\n\t\t\t\t\t$match: { \n\t\t\t\t\tcount: {"$gt": 1}\n\t\t\t\t\t}\n\t\t\t\t}, \n\t\t\t],\n\n\t\t\t// Options\n\t\t\t{\n\t\t\t\tallowDiskUse: true\n\t\t\t}\n\n\t\t\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\nIf there are duplicates, run the script in Studio 3T or as a K8s mongoScript - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/docker/mongo_utils/scripts/COMPANYglobalcustomerids_fix_duplicates_script.jsRun on K8s:log in to the correct cluster, backend namespace copy the script - kubectl cp  ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.jsrun - nohup mongo --host mongo/localhost:27017 -u admin -p --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &download the result - kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.outUsing the output, find all "TODO" lines and regenerate the correct eventsReload events    Events RUNYou can use the following 2 
scripts:\n#!/bin/bash\n\nfile=$1\nevent_type=$2\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}" | jq --arg eventTimeArg `date +%s%3N` --arg eventType ${event_type} -r '.[] | . +"|{\\"eventType\\": \\"\\($eventType)\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", \\"entitiesURIs\\": [\\"" + (.|tostring) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n\nThe input to this script is a file with entity IDs separated by newlinesExample:entities/xVIK0nhentities/uP4eLwsentities/iiKryQOentities/ZYjRCFNentities/13n4v93AExample execution:./script.sh dev_reload_empty_ids.csv HCP_CHANGED >> EMEA_DEV_events.txtOR\n#!/bin/bash\n\nfile=$1\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}" | jq --arg eventTimeArg `date +%s%3N` -r '.[] | (. | tostring | split(",") | .[0] | tostring ) +"|{\\"eventType\\": \\""+ ( . | tostring | split(",") | if .[1] == "LOST_MERGE" then "HCP_LOST_MERGE" else "HCP_CHANGED" end ) + "\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", \\"entitiesURIs\\": [\\"" + (. 
| tostring | split(",") | .[0] | tostring ) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n\nThis script input is the file with entityId,status separate by new lineExample:entities/10BBdiHR,LOST_MERGEentities/10BBdv4D,LOST_MERGEentities/10BBe7qz,LOST_MERGEentities/10BBgKFF,INACTIVEentities/10BBgOVV,ACTIVEExample execution:./script_2_columns.sh dev_reload_lost_merges.csv >> EMEA_DEV_events.txtPush the generate file to Kafka topic using Kafka producer:./start_sasl_producer.sh prod-internal-reltio-events < EMEA_PROD_events.txtSnowflake Check\n-- COMPANY COMPANY_GLOBAL_CUSTOMER_ID checks - null/empty\nSELECT count(*) FROM ENTITIES WHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID = '' \nSELECT * FROM ENTITIES WHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID = '' \n\n-- duplicates\nSELECT COMPANY_GLOBAL_CUSTOMER_ID \nFROM ENTITIES \nWHERE COMPANY_GLOBAL_CUSTOMER_ID IS NOT NULL OR COMPANY_GLOBAL_CUSTOMER_ID != '' \nGROUP BY COMPANY_GLOBAL_CUSTOMER_ID HAVING COUNT(*) >1\n\n\n" + }, + { + "title": "Initialization Process", + "pageID": "218694652", + "pageLink": "/display/GMDM/Initialization+Process", + "content": "The process will sync COMPANYGlobalCustomerID attributes to the MongoDB (EntityHistory and COMPANYIDRegistry) and then refresh the snowflake with this data.The process is divided into the following steps:Create an index in Mongodb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});Configure entity-enricher so it has the ov:false option for COMPANYGlobalCustomerIDbundle.nonOvAttributesToInclude:- COMPANYCustID- COMPANYGlobalCustomerIDDeploy the hub components with callback enabled -COMPANYGlobalCustomerIDCallback (3.9.1 version)RUN hub_reconciliation_v2 - first run the HUB Reconciliation -> this will enrich all Mongo data with COMPANYGlobaCustomerID with ov:true and ov:false valuesbased on EMEA this is here - 
http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev&root=doc - HUB Reconciliation Process V2check if the configuration contains the following - nonOvAttrToInclude: "COMPANYCustID,COMPANYGlobalCustomerID"check the S3 directory structure and the reconciliation.properties file in emea//inbound/hub/hub_reconciliation/ http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_devhttp://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_qahttp://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_stageRUN hub_COMPANYglobacustomerid_initial_sync_ DAGIt contains 2 steps:COMPANYglobacustomerid_active_inactive_reconciliation - the Groovy script that checks the HUB entityHistory ACTIVE/INACTIVE/DELETED entities - for all these entities it gets the ov:true COMPANYGlobalCustomerId and enriches Mongo and the CacheCOMPANYglobacustomerid_lost_merge_reconciliation - the Groovy script that checks LOST_MERGE entities. It does the full merge_tree export from Reltio. Based on the merge_tree it adds the RUN snowflake_reconciliation - full snowflake reconciliation by generating the full file with empty checksums" + }, + { + "title": "Remove Duplicates and Regenerate Events", + "pageID": "272368703", + "pageLink": "/display/GMDM/Remove+Duplicates+and+Regenerate+Events", + "content": "This SOP describes the workaround to fix duplicated COMPANYGlobalCustomerID values.Case:There are 2 entities with the same COMPANYGlobalCustomerID.Example:    1Qbu0jBQ - Jun 14, 2022 @ 18:10:44.963    ID-mdmhub-reltio-subscriber-dynamic-866b588c7-w9crm-1655205289718-0-157609    ENTITY_CREATED    entities/1Qbu0jBQ    RELTIO    success    entities/1Qbu0jBQ        3Ot2Cfw  - Aug 11, 2022 @ 18:53:31.433    ID-mdmhub-reltio-subscriber-dynamic-79cd788b59-gtzm6-1659525443436-0-1693016    ENTITY_CREATED    entities/3Ot2Cfw    RELTIO    success    entities/3Ot2Cfw3Ot2Cfw  is a WINNER1Qbu0jBQ  is a LOSER. 
Rule: if there are duplicates, always pick the LOST_MERGED entity and update the loser only with a different value. Do not change an active entity:Steps:Go in Reltio to the winner and check the other (OV:FALSE) COMPANYGlobalCustomerIDsPick the new value from the list:Check that there are no duplicates in Mongo, and search for the new value by the COMPANY in the cache. If it exists, pick a different one.Update Mongo Cache:Regenerate the event:if the loser entity is now active in Reltio but not active in Mongo, regenerate a CREATED event:entities/1Qbu0jBQ|{  "eventType" : "HCP_CREATED",  "eventTime" : "1666090581000",  "entityModificationTime" : "1666090581000",  "entitiesURIs" : [ "entities/1Qbu0jBQ" ],  "mdmSource" : "RELTIO",  "viewName" : "default" }if the loser entity is not present in Reltio because it is a loser, regenerate a LOST_MERGE event:entities/1Q7XLreu|{"eventType":"HCO_LOST_MERGE","eventTime":1666018656000,"entityModificationTime":1666018656000,"entitiesURIs":["entities/1Q7XLreu"],"mdmSource":"RELTIO","viewName":"default"}Example PUSH to PROD:Check Mongo, the updated entity should have a changed COMPANYGlobalCustomerIDCheck ReltioCheck Snowflake" + }, + { + "title": "Project FLEX (US):", + "pageID": "302705645", + "pageLink": "/pages/viewpage.action?pageId=302705645", + "content": "" + }, + { + "title": "Batch Loads - Client-Sourced", + "pageID": "164470098", + "pageLink": "/display/GMDM/Batch+Loads+-+Client-Sourced", + "content": "Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibana Use the dedicated "kibana_gbiccs_user" Go to the Dashboards Tab - "PROD Batch loads"Change the time range: choose 24 hours to check if a new file was loaded in the last 24 hours.The Dashboard is divided into the following sections:File by type - this visualization presents how many files of a specific type were loaded during a specific time rangeFile load count - this visualization presents when a specific file was loadedFile load summary - in this table you can verify the 
detailed information about the file loadCheck if files are loaded with the following agenda:SAP - incremental loads - max 4 files per day, min 2 files per day Agenda: whenhoursMonday-Friday 1. 01:20 CET time 2. 13:20 CET time 3. 17:20 CET time 4. 21:20 CET timeSaturday1. 01:20 CET timeSundaynoneHIN - incremental loads - 2 files per day. WKCE.*.txt and WKHH.*.txtAgenda:whenhoursTuesday-Saturday1. estimates: 12PM - 1PM CET timeDEA - full load -  1 file per week FF_DEA_IN_.*.txtAgenda:whenhoursTuesday1. estimates: 10AM - 12PM CET time340B - incremental load - 4 files per month. 340B_FLEX_TO_RELTIO_*.txtAgenda:Files uploaded on the 3rd, 10th, 24th and the last day of the month at ~12:30 PM CET time. If the upload day is on the weekend, the file will be loaded on the next workday.Check that the DEA file limit was not exceeded. Check the "Suspended Entities" attribute. If this parameter is greater than 0, it means that DEA post-processing was not invoked. The current DEA post-processing limit is 22 000. To increase the limit - send the notification (7.d); after agreement, do (8.)Take action if the input files are not delivered on schedule:SAP To:  santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.comCC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com;BalaSubramanyam.Thirumurthy@COMPANY.comHIN To: santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.comCC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com; BalaSubramanyam.Thirumurthy@COMPANY.comDEA To: 
santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.comCC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com; BalaSubramanyam.Thirumurthy@COMPANY.comDEA - limit notificationTo: santosh.dube@COMPANY.com;tj.struckus@COMPANY.com;Melissa.Manseau@COMPANY.com;BalaSubramanyam.Thirumurthy@COMPANY.comCC: przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.comTake action if the DEA limit was exceeded. Log in to each PROD hostGo to "cd /app/mdmgw/batch_channel/config/"Edit "application.yml" on each host:Change poller.inputFormats.DEA.deleteDateLimit: 22 000 to the new value.Restart Components: Execute https://jenkins-gbicomcloud.COMPANY.com:8443/job/mdm_manage_playbooks/job/Microservices/job/manage_microservices__prod_us/component: mdmgw_batch-channel_1node: all_nodescommand: restartLoad the latest DEA file (the MD5 checksum skips all entities, so only the post-processing step will be executed) Change and commit the new limit to Git: https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod_us/group_vars/gw-services/batch_channel.yml Example Emails:DEA limit exceeded: DEA load checkHi Team,We just received the DEA file; the current DEA post-processing limit is set to 22 000. The DEA load resulted in xxxx profiles to be updated in post-processing. Should I change the limit and re-process the profiles?Regards,HIN File missingHIN PROD file missingHi, Today we expected to receive new HIN files. I checked that the HIN files are missing on the S3 bucket. Last week we received files at