[ { "title": "HUB Overview", "pageID": "164470108", "pageLink": "/display/GMDM/HUB+Overview", "content": "

MDM Integration Services provide services for clients using MDM systems (Reltio or Nucleus 360) in the following fields:


MDM Integration Services consist of:

The MDM HUB ecosystem is presented in the diagram below.

[diagram: MDM HUB ecosystem]

" }, { "title": "Modules", "pageID": "164470022", "pageLink": "/display/GMDM/Modules", "content": "" }, { "title": "Direct Channel", "pageID": "164469882", "pageLink": "/display/GMDM/Direct+Channel", "content": "

Description

The Direct Channel exposes a unified REST API to update and search profiles in MDM systems. The diagram below shows the logical architecture of the Direct Channel module.

Logical architecture

\"\"

Components


Component | Subcomponent | Description
API Gateway | | Kong API Gateway components playing the role of a proxy
 | Authentication engine | Kong module providing client authentication services
Manager/Orchestrator | | Java microservice orchestrating API calls
 | Data Quality Engine | Quality service validating data sent to Reltio
 | Authorization Engine | Authorizes client access to MDM resources
 | MDM routing engine | Routes calls to MDM systems
 | Transaction Logger | Registers API calls in the EFK service for tracing purposes
 | Reltio Adapter | Handles communication with the Reltio MDM system
 | Nucleus Adapter | Handles communication with the Nucleus MDM system
HUB Store | | MongoDB database playing the role of the persistence store for MDM HUB logic
API Router | | Routes requests to regional MDM HUB services

Flows

Flow | Description
Create/Update HCP/HCO/MCO | Create or update an HCP/HCO/MCO entity
Search Entity | Search for an entity
Get Entity | Read an entity
Read LOV | Read a LOV
Validate HCP | Validate an HCP
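
The sketch below shows what a Direct Channel call can look like from the client side. It is illustrative only: the /search path, payload shape, and the API-key header name are assumptions, not the published contract; the authoritative paths are in the per-environment Manager API documentation.

import requests

# Hypothetical Direct Channel search call; path, payload and auth header
# are assumptions - check the Manager API swagger for the real contract.
GATEWAY = "https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-dev"  # example environment

resp = requests.post(
    f"{GATEWAY}/search",                      # assumed search endpoint
    headers={"apikey": "<client-api-key>"},   # assumed API-key header name
    json={"entityType": "HCP", "filter": {"attributes.LastName": "Smith"}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())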
" }, { "title": "Streaming channel", "pageID": "164469812", "pageLink": "/display/GMDM/Streaming+channel", "content": "

Description

The Streaming Channel distributes MDM profile updates to consumers through Kafka topics in near real time. Reltio events generated on profile changes are sent to the MDM HUB via an AWS SQS queue.

The MDM HUB enriches events with profile data and deduplicates them. During this process, the callback service processes the data (for example, calculating ranks and HCO names, or cleaning unused topics) and updates the profile in Reltio with the calculated values.

The Publisher distributes events to target client topics based on the configured routing rules.

The MDM Data Mart built in Snowflake provides SQL access to up-to-date MDM data in both the object and the relational model.
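
For a downstream consumer, subscribing reduces to reading its assigned client topic. A minimal sketch using the kafka-python client; the topic name, SASL mechanism, and credentials are assumptions, and the bootstrap address is the AMER non-prod endpoint used as an example:

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "mdm-client-profile-updates",  # hypothetical client topic assigned by the routing rules
    bootstrap_servers="kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094",
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-512",          # assumption - confirm the cluster's mechanism
    sasl_plain_username="<client-user>",
    sasl_plain_password="<client-password>",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for event in consumer:
    print(event.value)  # enriched profile-change event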

Logical architecture


[diagram: Streaming Channel logical architecture]

Components


Component | Description
Reltio subscriber | Consumes events from Reltio
Callback service | Triggers callback actions on incoming events, for example calculating rankings
Direct Channel | Orchestrates Reltio updates triggered by callbacks
HUB Store | Keeps MDM data history
Reconciliation service | Reconciles missing events
Publisher | Evaluates routing rules and publishes data to downstream consumers
Snowflake Data Mart | Exposes MDM data in the relational model
Kafka Connect | Sends data from Kafka to Snowflake
Entity enricher | Enriches events with full data retrieved from Reltio

Flows

Flow | Description
Reltio events streaming | Distributes Reltio MDM data changes to downstream consumers in streaming mode
Nucleus events streaming | Distributes Nucleus MDM data changes to downstream consumers in streaming mode
Snowflake: Events publish flow | Distributes Reltio MDM data changes to the Snowflake DM
" }, { "title": "Java Batch Channel", "pageID": "164469814", "pageLink": "/display/GMDM/Java+Batch+Channel", "content": "

Description

The Java Batch Channel is a set of services responsible for loading file extracts delivered by external sources into Reltio. The heart of the module is the file loader service, aka inc-batch-channel, which maps the flat file model to the Reltio model and orchestrates the load through the asynchronous interface managed by the Manager. Batch flows are scheduled by Apache Airflow.
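
Since batch flows are managed by Airflow, a load is typically wrapped in a DAG with a task that kicks off the loader. A minimal sketch, in which the DAG id, schedule, and the loader's REST endpoint are all hypothetical:

from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

def trigger_load():
    # Hypothetical inc-batch-channel endpoint and payload.
    r = requests.post(
        "https://<hub-host>/inc-batch-channel/load",
        json={"file": "s3://<bucket>/<extract>.csv"},
        timeout=60,
    )
    r.raise_for_status()

with DAG(
    "mdm_batch_load",                # hypothetical DAG id
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="trigger_load", python_callable=trigger_load)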

Logical architecture

[diagram: Java Batch Channel logical architecture]

Components

Flows


" }, { "title": "ETL Batch Channel", "pageID": "164469835", "pageLink": "/display/GMDM/ETL+Batch+Channel", "content": "

Description

The ETL Batch Channel exposes a REST API for ETL components such as Informatica and manages the loading process asynchronously.

With its own cache based on the Hub Store, it supports full loads by providing delta detection logic.
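
The asynchronous contract means the ETL tool submits a batch, receives an id immediately, and polls for completion. A sketch of that pattern with hypothetical /batches resources; the real resource names and payload schema are in the Batch Service API documentation:

import time

import requests

BASE = "https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-dev"  # example gateway

# 1. Submit a batch; the service answers immediately with a batch id.
payload = {"records": [{"sourceId": "SRC-1", "lastName": "Smith"}]}  # illustrative record shape
batch = requests.post(f"{BASE}/batches", json=payload, timeout=30).json()

# 2. Poll until processing finishes; on full loads the delta detection
#    cache decides which records actually go to Reltio.
while True:
    status = requests.get(f"{BASE}/batches/{batch['id']}/status", timeout=30).json()
    if status["state"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)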

Logical architecture

[diagram: ETL Batch Channel logical architecture]

Components

Flows


" }, { "title": "Environments", "pageID": "164470172", "pageLink": "/display/GMDM/Environments", "content": "

Reltio Export IPs

Environment | IPs | Reltio Team comment
EMEA NON-PROD, EMEA PROD | ●●●●●●●●●●●●, ●●●●●●●●●●●●, ●●●●●●●●●●●● | These IPs are available across all EMEA environments.
APAC NON-PROD, APAC PROD | ●●●●●●●●●●●, ●●●●●●●●●●●●●●, ●●●●●●●●●●●●● | These IPs are available across all APAC environments.
GBLUS NON-PROD, GBLUS PROD | ●●●●●●●●●●●●●, ●●●●●●●●●●●, ●●●●●●●●●●●●● | For the dev/test and 361 tenants, the IPs can be used by any of the environments.
AMER NON-PROD, AMER PROD | - | The AMER tenants use the same access points as the US.

" }, { "title": "AMER", "pageID": "196878948", "pageLink": "/display/GMDM/AMER", "content": "

Contacts

Type | Contact | Comment | Supported MDMHUB environments
DL | DL-ADL-ATP-GLOBAL_MDM_RELTIO@COMPANY.com | Supports Reltio instances | GBLUS - Reltio only
" }, { "title": "AMER Non PROD Cluster", "pageID": "196878950", "pageLink": "/display/GMDM/AMER+Non+PROD+Cluster", "content": "

Physical Architecture


[diagram: AMER Non PROD physical architecture]

Kubernetes cluster


name: atp-mdmhub-nprod-amer
IP: 10.9.64.0/18, 10.9.0.0/18
Console address: https://pdcs-som1d.COMPANY.com
resource type: EKS over EC2
AWS region: us-east-1
Filesystem: ~60GB per node, 6TBx2 replicated Portworx volumes
Components: Kong, Kafka, Mongo, Prometheus, MDMHUB microservices
Type: outbound and inbound

Non PROD - backend 

Namespace | Component | Pod name | Description | Logs
kong | Kong | mdmhub-kong-kong-* | API manager | kubectl logs {{pod name}} --namespace kong
amer-backend | Kafka | mdm-kafka-kafka-0, mdm-kafka-kafka-1, mdm-kafka-kafka-2 | Kafka | logs
amer-backend | Kafka Exporter | mdm-kafka-kafka-exporter-* | Kafka Monitoring - Prometheus | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Zookeeper | mdm-kafka-zookeeper-0, mdm-kafka-zookeeper-1, mdm-kafka-zookeeper-2 | Zookeeper | logs
amer-backend | Mongo | mongo-0 | Mongo | logs
amer-backend | Kibana | kibana-kb-* | EFK - kibana | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | FluentD | fluentd-* | EFK - fluentd | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Elasticsearch | elasticsearch-es-default-0, elasticsearch-es-default-1 | EFK - elasticsearch | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | SQS Exporter | TODO | SQS Reltio exporter | kubectl logs {{pod name}} --namespace amer-backend
monitoring | Cadvisor | monitoring-cadvisor-* | Docker Monitoring - Prometheus | kubectl logs {{pod name}} --namespace monitoring
amer-backend | Mongo Connector | monstache-* | EFK - mongo → elasticsearch exporter | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Mongo exporter | mongo-exporter-* | mongo metrics exporter | ---
amer-backend | Git2Consul | git2consul-* | GIT to Consul loader | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Consul | consul-consul-server-0, consul-consul-server-1, consul-consul-server-2 | Consul | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Snowflake connector | amer-dev-mdm-connect-cluster-connect-*, amer-qa-mdm-connect-cluster-connect-*, amer-stage-mdm-connect-cluster-connect-* | Snowflake Kafka Connector | kubectl logs {{pod name}} --namespace amer-backend
monitoring | Kafka Connect Exporter | monitoring-jdbc-snowflake-exporter-amer-dev-*, monitoring-jdbc-snowflake-exporter-amer-stage-*, monitoring-jdbc-snowflake-exporter-amer-stage-* | Kafka Connect metric exporter | kubectl logs {{pod name}} --namespace monitoring
amer-backend | Akhq | akhq-* | Kafka UI | logs


Certificates 

Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/namespaces/kong/config_files/certs | Thu, 13 Jan 2022 14:13:53 GMT | Tue, 10 Jan 2023 14:13:53 GMT | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/
Kafka | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/namespaces/amer-backend/secrets.yaml.encrypted | Jan 18 11:07:55 2022 GMT | Jan 18 11:07:55 2024 GMT | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
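
Because these certificates expire, it is worth checking what an endpoint actually serves rather than trusting the table. A standard-library sketch that reads the validity window off the live TLS socket:

import socket
import ssl

host = "api-amer-nprod-gbl-mdm-hub.COMPANY.com"
ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((host, 443)), server_hostname=host) as s:
    cert = s.getpeercert()          # parsed leaf certificate presented by the endpoint
print(cert["notBefore"], "->", cert["notAfter"])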


Setup and check connections:

  1. Snowflake - managing service accounts - EMEA Snowflake Access


" }, { "title": "AMER DEV Services", "pageID": "196878953", "pageLink": "/display/GMDM/AMER+DEV+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-dev
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-dev
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubnprodamrasp100762
HUB UI | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-dev/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/
DB Name | COMM_AMER_MDM_DMART_DEV_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE
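
A connection sketch for the Data Mart using snowflake-connector-python, built from the values above. The account locator is derived from the DB Url, and password auth is an assumption (the account may enforce SSO or key-pair auth instead):

import snowflake.connector

conn = snowflake.connector.connect(
    account="amerdev01.us-east-1.privatelink",   # derived from the DB Url above
    user="<service-account>",
    password="<password>",                       # assumption - auth method may differ
    database="COMM_AMER_MDM_DMART_DEV_DB",
    warehouse="COMM_MDM_DMART_WH",
    role="COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE",
)
for row in conn.cursor().execute("SELECT CURRENT_VERSION()"):
    print(row)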

Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_dev&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_dev&var-topic=All&var-node=1
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_dev&var-component=manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_dev&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-dev/swagger-ui/index.html?configUrl=/api-gw-spec-amer-dev/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-dev/swagger-ui/index.html?configUrl=/api-batch-spec-amer-dev/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
amer-dev | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
amer-dev | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
amer-dev | Api router | mdmhub-mdm-api-router-* | API gateway across multiple tenants | logs
amer-dev | Subscriber | mdmhub-reltio-subscriber-* | SQS Reltio events subscriber | logs
amer-dev | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
amer-dev | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
amer-dev | Publisher | mdmhub-event-publisher-* | Events publisher | logs
amer-dev | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs

Clients


MDM Systems

Reltio

DEV - wn60kG248ziQSMW

Resource Name | Endpoint
SQS queue name | https://sqs.us-east-1.amazonaws.com/930358522410/dev_wJmSQ8GWI8Q6Fl1
Reltio | https://dev.reltio.com/ui/wJmSQ8GWI8Q6Fl1, https://dev.reltio.com/reltio/api/wJmSQ8GWI8Q6Fl1
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/dyzB7cAPhATUslE


Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com
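
A quick connectivity check against the HUB Store from a host with network access, using the Mongo URI above; whether credentials are required is environment-specific (assumed open here for brevity):

from pymongo import MongoClient

client = MongoClient("mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017")
print(client.list_database_names())   # fails fast if the host is unreachable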


Migration

AMER DEV is the first environment that was migrated from the old infrastructure (EC2 based) to the new, Kubernetes-based one. The following table presents the old endpoints and their substitutes in the new environment. Everyone who wants to connect to AMER DEV has to use the new endpoints.

Description | Old endpoint | New endpoint
Manager API | https://amraelp00010074.COMPANY.com:8443/dev-ext, https://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/dev-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-dev
Batch Service API | https://amraelp00010074.COMPANY.com:8443/dev-batch-ext, https://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/dev-batch-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-amer-dev
Consul API | https://amraelp00010074.COMPANY.com:8443/v1, https://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/v1 | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1
Kafka | amraelp00010074.COMPANY.com:9094 | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094


" }, { "title": "AMER QA Services", "pageID": "228921283", "pageLink": "/display/GMDM/AMER+QA+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-qa
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-qa
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubnprodamrasp100762
HUB UI | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-qa/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/
DB Name | COMM_AMER_MDM_DMART_QA_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_AMER_MDM_DMART_QA_DEVOPS_ROLE


Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_qa&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_qa&var-topic=All&var-node=1
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_qa&var-component=mdm-manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-qa/swagger-ui/index.html?configUrl=/api-gw-spec-amer-qa/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-qa/swagger-ui/index.html?configUrl=/api-batch-spec-amer-qa/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
amer-qa | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
amer-qa | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
amer-qa | Api router | mdmhub-mdm-api-router-* | API gateway across multiple tenants | logs
amer-qa | Subscriber | mdmhub-reltio-subscriber-* | SQS Reltio events subscriber | logs
amer-qa | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
amer-qa | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
amer-qa | Publisher | mdmhub-event-publisher-* | Events publisher | logs
amer-qa | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs

Clients


MDM Systems

Reltio

DEV - wn60kG248ziQSMW

Resource Name | Endpoint
SQS queue name | https://sqs.us-east-1.amazonaws.com/930358522410/test_805QOf1Xnm96SPj
Reltio | https://test.reltio.com/ui/805QOf1Xnm96SPj, https://test.reltio.com/reltio/api/805QOf1Xnm96SPj
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/805QOf1Xnm96SPj


Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-qa:27017
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com
" }, { "title": "AMER STAGE Services", "pageID": "228921315", "pageLink": "/display/GMDM/AMER+STAGE+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-stage
Ping Federate | https://stgfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-stage
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubnprodamrasp100762
HUB UI | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-stage/#/dashboard
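
A listing sketch for the MDM HUB S3 bucket above, assuming boto3 and AWS credentials with read access already configured in the environment:

import boto3

s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket="gblmdmhubnprodamrasp100762").get("Contents", []):
    print(obj["Key"], obj["Size"])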

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/
DB Name | COMM_AMER_MDM_DMART_STG_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_AMER_MDM_DMART_STG_DEVOPS_ROLE


Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_stage&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_stage&var-topic=All&var-node=1
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_stage&var-component=mdm-manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-stage/swagger-ui/index.html?configUrl=/api-gw-spec-amer-stage/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-stage/swagger-ui/index.html?configUrl=/api-batch-spec-amer-stage/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
amer-stage | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
amer-stage | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
amer-stage | Api router | mdmhub-mdm-api-router-* | API gateway across multiple tenants | logs
amer-stage | Subscriber | mdmhub-reltio-subscriber-* | SQS Reltio events subscriber | logs
amer-stage | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
amer-stage | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
amer-stage | Publisher | mdmhub-event-publisher-* | Events publisher | logs
amer-stage | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs

Clients


MDM Systems

Reltio

DEV - wn60kG248ziQSMW

Resource Name | Endpoint
SQS queue name | https://sqs.us-east-1.amazonaws.com/930358522410/test_K7I3W3xjg98Dy30
Reltio | https://test.reltio.com/ui/K7I3W3xjg98Dy30, https://test.reltio.com/reltio/api/K7I3W3xjg98Dy30
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/K7I3W3xjg98Dy30


Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-stage:27017
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com
" }, { "title": "GBLUS-DEV Services", "pageID": "234701562", "pageLink": "/display/GMDM/GBLUS-DEV+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-dev
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-dev
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubnprodamrasp100762
HUB UI | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-dev/#/dashboard
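
For the OAuth2 external gateway, a client first obtains a token from Ping Federate and then calls the gateway with it. A sketch of that flow; the token endpoint path, grant type, and credentials are assumptions (the introspection URL above is what the gateway itself calls to validate tokens):

import requests

token = requests.post(
    "https://devfederate.COMPANY.com/as/token.oauth2",  # assumed PingFederate token endpoint
    data={"grant_type": "client_credentials"},
    auth=("<client-id>", "<client-secret>"),
    timeout=30,
).json()["access_token"]

resp = requests.get(
    "https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-dev/<resource>",  # hypothetical resource path
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
print(resp.status_code)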

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com
DB Name | COMM_GBL_MDM_DMART_DEV
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_DEV_MDM_DMART_DEVOPS_ROLE

Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_dev&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_dev&var-topic=All&var-node=1&var-instance=amraelp00007335.COMPANY.com:9102
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_dev&var-component=&var-instance=All&var-node=
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_dev&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-dev/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-dev/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-dev/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-dev/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
gblus-stage | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
gblus-stage | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
gblus-stage | Api router | mdmhub-mdm-api-router-* | API gateway across multiple tenants | logs
gblus-stage | Subscriber | mdmhub-reltio-subscriber-* | SQS Reltio events subscriber | logs
gblus-stage | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
gblus-stage | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
gblus-stage | Publisher | mdmhub-event-publisher-* | Events publisher | logs
gblus-stage | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs

Clients

MDM Systems

Reltio

DEV(gblus_dev) sw8BkTZqjzGr7hn

Resource Name | Endpoint
SQS queue name | https://sqs.us-east-1.amazonaws.com/930358522410/dev_sw8BkTZqjzGr7hn
Reltio | https://dev.reltio.com/ui/sw8BkTZqjzGr7hn, https://dev.reltio.com/reltio/api/sw8BkTZqjzGr7hn
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/%s/wq2MxMmfTUCYk9k


Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com


Migration

The following table presents the old endpoints and their substitutes in the new environment. Everyone who wants to connect to GBLUS DEV has to use the new endpoints.

Description | Old endpoint | New endpoint
Manager API | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-dev
Batch Service API | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-batch-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-dev
Consul API | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1 | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1
Kafka | amraelp00007335.COMPANY.com:9094 | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094


" }, { "title": "GBLUS-QA Services", "pageID": "234701566", "pageLink": "/display/GMDM/GBLUS-QA+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qa
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-qa
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubnprodamrasp100762
HUB UI | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-qa/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/
DB Name | COMM_GBL_MDM_DMART_QA
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_QA_MDM_DMART_DEVOPS_ROLE


Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_qa&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_qa&var-topic=All&var-instance=All&var-node=
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_qa&var-component=mdm-manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-qa/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-qa/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-qa/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-qa/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
gblus-stage | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
gblus-stage | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
gblus-stage | Api router | mdmhub-mdm-api-router-* | API gateway across multiple tenants | logs
gblus-stage | Subscriber | mdmhub-reltio-subscriber-* | SQS Reltio events subscriber | logs
gblus-stage | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
gblus-stage | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
gblus-stage | Publisher | mdmhub-event-publisher-* | Events publisher | logs
gblus-stage | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs

Clients


MDM Systems

Reltio

QA(gblus_qa) rEAXRHas2ovllvT

Resource Name | Endpoint
SQS queue name | https://sqs.us-east-1.amazonaws.com/930358522410/test_rEAXRHas2ovllvT
Reltio | https://test.reltio.com/ui/rEAXRHas2ovllvT, https://test.reltio.com/reltio/api/rEAXRHas2ovllvT
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/%s/u78Dh9B87sk6I2v


Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-qa:27017
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com

Migration

The following table presents the old endpoints and their substitutes in the new environment. Everyone who wants to connect to GBLUS QA has to use the new endpoints.

Description | Old endpoint | New endpoint
Manager API | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qa
Batch Service API | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-ext | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qa
Consul API | https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1 | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1
Kafka | amraelp00007335.COMPANY.com:9094 | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
" }, { "title": "GBLUS-STAGE Services", "pageID": "243863074", "pageLink": "/display/GMDM/GBLUS-STAGE+Services", "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-stage
Ping Federate | https://stgfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-stage
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubnprodamrasp100762
HUB UI | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-stage/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/
DB Name | COMM_GBL_MDM_DMART_STG
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_STG_MDM_DMART_DEVOPS_ROLE


Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_stage&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_stage&var-topic=All&var-node=1
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprod
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_stage&var-component=mdm-manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-stage/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-stage/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-stage/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-stage/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
gblus-stage | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
gblus-stage | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
gblus-stage | Api router | mdmhub-mdm-api-router-* | API gateway across multiple tenants | logs
gblus-stage | Subscriber | mdmhub-reltio-subscriber-* | SQS Reltio events subscriber | logs
gblus-stage | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
gblus-stage | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
gblus-stage | Publisher | mdmhub-event-publisher-* | Events publisher | logs
gblus-stage | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs

Clients

MDM Systems

Reltio

STAGE(gblus_stage) 48ElTIteZz05XwT

Resource Name | Endpoint
SQS queue name | https://sqs.us-east-1.amazonaws.com/930358522410/test_48ElTIteZz05XwT
Reltio | https://test.reltio.com/ui/48ElTIteZz05XwT, https://test.reltio.com/reltio/api/48ElTIteZz05XwT
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/%s/5YqAPYqQnUtQJqp


Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-stage:27017
Kafka | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com
" }, { "title": "AMER PROD Cluster", "pageID": "234698165", "pageLink": "/display/GMDM/AMER+PROD+Cluster", "content": "

Physical Architecture


[diagram: AMER PROD physical architecture]

Kubernetes cluster


name: atp-mdmhub-prod-amer
IP: 10.9.64.0/18, 10.9.0.0/18
Console address: https://pdcs-drm1p.COMPANY.com
resource type: EKS over EC2
AWS region: us-east-1
Filesystem: ~60GB per node, 6TBx3 replicated Portworx volumes
Components: Kong, Kafka, Mongo, Prometheus, MDMHUB microservices
Type: outbound and inbound

PROD - backend 

Namespace | Component | Pod name | Description | Logs
kong | Kong | mdmhub-kong-kong-* | API manager | kubectl logs {{pod name}} --namespace kong
amer-backend | Kafka | mdm-kafka-kafka-0, mdm-kafka-kafka-1, mdm-kafka-kafka-2 | Kafka | logs
amer-backend | Kafka Exporter | mdm-kafka-kafka-exporter-* | Kafka Monitoring - Prometheus | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Zookeeper | mdm-kafka-zookeeper-0, mdm-kafka-zookeeper-1, mdm-kafka-zookeeper-2 | Zookeeper | logs
amer-backend | Mongo | mongo-0 | Mongo | logs
amer-backend | Kibana | kibana-kb-* | EFK - kibana | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | FluentD | fluentd-* | EFK - fluentd | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Elasticsearch | elasticsearch-es-default-0, elasticsearch-es-default-1 | EFK - elasticsearch | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | SQS Exporter | TODO | SQS Reltio exporter | kubectl logs {{pod name}} --namespace amer-backend
monitoring | Cadvisor | monitoring-cadvisor-* | Docker Monitoring - Prometheus | kubectl logs {{pod name}} --namespace monitoring
amer-backend | Mongo Connector | monstache-* | EFK - mongo → elasticsearch exporter | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Mongo exporter | mongo-exporter-* | mongo metrics exporter | ---
amer-backend | Git2Consul | git2consul-* | GIT to Consul loader | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Consul | consul-consul-server-0, consul-consul-server-1, consul-consul-server-2 | Consul | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Snowflake connector | amer-prod-mdm-connect-cluster-connect-*, amer-qa-mdm-connect-cluster-connect-*, amer-stage-mdm-connect-cluster-connect-* | Snowflake Kafka Connector | kubectl logs {{pod name}} --namespace amer-backend
monitoring | Kafka Connect Exporter | monitoring-jdbc-snowflake-exporter-amer-prod-*, monitoring-jdbc-snowflake-exporter-amer-stage-*, monitoring-jdbc-snowflake-exporter-amer-stage-* | Kafka Connect metric exporter | kubectl logs {{pod name}} --namespace monitoring
amer-backend | Akhq | akhq-* | Kafka UI | logs


Certificates 

Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/namespaces/kong/config_files/certs | Thu, 13 Jan 2022 14:13:53 GMT | Tue, 10 Jan 2023 14:13:53 GMT | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/
Kafka | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/namespaces/amer-backend/secrets.yaml.encrypted | Jan 18 11:07:55 2022 GMT | Jan 18 11:07:55 2024 GMT | kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094


Setup and check connections:

  1. Snowflake - managing service accounts - via http://btondemand.COMPANY.com/ - Get Support → Submit ticket → GBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMI


" }, { "title": "AMER PROD Services", "pageID": "234698356", "pageLink": "/display/GMDM/AMER+PROD+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-prod
Ping Federate | https://prodfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-amer-prod
Kafka | kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubprodamrasp101478
HUB UI | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/ui-amer-prod/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://amerprod01.us-east-1.privatelink.snowflakecomputing.com/
DB Name | COMM_AMER_MDM_DMART_PROD_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLE


Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_prod&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_prod&var-topic=All&var-node=1
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_prod
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_prod&var-component=manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_prod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_prod&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-prod/swagger-ui/index.html?configUrl=/api-gw-spec-amer-prod/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-prod/swagger-ui/index.html?configUrl=/api-batch-spec-amer-prod/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/

Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
amer-prod | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
amer-prod | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
amer-prod | Api router | mdmhub-mdm-api-router-* | API gateway across multiple tenants | logs
amer-prod | Subscriber | mdmhub-reltio-subscriber-* | SQS Reltio events subscriber | logs
amer-prod | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
amer-prod | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
amer-prod | Publisher | mdmhub-event-publisher-* | Events publisher | logs
amer-prod | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs

Clients


MDM Systems

Reltio

PROD - Ys7joaPjhr9DwBJ

Resource Name | Endpoint
SQS queue name | https://sqs.us-east-1.amazonaws.com/930358522410/361_Ys7joaPjhr9DwBJ
Reltio | https://361.reltio.com/ui/Ys7joaPjhr9DwBJ, https://361.reltio.com/reltio/api/Ys7joaPjhr9DwBJ
Reltio Gateway User | svc-pfe-mdmhub-prod
RDM | https://rdm.reltio.com/lookups/LEo5zuzyWyG1xg4


Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/
Elasticsearch | https://elastic-amer-prod-gbl-mdm-hub.COMPANY.com/
" }, { "title": "GBL US PROD Services", "pageID": "250133277", "pageLink": "/display/GMDM/GBL+US+PROD+Services", "content": "

HUB Endpoints

API & Kafka & S3

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-prod
Ping Federate | https://prodfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-prod
Kafka | kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubprodamrasp101478

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://amerprod01.us-east-1.privatelink.snowflakecomputing.com
DB Name | COMM_GBL_MDM_DMART_PROD
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_PROD_MDM_DMART_DEVOPS_ROLE



Grafana dashboards

HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_prod&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_prod&var-topic=All&var-node=1&var-instance=amraelp00007848.COMPANY.com:9102
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_prod&var-component=manager&var-node=1&var-instance=amraelp00007848.COMPANY.com:9104
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_prod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_prod&var-interval=$__auto_interval_interval

Kibana dashboards

Kibana | https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)

Documentation

Manager API documentation | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-prod/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-prod/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-prod/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-prod/v3/api-docs/swagger-config

Airflow

Airflow UI | https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com

Consul

Consul UI | https://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/

AKHQ - Kafka

AKHQ Kafka UI | https://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/


Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
gblus-prod | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
gblus-prod | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
gblus-prod | Subscriber | mdmhub-reltio-subscriber-* | SQS Reltio events subscriber | logs
gblus-prod | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
gblus-prod | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
gblus-prod | Publisher | mdmhub-event-publisher-* | Events publisher | logs
gblus-prod | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs
gblus-prod | Onekey DCR | mdmhub-mdm-onekey-dcr-service-* | Onekey DCR service | logs

Clients


MDM Systems

Reltio

PROD- 9kL30u7lFoDHp6X

Resource Name | Endpoint
SQS queue name | https://sqs.us-east-1.amazonaws.com/930358522410/361_9kL30u7lFoDHp6X
Reltio | https://361.reltio.com/ui/9kL30u7lFoDHp6X, https://361.reltio.com/reltio/api/9kL30u7lFoDHp6X
Reltio Gateway User | svc-pfe-mdmhub-prod
RDM | https://rdm.reltio.com/%s/DABr7gxyKKkrxD3


Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/
Elasticsearch | https://elastic-amer-prod-gbl-mdm-hub.COMPANY.com/
" }, { "title": "AMER SANDBOX Cluster", "pageID": "310950353", "pageLink": "/display/GMDM/AMER+SANDBOX+Cluster", "content": "

Physical Architecture


<schema>

Kubernetes cluster


name: atp-mdmhub-sbx-amer
IP: ●●●●●●●●●●●●, ●●●●●●●●●●●
Console address: https://pdcs-som1d.COMPANY.com
resource type: EKS over EC2
AWS region: us-east-1
Filesystem: ~60GB per node
Components: Kong, Kafka, Mongo, Prometheus, MDMHUB microservices
Type: outbound and inbound

SANDBOX - backend 

Namespace | Component | Pod name | Description | Logs
kong | Kong | mdmhub-kong-kong-* | API manager | kubectl logs {{pod name}} --namespace kong
amer-backend | Kafka | mdm-kafka-kafka-0, mdm-kafka-kafka-1, mdm-kafka-kafka-2 | Kafka | logs
amer-backend | Kafka Exporter | mdm-kafka-kafka-exporter-* | Kafka Monitoring - Prometheus | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Zookeeper | mdm-kafka-zookeeper-0, mdm-kafka-zookeeper-1, mdm-kafka-zookeeper-2 | Zookeeper | logs
amer-backend | Mongo | mongo-0 | Mongo | logs
amer-backend | Kibana | kibana-kb-* | EFK - kibana | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | FluentD | fluentd-* | EFK - fluentd | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Elasticsearch | elasticsearch-es-default-0, elasticsearch-es-default-1, elasticsearch-es-default-2 | EFK - elasticsearch | kubectl logs {{pod name}} --namespace amer-backend
monitoring | Cadvisor | monitoring-cadvisor-* | Docker Monitoring - Prometheus | kubectl logs {{pod name}} --namespace monitoring
amer-backend | Mongo Connector | monstache-* | EFK - mongo → elasticsearch exporter | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Mongo exporter | mongo-exporter-* | mongo metrics exporter | ---
amer-backend | Git2Consul | git2consul-* | GIT to Consul loader | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Consul | consul-consul-server-0, consul-consul-server-1, consul-consul-server-2 | Consul | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Snowflake connector | amer-devsbx-mdm-connect-cluster-connect-* | Snowflake Kafka Connector | kubectl logs {{pod name}} --namespace amer-backend
amer-backend | Akhq | akhq-* | Kafka UI | logs


Certificates 

Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/kong/config_files/certs | 2023-02-22 15:16:04 | 2025-02-21 15:16:04 | https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/
Kafka | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/amer-backend/secrets.yaml.encrypted | - | - | kafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094



" }, { "title": "AMER DEVSBX Services", "pageID": "310950591", "pageLink": "/display/GMDM/AMER+DEVSBX+Services", "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-devsbx
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-gw-amer-devsbx
Kafka | kafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://gblmdmhubnprodamrasp100762
HUB UI | https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/ui-amer-devsbx/#/dashboard

Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_devsbx&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_devsbx&var-topic=All&var-node=11
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_sandbox
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_devsbx&var-component=manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_sandbox&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_devsbx&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-amer-sandbox-gbl-mdm-hub.COMPANY.com (DEVSBX prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-devsbx/swagger-ui/index.html?configUrl=/api-gw-spec-amer-devsbx/v3/api-docs/swagger-config
Batch Service API documentation | https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-devsbx/swagger-ui/index.html?configUrl=/api-batch-spec-amer-devsbx/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-amer-sandbox-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-amer-sandbox-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-amer-sandbox-gbl-mdm-hub.COMPANY.com

Components & Logs

ENV (namespace) | Component | Pods (* marks the variable part of the name) | Description | Logs | Pod ports
amer-devsbx | Manager | mdmhub-mdm-manager-* | Gateway API | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
amer-devsbx | Batch Service | mdmhub-batch-service-* | Batch service, ETL batch loader | logs
amer-devsbx | Api router | mdmhub-mdm-api-router-* | API gateway across multiple tenants | logs
amer-devsbx | Enricher | mdmhub-entity-enricher-* | Reltio events enricher | logs
amer-devsbx | Callback | mdmhub-callback-service-* | Events processor, callback, and pre-callback service | logs
amer-devsbx | Publisher | mdmhub-event-publisher-* | Events publisher | logs
amer-devsbx | Reconciliation | mdmhub-mdm-reconciliation-service-* | Reconciliation service | logs

Internal Resources


Resource Name | Endpoint
Mongo | mongodb://mongo-amer-sandbox-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094 SASL SSL
Kibana | https://kibana-amer-sandbox-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-amer-sandbox-gbl-mdm-hub.COMPANY.com


" }, { "title": "APAC", "pageID": "228933517", "pageLink": "/display/GMDM/APAC", "content": "" }, { "title": "APAC Non PROD Cluster", "pageID": "228933519", "pageLink": "/display/GMDM/APAC+Non+PROD+Cluster", "content": "

Physical Architecture


[diagram: APAC Non PROD physical architecture]

Kubernetes cluster


name: atp-mdmhub-nprod-apac
IP: ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●●
Console address: https://pdcs-apa1p.COMPANY.com
resource type: EKS over EC2
AWS region: ap-southeast-1
Filesystem: ~60GB per node, 6TBx2 replicated Portworx volumes
Components: Kong, Kafka, Mongo, Prometheus, MDMHUB microservices
Type: inbound/outbound

Components & Logs

DEV - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
apac-dev | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
apac-dev | Batch Service | mdmhub-batch-service-* | Batch Service | logs
apac-dev | API router | mdmhub-mdm-api-router-* | API Router | logs
apac-dev | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs
apac-dev | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs
apac-dev | Callback Service | mdmhub-callback-service-* | Callback Service | logs
apac-dev | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs
apac-dev | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs
apac-dev | Callback delay service | mdmhub-callback-delay-service-* | Callback delay service | logs

QA - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
apac-qa | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
apac-qa | Batch Service | mdmhub-batch-service-* | Batch Service | logs
apac-qa | API router | mdmhub-mdm-api-router-* | API Router | logs
apac-qa | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs
apac-qa | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs
apac-qa | Callback Service | mdmhub-callback-service-* | Callback Service | logs
apac-qa | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs
apac-qa | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs
apac-qa | Callback delay service | mdmhub-callback-delay-service-* | Callback delay service | logs

STAGE - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
apac-stage | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging, when enabled; 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - serves the swagger API definition, if available
apac-stage | Batch Service | mdmhub-batch-service-* | Batch Service | logs
apac-stage | API router | mdmhub-mdm-api-router-* | API Router | logs
apac-stage | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs
apac-stage | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs
apac-stage | Callback Service | mdmhub-callback-service-* | Callback Service | logs
apac-stage | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs
apac-stage | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs
apac-stage | Callback delay service | mdmhub-callback-delay-service-* | Callback delay service | logs

Non PROD - backend 

Namespace | Component | Pod | Description | Logs
kong | Kong | mdmhub-kong-kong-* | API manager | kubectl logs {{pod name}} --namespace kong
apac-backend | Kafka | mdm-kafka-kafka-0, mdm-kafka-kafka-1, mdm-kafka-kafka-2 | Kafka | logs
apac-backend | Kafka Exporter | mdm-kafka-kafka-exporter-* | Kafka Monitoring - Prometheus | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Zookeeper | mdm-kafka-zookeeper-0, mdm-kafka-zookeeper-1, mdm-kafka-zookeeper-2 | Zookeeper | logs
apac-backend | Mongo | mongo-0 | Mongo | logs
apac-backend | Kibana | kibana-kb-* | EFK - kibana | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | FluentD | fluentd-* | EFK - fluentd | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Elasticsearch | elasticsearch-es-default-0, elasticsearch-es-default-1 | EFK - elasticsearch | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | SQS Exporter | TODO | SQS Reltio exporter | kubectl logs {{pod name}} --namespace apac-backend
monitoring | cAdvisor | monitoring-cadvisor-* | Docker Monitoring - Prometheus | kubectl logs {{pod name}} --namespace monitoring
apac-backend | Mongo Connector | monstache-* | EFK - mongo → elasticsearch exporter | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Mongo exporter | mongo-exporter-* | mongo metrics exporter | ---
apac-backend | Git2Consul | git2consul-* | GIT to Consul loader | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Consul | consul-consul-server-0, consul-consul-server-1, consul-consul-server-2 | Consul | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Snowflake connector | apac-dev-mdm-connect-cluster-connect-*, apac-qa-mdm-connect-cluster-connect-*, apac-stage-mdm-connect-cluster-connect-* | Snowflake Kafka Connector | kubectl logs {{pod name}} --namespace apac-backend
monitoring | Kafka Connect Exporter | monitoring-jdbc-snowflake-exporter-apac-dev-*, monitoring-jdbc-snowflake-exporter-apac-qa-*, monitoring-jdbc-snowflake-exporter-apac-stage-* | Kafka Connect metric exporter | kubectl logs {{pod name}} --namespace monitoring
apac-backend | AKHQ | akhq-* | Kafka UI | logs
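The "kubectl logs {{pod name}}" pattern above requires the generated pod name; a minimal sketch for finding it first:

# List backend pods, then tail and follow the logs of one of them
kubectl get pods --namespace apac-backend
kubectl logs mdm-kafka-kafka-0 --namespace apac-backend --tail=100 -f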


Certificates 

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/nprod/namespaces/kong/config_files/certs | 2022/03/04 | 2024/03/03 | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com
Kafka | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/nprod/namespaces/apac-backend/secrets.yaml.encrypted | 2022/03/07 | 2024/03/06 | https://kafka-api-nprod-gbl-mdm-hub.COMPANY.com:9094
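To check the listed expiry dates against what a live endpoint actually serves, a minimal sketch using openssl:

# Print the validity window of the certificate served by the API endpoint
echo | openssl s_client -connect api-apac-nprod-gbl-mdm-hub.COMPANY.com:443 -servername api-apac-nprod-gbl-mdm-hub.COMPANY.com 2>/dev/null | openssl x509 -noout -dates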
" }, { "title": "APAC DEV Services", "pageID": "228933556", "pageLink": "/display/GMDM/APAC+DEV+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-dev
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-dev
Kafka | kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://globalmdmnprodaspasp202202171347
HUB UI | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-dev/#/dashboard
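Requests through the API KEY gateway are authenticated by Kong; a minimal sketch (the apikey header name is Kong's key-auth default and may be configured differently here; the key value and resource path are placeholders):

# Call the DEV gateway with a client API key
curl -H "apikey: <your-api-key>" "https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-dev/<resource-path>"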

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_APAC_MDM_DMART_DEV_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_APAC_MDM_DMART_DEV_DEVOPS_ROLE
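Combining the four values above, a minimal SnowSQL connection sketch (the user name and authentication method are placeholders; the account locator is taken from the DB Url):

# Open a session against the DEV Data Mart with the DevOps role
snowsql -a emeadev01.eu-west-1.privatelink -u <your-user> -r COMM_APAC_MDM_DMART_DEV_DEVOPS_ROLE -w COMM_MDM_DMART_WH -d COMM_APAC_MDM_DMART_DEV_DB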

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_dev&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_dev&var-topic=All&var-node=1
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_dev&var-component=manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_dev&var-interval=$__auto_interval_interval
Kube State | https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=Prometheus
Pod Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=All
PVC Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprod

Logs

Resource Name | Endpoint
Kibana | https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-dev/swagger-ui/index.html?configUrl=/api-gw-spec-apac-dev/v3/api-docs/swagger-config
Batch Service API documentation | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-dev/swagger-ui/index.html?configUrl=/api-batch-spec-apac-dev/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com
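Besides the UI, the same Consul host serves the standard HTTP API, which is convenient for scripted checks (assuming the catalog endpoints are not ACL-restricted):

# List services registered in Consul
curl -s https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/v1/catalog/services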

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com

Clients

MDM Systems

Reltio DEV - 2NBAwv1z2AvlkgS

Resource Name | Endpoint
SQS queue name | https://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_2NBAwv1z2AvlkgS
Reltio | https://mpe-02.reltio.com/ui/2NBAwv1z2AvlkgS (UI), https://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgS (API)
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/GltqYa2x8xzSnB8
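A minimal sketch of reading a profile directly from the DEV tenant through the Reltio REST API (the entity ID is a placeholder; obtaining the bearer token from Reltio's auth service is out of scope here):

# Fetch one entity from the DEV tenant
curl -H "Authorization: Bearer <access-token>" "https://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgS/entities/<entity-id>"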


Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com
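The Kafka listener above requires SASL over SSL; a minimal console-consumer sketch (the SCRAM mechanism, credentials, and topic name are assumptions - check the cluster configuration for the actual mechanism):

# client.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<user>" password="<password>";

# Consume a few records
kafka-console-consumer.sh --bootstrap-server kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 --consumer.config client.properties --topic <topic-name> --from-beginning --max-messages 5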

" }, { "title": "APAC QA Services", "pageID": "234693067", "pageLink": "/display/GMDM/APAC+QA+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - QA | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-qa
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - QA | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-qa
Kafka | kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://globalmdmnprodaspasp202202171347
HUB UI | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-qa/#/dashboard
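The Ping Federate endpoint above is the OAuth2 token introspection service; a minimal RFC 7662-style sketch for checking whether an access token is still active (the client credentials are placeholders and must be authorized for introspection):

# Introspect an access token
curl -u "<client-id>:<client-secret>" -d "token=<access-token>" https://devfederate.COMPANY.com/as/introspect.oauth2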

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_APAC_MDM_DMART_QA_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_APAC_MDM_DMART_QA_DEVOPS_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_qa&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_qa&var-topic=All&var-node=1
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_qa&var-component=manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_qa&var-interval=$__auto_interval_interval
Kube State | https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=Prometheus
Pod Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=All
PVC Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprod

Logs

Resource Name | Endpoint
Kibana | https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-qa/swagger-ui/index.html?configUrl=/api-gw-spec-apac-qa/v3/api-docs/swagger-config
Batch Service API documentation | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-qa/swagger-ui/index.html?configUrl=/api-batch-spec-apac-qa/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com

Clients

MDM Systems

Reltio QA - xs4oRCXpCKewNDK

Resource Name | Endpoint
SQS queue name | https://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_xs4oRCXpCKewNDK
Reltio | https://mpe-02.reltio.com/ui/xs4oRCXpCKewNDK (UI), https://mpe-02.reltio.com/reltio/api/xs4oRCXpCKewNDK (API)
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/jemrjLkPUhOsPMa
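A minimal sketch for peeking at pending Reltio events on the QA queue (requires AWS credentials authorized for account 930358522410; received messages become temporarily invisible to other consumers):

# Receive a single message from the QA event queue
aws sqs receive-message --queue-url https://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_xs4oRCXpCKewNDK --max-number-of-messages 1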


Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com

" }, { "title": "APAC STAGE Services", "pageID": "234693073", "pageLink": "/display/GMDM/APAC+STAGE+Services", "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - STAGE | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-stage
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - STAGE | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-stage
Kafka | kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://globalmdmnprodaspasp202202171347
HUB UI | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-stage/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_APAC_MDM_DMART_STG_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_APAC_MDM_DMART_STG_DEVOPS_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_stage&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_stage&var-topic=All&var-node=1
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_stage&var-component=manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_stage&var-interval=$__auto_interval_interval
Kube State | https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=Prometheus
Pod Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=All
PVC Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprod

Logs

Resource Name | Endpoint
Kibana | https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-stage/swagger-ui/index.html?configUrl=/api-gw-spec-apac-stage/v3/api-docs/swagger-config
Batch Service API documentation | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-stage/swagger-ui/index.html?configUrl=/api-batch-spec-apac-stage/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com

Clients

MDM Systems

Reltio STAGE - Y4StMNK3b0AGDf6

Resource Name | Endpoint
SQS queue name | https://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_Y4StMNK3b0AGDf6
Reltio | https://mpe-02.reltio.com/ui/Y4StMNK3b0AGDf6 (UI), https://mpe-02.reltio.com/reltio/api/Y4StMNK3b0AGDf6 (API)
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/NYa4AETF73napDa

Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com

" }, { "title": "APAC PROD Cluster", "pageID": "234712170", "pageLink": "/display/GMDM/APAC+PROD+Cluster", "content": "

Physical Architecture


\"\"

Kubernetes cluster


name | IP | Console address | resource type | AWS region | Filesystem | Components | Type
atp-mdmhub-prod-apac | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | https://pdcs-apa1p.COMPANY.com | EKS over EC2 | ap-southeast-1 | ~60GB per node, 6TB x2 replicated Portworx volumes | Kong, Kafka, Mongo, Prometheus, MDMHUB microservices | inbound/outbound

Components & Logs

PROD - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
apac-prod | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging (when enabled); 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - Swagger API definition (if available)
apac-prod | Batch Service | mdmhub-batch-service-* | Batch Service | logs |
apac-prod | API router | mdmhub-mdm-api-router-* | API Router | logs |
apac-prod | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs |
apac-prod | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs |
apac-prod | Callback Service | mdmhub-callback-service-* | Callback Service | logs |
apac-prod | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs |
apac-prod | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs |
apac-prod | Callback delay service | mdmhub-callback-delay-service-* | Callback delay service | logs |

PROD - backend

Namespace | Component | Pod | Description | Logs
kong | Kong | mdmhub-kong-kong-* | API manager | kubectl logs {{pod name}} --namespace kong
apac-backend | Kafka | mdm-kafka-kafka-0, mdm-kafka-kafka-1, mdm-kafka-kafka-2 | Kafka | logs
apac-backend | Kafka Exporter | mdm-kafka-kafka-exporter-* | Kafka Monitoring - Prometheus | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Zookeeper | mdm-kafka-zookeeper-0, mdm-kafka-zookeeper-1, mdm-kafka-zookeeper-2 | Zookeeper | logs
apac-backend | Mongo | mongo-0 | Mongo | logs
apac-backend | Kibana | kibana-kb-* | EFK - kibana | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | FluentD | fluentd-* | EFK - fluentd | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Elasticsearch | elasticsearch-es-default-0, elasticsearch-es-default-1 | EFK - elasticsearch | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | SQS Exporter | TODO | SQS Reltio exporter | kubectl logs {{pod name}} --namespace apac-backend
monitoring | cAdvisor | monitoring-cadvisor-* | Docker Monitoring - Prometheus | kubectl logs {{pod name}} --namespace monitoring
apac-backend | Mongo Connector | monstache-* | EFK - mongo → elasticsearch exporter | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Mongo exporter | mongo-exporter-* | mongo metrics exporter | ---
apac-backend | Git2Consul | git2consul-* | GIT to Consul loader | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Consul | consul-consul-server-0, consul-consul-server-1, consul-consul-server-2 | Consul | kubectl logs {{pod name}} --namespace apac-backend
apac-backend | Snowflake connector | apac-prod-mdm-connect-cluster-connect-* | Snowflake Kafka Connector | kubectl logs {{pod name}} --namespace apac-backend
monitoring | Kafka Connect Exporter | monitoring-jdbc-snowflake-exporter-apac-prod-* | Kafka Connect metric exporter | kubectl logs {{pod name}} --namespace monitoring
apac-backend | AKHQ | akhq-* | Kafka UI | logs


Certificates 

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/prod/namespaces/kong/config_files/certs | 2022/03/04 | 2024/03/03 | https://api-apac-prod-gbl-mdm-hub.COMPANY.com
Kafka | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/prod/namespaces/apac-backend/secrets.yaml.encrypted | 2022/03/07 | 2024/03/06 | https://kafka-api-prod-gbl-mdm-hub.COMPANY.com:9094
" }, { "title": "APAC PROD Services", "pageID": "234712172", "pageLink": "/display/GMDM/APAC+PROD+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - PROD | https://api-apac-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-prod
Ping Federate | https://prodfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - PROD | https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-gw-apac-prod
Kafka | kafka-apac-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://globalmdmprodaspasp202202171415
HUB UI | https://api-apac-prod-gbl-mdm-hub.COMPANY.com/ui-apac-prod/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | emeaprod01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_APAC_MDM_DMART_PROD_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_prod&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_prod&var-topic=All&var-node=1
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_prod&var-component=mdm_manager
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_prod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_prod&var-interval=$__auto_interval_interval
Kube State | https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-prod-apac&var-node=All&var-namespace=All&var-datasource=Prometheus
Pod Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_prod&var-namespace=All
PVC Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_prod

Logs

Resource Name | Endpoint
Kibana | https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com (PROD prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-prod/swagger-ui/index.html?configUrl=/api-gw-spec-apac-prod/v3/api-docs/swagger-config
Batch Service API documentation | https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-prod/swagger-ui/index.html?configUrl=/api-batch-spec-apac-prod/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-apac-prod-gbl-mdm-hub.COMPANY.com

Consul

Resource Name | Endpoint
Consul UI | https://consul-apac-prod-gbl-mdm-hub.COMPANY.com

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-apac-prod-gbl-mdm-hub.COMPANY.com

Clients

MDM Systems

Reltio PROD - sew6PfkTtSZhLdW

Resource Name | Endpoint
SQS queue name | https://sqs.ap-southeast-1.amazonaws.com/930358522410/ap-360_sew6PfkTtSZhLdW
Reltio | https://ap-360.reltio.com/ui/sew6PfkTtSZhLdW (UI), https://ap-360.reltio.com/reltio/api/sew6PfkTtSZhLdW (API)
Reltio Gateway User | svc-pfe-mdmhub-prod
RDM | https://rdm.reltio.com/lookups/ARTA9lOg3dbvDqk


Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-apac-prod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-apac-prod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com
Elasticsearch | https://elastic-apac-prod-gbl-mdm-hub.COMPANY.com

" }, { "title": "EMEA", "pageID": "181022903", "pageLink": "/display/GMDM/EMEA", "content": "" }, { "title": "EMEA External proxy", "pageID": "308256760", "pageLink": "/display/GMDM/EMEA+External+proxy", "content": "

This page describes the Kong external proxy servers deployed in a DLP (Double Lollipop) AWS account, used by clients outside of the COMPANY network to access MDM Hub.

Kong proxy instances

Environment | Console address | Instance | SSH access | resource type | AWS region | AWS Account ID | Components
Non PROD | http://awsprodv2.COMPANY.com/ and use the role: | i-08d4b21c314a98700 (EUW1Z2DL115) | ssh ec2-user@euw1z2dl115.COMPANY.com | EC2 | eu-west-1 | 432817204314 | Kong
PROD | http://awsprodv2.COMPANY.com/ and use the role: | i-091aa7f1fe1ede714 (EUW1Z2DL113) | ssh ec2-user@euw1z2dl113.COMPANY.com | EC2 | eu-west-1 | 432817204314 | Kong
PROD | http://awsprodv2.COMPANY.com/ and use the role: | i-05c4532bf7b8d7511 (EUW1Z2DL114) | ssh ec2-user@euw1z2dl114.COMPANY.com | EC2 | eu-west-1 | 432817204314 | Kong
 

External Hub Endpoints

Environment | Service | Endpoint | Inbound security group configuration
Non PROD | API | https://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ | MDMHub-kafka-and-api-proxy-external-nprod-sg
Non PROD | Kafka | kafka-b1-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095, kafka-b2-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095, kafka-b3-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095 | MDMHub-kafka-and-api-proxy-external-nprod-sg
PROD | API | https://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ | MDMHub-kafka-and-api-proxy-external-prod-sg (due to the limit of 60 rules per SG, add new rules to MDMHub-kafka-and-api-proxy-external-prod-sg-2)
PROD | Kafka | kafka-b1-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095, kafka-b2-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095, kafka-b3-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095 | MDMHub-kafka-and-api-proxy-external-prod-sg (see above)
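External consumers bootstrap against all three advertised brokers; a minimal sketch (the client's source IP must be whitelisted in the security group above, and client.properties must carry the TLS/SASL settings agreed for external access):

# Consume from a topic through the external PROD brokers
kafka-console-consumer.sh --bootstrap-server kafka-b1-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095,kafka-b2-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095,kafka-b3-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095 --consumer.config client.properties --topic <topic-name>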

Clients

Environment | Clients
Non PROD | Find all details in the Security Group MDMHub-kafka-and-api-proxy-external-nprod-sg
PROD | Find all details in the Security Group MDMHub-kafka-and-api-proxy-external-prod-sg

Ansible configuration

Resource | Address
Install Kong proxy | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_kong.yml
Install cadvisor | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_cadvisor.yml
Non PROD inventory | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/proxy_nprod
PROD inventory | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/proxy_prod
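A minimal sketch of applying the Kong playbook to the Non PROD proxies (run from a checkout of the mdm-hub-cluster-env repo; SSH access to the instances and any vaulted secrets are assumed to be in place):

# Install/refresh Kong on the Non PROD proxy instances
ansible-playbook -i ansible/inventory/proxy_nprod ansible/install_kong.yml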


Useful SOPs

How to access AWS Console

How to restart the EC2 instance

How to login to hosts with SSH

No downtime Kong restart/upgrade

" }, { "title": "EMEA Non PROD Cluster", "pageID": "181022904", "pageLink": "/display/GMDM/EMEA+Non+PROD+Cluster", "content": "

Physical Architecture


\"\"

Kubernetes cluster


name | IP | Console address | resource type | AWS region | Filesystem | Components | Type
atp-mdmhub-nprod-emea | 10.90.96.0/23, 10.90.98.0/23 | https://pdcs-ema1p.COMPANY.com/ | EKS over EC2 | eu-west-1 | ~100GB per node, 7.3Ti x2 replicated Portworx volumes | Kong, Kafka, Mongo, Prometheus, MDMHUB microservices | inbound/outbound

Components & Logs

DEV - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
emea-dev | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging (when enabled); 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - Swagger API definition (if available)
emea-dev | Batch Service | mdmhub-batch-service-* | Batch Service | logs |
emea-dev | API router | mdmhub-mdm-api-router-* | API Router | logs |
emea-dev | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs |
emea-dev | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs |
emea-dev | Callback Service | mdmhub-callback-service-* | Callback Service | logs |
emea-dev | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs |
emea-dev | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs |

QA - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
emea-qa | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging (when enabled); 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - Swagger API definition (if available)
emea-qa | Batch Service | mdmhub-batch-service-* | Batch Service | logs |
emea-qa | API router | mdmhub-mdm-api-router-* | API Router | logs |
emea-qa | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs |
emea-qa | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs |
emea-qa | Callback Service | mdmhub-callback-service-* | Callback Service | logs |
emea-qa | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs |
emea-qa | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs |

STAGE - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
emea-stage | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging (when enabled); 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - Swagger API definition (if available)
emea-stage | Batch Service | mdmhub-batch-service-* | Batch Service | logs |
emea-stage | API router | mdmhub-mdm-api-router-* | API Router | logs |
emea-stage | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs |
emea-stage | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs |
emea-stage | Callback Service | mdmhub-callback-service-* | Callback Service | logs |
emea-stage | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs |
emea-stage | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs |

GBL DEV - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
gbl-dev | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging (when enabled); 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - Swagger API definition (if available)
gbl-dev | Batch Service | mdmhub-batch-service-* | Batch Service | logs |
gbl-dev | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs |
gbl-dev | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs |
gbl-dev | Callback Service | mdmhub-callback-service-* | Callback Service | logs |
gbl-dev | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs |
gbl-dev | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs |
gbl-dev | DCR Service | mdmhub-mdm-dcr-service-* | DCR Service | logs |
gbl-dev | MAP Channel | mdmhub-mdm-map-channel-* | MAP Channel | logs |
gbl-dev | PforceRX Channel | mdm-pforcerx-channel-* | PforceRX Channel | logs |

GBL QA - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
gbl-qa | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging (when enabled); 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - Swagger API definition (if available)
gbl-qa | Batch Service | mdmhub-batch-service-* | Batch Service | logs |
gbl-qa | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs |
gbl-qa | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs |
gbl-qa | Callback Service | mdmhub-callback-service-* | Callback Service | logs |
gbl-qa | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs |
gbl-qa | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs |
gbl-qa | DCR Service | mdmhub-mdm-dcr-service-* | DCR Service | logs |
gbl-qa | MAP Channel | mdmhub-mdm-map-channel-* | MAP Channel | logs |
gbl-qa | PforceRX Channel | mdm-pforcerx-channel-* | PforceRX Channel | logs |

GBL STAGE - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
gbl-stage | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging (when enabled); 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - Swagger API definition (if available)
gbl-stage | Batch Service | mdmhub-batch-service-* | Batch Service | logs |
gbl-stage | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs |
gbl-stage | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs |
gbl-stage | Callback Service | mdmhub-callback-service-* | Callback Service | logs |
gbl-stage | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs |
gbl-stage | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs |
gbl-stage | DCR Service | mdmhub-mdm-dcr-service-* | DCR Service | logs |
gbl-stage | MAP Channel | mdmhub-mdm-map-channel-* | MAP Channel | logs |
gbl-stage | PforceRX Channel | mdm-pforcerx-channel-* | PforceRX Channel | logs |

Non PROD - backend 

Namespace | Component | Pod | Description | Logs
kong | Kong | mdmhub-kong-kong-* | API manager | kubectl logs {{pod name}} --namespace kong
emea-backend | Kafka | mdm-kafka-kafka-0, mdm-kafka-kafka-1, mdm-kafka-kafka-2 | Kafka | logs
emea-backend | Kafka Exporter | mdm-kafka-kafka-exporter-* | Kafka Monitoring - Prometheus | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Zookeeper | mdm-kafka-zookeeper-0, mdm-kafka-zookeeper-1, mdm-kafka-zookeeper-2 | Zookeeper | logs
emea-backend | Mongo | mongo-0 | Mongo | logs
emea-backend | Kibana | kibana-kb-* | EFK - kibana | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | FluentD | fluentd-* | EFK - fluentd | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Elasticsearch | elasticsearch-es-default-0, elasticsearch-es-default-1 | EFK - elasticsearch | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | SQS Exporter | TODO | SQS Reltio exporter | kubectl logs {{pod name}} --namespace emea-backend
monitoring | cAdvisor | monitoring-cadvisor-* | Docker Monitoring - Prometheus | kubectl logs {{pod name}} --namespace monitoring
emea-backend | Mongo Connector | monstache-* | EFK - mongo → elasticsearch exporter | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Mongo exporter | mongo-exporter-* | mongo metrics exporter | ---
emea-backend | Git2Consul | git2consul-* | GIT to Consul loader | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Consul | consul-consul-server-0, consul-consul-server-1, consul-consul-server-2 | Consul | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Snowflake connector | emea-dev-mdm-connect-cluster-connect-*, emea-qa-mdm-connect-cluster-connect-*, emea-stage-mdm-connect-cluster-connect-* | Snowflake Kafka Connector | kubectl logs {{pod name}} --namespace emea-backend
monitoring | Kafka Connect Exporter | monitoring-jdbc-snowflake-exporter-emea-dev-*, monitoring-jdbc-snowflake-exporter-emea-qa-*, monitoring-jdbc-snowflake-exporter-emea-stage-* | Kafka Connect metric exporter | kubectl logs {{pod name}} --namespace monitoring
emea-backend | AKHQ | akhq-* | Kafka UI | logs


Certificates 

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/namespaces/kong/config_files/certs | 2022/03/04 | 2024/03/03 | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com
Kafka | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/namespaces/emea-backend | 2022/03/07 | 2024/03/06 | kafka-emea-nprod-gbl-mdm-hub.COMPANY.com
" }, { "title": "EMEA DEV Services", "pageID": "181022906", "pageLink": "/display/GMDM/EMEA+DEV+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-dev
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-dev
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://pfe-atp-eu-w1-nprod-mdmhub/emea/dev
HUB UI | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-dev/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_EMEA_MDM_DMART_DEV_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_EMEA_MDM_DMART_DEVOPS_DEV_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_dev&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_dev&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_dev&var-component=mdm_manager&var-instance=All&var-node=
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval
Kube State | https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=Prometheus
Pod Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=All
PVC Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/xLgt8oTik/portworx-cluster-monitoring?orgId=1&var-cluster=atp-mdmhub-nprod-emea&var-node=All

Logs

Resource Name | Endpoint
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/ (DEV prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-dev/swagger-ui/index.html?configUrl=/api-gw-spec-emea-dev/v3/api-docs/swagger-config
Batch Service API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-dev/swagger-ui/index.html?configUrl=/api-batch-spec-emea-dev/v3/api-docs/swagger-config
DCR Service 2 API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-dev/swagger-ui/index.html?configUrl=/api-dcr-spec-emea-dev/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/

Consul

Resource Name | Endpoint
Consul UI | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

Clients

MDM Systems

Reltio DEV - wn60kG248ziQSMW

Resource Name | Endpoint
SQS queue name | https://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_wn60kG248ziQSMW
Reltio | https://mpe-01.reltio.com/ui/wn60kG248ziQSMW (UI), https://mpe-01.reltio.com/reltio/api/wn60kG248ziQSMW (API)
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/rQHwiWkdYGZRTNq

Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/
Elasticsearch | https://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/




" }, { "title": "EMEA QA Services", "pageID": "192383454", "pageLink": "/display/GMDM/EMEA+QA+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - QA | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-qa
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - QA | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-qa
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://pfe-atp-eu-w1-nprod-mdmhub/emea/qa
HUB UI | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-qa/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_EMEA_MDM_DMART_QA_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_EMEA_MDM_DMART_QA_DEVOPS_ROLE

Grafana dashboards

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_qa&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_qa&var-topic=All&var-node=1&var-instance=euw1z2dl112.COMPANY.com:9102
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_nprod&var-job=node-exporter&var-node=10.90.129.220&var-port=9100
Pod monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&var-env=emea_nprod&var-namespace=All
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_qa&var-component=batch_service&var-instance=All&var-node=
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval

Kibana dashboards

Resource Name | Endpoint
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (QA prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-qa/swagger-ui/index.html
Batch Service API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-qa/swagger-ui/index.html

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/login/?next=https%3A%2F%2Fairflow-emea-nprod-gbl-mdm-hub.COMPANY.com%2Fhome

Consul

Resource Name | Endpoint
Consul UI | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/login

Clients

MDM Systems

Reltio QA - vke5zyYwTifyeJS

Resource Name | Endpoint
SQS queue name | https://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_vke5zyYwTifyeJS
Reltio | https://mpe-01.reltio.com/ui/vke5zyYwTifyeJS (UI), https://mpe-01.reltio.com/reltio/api/vke5zyYwTifyeJS (API)
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/jIqfd8krU6ua5kR

Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home
Elasticsearch | https://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/






" }, { "title": "EMEA STAGE Services", "pageID": "192383457", "pageLink": "/display/GMDM/EMEA+STAGE+Services", "content": "


HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - STAGE | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-stage
Ping Federate | https://stgfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - STAGE | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-stage
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://pfe-atp-eu-w1-nprod-mdmhub/emea/stage
HUB UI | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-stage/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_EMEA_MDM_DMART_STG_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_EMEA_MDM_DMART_STG_DEVOPS_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_stage&var-component=mdm_manager&var-component_publisher=event_publisher&var-component_subscriber=reltio_subscriber&var-instance=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_stage&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_nprod&var-job=node-exporter&var-node=10.90.129.220&var-port=9100
Pod monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&var-env=emea_nprod&var-namespace=All
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_stage&var-component=batch_service&var-instance=All&var-node=
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval

Logs

Resource Name | Endpoint
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (STAGE prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-stage/swagger-ui/index.html?configUrl=/api-gw-spec-emea-stage/v3/api-docs/swagger-config
Batch Service API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-stage/swagger-ui/index.html?configUrl=/api-batch-spec-emea-stage/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/login/?next=https%3A%2F%2Fairflow-emea-nprod-gbl-mdm-hub.COMPANY.com%2Fhome

Consul

Resource Name | Endpoint
Consul UI | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/login

Clients

MDM Systems

Reltio STAGE - Dzueqzlld107BVW

Resource Name | Endpoint
SQS queue name | https://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_Dzueqzlld107BVW
Reltio | https://mpe-01.reltio.com/ui/Dzueqzlld107BVW (UI), https://mpe-01.reltio.com/reltio/api/Dzueqzlld107BVW (API)
Reltio Gateway User | svc-pfe-mdmhub
RDM | https://rdm.reltio.com/lookups/TBxXCy2Z6LZ8nbn

Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home
Elasticsearch | https://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/




" }, { "title": "GBL DEV Services", "pageID": "250130206", "pageLink": "/display/GMDM/GBL+DEV+Services", "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - DEV | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-dev
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-dev
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)
HUB UI | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-dev/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_EU_MDM_DMART_DEV_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_DEV_MDM_DMART_DEVOPS_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_dev&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_dev&var-topic=All&var-node=1&var-instance=10.192.70.189:9102
Pod Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s
Kube State | https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=Prometheus
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_dev&var-component=batch_service&var-instance=All&var-node=
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval

Logs

Resource Name | Endpoint
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (DEV prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-dev/swagger-ui/index.html

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/

Consul

Resource Name | Endpoint
Consul UI | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

Clients

MDM Systems

Reltio GBL DEV - FLy4mo0XAh0YEbN

Resource Name | Endpoint
SQS queue name | https://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_FLy4mo0XAh0YEbN
Reltio | https://eu-dev.reltio.com/ui/FLy4mo0XAh0YEbN (UI), https://eu-dev.reltio.com/reltio/api/FLy4mo0XAh0YEbN (API)
Reltio Gateway User | Integration_Gateway_User
RDM | https://rdm.reltio.com/%s/WUBsSEwz3SU3idO/

Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/
Elasticsearch | https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com

" }, { "title": "GBL QA Services", "pageID": "250130235", "pageLink": "/display/GMDM/GBL+QA+Services", "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - QA | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-qa
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - QA | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-qa
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)
HUB UI | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-qa/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_EU_MDM_DMART_QA_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_QA_MDM_DMART_DEVOPS_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_qa&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_qa&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=
Pod Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=All
Kube State | https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=Prometheus
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_qa&var-component=batch_service&var-instance=All&var-node=
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=gbl_dev&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_interval

Logs

Resource Name | Endpoint
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (QA prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-qa/swagger-ui/index.html

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/

Consul

Resource Name | Endpoint
Consul UI | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

Clients

MDM Systems

Reltio GBL MAPP - AwFwKWinxbarC0Z

Resource Name | Endpoint
SQS queue name | https://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_AwFwKWinxbarC0Z
Reltio | https://mpe-01.reltio.com/ui/AwFwKWinxbarC0Z/ (UI), https://mpe-01.reltio.com/reltio/api/AwFwKWinxbarC0Z/ (API)
Reltio Gateway User | Integration_Gateway_User
RDM | https://rdm.reltio.com/%s/WUBsSEwz3SU3idO/

Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-emea-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/
Elasticsearch | https://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com

" }, { "title": "GBL STAGE Services", "pageID": "250130297", "pageLink": "/display/GMDM/GBL+STAGE+Services", "content": "

HUB Endpoints

API & Kafka & S3

Resource Name | Endpoint
Gateway API OAuth2 External - STAGE | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-stage
Ping Federate | https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - STAGE | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-stage
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)
HUB UI | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-stage/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
DB Name | COMM_EU_MDM_DMART_STG_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_STG_MDM_DMART_DEVOPS_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_stage&var-node=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_stage&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=
Pod Monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=All
Kube State | https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=Prometheus
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_stage&var-component=batch_service&var-instance=All&var-node=
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=gbl_stage&var-instance=&var-node_instance=&var-interval=$__auto_interval_interval

Logs

Resource Name | Endpoint
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (STAGE prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-stage/swagger-ui/index.html

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/

Consul

Resource Name | Endpoint
Consul UI | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

Clients

MDM Systems

Reltio GBL STAGE - FW4YTaNQTJEcN2g

Resource Name | Endpoint
SQS queue name | https://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_FW4YTaNQTJEcN2g
Reltio | https://eu-dev.reltio.com/ui/FW4YTaNQTJEcN2g/ (UI), https://eu-dev.reltio.com/reltio/api/FW4YTaNQTJEcN2g/ (API)
Reltio Gateway User | Integration_Gateway_User
RDM | https://rdm.reltio.com/%s/WUBsSEwz3SU3idO/

Internal Resources

Resource Name | Endpoint
Mongo | mongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017
Kafka | kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094 (SASL SSL)
Kibana | https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/
Elasticsearch | https://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com

" }, { "title": "EMEA PROD Cluster", "pageID": "196881569", "pageLink": "/display/GMDM/EMEA+PROD+Cluster", "content": "

Physical Architecture


\"\"

Kubernetes cluster


name | IP | Console address | resource type | AWS region | Filesystem | Components | Type
atp-mdmhub-prod-emea | 10.90.96.0/23, 10.90.98.0/23 | https://pdcs-ema1p.COMPANY.com/ | EKS over EC2 | eu-west-1 | ~100GB per node, 7.3Ti x2 replicated Portworx volumes | Kong, Kafka, Mongo, Prometheus, MDMHUB microservices | inbound/outbound

Components & Logs

PROD - microservices

ENV (namespace) | Component | Pod | Description | Logs | Pod ports
emea-prod | Manager | mdmhub-mdm-manager-* | Manager | logs | 8081 - application API; 8000 - remote debugging (when enabled); 9000 - Prometheus exporter; 8888 - Spring Boot actuator; 8080 - Swagger API definition (if available)
emea-prod | Batch Service | mdmhub-batch-service-* | Batch Service | logs |
emea-prod | API router | mdmhub-mdm-api-router-* | API Router | logs |
emea-prod | Reltio Subscriber | mdmhub-reltio-subscriber-* | Reltio Subscriber | logs |
emea-prod | Entity Enricher | mdmhub-entity-enricher-* | Entity Enricher | logs |
emea-prod | Callback Service | mdmhub-callback-service-* | Callback Service | logs |
emea-prod | Event Publisher | mdmhub-event-publisher-* | Event Publisher | logs |
emea-prod | Reconciliation Service | mdmhub-mdm-reconciliation-service-* | Reconciliation Service | logs |

PROD - backend 

Namespace | Component | Pod | Description | Logs
kong | Kong | mdmhub-kong-kong-* | API manager | kubectl logs {{pod name}} --namespace kong
emea-backend | Kafka | mdm-kafka-kafka-0, mdm-kafka-kafka-1, mdm-kafka-kafka-2 | Kafka | logs
emea-backend | Kafka Exporter | mdm-kafka-kafka-exporter-* | Kafka Monitoring - Prometheus | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Zookeeper | mdm-kafka-zookeeper-0, mdm-kafka-zookeeper-1, mdm-kafka-zookeeper-2 | Zookeeper | logs
emea-backend | Mongo | mongo-0, mongo-1, mongo-2 | Mongo | logs
emea-backend | Kibana | kibana-kb-* | EFK - kibana | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | FluentD | fluentd-* | EFK - fluentd | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Elasticsearch | elasticsearch-es-default-0, elasticsearch-es-default-1, elasticsearch-es-default-2 | EFK - elasticsearch | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | SQS Exporter | TODO | SQS Reltio exporter | kubectl logs {{pod name}} --namespace emea-backend
monitoring | cAdvisor | monitoring-cadvisor-* | Docker Monitoring - Prometheus | kubectl logs {{pod name}} --namespace monitoring
emea-backend | Mongo Connector | monstache-* | EFK - mongo → elasticsearch exporter | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Mongo exporter | mongo-exporter-* | mongo metrics exporter | ---
emea-backend | Git2Consul | git2consul-* | GIT to Consul loader | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Consul | consul-consul-server-0, consul-consul-server-1, consul-consul-server-2 | Consul | kubectl logs {{pod name}} --namespace emea-backend
emea-backend | Snowflake connector | emea-prod-mdm-connect-cluster-connect-* | Snowflake Kafka Connector | kubectl logs {{pod name}} --namespace emea-backend
monitoring | Kafka Connect Exporter | monitoring-jdbc-snowflake-exporter-emea-prod-* | Kafka Connect metric exporter | kubectl logs {{pod name}} --namespace monitoring
emea-backend | AKHQ | akhq-* | Kafka UI | logs


Certificates 

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana, Elasticsearch, Kong, Airflow, Consul, Prometheus | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/namespaces/kong/config_files/certs | 2022/03/04 | 2024/03/03 | https://api-emea-prod-gbl-mdm-hub.COMPANY.com/
Kafka | http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/namespaces/emea-backend | 2022/03/07 | 2024/03/06 | https://kafka-emea-prod-gbl-mdm-hub.COMPANY.com/
" }, { "title": "EMEA PROD Services", "pageID": "196881867", "pageLink": "/display/GMDM/EMEA+PROD+Services", "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - PROD | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prod
Ping Federate | https://prodfederate.COMPANY.com/as/token.oauth2
Gateway API KEY auth - PROD | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-emea-prod
Kafka | kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://pfe-atp-eu-w1-prod-mdmhub/emea/prod
HUB UI | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ui-emea-prod/#/dashboard
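For a quick connectivity check, the usual flow is: obtain an OAuth2 token from Ping Federate, then call the external gateway with it. A minimal sketch in Python; the client credentials, the client_credentials grant type, and the /lookups path are assumptions for illustration - use the credentials and routes issued for your client.

import requests

# Hypothetical client credentials issued by Ping Federate for your application.
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"

TOKEN_URL = "https://prodfederate.COMPANY.com/as/token.oauth2"
GATEWAY = "https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prod"

# Step 1: obtain a bearer token (client_credentials grant assumed).
token_resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: call the external gateway with the token; /lookups is illustrative only.
resp = requests.get(
    f"{GATEWAY}/lookups",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
print(resp.status_code, resp.text[:200])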

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/
DB Name | COMM_EMEA_MDM_DMART_PROD_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLE

Monitoring

Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_prod&var-node=All&var-type=entities
HUB Batch Performance | https://mdm-monitoring.COMPANY.com/grafana/d/gz0X6rkMk/hub-batch-performance?orgId=1&refresh=10s&var-env=emea_prod&var-node=All&var-name=All
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_prod&var-topic=All&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9102
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_prod&var-job=node_exporter&var-node=euw1z2pl113.COMPANY.com&var-port=9100
Docker monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=1
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_prod&var-component=manager&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9104
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_prod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_prod&var-instance=euw1z2pl115.COMPANY.com:9120&var-node_instance=euw1z2pl115.COMPANY.com&var-interval=$__auto_interval_interval

Logs

Resource Name | Endpoint
Kibana | https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)

Documentation

Resource Name | Endpoint
Manager API documentation | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-config
Batch Service API documentation | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-prod/swagger-ui/index.html?configUrl=/api-batch-spec-emea-prod/v3/api-docs/swagger-config

Airflow

Resource Name | Endpoint
Airflow UI | https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/home

Consul

Resource Name | Endpoint
Consul UI | https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/services

AKHQ - Kafka

Resource Name | Endpoint
AKHQ Kafka UI | https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/login

Clients

MDM Systems

Reltio

PROD_EMEA Xy67R0nDA10RUV6

Resource Name | Endpoint
SQS queue name | https://sqs.eu-west-1.amazonaws.com/930358522410/eu-360_Xy67R0nDA10RUV6
Reltio

https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6 - API

https://eu-360.reltio.com/ui/Xy67R0nDA10RUV6/# - UI

Reltio Gateway User

svc-pfe-mdmhub-prod
RDM

https://rdm.reltio.com/%s/uJG2vepGEXEHmrI/


Internal Resources


Resource Name | Endpoint
Mongo

https://mongo-emea-prod-gbl-mdm-hub.COMPANY.com:27017

Kafka

http://kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/, http://kafka-b2-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/, http://kafka-b3-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/

Kibana | https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/
Elasticsearch | https://elastic-emea-prod-gbl-mdm-hub.COMPANY.com/
" }, { "title": "GBL PROD Services", "pageID": "284792395", "pageLink": "/display/GMDM/GBL+PROD+Services", "content": "

HUB Endpoints

API & Kafka & S3 & UI

Resource Name | Endpoint
Gateway API OAuth2 External - PROD | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gbl-prod
Ping Federate | https://prodfederate.COMPANY.com/as/token.oauth2
Gateway API KEY auth - PROD | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gbl-prod
Kafka | kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094
MDM HUB S3 | s3://pfe-baiaes-eu-w1-project/mdm
HUB UI | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ui-gbl-prod/#/dashboard

Snowflake MDM DataMart

Resource Name | Endpoint
DB Url | https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/
DB Name | COMM_EU_MDM_DMART_PROD_DB
Default warehouse name | COMM_MDM_DMART_WH
DevOps role name | COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLE

Monitoring


Resource Name | Endpoint
HUB Performance | https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_prod&var-component=mdm_manager&var-component_publisher=event_publisher&var-component_subscriber=reltio_subscriber&var-instance=All&var-type=entities
Kafka Topics Overview | https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_prod&var-kube_env=emea_prod&var-topic=All&var-instance=All&var-node=
Host Statistics | https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=&var-instance=10.90.130.122
Pods monitoring | https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=&var-instance=10.90.130.122
JMX Overview | https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_prod&var-component=manager&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9104
Kong | https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_prod&var-service=All&var-node=All
MongoDB | https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_prod&var-instance=10.90.142.48:9216&var-node_instance=euw1z2pl115.COMPANY.com&var-interval=$__auto_interval_interval


Logs


Kibana | https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)


Documentation


Manager API documentation | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-config


Airflow


Airflow UI | https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/home


Consul


Consul UI | https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/services


AKHQ - Kafka


AKHQ Kafka UI | https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/login


Clients

MDM Systems

Reltio

PROD_EMEA - FW2ZTF8K3JpdfFl

SQS queue name | https://sqs.eu-west-1.amazonaws.com/930358522410/euprod-01_FW2ZTF8K3JpdfFl
Reltio

https://eu-360.reltio.com/reltio/api/FW2ZTF8K3JpdfFl - API

https://eu-360.reltio.com/ui/FW2ZTF8K3JpdfFl/ - UI

Reltio Gateway User

pfe_mdm_api
RDM
https://rdm.reltio.com/%s/ImsRdmCOMPANY/


Internal Resources


Mongo

https://mongo-emea-prod-gbl-mdm-hub.COMPANY.com:27017

Kafka

http://kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/, http://kafka-b2-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/, http://kafka-b3-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/

Kibana | https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/
Elasticsearch | https://elastic-emea-prod-gbl-mdm-hub.COMPANY.com/
" }, { "title": "US Trade (FLEX)", "pageID": "164470168", "pageLink": "/pages/viewpage.action?pageId=164470168", "content": "" }, { "title": "US Non PROD Cluster", "pageID": "164470067", "pageLink": "/display/GMDM/US+Non+PROD+Cluster", "content": "

Physical Architecture

\"\"


Hosts

ID | IP | Hostname | Docker User | Resource Type | Specification | AWS Region | Filesystem

DEV

●●●●●●●●●●●●●

amraelp00005781.COMPANY.com

mdmihnpr

EC2

r4.2xlarge

us-east

750 GB - /app

15 GB - /var/lib/docker

Components & Logs

ENV | Host | Component | Docker name | Description | Logs | Open Ports
DEV | DEV | Manager | devmdmsrv_mdm-manager_1 | Gateway API | /app/mdmgw/dev-mdm-srv/manager/log | 8849, 9104
DEV | DEV | Batch Channel | devmdmsrv_batch-channel_1 | Batch file processor, S3 poller | /app/mdmgw/dev-mdm-srv/batch_channel/log | 9121
DEV | DEV | Publisher | devmdmhubsrv_event-publisher_1 | Event publisher | /app/mdmhub/dev-mdm-srv/event_publisher/log | 9106
DEV | DEV | Subscriber | devmdmhubsrv_reltio-subscriber_1 | SQS Reltio event subscriber | /app/mdmhub/dev-mdm-srv/reltio_subscriber/log | 9105
DEV | DEV | Console | devmdmsrv_console_1 | Hawtio console | | 9999
ENV | Host | Component | Docker name | Description | Logs | Open Ports
TEST | DEV | Manager | testmdmsrv_mdm-manager_1 | Gateway API | /app/mdmgw/test-mdm-srv/manager/log | 8850, 9108
TEST | DEV | Batch Channel | testmdmsrv_batch-channel_1 | Batch file processor, S3 poller | /app/mdmgw/test-mdm-srv/batch_channel/log | 9111
TEST | DEV | Publisher | testmdmhubsrv_event-publisher_1 | Event publisher | /app/mdmhub/test-mdm-srv/event_publisher/log | 9110
TEST | DEV | Subscriber | testmdmhubsrv_reltio-subscriber_1 | SQS Reltio event subscriber | /app/mdmhub/test-mdm-srv/reltio_subscriber/log | 9109

Back-End 

Host | Component | Docker name | Description | Logs | Open Ports
DEV | FluentD | fluentd | EFK - FluentD | /app/efk/fluentd/log | 24225
DEV | Kibana | kibana | EFK - Kibana | docker logs kibana | 5601
DEV | Elasticsearch | elasticsearch | EFK - Elasticsearch | /app/efk/elasticsearch/logs | 9200
DEV | Prometheus | prometheus | Prometheus Federation slave server | docker logs prometheus | 9119
DEV | Mongo | mongo_mongo_1 | Mongo | docker logs mongo_mongo_1 | 27017
DEV | Mongo Exporter | mongo_exporter | Mongo → Prometheus exporter | /app/mongo_exporter/logs | 9120
DEV | Monstache Connector | monstache-connector | Mongo → Elasticsearch exporter | | 8095
DEV | Kafka | kafka_kafka_1 | Kafka | docker logs kafka_kafka_1 | 9093, 9094, 9101
DEV | Kafka Exporter | kafka_kafka_exporter_1 | Kafka → Prometheus exporter | docker logs kafka_kafka_exporter_1 | 9102
DEV | SQS Exporter | sqs-exporter-dev | SQS → Prometheus exporter | docker logs sqs-exporter-dev | 9122
DEV | Cadvisor | cadvisor | Docker → Prometheus exporter | docker logs cadvisor | 9103
DEV | Kong | kong_kong_1 | API Manager | /app/mdmgw/kong/kong_logs | 8000, 8443, 32774
DEV | Kong - DB | kong_kong-database_1 | Kong Cassandra database | docker logs kong_kong-database_1 | 9042
DEV | Zookeeper | kafka_zookeeper_1 | Zookeeper | docker logs kafka_zookeeper_1 | 2181
DEV | Node Exporter | (non-docker) node_exporter | Prometheus node exporter | systemctl status node_exporter | 9100

Certificates

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/efk/kibana/mdm-log-management-us-nonprod.COMPANY.com.cer
22.02.2019 | 07.05.2022 | mdm-log-management-us-nonprod.COMPANY.com
Kong - API
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/certs/mdm-ihub-us-nonprod.COMPANY.com.pem
18.07.2018 | 17.07.2021

CN = mdm-ihub-us-nonprod.COMPANY.com

O = COMPANY

Kafka - Server Truststore
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/ssl/server.truststore.jks
10.07.2020 | 01.09.2026

O = Default Company Ltd

ST = Some-State

C = AU

Kafka - Server KeyStore
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/ssl/server.keystore.jks
10.07.2020 | 06.07.2022

CN = KafkaFlex

OU = Unknown

O = Unknown

L = Unknown

ST = Unknown

C = Unknown

Elasticsearch
https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/efk/esnode1/mdm-esnode1-us-nonprod.COMPANY.com.cer

22.02.2019 | 21.02.2022

mdm-esnode1-us-nonprod.COMPANY.com

Unix groups

Resource Name | Type | Description | Support
userComputer Role

Login: mdmihnpr
Name: SRVGBL-Pf6687993
Uid: 27634358
Gid: 20796763 <mdmihub>


userUnix Role Group

Role: ADMIN_ROLE


portsSecurity groupSG Name: PFE-SG-IHUB-APP-DEV-001

http://btondemand.COMPANY.com

Submit ticket to GBL-BTI-IOD AWS FULL SUPPORT

Internal Clients

Name | Gateway User Name | Authentication | Ping Federate User | Roles | Countries | Sources | Topic
FLEX US user
flex_nprod
External OAuth2
Flex-MDM_client
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "SCAN_ENTITIES"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "SAP"
dev-out-full-flex-all
test-out-full-flex-all
test2-out-full-flex-all
test3-out-full-flex-all
Internal HUB user
mdm_test_user
External OAuth2
Flex-MDM_client
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "DELETE_CROSSWALK"
- "GET_RELATION"
- "SCAN_ENTITIES"
- "SCAN_RELATIONS"
- "LOOKUPS"
- "ENTITY_ATTRIBUTES_UPDATE"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"
- "SAP"
- "HIN"
- "DEA

Integration Batch Update user
integration_batch_user
Key Auth
N/A
- "GET_ENTITIES"
- "ENTITY_ATTRIBUTES_UPDATE"
- "GENERATE_ID"
- "CREATE_HCO"
- "UPDATE_HCO"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"
dev-internal-integration-tests
FLEX Batch Channel user

flex_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
dev-internal-hco-create-flex
flex_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
test-internal-hco-create-flex
flex_batch_test2
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
test2-internal-hco-create-flex
flex_batch_test3
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
test3-internal-hco-create-flex
SAP Batch Channel user

sap_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
dev-internal-hco-create-sap
sap_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
test-internal-hco-create-sap
sap_batch_test2
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
test2-internal-hco-create-sap
sap_batch_test3
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
test3-internal-hco-create-sap
HIN Batch Channel user

hin_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
dev-internal-hco-create-hin
hin_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
test-internal-hco-create-hin
hin_batch_test2
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
test2-internal-hco-create-hin
hin_batch_test3
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
test3-internal-hco-create-hin
DEA Batch Channel user

dea_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
dev-internal-hco-create-dea
dea_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
test-internal-hco-create-dea
dea_batch_test2
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
test2-internal-hco-create-dea
dea_batch_test3
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
test3-internal-hco-create-dea
340B Batch Channel user
340b_batch_dev
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "340B"
dev-internal-hco-create-340b
340b_batch_test
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "340B"
test-internal-hco-create-340b
" }, { "title": "US DEV Services", "pageID": "164469990", "pageLink": "/display/GMDM/US+DEV+Services", "content": "

HUB Endpoints

API & Kafka & S3

Resource Name | Endpoint
Gateway API OAuth2 External - DEV
https://mdm-ihub-us-nonprod.COMPANY.com:8443/dev-ext
Ping Federate
https://devfederate.COMPANY.com/as/introspect.oauth2
Gateway API KEY auth - DEV
https://mdm-ihub-us-nonprod.COMPANY.com:8443/dev
Kafka
amraelp00005781.COMPANY.com:9094
MDM HUB S3 
s3://mdmnprodamrasp22124/

Monitoring

Resource Name | Endpoint
HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=us_dev&var-node=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=us_dev&var-topic=All&var-node=1&var-instance=amraelp00005781.COMPANY.com:9102
Host Statistics
https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=us_dev&var-node=amraelp00005781.COMPANY.com&var-port=9100
Docker monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=us_dev&var-node=1
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=us_dev&var-component=batch_channel&var-node=1&var-instance=amraelp00005781.COMPANY.com:9121
Kong
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=us_dev&var-instance=amraelp00005781.COMPANY.com:9120&var-node_instance=amraelp00005781.COMPANY.com&var-interval=$__auto_interval_interval

Logs

Resource Name | Endpoint
Kibana
https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana (DEV prefixed dashboards)

MDM Systems

Reltio US DEV keHVup25rN7ij3Y

Resource Name | Endpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/dev_keHVup25rN7ij3Y
Reltio
https://dev.reltio.com/ui/keHVup25rN7ij3Y
https://dev.reltio.com/reltio/api/keHVup25rN7ij3Y

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/aPYW1rxK6I1Op4y/

Internal Resources

Resource Name | Endpoint
Mongo
mongodb://amraelp00005781.COMPANY.com:27107
Kafka
amraelp00005781.COMPANY.com:9094
Zookeeper
amraelp00005781.COMPANY.com:2181
Kibana
https://amraelp00005781.COMPANY.com:5601/app/kibana
Elasticsearch
https://amraelp00005781.COMPANY.com:9200
Hawtio
http://amraelp00005781.COMPANY.com:9999/hawtio/#/login
" }, { "title": "US TEST (QA) Services", "pageID": "164469988", "pageLink": "/display/GMDM/US+TEST+%28QA%29+Services", "content": "

HUB Endpoints

API & Kafka & S3

Resource Name | Endpoint
Gateway API OAuth2 External - TEST
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test-ext
Gateway API OAuth2 External - TEST2
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test2-ext
Gateway API OAuth2 External - TEST3
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test3-ext
Gateway API KEY auth - TEST
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test
Gateway API KEY auth - TEST2
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test2
Gateway API KEY auth - TEST3
https://mdm-ihub-us-nonprod.COMPANY.com:8443/test3
Ping Federate
https://devfederate.COMPANY.com/as/introspect.oauth2
Kafka
amraelp00005781.COMPANY.com:9094
MDM HUB S3 
s3://mdmnprodamrasp22124/

Logs

Resource Name | Endpoint
Kibana
https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana (TEST prefixed dashboards)

MDM Systems

Reltio US TEST cnL0Gq086PrguOd

Resource Name | Endpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/test_cnL0Gq086PrguOd 
Reltio
https://test.reltio.com/ui/cnL0Gq086PrguOd
https://test.reltio.com/reltio/api/cnL0Gq086PrguOd

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/FENBHNkytefh9dB/  

Reltio US TEST2 JKabsuFZzNb4K6k

Resource Name | Endpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/test_JKabsuFZzNb4K6k
Reltio
https://test.reltio.com/ui/JKabsuFZzNb4K6k
https://test.reltio.com/reltio/api/JKabsuFZzNb4K6k

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/dhUp0Lm9NebmqB9/  

Reltio US TEST3 Yy7KqOqppDVzJpk

Resource Name | Endpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/test_Yy7KqOqppDVzJpk
Reltio
https://test.reltio.com/ui/Yy7KqOqppDVzJpk
https://test.reltio.com/reltio/api/Yy7KqOqppDVzJpk

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/Q4rz1LUZ9WnpVoJ/  

Internal Resources

Resource Name | Endpoint
Mongo
mongodb://amraelp00005781.COMPANY.com:27107
Kafka
amraelp00005781.COMPANY.com:9094
Zookeeper
amraelp00005781.COMPANY.com:2181
Kibana
https://amraelp00005781.COMPANY.com:5601/app/kibana
Elasticsearch
https://amraelp00005781.COMPANY.com:9200
Hawtio
http://amraelp00005781.COMPANY.com:9999/hawtio/#/login
" }, { "title": "US PROD Cluster", "pageID": "164470064", "pageLink": "/display/GMDM/US+PROD+Cluster", "content": "

Physical Architecture

\"\"


Hosts

ID | IP | Hostname | Docker User | Resource Type | Specification | AWS Region | Filesystem
PROD1
●●●●●●●●●●●●●●
amraelp00006207.COMPANY.com
mdmihpr 
EC2 | r4.xlarge | us-east-1e

500 GB - /app

15 GB - /var/lib/docker

PROD2
●●●●●●●●●●●●●●
amraelp00006208.COMPANY.com
mdmihpr
EC2 | r4.xlarge | us-east-1e

500 GB - /app

15 GB - /var/lib/docker

PROD3
●●●●●●●●●●●●
amraelp00006209.COMPANY.com
mdmihpr
EC2 | r4.xlarge | us-east-1e

500 GB - /app

15 GB - /var/lib/docker

Components & Logs

Host | Component | Docker name | Description | Logs | Open Ports
PROD1, PROD2, PROD3 | Manager | mdmgw_mdm-manager_1 | Gateway API | /app/mdmgw/manager/log | 9104, 8851
PROD1 | Batch Channel | mdmgw_batch-channel_1 | Batch file processor, S3 poller | /app/mdmgw/batch_channel/log | 9107
PROD1, PROD2, PROD3 | Publisher | mdmhub_event-publisher_1 | Event publisher | /app/mdmhub/event_publisher/log | 9106
PROD1, PROD2, PROD3 | Subscriber | mdmhub_reltio-subscriber_1 | SQS Reltio event subscriber | /app/mdmhub/reltio_subscriber/log | 9105

Back-End

Host | Component | Docker name | Description | Logs | Open Ports
PROD1, PROD2, PROD3 | Elasticsearch | elasticsearch | EFK - Elasticsearch | /app/efk/elasticsearch/logs | 9200
PROD1, PROD2, PROD3 | FluentD | fluentd | EFK - FluentD | /app/efk/fluentd/log |
PROD3 | Kibana | kibana | EFK - Kibana | docker logs kibana | 5601
PROD3 | Prometheus | prometheus | Prometheus Federation slave server | docker logs prometheus | 9109
PROD1, PROD2, PROD3 | Mongo | mongo_mongo_1 | Mongo | docker logs mongo_mongo_1 | 27017
PROD3 | Monstache Connector | monstache-connector | Mongo → Elasticsearch exporter | |
PROD1, PROD2, PROD3 | Kafka | kafka_kafka_1 | Kafka | docker logs kafka_kafka_1 | 9101, 9093, 9094
PROD1, PROD2, PROD3 | Kafka Exporter | kafka_kafka_exporter_1 | Kafka → Prometheus exporter | docker logs kafka_kafka_exporter_1 | 9102
PROD1, PROD2, PROD3 | Cadvisor | cadvisor | Docker → Prometheus exporter | docker logs cadvisor | 9103
PROD3 | SQS Exporter | sqs-exporter | SQS → Prometheus exporter | docker logs sqs-exporter | 9108
PROD1, PROD2, PROD3 | Kong | kong_kong_1 | API Manager | /app/mdmgw/kong/kong_logs | 8000, 8443, 32777
PROD1, PROD2, PROD3 | Kong - DB | kong_kong-database_1 | Kong Cassandra database | docker logs kong_kong-database_1 | 7000, 9042
PROD1, PROD2, PROD3 | Zookeeper | kafka_zookeeper_1 | Zookeeper | docker logs kafka_zookeeper_1 | 2181, 2888, 3888
PROD1, PROD2, PROD3 | Node Exporter | (non-docker) node_exporter | Prometheus node exporter | systemctl status node_exporter | 9100

Certificates

Resource | Certificate Location | Valid from | Valid to | Issued To
Kibana | https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/efk/kibana/mdm-log-management-us-trade-prod.COMPANY.com.cer | 22.02.2019 | 21.02.2022 | mdm-log-management-us-trade-prod.COMPANY.com
Kong - API | https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.pem | 04.01.2022 | 04.01.2024 | CN = mdm-ihub-us-trade-prod.COMPANY.com, O = COMPANY

Kafka - Client Truststore | https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/client.truststore.jks | 01.09.2016 | 01.09.2026 | COMPANY Root CA G2
Kafka - Server Truststore
PROD1 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server1.keystore.jks
PROD2 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server2.keystore.jks
PROD3 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server3.keystore.jks
04.01.2022 | 04.01.2024

CN = mdm-ihub-us-trade-prod.COMPANY.com

O = COMPANY

Elasticsearch

esnode1 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode1

esnode2 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode2

esnode3 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode3

22.02.2019 | 21.02.2022

mdm-esnode1-us-trade-prod.COMPANY.com

mdm-esnode2-us-trade-prod.COMPANY.com

mdm-esnode3-us-trade-prod.COMPANY.com

Unix groups

Resource Name | Type | Description | Support
ELBLoad Balancer

Reference LB Name: PFE-CLB-JIRA-HARMONY-PROD-001
CLB name: PFE-CLB-MDM-HUB-TRADE-PROD-001
DNS name: internal-PFE-CLB-MDM-HUB-TRADE-PROD-001-1966081961.us-east-1.elb.amazonaws.com


userComputer Role

Computer Role: UNIX-UNIVERSAL-AWSCBSDEV-MDMIHPR-COMPUTERS-U 

Login: mdmihpr
Name: SRVGBL-mdmihpr
UID: 25084803
GID: 20796763 <mdmihub>


userUnix Role Group

Unix-mdmihubProd-U

Role: ADMIN_ROLE


portsSecurity groupSG Name: PFE-SG-IHUB-APP-PROD-001

http://btondemand.COMPANY.com

Submit ticket to GBL-BTI-IOD AWS FULL SUPPORT

S3S3 Bucket

mdmprodamrasp42095 (us-east-1)

Username: SRVC-MDMIHPR
Console login: https://bti-aws-prod-hosting.signin.aws.amazon.com/console


Internal Clients

Name | Gateway User Name | Authentication | Ping Federate User | Roles | Countries | Sources | Topic
Internal MDM Hub user
publishing_hub
Key Auth
N/A
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "DELETE_CROSSWALK"
- "GET_RELATION"
- "SCAN_ENTITIES"
- "SCAN_RELATIONS"
- "LOOKUPS"
- "ENTITY_ATTRIBUTES_UPDATE"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"
prod-internal-reltio-events
Internal MDM Test user
mdm_test_user
External OAuth2
MDM_client
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "DELETE_CROSSWALK"
- "GET_RELATION"
- "SCAN_ENTITIES"
- "SCAN_RELATIONS"
- "LOOKUPS"
- "ENTITY_ATTRIBUTES_UPDATE"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"
- "SAP"
- "HIN"
- "DEA"

Integration Batch Update user
integration_batch_user
Key Auth
N/A
- "GET_ENTITIES"
- "ENTITY_ATTRIBUTES_UPDATE"
- "GENERATE_ID"
- "CREATE_HCO"
- "UPDATE_HCO"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
- "AddrCalc"

FLEX US user
flex_prod
External OAuth2
Flex-MDM_client
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCP"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "SCAN_ENTITIES"
ALL
- "FLEXProposal"
- "FLEX"
- "FLEXIDL"
- "Calculate"
prod-out-full-flex-all
FLEX Batch Channel user
flex_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "FLEX"
- "FLEXIDL"
prod-internal-hco-create-flex
SAP Batch Channel user
sap_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "SAP"
prod-internal-hco-create-sap
HIN Batch Channel user
hin_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "HIN"
prod-internal-hco-create-hin
DEA Batch Channel user
dea_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "DEA"
prod-internal-hco-create-dea
340B Batch Channel user
340b_batch
Key Auth
N/A
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
ALL
- "340B"
prod-internal-hco-create-340b
" }, { "title": "US PROD Services", "pageID": "164469976", "pageLink": "/display/GMDM/US+PROD+Services", "content": "

HUB Endpoints

API & Kafka & S3

Resource Name | Endpoint
Gateway API OAuth2 External - PROD
https://mdm-ihub-us-trade-prod.COMPANY.com/gw-api-oauth-ext
Gateway API OAuth2 - PROD
https://mdm-ihub-us-trade-prod.COMPANY.com/gw-api-oauth
Gateway API KEY auth - PROD
https://mdm-ihub-us-trade-prod.COMPANY.com/gw-api
Ping Federate
https://prodfederate.COMPANY.com/as/introspect.oauth2
Kafka
amraelp00006207.COMPANY.com:9094
amraelp00006208.COMPANY.com:9094
amraelp00006209.COMPANY.com:9094
MDM HUB S3 
s3://mdmprodamrasp42095/
- FLEX: PROD/inbound/FLEX
- SAP: PROD/inbound/SAP
- HIN: PROD/inbound/HIN
- DEA: PROD/inbound/DEA
- 340B: PROD/inbound/340B

Monitoring

Resource Name | Endpoint
HUB Performance
https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=us_prod&var-node=All&var-type=entities
Kafka Topics Overview
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=us_prod&var-topic=All&var-node=1&var-instance=amraelp00006207.COMPANY.com:9102
Host Statistics
https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=us_prod&var-node=amraelp00006207.COMPANY.com&var-port=9100
Docker monitoring
https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=us_prod&var-node=1
JMX Overview
https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=us_prod&var-component=batch_channel&var-node=1&var-instance=amraelp00006207.COMPANY.com:9107
Kong
MongoDB
https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=us_prod&var-instance=amraelp00006209.COMPANY.com:9110&var-node_instance=amraelp00006209.COMPANY.com&var-interval=$__auto_interval_interval

Logs

Resource Name | Endpoint
Kibana
https://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibana

MDM Systems

Reltio US PROD VUUWV21sflYijwa

Resource Name | Endpoint
SQS queue name
https://sqs.us-east-1.amazonaws.com/930358522410/361_VUUWV21sflYijwa
Reltio
https://361.reltio.com/ui/VUUWV21sflYijwa/
https://361.reltio.com/reltio/api/VUUWV21sflYijwa 

Reltio Gateway User

Integration_Gateway_US_User
RDM
https://rdm.reltio.com/%s/f6dQoR9tfCpFCtm/

Internal Resources

Resource Name | Endpoint
Mongo
mongodb://amraelp00006207.COMPANY.com:27017,amraelp00006208.COMPANY.com:27017,amraelp00006209.COMPANY.com:28018
Kafka
amraelp00006207.COMPANY.com:9094
amraelp00006208.COMPANY.com:9094
amraelp00006209.COMPANY.com:9094
Zookeeper
amraelp00006207.COMPANY.com:2181
amraelp00006208.COMPANY.com:2181
amraelp00006209.COMPANY.com:2181
Kibana
https://amraelp00006209.COMPANY.com:5601/app/kibana
Elasticsearch
https://amraelp00006207.COMPANY.com:9200
https://amraelp00006208.COMPANY.com:9200
https://amraelp00006209.COMPANY.com:9200
Hawtio
http://amraelp00006207.COMPANY.com:9999/hawtio/#/login
http://amraelp00006208.COMPANY.com:9999/hawtio/#/login
http://amraelp00006209.COMPANY.com:9999/hawtio/#/login
" }, { "title": "Components", "pageID": "164469881", "pageLink": "/display/GMDM/Components", "content": "" }, { "title": "Apache Airflow", "pageID": "164469951", "pageLink": "/display/GMDM/Apache+Airflow", "content": "

Description

Airflow is a platform created by Apache and designed to schedule workflows called DAGs.

Airflow docs:

https://airflow.apache.org/docs/apache-airflow/stable/index.html

We run Airflow on Kubernetes, deployed with the official Airflow Helm chart: https://airflow.apache.org/docs/helm-chart/stable/index.html

In this architecture Airflow consists of 3 main components: the scheduler, the webserver, and the workers.

Interfaces

Flows

Flows are configured in the mdm-hub-cluster-env repository, in ansible/inventory/${environment}/group_vars/gw-airflow-services/${dag_name}.yaml files.

The flows in use are described in the dags list.
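For orientation, a DAG is just a Python file that the scheduler picks up. A minimal, generic sketch (not one of the HUB dags, whose definitions live in the repository above):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A trivial daily workflow with two dependent tasks.
with DAG(
    dag_id="example_dag",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    load = BashOperator(task_id="load", bash_command="echo load")
    extract >> load  # 'load' runs only after 'extract' succeeds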


" }, { "title": "API Gateway", "pageID": "164469910", "pageLink": "/display/GMDM/API+Gateway", "content": "

Description

Kong (API Gateway) is the component used as the gateway for all API requests in the MDM HUB. It exposes only one URL to external clients, which means that all internal docker containers are secured and cannot be accessed directly; this allows all network traffic to be tracked in one place. Kong is the router that redirects requests to specific services using configured routes. Kong also ships multiple additional plugins; these plugins are attached to specific services and add security (Key-Auth, OAuth 2.0, OAuth2-External) or user management. Only users authorized in Kong are allowed to execute specific operations in the HUB.

Flows


Interface Name | Type | Endpoint pattern | Description
Admin API | REST API | GET http://localhost:8001/ | Internal, secured port available only inside the docker container; used by Kong to manage existing services, routes, plugins, consumers and certificates
External API | REST API | GET https://localhost:8443/ | External, secured port exposed to the ELB and accessed by clients
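The Admin API is only reachable from inside the container network, but the endpoints themselves are the standard Kong collections. A small sketch, assuming a shell inside the Kong container or a port-forward to 8001:

import requests

ADMIN = "http://localhost:8001"  # Kong Admin API, internal only

# Standard Kong Admin API collections: services, routes, plugins, consumers.
for resource in ("services", "routes", "plugins", "consumers"):
    data = requests.get(f"{ADMIN}/{resource}", timeout=10).json()
    names = [item.get("name") or item.get("username") for item in data["data"]]
    print(resource, "->", names)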

Dependent components


Component | Interface | Flow | Description
Cassandra - kong_kong-database_1 | TCP internal docker communication | N/A | Kong configuration database
HUB Microservices | REST internal docker communication | N/A | The route to all HUB microservices, required to expose the API to external clients

Configuration

Kong configuration is divided into 5 sections: consumers, certificates, services, routes, and plugins.

Config Parameter:
- snowflake_api_user:
    create_or_update: False
    vars:
      username: snowflake_api_user
      plugins:
        - name: key-auth
          parameters:
            key: "{{ secret_kong_consumers.snowflake_api_user.key_auth.key }}"
Default value: N/A
Description: Configuration for a user with key-auth authentication - used only for the technical service users. All External OAuth2 users are configured in section 4 (Routes).

Config Parameter:
- gbl_mdm_hub_us_nprod:
    create_or_update: False
    vars:
      cert: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/gbl-mdm-hub-us-nprod.COMPANY.com.pem') }}"
      key: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/gbl-mdm-hub-us-nprod.key') }}"
      snis:
        - "gbl-mdm-hub-us-nprod.COMPANY.com"
        - "amraelp00007335.COMPANY.com"
        - "10.12.209.27"
Default value: N/A
Description: Configuration of the SSL certificate in Kong.
Config Parameter:
kong_services:
  - create_or_update: False
    vars:
      name: "{{ kong_env }}-manager-service"
      url: "http://{{ kong_env }}mdmsrv_mdm-manager_1:8081"
      connect_timeout: 120000
      write_timeout: 120000
      read_timeout: 120000
Default value: N/A
Description: Kong Service - the main part of the configuration; it connects Kong internally with a Docker container. Kong allows configuring multiple services, each with multiple routes and plugins.

Config Parameter:
- create_or_update: False
  vars:
    name: "{{ kong_env }}-manager-ext-int-api-oauth-route"
    service: "{{ kong_env }}-manager-service"
    paths: [ "/{{ kong_env }}-ext" ]
    methods: [ "GET", "POST", "PATCH", "DELETE" ]
Default value: N/A
Description: Exposes the route to the service. Clients using the ELB have to add this path to the API invocation to access the specified service. The "-ext" suffix marks an API that uses the External OAuth 2.0 plugin connected to PingFederate. The methods list configures which HTTP methods the user is allowed to invoke.

Config Parameter:
- create_or_update: False
  vars:
    name: key-auth
    route: "{{ kong_env }}-manager-int-api-route"
    config:
      hide_credentials: true
Default value: N/A
Description: The "key-auth" plugin, used for internal or technical users that authenticate with a security key.

Config Parameter:
- create_or_update: False
  vars:
    name: mdm-external-oauth
    route: "{{ kong_env }}-manager-ext-int-api-oauth-route"
    config:
      introspection_url: "https://devfederate.COMPANY.com/as/introspect.oauth2"
      authorization_value: "{{ devfederate.secret_oauth2_authorization_value }}"
      hide_credentials: true
      users_map:
        - "e2a6de9c38be44f4a3c1b53f50218cf7:engage"
Default value: N/A
Description: The "mdm-external-oauth" plugin is a customized plugin used for all external clients that authenticate with tokens generated in PingFederate. The configuration contains introspection_url - the Ping API for token verification. The most important part of this configuration is the users_map: the key is the PingFederate user, the value is the HUB user configured in the services.
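Conceptually, the plugin introspects the incoming token at introspection_url and then translates the PingFederate client to a HUB user via users_map. Roughly (a Python sketch of the idea, not the plugin's actual source):

# users_map from the plugin config: PingFederate client id -> HUB user name.
USERS_MAP = {"e2a6de9c38be44f4a3c1b53f50218cf7": "engage"}

def resolve_hub_user(introspection: dict):
    """Given the introspection response for a token, return the HUB user
    that downstream services will see (e.g. in X-Consumer-Username)."""
    if not introspection.get("active"):
        return None  # token rejected by PingFederate
    return USERS_MAP.get(introspection.get("client_id"))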

" }, { "title": "API Router", "pageID": "196877505", "pageLink": "/display/GMDM/API+Router", "content": "

Description

The API Router component is responsible for routing requests to regional MDM Hub services. The application exposes a REST API to call MDM Hub services from different regions simultaneously. The component provides a centralized authorization and authentication service and a transaction log feature. The API Router uses the http4k library, a lightweight HTTP toolkit written in Kotlin that enables serving and consuming HTTP services in a functional and consistent way.



Request flow

\"\"

Component

Description

Authentication service

authenticates user by x-consumer-username header

Request enricher

detects request sources, countries and role

Authorization service

authorizes user permissions to role, countries and sources

Service caller

calls MDM Hub services, retrying up to 3 times in case of an exception. Requests are routed to the appropriate MDM services based on the countries parameter; if a request contains countries from multiple regions, the different regional services are called; if it contains no countries, the default user or application country is used

Service response transformer and filter

transforms and/or filters service responses (e.g. data anonymization) depending on the defined request and/or response filtration parameters (e.g. header, http method, path)

Response composer

composes responses from services, if multiple services responded, the response is concatenated


Request enrichment



Method | sources | countries | role
create hco | request body crosswalk attribute, only one allowed | request body Country attribute, only one allowed | CREATE_HCO
update hco | request body crosswalk attribute, only one allowed | request body Country attribute, only one allowed | UPDATE_HCO
batch create hco | request body crosswalk attributes, required at least one | request body Country attribute, only one allowed | CREATE_HCO
batch update hco | request body crosswalk attributes, required at least one | request body Country attribute, only one allowed | UPDATE_HCO
create hcp | request body crosswalk attribute, only one allowed | request body Country attribute, only one allowed | CREATE_HCP
update hcp | request body crosswalk attribute, only one allowed | request body Country attribute, only one allowed | UPDATE_HCP
batch create hcp | request body crosswalk attributes, required at least one | request body Country attribute, only one allowed | CREATE_HCP
batch update hcp | request body crosswalk attributes, required at least one | request body Country attribute, only one allowed | UPDATE_HCP
create mco | request body crosswalk attribute, only one allowed | request body Country attribute, only one allowed | CREATE_MCO
update mco | request body crosswalk attribute, only one allowed | request body Country attribute, only one allowed | UPDATE_MCO
batch create mco | request body crosswalk attributes, required at least one | request body Country attribute, only one allowed | CREATE_MCO
batch update mco | request body crosswalk attributes, required at least one | request body Country attribute, only one allowed | UPDATE_MCO
create entity | request body crosswalk attribute, only one allowed | request body Country attribute, only one allowed | CREATE_ENTITY
update entity | request body crosswalk attribute, only one allowed | request body Country attribute, only one allowed | UPDATE_ENTITY
get entities by uris | sources not allowed | request param Country attribute, 0 or more allowed | GET_ENTITIES
get entity by uri | sources not allowed | request param Country attribute, 0 or more allowed | GET_ENTITIES
delete entity by crosswalk | type query param, required at least one | request param Country attribute, 0 or more allowed | DELETE_CROSSWALK
get entity matches | sources not allowed | request param Country attribute, 0 or more allowed | GET_ENTITY_MATCHES
create relation | request body crosswalk attributes, required at least one | request param Country attribute, 0 or more allowed | CREATE_RELATION
batch create relation | request body crosswalk attributes, required at least one | request param Country attribute, 0 or more allowed | CREATE_RELATION
get relation by uri | sources not allowed | request param Country attribute, 0 or more allowed | GET_RELATION
delete relation by crosswalk | type query param, required at least one | request param Country attribute, 0 or more allowed | DELETE_CROSSWALK
get lookups | sources not allowed | request param Country attribute, 0 or more allowed | LOOKUPS
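Conceptually, the Service caller fans a request out to one service per region and the Response composer concatenates the results. A simplified Python sketch of that routing logic; the zone names and URLs are illustrative only, not the component's actual code:

import requests

# Illustrative country -> regional zone mapping (made-up URLs).
ZONES = {
    "US": "https://us-hub.example/api",
    "DE": "https://emea-hub.example/api",
    "FR": "https://emea-hub.example/api",
}

def fetch(url):
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()

def call_with_retry(url, attempts=3):
    # The Service caller tries up to 3 times in case of an exception.
    for i in range(attempts):
        try:
            return fetch(url)
        except Exception:
            if i == attempts - 1:
                raise

def route(countries, default_country="US"):
    # No countries -> fall back to the default country, as described above.
    targets = {ZONES[c] for c in (countries or [default_country])}
    results = []
    for url in targets:                       # fan out, one call per region
        results.extend(call_with_retry(url))  # assumes each zone returns a JSON list
    return results                            # Response composer: concatenation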


Configuration

Config parameter | Description
defaultCountry | default application instance country
users | users configuration, listed below
zones | zones configuration, listed below
responseTransform | response transformation definitions, explained below

User configuration


Config parameter | Description
name | user name
description | user description
roles | allowed user roles
countries | allowed user countries
sources | allowed user sources
defaultCountry | user default country


Zone configuration

Config parameter | Description
url | mdm service url
userName | mdm service user name
logMessages | flag indicating that mdm service messages should be logged
timeoutMs | mdm service request timeout

Response transformation configuration

Config parameter | Description
filters | request and response filter configuration
map | response body JSLT transformation definitions

Filters configuration

Config parameter | Description
request | request filter configuration
response | response filter configuration

Request filter configuration

Config parameter | Description
method | HTTP method
path | API REST call path
headers | list of HTTP headers with name and value parameters

Response filter configuration

Config parameter | Description
body | response body JSLT transformation definition

Example configuration of response transformation

API router configuration
responseTransform:
  - filters:
      request:
        method: GET
        path: /entities.*
        headers:
          - name: X-Consumer-Username
            value: mdm_test_user
      response:
        body:
          jstl.content: |
            contains(true, [for (.crosswalks) .type == "configuration/sources/HUB_CALLBACK"])
    map:
      - jstl.content: |
          .crosswalks
      - jstl.content: |
          .

" }, { "title": "Batch Service", "pageID": "164469936", "pageLink": "/display/GMDM/Batch+Service", "content": "

Description

The batch-service component is responsible for managing batch loads to MDM systems. It exposes the REST API that clients use to create a new instance of a batch and upload data. The component manages the batch instances and stages, processes the data, and gathers acknowledgement (ACK) responses from the Manager component. Batch service stores data in two collections: batchInstance, which stores all instances of batches and the statistics gathered during a load, and batchEntityProcessStatus, which stores metadata about all objects loaded through all batches. These two collections are required to manage and process the data, run the checksum deduplication process, mark entities as processed after the ACK from Reltio, and soft-delete entities in the case of full-file loads.

The component performs its operations asynchronously, using Kafka topics as the stages for each part of the load.
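The deduplication idea behind batchEntityProcessStatus: the service stores a checksum per loaded object and skips re-sending records whose payload has not changed since the previous batch. A conceptual Python sketch; the collection is replaced by a dict and the field names are illustrative, not the actual schema:

import hashlib
import json

# In reality this state lives in the batchEntityProcessStatus Mongo collection;
# a plain dict stands in for it here.
seen_checksums = {}

def should_send(entity_id: str, payload: dict) -> bool:
    """Return True only if the payload changed since the last batch."""
    checksum = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    if seen_checksums.get(entity_id) == checksum:
        return False          # unchanged -> deduplicated, not sent to MDM
    seen_checksums[entity_id] = checksum
    return True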

Flows

Exposed interfaces

Batch Controller - manage batch instances

Interface Name | Type | Endpoint pattern | Description
Create a new instance for the specific batch | REST API | POST /batchController/{batchName}/instances | Creates a new instance of the specific batch. Returns a Batch object with a generated ID that has to be used in all the requests below. Based on the ID, the client is able to check the status or load data using this instance. It is not possible to start a new batch instance until the previous one has completed.
Get batch instance details | REST API | GET /batchController/{batchName}/instances/{batchInstanceId} | Returns current details about the specific batch instance: an object with all stages, statuses, and statistics.
Initialize the stage, or complete the stage and save statistics in the cache | REST API | POST /batchController/{batchName}/instances/{batchInstanceId}/stages/{stageName} | Creates or updates the specific stage in the batch. Clients can do two things with this operation: 1. initialize and start the stage before loading the data (in that case the request body should be empty); 2. update and complete the stage after loading the data (in that case the body should contain the stage name and statistics). Clients have permission to update only "Loading" stages; the next stages are managed by the internal batch-service processes.
Initialize multiple stages, or complete the stages and save statistics in the cache | REST API | POST /batchController/{batchName}/instances/{batchInstanceId}/stages | Similar to the single-stage management operation, but allows managing multiple stages in one request.
Remove the specific batch instance from the cache | REST API | DELETE /batchController/{batchName}/instances/{batchInstanceId} | Additional service operation used to delete batch instances from the cache. The permission for this operation is not exposed to external clients; it is used only by the HUB support team.
Clear cache (clear objects from the batchEntityProcessStatus collection, which stores object metadata and is used in the deduplication logic) | REST API | GET /batchController/{batchName}/_clearCache with headers objectType: ENTITY/RELATION and entityType: e.g. configuration/entityTypes/HCP | Additional service operation used to clear the cache for the specific batch. The user can provide additional parameters to specify what type of objects should be removed from the cache. The operation is used by clients after executing smoke tests on PROD and during testing on DEV environments; it allows clearing the cache after a load to avoid data deduplication during the next load.

Bulk Service - load data using previously created batch instances

Interface Name | Type | Endpoint pattern | Description
Load multiple entities using the create operation | REST API | POST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities | Used once the user has created a new batch instance and initialized the "Loading" stage; at that moment the client is able to load entities to the MDM system. The operation accepts a bulk of entities and loads the data to a Kafka topic. With POST, the standard create operation is used.
Load multiple entities using the partial override operation | REST API | PATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities | Similar to the above; PATCH forces the partialOverride operation.
Load multiple relations using the create operation | REST API | POST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/relations | Similar to the above; with the /relations suffix in the URI, clients are able to create relation objects in MDM.
Load multiple tags using the PATCH operation - append | REST API | PATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tags | Used once the user has created a new batch instance and initialized the "Loading" stage. Accepts a bulk of tags and loads the data to a Kafka topic. With PATCH, the standard append operation is used, so all tags in the input array are added to the specified profile in MDM.
Load multiple tags using the DELETE operation - removal | REST API | DELETE /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tags | Similar to the above; DELETE removes the selected tags from the MDM system.
Load multiple merge requests using POST; this results in a merge between two entities | REST API | POST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_merge | Used once the user has created a new batch instance and initialized the "Loading" stage. Accepts a bulk of merge requests, each resulting in a merge of the two entities specified in the request, and loads them to a Kafka topic.
Load multiple unmerge requests using POST; this results in an unmerge between two entities | REST API | POST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_unmerge | Used once the user has created a new batch instance and initialized the "Loading" stage. Accepts a bulk of unmerge requests, each resulting in an unmerge of the two entities specified in the request, and loads them to a Kafka topic.
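End to end, a client drives the two controllers like this: create an instance, open the Loading stage, push bulk entities, then close the stage. A Python sketch against the endpoints above; the base URL, the auth header, and the payload shapes are placeholders, not confirmed contracts:

import requests

BASE = "https://<gateway>/<route>"   # placeholder base URL behind Kong
HEADERS = {"apikey": "<key>"}        # key-auth placeholder
BATCH = "ONEKEY"

# 1. Create a new batch instance.
inst = requests.post(f"{BASE}/batchController/{BATCH}/instances",
                     headers=HEADERS).json()
instance_id = inst["id"]             # assumed field name for the generated ID

# 2. Initialize the Loading stage (an empty body starts the stage).
requests.post(f"{BASE}/batchController/{BATCH}/instances/{instance_id}"
              f"/stages/HCOLoading", headers=HEADERS)

# 3. Upload entities in bulks (payload shape is illustrative).
entities = [{"type": "configuration/entityTypes/HCO", "attributes": {}}]
requests.post(f"{BASE}/bulkService/{BATCH}/instances/{instance_id}"
              f"/stages/HCOLoading/entities", json=entities, headers=HEADERS)

# 4. Complete the stage with statistics; later stages run automatically.
requests.post(f"{BASE}/batchController/{BATCH}/instances/{instance_id}"
              f"/stages/HCOLoading",
              json={"stageName": "HCOLoading", "statistics": {"loaded": 1}},
              headers=HEADERS)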

Dependent components


Component | Interface | Flow | Description
Manager | AsyncMDMManagementServiceRoute | EntitiesCreate | Processes bulk objects with entities and creates the HCP/HCO/MCO in MDM. Returns an asynchronous ACK response.
Manager | AsyncMDMManagementServiceRoute | EntitiesUpdate | Processes entities and updates the HCP/HCO/MCO in MDM using the partialOverride property. Returns an asynchronous ACK response.
Manager | AsyncMDMManagementServiceRoute | RelationsCreate | Processes bulk objects with relations and creates the relations in MDM. Returns an asynchronous ACK response.
Hub Store | Mongo connection | N/A | Stores cache data in a Mongo collection.

Configuration

Batch Workflows configuration, main config for all Batches and Stages

Config Parameter | Description
batchWorkflows:
  - batchName: "ONEKEY"
    batchDescription: "ONEKEY - HCO and HCP entities and relations loading"
    stages:
      - stageName: "HCOLoading"

The main part of the batches configuration. Each batch has to contain:

batchName - the name of the specific batch, used in the API request.

batchDescription - additional description for the specific batch.

stages - the list of dependent stages arranged in the execution sequence.

This configuration presents the workflow for the specific batch; the administrator can set up these stages in the order required by the batch and the client requirements.

The main assumptions:

  1. The "Loading" Stage is the first one always.
  2. The "Sending" Stage is dependent on the "Loading" stage
  3. The "Processing" Stage is dependent on the "Sending" stage.

There is the possibility to add 2 additional optional stages:

  1. "EntitiesUnseenDeletion" - used only once the full file is loaded and the soft-delete process is required
  2. "HCODeletesProcessing" - process soft-deleted objects to check if all ACKs were received. 

Available jobs:

  1. SendingJob
  2. ProcessingJob
  3. DeletingJob
  4. DeletingRelationJob

It is possible to set up different stage names but the assumption is to reuse the existing names to keep consistency.

Stages can depend on each other in two ways:

  1. softDependentStages - allows the next stage to start immediately after the dependent one has started. Used in the Sending stages to immediately send data to the Manager.
  2. dependentStages - hard-dependent stages; this blocks the start of a stage until the previous one has ended.
- stageName: "HCOSending"
softDependentStages: ["HCOLoading"]
processingJobName: "SendingJob"
Example configuration of Sending stage dependent from the Loading stage. In this stage, data is taken from the stage Kafka Topics and published to the Manager component for further processing
- stageName: "HCOProcessing"
dependentStages: ["HCOSending"]
processingJobName: "ProcessingJob"
Example configuration of the Processing stage. This stage starts once the Sending JOB is completed. It uses the batchEntityProcessStatus collection to check if all ACK responses were received from MDM. 
- stageName: "RelationLoading"
- stageName: "RelationSending"
dependentStages: [ "HCOProcessing"]
softDependentStages: ["RelationLoading"]
processingJobName: "SendingJob"
- stageName: "RelationProcessing"
dependentStages: [ "RelationSending" ]
processingJobName: "ProcessingJob"
The full example configuration for the Relation loading, sending, and processing stages.
- stageName: "EntitiesUnseenDeletion"
dependentStages: ["RelationProcessing"]
processingJobName: "DeletingJob"
- stageName: "HCODeletesProcessing"
dependentStages: ["EntitiesUnseenDeletion"]
processingJobName: "ProcessingJob"
Configuration for entities. The example configuration that is used in the full files. It is triggered at the end of the Workflow and checks the data that should be removed. 
- stageName: "RelationsUnseenDeletion"
dependentStages: ["HCODeletesProcessing"]
processingJobName: "DeletingRelationJob"
- stageName: "RelationDeletesProcessing"
dependentStages: ["RelationsUnseenDeletion"]
processingJobName: "ProcessingJob"
Configuration for relations. The example configuration that is used in the full files. It is triggered at the end of the Workflow and checks the data that should be removed. 

Loading stage configuration for Entities and Relations BULK load through API request

Config Parameter | Description
bulkConfiguration:
  destinations:
    "ONEKEY":
      HCPLoading:
        bulkLimit: 25
        destination:
          topic: "{{ env_local_name }}-internal-batch-onekey-hcp"

The configuration contains the following:

destinations - list of batches and kafka topics on which data should be loaded from REST API to Kafka Topics.

"ONEKEY" - batch name

HCPLoading - specific configuration for loading stage

bulkLimit - limit of entities/relations in one API call

destination.topic - target topic name
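Clients must respect bulkLimit when posting to a Loading stage; a simple way to do that is to chunk the extract before calling the bulk API. A sketch; 25 matches the ONEKEY HCPLoading example above:

def bulks(items, bulk_limit=25):
    """Split a load into packets no larger than bulkLimit entities/relations."""
    for i in range(0, len(items), bulk_limit):
        yield items[i:i + bulk_limit]

# e.g. for bulk in bulks(entities): POST the bulk to .../stages/HCPLoading/entities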

Sending stage configuration for Sending Entities and Relations to MDM Async API (Reltio)

Config Parameter | Default value | Description
sendingJob.numberOfRetriesOnError | 3 | Number of retries once an exception occurs during Kafka event publishing
sendingJob.pauseBetweenRetriesSecs | 30 | Number of seconds to wait before the next retry
sendingJob.idleTimeWhenProcessingEndsSec | 60 | Number of seconds to wait for new events before completing the Sending job
sendingJob.threadPoolSize | 2 | Number of threads used by the Kafka producer
    "ONEKEY":
HCPSending:
source:
topic: "{{ env_local_name }}-internal-batch-onekey-hcp"
bulkSending: false
bulkPacketSize: 10
reltioRequestTopic: "{{ env_local_name }}-internal-async-all-onekey"
reltioReponseTopic: "{{ env_local_name }}-internal-async-all-onekey-ack"

The specific configuration for Sending Stage

"ONEKEY" - batch name

HCPSending - specific configuration for sending stage

source.topic- source topic name from which data is consumed

bulkSending - by default false (bundling is implemented and managed in Manager client, currently there is no need to bundle the events on client-side)

bulkPacketSize - optionally once bulkSending is true, batch-service is able to bundle the requests. 

reltioRequestTopic- processing requests in manager

reltioReponseTopic - processing ACK in batch-service

Processing stage config for checking processing entities status in MDM Async API (Reltio) - check ACK collector

Config ParameterDefault valueDescription
processingJob.pauseBetweenQueriesSecs:
60Interval in which Cache is cached if all ACK were received.

Entities/Relations UnseenDeletion Job config for Reltio Request Topic and Max Deletes Limit for entities soft Delete.

Config Parameter | Default value | Description
deletingJob."Symphony"."EntitiesUnseenDeletion" | | The specific configuration for the Deleting stage: "Symphony" is the batch name, "EntitiesUnseenDeletion" the specific configuration for the soft-delete stage
maxDeletesLimit | 100 | A safety switch in case we get a corrupted file (empty or partial); it prevents deleting all profiles in Reltio in such cases
queryBatchSize | 10 | The number of entities/relations downloaded from the cache in one call
reltioRequestTopic | "{{ env_local_name }}-internal-async-all-symphony" | target topic - processing requests in Manager
reltioResponseTopic | "{{ env_local_name }}-internal-async-all-symphony-ack" | ack topic - processing ACKs in batch-service
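The safety switch is a simple guard: if an incoming full file would soft-delete more objects than maxDeletesLimit, the deleting job refuses to proceed instead of wiping the tenant. Schematically (a sketch, not the service's actual code):

def guard_deletes(unseen_ids, max_deletes_limit=100):
    """Abort the deleting job if a (possibly corrupted) full file would
    soft-delete more profiles than the configured safety limit."""
    if len(unseen_ids) > max_deletes_limit:
        raise RuntimeError(
            f"{len(unseen_ids)} deletes requested, limit is "
            f"{max_deletes_limit}; refusing to proceed (corrupted file?)"
        )
    return unseen_ids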

Users

Config Parameter | Description
- name: "mdmetl_nprod"
  description: "MDMETL Informatica IICS User - BATCH loader"
  defaultClient: "ReltioAll"
  roles:
    - "CREATE_HCP"
    - "CREATE_HCO"
    - "CREATE_MCO"
    - "CREATE_BATCH"
    - "GET_BATCH"
    - "MANAGE_STAGE"
    - "CLEAR_CACHE_BATCH"
  countries:
    - US
  sources:
    - "SHS"
  ...
  batches:
    "Symphony":
      - "HCPLoading"

The example ETL user configuration. The configuration is divided into the following sections:


  1. roles - available roles to create specific objects and manage batch instances
  2. countries - list of countries that user is allowed to load
  3. sources - list of sources that user is allowed to load
  4. batches - list of batch names with corresponding stages. In general external users are able to create/edit Loading stages only.

Connections

Config Parameter | Description
mongo.url: "mongodb://mdm_batch_service:{{ mongo.users.mdm_batch_service.password }}@{{ mongo.springURL }}/{{ mongo.dbName }}" | Full Mongo DB URL
mongo.dbName: "{{ mongo.dbName }}" | Mongo database name
kafka.servers: "{{ kafka.servers }}" | Kafka hostname
kafka.groupId: "batch_service_{{ env_local_name }}" | Batch Service component group name
kafka.saslMechanism: "{{ kafka.saslMechanism }}" | SASL configuration
kafka.securityProtocol: "{{ kafka.securityProtocol }}" | Security protocol
kafka.sslTruststoreLocation: /opt/mdm-gw-batch-service/config/kafka_truststore.jks | SSL truststore file location
kafka.sslTruststorePassword: "{{ kafka.sslTruststorePassword }}" | SSL truststore file password
kafka.username: batch_service | Kafka username
kafka.password: "{{ hub_broker_users.batch_service }}" | Kafka dedicated user password
kafka.sslEndpointAlgorithm: | SSL endpoint identification algorithm

Advanced Kafka configuration (do not edit if not required)

Config Parameter
spring:
  kafka:
    properties:
      sasl:
        mechanism: ${kafka.saslMechanism}
      security:
        protocol: ${kafka.securityProtocol}
      ssl.endpoint.identification.algorithm:

    consumer:
      properties:
        max.poll.interval.ms: 600000
      bootstrap-servers:
        - ${kafka.servers}
      groupId: ${kafka.groupId}
      auto-offset-reset: earliest
      max-poll-records: 50
      fetch-max-wait: 1s
      fetch-min-size: 512000
      enable-auto-commit: false
      ssl:
        trustStoreLocation: file:${kafka.sslTruststoreLocation}
        trustStorePassword: ${kafka.sslTruststorePassword}

    producer:
      bootstrap-servers:
        - ${kafka.servers}
      groupId: ${kafka.groupId}
      auto-offset-reset: earliest
      ssl:
        trustStoreLocation: file:${kafka.sslTruststoreLocation}
        trustStorePassword: ${kafka.sslTruststorePassword}

    streams:
      bootstrap-servers:
        - ${kafka.servers}
      applicationId: ${kafka.groupId}_ack  # for Kafka Streams, the application ID has to differ from the Kafka consumer group ID
      clientId: batch_service_ID
      stateDir: /tmp
      # num-stream-threads: 1 - default 1
      ssl:
        trustStoreLocation: file:${kafka.sslTruststoreLocation}
        trustStorePassword: ${kafka.sslTruststorePassword}

Additional config (do not edit if not required)

Config Parameter
server.port: 8083

management.endpoint.shutdown.enabled=false:
management.endpoints.web.exposure.include: prometheus, health, info
spring.main.allow-bean-definition-overriding: true
camel.springboot.main-run-controller: True
camel:
component:
metrics:
metric-registry=prometheusMeterRegistry:

server:
use-forward-headers: true
forward-headers-strategy: FRAMEWORK
springdoc:
swagger-ui:
disable-swagger-default-url: True

restService:
#service port - do not change if it run in docker container
port: 8082
schedulerTreadCount: 5


" }, { "title": "Callback Delay Service", "pageID": "322536130", "pageLink": "/display/GMDM/Callback+Delay+Service", "content": "

Description

The application consists of two streams - precallback and postcallback. When the precallback stream detects the need to change the ranking for a given relationship, it generates an event to the post callback stream. The post callback stream collects events in the time window for a given key and processes the last one. This allows you to avoid updating the rankings multiple times when loading relations using batch.

Responsible for following transformations:

Applies transformations to the Kafka input stream producing the Kafka output stream.


Flows

Exposed interfaces

PreCallbackDelay Stream -(rankings)

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA${env}-internal-reltio-full-delay-eventsEvents processed by the precallback service
output  - callbacksKAFKA${env}-internal-reltio-proc-events

Result events processed by the precallback delay service

output - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processing

Dependent components


ComponentInterfaceFlowDescription

Manager

AsyncMDMManagementServiceRouteRelationshipAttributesUpdateUpdate relationship attributes in asynchronous mode
Hub StoreMongo connectionN/AGet mongodb stored relation data when Kafka cache is empty.


Configuration

Main Configuration


Default valueDescription
kafka.groupId${env}-precallback-delay-serviceThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"
kafkaOther.num.stream.threads10Number of threads used in the Kafka Stream
kafkaOther.default.deserialization.exception.handler

com.COMPANY.mdm.common.streams.

StructuredLogAndContinueExceptionHandler

Deserialization exception handler
kafkaOther.max.poll.interval.ms3600000Number of milliseconds to wait max time before next poll of events
kafkaOther.max.request.size2097152Events message size


CallbackWithDelay Stream -(rankings)

Config Parameter

Default value

Description

preCallbackDelay.eventInputTopic${env}-internal-reltio-full-delay-eventsinput topic
preCallbackDelay.eventDelayTopic${env}-internal-reltio-full-callback-delay-eventsdelay stream input topic, when the precallback stream detects the need to modify ranks for a given relationship group, it produces an event for this topic. Events for a given key are aggregated in a time window
preCallbackDelay.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for events
preCallbackDelay.internalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput topic for callbacks
preCallbackDelay.relationDataStore.storeName${env}-relation-data-storeRelation data cache store name
preCallbackDelay.rankCallback.featureActivationtrueParameter used to enable/disable the Rank feature
preCallbackDelay.rankCallback.callbackSourceHUB_CALLBACKCrosswalk used to update Reltio with Rank attributes
preCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.namewith-delay-raw-relation-checksum-dedupe-storetopic name that store rawRelation MD5 checksum - used in rank callback deduplication
preCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.retentionPeriod1hstore retention period
preCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.windowSize10mstore window size
preCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.nameattribute-changes-checksum-dedupe-storetopic name that store attribute changes MD5 checksum - used in rank callback deduplication
preCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.retentionPeriod1hstore retention period
preCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.windowSize10mstore window size
preCallbackDelay.rankCallback.activeCallbacksOtherHCOtoHCOAffiliationsDelayCallbackList of Ranker to be activated
preCallbackDelay.rankTransform.featureActivationtrueParaemter defines in the Rank feature should be activated.
preCallbackDelay.rankTransform.activationFilter.activeRankSorterOtherHCOtoHCOAffiliationsDelayRankSorterRank sorter names
preCallbackDelay.rankTransform.rankSortOrder.affiliationN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

 OtherHCOtoHCOAffiliations RankSorter

deduplication

Post callback stream ddeduplication config

deduplication.pingInterval1m

Post callback stream ping inverval

deduplication.duration1h

Post callback stream window duration

deduplication.gracePeriod0s

Post callback stream deduplication grace period

deduplication.byteLimit122869944

Post callback stream deduplication byte limit

deduplication.suppressNamecallback-rank-delay-suppress

Post callback stream deduplication suppress name

deduplication.namecallback-rank-delay-suppress

Post callback stream deduplication name

deduplication.storeNamecallback-rank-delay-suppress-deduplication-store

Post callback stream deduplication store name

Rank sort order config:

The component allows you to set different sorting (ranking) configurations depending on the country of the relationship. Relations for selected countries are sorted based on the rankExecutionOrder configuration - in the order of the items on the list. The following sorters are available:

Sample rankSortOrder confiugration:

rankSortOrder:
affiliation:
config:
- countries:
- AU
- NZ
rankExecutionOrder:
- type: ACTIVE
- type: ATTRIBUTE
attributeName: RelationType/RelationshipDescription
lookupCode: true
order:
REL.HIE: 1
REL.MAI: 2
REL.FPA: 3
REL.BNG: 4
REL.BUY: 5
REL.PHN: 6
REL.GPR: 7
REL.MBR: 8
REL.REM: 9
REL.GPSS: 10
REL.WPC: 11
REL.WPIC: 12
REL.DOU: 13
- type: SOURCE
order:
Reltio: 1
ONEKEY: 2
JPDWH: 3
SAP: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
GRV: 9
GCP: 10
SSE: 11
PCMS: 12
PTRS: 13
- type: LUD

" }, { "title": "Callback Service", "pageID": "164469913", "pageLink": "/display/GMDM/Callback+Service", "content": "

Description

Responsible for following transformations:

Applies transformations to the Kafka input stream producing the Kafka output stream.


Flows

Exposed interfaces

PreCallback Stream -(rankings)

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-reltio-full-events
Events enriched by the EntityEnricher component. Full JSON data
output  - callbacksKAFKA
${env}-internal-reltio-proc-events

Events that are already processed by the precallback services (contains updated Ranks and Reltio callback is also processed)

output - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processing

HCO Names

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-callback-hconame-in
events being sent by the event publisher component. Event types being considered:  HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED
callback outputKAFKA
${env}-internal-hconames-rel-create

Relation Create requests sent to Manager component for asynchronous processing

Danging Affiliations

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-callback-orphanClean-in
events being sent by the event publisher component. Event types being considered:  'HCP_REMOVED', 'HCO_REMOVED', 'MCO_REMOVED', 'HCP_INACTIVATED', 'HCO_INACTIVATED', 'MCO_INACTIVATED'
callback outputKAFKA
${env}-internal-async-all-orphanClean

Relation Update (soft-delete) requests sent to Manager component for asynchronous processing

Crosswalk Cleaner

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-callback-cleaner-in
events being sent by the event publisher component. Event types being considered: 'HCO_CHANGED', 'HCP_CHANGED', 'MCO_CHANGED', 'RELATIONSHIP_CHANGED'
callback outputKAFKA
${env}-internal-async-all-cleaner-callbacks

Delete Crosswalk or Soft-Delete requests sent to Manager component for asynchronous processing


NotMatch callback (clean potential match queue)

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
${env}-internal-callback-potentialMatchCleaner-in
events being sent by the event publisher component. Event types being considered:  'RELATIONSHIP_CHANGED', 'RELATIONSHIP_CREATED'
callback outputKAFKA
${env}-internal-async-all-notmatch-callbacks

NotMatch requests sent to Manager component for asynchronous processing



Dependent components


ComponentInterfaceFlowDescription

Manager

MDMIntegrationService


GetEntitiesByUrisRetrieve multiple entities by providing the list of entities URIS
AsyncMDMManagementServiceRouteRelationshipUpdateUpdate relationship object in asynchronous mode
EntitiesUpdateUpdate entity object in asynchronous mode - set soft-delete
CrosswalkDeleteRemove Crosswalk from entity/relation in asynchronous mode
NotMatchSet Not a Match between two  entities
Hub StoreMongo connectionN/AStore cache data in mongo collection


Configuration

Main Configuration


Default valueDescription
kafka.groupId
${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"
kafkaOther.num.stream.threads
10Number of threads used in the Kafka Stream
kafkaOther.default.deserialization.exception.handler
com.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handler
kafkaOther.max.poll.interval.ms
3600000Number of milliseconds to wait max time before next poll of events
kafkaOther.max.request.size
2097152Events message size
gateway.apiKey
${gateway.apiKey}API key used in the communication to Manager
gateway.logMessages
falseParameter used to turn on/off logging the payload
gateway.url
${gateway.url}Manager URL
gateway.userName
${gateway.userName}Manager user name

HCO Names

Config Parameter

Default value

Description

callback.hconames.eventInputTopic
${env}-internal-callback-hconame-ininput topic
callback.hconames.HCPCalculateStageTopic
${env}-internal-callback-hconame-hcp4calcinternal topic
callback.hconames.intAsyncHCONames
${env}-internal-hconames-rel-createoutput topic
callback.hconames.deduplicationWindowDuration
10The size of the windows in milliseconds
callback.hconames.deduplicationWindowGracePeriod
10sThe grace period to admit out-of-order events to a window.
callback.hconames.dedupStoreName
hco-name-dedupe-storededuplication topic name
callback.hconames.acceptedEntityEventTypes
HCO_CREATED, HCO_CHANGEDaccepted events types for entity objects
callback.hconames.acceptedRelationEventTypes
RELATIONSHIP_CREATED, RELATIONSHIP_CHANGEDaccepted events types for relationship objects
callback.hconames.acceptedCountries

AI,AN,AG,AR,AW,BS,BB,BZ,

BM,BO,BR,CL,CO,CR,CW,

DO,EC,GT,GY,HN,JM,

KY,LC,MX,NI,PA,PY,

PE,PN,SV,SX,TT,UY,VG

list of countries aceppted in further processing 
callback.hconames.impactedHcpTraverseRelationTypes

configuration/relationTypes/Activity, 

configuration/relationTypes/Managed, 

configuration/relationTypes/RLE.MAI

accepted relationship types to travers for impacted HCP objects
callback.hconames.mainHCOTraverseRelationTypes

configuration/relationTypes/Activity, 

configuration/relationTypes/Managed, 

configuration/relationTypes/RLE.MAI

accepted relationship types to travers for impacted main HCO objects
callback.hconames.mainHCOTypeCodes.default
HOSPthe Type code name for the Main HCO object
callback.hconames.mainHCOStructurTypeCodes

e.g.: 

AD:
- "WFR.TSR.JUR"
- "WFR.TSR.GRN"
- "WFR.TSR.ETA"

Cotains the map where the:

KEY is the country 

Values are the TypCodes for the corresponding country, 

callback.hconames.deduplicationeither callback.hconames.deduplication or callback.hconames.windowSessionDeduplication must be set
callback.hconames.deduplication.duration
duration size of time window
callback.hconames.deduplication.gracePeriod
grace period related to time window
callback.hconames.deduplication.byteLimit
byte limit of 
Suppressed.BufferConfig
callback.hconames.deduplication.suppressName

name of

Suppressed.BufferConfig

callback.hconames.deduplication.name
name of the Grouping step in deduplication
callback.hconames.deduplication.storageNamewhen switching from callback.hconames.deduplication to callback.hconames.windowSessionDeduplication storageName must be differentname of Materialized Session Store
callback.hconames.deduplication.pingInterval
interval in which ping messages are being generated
callback.hconames.windowSessionDeduplicationeither callback.hconames.deduplication or callback.hconames.windowSessionDeduplication must be set
callback.hconames.windowSessionDeduplication.duration
duration size of session window
callback.hconames.windowSessionDeduplication.byteLimit
byte limit of 
Suppressed.BufferConfig
callback.hconames.windowSessionDeduplication.suppressName

name of

Suppressed.BufferConfig

callback.hconames.windowSessionDeduplication.name
name of the Grouping step in deduplication
callback.hconames.windowSessionDeduplication.storageNamewhen switching from callback.hconames.deduplication to callback.hconames.windowSessionDeduplication storageName must be differentname of Materialized Session Store
callback.hconames.windowSessionDeduplication.pingInterval
interval in which ping messages are being generated

Pfe HCO Names

Config Parameter

Default value

Description

callback.pfeHconames.eventInputTopic
${env}-internal-callback-hconame-ininput topic
callback.pfeHconames.HCPCalculateStageTopic
${env}-internal-callback-hconame-hcp4calcinternal topic
callback.pfeHconames.intAsyncHCONames
${env}-internal-hconames-rel-createoutput topic
callback.pfeHconames.timeWindoweither callback.pfeHconames.timeWindow or callback.pfeHconames.sessionWindow must be set
callback.pfeHconames.timeWindow.duration
duration size of time window
callback.pfeHconames.timeWindow.gracePeriod
grace period related to time window
callback.pfeHconames.timeWindow.byteLimit
byte limit of 
Suppressed.BufferConfig
callback.pfeHconames.timeWindow.suppressName

name of

Suppressed.BufferConfig

callback.pfeHconames.timeWindow.name
name of the Grouping step in deduplication
callback.pfeHconames.timeWindow.storageNamewhen switching from callback.pfeHconames.timeWindow to callback.pfeHconames.sessionWindow storageName must be differentname of Materialized Session Store
callback.pfeHconames.timeWindow.pingInterval
interval in which ping messages are being generated
callback.pfeHconames.sessionWindoweither callback.pfeHconames.timeWindow or callback.pfeHconames.sessionWindow must be set
callback.pfeHconames.sessionWindow.duration
duration size of session window
callback.pfeHconames.sessionWindow.byteLimit
byte limit of 
Suppressed.BufferConfig
callback.pfeHconames.sessionWindow.suppressName

name of

Suppressed.BufferConfig

callback.pfeHconames.sessionWindow.name
name of the Grouping step in deduplication
callback.pfeHconames.sessionWindow.storageNamewhen switching from callback.pfeHconames.deduplication to callback.pfeHconames.windowSessionDeduplication storageName must be differentname of Materialized Session Store
callback.pfeHconames.sessionWindow.pingInterval
interval in which ping messages are being generated

Danging Affiliations

Config Parameter

Default value

Description

callback.danglingAffiliations.eventInputTopic
${env}-internal-callback-orphanClean-ininput topic
callback.danglingAffiliations.acceptedEntityEventTypes
HCP_REMOVED, HCO_REMOVED, MCO_REMOVED, HCP_INACTIVATED, HCO_INACTIVATED, MCO_INACTIVATEDaccepted entity events
callback.danglingAffiliations.eventOutputTopic
${env}-internal-async-all-orphanCleanoutput topic
callback.danglingAffiliations.relationUpdateHeaders.HubAsyncOperation
rel-updatekafka record header
callback.danglingAffiliations.exceptCrosswalkTypes
configuration/sources/Reltiocrosswalk types to exclude

Crosswalk Cleaner

Config Parameter

Default value

Description

callback.crosswalkCleaner.eventInputTopic
${env}-internal-callback-cleaner-ininput topic
callback.crosswalkCleaner.acceptedEntityEventTypes
MCO_CHANGED, HCP_CHANGED, HCO_CHANGEDaccepted entity events
callback.crosswalkCleaner.acceptedRelationEventTypes
RELATIONSHIP_CHANGEDaccepted relation events
callback.crosswalkCleaner.hardDeleteCrosswalkTypes.always
configuration/sources/HUB_CallbackHub callback crosswalk name
callback.crosswalkCleaner.hardDeleteCrosswalkTypes.except
configuration/sources/ReltioCleanserReltio cleanser crosswalk name
callback.crosswalkCleaner.hardDeleteCrosswalkRelationTypes.always
configuration/sources/HUB_CallbackHub callback crosswalk name
callback.crosswalkCleaner.hardDeleteCrosswalkRelationTypes.except
configuration/sources/ReltioCleanserReltio cleanser crosswalk name
callback.crosswalkCleaner.softDeleteCrosswalkTypes.always
configuration/sources/HUB_USAGETAGCrosswalks list to soft-delete
callback.crosswalkCleaner.softDeleteCrosswalkTypes.whenOneKeyNotExists
configuration/sources/IQVIA_PRDP, configuration/sources/IQVIA_RAWDEACrosswalk list to soft-delete when ONEKEY crosswalk does not exists
callback.crosswalkCleaner.softDeleteCrosswalkTypes.except
configuration/sources/HUB_CALLBACK, configuration/sources/ReltioCleanserCrosswalk to exclude
callback.crosswalkCleaner.hardDeleteHeaders.HubAsyncOperation
crosswalk-deletekafka record header
callback.crosswalkCleaner.hardDeleteRelationHeaders.HubAsyncOperation
crosswalk-relation-deletekafka record header
callback.crosswalkCleaner.softDeleteHeaders.hcp.HubAsyncOperation
hcp-updatekafka record header
callback.crosswalkCleaner.softDeleteHeaders.hco.HubAsyncOperation
hco-updatekafka record header
callback.crosswalkCleaner.oneKey
configuration/sources/ONEKEYONEKEY crosswalk name
callback.crosswalkCleaner.eventOutputTopic
${env}-internal-async-all-cleaner-callbacksoutput topic
callback.crosswalkCleaner.softDeleteOneKeyReferbackCrosswalkTypes.referbackLookupCodes

HCPIT.RBI, HCOIT.RBI

OneKey referback crosswalk lookup codes
callback.crosswalkCleaner.softDeleteOneKeyReferbackCrosswalkTypes.oneKeyLookupCodes
HCPIT.OK, HCOIT.OKOneKey crosswalk lookup codes


NotMatch callback (clean potential match queue)

Config Parameter

Default value

Description

callback.potentialMatchLinkCleaner.eventInputTopic
${env}-internal-callback-potentialMatchCleaner-ininput topic
callback.potentialMatchLinkCleaner.acceptedRelationEventTypes
- RELATIONSHIP_CREATED
- RELATIONSHIP_CHANGED
accepted relation events
callback.potentialMatchLinkCleaner.acceptedRelationObjectTypes
- "configuration/relationTypes/FlextoHCOSAffiliations"
- "configuration/relationTypes/FlextoDDDAffiliations"
- "configuration/relationTypes/SAPtoHCOSAffiliations"
accepted relationship types
callback.potentialMatchLinkCleaner.matchTypesInCache
- "AUTO_LINK"
- "POTENTIAL_LINK"
PotentialMatch cache object types
callback.potentialMatchLinkCleaner.notMatchHeaders.hco.HubAsyncOperation
entities-not-match-setkafka record header
callback.potentialMatchLinkCleaner.eventOutputTopic
${env}-internal-async-all-notmatch-callbacksoutput topic


PreCallback Stream -(rankings)

Config Parameter

Default value

Description

preCallback.eventInputTopic${env}-internal-reltio-full-eventsinput topic
preCallback.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for events
preCallback.internalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput topic for callbacks
preCallback.mdmIntegrationService.baseURLN/AManager URL defined per environemnt
preCallback.mdmIntegrationService.apiKeyN/AManager secret API KEY defined per environemnt
preCallback.mdmIntegrationService.logMessagesfalseParameter used to turn on/off logging the payload
preCallback.skipEventTypesENTITY_MATCHES_CHANGED, ENTITY_AUTO_LINK_FOUND, ENTITY_POTENTIAL_LINK_FOUND, DCR_CREATED, DCR_CHANGED, DCR_REMOVEDEvents skipped in the processing
preCallback.oldEventsDeletion.maintainDuration10mCache duration time (for callbacks MD5 checksum)
preCallback.oldEventsDeletion.interval5mCache deletion interval
preCallback.rankCallback.featureActivationtrueParameter used to enable/disable the Rank feature
preCallback.rankCallback.callbackSourceHUB_CallbackCrosswalk used to update Reltio with Rank attributes
preCallback.rankCallback.activationFilter.countriesAG, AI, AN, AR, AW, BB, BL, BM, BO, BR, BS, BZ, CL, CO, CR, CW, DE, DO, EC, ES, FR, GF, GP, GT, GY, HK, HN, ID, IN, IT, JM, JP, KY, LC, MC, MF, MQ, MX, MY, NL, NC, NI, PA, PE, PF, PH, PK, PM, PN, PY, RE, RU, SA, SG, SV, SX, TF, TH, TR, TT, TW, UY, VE, VG, VN, WF, YT, XX, EMPTYList of countries for wich process activates the Rank (different between GBL and GBLUS)
preCallback.rankCallback.rawEntityChecksumDedupeStoreNameraw-entity-checksum-dedupe-storetopic name that store rawEntity MD5 checksum - used in rank callback deduplication
preCallback.rankCallback.attributeChangesChecksumDedupeStoreNameattribute-changes-checksum-dedupe-storetopic name that store attribute changes MD5 checksum - used in rank callback deduplication
preCallback.rankCallback.forwardMainEventsDuringPartialUpdatefalseThe parameter used to define if we want to forward partial events. By default it is false so only events that are fully calculated are sent further
preCallback.rankCallback.ignoreAndRemoveDuplicatesfalseThe parameter used in the Ranking may contain duplicities in the group. It is set to False because now Reltio is removing duplicated Identifier
preCallback.rankCallback.activeCleanerCallbacksSpecialityCleanerCallback, IdentifierCleanerCallback, EmailCleanerCallback, PhoneCleanerCallbackList of cleaner callbacks to be activated
preCallback.rankCallback.activeCallbacksSpecialityCallback, AddressCallback, AffiliationCallback, IdentifierCallback, EmailCallback, PhoneCallbackList of Ranker to be activated
preCallback.rankTransform.featureActivationtrueParaemter defines in the Rank feature should be activated.
preCallback.rankTransform.activationFilter.activeRankSorterSpecialtyRankSorter, AffiliationRankSorter, AddressRankSorter, IdentifierRankSorter, EmailRankSorter, PhoneRankSorter
preCallback.rankTransform.rankSortOrder.affiliationN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

 Affiliation RankSorter

preCallback.rankTransform.rankSortOrder.phoneN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

Phone RankSorter

preCallback.rankTransform.rankSortOrder.emailN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

Email RankSorter

preCallback.rankTransform.rankSortOrder.specialitiesN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

Specialty RankSorter

preCallback.rankTransform.rankSortOrder.identifierN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

Identifier RankSorter

preCallback.rankTransform.rankSortOrder.addressSource.ReltioN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

Address RankSorter

preCallback.rankTransform.rankSortOrder.addressesSource.ReltioN/A

The source order defined for the specific Ranking. Details about the algorithm in: 

Addresses RankSorter

" }, { "title": "China Selective Router", "pageID": "284812312", "pageLink": "/display/GMDM/China+Selective+Router", "content": "

Description

The china-selective-router component is responsible for enriching events and transformig from COMPANY model to Iqivia model. Component is using Asynchronous operation using kafka topics. To transform COMPANY object it needs to be consumed from input topic and based on configuration it is enriched, hco entity is connected with mainHco and as a last step event model is transformed to Iqivia model, after all operations event is sending to output topic.

Flows

Exposed interfaces


Interface Name

Type

Endpoint pattern

Description

Event transformer topology
KAFKA

topic: {env}-{topic_postfix}

Transform event from COMPANY model to Iqivia model, and send to ouptut topic

Dependent components


Component

Interface

Flow

Description

Data model
HCPModelConverter
N/AConverter to transform Entity to COMPANY model or to Iqivia model

Configuration


Config Parameter

Description

eventTransformer:
- country: "CN"
eventInputTopic: "${env}-internal-full-hcp-merge-cn"
eventOutputTopic: "${env}-out-full-hcp-merge-cn"
enricher: com.COMPANY.mdm.event_transformer.enricher.ChinaRefEntityProcessor
hcoConnector:
processor: com.COMPANY.mdm.event_transformer.enricher.ChinaHcoConnectorProcessor
transformer: com.COMPANY.mdm.event_transformer.transformer.COMPANYToIqviaEventTransformer
refEntity:
- type: HCO
attribute: ContactAffiliations
relationLookupAttribute: RelationType.RelationshipDescription
relationLookupCode: CON
- type: MainHCO
attribute: ContactAffiliations
relationLookupAttribute: RelationType.RelationshipDescription
relationLookupCode: REL.MAI

The main part of china-selective-router configuration, contains list of event transformaton configuration

country - specify country, value of this parameter have to be in event country section otherwise event will be skipped

eventInputTopic - input topic

eventOutputTopic - output topic

enricher - specify class to enrich event, based on refEntity configuration this class is resposible for collecting related hco and mainHco entities.

hcoConnector.processor - specify class to connect hco with main hco, in this class is made a call to reltio for all connections by hco uri. Based on received data is created additional attribute 'OtherHcoToHco' contains mainHco entity collected by enricher.

hcoConnector.enabled - enable or disable hcoConnector

hcoConnector.hcoAttrName - specify additional attibute name to place connected mainHco

hcoConnector.outRelations - specify the list of out relation to filter while calling reltio for hco connections

refEntity - contains list of attributes containing information about HCO or MainHCO entity (refEntity uri)

refEntity.type - type of entity: HCO or MainHco

refEntity.attribute - base attribute to search for entity

refEntity.relationLookupAttribute - attribute to search for lookupCode to decide what entity we are looking for

refEntity.relationLookupCode - code specify entity type


" }, { "title": "Component Template", "pageID": "164469941", "pageLink": "/display/GMDM/Component+Template", "content": "

Description

<short description of the componet>

Flows

<List of realized flow with links to Flow section>

Exposed interfaces


Interface NameTypeEndpoint patternDescription

REST API|KAFKA

Dependent components


ComponentInterfaceFlowDescription
<component name with link><Interface name><flow name with link>for what

Configuration


Config ParameterDefault valueDescription



" }, { "title": "DCR Service", "pageID": "209949312", "pageLink": "/display/GMDM/DCR+Service", "content": "" }, { "title": "DCR Service 2", "pageID": "218444525", "pageLink": "/display/GMDM/DCR+Service+2", "content": "

Description

Responsible for the DCR processing. Client (PforceRx) sends the DCRs through REST API, DCRs are routed to the target system (OneKey/Veeva Opendata/Reltio). Client (Pforcerx) retrieves the status of the DCR using status API. Service also contains Kafka-streams functionality to process the DCR updates asynchronously and update the DCRRegistry cache.

Services are accessible with REST API.

Applies transformations to the Kafka input stream producing the Kafka output stream.


Flows


Exposed interfaces

REST API

Interface Name

Type

Endpoint pattern

Description

Create DCRsREST API

POST /dcr

Create DCRs

GET DCRs statusREST APIGET /dcr/statusGET DCRs status

OneKey Stream

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
{env}-internal-onekey-dcr-change-events-in
Events generated by the OneKey component after OneKey DataSteward Action. Flow responsible for events generation is OneKey: generate DCR Change Events (traceVR)
output  - callbacksMongo
mongo

DCR Registry updated 

Veeva OpenData Stream

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
{env}-internal-veeva-dcr-change-events-in
Events generated by the Veeva component after Veeva DataSteward Action. Flow responsible for events generation is Veeva: generate DCR Change Events (traceVR)
output  - callbacksMongo
mongo

DCR Registry updated 

Reltio Stream

Interface Name

Type

Endpoint pattern

Description

callback inputKAFKA
{env}-internal-reltio-dcr-change-events-in

Events generated by Reltio after DataSteward Action. Published by the event-publisher component 

selector: "(exchange.in.headers.reconciliationTarget==null)
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.eventSubtype in ['DCR_CREATED', 'DCR_CHANGED', 'DCR_REMOVED']"

 

output  - callbacksMongo
mongo

DCR Registry updated 

Dependent components


ComponentInterfaceFlowDescription
API RouterAPI routingCreate DCRroute the requests to the DCR-Service component

Manager

MDMIntegrationService


GetEntitiesByUrisRetrieve multiple entities by providing the list of entities URIS
GetEntityByIdget entity by the id
GetEntityByCrosswalkget entity by the crosswalk
CreateDCRcreate change requests in Reltio
OK DCR Service
OneKeyIntegrationService
CreateDCRcreate VR in OneKey
Veeva DCR Service
ThirdPartyIntegrationService
CreateDCR

create VR in Veeva

At the moment only Veeva realized this interface, however in the future OneKey will be exposed via this interface as well  

Hub StoreMongo connectionN/AStore cache data in mongo collection
Transaction LoggerTransactionServiceTransactionsSaves each DCR status change in transactions

Configuration

Config Parameter

Default value

Description


kafka.groupId
${env}_dcr2
The application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"








kafkaOther.num.stream.threads
10Number of threads used in the Kafka Stream
kafkaOther.default.deserialization.exception.handler
com.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handler
kafkaOther.ssl.engine.factory.class
com.COMPANY.mdm.common.security.CustomTrustStoreSslEngineFactory
SSL config
kafkaOther.partitioner.class
com.COMPANY.mdm.common.ping.PingPartitioner
Ping partitioner required in Kafka Streams application with PING service
kafkaOther.max.poll.interval.ms
3600000Number of milliseconds to wait max time before next poll of events
kafkaOther.max.poll.records
10
Number of records downloaded in one poll from kafka
kafkaOther.max.request.size
2097152Events message size
dataStewardResponseConfig:
reltioResponseStreamConfig:
enable: true
eventInputTopic:
- ${env}-internal-reltio-dcr-change-events-in
   sendTo3PartyDecisionTable:
      - target: Veeva
        decisionProperties:
          sourceName: "VEEVA_CROSSWALK"
      - target: Veeva
        decisionProperties:
          countries: ["ID","PK","MY","TH"]
      - target: OneKey
    sendTo3PartyTopics:
      Veeva:
        - ${env}-internal-sendtothirdparty-ds-requests-in
      OneKey:
        - ${env}-internal-onekeyvr-ds-requests-in

VeevaResponseStreamConfig:
enable: true
eventInputTopic:
- ${env}-internal-veeva-dcr-change-events-in

 onekeyResponseStreamConfig:
enable: true
eventInputTopic:
- ${env}-internal-onekey-dcr-change-events-in
    maxRetryCounter: 20
deduplication:
duration: 2m
gracePeriod: 0s
byteLimit: 2147483648
suppressName: dcr2-onekey-response-stream-suppress
name: dcr2-onekey-response-stream-with-delay
storeName: dcr2-onekey-response-window-deduplication-store
pingInterval: 1m

- ${env}-internal-reltio-dcr-change-events-in

- ${env}-internal-onekey-dcr-change-events-in

- ${env}-internal-veeva-dcr-change-events-in

- ${env}-internal-sendtothirdparty-ds-requests-in

- ${env}-internal-onekeyvr-ds-requests-in

Configuration related to the event processing from Reltio, Onekey or Veeva


Deduplication is related to Onekey and allows to configure the aggregation window for events (processing daily) - 24h

MaxRetryCounter should be set to a high number - 1000000


targetDecisionTable:
- target: Reltio
decisionProperties:
userName: "mdm_dcr2_test_reltio_user"
- target: OneKey
decisionProperties:
userName: "mdm_dcr2_test_onekey_user"

- target: Veeva
    decisionProperties:
      sourceName: "VEEVA_CROSSWALK"
- target: Veeva
    decisionProperties:
      countries: ["ID","PK","MY","TH"]

- target: Reltio
decisionProperties:
country: GB
LIST OF the following combination of attributes




  1. Each attribute in the configuration is optional. 

  2. The decision table is making the validation based on the input request and the main object- the main object is HCP, if the HCP is empty then the decision table is checking HCO. 
  3. The result of the decision table is the TargetType, the routing to the Reltio MDM system, OneKey or Veeva service. 


userName 
the user name that executes the request
sourceName
the source name of the Main object
country
the county defined in the request
operationType

the operation type for the Main object

{ insert, update, delete }
affectedAttributes
the list of attributes that the user is changing
affectedObjects
{ HCP, HCO, HCP_HCO}

RESULT →  TargetType {Reltio, OneKey, Veeva}

PreCloseConfig:
acceptCountries:
- "IN"
- "SA"
  rejectCountries:
- "PL"
- "GB"

DCRs with countries which belong to acceptCountries attribute are automatically accepted (PRE_APPROVED) or rejected (PRE_REJECTED) when belong to rejectCountires

acceptCountriesList of values, example: [ IN, GB, PL , ...]
rejectCountries

List of values, example: [ IN, GB, PL ]

transactionLogger:
simpleDCRLog:
enable: true
kafkaEfk:
enable: true
Transaction ServiceThe configuration that enables/disables the transaction logger

oneKeyClient:
url: http://devmdmsrv_onekey-dcr-service_1:8092
userName: dcr_service_2_user
OneKey Integration Service

The configuration that allows connecting to onekey dcr service

VeevaClient:
url: http://localhost:8093
username: user
apiKey: ""
Veeva Integration Service 

The configuration that allows connecting to Veeva dcr service


manager:
url: https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/${env}/gw
userName:dcr_service_2_user
logMessages: true
timeoutMs: 120000
MDM Integration ServiceThe configuration that allows connecting to Reltio service

Indexes

DCR Service 2 Indexes

" }, { "title": "DCR service connect guide", "pageID": "415221200", "pageLink": "/display/GMDM/DCR+service+connect+guide", "content": "

Introduction

This guide provides comprehensive instructions on integrating new client applications with the DCR (Data Change Request) service in the MDM HUB system. It is intended for technical engineers, client architects, solution designers, and MDM/Mulesoft teams.

Table of Contents

Overview

The DCR service processes Data Change Requests (DCRs) sent by clients through a REST API. These DCRs are routed to target systems such as OneKey, Veeva Opendata, or Reltio. The service also includes Kafka-streams functionality to process DCR updates asynchronously and update the DCRRegistry cache.

Access to the DCR API should be confirmed in advance with the P.O. MDM HUB → A.J. Varganin

Getting Started

Prerequisites

Setup Instructions

  1. Create MDM HUB User: Follow the SOP to add a direct API user to the HUB.  Complete the steps outlined in → Add Direct API User to HUB
  2. Obtain Access Token: Use PingFederate to acquire an access token

API Overview

Endpoints

Methods

Authentication and Authorization

  1. First step is to acquire access token. If you are connecting first time to MDM HUB API you should create MDM HUB user 
  2. Once you have the PingFederate username and password, you can acquire the access token.

Obtaining Access Token

  1. Request Token:
    \n
    curl --location --request POST 'https://devfederate.COMPANY.com/as/token.oauth2?grant_type=client_credentials' \\      // Use devfederate for DEV & UAT, stgfederate for STAGE, prodfederate for PROD\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--header 'Authorization: Basic Base64-encoded(username:password)'
    \n
    \n
  2. Response:
    \n
    {\n  "access_token": "12341SPRtjWQzaq6kgK7hXkMVcTzX",                                                                   \n  "token_type": "Bearer",\n  "expires_in": 1799                                                                                                 // The token expires after the time - "expires_in" field. Once the token expires, it must be refreshed.\n}
    \n

Below you can see, how Postman should be configured to obtain access_token

\"\"

Using Access Token

Include the access token in the Authorization header for all API requests.

Network Configuration

Required Settings

Creating DCRs

This method is used to create new DCR objects in the MDM HUB system. Below is an example request to create a new HCP object in the MDM system.

More examples and the entire data model can be found at:

Example Request

Create new HCP
\n
curl --location '{api_url}/dcr' \\                                                                                     // e.g., https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev\n--header 'Content-Type: application/json' \\\n--header 'Authorization: Bearer ${access_token_value}' \\                                                              // e.g., 0001WvxKA16VWwlufC2dslSILdbE\n--data-raw '[\n    {\n        "country": "${dcr_country}",                                                                                  // e.g., CA\n        "createdBy": "${created_by}",                                                                                 // e.g., Test user\n        "extDCRComment": "${external_system_comment}",                                                                // e.g., This is test DCR to create new HCP\n        "extDCRRequestId": "${external_system_request_id}",                                                           // e.g., CA-VR-00255752\n        "dcrType": "${dcr_type}",                                                                                     // e.g., PforceRxDCR\n        "entities": [\n            {\n                "@type": "hcp",\n                "action": "insert",\n                "updateCrosswalk": {\n                    "type": "${source_system_name}",                                                                  // e.g., PFORCERX \n                    "value": "${source_system_value}"                                                                 // e.g., HCP-CA-VR-00255752 \n                },\n                "values": {\n                    "birthDate": "07-08-2017",\n                    "birthYear": "2017",\n                    "firstName": "Maurice",\n                    "lastName": "Brekke",\n                    "title": "HCPTIT.1118",\n                    "middleName": "Karen",\n                    "subTypeCode": "HCPST.A",\n                    "addresses": [\n                        {\n                            "action": "insert",\n                            "values": {\n                                "sourceAddressId": {\n                                    "source": "${source_system_name}",                                                // e.g., PFORCERX\n                                    "id": "${address_source_system_value}"                                            // e.g., ADR-CA-VR-00255752 \n                                },\n                                "addressLine1": "08316 McCullough Terrace",\n                                "addressLine2": "Waynetown",\n                                "addressLine3": "Designer Books gold parsing",\n                                "addressType": "AT.OFF",\n                                "buildingName": "Handmade Cotton Shirt",\n                                "city": "Singapore",\n                                "country": "SG",\n                                "zip": "ZIP 5"\n                            }\n                        }\n                    ]              \n                }\n            }\n        ]\n    }\n]'
\n

Request placeholders:

parameter namedescriptionexample
api_urlAPI router URLhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev
access_token_valueAccess token value0001WvxKA16VWwlufC2dslSILdbE

dcr_country

Main entity countryCA

created_by

Created by userTest user

external_system_comment

Comment that will be populate to next processing stepsThis is test DCR

external_system_request_id

ID for tracking DCR processingCA-VR-00255752

dcr_type

Provided by MDM HUB team when user with DCR permission will be createdPforceRxDCR

source_system_name

Source system name. User used to invoke request has to have access to this sourcePFORCERX

source_system_value

ID of this object in source systemHCO-CA-VR-00255752

address_source_system_value

ID of address in source systemADR-CA-VR-00255752

Handling Responses

Success Response

Create DCR success response
\n
[\n    {\n        "requestStatus": "${request_status}",                                                                         // e.g., REQUEST_ACCEPTED\n        "extDCRRequestId": "${external_system_request_id},                                                            // e.g., CA-VR-00255752\n        "dcrRequestId": "${mdm_hub_dcr_request_id}",                                                                  // e.g., 4a480255a4e942e18c6816fa0c89a0d2\n        "targetSystem": "${target_system_name}",                                                                      // e.g., Reltio\n        "country": "${dcr_request_country}",                                                                          // e.g., CA\n        "dcrStatus": {\n            "status": "CREATED",\n            "updateDate": "2024-05-07T11:22:10.806Z",\n            "dcrid": "${reltio_dcr_status_entity_uri}"                                                                // e.g., entities/0HjtwJO\n        }\n    }\n]
\n

Response placeholders:

parameterdescriptionexample
external_system_request_idDCR request id in source systemCA-VR-00255752
mdm_hub_dcr_request_idDCR request id in MDM HUB system4a480255a4e942e18c6816fa0c89a0d2
target_system_nameDCR target system name, one of values: OneKey, Reltio, VeevaReltio
dcr_request_countryDCR request countryCA
request_statusDCR request status, one of values: REQUEST_ACCEPTED, REQUEST_FAILED, REQUEST_REJECTEDREQUEST_ACCEPTED
reltio_dcr_status_entity_uriURI of DCR status entity in Reltio systementities/0HjtwJO

Rejected Response

\n
[\n    {\n        "requestStatus": "REQUEST_REJECTED",\n        "errorMessage": "DuplicateRequestException -> Request [97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c] has already been processed",\n        "errorCode": "DUPLICATE_REQUEST",\n        "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c"\n    }\n]
\n

Failed Response

\n
[\n    {\n        "requestStatus": "REQUEST_FAILED",\n        "errorMessage": "Target lookup code not found for attribute: HCPTitle, country: SG, source value: HCPTIT.111218.",\n        "errorCode": "VALIDATION_ERROR",\n        "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e712121121c"\n    }\n]
\n
In case of incorrect user configuration in the system, the API will return errors as follows. In these cases, please contact the MDM HUB team.

Getting DCR status

Processing of DCR will take some time. DCR status can be track via get DCR status API calls. DCR processing ends when it reaches the final status: ACCEPTED or REJECTED. When the DCR gets the ACCEPTED status, the following fields will appear in its status: "objectUri" and "COMPANYCustomerId". These can be used to find created/modified entities in the MDM system. Full documentation can be found at → Get DCR status.

Example Request

Below is an example query for the selected external_system_request_id

\n
curl --location '{api_url}/dcr/_status/${external_system_request_id}' \\                                               // e.g., CA-VR-00255752 \n--header 'Authorization: Bearer ${access_token_value}'                                                                // e.g., 0001WvxKA16VWwlufC2dslSILdbE 
\n

Handling Responses

Success Response

\n
{\n    "requestStatus": "REQUEST_ACCEPTED",\n    "extDCRRequestId": "8600ca9a-c317-45d0-97f6-152f01d70158",\n    "dcrRequestId": "a2848f2a573344248f78bff8dc54871a",\n    "targetSystem": "Reltio",\n    "country": "AU",\n    "dcrStatus": {\n        "status": "ACCEPTED",\n        "objectUri": "entities/0Hhskyx",                                                                               // \n        "COMPANYCustomerId": "03-102837896",                                                                            // usually HCP. HCO only when creating or updating HCO without references to HCP in DCR request\n        "updateDate": "2024-05-07T11:47:08.958Z",\n        "changeRequestUri": "changeRequests/0N38Jq0",\n        "dcrid": "entities/0EUulla"\n    }\n}
\n

Rejected Response

\n
{\n    "requestStatus": "REQUEST_REJECTED",\n    "errorMessage": "Received DCR_CHANGED event, updatedBy: svc-pfe-mdmhub, on 1714378259964. Updating DCR status to: REJECTED",\n    "extDCRRequestId": "b9239835-937e-434d-948c-6a282a736c4f",\n    "dcrRequestId": "0b4125648b6c4d9cb785856841f7d65d",\n    "targetSystem": "Veeva",\n    "country": "HK",\n    "dcrStatus": {\n        "status": "REJECTED",\n        "updateDate": "2024-04-29T08:11:06.555Z",\n        "comment": "This DCR was REJECTED by the VEEVA Data Steward with the following comment: [A-20022] Veeva Data Steward: Your request has been rejected..",\n        "changeRequestUri": "changeRequests/0IojkYP",\n        "dcrid": "entities/0qmBUXU"\n    }\n}
\n

Getting multiple DCR statuses

Multiple statuses can be selected at once using the DCR status filtering API

Example Request

Filter DCR status
\n
curl --location '{api_url}/dcr/_status?updateFrom=2021-10-17T20%3A31%3A31.424Z&updateTo=2023-10-17T20%3A31%3A31.424Z&limit=5&offset=3' \\\n--header 'Authorization: Bearer ${access_token_value}'                                                                // e.g., 0001WvxKA16VWwlufC2dslSILdbE 
\n

Example Response

Success Response

\n
[\n    {\n        "requestStatus": "REQUEST_ACCEPTED",\n        "extDCRRequestId": "8d3eb4f7-7a08-4813-9a90-73caa7537eba",\n        "dcrRequestId": "360d152d58d7457ab6a0610b718b6b8b",\n        "targetSystem": "OneKey",\n        "country": "AU",\n        "dcrStatus": {\n            "status": "ACCEPTED",\n            "objectUri": "entities/05jHpR1",\n            "COMPANYCustomerId": "03-102429068",\n            "updateDate": "2023-10-13T05:43:02.007Z",\n            "comment": "ONEKEY response comment: ONEKEY accepted response - HCP EID assigned\\nONEKEY HCP ID: WUSM03999911",\n            "changeRequestUri": "8b32b8544ede4c72b7adfa861b1dc53f",\n            "dcrid": "entities/04TxaQB"\n        }\n    },\n    {\n        "requestStatus": "REQUEST_ACCEPTED",\n        "extDCRRequestId": "b66be6bd-655a-47f8-b78b-684e80166096",\n        "dcrRequestId": "becafcb2cd004c1d89ecfc670de1de70",\n        "targetSystem": "Reltio",\n        "country": "AU",\n        "dcrStatus": {\n            "status": "ACCEPTED",\n            "objectUri": "entities/06SVUCq",\n            "COMPANYCustomerId": "03-102429064",\n            "updateDate": "2023-10-13T05:35:08.597Z",\n            "comment": "26498057 [svc-pfe-mdmhub][1697175298895] -",\n            "changeRequestUri": "changeRequests/06sXnXH",\n            "dcrid": "entities/08LAHeQ"\n        }\n    }\n]
\n


Get entity

This method is used to prepare a DCR request for modifying entities and to validate the created/modified entities in the DCR process. Use the "objectUri" field available after accepting the DCR to query MDM system.

Example Request

Get entity request
\n
curl --location '{api_url}/${objectUri}' \\                                                                             // e.g., https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev, entities/05jHpR1\n --header 'Authorization: Bearer ${access_token_value}'                                                                // e.g., 0001WvxKA16VWwlufC2dslSILdbE   
\n

Example Response

Success Response

Get entity response
\n
{\n    "type": "configuration/entityTypes/HCP",\n    "uri": "entities/06SVUCq",\n    "createdBy": "svc-pfe-mdmhub",\n    "createdTime": 1697175293866,\n    "updatedBy": "Re-cleansing of null in tenant 2NBAwv1z2AvlkgS background task. (started by test.test@COMPANY.com)",\n    "updatedTime": 1713375695895,\n    "attributes": {\n        "COMPANYGlobalCustomerID": [\n            {\n                "uri": "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",\n                "value": "03-102429064",\n                "ov": true\n            }\n        ],\n        "TypeCode": [\n            {\n                "uri": "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n                "type": "configuration/entityTypes/HCP/attributes/TypeCode",\n                "value": "HCPT.NPRS",\n                "ov": true\n            }\n        ],\n        "Addresses": [\n            {\n                "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n                "value": {\n                    "AddressType": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressType",\n                            "value": "TYS.P",\n                            "ov": true\n                        }\n                    ],\n                    "COMPANYAddressID": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/COMPANYAddressID",\n                            "value": "7001330683",\n                            "ov": true\n                        }\n                    ],\n                    "AddressLine1": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1",\n                            "value": "addressLine1",\n                            "ov": true\n                        }\n                    ],\n                    "AddressLine2": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2",\n                            "value": "addressLine2",\n                            "ov": true\n                        }\n                    ],\n                    "AddressLine3": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine3",\n                            "value": "addressLine3",\n                            "ov": true\n                        }\n                    ],\n                    "City": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/City",\n                            "value": "city",\n      
                      "ov": true\n                        }\n                    ],\n                    "Country": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Country",\n                            "value": "GB",\n                            "ov": true\n                        }\n                    ],\n                    "Zip5": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5",\n                            "value": "zip5",\n                            "ov": true\n                        }\n                    ],\n                    "Source": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n                            "value": {\n                                "SourceName": [\n                                    {\n                                        "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n                                        "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceName",\n                                        "value": "PforceRx",\n                                        "ov": true\n                                    }\n                                ],\n                                "SourceAddressID": [\n                                    {\n                                        "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n                                        "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceAddressID",\n                                        "value": "string",\n                                        "ov": true\n                                    }\n                                ]\n                            },\n                            "ov": true,\n                            "label": "PforceRx"\n                        }\n                    ],\n                    "VerificationStatus": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus",\n                            "value": "Unverified",\n                            "ov": true\n                        }\n                    ],\n                    "VerificationStatusDetails": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatusDetails",\n                            "value": "Address Verification Status is unverified - unable to verify. 
the output fields will contain the input data.\\nPost-Processed Verification Match Level is 0 - none.\\nPre-Processed Verification Match Level is 0 - none.\\nParsing Status isidentified and parsed - All input data has been able to be identified and placed into components.\\nLexicon Identification Match Level is 0 - none.\\nContext Identification Match Level is 5 - delivery point (postbox or subbuilding).\\nPostcode Status is PostalCodePrimary identified by context - postalcodeprimary identified by context.\\nThe accuracy matchscore, which gives the similarity between the input data and closest reference data match is 100%.",\n                            "ov": true\n                        }\n                    ],\n                    "AVC": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AVC",\n                            "value": "U00-I05-P1-100",\n                            "ov": true\n                        }\n                    ],\n                    "AddressRank": [\n                        {\n                            "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n                            "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressRank",\n                            "value": "1",\n                            "ov": true\n                        }\n                    ]\n                },\n                "ov": true,\n                "label": "TYS.P - addressLine1, addressLine2, city, zip5, GB"\n            }\n        ]\n    },\n    "crosswalks": [\n        {\n            "type": "configuration/sources/ReltioCleanser",\n            "value": "06SVUCq",\n            "uri": "entities/06SVUCq/crosswalks/dZrp03j",\n            "reltioLoadDate": 1697175300805,\n            "createDate": 1697175303886,\n            "updateDate": 1697175303886,\n            "attributes": [\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W"\n            ]\n        },\n        {\n            "type": "configuration/sources/Reltio",\n            "value": "06SVUCq",\n            "uri": "entities/06SVUCq/crosswalks/dZqkNxf",\n            "reltioLoadDate": 1697175300805,\n            "createDate": 1697175300805,\n            "updateDate": 1697175300805,\n            "attributes": [\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n                
"entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB"\n            ],\n            "singleAttributeUpdateDates": {\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv": "2023-10-13T05:35:00.805Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB": "2023-10-13T05:35:00.805Z"\n            }\n        },\n        {\n            "type": "configuration/sources/HUB_CALLBACK",\n            "value": "06SVUCq",\n            "uri": "entities/06SVUCq/crosswalks/LoT0kPG",\n            "reltioLoadDate": 1697175429294,\n            "createDate": 1697175296673,\n            "updateDate": 1697175296673,\n            "attributes": [\n                "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n                "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv"\n            ],\n            "singleAttributeUpdateDates": {\n                "entities/06SVUCq/attributes/TypeCode/LoT0XcU": "2023-10-13T05:34:56.673Z",\n                "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2": "2023-10-13T05:37:09.294Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj": "2023-10-13T05:35:08.420Z",\n                "entities/06SVUCq/attributes/Addresses/dZqkSDv": "2023-10-13T05:35:08.420Z"\n            }\n        }\n    ]\n}
\n

Rejected Response

Entity not found response
\n
{\n    "code": "404",\n    "message": "Entity not found"\n}
\n

Troubleshooting Guide

All documentation with a detailed description of flows can be found at → PforceRx DCR flows

Common Issues and Solutions

Duplicate Request:


Validation Error:


Network Errors:


Authentication Errors:


Service Unavailable Errors:


Missing Configuration for User


Permission Denied to create DCR:


Validation Error:

" }, { "title": "Entity Enricher", "pageID": "164469912", "pageLink": "/display/GMDM/Entity+Enricher", "content": "

Description

Accepts simple events on the input. Performs the following calls to Reltio:

Produces the events enriched with the targetEntity / targetRelation field retrieved from RELTIO.
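For illustration, the core consume-enrich-produce loop could be sketched as below. This is a minimal sketch using a plain Kafka consumer/producer; the enrich(...) placeholder stands in for the Manager calls (getEntitiesByUris, getRelation, ...) listed under Dependent components, and the broker address is a placeholder while topics and properties follow the configuration table below.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EntityEnricherSketch {

    public static void main(String[] args) {
        String env = "dev"; // ${env}
        String inputTopic = env + "-internal-reltio-events";        // bundle.inputTopics
        String outputTopic = env + "-internal-reltio-full-events";  // bundle.outputTopic

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        consumerProps.put("group.id", env + "-entity-enricher");    // kafka.groupId
        consumerProps.put("enable.auto.commit", "false");           // bundle.kafkaOther.*
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("max.poll.records", "10");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of(inputTopic));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(10));      // bundle.pollDuration
                for (ConsumerRecord<String, String> rec : records) {
                    String enriched = enrich(rec.value());
                    producer.send(new ProducerRecord<>(outputTopic, rec.key(), enriched));
                }
                consumer.commitSync(); // enable.auto.commit is false in the component config
            }
        }
    }

    // Placeholder: in the real component this calls the Manager API
    // (getEntitiesByUris / getRelation / ...) and attaches the result
    // as the targetEntity / targetRelation field of the event.
    private static String enrich(String simpleEvent) {
        return simpleEvent;
    }
}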

Exposed interfaces


Interface Name

Type

Endpoint pattern

Description

entity enricher inputKAFKA
${env}-internal-reltio-events
events being sent by the event publisher component. Event types being considered: HCP_*, HCO_*, ENTITY_MATCHES_CHANGED
entity enricher outputKAFKA
${env}-internal-reltio-full-events

Dependent components


ComponentInterfaceFlowDescription

Manager




MDMIntegrationService


getEntitiesByUris
getRelation
getChangeRequest
findEntityCountry

Configuration


Config ParameterDefault valueDescription
bundle.enabletrueenable / disable function
bundle.inputTopics${env}-internal-reltio-eventsinput topic
bundle.threadPoolSize10thread pool size
bundle.pollDuration10spoll interval
bundle.outputTopic${env}-internal-reltio-full-eventsoutput topic
kafka.groupId${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"
bundle.kafkaOther.session.timeout.ms30000
bundle.kafkaOther.max.poll.records10
bundle.kafkaOther.max.poll.interval.ms300000
bundle.kafkaOther.auto.offset.resetearliest
bundle.kafkaOther.enable.auto.commitfalse
bundle.kafkaOther.max.request.size2097152
bundle.gateway.apiKey${gateway.apiKey}
bundle.gateway.logMessagesfalse
bundle.gateway.url${gateway.url}
bundle.gateway.userName${gateway.userName}



" }, { "title": "HUB APP", "pageID": "302700538", "pageLink": "/display/GMDM/HUB+APP", "content": "

Description


HUB UI is a front-end application that presents basic information about the MDM HUB cluster. This component allows you to manage Kafka and Airflow DAGs or view the quality service configuration.

The app allows users to log in with their COMPANY accounts.

Technology: Angular

Code link: mdm-hub-app

Flows

Access:

Dependent components


ComponentInterfaceDescription
MDM ManagerREST API

Used to fetch quality service configuration and for testing entities

MDM AdminREST API

Used to manage Kafka, Airflow DAGs and the reconciliation service


Configuration

Component is configured via environment variables


Environment variableDefault valueDescription
BACKEND_URI
N/AMDM Manager URI
ADMIN_URIN/AMDM Admin URI
INGRESS_PREFIXN/AApplication context path
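For example, a deployment might set the variables like this (all values are placeholders):

BACKEND_URI=https://mdm-manager-svc:8081
ADMIN_URI=https://mdm-admin-svc:8081
INGRESS_PREFIX=/hub-app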
" }, { "title": "Hub Store", "pageID": "164469908", "pageLink": "/display/GMDM/Hub+Store", "content": "

Hub store is a MongoDB cache storing: EntityHistory, EntityMatchesHistory, EntityRelation.


Configuration

Config Parameter

Default value

Description

mongo:
host: ***:27017,***:27017,***:27017
dbName: reltio_${env}
user: ***
url: mongodb://${mongo.user}:${mongo.password}@${mongo.host}/${mongo.dbName}

MongoDB connection configuration
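For illustration, with user hub, password secret and database reltio_dev, the URL template above resolves to (hosts are placeholders):

mongodb://hub:secret@host1:27017,host2:27017,host3:27017/reltio_dev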

" }, { "title": "Inc batch channel", "pageID": "302686382", "pageLink": "/display/GMDM/Inc+batch+channel", "content": "

Description

Responsible for ETL loads of data to Reltio. It takes plain data files (e.g. txt, csv) and, based on defined mappings, converts them into JSON objects, which are then sent to Reltio.

Flows

Dependent components

ComponentInterface nameDescription
ManagerKafka

Events constructed by inc-batch-channel are transferred to the Kafka topic, from where they are read by mdm-manager and sent to Reltio. When the event has been processed by Reltio, the manager sends an ACK message on the appropriate topic:

Example input topic: gbl-prod-internal-async-all-sap

Example ACK topic: gbl-prod-internal-async-all-sap-ack

Batch ServiceBatch ControllerUsed to store ETL load state and statistics. All information is stored in MongoDB


MongoDb collections


Configuration

Connections

mongoConnectionProps.dbUrl
Full Mongo DB URL
mongoConnectionProps.mongo.dbNameMongo database name
kafka.serversKafka Hostname 
kafka.groupIdBatch Service component group name
kafka.saslMechanismSASL configuration
kafka.securityProtocolSecurity Protocol
kafka.sslTruststoreLocationSSL truststore file location
kafka.sslTruststorePasswordSSL truststore file password
kafka.usernameKafka username
kafka.passwordKafka dedicated user password
kafka.sslEndpointAlgorithmSSL endpoint identification algorithm

Batches configuration:

batches.${batch_name}
Batch configuration
batches.${batch_name}.inputFolderDirectory with input files
batches.${batch_name}.outputFolderDirectory with output files
batches.${batch_name}.columnsDefinitionFileFile defining mapping
batches.${batch_name}.requestTopicManager topic with events that are going to be sent to Reltio
batches.${batch_name}.ackTopicAck topic
batches.${batch_name}.parserTypeParser type. Defines separator and encoding format
batches.${batch_name}.preProcessingDefines preprocessing of input files
batches.${batch_name}.stages.${stage_name}.stageOrderStage priority
batches.${batch_name}.stages.${stage_name}.processorTypeProcessor type:
  • SIMPLE - change is applied only in mongo
  • ENTITY_SENDER - change is sent to Reltio
batches.${batch_name}.stages.${stage_name}.outputFileNameOutput file name
batches.${batch_name}.stages.${stage_name}.disabledIf stage is disabled
batches.${batch_name}.stages.${stage_name}.definitionsDefine which definition is used to map input file
batches.${batch_name}.stages.${stage_name}.deltaDetectionEnabledIf previous and current state of objects are compared
batches.${batch_name}.stages.${stage_name}.initDeletedLoadEnabled
batches.${batch_name}.stages.${stage_name}.fullAttributesMerge
batches.${batch_name}.stages.${stage_name}.postDeleteProcessorEnabled
batches.${batch_name}.stages.${stage_name}.senderHeadersDefines http headers
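An illustrative batch definition composed from the parameters above (the batch name, paths, file names, stage name and parser value are examples; the topics reuse the example topics shown earlier):

batches:
  all-sap:
    inputFolder: /data/sap/in
    outputFolder: /data/sap/out
    columnsDefinitionFile: sap-columns.yml
    requestTopic: gbl-prod-internal-async-all-sap
    ackTopic: gbl-prod-internal-async-all-sap-ack
    parserType: CSV                    # assumed value; defines separator and encoding
    stages:
      send-entities:
        stageOrder: 1
        processorType: ENTITY_SENDER   # change is sent to Reltio
        deltaDetectionEnabled: true
        disabled: false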


" }, { "title": "Kafka Connect", "pageID": "164469804", "pageLink": "/display/GMDM/Kafka+Connect", "content": "

Description

Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka® and other data systems.  It makes it simple to quickly define connectors that move large data sets in and out of Kafka.

Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency.

Flows

Snowflake: Base tables refresh

Snowflake: Events publish flow

Snowflake: History Inactive

Snowflake: LOV data publish flow

Snowflake: MT data publish flow

Configuration

Kafka Connect - properties description

param

value

group.id<env>-kafka-connect-snowflake
topic.creation.enablefalse

offset.storage.topic

<env>-internal-kafka-connect-snowflake-offset
config.storage.topic<env>-internal-kafka-connect-snowflake-config
status.storage.topic<env>-internal-kafka-connect-snowflake-status
key.converterorg.apache.kafka.connect.storage.StringConverter
value.converterorg.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enabletrue
value.converter.schemas.enabletrue
config.storage.replication.factor3
offset.storage.replication.factor3
status.storage.replication.factor3
rest.advertised.host.namelocalhost
rest.port8083
security.protocolSASL_PLAINTEXT
sasl.mechanismSCRAM-SHA-512
consumer.group.id<env>-kafka-connect-snowflake-consumer
consumer.security.protocolSASL_PLAINTEXT
consumer.sasl.mechanismSCRAM-SHA-512

connectors - SnowflakeSinkConnector - properties description

paramvalue

snowflake.topic2table.map

<env>-out-full-snowflake-all:HUB_KAFKA_DATA

topics

<env>-out-full-snowflake-all

buffer.flush.time

300

snowflake.url.name

<sf_instance_name>

snowflake.database.name

<db_name>

snowflake.schema.name

LANDING

buffer.count.records

1000

snowflake.user.name

<user_name>

value.converter

com.snowflake.kafka.connector.records.SnowflakeJsonConverter

key.converter

org.apache.kafka.connect.storage.StringConverter

buffer.size.bytes

60000000

snowflake.private.key.passphrase

<secret>

snowflake.private.key

<secret>
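For illustration, these properties are typically registered with the Kafka Connect REST API (rest.port 8083 above) as a connector definition along these lines (a sketch; placeholder values as in the table):

POST http://localhost:8083/connectors
{
  "name": "snowflake-sink",
  "config": {
    "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
    "topics": "<env>-out-full-snowflake-all",
    "snowflake.topic2table.map": "<env>-out-full-snowflake-all:HUB_KAFKA_DATA",
    "snowflake.url.name": "<sf_instance_name>",
    "snowflake.database.name": "<db_name>",
    "snowflake.schema.name": "LANDING",
    "snowflake.user.name": "<user_name>",
    "snowflake.private.key": "<secret>",
    "snowflake.private.key.passphrase": "<secret>",
    "buffer.flush.time": "300",
    "buffer.count.records": "1000",
    "buffer.size.bytes": "60000000",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "com.snowflake.kafka.connector.records.SnowflakeJsonConverter"
  }
}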


There is one exception related to the FLEX environment, where the S3SinkConnector is used instead - properties description

param

value

s3.region<region>
s3.part.retries10
s3.bucket.name<s3_bucket>
s3.compression.typenone
topics.dir<s3_topic_dir>
topics<env>-out-full-gblus-flex-all
flush.size1000000
timezoneUTC
locale<locale>
format.classio.confluent.connect.s3.format.json.JsonFormat
schema.generator.classio.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
schema.compatibilityNONE
aws.access.key.id<secret>
aws.secret.access.key<secret>
value.converterorg.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enablefalse
key.converterorg.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enablefalse
partition.duration.ms86400000
partitioner.classio.confluent.connect.storage.partitioner.TimeBasedPartitioner
storage.classio.confluent.connect.s3.storage.S3Storage
rotate.schedule.interval.ms86400000
rotate.interval.ms-1
path.formatYYYY-MM-dd
timestamp.extractorWallclock


" }, { "title": "Manager", "pageID": "164469894", "pageLink": "/display/GMDM/Manager", "content": "

Description

Manager is the main component taking part in client interactions with MDM systems.

It orchestrates API calls with  the following services:

Manager services are accessible with REST API.  Some services are exposed as asynchronous operations through Kafka for performance reasons.


Technology: Java, Spring, Apache Camel

Code link: mdm-manager

Flows

Exposed interfaces


Interface NameTypeEndpoint patternDescription
Get entityREST API

GET /entities/{entityId}

Get detailed entity information

Get multiple entitiesREST APIGET /entities/_byUrisReturn multiple entities with provided uris
Get entity countryREST APIGET /entities/{entityId}/_countryReturn country for an entity with the provided uri
Merge & UnmergeREST API

POST /entities/{entityId}/_merge

POST /entities/{entityId}/_unmerge

Merge entity A with entity B using Reltio URIs as IDs.

Unmerge entity B from entity A using Reltio URIs as IDs.


Merge & Unmerge ComplexREST API

POST/entities/_merge

POST/entities/_unmerge

Merge entity A with entity B using request body (JSON) with ids.

Unmerge entity B from entity A using request body (JSON) with ids.


Create/Update entityREST API & KAFKA

POST /hcp

PATCH /hcp

POST /hco

PATCH /hco

Create/partially update entity
Create/Update multiple entitiesREST API

POST /batch/hcp

PATCH /batch/hcp

POST /batch/hco

PATCH /batch/hco

Batch create HCO/HCP entities
Get entity by crosswalkREST APIGET /entities/crosswalkGet entity by crosswalk
Delete entity by crosswalkREST APIDELETE /entities/crosswalkDelete entity by crosswalk
Create/Update relationREST API

POST /relations

PATCH /relations

Create/update relation
Get relationREST APIGET /relations/{relationId}Get relation by reltio URI
Get relation by crosswalkREST APIGET /relations/crosswalkGet relation by crosswalk
Delete relation by crosswalkREST APIDELETE /relations/crosswalkDelete relation by crosswalk
Batch create relationREST APIPOST /batch/relationBatch create relation
Create/replace/update mco profileREST API

POST /mco

PATCH /mco

Create, replace or partially update mco profile
Create/replace/update batch mco profileREST API

POST /batch/mco

PATCH /batch/mco

Create, replace or partially update mco profiles
Update Usage FlagsREST APIPOST /updateUsageFlags

Create, update or remove UsageType UsageFlags of the Addresses' Address field of HCP and HCO entities

Search for change requestsREST APIGET /changeRequests/_byEntityCrosswalkSearch for change requests by entity crosswalk
Get change request by uriREST APIGET /changeRequests/{uri}Get change request by uri
Create change requestREST API

POST /changeRequest

Create change request - internal
Get change requestREST APIGET /changeRequestGet change request - internal
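For example, a synchronous read through the gateway could look like this (the ID reuses the sample entity shown in the response examples earlier; host omitted):

GET /entities/06SVUCq

which returns the detailed entity JSON in the Reltio model, as in those examples.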

Dependent components


ComponentInterfaceDescription
Reltio AdapterInternal Java interface

Used to communicate with Reltio

Nucleus AdapterInternal Java interface

Used to communicate with Nucleus

Authorization Engine

Internal Java interfaceProvide user authorization

MDM Routing Engine

Internal Java interfaceProvides routing

Configuration

The configuration is a composition of the dependent components' configurations and the parameters specified below.


Config ParameterDefault valueDescription
mongo.url
Mongo url
mongo.dbName
Mongo database name
mongoConnectionProps.dbUrl
Mongo database url
mongoConnectionProps.dbName
Mongo database name
mongoConnectionProps.user
Mongo username
mongoConnectionProps.password
Mongo user password
mongoConnectionProps.entityCollectionName
Entity collection name
mongoConnectionProps.lovCollectionName
Lov collection name
" }, { "title": "Authorization Engine", "pageID": "164469870", "pageLink": "/display/GMDM/Authorization+Engine", "content": "

Description

Authorization Engine is responsible for authorizing users executing API operations. All API operations are secured and can be executed only by users that have specific roles. The engine checks whether the user has a role that allows access to the API operation.


Flows

The Authorization Engine is engaged in all flows exposed by Manager component.


Exposed interfaces

Interface NameTypeJava class:methodDescription
Authorization ServiceJavaAuthorizationService:processChecks the user's permission to run a specific operation. If the user has been granted a role allowing this operation, the method permits the call; otherwise an authorization exception is thrown

Dependent components

All of the operations below are exposed by the Manager component and are described in detail there. The Description column of the table below lists the role names that must be assigned to a user permitted to use the described operations.

ComponentInterfaceDescription
Manager

GET /entities/*

GET_ENTITIES

GET /relations/*GET_RELATION
GET /changeRequests/*GET_CHANGE_REQUESTS

DELETE /entities/crosswalk

DELETE /relations/crosswalk

DELETE_CROSSWALK

POST /hcp

POST /batch/hcp

CREATE_HCP

PATCH /hcp

PATCH /batch/hcp

UPDATE_HCP

POST /hco

POST /batch/hco

CREATE_HCO

PATCH /hco

PATCH /batch/hco

UPDATE_HCO

POST /mco

POST /batch/mco

CREATE_MCO

PATCH /mco

PATCH /batch/mco

UPDATE_MCO
POST /relationsCREATE_RELATION
PATCH /relationsUPDATE_RELATION
POST /changeRequestCREATE_CHANGE_REQUEST
POST /updateUsageFlagsUSAGE_FLAG_UPDATE
POST /entities/{entityId}/_mergeMERGE_ENTITIES
POST /entities/{entityId}/_unmergeUNMERGE_ENTITIES
GET /lookupLOOKUPS

Configuration

Configuration parameterDescription
users[].nameUser name
users[].descriptionDescription of user
users[].defaultClientDefault MDM client used when the user doesn't specify a country
users[].rolesList of roles assigned to user
users[].countriesList of countries whose data can be managed by user
users[].sourcesList of sources (crosswalk types) that the user can use when managing data
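An illustrative user definition (user name, client, countries and sources are examples; the role names come from the Manager operations table above):

users:
  - name: pforcerx-api
    description: PforceRx integration user
    defaultClient: reltio-emea
    roles:
      - GET_ENTITIES
      - CREATE_HCP
      - UPDATE_HCP
    countries:
      - GB
      - FR
    sources:
      - PforceRx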
" }, { "title": "MDM Routing Engine", "pageID": "164469900", "pageLink": "/display/GMDM/MDM+Routing+Engine", "content": "

Description

MDM Routing Engine is responsible for deciding which MDM system has to be used to process a client request. The decision is made based on a decision table that maps an MDM system to a country.

In the case of multiple MDM systems for the same market, the decision table contains a user dimension allowing the MDM system to be selected by user name.

Flows

The MDM Routing Engine is engaged in all flows supported by Manager component.


Exposed interfaces

Interface NameTypeJava class:methodDescription
MDM Client Factory

JavaMDMClientFactory:getDefaultMDMClientGet default MDM client
JavaMDMClientFactory:getDefaultMDMClient(username)Get default MDM client specified for the user
JavaMDMClientFactory:getMDMClient(country)Get MDM client that supports the specified country
JavaMDMClientFactory:getMDMClient(country, user);Get MDM client that supports the specified country and user

Dependent components

ComponentInterfaceDescription
Reltio AdapterJavaProvides integrations with Reltio MDM
Nucleus AdapterJavaProvides integration with Nucleus MDM


Configuration

Configuration parameterDescription

users[].name

name of user
users[].defaultClientdefault mdm client for user
clientsDecisionTable.{selector name}.countries[]List of countries
clientsDecisionTable.{selector name}.clients[]

Map where the key is a username and the value is the MDM client name that will be used to process data coming from the defined countries.

Special key "default" defines the default MDM client which will be used in the case when there is no specific client for username.

mdmFactoryConfig.{mdm client name}.typeType of MDM client. Only two values are supported: "reltio" or "nucleus".
mdmFactoryConfig.{mdm client name}.configMDM client configuration. It is based on adapter type: Reltio or Nucleus
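An illustrative decision table (selector, client and user names are examples):

clientsDecisionTable:
  emea:
    countries:
      - GB
      - FR
    clients:
      default: reltio-emea            # used when no user-specific entry matches
      legacy-batch-user: nucleus-emea
mdmFactoryConfig:
  reltio-emea:
    type: reltio
    config: ...                       # Reltio Adapter configuration
  nucleus-emea:
    type: nucleus
    config: ...                       # Nucleus Adapter configuration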
" }, { "title": "Nucleus Adapter", "pageID": "164469896", "pageLink": "/display/GMDM/Nucleus+Adapter", "content": "

Description

Nucleus-adapter is a component of MDM Hub that is used to communicate with Nucleus. It provides 4 types of operations:

Nucleus 360 is an older COMPANY MDM platform compared to Reltio. It's used to store and manage data about healthcare professionals (HCP) and healthcare organizations (HCO).

It uses batch processing, so the results of an operation are applied to the golden record after a certain period of time.

Nucleus accepts requests with an XML formatted body and also sends responses in the same way.

Flows

Exposed interfaces


Interface NameTypeJava class:methodDescription
get entityJava
NucleusMDMClient:getEntity

Provides a mechanism to obtain information about the specified entity. Entity can be obtained by entity id, e.g. xyzf325

Two Nucleus methods are used to obtain detailed information about the entity.

The first is the Look up method, which returns basic information about the entity (XML format) by its id.

Next, that information is passed to the second Nucleus method, Get Profile Details, which responds with all available information (XML format).

Finally, all received information about the entity is gathered, converted to the Reltio model (JSON format) and returned to the client.

get entitiesJava
NucleusMDMClient:getEntities

Provides a mechanism to obtain basic information about a group of entities. The entity group is determined based on the defined filters (e.g. first name, last name, professional type code).

For this purpose only the Nucleus Look up method is used. This returns only basic information about entities, but it is performance-optimized and does not create unnecessary load on the server.

create/update entity

Java

NucleusMDMClient:createEntity

Using the Nucleus Add Update web service method, nucleus-adapter provides a mechanism to create or update data present in the database according to the business rules (createEntity method).

Nucleus-adapter accepts a JSON formatted request body, maps it to XML format, and then sends it to Nucleus.

get relationsJava
NucleusMDMClient:getRelation

To get relations nucleus-adapter uses the Nucleus affiliation interface.

Nucleus produces an XML formatted response and nucleus-adapter transforms it to the Reltio model (JSON format).
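A simplified sketch of the two-step getEntity flow described above, using the documented endpoints (the XML payloads and helper methods are schematic placeholders, not the real Nucleus schema):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NucleusGetEntitySketch {

    private static final HttpClient HTTP = HttpClient.newHttpClient();
    private static final String HOST = "https://nucleus-host"; // {{ nucleus host }}

    public static String getEntity(String entityId) throws Exception {
        // Step 1: Look up - basic information about the entity (XML) by its id
        String lookupXml = post(HOST + "/Nuc360QuickSearch5.0/Lookup",
                "<lookup><id>" + entityId + "</id></lookup>");
        // Step 2: Get Profile Details - all available information (XML)
        String detailsXml = post(HOST + "/Nuc360ProfileDetails5.0/Api/DetailSearch", lookupXml);
        // Step 3: convert the gathered XML to the Reltio model (JSON)
        return toReltioJson(detailsXml);
    }

    private static String post(String url, String xmlBody) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString(xmlBody))
                .build();
        return HTTP.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    private static String toReltioJson(String detailsXml) {
        return "{}"; // placeholder for the XML -> Reltio model mapping
    }
}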

Dependent components


ComponentInterfaceDescription
Nucleus

https://{{ nucleus host }}/CustomerManage_COMPANY_EU_Prod/manage.svc?singleWsdl


Nucleus endpoint for Creating/updating hcp and hco
https://{{ nucleus host }}/Nuc360ProfileDetails5.0/Api/DetailSearchNucleus endpoint for getting details about entity
https://{{ nucleus host }}/Nuc360QuickSearch5.0/LookupNucleus endpoint for getting basic information about entity
https://{{ nucleus host }}/Nuc360DbSearch5.0/api/affiliationNucleus endpoint for getting relations information

Configuration


Config ParameterDefault valueDescription
nucleusConfig.baseURLnullBase url of Nucleus mdm
nucleusConfig.usernamenullNucleus username

nucleusConfig.password

nullNucleus password
nucleusConfig.additionalOptions.customerManageUrlnullNucleus endpoint for creating/updating entities
nucleusConfig.additionalOptions.profileDetailsUrlnullNucleus endpoint for getting detailed information about entity
nucleusConfig.additionalOptions.quickSearchUrlnullNucleus endpoint for getting basic information about entity
nucleusConfig.additionalOptions.affiliationUrlnullNucleus endpoint for getting information about entities relations
nucleusConfig.additionalOptions.defaultIdTypenullDefault IdType for entities search(used if another not provided)
" }, { "title": "Quality Engine and Rules", "pageID": "164469944", "pageLink": "/display/GMDM/Quality+Engine+and+Rules", "content": "

Description

Quality engine is used to verify data quality in entity attributes. It is used for MCO, HCO, HCP entities.

Quality engine is responsible for preprocessing Entity when a specific precondition is met. This engine is started in the following cases:

It has two components: quality-engine and quality-engine-integration


Flows

Validation by quality rules is done before sending entities to Reltio. Quality rules must be enabled in the configuration.

Data quality checking is started in com.COMPANY.mdm.manager.service.QualityService. The whole rule flow for an entity shares one context (com.COMPANY.entityprocessingengine.pipeline.RuleContext)


Rule

A rule has the following configuration:

Preconditions

Structure:

Example:

preconditions:

    - type: source

      values: 

         - CENTRIS

Possible types:

Checks

Structure:

Example:

check:

   type: match

   attribute: FirstName

   values:

       - '[^0-9@#$%^&*~!"<>?/|\\_]+'

Possible types:


Actions

Structure:

Example:

action:

   type: add

   attributes:

      - DataQuality[].DQDescription

   value: "{source}_005_02"

Possible types:
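Putting the three fragments above together, a complete rule could look like this (illustrative; it simply concatenates the precondition, check and action examples shown above):

preconditions:
    - type: source
      values:
         - CENTRIS
check:
   type: match
   attribute: FirstName
   values:
       - '[^0-9@#$%^&*~!"<>?/|\\_]+'
action:
   type: add
   attributes:
      - DataQuality[].DQDescription
   value: "{source}_005_02"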


Dependent components

ComponentInterfaceFlowDescription
managerQualityServiceValidationRuns quality engine validation

Configuration

Config ParameterDefault valueDescription
validationOntrueTurns validation on or off - it needs to be specified in application.yml
partialOverrideValidationOntrueIt turns on or off validation for updates

hcpQualityRulesConfigs

list of files with quality rules for hcpIt contains a list of files with quality rules for hcp

hcoQualityRulesConfigs

list of files with quality rules for hcoIt contains a list of files with quality rules for hco

hcpAffiliatedHCOsQualityRulesConfigs

list of files with quality rules for affiliated HCOsIt contains a list of files with quality rules for affiliated HCOs
mcoQualityRulesConfigslist of files with quality rules for mcoIt contains a list of files with quality rules for mco
" }, { "title": "Reltio Adapter", "pageID": "164469898", "pageLink": "/display/GMDM/Reltio+Adapter", "content": "

Description

Reltio-adapter is a component of MDM Hub (part of mdm-manager) that is used to communicate with Reltio.

Flows

Exposed interfaces

Interface NameTypeEndpoint patternDescription
Get entityJava
ReltioMDMClient:getEntity

Get detailed entity information by entity URI

Get entitiesJava
ReltioMDMClient:getEntities

Get basic information about a group of entities based on applied filters

Create/Update entityJava
ReltioMDMClient:createEntity
Create/partially update entity(HCO, HCP, MCO)
Create/Update multiple entitiesJava
ReltioMDMClient:createEntities
Batch create HCO/HCP/MCO entities
Delete entityJava
ReltioMDMClient:deleteEntity
Deletes entity by its URI
Find entityJava
ReltioMDMClient:findEntity

Finds an entity. The search mechanism is flexible and chooses the proper method:

  • If a URI is supplied in the entityPattern, the getEntity method is used.
  • If no URI is specified but crosswalks are found, the getEntityByCrosswalk method is used.
  • Otherwise, the find matches method is used.
Merge entitiesJava
ReltioMDMClient:mergeEntities

Merge two entities based on Reltio merging rules.

Also accepts explicit winner as explicitWinnerEntityUri.

Unmerge entitiesJava
ReltioMDMClient:unmergeEntities
Unmerge entities

Unmerge Entity Tree

Java
ReltioMDMClient:treeUnmergeEntities

Unmerge entities recursively (details in the Reltio treeUnmerge documentation)

Scan entitiesJava
ReltioMDMClient:scanEntities
Iterate entities of a specific type in a particular tenant.
Delete crosswalkJava
ReltioMDMClient:deleteCrosswalk
Deletes crosswalk from an object
Find matchesJava
ReltioMDMClient:findMatches

Returns potential matches based on rules in entity type configuration

Get entity connectionsJava
ReltioMDMClient:getMultipleEntityConnections
Get connected entities
Get entity by a crosswalkJava
ReltioMDMClient:getEntityByCrosswalk
Get entity by the crosswalk
Delete relation by a crosswalkJava
ReltioMDMClient:deleteRelation
Delete relation by relation URI
Get relationJava
ReltioMDMClient:getRelation
Get relation by relation URI
Create/Update relationJava
ReltioMDMClient:createRelation
Create/update relation
Scan relationsJava
ReltioMDMClient:scanRelations

Iterate relations of a specific type in a particular tenant.

Get relation by a crosswalkJava
ReltioMDMClient:getRelationByCrosswalk
Get relation by the crosswalk
Batch create relationJava
ReltioMDMClient:createRelations
Batch create relation
Search for change requestsJava
ReltioMDMClient:search
Search for change requests by entity crosswalk
Get change request by URIJava
ReltioMDMClient:getChangeRequest
Get change request by URI
Create change requestJava
ReltioMDMClient:createChangeRequest
Create change request - internal
Delete change requestJava
ReltioMDMClient:deleteChangeRequest
Delete change request
Apply change requestJava
ReltioMDMClient:applyChangeRequest
Apply data change request
Reject change requestJava
ReltioMDMClient:rejectChangeRequest
Reject data change request
Add/update external infoJava
ReltioMDMClient:createOrUpdateExternalInfo
Add external info to specified DCR
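The findEntity dispatch logic described in the table can be summarised in code. The sketch below uses minimal stand-in types (Entity, Crosswalk and EntityPattern are illustrative); the client methods mirror the interface table:

import java.util.List;

public class FindEntitySketch {

    // Minimal stand-ins for the real adapter types (illustrative only)
    interface Entity {}
    interface Crosswalk {}
    interface EntityPattern {
        String uri();
        List<Crosswalk> crosswalks();
    }
    interface ReltioMDMClient {
        Entity getEntity(String uri);
        Entity getEntityByCrosswalk(Crosswalk crosswalk);
        List<Entity> findMatches(EntityPattern pattern);
    }

    // The documented dispatch order: URI first, then crosswalks, then match rules
    static Entity findEntity(ReltioMDMClient client, EntityPattern pattern) {
        if (pattern.uri() != null) {
            return client.getEntity(pattern.uri());
        }
        List<Crosswalk> crosswalks = pattern.crosswalks();
        if (crosswalks != null && !crosswalks.isEmpty()) {
            return client.getEntityByCrosswalk(crosswalks.get(0));
        }
        List<Entity> matches = client.findMatches(pattern);
        return matches.isEmpty() ? null : matches.get(0);
    }
}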

Dependencies


ComponentInterfaceDescription
Reltio







GET {TenantURL}/entities/{Entity ID}

Get detailed information about the entity

https://docs.reltio.com/entitiesapi/getentity.html

GET {TenantURL}/entities

Get basic (or chosen) information about entities based on applied filters

https://docs.reltio.com/mulesoftconnector/getentities_2.html

GET {TenantURL}/entities/_byCrosswalk/{crosswalkValue}?type={sourceType}

Get entity by crosswalk

https://docs.reltio.com/entitiesapi/getentitybycrosswalk_2.html

DELETE {TenantURL}/{entity object URI}

Delete entity

https://docs.reltio.com/entitiesapi/deleteentity.html

POST {TenantURL}/entities

Create/update a single entity or a batch of entities

https://docs.reltio.com/entitiesapi/createentities.html

POST {TenantURL}/entities/_dbscan
https://docs.reltio.com/searchapi/iterateentitiesbytype.html?hl=_dbscan
POST {TenantURL}/entities/{winner}/_sameAs?uri=entities/{loser}

Merge entities based on loser and winner IDs

https://docs.reltio.com/mergeapis/mergingtwoentities.html

POST {TenantURL}/<origin id>/_unmerge?contributorURI=<spawn URI>

Unmerge entities

https://docs.reltio.com/mergeapis/unmergeentitybycontriburi.html

POST {TenantURL}/<origin id>/_treeUnmerge?contributorURI=<spawn URI>

Tree unmerge entities

https://docs.reltio.com/mergeapis/unmergeentitybycontriburi.html

GET {TenantURL}/relations/

Get relation by relation URI

https://docs.reltio.com/relationsapi/getrelationship.html

POST {TenantURL}/relations

Create relation

https://docs.reltio.com/relationsapi/createrelationships.html

POST {TenantURL}/relations/_dbscan
https://docs.reltio.com/relationsapi/iteraterelationshipbytype.html?hl=relations%2F_dbscan
GET {TenantURL}/changeRequests

Get change request

https://docs.reltio.com/dcrapi/searchdcr.html

GET {TenantURL}/changeRequests/{id}

Returns a data change request by ID.

https://docs.reltio.com/dcrapi/getdatachangereq.html

POST {TenantURL}/changeRequests 

Create data change request

https://docs.reltio.com/dcrapi/createnewdatachangerequest.html

DELETE {TenantURL}/changeRequests/{id} 

Delete data change request

https://docs.reltio.com/dcrapi/deletedatachangereq.html

POST {TenantURL}/changeRequests/_byUris/_apply

This API applies (commits) all changes inside a data change request to real entities and relationships.

https://docs.reltio.com/dcrapi/applydcr.html

POST {TenantURL}/changeRequests/_byUris/_reject

Reject data change request

https://docs.reltio.com/dcrapi/rejectdcr.html

POST {TenantURL}/entities/_matches

Returns potential matches based on rules in entity type configuration.
https://docs.reltio.com/matchesapi/serachpotentialmatchesforjsonentity.html
POST {TenantURL}/_connectionsGet connected entities
https://docs.reltio.com/relationsapi/requestdifferententityconnections.html?hl=_connections
DELETE /{crosswalk URI}

Delete crosswalk

https://docs.reltio.com/mergeapis/dataapicrosswalks.html?hl=delete,crosswalkdataapicrosswalks__deletecrosswalk#dataapicrosswalks__deletecrosswalk


POST {TenantURL}/changeRequests/0000OVV/_externalInfo

Add/update external info to DCR

https://docs.reltio.com/dcrapi/addexternalinfotochangereq.html?hl=_externalinfo


Configuration


Config ParameterDefault valueDescription
mdmConfig.authURLnullReltio authentication URL
mdmConfig.baseURLnullReltio base URL
mdmConfig.rdmUrlnullReltio  RDM URL

mdmConfig.username

nullReltio username
mdmConfig.passwordnullReltio password
mdmConfig.apiKeynullReltio apiKey
mdmConfig.apiSecretnullReltio apiSecret
translateCache.milisecondsToExpire


translateCache.objectsLimit

" }, { "title": "Map Channel", "pageID": "302697819", "pageLink": "/display/GMDM/Map+Channel", "content": "

Description

Map Channel integrates data from the GCP and GRV systems. External systems use an SQS queue or REST API to load data. The data is then copied to an internal queue, which allows processing to be redone at a later time. The identifier and market contained in the data are used to retrieve the complete data via REST requests. The data is then sent to the Manager component for storage in the MDM system. The application provides features for filtering events by country, status or permissions. This component uses different mappers to process data for the COMPANY or IQVIA data model.


Technology: Java, Spring, Apache Camel

Code link: map-channel

Flows

Exposed interfaces


Interface nameTypeEndpoint patternDescription
create contactREST API

POST /gcp

create HCP profile based on GCP contact data

update contactREST APIPUT /gcp/{gcpId}update HCP profile based on GCP contact data
create userREST APIPOST /grvcreate HCP profile based on GRV user data
update userREST APIPUT /grv/{grvId}update HCP profile based on GRV user data


Dependent components


ComponentInterfaceDescription
ManagerREST API

create HCP, create HCO, update HCP, update HCO

Configuration

The configuration is a composition of the dependent components' configurations and the parameters specified below.


Kafka processing config


Config paramDefault valueDescription
kafkaProducerProp
kafka producer properties
kafkaConsumerProp
kafka consumer properties
processing.endpoints
kafka internal topics configuration
processing.endpoints.[endpoint-type].topic
kafka endpoint-type topic name
processing.endpoints.[endpoint-type].activeOnStartup
should endpoint start on application startup
processing.endpoints.[endpoint-type].consumerCount
kafka endpoint consumer count
processing.endpoints.[endpoint-type].breakOnFirstError
should kafka rebalance on error
processing.endpoints.[endpoint-type].autoCommitEnable
should kafka auto commit be enabled

DEG config

Config paramDefault valueDescription
DEG.url
DEG gateway URL
DEG.oAuth2Service
DEG authorization service URL
DEG.protocol
DEG protocol
DEG.port
DEG port
DEG.prefix
DEG API prefix

Transaction log config

Config paramDefault valueDescription
transactionLogger.kafkaEfk.enable
should kafka efk transaction logger enable
transactionLogger.kafkaEfk.kafkaProducer.topic
kafka efk topic name
transactionLogger.kafkaEfk.logContentOnlyOnFailed
Log request body only on failed transactions
transactionLogger.simpleLog.enable
should simple console transaction logger enable


Filter config


Config paramDefault valueDescription
activeCountries.GRV
list of allowed GRV countries
activeCountries.GCP
list of allowed GCP countries
deactivatedStatuses.[Source].[Country]
list of ValidationStatus attribute values for which HCP will be deleted for given country and source
deactivateGCPContactWhenInactive
list of countries for which the GCP contact will be deleted when the contact is inactive
deactivatedWhenNoPermissions
list of countries for which the GCP contact will be deleted when contact permissions are missing
deleteOption.[Source].none
HCP will be sent to MDM when deleted date is present
deleteOption.[Source].hard
call delete crosswalk action when deleted date is present
deleteOption.[Source].soft
call update HCP when delete date is present
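An illustrative filter configuration (the exact value shapes are not documented here; this sketch assumes country lists, and all values are examples):

activeCountries:
  GRV: [GB, FR]
  GCP: [DE, PL]
deactivatedStatuses:
  GCP:
    DE:
      - Invalid
deleteOption:
  GCP:
    soft: [DE]   # update HCP when the delete date is present
    hard: [PL]   # delete crosswalk when the delete date is present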

Mapper config

Config paramDefault valueDescription
gcpMapper
name of GCP mapper implementation
grvMapper
name of GRV mapper implementation

Mappings

IQVIA mapping

\"\"

COMPANY mapping

\"\"

" }, { "title": "MDM Admin", "pageID": "284817212", "pageLink": "/display/GMDM/MDM+Admin", "content": "

Description

MDM Admin exposes an API of tools automating repetitive and/or difficult Operating Procedures and Tasks. It also aggregates APIs of various Hub components that should not be exposed to the world, while providing an authorization layer. Permissions to each Admin operation can be granted to a client's API user.

Flows

Exposed interfaces

REST API

Swagger: https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-prod/swagger-ui/index.html

Dependent components

ComponentInterfaceFlowDescription
Reconciliation ServiceReconciliation Service APIEntities ReconciliationAdmin uses internal Reconciliation Service API to trigger reconciliations. Passes the same inputs and returns the same results.
Relations Reconciliation
Partials Reconciliation
Precallback ServicePrecallback Service APIPartials ListAdmin fetches a list of partials directly from Precallback Service and returns it to the user or uses it to reconcile all entities stuck in partial state.
Partials Reconciliation
AirflowAirflow APIEvents ResendAdmin allows triggering an Airflow DAG with request parameters/body and checking its status.
Events Resend Complex
KafkaKafka Client/Admin APIKafka OffsetsAdmin allows modifying topic/group offsets.

Configuration

Config Parameter

Default value

Description

airflow-config:
url: https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com
user: admin
password: ${airflow.password}
dag: reconciliation_system_amer_dev

-

Dependent Airflow configuration including external URL, DAG name and credentials. Entities Reload operation will trigger a DAG of configured name in the configured Airflow instance.
services:
reconciliationService: mdmhub-mdm-reconciliation-service-svc:8081
precallbackService: mdmhub-precallback-service-svc:8081
URLs of dependent services. Default values lead to internal Kubernetes services.
" }, { "title": "MDM Integration Tests", "pageID": "302687584", "pageLink": "/display/GMDM/MDM+Integration+Tests", "content": "

Description

The module contains Integration Tests. All Integration Tests are divided into different categories based on the environment on which they are executed.

Technology:

Gradle tasks

The table shows which environment uses which gradle task.

EnvironmentGradle taskConfiguration properties
ALL

commonIntegrationTests

-
GBLUS

integrationTestsForCOMPANYModelRegionUS

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_gblus/group_vars/gw-services/int_tests.yml
CHINA

integrationTestsForCOMPANYModelChina

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_devchina_apac/group_vars/gw-services/int_tests.yml
EMEA

integrationTestsForCOMPANYModelRegionEMEA

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_emea/group_vars/gw-services/int_tests.yml

APACintegrationTestsForCOMPANYModelRegionAPAChttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_apac/group_vars/gw-services/int_tests.yml
AMER

integrationTestsForCOMPANYModelRegionAMER

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_amer/group_vars/gw-services/int_tests.yml
OTHERS

integrationTestsForIqviaModel

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_gbl/group_vars/gw-services/int_tests.yml

The Jenkins script with configuration: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/jenkins/k8s_int_test.groovy

Gradle tasks - IT categories

The table shows which test categories are included in gradle tasks.

Gradle taskTest category

commonIntegrationTests

  • CommonIntegrationTest

integrationTestsForCOMPANYModelRegionUS

  • IntegrationTestForCOMPANYModel
  • IntegrationTestForCOMPANYModelRegionUS
integrationTestsForCOMPANYModelChina
  • IntegrationTestForCOMPANYModel
  • IntegrationTestForCOMPANYModelChina
integrationTestsForCOMPANYModel
integrationTestsForCOMPANYModelRegionAMER
integrationTestsForCOMPANYModelRegionAPAC
integrationTestsForCOMPANYModelRegionEMEA
integrationTestsForIqviaModel
  • IntegrationTestForIqiviaModel

Tests are configured in build.gradle file: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/build.gradle?at=refs%2Fheads%2Fproject%2Fboldmove

Test use cases included in categories

Test categoryTest use cases

CommonIntegrationTest

Common Integration Test

IntegrationTestForIqiviaModel

Integration Test For Iqvia Model

IntegrationTestForCOMPANYModel

Integration Test For COMPANY Model

IntegrationTestForCOMPANYModelRegionUS

Integration Test For COMPANY Model Region US

IntegrationTestForCOMPANYModelChina

Integration Test For COMPANY Model China

IntegrationTestForCOMPANYModelRegionAMER

Integration Test For COMPANY Model Region AMER

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

Integration Test For COMPANY Model DCR2Service

IntegrationTestsForCOMPANYModelRegionEMEA

Integration Test For COMPANY Model Region EMEA

" }, { "title": "Nucleus Subscriber", "pageID": "164469790", "pageLink": "/display/GMDM/Nucleus+Subscriber", "content": "

Description

Nucleus subscriber collects events from Amazon AWS S3, modifies them and then transfers them to the right Kafka topic.

Data changes are stored as archive files on S3, from where they are pulled by the nucleus subscriber.
The next step is to modify the event from the Reltio format to one accepted by the MDM Hub. The modified data is then transferred to the appropriate Kafka topic.

Data pulls from S3 are performed periodically, so changes become visible after some time.
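For example, when a file matching the fileFormat.hcp pattern (.*Professional.exp by default) appears in the configured S3 bucket, the poller picks it up, the records are transformed into HUB events, and the result is published to the {env}-internal-nucleus-events topic, from where the Entity enricher consumes it.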


Part of: Streaming channel

Technology: Java, Spring, Apache Camel

Code link: nucleus-subscriber

Flows

Exposed interfaces


Interface NameTypeEndpoint patternDescription
Kafka topic KAFKA
{env}-internal-nucleus-events
Events pulled from S3 are then transformed and published to the Kafka topic

Dependencies


ComponentInterfaceFlowDescription
AWS S3
Entity change events processing (Nucleus)Stores events regarding data modification in Nucleus
Entity enricher

Nucleus Subscriber downstream component. Collects events from Kafka and produces events enriched with the targetEntity

Configuration


Config ParameterDefault valueDescription
nucleus_subscriber.server.port

8082

Nucleus subscriber port
nucleus_subscriber.kafka.servers

10.192.71.136:9094

Kafka server
nucleus_subscriber.lockingPolicy.zookeeperServer

null

Zookeeper server
nucleus_subscriber.lockingPolicy.groupName

null

Zookeeper group name
nucleus_subscriber.deduplicationCache.maxSize

100000


nucleus_subscriber.deduplicationCache.expirationTimeSeconds

3600


nucleus_subscriber.kafka.groupId

hub

Kafka group Id
nucleus_subscriber.kafka.username

null

Kafka username
nucleus_subscriber.kafka.password

null

Kafka user password
nucleus_subscriber.publisher.entities.topic

dev-internal-integration-tests


nucleus_subscriber.publisher.dictioneries.topic

dev-internal-reltio-dictionaries-events


nucleus_subscriber.publisher.relationships.topic

dev-internal-integration-tests


nucleus_subscriber.mongoConnectionProp.dbUrl

null

MongoDB url
nucleus_subscriber.mongoConnectionProp.dbName

null

MongoDB database name
nucleus_subscriber.mongoConnectionProp.user

null

MongoDB user
nucleus_subscriber.mongoConnectionProp.password

null

MongoDB user password
nucleus_subscriber.mongoConnectionProp.chechConnectionOnStartup

null

Check connection on startup( yes/no )
nucleus_subscriber.poller.type

file

Source type
nucleus_subscriber.poller.enableOnStartup

yes

Enable on startup( yes/no )
nucleus_subscriber.poller.fileMask

null

Input files mask
nucleus_subscriber.poller.bucketName

candf-mesos

Name of S3 bucket
nucleus_subscriber.poller.processingTimeoutMs

3000000

Timeout in milliseconds
nucleus_subscriber.poller.inputFolder

C:/PROJECTS/COMPANY/GIT/mdm-publishing-hub/nucleus-subscriber/src/test/resources/data

Input directory
nucleus_subscriber.poller.outputFolder

null

Output directory
nucleus_subscriber.poller.key

null

Poller key
nucleus_subscriber.poller.secret

null

Poller secret
nucleus_subscriber.poller.region

EU_WEST_1

Poller region
nucleus_subscriber.poller.alloweSubDirs

null

Allowed sub directories( e.g. by country code - AU, CA )
nucleus_subscriber.fileFormat.hcp

.*Professional.exp

Input file format for hcp
nucleus_subscriber.fileFormat.hco

.*Organization.exp

Input file format for hco
nucleus_subscriber.fileFormat.dictionary

.*Code_Header.exp

Input file format for dictionary
nucleus_subscriber.fileFormat.dictionaryItem

.*Code_Item.exp

Input file format for dictionary item
nucleus_subscriber.fileFormat.dictionaryItemDesc

.*Code_Item_Description.exp

Input file format for dictionary item description
nucleus_subscriber.fileFormat.dictionaryItemExternal

.*Code_Item_External.exp

Input file format for dictionary item external

nucleus_subscriber.fileFormat.

customerMerge

.*customer_merge.exp

Input file format for customer merge

nucleus_subscriber.fileFormat.specialty

.*Specialty.exp

Input file format for specialty

nucleus_subscriber.fileFormat.address

.*Address.exp

Input file format for address

nucleus_subscriber.fileFormat.degree

.*Degree.exp

Input file format for degree

nucleus_subscriber.fileFormat.identifier

.*Identifier.exp

Input file format for identifier

nucleus_subscriber.fileFormat.communication

.*Communication.exp

Input file format for communication

nucleus_subscriber.fileFormat.optout

.*Optout.exp

Input file format for optout
nucleus_subscriber.fileFormat.affiliation

.*Affiliation.exp

Input file format for affiliation
nucleus_subscriber.fileFormat.affiliationRole

.*AffiliationRole.exp

Input file format for affiliation role


" }, { "title": "OK DCR Service", "pageID": "164469929", "pageLink": "/display/GMDM/OK+DCR+Service", "content": "

Description

Validation of information regarding healthcare institutions and professionals based on the ONE KEY web services database

Flows

Exposed interfaces


Interface NameTypeEndpoint patternDescription
internal onekeyvr inputKAFKA
${env}-internal-onekeyvr-in
events being sent by the event publisher component. Event types being considered: HCP_*, HCO_*, ENTITY_MATCHES_CHANGED
internal onekeyvr change requests inputKAFKA
${env}-internal-onekeyvr-change-requests-in

Dependent components


ComponentInterfaceFlowDescription

Manager





GetEntitygetEntitygetting the entity from RELTIO
MDMIntegrationService


getMatchesgetting matches from RELTIO
translateLookupstranslating lookup codes
createEntityDCR entity created in Reltio and the relation between the processed entity and the DCR entity
createResponse
patchEntityupdating the entity in RELTIO

Both the ONEKEY service and the Manager service are called with a retry policy.

Configuration


Config ParameterDefault valueDescription
onekey.oneKeyIntegrationService.url${oneKeyClient.url}
onekey.oneKeyIntegrationService.userName${oneKeyClient.userName}
onekey.oneKeyIntegrationService.password${oneKeyClient.password}
onekey.oneKeyIntegrationService.connectionPoint${oneKeyClient.connectionPoint}
onekey.oneKeyIntegrationService.logMessages${oneKeyClient.logMessages}
onekey.oneKeyIntegrationService.retrying.maxAttemts22Limit to the number of attempts -> Exponential Back Off
onekey.oneKeyIntegrationService.retrying.initialIntervalMs1000Initial interval -> Exponential Back Off
onekey.oneKeyIntegrationService.retrying.multiplier2.0Multiplier -> Exponential Back Off
onekey.oneKeyIntegrationService.retrying.maxIntervalMs3600000Max interval -> Exponential Back Off
onekey.gatewayIntegrationService.url${gateway.url}
onekey.gatewayIntegrationService.userName${gateway.userName}
onekey.gatewayIntegrationService.apiKey${gateway.apiKey}
onekey.gatewayIntegrationService.logMessages${gateway.logMessages}
onekey.gatewayIntegrationService.timeoutMs${gateway.timeoutMs}
onekey.gatewayIntegrationService.gatewayRetryConfig.maxAttemts22Limit to the number of attempts -> Exponential Back Off
onekey.gatewayIntegrationService.gatewayRetryConfig.initialIntervalMs1000Initial interval -> Exponential Back Off
onekey.gatewayIntegrationService.gatewayRetryConfig.multiplier2.0Multiplier -> Exponential Back Off
onekey.gatewayIntegrationService.gatewayRetryConfig.maxIntervalMs3600000Max interval -> Exponential Back Off
onekey.submitVR.eventInputTopic${env}-internal-onekeyvr-inSubmit Validation input topic
onekey.submitVR.skipEventTypeSuffix

_REMOVED

_INACTIVATED

_LOST_MERGE

Submit Validation event type string endings to skip
onekey.submitVR.storeNamewindow-deduplication-storeInternal kafka topic that stores events to deduplicate
onekey.submitVR.window.duration4hThe size of the deduplication window.
onekey.submitVR.window.name<no value>Internal kafka topic that stores events being grouped by.
onekey.submitVR.window.gracePeriod0The grace period to admit out-of-order events to a window.
onekey.submitVR.window.byteLimit107374182Maximum number of bytes the size-constrained suppression buffer will use.
onekey.submitVR.window.suppressNamedcr-suppressThe specified name for the suppression node in the topology.
onekey.traceVR.enabletrue
onekey.traceVR.minusExportDateTimeMillis3600000
onekey.traceVR.schedule.cron0 0 * ? * * # every hour






quartz.properties.org.quartz.scheduler.instanceNamemdm-onekey-dcr-service

Can be any string, and the value has no meaning to the scheduler itself - but rather serves as a mechanism for client code to distinguish schedulers when multiple instances are used within the same program. If you are using the clustering features, you must use the same name for every instance in the cluster that is ‘logically’ the same Scheduler.

quartz.properties.org.quartz.scheduler.skipUpdateChecktrue

Whether or not to skip running a quick web request to determine if there is an updated version of Quartz available for download. If the check runs, and an update is found, it will be reported as available in Quartz’s logs. You can also disable the update check with the system property “org.terracotta.quartz.skipUpdateCheck=true” (which you can set in your system environment or as a -D on the java command line). It is recommended that you disable the update check for production deployments.

quartz.properties.org.quartz.scheduler.instanceIdGenerator.classorg.quartz.simpl.HostnameInstanceIdGenerator

Only used if org.quartz.scheduler.instanceId is set to “AUTO”. Defaults to “org.quartz.simpl.SimpleInstanceIdGenerator”, which generates an instance id based upon host name and time stamp. Other InstanceIdGenerator implementations include SystemPropertyInstanceIdGenerator (which gets the instance id from the system property “org.quartz.scheduler.instanceId”) and HostnameInstanceIdGenerator, which uses the local host name (InetAddress.getLocalHost().getHostName()). You can also implement the InstanceIdGenerator interface yourself.

quartz.properties.org.quartz.jobStore.classcom.novemberain.quartz.mongodb.MongoDBJobStore
quartz.properties.org.quartz.jobStore.mongoUri${mongo.url}
quartz.properties.org.quartz.jobStore.dbName${mongo.dbName}
quartz.properties.org.quartz.jobStore.collectionPrefix quartz-onekey-dcr
quartz.properties.org.quartz.scheduler.instanceIdAUTO

Can be any string, but must be unique for all schedulers working as if they are the same ‘logical’ Scheduler within a cluster. You may use the value “AUTO” as the instanceId if you wish the Id to be generated for you. Or the value “SYS_PROP” if you want the value to come from the system property “org.quartz.scheduler.instanceId”.

quartz.properties.org.quartz.jobStore.isClusteredtrue
quartz.properties.org.quartz.threadPool.threadCount1












" }, { "title": "Publisher", "pageID": "164469927", "pageLink": "/display/GMDM/Publisher", "content": "

Description

Publisher is a member of the Streaming channel. It distributes events to target client topics based on configured routing rules.

Main tasks:


Technology: Java, Spring, Kafka

Code: event-publisher

Flows

Exposed interfaces


Interface NameTypeEndpoint patternDescription

Kafka - input topics for entities data


KAFKA

${env_name}-internal-reltio-proc-events

${env_name}-internal-nucleus-events

Stores events about entities, relations and change requests changes.
Kafka - input topics for dictionaries dataKAFKA

${env_name}-internal-reltio-dictionaries-events

${env_name}-internal-nucleus-dictionaries-events

Stores events about lookup (LOV) changes.

Kafka - output topics

KAFKA

${env_name}-out-*

*(All topics that get events from publisher)

Output topics for Publisher.

After filtration, each event is transferred to the appropriate topic based on the routing rules defined in the configuration

Resend eventsREST

POST /resendLastEvent

Allows triggering event reconstruction. Events are created based on the current state fetched from MongoDB and then forwarded according to the defined routing rules.
Mongo collections

Mongo collectionentityHistoryCollection storing the last known state of entities data
Mongo collectionentityRelationsCollection storing the last known state of relations data
Mongo collectionLookupValuesCollection storing the last known state of lookups (LOVs) data

Dependencies


ComponentInterfaceFlowDescription
Callback ServiceKAFKA

Creates input for Publisher

Responsible for following transformations:

  • HCO names calculation
  • Dangling affiliations
  • Crosswalk cleaner
  • Precallback stream
MongoDB
Stores the last known state of objects such as entities and relations. Used as cached data to reduce Reltio load. It is updated after every entity change event
Kafka Connect Snowflake connectorKAFKA

Snowflake: Events publish flow

Receives events from the publisher and loads them into the Snowflake database
Clients of the HUB

Clients that receive events from MDM HUB

MAPP, China, etc

Configuration


Config ParameterDefault valueDescription

event_publisher.users

null

Publisher users dictionary used to authenticate users in ResendService operations.

User parameters:

  • name,
  • description,
  • roles (list) - currently there is only one role which can be assigned to a user:
    • RESEND_EVENT - a user with this role is allowed to use the resend last event operation

event_publisher.activeCountries
- AD
- BL
- FR
- GF
- GP
- MC
- MF
- MQ
- MU
- NC
- PF
- PM
- RE
- WF
- YT
- CN
List of active countries

event_publisher.lookupValuesPoller.

interval

60mPolling interval of lookups (LOVs) from Reltio

event_publisher.lookupValuesPoller.

batchSize

1000Poller batch size

event_publisher.lookupValuesPoller.

enableOnStartup

yes

Enable on startup

( yes/no )

event_publisher.lookupValuesPoller.

dbCollectionName

LookupValuesMongo collection name storing fetched lookup data

event_publisher.eventRouter.incomingEvents

incomingEvents:
reltio:
topic: dev-internal-reltio-entity-and-relation-events
enableOnStartup: no
startupOrder: 10
properties:
autoOffsetReset: latest
consumersCount: 20
maxPollRecords: 50
pollTimeoutMs: 30000
Configuration of the incoming topic with events regarding entities, relations etc.

event_publisher.eventRouter.dictionaryEvents

dictionaryEvents:
reltio:
topic: dev-internal-reltio-dictionaries-events
enableOnStartup: true
startupOrder: 30
properties:
autoOffsetReset: earliest
consumersCount: 10
maxPollRecords: 5
pollTimeoutMs: 30000

Configuration of incoming topic with events regarding dictionary changes.

event_publisher.eventRouter.historyCollectionName

entityHistoryName of the collection storing entities state

event_publisher.eventRouter.relationCollectionName

entityRelationsName of the collection storing relations state

event_publisher.eventRouter.routingRules.[]

null

List of routing rules. Routing rule definition has following parameters

  • id - unique identifier of rule,
  • selector - conditional expression written in groovy which filters incoming events,
  • destination - topic name.
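An illustrative routing rule (the topic name is an example; the selector is a Groovy expression evaluated against each incoming event, and the field names used in it are assumptions):

routingRules:
  - id: mapp-hcp-events
    selector: "event.type.startsWith('HCP_') && event.country == 'CN'"
    destination: dev-out-mapp-hcp-events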
" }, { "title": "Raw data service", "pageID": "337869880", "pageLink": "/display/GMDM/Raw+data+service", "content": "

Description

Raw data service is the component used to process source data. It allows expired data to be removed in real time and provides a REST interface for restoring source data on the environment.

Flows



Exposed interfaces

Batch Controller - manage batch instances

Interface nameTypeEndpoint patternDescription
Restore entitiesREST API

POST /restore/entities

Restore entities for selected parameters: entity types, sources, countries, date from

1. Create consumer for entities topic and given offset - date from

2. Poll and filter records

3. Produce data to bundle input topic

Restore relationsREST API

POST /restore/relations

Restore entities for selected parameters: sources, countries, relation types and date from

1. Create consumer for relations topic and given offset - date from

2. Poll and filter records

3. Produce data to bundle input topic

Restore entitiesREST API

POST /restore/entities/count

Count entities for selected parameters: entity types, sources, countries, date from

Restore entitiesREST API

POST /restore/relations/count

Count relations for selected parameters: sources, countries, relation types and date from

Configuration

Config paramdescription
kafka.groupIdkafka group id
kafkaOtherother kafka consumer/producer properties
entityTopictopic used to store entity data
relationTopictopic used to store relation data
streamConfig.patchKeyStoreNamestate store name used to store entities patch keys
streamConfig.relationStoreNamestate store name used to store relations patch keys
streamConfig.enabledis raw data stream processor enabled
streamConfig.kafkaOtherraw data processor stream kafka other properties
restoreConfig.enabledis restore api enabled
restoreConfig.consumer.pollTimeoutrestore api kafka topic consumer poll timeout
restoreConfig.consumer.kafkaOtherother kafka consumer properties
restoreConfig.producer.outputrestore data producer output topic - manager bundle input topic
restoreConfig.producer.kafkaOtherother kafka producer properties
" }, { "title": "Reconciliation Service", "pageID": "164469826", "pageLink": "/display/GMDM/Reconciliation+Service", "content": "

Reconciliation service is used to consume reconciliation event from reltio and decide is entity or relation should be refreshed in mongo cache. after reconsiliation this service also produce metrics from reconciliation, it counts changes and produce event with all metatdta and statistics about reconciliated entity/relation


Flows

Reconciliation+HUB-Client

Reconciliation metrics

Configuration

Config Parameter

Default value

Description

reconciliation:
eventInputTopic:
eventOutputTopic:
reconciliation:
eventInputTopic: ${env}-internal-reltio-reconciliation-events
eventOutputTopic: ${env}-internal-reltio-events
Consumes event from eventInputTopic, decide about reconiliation and produce event to eventOutputTopic
reconciliation:
eventMetricsInputTopic:
eventMetricsOutputTopic:

metricRules:
- name:
operationRegexp:
pathRegexp:
valueRegexp:
reconciliation:
eventInputTopic: ${env}-internal-reltio-reconciliation-events
eventOutputTopic: ${env}-internal-reltio-events
eventMetricsInputTopic: ${env}-internal-reltio-reconciliation-metrics-event
eventMetricsOutputTopic: ${env}-internal-reconciliation-metrics-efk-transactions

metricRules:
- name: reconciliation.object.missed
operationRegexp: "remove"
pathRegexp: ""
valueRegexp: ".*"
- name: reconciliation.object.added
operationRegexp: "add"
pathRegexp: ""
valueRegexp: ".*"
- name: reconciliation.lookupcode.error
operationRegexp: "add"
pathRegexp: "^.*/lookupCode$"
valueRegexp: ".*"
- name: reconciliation.lookupcode.changed
operationRegexp: "replace"
pathRegexp: "^.*/lookupCode$"
valueRegexp: ".*"
- name: reconciliation.value.changed
operationRegexp: "add|replace|remove"
pathRegexp: "^/attributes/.+$"
valueRegexp: ".*"
- name: reconciliation.other.reason
operationRegexp: ".*"
pathRegexp: ".*"
valueRegexp: ".*"
Consume event from eventMetricsInputTopic, then calculate diff betwent current and previous event, based on diff produce statisctis and metrics. After all produce event with all information to eventMetricsOutputTopic
" }, { "title": "Reltio Subscriber", "pageID": "164469916", "pageLink": "/display/GMDM/Reltio+Subscriber", "content": "

Description

Reltio subscriber is part of Reltio events streaming flow. It consumes Reltio events from Amazon SQS, filters, maps, and transfers to the Kafka Topic.


Part of: Streaming channel

Technology: Java, Spring, Apache Camel

Code link: reltio-subscriber

Flows


Exposed interfaces


Interface NameTypeEndpoint patternDescription
Kafka topic KAFKA
${env}-internal-reltio-events
Enents pulled from sqs are then transformed and published to kafka topic

Dependent components


ComponentInterfaceFlowDescription
Sqs - queue
Entity change events processing (Reltio)It stores events about entities modification in reltio
Entity enricher

Reltio Subscriber downstream component. Collects events from Kafka and produces events enriched with the target entity

Configuration


Config ParameterDefault valueDescription
reltio_subscriber.reltio.queue
mpe-01_FLy4mo0XAh0YEbN
Reltio queue name
reltio_subscriber.reltio.queueOwner
930358522410
Reltio queue owner number
reltio_subscriber.reltio.concurrentConsumers1Max number of concurrent consumers
reltio_subscriber.reltio.messagesPerPoll10Messages per poll
reltio_subscriber.publisher.topic
dev-internal-reltio-events
Publisher kafka topic
reltio_subscriber.publisher.enableOnStartup
yes
Enable on startup
reltio_subscriber.publisher.filterSelfMerges
no

Filter self merges
( yes/no )

reltio_subscriber.relationshipPublisher.topic
dev-internal-reltio-relations-events
Relationship publisher kafka topic
reltio_subscriber.dcrPublisher.topicnullDCR publisher kafka topic
reltio_subscriber.kafka.servers
10.192.71.136:9094
Kafka servers
reltio_subscriber.kafka.groupId
hub
Kafka group Id
reltio_subscriber.kafka.saslMechanism
PLAIN
Kafka sasl mechanism
reltio_subscriber.kafka.securityProtocol
SASL_SSL
Kafka security protocol
reltio_subscriber.kafka.sslTruststoreLocation
src/test/resources/client.truststore.jks
Kafka truststore location
reltio_subscriber.kafka.sslTuststorePassword
kafka123
Kafka truststore password
reltio_subscriber.kafka.username
null
Kafka username
reltio_subscriber.kafka.passwordnullKafka user password
reltio_subscriber.kafka.compressionCodecnullKafka compression codec
reltio_subscriber.poller.types3Source type
reltio_subscriber.poller.enableOnStartup

no

Enable on startup( yes/no )
reltio_subscriber.poller.fileMask

.*

Input files mask
reltio_subscriber.poller.bucketName

candf-mesos

Name of S3 bucket
reltio_subscriber.poller.processingTimeoutMs

7200000

Timeout in miliseconds
reltio_subscriber.poller.inputFolder

null

Input directory
reltio_subscriber.poller.outputFolder

null

Output directory
reltio_subscriber.poller.key

null

Poller key
reltio_subscriber.poller.secret

null

Poller secret
reltio_subscriber.poller.region

EU_WEST_1

Poller region
reltio_subscriber.allowedEventTypes
- ENTITY_CREATED
- ENTITY_REMOVED
- ENTITY_CHANGED
- ENTITY_LOST_MERGE
- ENTITIES_MERGED
- ENTITIES_SPLITTED
- RELATIONSHIP_CREATED
- RELATIONSHIP_CHANGED
- RELATIONSHIP_REMOVED
- RELATIONSHIP_MERGED
- RELATION_LOST_MERGE
- CHANGE_REQUEST_CHANGED
- CHANGE_REQUEST_CREATED
- CHANGE_REQUEST_REMOVED
- ENTITIES_MATCHES_CHANGED

Event types that are processed when received.

Other event types are being rejected

reltio_subscriber.transactionLogger.kafkaEfk

.enable

nullTransaction logger enabled( true/false)

reltio_subscriber.transactionLogger.kafkaEfk

.logContentOnlyOnFailed

null

Log content only on failed

( true/false)

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.groupId

nullKafka consumer group Id

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.autoOffsetReset

nullKafka transaction logger topic

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.consumerCount

null

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.sessionTimeoutMs

nullSession timeout

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.maxPollRecords

null

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.breakOnFirstError

null

reltio_subscriber.transactionLogger.kafkaEfk

.kafkaConsumerProp.consumerRequestTimeoutMs

null

reltio_subscriber.transactionLogger.SimpleLog.enable

null
" }, { "title": "Clients", "pageID": "164470170", "pageLink": "/display/GMDM/Clients", "content": "

The section describes clients (systems) that publish or subscribe data to MDM systems vis MDH HUB


Active clients

\n\n \n \n \n\n
\n \n \n \n\n \n \n\n \n \n \n\n \n \n \n \n \n \n\n
\n \n
\n
\n

Aggregated Contact List

COMPANY MDM Team


NameContact
Andrew J. Varganin

Andrew.J.Varganin@COMPANY.com

Sowjanya Tirumala

sowjanya.tirumala@COMPANY.com

John AustinJohn.Austin@COMPANY.com
Trivedi Nishith

Nishith.Trivedi@COMPANY.com


GLOBAL

ClientContacts
MAPDL-BT-Production-Engineering@COMPANY.com
KOL

DL-SFA-INF_Support_PforceOL@COMPANY.com

Solanki, Hardik (US - Mumbai) <hsolanki@COMPANY.com>;

Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>;

China

Ming Ming <MingMing.Xu@COMPANY.com>;

Jiang, Dawei <Dawei.Jiang@COMPANY.com>

MAPP

Shashi.Banda@COMPANY.com

Rajesh.K.Chengalpathy@COMPANY.com

Debbie.Gelfand@COMPANY.com

Dinesh.Vs@COMPANY.com

DL-MAPP-Navigator-Hypercare-Support@COMPANY.com

Japan DWHDL-GDM-ServiceOps-Commercial_APAC@COMPANY.com
GRACEDL-AIS-Mule-Integration-Support@COMPANY.com
EngageDL-BTAMS-ENGAGE-PLUS@COMPANY.com;

Amish.Adhvaryu@COMPANY.com

PTRS

Sagar.Bodala@COMPANY.com

OneMed

Marsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.com

Medic

DL-F&BO-MEDIC@COMPANY.com

GBL US

ClientContacts
CDW

Narayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>

Raman, Krishnan <Krishnan.Raman@COMPANY.com>

ETL

Nayan, Rajeev <Rajeev.Nayan3@COMPANY.com>

Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>

KOL

Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>

Brahma, Bagmita <Bagmita.Brahma2@COMPANY.com>

Solanki, Hardik <Hardik.Solanki@COMPANY.com>


US Trade (FLEX COV)

ClientContacts
Main contacts

Dube, Santosh R <santosh.dube@COMPANY.com>

Manseau, Melissa <Melissa.Manseau@COMPANY.com>

Thirumurthy, Bala Subramanyam <BalaSubramanyam.Thirumurthy@COMPANY.com>

Business Team

Max, Deanna <Deanna.Max@COMPANY.com>

Faddah, Laura Jordan <Laura.Faddah@COMPANY.com>

GIS(file transfer)

Mandala, Venkata <venkata.mandala@COMPANY.com>

Srivastava, Jayant <Jayant.Srivastava@COMPANY.com>




" }, { "title": "KOL", "pageID": "164470183", "pageLink": "/display/GMDM/KOL", "content": "\n

Data pushing

\n

\"\" Figure 22. KOL authentication with Identity ManagerKOL system push data to MDM integration service using REST API. To authenticate, KOL uses external Oauth2 authorization service named Identity Manager to fetch access token. Then system sends the REST request to integration service endpoint which validates access token using Identity Manager API.\n
\nKOL manage data for several countries. Many of these is loaded to default MDM system (Reltio), supported by integration service but for GB, PT, DK and CA countries data is sent to Nucleus 360. Decision, where the data should be loaded, is made by MDM Manager logic. Based on Country attribute value, MDM manager selects the right MDM adapter. It is important to set the Country attribute value correctly during data updating. Same rule applies to the country query parameter during data fetching. Thanks to this, MDM manager is able to process the right data in the right MDM system. In case of updating data with the Country attribute set incorrectly, the REST request will be rejected. When data is being fetched without country attribute query parameter set, the default MDM (Reltio) will be used to resolve the data.\n

\n

Event processing

\n

KOL application receives events in one standard way – kafka topic. Events from Reltio MDM system are published to this topic directly after Reltio has processed changes, sent event to SQS and processed them by Event Publisher. It means that the Reltio processes change and send events in real time. Client, who listens for events, does not have to wait for receiving them too long.
\n\"\" Figure 23. Difference between processing events in Reltio and Nucleus 360The situation changes when the entity changes are processed by Nucleus 360. This MDM publishes changes once in a while, so the events will be delivered to kafka topic with longer delay.

" }, { "title": "Japan DWH", "pageID": "164470060", "pageLink": "/display/GMDM/Japan+DWH", "content": "

Contacts

Japan DWH Feed Support DL: DL-GDM-ServiceOps-Commercial_APAC@COMPANY.com - it is valid until 15/04/2023

DL-ATP-SERVICEOPS-JPN-DATALAKE@COMPANY.com - it will be valid since 15/04/2023 

Flows

Japan DWH has only one batch process which consume the incremental file export from data warehouse, process this and loads data to MDM. This process is based on incremental batch engine and run on Airflow platform.

Input files

The input files are delivered by GIS to AWS S3 bucket.


UATPROD
S3 service accountdidn't createdsvc_gbi-cc_mdm_japan_rw_s3
S3 Access key IDdidn't createdAKIATCTZXPPJU6VBUUKB
S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
S3 Foldermdm/UAT/inbound/JAPAN/mdm/inbound/JAPAN/
Input data file mask JPDWH_[0-9]+.zipJPDWH_[0-9]+.zip
CompressionZipZip
FormatFlat files, DWH dedicated format Flat files, DWH dedicated format 

Example

JPDWH_20200421202224.zipJPDWH_20200421202224.zip
SchedulenoneAt 08:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5). The input file is not delivered in Japan's holidays (https://www.officeholidays.com/countries/japan/2020)
Airflow jobinc_batch_jp_stageinc_batch_jp_prod


Data mapping 

The detailed filed mappings are presented in the document.

Mapping rules:


Configuration

Flow configuration is stored in MDM Environment configuration repository. For each environment where the flow should be enabled the configuration file inc_batch_jp.yml has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_jp" has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Below table prresents the location of inc_batch_jp.yml file for UAT and PROD env:


UATPROD
inc_batch_jp.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_jp.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_jp.yml

Applying configuration changes is done by executing the deploy Airflow's components procedure.

SOPs

There is no particular SOP procedure for this flow. All common SOPs was described in the "Airflow:" chapter.


" }, { "title": "Nucleus", "pageID": "164470256", "pageLink": "/display/GMDM/Nucleus", "content": "

Contacts

Delivering of data used by Nucleus's processes is maintained by Iqvia Team: COMPANY-MDM-Support@iqvia.com

Flows

There are several batch processes that loads data extracted from Nucleus to Reltio MDM. Data are delivered for countries: Canada, South Korea, Australia, United Kingdom, Portugal and Denmark as zip archive available at S3 bucket.

Input files


UATPROD
S3 service accountdidn't createdsvc_mdm_project_nuc360_rw-s3
S3 Access key IDdidn't createdAKIATCTZXPPJTFMGRZFM
S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
S3 Folder

mdm/UAT/inbound/APAC_CCV/AU/

mdm/UAT/inbound/APAC_CCV/KR/

mdm/UAT/inbound/nuc360/inc-batch/GB/

mdm/UAT/inbound/nuc360/inc-batch/PT/

mdm/UAT/inbound/nuc360/inc-batch/DK/

mdm/UAT/inbound/nuc360/inc-batch/CA/

mdm/inbound/nuc360/inc-batch/AU/

mdm/inbound/nuc360/inc-batch/KR/

mdm/inbound/nuc360/inc-batch/GB/

mdm/inbound/nuc360/inc-batch/PT/

mdm/inbound/nuc360/inc-batch/DK/

mdm/inbound/nuc360/inc-batch/CA/

Input data file mask NUCLEUS_CCV_[0-9_]+.zipNUCLEUS_CCV_[0-9_]+.zip
CompressionZipZip
FormatFlat files in CCV format 

Flat files in CCV format 

Example

NUCLEUS_CCV_8000000792_20200609_211102.zipNUCLEUS_CCV_8000000792_20200609_211102.zip
Schedulenone

inc_batch_apac_ccv_au_prod - at 17:00 UTC on every day-of-week from Monday through Friday (0 17 * * 1-5)

inc_batch_apac_ccv_kr_prod - at 08:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5)

inc_batch_eu_ccv_gb_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)

inc_batch_eu_ccv_pt_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)

inc_batch_eu_ccv_dk_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)

inc_batch_amer_ccv_ca_prod - at 17:00 UTC on every day-of-week from Monday through Friday (0 17 * * 1-5)

Airflow's DAGS

inc_batch_apac_ccv_au_stage

inc_batch_apac_ccv_kr_stage

inc_batch_eu_ccv_gb_stage

inc_batch_eu_ccv_pt_stage

inc_batch_eu_ccv_dk_stage

inc_batch_amer_ccv_ca_stage

inc_batch_apac_ccv_au_prod

inc_batch_apac_ccv_kr_prod

inc_batch_eu_ccv_gb_stage

inc_batch_eu_ccv_pt_stage

inc_batch_eu_ccv_dk_stage

inc_batch_amer_ccv_ca_prod


Data mapping

Data mapping is described in the following document.


Configuration

Flows configuration is stored in MDM Environment configuration repository. For each environment where the flows should be enabled configuration files has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Below table presents the location of flows configuration files for UAT and PROD env:

Flow configuration fileUATPROD
inc_batch_apac_ccv_au.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ccv_au.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ccv_au.yml
inc_batch_apac_ccv_kr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ccv_kr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ccv_kr.yml
inc_batch_eu_ccv_gb.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_gb.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_gb.yml
inc_batch_eu_ccv_pt.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_pt.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_pt.yml
inc_batch_eu_ccv_dk.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_dk.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_dk.yml
inc_batch_amer_ccv_ca.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_amer_ccv_ca.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_amer_ccv_ca.yml

To deploy changes of DAG's configuration you have to execute SOP Deploying DAGs

SOPs

There is no particular SOP procedure for this flow. All common SOPs was described in the "Airflow:" chapter.


" }, { "title": "Veeva New Zealand", "pageID": "164470112", "pageLink": "/display/GMDM/Veeva+New+Zealand", "content": "

Contacts

DL-ATP-APC-APACODS-SUPPORT@COMPANY.com

Flow

The flow transforms the Veeva's data to Reltio model and loads the result to MDM. Data contains HCPs and HCOs from New Zealand.

This flow is divided into two steps:

  1. Pre-proccessing - Copying source files from Veeva's S3 bucket, filtering once and uploading result to HUB's bucket,
  2. Incremental batch - Running the standard incremental batch process.

Each of these steps are realized by separated Airflow's DAGs.


Input files


UATPROD
Veeva's S3 service accountSRVC-MDMHUB_GBL_NONPRODSRVC-MDMHUB_GBL
Veeva's S3 Access key IDAKIAYCS3RWHN72AQKG6BAKIAYZQEVFARKMXC574Q
Veeva's S3 bucketapacdatalakeprcaspasp55737apacdatalakeprcaspasp63567
Veeva's S3 bucket regionap-southeast-1ap-southeast-1
Veeva's S3 Folder

project_kangaroo/landing/veeva/sf_account/

project_kangaroo/landing/veeva/sf_address_vod__c/

project_kangaroo/landing/veeva/sf_child_account_vod__c/

project_kangaroo/landing/veeva/sf_account/

project_kangaroo/landing/veeva/sf_address_vod__c/

project_kangaroo/landing/veeva/sf_child_account_vod__c/

Veeva's Input data file mask 

* (all files inside above folders)

* (all files inside above folders)
Veeva's Input data file compression
nonenone
HUB's S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
HUB's S3 Foldermdm/UAT/inbound/APAC_VEEVA/mdm/inbound/APAC_PforceRx/
HUS's input data file maskin_nz_[0-9]+.zipin_nz_[0-9]+.zip
HUS's input data file compressionZipZip
Schedule (is set only for pre-processing DAG)noneAt 06:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5)
Pre-processing Airflow's DAGinc_batch_apac_veeva_wrapper_stageinc_batch_apac_veeva_wrapper_prod
Incremental batch Airflow's DAGinc_batch_apac_veeva_stageinc_batch_apac_veeva_prod


Data mapping

Data mapping is described in the following document.


Configuration

Configuration of this flow is defined in two configuration files. First of these inc_batch_apac_veeva_wrapper.yml specifies the pre-processing DAG configuration and the second inc_batch_apac_veeva.yml defines configuration of DAG for standard incremental batch process. To activate the flow on environment files should be created in the following location inventory/[env name]/group_vars/gw-airflow-services/ and batch names "inc_batch_apac_veeva_wrapper" and "inc_batch_apac_veeva" have to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made in configuration are applied on environment by running Deploy Airflow Components procedure.

Below table presents the location of flows configuration files for UAT and PROD env:

Configuration fileUATPROD
inc_batch_apac_veeva_wrapper.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_veeva_wrapper.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_veeva_wrapper.yml
inc_batch_apac_veeva.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_veeva.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_veeva.yml


SOPs

There is no dedicated SOP procedures for this flow. However, you must remember that this flow consists of two DAGs which both have to finish successfully.

All common SOPs was described in the "Incremental batch flows: SOP" chapter.


" }, { "title": "ODS", "pageID": "164470116", "pageLink": "/display/GMDM/ODS", "content": "

Contacts

Flow

The flow transforms the ODS's data to Reltio model and loads the result to MDM. Data contains HCPs and HCOs from: HK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BL, FR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RS countries.

This flow is divided into two steps:

  1. Pre-proccessing - Copying source files from ODS's bucket and then uploading these to HUB's bucket,
  2. Incremental batch - Running the standard incremental batch process.

Each of these steps are realized by separated Airflow's DAGs.

Input files


UAT APACUAT EU

PROD APAC

PROD EU
Supported countriesHK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BLFR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RSHK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BLFR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RS
ODS S3 service accountSRVC-GCMDMS3DEVSRVC-GCMDMS3DEVSRVC-GCMDMS3PRDsvc_gbicc_euw1_prod_partner_gcmdm_rw_s3
ODS S3 Access key IDAKIAYCS3RWHN45FC4MOPAKIAYCS3RWHN45FC4MOPAKIAYZQEVFARE64ESXWHAKIA6NIP3JYIMUIQABMX
ODS S3 bucketapacdatalakeintaspasp100939apacdatalakeintaspasp100939apacdatalakeintaspasp104492pfe-gbi-eu-w1-prod-partner-internal
ODS S3 folder/APACODSD/GCMDM//APACODSD/GCMDM//APACODSD/GCMDM//eu-dmart-odsd-file-extracts/gateway/GATEWAY/ODS/PROD/GCMDM/
ODS Input data file mask ****
ODS Input data file compressionzipzipzipzip
HUB's S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectpfe-baiaes-eu-w1-project
HUB's S3 Foldermdm/UAT/inbound/ODS/APAC/mdm/UAT/inbound/ODS/EU/mdm/inbound/ODS/APAC/mdm/inbound/ODS/EU/
HUS's input data file mask****
HUS's input data file compressionzipzipzipzip
Pre-processing Airflow's DAGmove_ods_apac_export_stagemove_ods_eu_export_stagemove_ods_apac_export_prodmove_ods_eu_export_prod
Pre-processing Airflow's DAG schedulenonenone0 6 * * 1-50 7 * * 2  (At 07:00 on Tuesday.)
Incremental batch Airflow's DAGinc_batch_apac_ods_stageinc_batch_eu_ods_stageinc_batch_apac_ods_prodinc_batch_eu_ods_prod
Incremental batch Airflow's DAG schedulenonenone0 8 * * 1-50 8 * * 2 (At 08:00 on Tuesday.)

Data mapping

Data mapping is described in the following document.


Configuration

Configuration of this flow is defined in two configuration files. First of these move_ods_apac_export.yml specifies the pre-processing DAG configuration and the second inc_batch_apac_ods.yml defines configuration of DAG for standard incremental batch process. To activate the flow on environment files should be created in the following location inventory/[env name]/group_vars/gw-airflow-services/ and batch names "move_ods_apac_export" and "inc_batch_apac_ods" have to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made in configuration are applied on environment by running Deploy Airflow's components procedure.

Below table presents the location of flows configuration files for UAT and PROD env:

Configuration fileUATPROD
move_ods_apac_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/move_ods_apac_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/move_ods_apac_export.yml
inc_batch_apac_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ods.yml
move_ods_eu_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/move_ods_eu_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/move_ods_eu_export.yml
inc_batch_eu_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ods.yml


SOPs

There is no dedicated SOP procedures for this flow. However, you must remember that this flow consists of two DAGs which both have to finish successfully.

All common SOPs was described in the "Incremental batch flows: SOP" chapter.


" }, { "title": "China", "pageID": "164470000", "pageLink": "/display/GMDM/China", "content": "

ACLs

NameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopic
China client access
china-client
Key AuthN/A
- "CREATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCO"
- "UPDATE_HCP"
- "GET_ENTITIES"
- CN
- "CN3RDPARTY"
- "MDE"
- "FACE"
- "EVR"
- dev-out-full-mde-cn
- stage-out-full-mde-cn
- dev-out-full-mde-cn

Contacts

QianRu.Zhou@COMPANY.com


Flows

  1. Batch merge & unmerge
  2. DCR generation process (China DCR)
  3. [FL.IN.1] HCP & HCO update processes


Reports

Reports

" }, { "title": "Corrective batch process for EVR", "pageID": "164470250", "pageLink": "/display/GMDM/Corrective+batch+process+for+EVR", "content": "


Corrective batch process for EVR fixes China data using standard incremental batch mechanism. The process gets data from csv file, transforms to json model and loads to Reltio. During loading of changes following HCP's attributes can be changed:

  1. Name,
  2. Title,
  3. SubTypeCode,
  4. ValidationStatus,
  5. Specific Workplace can be ignored or its ValidationStatus can be changed,
  6. Specific MainWorkplace can be ignored.

The load saves the changes in Reltio under crosswalk where:

Thanks this, it is easy to find changes that was made by this process.


Input files

The input files are delivered to s3 bucket


UATPROD
Input S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Input S3 Folder

mdm/UAT/inbound/CHINA/EVR/

mdm/inbound/CHINA/EVR/
Input data file mask evr_corrective_file_[0-9]*.zipevr_corrective_file_[0-9]*.zip
Compressionzipzip
FormatFlat files in CCV format Flat files in CCV format 
Exampleevr_corrective_file_20201109.zipevr_corrective_file_20201109.zip
Schedulenonenone
Airflow's DAGSinc_batch_china_evr_stageinc_batch_china_evr_prod


Data mapping

Mapping from CSV to Reltio's json was describe in this document: evr_corrective_file_format_new.xlsx

Example file presented input data: evr_corrective_file_20221215.csv


Configuration

Flows configuration is stored in MDM Environment configuration repository. For each environment where the flow should be enabled configuration file inc_batch_china_evr.yml has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_china" has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Below table presents the location of flow configuration files for UAT and PROD environment:

UATPROD
http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_china_evr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_china_evr.yml


SOPs

There is no particular SOP procedure for this flow. All common SOPs was described in the "Incremental batch flows: SOP" chapter.


" }, { "title": "Reports", "pageID": "164469873", "pageLink": "/display/GMDM/Reports", "content": "

Daily Reports

There are 4 reports which their preparing is triggered by china_generate_reports_[env] DAG. The DAG starts all dependent report DAGs and then waits for files published by them on s3. When all required files are delivered to s3, DAG sents the email with generted reports to all configured recipients.

china_generate_reports_[env]
|-- china_import_and_gen_dcr_statistics_report_[env]
|-- import_pfdcr_from_reltio_[env]
+-- china_dcr_statistics_report_[env]
|-- china_import_and_gen_merge_report_[env]
|-- import_merges_from_reltio_[env]
+-- china_merge_report_[env]
|-- china_total_entities_report_[env]
+-- china_hcp_by_source_report_[env]


Daily DAGs are triggered by DAG china_generate_reports


UATPROD
Parent DAGchina_generate_reports_stagechina_generate_reports_prod
SchedulenoneEvery day at 00:05.


Filter applied to all reports:

FieldValue
countrycn
statusACTIVE


HCP by source report

The Report shows how many HCPs was delivered to MDM by specific source.

The Output  files are delivered to s3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_hcp_by_source_report_.*.xlsxchina_hcp_by_source_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_hcp_by_source_report_20201113093437.xlsxchina_hcp_by_source_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_hcp_by_source_report_stagechina_hcp_by_source_report_prod
Report Templatechina_hcp_by_source_template.xlsx
Mongo scripthcp_by_source_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
SourceThe source which delivered HCP
HCPNumber of all HCPs which has the source
Daily IncrementalNumber of HCPs modified last utc day.

Total entities report

The report shows total entities count, grouped by entity type, theirs validation status and speaker attribute.

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_total_entities_report_.*.xlsxchina_total_entities_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_total_entities_report_20201113093437.xlsxchina_total_entities_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_total_entities_report_stagechina_total_entities_report_prod
Report Templatechina_total_entities_template.xlsx
Mongo scripttotal_entities_report.js
Applied filters
"country" : "CN"
"status": "ACTIVE"

Report fields description:

ColumnDescription
Total_Hospital_MDMNumber of total hospital MDM
Total_Dept_MDMNumber of total department MDM
Total_HCP_MDMNumber of total HCP MDM
Validated_HCPNumber of validated HCP
Pending_HCPNumber of pending HCP
Not_Validated_HCPNumber of validated HCP
Other_Status_HCP?Number of HCP with other status
Total_Speaker Number of total speakers
Total_Speaker_EnabledNumber of enabled speakers
Total_Speaker_DisabledNumber of disabled speakers

DCR statistics report

The report shows statistics about data change requests which were created in MDM.

Generating of this report is divided into two steps:

  1. Importing PfDataChengeRequest data from Reltio - this step is realized by import_pfdcr_from_reltio_[env] DAG. It schedules export data in Reltio using Export Entities operation and then waits for result. After export file is ready, DAG load its content to mongo,
  2. Generating report - generates report based on proviosly imported data. This step is perform by china_dcr_statistics_report_[env] DAG.

Both of above steps are run sequentially by china_import_and_gen_dcr_statistics_report_[env] DAG.

The Output  files are delivered to s3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_dcr_statistics_report_.*.xlsxchina_dcr_statistics_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_dcr_statistics_report_20201113093437.xlsxchina_dcr_statistics_report_20201113093437.xlsx
Airflow's DAGSchina_dcr_statistics_report_stagechina_dcr_statistics_report_prod
Report Templatechina_dcr_statistics_template.xlsx
Mongo scriptchina_dcr_statistics_report.js
Applied filtersThere are no additional conditions applied to select data


Report fields description:

ColumnDescription
Total_DCR_MDMTotal number of DCRs
New_HCP_DCRTotal number of DCRs of type NewHCP
New_HCO_L1_DCRTotal number of DCRs of type NewHCOL1
New_HCO_L2_DCRTotal number of DCRs of type NewHCOL2
MultiAffil_DCRTotal number of DCRs of type MultiAffil
New_HCP_DCR_CompletedTotal number of DCRs of type NewHCP which have completed status
New_HCO_L1_DCR_CompletedTotal number of DCRs of type NewHCOL1 which have completed status
New_HCO_L2_DCR_CompletedTotal number of DCRs of type NewHCOL2 which have completed status
MultiAffil_DCR_CompletedTotal number of DCRs of type MultiAffil which have completed status
New_HCP_AcceptTotal number of DCRs of type NewHCP which were accepted
New_HCP_UpdateTotal number of DCRs of type NewHCP which were updated during responding for these
New_HCP_MergeTotal number of DCRs of type NewHCP which were accepted and response had entities to merge
New_HCP_MergeUpdateTotal number of DCRs of type NewHCP which were updated and response had entities to merge
New_HCP_RejectTotal number of DCRs of type NewHCP which were rejected
New_HCP_CloseTotal number of closed DCRs of type NewHCP
Affil_AcceptTotal number of DCRs of type MultiAffil which were accepted
Affil_RejectTotal number of DCRs of type MultiAffil which were rejected
Affil_AddTotal number of DCRs of type MultiAffil which data were updated during responding
MultiAffil_DCR_CloseTotal number of closed DCRs of type MultiAffil
New_HCO_L1_UpdateTotal number of closed DCRs of type NewHCOL1 which data were updated during responding
New_HCO_L1_RejectTotal number of rejected DCRs of type NewHCOL1 
New_HCO_L1_CloseTotal number of closed DCRs of type NewHCOL1 
New_HCO_L2_AcceptTotal number of accepted DCRs of type NewHCOL2
New_HCO_L2_UpdateTotal number of DCRs of type NewHCOL2 which data were updated during responding
New_HCO_L2_RejectTotal number of rejected DCRs of type NewHCOL2
New_HCO_L2_CloseTotal number of closed DCRs of type NewHCOL2
New_HCP_DCR_OpenedTotal number of opend DCRs of type NewHCP
MultiAffil_DCR_OpenedTotal number of opend DCRs of type MultiAffil
New_HCO_L1_DCR_OpenedTotal number of opend DCRs of type NewHCOL1
New_HCO_L2_DCR_OpenedTotal number of opend DCRs of type NewHCOL2
New_HCP_DCR_FailedTotal number of failed DCRs of type NewHCP
MultiAffil_DCR_FailedTotal number of failed DCRs of type MultiAffil
New_HCO_L1_DCR_FailedTotal number of failed DCRs of type NewHCOL1
New_HCO_L2_DCR_FailedTotal number of failed DCRs of type NewHCOL2

Merge report

The report shows statistics about merges which were occurred in MDM.

Generating of this report, similar to DCR statistics report, is divided into two steps:

  1. Importing merges data from Reltio - this step is performed by import_merges_from_reltio_[env] DAG. It schedules export data in Reltio unsing Export Merge Tree operation and then waits for result. After export file is ready, DAG loads its content to mongo,
  2. Generating report - generates report based on previously imported data. This step is performed by china_merge_report_[env] DAG.

Both of above steps are run sequentially by china_import_and_gen_merge_report_[env] DAG.

The Output  files are delivered to s3 bucket:


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_merge_report_.*.xlsxchina_merge_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_merge_report_20201113093437.xlsxchina_merge_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_import_and_gen_merge_report_stagechina_import_and_gen_merge_report_prod
Report Templatechina_daily_merges_template.xlsx
Mongo scriptmerge_report.js
Applied filters
"country" : "CN"


Report fields description:

ColumnDescription
DateDate when merges occurred
Daily_Merge_HosptialTotal number of merges on HCO
Daily_Merge_HCPTotal number of merges on HCP
Daily_Manually_Merge_HosptialTotal number of manual merges on HCP
Daily_Manually_Merge_HCPTotal number of manual merges on HCP

Monthly Reports

There are 8 monthly reports. All of them are triggered by china_monthly_generate_reports_[env] which then waits for files, generated and published to S3 bucket by each depended DAGs. When all required files exist on S3, DAG prepares the email with all files and sents this defined recipients.

china_monthly_generate_reports_[env]
|-- china_monthly_hcp_by_SubTypeCode_report_[env]
|-- china_monthly_hcp_by_channel_report_[env]
|-- china_monthly_hcp_by_city_type_report_[env]
|-- china_monthly_hcp_by_department_report_[env]
|-- china_monthly_hcp_by_gender_report_[env]
|-- china_monthly_hcp_by_hospital_class_report_[env]
|-- china_monthly_hcp_by_province_report_[env]
+-- china_monthly_hcp_by_source_report_[env]


Monthly DAGs are triggered by DAG china_monthly_generate_reports


UATPROD
Parent DAGchina_monthly_generate_reports_stagechina_monthly_generate_reports_prod



HCP by source report

The report shows how many HCPs were delivered by specific source.

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_source_report_.*.xlsxchina_monthly_hcp_by_source_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_source_report_20201113093437.xlsxchina_monthly_hcp_by_source_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_source_report_stagechina_monthly_hcp_by_source_report_prod
Report Templatechina_monthly_hcp_by_source_template.xlsx
Mongo scriptmonthly_hcp_by_source_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
SourceSource that delivered HCP
HCPNumber of all HCPs which has the source


HCP by channel report

The report presents amount of HCPs which were delivered to MDM through specific Channel.

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_channel_report_.*.xlsxchina_monthly_hcp_by_channel_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_channel_report_20201113093437.xlsxchina_monthly_hcp_by_channel_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_channel_report_stagechina_monthly_hcp_by_channel_report_prod
Report Templatechina_monthly_hcp_by_channel_template.xlsx
Mongo scriptmonthly_hcp_by_channel_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
Channel
Channel name
HCPNumber of all HCPs which match the channel


HCP by SubTypeCode report

The report presents HCPs grouped by its Medical Title (SubTypeCode)

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_SubTypeCode_report_.*.xlsxchina_monthly_hcp_by_SubTypeCode_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsxchina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_SubTypeCode_report_stage china_monthly_hcp_by_SubTypeCode_report_prod
Report Templatechina_monthly_hcp_by_SubTypeCode_template.xlsx
Mongo scriptmonthly_hcp_by_SubTypeCode_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
Medical TitleMedical Title (SubTypeCode) of HCP
HCPNumber of all HCPs which match the medical title


HCP by city type report

The report shows amount of HCP which works in specific city type. Type of city in not avaiable in MDM data. To know what is type of specific citys report uses additional collection chinaGeography which has mapping between city's name and its type. Data in the collection can be updated on request of china's team.

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_city_type_report_.*.xlsxchina_monthly_hcp_by_city_type_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_city_type_report_20201113093437.xlsxchina_monthly_hcp_by_city_type_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_city_type_report_stage china_monthly_hcp_by_city_type_report_prod
Report Templatechina_monthly_hcp_by_city_type_template.xlsx
Mongo scriptmonthly_hcp_by_city_type_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
City TypeCity Type taken from chinaGeography collection which match entity.attributes.Workplace.value.MainHCO.value.Address.value.City.value
HCPNumber of all HCPs which match the city type


HCP by department report

The report presents the HCPs grouped by department where they work.

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_department_report_.*.xlsxchina_monthly_hcp_by_department_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_department_report_20201113093437.xlsxchina_monthly_hcp_by_department_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_department_report_stage china_monthly_hcp_by_department_report_prod
Report Templatechina_monthly_hcp_by_department_template.xlsx
Mongo scriptmonthly_hcp_by_department_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
DeptDepartment's name
HCPNumber of all HCPs which match the dept


HCP by gender report

The report presents the HCPs grouped by gender.

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_gender_report_.*.xlsxchina_monthly_hcp_by_gender_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_gender_report_20201113093437.xlsxchina_monthly_hcp_by_gender_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_gender_report_stage china_monthly_hcp_by_gender_report_prod
Report Templatechina_monthly_hcp_by_gender_template.xlsx
Mongo scriptmonthly_hcp_by_gender_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
GenderGender
HCPNumber of all HCPs which match the gender


HCP by hospital class report

The report presents the HCPs grouped by theirs department.

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_hospital_class_report_.*.xlsxchina_monthly_hcp_by_hospital_class_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_hospital_class_report_20201113093437.xlsxchina_monthly_hcp_by_hospital_class_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_hospital_class_report_stage china_monthly_hcp_by_hospital_class_report_prod
Report Templatechina_monthly_hcp_by_hospital_class_template.xlsx
Mongo scriptmonthly_hcp_by_hospital_class_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"


Report fields description:

ColumnDescription
ClassClassification
HCPNumber of all HCPs which match the class


HCP by province report

The report presents the HCPs grouped by province where they work.

The Output  files are delivered to s3 bucket


UATPROD
Output S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
Output S3 Folder

mdm/UAT/outbound/china_reports/daily/

mdm/outbound/china_reports/daily/
Output data file mask china_monthly_hcp_by_province_report_.*.xlsxchina_monthly_hcp_by_province_report_.*.xlsx
FormatMicrosoft Excel xlsxMicrosoft Excel xlsx
Examplechina_monthly_hcp_by_province_report_20201113093437.xlsxchina_monthly_hcp_by_province_report_20201113093437.xlsx
Schedulenonenone
Airflow's DAGSchina_monthly_hcp_by_province_report_stage china_monthly_hcp_by_province_report_prod
Report Templatechina_monthly_hcp_by_province_template.xlsx
Mongo scriptmonthly_hcp_by_province_report.js
Applied filters
"country" : "CN"
"entityType": "configuration/entityTypes/HCP"
"status": "ACTIVE"

Report fields description:



ProvinceName of province
HCPNumber of all HCPs which match the Province


SOPs

How can I check the status of generating reports?

Status of generating reports can be chacked by verification of task statuses on main DAGs - china_generate_reports_[env] for daily reports or china_monthly_generate_reports_[env] for monthly reports. Both of these DAGs have task "sendEmailReports" which waits for files generated by dependent DAGs. If required files are not published to S3 in confgured amount of time, the task will fail with following message:

\n
[2020-11-27 12:12:54,085] {{docker_operator.py:252}} INFO - Caught: java.lang.RuntimeException: ERROR: Elapsed time 300 minutes. Timeout exceeded: 300\n[2020-11-27 12:12:54,086] {{docker_operator.py:252}} INFO - java.lang.RuntimeException: ERROR: Elapsed time 300 minutes. Timeout exceeded: 300\n[2020-11-27 12:12:54,086] {{docker_operator.py:252}} INFO - at SendEmailReports.getListOfFilesLoop(sendEmailReports.groovy:221)\n\tat SendEmailReports.processReport(sendEmailReports.groovy:257)\n[2020-11-27 12:12:54,290] {{docker_operator.py:252}} INFO - at SendEmailReports$processReport.call(Unknown Source)\n\tat sendEmailReports.run(sendEmailReports.groovy:279)\n[2020-11-27 12:12:55,552] {{taskinstance.py:1058}} ERROR - docker container failed: {'StatusCode': 1}
\n

In this case you have to check the status of all dependent DAGs to find the reason on failure, resolve the issue and retry all failed tasks starting by tasks in dependend DAGs and finishing by task in main DAG.


Daily reports failed due to error durign importing data from Reltio. What to do?

If you are able to see that DAGs import_pfdcr_from_reltio_[env] or import_merges_from_reltio_[env] in failed state, it probably means that export data from Reltio took longer then usual. To confirm this supposing you have to show details of importing DAG and check status of waitingForExportFile task. If it has failed state and in the logs you can see following messages:

\n
[2020-12-04 12:09:10,957] {{s3_key_sensor.py:88}} INFO - Poking for key : s3://pfe-baiaes-eu-w1-project/mdm/reltio_exports/merges_from_reltio_20201204T000718/_SUCCESS\n[2020-12-04 12:09:11,074] {{taskinstance.py:1047}} ERROR - Snap. Time is OUT.\nTraceback (most recent call last):\n  File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 922, in _run_raw_task\n    result = task_copy.execute(context=context)\n  File "/usr/local/lib/python3.7/site-packages/airflow/sensors/base_sensor_operator.py", line 116, in execute\n    raise AirflowSensorTimeout('Snap. Time is OUT.')\nairflow.exceptions.AirflowSensorTimeout: Snap. Time is OUT.\n[2020-12-04 12:09:11,085] {{taskinstance.py:1078}} INFO - Marking task as FAILED.
\n

You can be pretty sure that the export is still processed on Reltio side. You can confirm this by using tasks api. If on the returned list you are able to see tasks in processing state, it means that MDM still works on this export. To fix this issue in DAG you have to restart the failed task. The DAG will start checking existance of export file once agine.

" }, { "title": "CDW (AMER)", "pageID": "164470121", "pageLink": "/pages/viewpage.action?pageId=164470121", "content": "

Contacts

Narayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>

Balan, Sakthi <Sakthi.Balan@COMPANY.com>

Raman, Krishnan <Krishnan.Raman@COMPANY.com>

Gateway

AMER(manager)

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

CDW user (NPROD)
cdw
External OAuth2

CDW-MDM_client


["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]
["US"]

["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK",
"IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX",
"MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)",
"EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP",
"867","MYOV_VVA","COMPANY_ACCTS"]

CDW user (PROD)
cdw
External OAuth2
CDW-MDM_client
["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]
["US"]

["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK",
"IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX",
"MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)",
"EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP",
"867","MYOV_VVA","COMPANY_ACCTS"]

Flows

Flow

Description

Snowflake: Events publish flowEvents are published to snowflake
Snowflake: Base tables refresh

Table is refreshed (every 2 hours in prod) with those events

Snowflake MDMTable are read by an ETL process implemented by COMPANY Team 
Update Usage TagsUpdate BESTCALLEDON used flag on addresses
CDW docs: Best Address Data flow

Client software 



" }, { "title": "ETL - COMPANY (GBLUS)", "pageID": "164470236", "pageLink": "/pages/viewpage.action?pageId=164470236", "content": "

Contacts

Nayan, Rajeev <Rajeev.Nayan3@COMPANY.com>

Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

Batches

ETL batch load user

mdmetl_nprod

OAuth2

SVC-MDMETL_client
- "CREATE_HCP"
- "CREATE_HCO"
- "CREATE_MCO"
- "CREATE_BATCH"
- "GET_BATCH"
- "MANAGE_STAGE"
- "CLEAR_CACHE_BATCH"
US
- "SHS"
- "SHS_MCO"
- "IQVIA_MCO"
- "CENTRIS"
- "ENGAGE 1.0"
- "GRV"
- "IQVIA_DDD"
- "SAP"
- "ONEKEY"
- "IQVIA_RAWDEA"
- "IQVIA_PDRP"
- "COV"
- "IQVIA_RX"
- "MILLIMAN_MCO"
- "ICUE"
- "KOL_OneView"
- "SHS_RX"
- "MMIT"
- "INTEGRICHAIN"

N/A

batches:
"Symphony":
- "HCPLoading"
"Centris":
- "HCPLoading"
"IQVIA_DDD":
- "HCOLoading"
- "RelationLoading"
"SAP":
- "HCOLoading"
"ONEKEY":
- "HCPLoading"
- "HCOLoading"
- "RelationLoading"
"IQVIA_RAWDEA":
- "HCPLoading"
"IQVIA_PDRP":
- "HCPLoading"
"PFZ_CUSTID_SYNC":
- "COMPANYCustIDLoading"
"OneView":
- "HCOLoading"
"HCPM":
- "HCPLoading"
"SHS_MCO":
- "MCOLoading"
- "RelationLoading"
"IQVIA_MCO":
- "MCOLoading"
- "RelationLoading"
"IQVIA_RX":
- "HCPLoading"
"MILLIMAN_MCO":
- "MCOLoading"
- "RelationLoading"
"VEEVA":
- "HCPLoading"
- "HCOLoading"
- "MCOLoading"
- "RelationLoading"
"SHS_RX":
- "HCPLoading"
"MMIT":
- "MCOLoading"
- "RelationLoading"
"DDD_SAP":
- "RelationLoading"
"INTEGRICHAIN":
- "HCOLoading"
...

ETL Get/Resubmit Errors

mdmetl_nprod

OAuth2

SVC-MDMETL_client
- "GET_ERRORS"
- "RESUBMIT_ERRORS"
USALLN/AN/A

Flows

Client software 

SOPs


" }, { "title": "KOL_ONEVIEW (GBLUS)", "pageID": "164469966", "pageLink": "/pages/viewpage.action?pageId=164469966", "content": "

Contacts

Brahma, Bagmita <Bagmita.Brahma2@COMPANY.com>

Solanki, Hardik <Hardik.Solanki@COMPANY.com>

Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>

DL DL-iMed_L3@COMPANY.com

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

KOL_OneView user
kol_oneview

OAuth2

KOL-MDM-PFORCEOL_client
- "CREATE_HCP"
- "UPDATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCO"
- "GET_ENTITIES"
- "LOOKUPS"

US

KOL_OneView

N/A

KOL_OneView TOPICN/AKafka JassN/A
"(exchange.in.headers.reconciliationTarget==null 
|| exchange.in.headers.reconciliationTarget == 'KOL_ONEVIEW')
&& exchange.in.headers.eventType in ['full']
&& ['KOL_OneView'].intersect(exchange.in.headers.eventSource)
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
US
KOL_OneView
prod-out-full-koloneview-all

Flows


Client software 


" }, { "title": "GRV (GBLUS)", "pageID": "164469964", "pageLink": "/pages/viewpage.action?pageId=164469964", "content": "

Contacts

Bablani, Vijay <Vijay.Bablani@COMPANY.com>

Jain, Somya <Somya.Jain@COMPANY.com>

Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>

Reynolds, Lori <Lori.Reynolds@COMPANY.com>

Alphonso, Venisa <Venisa.Alphonso@COMPANY.com>

Patel, Jay <Jay.Patel@COMPANY.com>

Anumalasetty, Jayasravani <Jayasravani.Anumalasetty@COMPANY.com>


ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

GRV User
grv

OAuth2

GRV-MDM_client
- "GET_ENTITIES"
- "LOOKUPS"
- "VALIDATE_HCP"
- "CREATE_HCP"
- "UPDATE_HCP"

US

- "GRV"

N/A

GRV-AIS-MDM User
grv_ais
OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
- "GET_ENTITIES"
- "LOOKUPS"
- "VALIDATE_HCP"
- "CREATE_HCP"
- "UPDATE_HCP"
- "CREATE_HCO"
- "UPDATE_HCO"
US
- "GRV"
- "CENTRIS"
- "ENGAGE"
N/A
GRV TOPICN/AKafka JassN/A
"(exchange.in.headers.reconciliationTarget==null)
&& exchange.in.headers.eventType in ['full_not_trimmed']
&& ['GRV'].intersect(exchange.in.headers.eventSource)
&& exchange.in.headers.objectType in ['HCP']
&& exchange.in.headers.eventSubtype in ['HCP_CHANGED']"
US
GRV
prod-out-full-grv-all

Flows



Client software 


" }, { "title": "GRACE (GBLUS)", "pageID": "164469962", "pageLink": "/pages/viewpage.action?pageId=164469962", "content": "

Contacts

Jeffrey.D.LoVetere@COMPANY.com

william.nerbonne@COMPANY.com

Kalyan.Kanumuru@COMPANY.com

Brigilin.Stanley@COMPANY.com

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

GRACE User
grace

OAuth2

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
- "GET_ENTITIES"
- "LOOKUPS"

US

- "GRV"
- "CENTRIS"
- "ENGAGE"

N/A

Flows

Client software 

" }, { "title": "KOL_ONEVIEW (EMEA, AMER, APAC)", "pageID": "164470136", "pageLink": "/pages/viewpage.action?pageId=164470136", "content": "

Contacts

DL-SFA-INF_Support_PforceOL@COMPANY.com

Solanki, Hardik (US - Mumbai) <hsolanki@COMPANY.com>

Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>

ACLs

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

KOL_ONEVIEW user (NPROD)
kol_oneview
External OAuth2

KOL-MDM-PFORCEOL_client

KOL-MDM_client

[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL",
"BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI",
"CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK",
"DO","DZ","EC","EG","ES","ET","FI","FO","FR","GA","GB",
"GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN",
"IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW",
"LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ",
"MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM",
"PA","PE","PF","PL","PM","PT","PY","QA","RE","RU",
"RW","SA","SD","SE","SL","SM","SN","SV","SY","SZ","TD",
"TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE",
"YT","ZA","ZM","ZW"]
GB
- "KOL_OneView"

KOL_ONEVIEW user (PROD)
kol_oneview
External OAuth2
KOL-MDM-PFORCEOL_client
KOL-MDM_client
[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL",
"BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI",
"CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK",
"DO","DZ","EC","EG","ES","ET","FO","FR","GA","GB",
"GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN",
"IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW",
"LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ",
"MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM",
"PA","PE","PF","PL","PM","PT","PY","QA","RE","RU",
"RW","SA","SD","SL","SM","SN","SV","SY","SZ","TD",
"TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE",
"YT","ZA","ZM","ZW"]
GB
- "KOL_OneView"

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

KOL_ONEVIEW user (NPROD)
kol_oneview
External OAuth2

KOL-MDM-PFORCEOL_client

[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AR","BR","CA","MX","UY"]
CA
- "KOL_OneView"

KOL_ONEVIEW user (PROD)
kol_oneview
External OAuth2
KOL-MDM-PFORCEOL_client
[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AR","BR","CA","MX","UY"]
CA
- "KOL_OneView"

APAC

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

KOL_ONEVIEW user (NPROD)
kol_oneview
External OAuth2

KOL-MDM-PFORCEOL_client

[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AU","IN","KR","NZ","JP"]
JP
- "KOL_OneView"

KOL_ONEVIEW user (PROD)
kol_oneview
External OAuth2
KOL-MDM-PFORCEOL_client
[
"CREATE_HCP",
"UPDATE_HCP",
"CREATE_HCO",
"UPDATE_HCO",
"GET_ENTITIES",
"LOOKUPS"
]
["AU","IN","KR","NZ","JP"]
JP
- "KOL_OneView"

Kafka

EMEA

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
emea-prod
Kol_oneview
kol_oneview

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_ONEVIEW')
&& exchange.in.headers.eventType in ['full']
&& ['KOL_OneView'].intersect(exchange.in.headers.eventSource)
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& exchange.in.headers.country in ['ie', 'gb']"

-${env}-out-full-koloneview-all
3
emea-dev
Kol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
emea-qaKol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
emea-stageKol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3

AMER

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
gblus-prod
Kol_oneview
kol_oneview

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_OneView')
&& exchange.in.headers.eventType in ['full'] && ['KOL_OneView'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP', 'HCO']"
-${env}-out-full-koloneview-all
3
gblus-dev
Kol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
gblus-qaKol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
gblus-stageKol_oneview
kol_oneview

-${env}-out-full-koloneview-all

3
" }, { "title": "GRV (EMEA, AMER)", "pageID": "164470150", "pageLink": "/pages/viewpage.action?pageId=164470150", "content": "

Contacts

TODO

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRV user (NPROD)
grv
External OAuth2
GRV-MDM_client
- GET_ENTITIES
- LOOKUPS
- VALIDATE_HCP
["CA"]
GB
GRV
N/A
GRV user (PROD)
grv
External OAuth2
GRV-MDM_client
- GET_ENTITIES
- LOOKUPS
- VALIDATE_HCP
["CA"]
GB
GRV
N/A

AMER(manager)

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRV user (NPROD)
grv
External OAuth2
GRV-MDM_client
["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]
["US"]

GRV
N/A
GRV user (PROD)
grv
External OAuth2
GRV-MDM_client
["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]
["US"]

GRV
N/A

Kafka

AMER

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
gblus-prod
Grv
grv

"(exchange.in.headers.reconciliationTarget==null)
&& exchange.in.headers.eventType in ['full_not_trimmed'] && ['GRV'].intersect(exchange.in.headers.eventSource)
&& exchange.in.headers.objectType in ['HCP'] && exchange.in.headers.eventSubtype in ['HCP_CHANGED']"

- ${env}-out-full-grv-all


gblus-dev
Grv
grv

- ${local_env}-out-full-grv-all


gblus-qaGrv
grv

- ${local_env}-out-full-grv-all


gblus-stageGrv 
grv

- ${local_env}-out-full-grv-all


" }, { "title": "GANT (Global, EMEA, AMER, APAC)", "pageID": "164470148", "pageLink": "/pages/viewpage.action?pageId=164470148", "content": "

Contacts

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GANT User
gant
External OAuth2
GANT-MDM_client

- "GET_ENTITIES"
- "LOOKUPS"
["AD", "AG", "AI", "AM", "AN",
"AR", "AT", "AU", "AW", "BA",
"BB", "BE", "BG", "BL", "BM",
"BO", "BQ", "BR", "BS", "BY",
"BZ", "CA", "CH", "CL", "CN",
"CO", "CP", "CR", "CW", "CY",
"CZ", "DE", "DK", "DO", "DZ",
"EC", "EE", "EG", "ES", "FI",
"FO", "FR", "GB", "GF", "GP",
"GR", "GT", "GY", "HK", "HN",
"HR", "HU", "ID", "IE", "IL",
"IN", "IT", "JM", "JP", "KR",
"KY", "KZ", "LC", "LT", "LU",
"LV", "MA", "MC", "MF", "MQ",
"MU", "MX", "MY", "NC", "NI",
"NL", "NO", "NZ", "PA", "PE",
"PF", "PH", "PK", "PL", "PM",
"PN", "PT", "PY", "RE", "RO",
"RS", "RU", "SA", "SE", "SG",
"SI", "SK", "SV", "SX", "TF",
"TH", "TN", "TR", "TT", "TW",
"UA", "UY", "VE", "VG", "VN",
"WF", "XX", "YT", "ZA"]
GB
GRV
N/A

AMER

Action Required

User configuration

PingFederate Username

GANT-MDM_client

Countries

Brazil

Tenant

AMER

Environments (PROD/NON-PROD/ALL)

ALL

API Services

ext-api-gw-amer-stage/entities,  ext-api-gw-amer-stage/lookups.

Sources

ONEKEY,CRMMI,MAPP

Business Justification

As we are fetching hcp data from MDM COMPANY Instance, Earlier It was MDM IQVIA instance

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GANT User
gant
External OAuth2
GANT-MDM_client

- "GET_ENTITIES"
- "LOOKUPS"
["BR"]
BR
- ONEKEY
- CRMMI
- MAPP
N/A

APAC

Action Required

User configuration

PingFederate Username

GANT-MDM_client

Countries

India

Tenant

APAC

Environments (PROD/NON-PROD/ALL)

ALL

API Services

ext-api-gw-apac-stage/entities,  ext-api-gw-apac-stage/lookups.

Sources

ONEKEY,CRMMI,MAPP

Business Justification

As we are fetching hcp data from MDM COMPANY Instance, Earlier It was MDM IQVIA instance

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GANT User
gant
External OAuth2
GANT-MDM_client

- "GET_ENTITIES"
- "LOOKUPS"
["IN"]
IN
- ONEKEY
- CRMMI
- MAPP
N/A
" }, { "title": "Medic (EMEA, AMER, APAC)", "pageID": "164470140", "pageLink": "/pages/viewpage.action?pageId=164470140", "content": "

Contacts

DL-F&BO-MEDIC@COMPANY.com

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

Medic user (NPROD)
medic
External OAuth2

MEDIC-MDM_client

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]
IE
["MEDIC"]

Medic user (PROD)
medic
External OAuth2
MEDIC-MDM_client
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]
IE
["MEDIC"]

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

Medic  user (NPROD)
medic
External OAuth2

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]

Medic user (PROD)
medic
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]

APAC

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

Medic user (NPROD)
medic
External OAuth2

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]
IN
["MEDIC"]

Medic user (PROD)
medic
External OAuth2
MEDIC-MDM_client
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]
IN
["MEDIC"]

" }, { "title": "PTRS (EMEA, AMER, APAC)", "pageID": "164470165", "pageLink": "/pages/viewpage.action?pageId=164470165", "content": "

Requirements

EnvPublisher routing ruleTopic
emea-prod
(ptrs-eu)
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_RECONCILIATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"

01/Mar/23 4:14 AM

[10:13 AM] Shanbhag, Bhushan
Okay in that case we want Turkey market's events to come from emea-prod-out-full-ptrs-global2 topic only. 

${env}-out-full-ptrs-eu
emea prod and nprods

Adding MC and AD to out-full-ptrs-eu

15/05/2023

Sagar: 

Hi Karol,

Can you please add below counties for France to country configuration list for FRANCE EMEA Topics (Prod, Stage QA & Dev)

1. Monaco

2. Andorra
\n MR-6236\n -\n Getting issue details...\n STATUS\n

${env}-out-full-ptrs-eu

Contacts

API: Prapti.Nanda@COMPANY.com;Varun.ArunKumar@COMPANY.com

Kafka: Sagar.Bodala@COMPANY.com

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PTRS user (NPROD)
ptrs
External OAuth2
PTRS-MDM_client
["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]
["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR",
"BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF",
"GP","GT","GY","HN","ID","IL","JM","KY","LC","MF",
"MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM",
"PN","PT","PY","RE","SV","SX","TF","TR","TT","UY",
"VE","VG","WF","YT"]

["PTRS"]

PTRS user (PROD)ptrsExternal OAuth2
PTRS-MDM_client
["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR",
"BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF",
"GP","GT","GY","HN","ID","IL","JM","KY","LC","MF",
"MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM",
"PN","PT","PY","RE","SV","SX","TF","TR","TT","UY",
"VE","VG","WF","YT"]

["PTRS"]

AMER(manager)

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PTRS user (NPROD)
ptrs
External OAuth2
PTRS-MDM_client

["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]

["MX","BR"]

["PTRS"]

PTRS user (PROD)ptrsExternal OAuth2
PTRS-MDM_client

["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]

["MX","BR"]
["PTRS"]

APAC(manager)

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PTRS user (NPROD)
ptrs
External OAuth2
PTRS_RELTIO_Client
PTRS-MDM_client

["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]

["ID","JP","PH"]

["VOC","PTRS"]

PTRS user (PROD)ptrsExternal OAuth2
PTRS_RELTIO_Client
PTRS-MDM_client

["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]

["JP"]
["VOC","PTRS"]

Kafka

EMEA

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
emea-prod
(ptrs-eu)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_RECONCILIATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt', 'ad', 'mc']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"

${env}-out-full-ptrs-eu
3
emea-prod (ptrs-global2)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-global2
3
emea-dev 
(ptrs-global2)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"

${env}-out-full-ptrs-global2

3
emea-qa (ptrs-eu)Ptrsptrsemea-dev-ptrs-eu
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-eu
3
emea-qa (ptrs-global2)Ptrsptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-global2
3
emea-stage (ptrs-eu)Ptrsptrsemea-stage-ptrs-eu
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'pt', 'id', 'tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-eu
3
emea-stage (ptrs-global2)Ptrsptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-global2
3

AMER

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
amer-prod
(ptrs-amer)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['mx', 'br']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-amer
3
amer-dev 
(ptrs-amer)
Ptrs
ptrs
amer-dev-ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['mx', 'br']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-amer
3
amer-qa (ptrs-amer)Ptrsptrsamer-qa-ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['mx', 'br']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-amer
3
amer-stage (ptrs-amer)Ptrsptrsamer-stage-ptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['mx', 'br']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-amer
3

APAC

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
apac-dev 
(ptrs-apac)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['pk']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-apac

apac-qa (ptrs-apac)Ptrsptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['pk']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-apac

apac-stage (ptrs-apac)Ptrsptrs
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['pk']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-apac

GBL

Env

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
gbl-prod
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['co', 'mx', 'br', 'ph']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"

- ${env}-out-full-ptrs


gbl-prod (ptrs-eu)
Ptrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
${env}-out-full-ptrs-eu

gbl-prod (ptrs-porind)
Ptrs
ptrs

exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['id', 'pt']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"
${env}-out-full-ptrs-porind

gbl-dev
Ptrs
ptrs

"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['co', 'mx', 'br', 'ph', 'cl', 'tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_REGENERATION')"

- ${env}-out-full-ptrs

20
gbl-dev (ptrs-eu)
Ptrs
ptrs
ptrs_nprod
"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')"

- ${env}-out-full-ptrs-eu


gbl-dev (ptrs-porind)
Ptrs
ptrs

"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['id', 'pt']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"

- ${env}-out-full-ptrs-porind


gbl-qa (ptrs-eu)Ptrsptrs
"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& (exchange.in.headers.reconciliationTarget==null)"
- ${env}-out-full-ptrs-eu
20
gbl-stagePtrs
ptrs

"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_LATAM')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['co', 'mx', 'br', 'ph', 'cl','tr']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
- ${env}-out-full-ptrs

gbl-stage (ptrs-eu)Ptrs
ptrs
ptrs_nprod
"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU')
&& exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf']
&& exchange.in.headers.objectType in ['HCP', 'HCO']"
- ${env}-out-full-ptrs-eu

gbl-stage (ptrs-porind)Ptrs
ptrs

"exchange.in.headers.eventType in ['full']
&& exchange.in.headers.country in ['id', 'pt']
&& exchange.in.headers.objectType in ['HCP', 'HCO']
&& !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED')
&& (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"
- ${env}-out-full-ptrs-porind

" }, { "title": "OneMed (EMEA)", "pageID": "164470163", "pageLink": "/pages/viewpage.action?pageId=164470163", "content": "

Contacts

Marsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.com

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

OneMed user (NPROD)
onemed
External OAuth2

ONEMED-MDM_client

["GET_ENTITIES","LOOKUPS"]
["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE",
"IL","IN","IT","JP","MX","NZ","PL","SA","TR"]
IE
["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP",
"MDE","OK","Reltio","Rx_Audit"]

OneMeduser (PROD)
onemed
External OAuth2
ONEMED-MDM_client
["GET_ENTITIES","LOOKUPS"]
["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE",
"IL","IN","IT","JP","MX","NZ","PL","SA","TR"]
IE
["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP",
"MDE","OK","Reltio","Rx_Audit"]

" }, { "title": "GRACE (EMEA, AMER, APAC)", "pageID": "164470161", "pageLink": "/pages/viewpage.action?pageId=164470161", "content": "

Contacts

DL-AIS-Mule-Integration-Support@COMPANY.com

Requirements

Partial requirements

Sent by Amish Adhvaryu

action needed

Need Plugin Configuration for below usernames

username

GRACE MAVENS SFDC - DEV - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - Dev
GRACE MAVENS SFDC - STG - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - Stage
GRACE MAVENS SFDC - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - Prod

countries

AU,NZ,IN,JP,KR (APAC) and AR, UY, MX (AMER)

tenant

APAC and AMER

environments (prod/nonprods/all)

ALL

API services exposed

HCP HCO MCO Search, Lookups

Sources

Grace

Business justification

Client ID used by GRACE application to search HCP and HCOs

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRACE user
grace
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA",
"BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY",
"BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY",
"CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO",
"FR","GB","GD","GF","GL","GP","GR","GT","GY","HK",
"HN","HR","HU","ID","IE","IL","IN","IT","JM","JP",
"KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF",
"MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA",
"PE","PF","PH","PK","PL","PM","PN","PT","PY","RE",
"RO","RS","RU","SA","SE","SG","SI","SK","SR","SV",
"SX","TF","TH","TN","TR","TT","TW","UA","US","UY",
"VE","VG","VN","WF","XX","YT","ZA"]
GB
["NONE"]
N/A
GRACE User
grace
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA",
"BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY",
"BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY",
"CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO",
"FR","GB","GD","GF","GL","GP","GR","GT","GY","HK",
"HN","HR","HU","ID","IE","IL","IN","IT","JM","JP",
"KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF",
"MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA",
"PE","PF","PH","PK","PL","PM","PN","PT","PY","RE",
"RO","RS","RU","SA","SE","SG","SI","SK","SR","SV",
"SX","TF","TH","TN","TR","TT","TW","UA","US","UY",
"VE","VG","VN","WF","XX","YT"]
GB
["NONE"]
N/A

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRACE user
grace
External OAuth2 (all)
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["CA","US","AR","UY","MX"]

["NONE"]
N/A
External OAuth2 (amer-dev)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
External OAuth2 (gblus-stage)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
External OAuth2 (amer-stage)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
GRACE User
grace
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF",
"GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC",
"NZ","PF","PM","RE","SA","TR","US","UY"]

["NONE"]
N/A


APAC

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

GRACE user
grace
External OAuth2 (all)
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AR","AU","BR","CA","HK","ID","IN","JP","KR","MX",
"MY","NZ","PH","PK","SG","TH","TW","US","UY","VN"]

["NONE"]
N/A
External OAuth2 (apac-stageb469b84094724d74adb9ff7224588647
GRACE User
grace
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

["GET_ENTITIES","LOOKUPS"]
["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF",
"GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC",
"NZ","PF","PM","RE","SA","TR","US","UY"]

["NONE"]
N/A
" }, { "title": "Snowflake (Global, GBLUS)", "pageID": "164469783", "pageLink": "/pages/viewpage.action?pageId=164469783", "content": "

Contacts

Narayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

Snowflake topicSnowflake TopicKafka JAASN/A
exchange.in.headers.eventType in ['full_not_trimmed']
exchange.in.headers.objectType in ['HCP', 'HCO', 'MCO', 'RELATIONSHIP'])
||
(exchange.in.headers.eventType in ['simple'] && exchange.in.headers.objectType in ['ENTITY']))
ALLALL
prod-out-full-snowflake-all

Flows

Snowflake participate in two flows:

  1. Snowflake: Events publish flow
    Event publisher pushes all events regarding entity/relation change to Kafka topic that is created for Snowflake( {{$env}}-out-full-snowflake-all }} ). Then Kafka Connect component pulls those events and loads them to Snowflake table(Flat model).
  2. Reconciliation
    Main goal of reconciliation process is to synchronise Snowflake database with MongoDB.
    Snowflake periodically exports entities and creates csv file with their identifiers and checksums. The file is sent to S3 from where it is then downloaded in the reconciliation process. This process compares the data in the file with the values stored in Mongo.
    A reconciliation event is created and posted on kafka topic in two cases:

    1. the cheksum has changed
    2. there is lack of entity in csv file

Client software 

Kafka Connect is responsible for collecting kafka events and loading them to Snowflake database in flat model.

SOPs

Currently there are no SOPs for snowflake.

" }, { "title": "Vaccine (GBLUS)", "pageID": "164469863", "pageLink": "/pages/viewpage.action?pageId=164469863", "content": "

Contacts

Vajapeyajula, Venkata Kalyan Ram <Kalyan.Vajapeyajula@COMPANY.com>

BAVISHI, MONICA <MONICA.BAVISHI@COMPANY.com>

Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>

Garg, Nalini <Nalini.Garg@COMPANY.com>

Shah, Himanshu <Himanshu.Shah@COMPANY.com>

Flows


FlowDescription
Snowflake: Events publish flowEvents AUTO_LINK_FOUND and POTENTIAL_LINK_FOUND are published to snowflake
Snowflake: Base tables refresh

MATCHES table is refreshed (every 2 hours in prod) with those events

Snowflake MDMMATCHES table are read by an ETL process implemented by COMPANY Team 
ETL Batches

The ETL process creates relations like  SAPtoHCOSAffiliations. FlextoDDDAffiliations, FlextoHCOSAffiliations through the Batch Channel


NotMatch CallbackFor created relations, the NotMatch callback is triggered and removes LINKS using NotMatch Reltio calls

Client software 

ACLs

NameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopic
DerivedAffilations Batch Load user

derivedaffiliations_load

N/AN/A
- "CREATE_RELATION"
- "UPDATE_RELATION"
- US
*

" }, { "title": "ICUE (AMER)", "pageID": "172301085", "pageLink": "/pages/viewpage.action?pageId=172301085", "content": "

Contacts

Gateway

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

ICUE user (NPROD)
icue
External OAuth2

ICUE-MDM_client

["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]
["US"]

["ICUE"]
consumer:
regex:
- "^.*-out-full-icue-all$"
- "^.*-out-full-icue-grv-all$"
groups:
- icue_dev
- icue_qa
- icue_stage
- dev_icue_grv
- qa_icue_grv
- stage_icue_grv
ICUE user (PROD)
icue
External OAuth2
ICUE-MDM_client
["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]
["US"]

["ICUE"]
consumer:
regex:
- "^.*-out-full-icue-all$"
- "^.*-out-full-icue-grv-all$"
groups:
- icue_prod
- prod_icue_grv

Kafka

GBLUS (icue-grv-mule)

Name

Kafka Username

Consumergroup

Publisher routing rule

Topic

Partitions
icue - DEV
icue_nprod

"exchange.in.headers.eventType in ['full_not_trimmed']
&& exchange.in.headers.objectType in ['HCP']
&& ['GRV'].intersect(exchange.in.headers.eventSource)
&& !(['ICUE'].intersect(exchange.in.headers.eventSource))
&& exchange.in.headers.eventSubtype in ['HCP_CREATED', 'HCP_CHANGED']"
${local_env}-out-full-icue-grv-all"

icue - QA
icue_nprod

${local_env}-out-full-icue-grv-all

icue - STAGE
icue_nprod

${local_env}-out-full-icue-grv-all

icue  - PROD
icuex_prod

${env}-out-full-icue-grv-all

Flows

Client software 


" }, { "title": "ESAMPLES (GBLUS)", "pageID": "172301089", "pageLink": "/pages/viewpage.action?pageId=172301089", "content": "

Contacts

Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>

Jain, Somya <Somya.Jain@COMPANY.com>

Bablani, Vijay <Vijay.Bablani@COMPANY.com>

Reynolds, Lori <Lori.Reynolds@COMPANY.com>

ACLs

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

Sources

Topic

MuleSoft - esamples user
esamples

OAuth2

●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
- "GET_ENTITIES"
US
all_sources

N/A

Flows

Client software 


" }, { "title": "VEEVA_FIELD (EMEA, AMER)", "pageID": "172301091", "pageLink": "/pages/viewpage.action?pageId=172301091", "content": "

Contacts

Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>

Fani, Chris <Christopher.Fani@COMPANY.com>

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

VEEVA_FIELD user (NPROD)
veeva_field
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA",
"BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY",
"BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY",
"CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO",
"FR","GB","GF","GL","GP","GR","GT","GY","HK","HN",
"HR","HU","ID","IE","IL","IN","IT","JM","JP","KR",
"KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ",
"MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE",
"PF","PH","PK","PL","PM","PN","PT","PY","RE","RO",
"RS","RU","SA","SE","SG","SI","SK","SV","SX","TF",
"TH","TN","TR","TT","TW","UA","UY","VE","VG","VN",
"WF","XX","YT"]
GB
["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY",
"CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP",
"GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH",
"KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK",
"ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit",
"SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]
N/A
VEEVA_FIELD user (PROD)
veeva_field
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA",
"BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY",
"BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY",
"CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO",
"FR","GB","GF","GL","GP","GR","GT","GY","HK","HN",
"HR","HU","ID","IE","IL","IN","IT","JM","JP","KR",
"KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ",
"MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE",
"PF","PH","PK","PL","PM","PN","PT","PY","RE","RO",
"RS","RU","SA","SE","SG","SI","SK","SV","SX","TF",
"TH","TN","TR","TT","TW","UA","UY","VE","VG","VN",
"WF","XX","YT"]
GB
["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY",
"CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP",
"GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH",
"KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY",
"PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP",
"SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]
N/A

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

VEEVA_FIELD   user (NPROD)
veeva_field
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["CA", "US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]
N/A

External OAuth2

(GBLUS-STAGE)

55062bae02364c7598bc3ffbfe38e07b
VEEVA_FIELD user (PROD)
veeva_field
External OAuth2 (ALL)
67b77aa7ecf045539237af0dec890e59
726b6d341f994412a998a3e32fdec17a
["GET_ENTITIES","LOOKUPS"]
["CA", "US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]
N/A

Flows

Client software 


" }, { "title": "PFORCEOL (EMEA, AMER, APAC)", "pageID": "172301093", "pageLink": "/pages/viewpage.action?pageId=172301093", "content": "

Contacts

Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>

Fani, Chris <Christopher.Fani@COMPANY.com>

Requirements

Partial requirements

Sent by Amish Adhvaryu

PforceOL Dev - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●





























































PforceOL Stage - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●





























































PforceOL Prod - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●





























































 PT RO DK BR IL TR GR NO CA JP MX AT AR RU KR DE PL AU HK IN MY PH SG TW TH ES CZ LT UA VN ID KZ HU SK UK SE FI CH SA EG MA ZA BE NL IT DZ CO NZ PE CL EE HR LV RS TN US CN SI FR BG IR WA PK

New Requirements - October 2024

Action needed

Need Access to PFORCEOL - DEV, PFORCEOL - QA, PFORCEOL - STG, PFORCEOL - PROD

PingFederate username

DEV & QA: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
STG: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
PROD: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

Countries

AC, AE, AG, AI, AR, AT, AU, AW, BB, BE, BH, BM, BR, BS, BZ, CA, CH, CN, CO, CR, CU, CW, CY, CZ, DE, DK, DM, DO, DZ, EG, ES, FI, FK, FR, GB, GD, GF, GP, GR, GT, GY, HK, HN, HT, ID, IE, IL, IN, IT, JM, JP, KN, KR, KW, KY, LC, LU, MF, MQ, MS, MX, MY, NI, NL, NO, NZ, OM, PA, PH, PL, PT, QA, RO, SA, SE, SG, SK, SR, SV, SX, TC, TH, TR, TT, TW, UE, UK, US, VC, VG, VN, YE, ZA

AJ: "Keep the other countries for now"

Full list:

AC, AD, AE, AG, AI, AM, AN, AR, AT, AU, AW, BA, BB, BE, BG, BH, BL, BM, BO, BQ, BR, BS, BY, BZ, CA, CH, CL, CN, CO, CP, CR, CU, CW, CY, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, FK, FO, FR, GB, GD, GF, GL, GP, GR, GT, GY, HK, HN, HR, HT, HU, ID, IE, IL, IN, IR, IT, JM, JP, KN, KR, KW, KY, KZ, LC, LT, LU, LV, MA, MC, MF, MQ, MS, MU, MX, MY, NC, NI, NL, NO, NZ, OM, PA, PE, PF, PH, PK, PL, PM, PN, PT, PY, QA, RE, RO, RS, RU, SA, SE, SG, SI, SK, SR, SV, SX, TC, TF, TH, TN, TR, TT, TW, UA, UE, UK, US, UY, VC, VE, VG, VN, WA, WF, XX, YE, YT, ZA

Tenant

AMER, EMEA, APAC, US, EX-US

Environments

DEV, QA, STG, PROD

Permissions range

Read access for HCP Search and HCO Search and MCO Search

Sources

Sources that are configured in OneMed:
MAPP, ONEKEY,OK, PFORCERX_ODS, PFORCERX, VOD, LEGACY_SFA_IDL, PTRS, JPDWH, iCUE, IQVIA_DDD, DCR_SYNC, MDE, MEDPAGESHCP, MEDPAGESHCO

Business justification

These changes are required as part of OneMed 2.0 Transformation Project. This project is responsible to ensure an improvised system due to which the proposed changes will help the OneMed technical team to build a better solution to search for HCP/HCO data within MDM system through API integration.

Point of contact

Anvesh (anveshvedula.chalapati@COMPANY.com), Aparna (aparna.balakrishna@COMPANY.com)

Excel sheet with countries: \"\"

Gateway

EMEA

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PFORCEOL user (NPROD)
pforceol
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW",
"BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS",
"BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW",
"CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES",
"FI","FO","FR","GB","GF","GL","GP","GR","GT","GY",
"HK","HN","HR","HU","ID","IE","IL","IN","IR","IT",
"JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA",
"MC","MF","MQ","MU","MX","MY","NC","NI","NL","false",
"NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT",
"PY","RE","RO","RS","RU","SA","SE","SG","SI","SK",
"SV","SX","TF","TH","TN","TR","TT","TW","UA","UK",
"US","UY","VE","VG","VN","WA","WF","XX","YT","ZA"]
GB
["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY",
"CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP",
"GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH",
"KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK",
"ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit",
"SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]
N/A
PFORCEOL user (PROD)
pforceol
External OAuth2
- ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW",
"BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS",
"BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW",
"CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES",
"FI","FO","FR","GB","GF","GL","GP","GR","GT","GY",
"HK","HN","HR","HU","ID","IE","IL","IN","IR","IT",
"JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA",
"MC","MF","MQ","MU","MX","MY","NC","NI","NL","false",
"NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT",
"PY","RE","RO","RS","RU","SA","SE","SG","SI","SK",
"SV","SX","TF","TH","TN","TR","TT","TW","UA","UK",
"UY","VE","VG","VN","WA","WF","XX","YT","ZA"]
GB
["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY",
"CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP",
"GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH",
"KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY",
"PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP",
"SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]
N/A

AMER

Name

Gateway User Name

Authentication

Ping Federate User

Roles

Countries

DefaultCountry

Sources

Topic

PFORCEOL  user (NPROD)
pforceol
External OAuth2
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
["GET_ENTITIES","LOOKUPS"]
["CA", "US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]
N/A

External OAuth2

(GBLUS-STAGE)

223ca6b37aef4168afaa35aa2cf39a3e
PFORCEOL user (PROD)
pforceol
External OAuth2 (ALL)
e678c66c02c64b599b351e0ab02bae9f
e6ece8da20284c6987ce3b8564fe9087
["GET_ENTITIES","LOOKUPS"]
["CA", "US"]

["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI",
"DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE",
"GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO",
"IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC",
"MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM",
"PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]
N/A

Flows

Client software 


" }, { "title": "1CKOL (Global)", "pageID": "184688633", "pageLink": "/pages/viewpage.action?pageId=184688633", "content": "

Contacts:

Kucherov, Aleksei <Aleksei.Kucherov@COMPANY.com>; Moshin, Nikolay <Nikolay.Moshin@COMPANY.com>

Old Contacts:

Data load support:

First Name: Ilya

Last Name: Enkovich

Office:  ●●●●●●●●●●●●●●●●●●

Mob: ●●●●●●●●●●●●●●●●●●

Internet: www.unit-systems.ru

E-mail: enkovich.i.s@unit-systems.ru


Backup contact:

First Name: Sergey

Last Name: Portnov

Office: ●●●●●●●●●●●●●●●●●●

Mob: ●●●●●●●●●●●●●●●●●●

Internet: www.unit-systems.ru

E-mail: portnov.s.a@unit-systems.ru

Flows

1CKOL has one batch process which consumes export files from data warehouse, process this, and loads data to MDM. This process is base on incremental batch engine and run on Airflow platform.


Input files

The input files are delivered by 1CKOL to AWS S3 bucket

MAPP Review - Europe - 1cKOL - All Documents (sharepoint.com)


UATPROD
S3 service accountsvc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3svc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3
S3 Access key IDAKIATCTZXPPJXRNSDOGNAKIATCTZXPPJXRNSDOGN
S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-project
S3 Foldermdm/UAT/inbound/KOL/RU/mdm/inbound/KOL/RU/
Input data file mask KOL_Extract_Russia_[0-9]+.zipKOL_Extract_Russia_[0-9]+.zip
Compressionzipzip
FormatFlat files, 1CKOL dedicated format Flat files, 1CKOL dedicated format 

Example

KOL_Extract_Russia_07212021.zipKOL_Extract_Russia_07212021.zip
Schedulenonenone
Airflow job inc_batch_eu_kol_ru_stage inc_batch_eu_kol_ru_prod

Data mapping 

Data mapping is described in the attached document.

\"\"

Configuration

Flow configuration is stored in MDM Environment configuration repository. For each environment where the flow should be enabled the configuration file inc_batch_eu_kol_ru.yml has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_eu_kol_ru" has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Below table prresents the location of inc_batch_jp.yml file for Test, Dev, Mapp, Stage and PROD envs:


inc_batch_eu_kol_ru
UAThttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.yml
PRODhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.yml

Applying configuration changes is done by executing the deploy Airflow's components procedure.

SOPs


There is no particular SOP procedure for this flow. All common SOPs was described in the "Incremental batch flows: SOP" chapter.



" }, { "title": "Snowflake MDM Data Mart", "pageID": "164470197", "pageLink": "/display/GMDM/Snowflake+MDM+Data+Mart", "content": "

The section describes   MDM Data Mart in Snowflake. The Data Mart contains MDM data from Reltio tenants published into Snowflake via MDM HUB.

\"\"



Roles, permissions, warehouses used in MDM Data Mart in Snowflake:
NewMdmSfRoles_231017.xlsx

" }, { "title": "Connect Guide", "pageID": "196886695", "pageLink": "/display/GMDM/Connect+Guide", "content": "


How to add a user to the DATA Role: 

 Users accessing snowflake have to create a ticket and add themselves to the DATA role. This will allow the user to view CUSTOMER_SL schema (users access layer to Snowflake):

  1. Go to https://requestmanager.COMPANY.com/
  2. Click on the TOP: "Group Manager" - https://requestmanager1.COMPANY.com/Group/Default.aspx
  3. Click on the "Distribution Lists"
  4. Search for the correct group you want to be added. Check the group name here: "List Of Groups With Access To The DataMart
    1. \"\"
  5. In the search write the "AD Group Name" for selected SF Instance.
  6. Click Request Access
    1. \"\"
  7. Click "Add Myself" and then save 
    1. \"\"
  8. Go to "Cart" and click "Submit Request"
    1. \"\"

How to connect to the DB:


List Of Groups With Access To The DataMart

Since October 2023

NewMdmSfRoles_231017 1.xlsx

[Expired Oct 2023] Groups that have access to CUSTOMER_SL schema:

Role NameSF InstanceDB InstanceEnvAD Group Name
COMM_AMER_MDM_DMART_DEV_DATA_ROLEAMERAMERDEVsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_DEV_DATA_ROLE
COMM_AMER_MDM_DMART_QA_DATA_ROLEAMERAMERQAsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_QA_DATA_ROLE
COMM_AMER_MDM_DMART_STG_DATA_ROLEAMERAMERSTAGEsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_STG_DATA_ROLE
COMM_AMER_MDM_DMART_PROD_DATA_ROLEAMERAMERPRODsfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLE
COMM_MDM_DMART_DEV_DATA_ROLEAMERUSDEVsfdb_us-east-1_amerdev01_COMM_DEV_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_QA_DATA_ROLEAMERUSQAsfdb_us-east-1_amerdev01_COMM_QA_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_STG_DATA_ROLEAMERUSSTAGEsfdb_us-east-1_amerdev01_COMM_STG_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_PROD_DATA_ROLEAMERUSPRODsfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLE
COMM_APAC_MDM_DMART_DEV_DATA_ROLEEMEAAPACDEVsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_DEV_DATA_ROLE
COMM_APAC_MDM_DMART_QA_DATA_ROLEEMEAAPACQAsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_QA_DATA_ROLE
COMM_APAC_MDM_DMART_STG_DATA_ROLEEMEAAPACSTAGEsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_STG_DATA_ROLE
COMM_APAC_MDM_DMART_PROD_DATA_ROLEEMEAAPACPRODsfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLE
COMM_EMEA_MDM_DMART_DEV_DATA_ROLEEMEAEMEADEVsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_DEV_DATA_ROLE
COMM_EMEA_MDM_DMART_QA_DATA_ROLEEMEAEMEAQAsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_QA_DATA_ROLE
COMM_EMEA_MDM_DMART_STG_DATA_ROLEEMEAEMEASTAGEsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_STG_DATA_ROLE
COMM_EMEA_MDM_DMART_PROD_DATA_ROLEEMEAEMEAPRODsfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLE
COMM_MDM_DMART_DEV_DATA_ROLEEMEAEUDEVsfdb_eu-west-1_emeadev01_COMM_DEV_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_QA_DATA_ROLEEMEAEUQAsfdb_eu-west-1_emeadev01_COMM_QA_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_STG_DATA_ROLEEMEAEUSTAGEsfdb_eu-west-1_emeadev01_COMM_STG_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_PROD_DATA_ROLEEMEAEUPRODsfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLE
COMM_GBL_MDM_DMART_DEV_DATA_ROLEEMEAGBLDEVsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_DATA_ROLE
COMM_GBL_MDM_DMART_QA_DATA_ROLEEMEAGBLQAsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_DATA_ROLE
COMM_GBL_MDM_DMART_STG_DATA_ROLEEMEAGBLSTAGEsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_DATA_ROLE
COMM_GBL_MDM_DMART_PROD_DATA_ROLEEMEAGBLPRODsfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLE



" }, { "title": "Data model", "pageID": "196886989", "pageLink": "/display/GMDM/Data+model", "content": "


The data mart contains MDM data in object & relational data models. The fragment of the model is presented in the picture below. 

The object data model includes the latest version of Reltio JSON documents representing entities, relationships, lovs, merge-tree. They are loaded into  ENTITIES, RELATIONS, LOV_DATA, MERGES, MATCHES tables. 
They are loading from Reltio using a HUB streaming interface described here.

The object model is transformed into the relation model by a set of dynamic views using Snowflake JSON processing query language. Dynamic views are generated dynamically from the Retlio data model. The regeneration process is maintained in Jenkins and triggered weekly or on-demand.  The generation process starts from root objects like HCP, HCO, walks through JSON tree and generates views with the following rules:  


\"The

Model versions

There are two versions of Reltio data model maintained in the data mart:

Key generation strategy

Object model:

ObjectsKey columnsDescription
ENTITIES, MATCHES MERGESentity_uri, country*Reltio entity unique identifier and country
RELATIONSrelation_uri, country*Reltio relationship unique identifier & country
LOV_DATAid, mdm_region*the concatenation of Reltio LOV name + ':'+ canonical code as id & mdm region

  * - only in global data mart

Relational model:


ObjectsKey columnsDescription
root objects like HCP, HCO, MCO, MERGE_HISTORY, MATCH_HISTORYentity_uri, country*Reltio entity unique identifier and country
AFFILIATIONSrelation_uri, country*Reltio relationship unique identifier and country
child views for nested attributes Addresses, Specialties ...parent view keys, nested attribute uri, country* parent view keys + nested attribute uri  + country

  * - only in global data mart


Schemas:


MDM Data Mart contains the following schemas:

Schema nameDescription
LANDINGSchemas used by HUB ETL processes as stage area
CUSTOMERMain schema containing data mart data 
CUSTOMER_SLAccess schema to CUSTOMER schema data
AES_RS_SLContains views presenting data in Redshift data model




" }, { "title": "AES_RS_SL", "pageID": "203229895", "pageLink": "/display/GMDM/AES_RS_SL", "content": "

The schema contains a set of views that mimic MDM DataMart from Redshift. 

The views integrate both data models COMPANY and IQIVIA and present data from all countries available in Reltio.


Differences from original Redshift mart




" }, { "title": "CUSTOMER schema", "pageID": "163919161", "pageLink": "/display/GMDM/CUSTOMER+schema", "content": "

This is the main schema containing MDM data in two formats.

Object model that represents Reltio JSON format. Data in the format are kept in ENTITIES , RELATIONS, MERGE_TREE tables. 

Relation model is created as a part of views (standard or materialized) derived from the object model. Most of the views are generated in an automated way based on Reltio Data Model configuration. They directly reflect Relito object model. There are two sets of views as there are two models in Reltio: COMPANY and Iqivia,  Those views can change dynamically as Reltio config is updated.




\n\n \n \n \n\n
\n \n \n \n\n \n \n\n \n \n \n\n \n \n \n \n \n \n\n
\n \n
\n
\n

" }, { "title": "Customer base objects", "pageID": "164470194", "pageLink": "/display/GMDM/Customer+base+objects", "content": "

ENTITIES

Keeps Reltio entity objects.

Column | Type | Description
ENTITY_URI | TEXT | Reltio entity URI
COUNTRY | TEXT | Country
ENTITY_TYPE | TEXT | Entity type, for example: HCO, HCP
ACTIVE | BOOLEAN | Active flag
CREATE_TIME | TIMESTAMP_LTZ | Create time
UPDATE_TIME | TIMESTAMP_LTZ | Update time
OBJECT | VARIANT | JSON object
LAST_EVENT_TYPE | TEXT | The last event type that updated the JSON object
LAST_EVENT_TIME | TIMESTAMP_LTZ | Last event time
PARENT | TEXT | Parent entity URI
CHECKSUM | NUMBER | Checksum
COMPANY_GLOBAL_CUSTOMER_ID | TEXT | Entity COMPANY Global Id
PARENT_COMPANY_GLOBAL_CUSTOMER_ID | TEXT | For a lost merge, stores the COMPANY Global Id of the winner entity; otherwise empty


HIST_INACTIVE_ENTITIES

Used for historical inactive OneKey crosswalks. The structure is an exact copy of the ENTITIES table above (same columns and types).

RELATIONS

Keeps Reltio relation objects.

Column | Type | Description
RELATION_URI | TEXT | Reltio relation URI
COUNTRY | TEXT | Country
RELATION_TYPE | TEXT | Relation type
ACTIVE | BOOLEAN | Active flag
CREATE_TIME | TIMESTAMP_LTZ | Create time
UPDATE_TIME | TIMESTAMP_LTZ | Update time
START_ENTITY_URI | TEXT | Source entity URI
END_ENTITY_URI | TEXT | Target entity URI
OBJECT | VARIANT | JSON object
LAST_EVENT_TYPE | TEXT | The last event type that modified the record
LAST_EVENT_TIME | TIMESTAMP_LTZ | Last event time
PARENT | TEXT | Not used
CHECKSUM | NUMBER | Checksum

MATCHES

The table presents active and historical matches found in Reltio for all entities.


Column | Type | Description
ENTITY_URI | TEXT | Reltio entity URI
TARGET_ENTITY_URI | TEXT | Reltio entity URI that ENTITY_URI matches to
MATCH_TYPE | TEXT | Match type
MATCH_RULE_NAME | TEXT | Match rule name
COUNTRY | TEXT | Country
LAST_EVENT_TYPE | TEXT | The last event type that modified the record
LAST_EVENT_TIME | TIMESTAMP_LTZ | Last event time
LAST_EVENT_CHECKSUM | NUMBER | The last event checksum
ACTIVE | BOOLEAN | Active flag

MATCH_HISTORY

The view shows match history for active and inactive matches, enriched with merge data. Merge info is available for matches that were inactivated by a merge action triggered by users or by Reltio background processes.

Column | Type | Description
ENTITY_URI | TEXT | Reltio entity URI
TARGET_ENTITY_URI | TEXT | Reltio entity URI that ENTITY_URI matches to
MATCH_TYPE | TEXT | Match type
MATCH_RULE_NAME | TEXT | Match rule name
COUNTRY | TEXT | Country
LAST_EVENT_TYPE | TEXT | The last event type that modified the record
LAST_EVENT_TIME | TIMESTAMP_LTZ | Last event time
LAST_EVENT_CHECKSUM | NUMBER | The last event checksum
ACTIVE | BOOLEAN | Active flag
MERGED | BOOLEAN | Merge indicator; true indicates that a merge happened for the match
MERGE_REASON | TEXT | Merge reason
MERGE_USER | TEXT | Reltio user name or process name that executed the merge
MERGE_DATE | TIMESTAMP_LTZ | Merge date
MERGE_RULE | TEXT | Merge rule that triggered the merge

MERGES

The table presents active merges found in Reltio, based on the merge_tree export.

Column | Type | Description
ENTITY_URI | TEXT | Reltio entity URI
LAST_UPDATE_TIME | TIMESTAMP_LTZ | Date of the last update of the row
CREATE_TIME | TIMESTAMP_LTZ | Creation date of the row
OBJECT | VARIANT | JSON object

MERGE_HISTORY

The view shows merge history for active entities. The view is built from the merge_tree Reltio export.

Column | Type | Description
ENTITY_URI | TEXT | Reltio entity URI
LOSER_ENTITY_URI | TEXT | Reltio entity URI of the merge loser
MERGE_REASON | TEXT | Merge reason (see the values below)
MERGE_RULE | TEXT | Merge rule that triggered the merge
USER | TEXT | User name that executed the merge
MERGE_DATE | TIMESTAMP_LTZ | Merge date

MERGE_REASON values:
- Merge on the fly: automatic match rules were able to find matches for a newly added entity, so the new entity was not created as a separate entity in the platform but was merged into an existing one instead.
- Merge by crosswalks: if a newly added entity has the same crosswalk as an existing entity in the platform, such entities are merged automatically on the fly, because the Reltio platform does not allow multiple entities with the same crosswalk.
- Automatic merge by crosswalks: sometimes two entities with the same crosswalk may exist in the platform (simultaneously added entities). In this case, such entities are merged automatically by a special background thread.
- Group merge (matches found on object creation): several entities are grouped into one merge request because all of them will be merged at the same time to create a single entity in the platform. The reason for a group merge can be an automatic match rule, the same crosswalk, or both.
- Merges found by background merge process: the background match thread (incremental match processor) modifies entities as a result of create/change/remove events and performs a rematch. During the rematch, if some entities match using the automatic match rules, they are merged.
- Merge by hand: a merge performed by a user through the API or from the UI by going through the potential matches.
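A short example of how MERGE_HISTORY can be used, e.g. to trace which surviving entity a lost profile was merged into (a sketch; the URI literal is illustrative):

```sql
-- Sketch: find the winner entity and merge metadata for a given lost profile.
SELECT entity_uri AS winner_entity_uri,
       loser_entity_uri,
       merge_reason,
       merge_rule,
       merge_date
FROM CUSTOMER.MERGE_HISTORY
WHERE loser_entity_uri = 'entities/abc123'  -- illustrative Reltio URI
ORDER BY merge_date DESC;
```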

ENTITY_HISTORY

Keeps event history for entities and relations.

Column | Type | Description
EVENT_KEY | TEXT | Event key
EVENT_PARTITION | NUMBER | Kafka partition number
EVENT_OFFSET | NUMBER | Kafka offset
EVENT_TOPIC | TEXT | Name of the Kafka topic where this event is stored
EVENT_TIME | TIMESTAMP_LTZ | Timestamp when the event was generated
EVENT_TYPE | TEXT | Event type
COUNTRY | TEXT | Country
ENTITY_URI | TEXT | Reltio entity URI
CHECKSUM | NUMBER | Checksum

LOV_DATA

Keeps LOV objects.

Column | Type | Description
ID | TEXT | LOV identifier
OBJECT | VARIANT | Reltio RDM object in JSON format

CODES

Column | Type | Description
SOURCE | TEXT | Source MDM system name
CODE_ID | TEXT | Code id, generated by concatenating the LOV name and the canonical code
CANONICAL_CODE | TEXT | Canonical code
LOV_NAME | TEXT | LOV (dictionary) name
ACTIVE | BOOLEAN | Active flag
DESC | TEXT | English description
COUNTRY | TEXT | Code country
PARENTS | TEXT | Parent code id

CODE_TRANSLATIONS

RDM code translations.

Column | Type | Description
SOURCE | TEXT | Source MDM system name
CODE_ID | TEXT | Code id
CANONICAL_CODE | TEXT | Canonical code
LOV_NAME | TEXT | LOV (dictionary) name
ACTIVE | BOOLEAN | Active flag
LANG_CODE | TEXT | Language code
LAND_DESC | TEXT | Language description
COUNTRY | TEXT | Country

CODE_SOURCE_MAPPINGS

Source code mappings to canonical codes in Reltio RDM.

Column | Type | Description
SOURCE | TEXT | Source MDM system name
CODE_ID | TEXT | Code id
SOURCE_NAME | TEXT | Source name
SOURCE_CODE | TEXT | Source code
ACTIVE | BOOLEAN | Active flag (true - active, false - inactive)
IS_CANONICAL | BOOLEAN | Is canonical
COUNTRY | TEXT | Country
LAST_MODIFIED | TIMESTAMP_LTZ | Last modified date
PARENT | TEXT | Parent code
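For example, translating a source system code into its canonical RDM code and English description can be done by joining the two tables above (a sketch; the source and code literals are illustrative):

```sql
-- Sketch: resolve a source code to its canonical code and description.
SELECT m.source_code,
       c.canonical_code,
       c."DESC" AS description          -- DESC is a reserved word, hence quoted
FROM CUSTOMER.CODE_SOURCE_MAPPINGS m
JOIN CUSTOMER.CODES c
  ON c.code_id = m.code_id
WHERE m.source_name = 'ONEKEY'          -- illustrative source name
  AND m.source_code = 'SP.WUS.08'       -- illustrative source code
  AND m.active;
```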

ENTITY_CROSSWALKS

Keeps entity crosswalks.

Column | Type | Description
CROSSWALK_URI | TEXT | Crosswalk URI
ENTITY_URI | TEXT | Entity URI
ENTITY_TYPE | TEXT | Entity type
ACTIVE | BOOLEAN | Active flag
TYPE | TEXT | Crosswalk type
VALUE | TEXT | Crosswalk value
SOURCE_TABLE | TEXT | Source table
CREATE_DATE | TIMESTAMP_NTZ | Create date
UPDATE_DATE | TIMESTAMP_NTZ | Update date
RELTIO_LOAD_DATE | TIMESTAMP_NTZ | Date when this crosswalk was loaded to Reltio
DELETE_DATE | TIMESTAMP_NTZ | Delete date
COMPANY_GLOBAL_CUSTOMER_ID | TEXT | Entity COMPANY Global Id
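A typical use of ENTITY_CROSSWALKS is resolving a source identifier to its Reltio entity (a sketch; the crosswalk type and value are illustrative):

```sql
-- Sketch: find the Reltio entity behind a given source identifier.
SELECT entity_uri,
       entity_type,
       value AS crosswalk_value
FROM CUSTOMER.ENTITY_CROSSWALKS
WHERE type = 'configuration/sources/ONEKEY'  -- illustrative crosswalk type
  AND value = 'WUSM00422906'                 -- illustrative source id
  AND active;
```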

RELATION_CROSSWALKS

Keeps relation crosswalks.

Column | Type | Description
CROSSWALK_URI | TEXT | Crosswalk URI
RELATION_URI | TEXT | Relation URI
RELATION_TYPE | TEXT | Relation type
ACTIVE | BOOLEAN | Active flag
TYPE | TEXT | Crosswalk type
VALUE | TEXT | Crosswalk value
SOURCE_TABLE | TEXT | Source table
CREATE_DATE | TIMESTAMP_NTZ | Create date
UPDATE_DATE | TIMESTAMP_NTZ | Update date
DELETE_DATE | TIMESTAMP_NTZ | Delete date
RELTIO_LOAD_DATE | TIMESTAMP_NTZ | Date when this relation was loaded to Reltio

ATTRIBUTE_SOURCE

Presents information about which crosswalk provided a given attribute.

The view can be joined with the views for nested attributes to also retrieve the attribute values, as shown in the example below.

Column | Type | Description
ATTRIBUTE_URI | TEXT | Attribute URI
ENTITY_URI | TEXT | Entity URI
ACTIVE | BOOLEAN | Is entity active
TYPE | TEXT | Crosswalk type
VALUE | TEXT | Crosswalk value
SOURCE_TABLE | TEXT | Crosswalk source table
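A sketch of such a join, pairing ATTRIBUTE_SOURCE with the SPECIALITIES view described later in this chapter (the join key, the nested attribute URI, is an assumption based on the key strategy above):

```sql
-- Sketch: specialty values together with the crosswalk that provided them.
SELECT s.entity_uri,
       s.specialty,
       a.type  AS crosswalk_type,
       a.value AS crosswalk_value
FROM CUSTOMER.SPECIALITIES s
JOIN CUSTOMER.ATTRIBUTE_SOURCE a
  ON a.attribute_uri = s.specialities_uri;  -- assumed join key: the nested attribute URI
```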


ENTITY_UPDATE_DATES

Presents information about create/update dates of entities in Reltio MDM and Snowflake.

The view can be used to query records updated in a period of time, including root objects like HCP, HCO, MCO and child objects like IDENTIFIERS, SPECIALTIES, ADDRESSES etc. (see the example below).

Column | Type | Description
ENTITY_URI | TEXT | Entity URI
ACTIVE | BOOLEAN | Is entity active
ENTITY_TYPE | TEXT | Type of entity
COUNTRY | TEXT | Country ISO code
MDM_CREATE_TIME | TIMESTAMP_LTZ | Entity create time in Reltio
MDM_UPDATE_TIME | TIMESTAMP_LTZ | Entity update time in Reltio
SF_CREATE_TIME | TIMESTAMP_LTZ | Entity create time in Snowflake DB
SF_UPDATE_TIME | TIMESTAMP_LTZ | Entity last update time in Snowflake
LAST_EVENT_TIME | TIMESTAMP_LTZ | Last Kafka event timestamp
CHECKSUM | NUMBER | Checksum
COMPANY_GLOBAL_CUSTOMER_ID | TEXT | Entity COMPANY Global Id
PARENT_COMPANY_GLOBAL_CUSTOMER_ID | TEXT | For a lost merge, stores the COMPANY Global Id of the winner entity; otherwise empty
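A sketch of an incremental extract based on this view (the entity type and time window are illustrative):

```sql
-- Sketch: HCP profiles updated in Snowflake during the last 7 days.
SELECT entity_uri,
       country,
       sf_update_time
FROM CUSTOMER.ENTITY_UPDATE_DATES
WHERE entity_type = 'HCP'
  AND sf_update_time >= DATEADD(day, -7, CURRENT_TIMESTAMP());
```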

RELATION_UPDATE_DATES

Presents information about create/update dates of relations in Reltio MDM and Snowflake.

The view can be used to query all entries updated in a period of time from AFFILIATIONS and child objects like AFFIL_RELATION_TYPE.

Column | Type | Description
RELATION_URI | TEXT | Relation URI
ACTIVE | BOOLEAN | Is relation active
RELATION_TYPE | TEXT | Type of relation
COUNTRY | TEXT | Country ISO code
MDM_CREATE_TIME | TIMESTAMP_LTZ | Relation create time in Reltio
MDM_UPDATE_TIME | TIMESTAMP_LTZ | Relation update time in Reltio
SF_CREATE_TIME | TIMESTAMP_LTZ | Relation create time in Snowflake DB
SF_UPDATE_TIME | TIMESTAMP_LTZ | Relation last update time in Snowflake
LAST_EVENT_TIME | TIMESTAMP_LTZ | Last Kafka event timestamp
CHECKSUM | NUMBER | Checksum
" }, { "title": "Data Materialization Process", "pageID": "347657026", "pageLink": "/display/GMDM/Data+Materialization+Process", "content": "

\"\"

" }, { "title": "Dynamic views for IQVIA MDM Model", "pageID": "164470213", "pageLink": "/display/GMDM/Dynamic+views++for+IQVIA+MDM+Model", "content": "


HCP

Health care provider

Column | Type | Description | Reltio Attribute URI | LOV Name
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
FIRST_NAME | VARCHAR | First Name | configuration/entityTypes/HCP/attributes/FirstName |
LAST_NAME | VARCHAR | Last Name | configuration/entityTypes/HCP/attributes/LastName |
MIDDLE_NAME | VARCHAR | Middle Name | configuration/entityTypes/HCP/attributes/MiddleName |
NAME | VARCHAR | Name | configuration/entityTypes/HCP/attributes/Name |
PREFIX | VARCHAR | | configuration/entityTypes/HCP/attributes/Prefix | LKUP_IMS_PREFIX
SUFFIX_NAME | VARCHAR | Generation Suffix | configuration/entityTypes/HCP/attributes/SuffixName | LKUP_IMS_SUFFIX
PREFERRED_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/PreferredName |
NICKNAME | VARCHAR | | configuration/entityTypes/HCP/attributes/Nickname |
COUNTRY_CODE | VARCHAR | Country Code | configuration/entityTypes/HCP/attributes/Country | LKUP_IMS_COUNTRY_CODE
GENDER | VARCHAR | | configuration/entityTypes/HCP/attributes/Gender | LKUP_IMS_GENDER
TYPE_CODE | VARCHAR | Type code | configuration/entityTypes/HCP/attributes/TypeCode | LKUP_IMS_HCP_CUST_TYPE
ACCOUNT_TYPE | VARCHAR | Account Type | configuration/entityTypes/HCP/attributes/AccountType |
SUB_TYPE_CODE | VARCHAR | Sub type code | configuration/entityTypes/HCP/attributes/SubTypeCode | LKUP_IMS_HCP_SUBTYPE
TITLE | VARCHAR | | configuration/entityTypes/HCP/attributes/Title | LKUP_IMS_PROF_TITLE
INITIALS | VARCHAR | Initials | configuration/entityTypes/HCP/attributes/Initials |
D_O_B | DATE | Date of Birth | configuration/entityTypes/HCP/attributes/DoB |
Y_O_B | VARCHAR | Birth Year | configuration/entityTypes/HCP/attributes/YoB |
MAPP_HCP_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/MAPPHcpStatus | LKUP_MAPP_HCPSTATUS
GO_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/GOStatus | LKUP_GOVOFF_GOSTATUS
PIGO_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/PIGOStatus | LKUP_GOVOFF_PIGOSTATUS
NIPPIGO_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/NIPPIGOStatus | LKUP_GOVOFF_NIPPIGOSTATUS
PRIMARY_PIGO_RATIONALE | VARCHAR | | configuration/entityTypes/HCP/attributes/PrimaryPIGORationale | LKUP_GOVOFF_PIGORATIONALE
SECONDARY_PIGO_RATIONALE | VARCHAR | | configuration/entityTypes/HCP/attributes/SecondaryPIGORationale | LKUP_GOVOFF_PIGORATIONALE
PIGOSME_REVIEW | VARCHAR | | configuration/entityTypes/HCP/attributes/PIGOSMEReview | LKUP_GOVOFF_PIGOSMEREVIEW
GSQ_DATE | DATE | GSQ Date | configuration/entityTypes/HCP/attributes/GSQDate |
MAPP_DO_NOT_USE | VARCHAR | | configuration/entityTypes/HCP/attributes/MAPPDoNotUse | LKUP_GOVOFF_DONOTUSE
MAPP_CHANGE_DATE | VARCHAR | | configuration/entityTypes/HCP/attributes/MAPPChangeDate |
MAPP_CHANGE_REASON | VARCHAR | | configuration/entityTypes/HCP/attributes/MAPPChangeReason |
IS_EMPLOYEE | BOOLEAN | | configuration/entityTypes/HCP/attributes/IsEmployee |
VALIDATION_STATUS | VARCHAR | Validation Status of the Customer | configuration/entityTypes/HCP/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS
SOURCE_CHANGE_DATE | DATE | Source Change Date | configuration/entityTypes/HCP/attributes/SourceChangeDate |
SOURCE_CHANGE_REASON | VARCHAR | Source Change Reason | configuration/entityTypes/HCP/attributes/SourceChangeReason |
ORIGIN_SOURCE | VARCHAR | Originating Source | configuration/entityTypes/HCP/attributes/OriginSource |
OK_VR_TRIGGER | VARCHAR | | configuration/entityTypes/HCP/attributes/OK_VR_Trigger | LKUP_IMS_SEND_FOR_VALIDATION
BIRTH_CITY | VARCHAR | Birth City | configuration/entityTypes/HCP/attributes/BirthCity |
BIRTH_STATE | VARCHAR | Birth State | configuration/entityTypes/HCP/attributes/BirthState | STATE_CODE
BIRTH_COUNTRY | VARCHAR | Birth Country | configuration/entityTypes/HCP/attributes/BirthCountry | COUNTRY_CD
D_O_D | DATE | | configuration/entityTypes/HCP/attributes/DoD |
Y_O_D | VARCHAR | | configuration/entityTypes/HCP/attributes/YoD |
TAX_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/TaxID |
SSN_LAST4 | VARCHAR | | configuration/entityTypes/HCP/attributes/SSNLast4 |
ME | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/ME |
NPI | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/NPI |
UPIN | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/UPIN |
KAISER_PROVIDER | BOOLEAN | | configuration/entityTypes/HCP/attributes/KaiserProvider |
MAJOR_PROFESSIONAL_ACTIVITY | VARCHAR | | configuration/entityTypes/HCP/attributes/MajorProfessionalActivity | MPA_CD
PRESENT_EMPLOYMENT | VARCHAR | | configuration/entityTypes/HCP/attributes/PresentEmployment | PE_CD
TYPE_OF_PRACTICE | VARCHAR | | configuration/entityTypes/HCP/attributes/TypeOfPractice | TOP_CD
SOLO | BOOLEAN | | configuration/entityTypes/HCP/attributes/Solo |
GROUP | BOOLEAN | | configuration/entityTypes/HCP/attributes/Group |
ADMINISTRATOR | BOOLEAN | | configuration/entityTypes/HCP/attributes/Administrator |
RESEARCH | BOOLEAN | | configuration/entityTypes/HCP/attributes/Research |
CLINICAL_TRIALS | BOOLEAN | | configuration/entityTypes/HCP/attributes/ClinicalTrials |
WEBSITE_URL | VARCHAR | | configuration/entityTypes/HCP/attributes/WebsiteURL |
IMAGE_LINKS | VARCHAR | | configuration/entityTypes/HCP/attributes/ImageLinks |
DOCUMENT_LINKS | VARCHAR | | configuration/entityTypes/HCP/attributes/DocumentLinks |
VIDEO_LINKS | VARCHAR | | configuration/entityTypes/HCP/attributes/VideoLinks |
DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/Description |
CREDENTIALS | VARCHAR | | configuration/entityTypes/HCP/attributes/Credentials | CRED
FORMER_FIRST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/FormerFirstName |
FORMER_LAST_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/FormerLastName |
FORMER_MIDDLE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/FormerMiddleName |
FORMER_SUFFIX_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/FormerSuffixName |
SSN | VARCHAR | | configuration/entityTypes/HCP/attributes/SSN |
PRESUMED_DEAD | BOOLEAN | | configuration/entityTypes/HCP/attributes/PresumedDead |
DEA_BUSINESS_ACTIVITY | VARCHAR | | configuration/entityTypes/HCP/attributes/DEABusinessActivity |
STATUS_IMS | VARCHAR | | configuration/entityTypes/HCP/attributes/StatusIMS | LKUP_IMS_STATUS
STATUS_UPDATE_DATE | DATE | | configuration/entityTypes/HCP/attributes/StatusUpdateDate |
STATUS_REASON_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/StatusReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
COMMENTERS | VARCHAR | Commenters | configuration/entityTypes/HCP/attributes/Commenters |
SOURCE_CREATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/SourceCreationDate |
SOURCE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/SourceName |
SUB_SOURCE_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/SubSourceName |
EXCLUDE_FROM_MATCH | VARCHAR | | configuration/entityTypes/HCP/attributes/ExcludeFromMatch |
PROVIDER_IDENTIFIER_TYPE | VARCHAR | Provider Identifier Type | configuration/entityTypes/HCP/attributes/ProviderIdentifierType | LKUP_IMS_PROVIDER_IDENTIFIER_TYPE
CATEGORY | VARCHAR | Category Code | configuration/entityTypes/HCP/attributes/Category | LKUP_IMS_HCP_CATEGORY
DEGREE_CODE | VARCHAR | Degree Code | configuration/entityTypes/HCP/attributes/DegreeCode | LKUP_IMS_DEGREE
SALUTATION_NAME | VARCHAR | Salutation Name | configuration/entityTypes/HCP/attributes/SalutationName |
IS_BLACK_LISTED | BOOLEAN | Indicates to blacklist the profile | configuration/entityTypes/HCP/attributes/IsBlackListed |
TRAINING_HOSPITAL | VARCHAR | Training Hospital | configuration/entityTypes/HCP/attributes/TrainingHospital |
ACRONYM_NAME | VARCHAR | Acronym Name | configuration/entityTypes/HCP/attributes/AcronymName |
FIRST_SET_DATE | DATE | Date of 1st Installation | configuration/entityTypes/HCP/attributes/FirstSetDate |
CREATE_DATE | DATE | Individual Creation Date | configuration/entityTypes/HCP/attributes/CreateDate |
UPDATE_DATE | DATE | Date of Last Individual Update | configuration/entityTypes/HCP/attributes/UpdateDate |
CHECK_DATE | DATE | Date of Last Individual Quality Check | configuration/entityTypes/HCP/attributes/CheckDate |
STATE_CODE | VARCHAR | Situation of the healthcare professional (e.g. Active, Inactive, Retired) | configuration/entityTypes/HCP/attributes/StateCode | LKUP_IMS_PROFILE_STATE
STATE_DATE | DATE | Date when the state of the record was last modified | configuration/entityTypes/HCP/attributes/StateDate |
VALIDATION_CHANGE_REASON | VARCHAR | Reason for Validation Status change | configuration/entityTypes/HCP/attributes/ValidationChangeReason | LKUP_IMS_VAL_STATUS_CHANGE_REASON
VALIDATION_CHANGE_DATE | DATE | Date of Validation change | configuration/entityTypes/HCP/attributes/ValidationChangeDate |
APPOINTMENT_REQUIRED | BOOLEAN | Indicates whether sales reps need to make an appointment to see the professional | configuration/entityTypes/HCP/attributes/AppointmentRequired |
NHS_STATUS | VARCHAR | National Health System Status | configuration/entityTypes/HCP/attributes/NHSStatus | LKUP_IMS_SECTOR_OF_CARE
NUM_OF_PATIENTS | VARCHAR | Number of attached patients | configuration/entityTypes/HCP/attributes/NumOfPatients |
PRACTICE_SIZE | VARCHAR | Practice Size | configuration/entityTypes/HCP/attributes/PracticeSize |
PATIENTS_X_DAY | VARCHAR | Patients Per Day | configuration/entityTypes/HCP/attributes/PatientsXDay |
PREFERRED_LANGUAGE | VARCHAR | Preferred Spoken Language | configuration/entityTypes/HCP/attributes/PreferredLanguage |
POLITICAL_AFFILIATION | VARCHAR | Political Affiliation | configuration/entityTypes/HCP/attributes/PoliticalAffiliation | LKUP_IMS_POL_AFFIL
PRESCRIBING_LEVEL | VARCHAR | Prescribing Level | configuration/entityTypes/HCP/attributes/PrescribingLevel | LKUP_IMS_PRES_LEVEL
EXTERNAL_RATING | VARCHAR | External Rating | configuration/entityTypes/HCP/attributes/ExternalRating |
TARGETING_CLASSIFICATION | VARCHAR | Targeting Classification | configuration/entityTypes/HCP/attributes/TargetingClassification |
KOL_TITLE | VARCHAR | Key Opinion Leader Title | configuration/entityTypes/HCP/attributes/KOLTitle |
SAMPLING_STATUS | VARCHAR | Sampling Status of HCP | configuration/entityTypes/HCP/attributes/SamplingStatus | LKUP_IMS_SAMPLING_STATUS
ADMINISTRATIVE_NAME | VARCHAR | Administrative Name | configuration/entityTypes/HCP/attributes/AdministrativeName |
PROFESSIONAL_DESIGNATION | VARCHAR | | configuration/entityTypes/HCP/attributes/ProfessionalDesignation | LKUP_IMS_PROF_DESIGNATION
EXTERNAL_INFORMATION_URL | VARCHAR | | configuration/entityTypes/HCP/attributes/ExternalInformationURL |
MATCH_STATUS_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/MatchStatusCode | LKUP_IMS_MATCH_STATUS_CODE
SUBSCRIPTION_FLAG1 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag1 |
SUBSCRIPTION_FLAG2 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag2 |
SUBSCRIPTION_FLAG3 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag3 |
SUBSCRIPTION_FLAG4 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag4 |
SUBSCRIPTION_FLAG5 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag5 |
SUBSCRIPTION_FLAG6 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag6 |
SUBSCRIPTION_FLAG7 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag7 |
SUBSCRIPTION_FLAG8 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag8 |
SUBSCRIPTION_FLAG9 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag9 |
SUBSCRIPTION_FLAG10 | BOOLEAN | Used for setting a profile eligible for a certain subscription | configuration/entityTypes/HCP/attributes/SubscriptionFlag10 |
MIDDLE_INITIAL | VARCHAR | Middle Initial; this attribute is populated from Middle Name | configuration/entityTypes/HCP/attributes/MiddleInitial |
DELETE_ENTITY | BOOLEAN | Property for GDPR removal | configuration/entityTypes/HCP/attributes/DeleteEntity |
PARTY_ID | VARCHAR | | configuration/entityTypes/HCP/attributes/PartyID |
LAST_VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/LastVerificationStatus |
LAST_VERIFICATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/LastVerificationDate |
EFFECTIVE_DATE | DATE | | configuration/entityTypes/HCP/attributes/EffectiveDate |
END_DATE | DATE | | configuration/entityTypes/HCP/attributes/EndDate |
PARTY_LOCALIZATION_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/PartyLocalizationCode |
MATCH_PARTY_NAME | VARCHAR | | configuration/entityTypes/HCP/attributes/MatchPartyName |


LICENSE

Column | Type | Description | Reltio Attribute URI | LOV Name
LICENSE_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CATEGORY | VARCHAR | | configuration/entityTypes/HCP/attributes/License/attributes/Category | LKUP_IMS_LIC_CATEGORY
NUMBER | VARCHAR | State License Number. A unique license number is listed for each license the physician holds. There is no standard format syntax. Format examples: 18986, 4301079019, BX1464089. There is also no limit to the number of licenses a physician can hold in a state. Example: a physician can have an inactive resident license plus unlimited active licenses. Residents can have as many as four licenses, since some states issue licenses every year | configuration/entityTypes/HCP/attributes/License/attributes/Number |
BOARD_EXTERNAL_ID | VARCHAR | Board External ID | configuration/entityTypes/HCP/attributes/License/attributes/BoardExternalID |
BOARD_CODE | VARCHAR | State License Board Code. For AMA the board code will always be AMA | configuration/entityTypes/HCP/attributes/License/attributes/BoardCode | STLIC_BRD_CD_LOV
STATE | VARCHAR | State License State. Two-character field. USPS standard abbreviations. | configuration/entityTypes/HCP/attributes/License/attributes/State | LKUP_IMS_STATE_CODE
ISO_COUNTRY_CODE | VARCHAR | ISO country code | configuration/entityTypes/HCP/attributes/License/attributes/ISOCountryCode | LKUP_IMS_COUNTRY_CODE
DEGREE | VARCHAR | State License Degree. A physician may hold more than one license in a given state, but not more than one MD or more than one DO license in the same state. | configuration/entityTypes/HCP/attributes/License/attributes/Degree | LKUP_IMS_DEGREE
AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/entityTypes/HCP/attributes/License/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS
LICENSE_NUMBER_KEY | VARCHAR | State License Number Key | configuration/entityTypes/HCP/attributes/License/attributes/LicenseNumberKey |
AUTHORITY_NAME | VARCHAR | Authority Name | configuration/entityTypes/HCP/attributes/License/attributes/AuthorityName |
PROFESSION_CODE | VARCHAR | Profession | configuration/entityTypes/HCP/attributes/License/attributes/ProfessionCode | LKUP_IMS_PROFESSION
TYPE_ID | VARCHAR | Authorization Type id | configuration/entityTypes/HCP/attributes/License/attributes/TypeId |
TYPE | VARCHAR | State License Type. U = Unlimited: no restriction on the physician to practice medicine; L = Limited: implies restrictions of some sort, for example the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license in state or private hospitals or other settings; T = Temporary: issued to a physician temporarily practicing in an underserved area outside his/her state of licensure, also granted between board meetings when new licenses are issued (the time span varies from state to state; temporary licenses typically expire 6-9 months from the date they are issued); R = Resident: granted to a physician in graduate medical education (e.g. residency training). | configuration/entityTypes/HCP/attributes/License/attributes/Type | LKUP_IMS_LICENSE_TYPE
PRIVILEGE_ID | VARCHAR | License Privilege | configuration/entityTypes/HCP/attributes/License/attributes/PrivilegeId |
PRIVILEGE_NAME | VARCHAR | License Privilege Name | configuration/entityTypes/HCP/attributes/License/attributes/PrivilegeName |
PRIVILEGE_RANK | VARCHAR | License Privilege Rank | configuration/entityTypes/HCP/attributes/License/attributes/PrivilegeRank |
STATUS | VARCHAR | State License Status. A = Active: the physician is licensed to practice within the state; I = Inactive: the physician has not re-registered a state license, or the license has been suspended or revoked by the state board; X = Unknown: the state has not provided current information. Note: some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in another state. | configuration/entityTypes/HCP/attributes/License/attributes/Status | LKUP_IMS_IDENTIFIER_STATUS
DEACTIVATION_REASON_CODE | VARCHAR | Deactivation Reason Code | configuration/entityTypes/HCP/attributes/License/attributes/DeactivationReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
EXPIRATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/License/attributes/ExpirationDate |
ISSUE_DATE | DATE | State License Issue Date | configuration/entityTypes/HCP/attributes/License/attributes/IssueDate |
BRD_DATE | DATE | State License as-of date or pull date. The as-of date (or stamp date) is the date the current license file is provided to the Database Licensees. | configuration/entityTypes/HCP/attributes/License/attributes/BrdDate |
SAMPLE_ELIGIBILITY | VARCHAR | | configuration/entityTypes/HCP/attributes/License/attributes/SampleEligibility |
SOURCE_CD | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/License/attributes/SourceCD |
RANK | VARCHAR | License Rank | configuration/entityTypes/HCP/attributes/License/attributes/Rank |
CERTIFICATION | VARCHAR | Certification | configuration/entityTypes/HCP/attributes/License/attributes/Certification |
REQ_SAMPL_NON_CTRL | VARCHAR | Request Samples Non-Controlled | configuration/entityTypes/HCP/attributes/License/attributes/ReqSamplNonCtrl |
REQ_SAMPL_CTRL | VARCHAR | Request Samples Controlled | configuration/entityTypes/HCP/attributes/License/attributes/ReqSamplCtrl |
RECV_SAMPL_NON_CTRL | VARCHAR | Receives Samples Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/RecvSamplNonCtrl |
RECV_SAMPL_CTRL | VARCHAR | Receives Samples Controlled | configuration/entityTypes/HCP/attributes/License/attributes/RecvSamplCtrl |
DISTR_SAMPL_NON_CTRL | VARCHAR | Distribute Samples Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/DistrSamplNonCtrl |
DISTR_SAMPL_CTRL | VARCHAR | Distribute Samples Controlled | configuration/entityTypes/HCP/attributes/License/attributes/DistrSamplCtrl |
SAMP_DRUG_SCHED_I_FLAG | VARCHAR | Sample Drug Schedule I flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIFlag |
SAMP_DRUG_SCHED_II_FLAG | VARCHAR | Sample Drug Schedule II flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIIFlag |
SAMP_DRUG_SCHED_III_FLAG | VARCHAR | Sample Drug Schedule III flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIIIFlag |
SAMP_DRUG_SCHED_IV_FLAG | VARCHAR | Sample Drug Schedule IV flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIVFlag |
SAMP_DRUG_SCHED_V_FLAG | VARCHAR | Sample Drug Schedule V flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedVFlag |
SAMP_DRUG_SCHED_VI_FLAG | VARCHAR | Sample Drug Schedule VI flag | configuration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedVIFlag |
PRESCR_NON_CTRL_FLAG | VARCHAR | Prescribe Non-Controlled flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrNonCtrlFlag |
PRESCR_APP_REQ_NON_CTRL_FLAG | VARCHAR | Prescribe Application Request for Non-Controlled Substances flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrAppReqNonCtrlFlag |
PRESCR_CTRL_FLAG | VARCHAR | Prescribe Controlled flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrCtrlFlag |
PRESCR_APP_REQ_CTRL_FLAG | VARCHAR | Prescribe Application Request for Controlled Substances flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrAppReqCtrlFlag |
PRESCR_DRUG_SCHED_I_FLAG | VARCHAR | Prescribe Schedule I flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIFlag |
PRESCR_DRUG_SCHED_II_FLAG | VARCHAR | Prescribe Schedule II flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIIFlag |
PRESCR_DRUG_SCHED_III_FLAG | VARCHAR | Prescribe Schedule III flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIIIFlag |
PRESCR_DRUG_SCHED_IV_FLAG | VARCHAR | Prescribe Schedule IV flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIVFlag |
PRESCR_DRUG_SCHED_V_FLAG | VARCHAR | Prescribe Schedule V flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedVFlag |
PRESCR_DRUG_SCHED_VI_FLAG | VARCHAR | Prescribe Schedule VI flag | configuration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedVIFlag |
SUPERVISORY_REL_CD_NON_CTRL | VARCHAR | Supervisory Relationship for Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/SupervisoryRelCdNonCtrl |
SUPERVISORY_REL_CD_CTRL | VARCHAR | Supervisory Relationship for Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/SupervisoryRelCdCtrl |
COLLABORATIVE_NONCTRL | VARCHAR | Collaboration for Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/CollaborativeNonctrl |
COLLABORATIVE_CTRL | VARCHAR | Collaboration for Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/CollaborativeCtrl |
INCLUSIONARY | VARCHAR | Inclusionary | configuration/entityTypes/HCP/attributes/License/attributes/Inclusionary |
EXCLUSIONARY | VARCHAR | Exclusionary | configuration/entityTypes/HCP/attributes/License/attributes/Exclusionary |
DELEGATION_NON_CTRL | VARCHAR | Delegation for Non-Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/DelegationNonCtrl |
DELEGATION_CTRL | VARCHAR | Delegation for Controlled Substances | configuration/entityTypes/HCP/attributes/License/attributes/DelegationCtrl |
DISCIPLINARY_ACTION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/License/attributes/DisciplinaryActionStatus |


ADDRESS

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
PRIMARY_AFFILIATION | VARCHAR | | configuration/relationTypes/HasAddress/attributes/PrimaryAffiliation | LKUP_IMS_YES_NO
SOURCE_ADDRESS_ID | VARCHAR | | configuration/relationTypes/HasAddress/attributes/SourceAddressID |
ADDRESS_TYPE | VARCHAR | | configuration/relationTypes/HasAddress/attributes/AddressType | LKUP_IMS_ADDR_TYPE
CARE_OF | VARCHAR | | configuration/relationTypes/HasAddress/attributes/CareOf |
PRIMARY | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/Primary |
ADDRESS_RANK | VARCHAR | | configuration/relationTypes/HasAddress/attributes/AddressRank |
SOURCE_NAME | VARCHAR | | configuration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceName |
SOURCE_LOCATION_ID | VARCHAR | | configuration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceLocationId |
ADDRESS_LINE1 | VARCHAR | | configuration/entityTypes/Location/attributes/AddressLine1 |
ADDRESS_LINE2 | VARCHAR | | configuration/entityTypes/Location/attributes/AddressLine2 |
ADDRESS_LINE3 | VARCHAR | Address Line 3 | configuration/entityTypes/Location/attributes/AddressLine3 |
ADDRESS_LINE4 | VARCHAR | Address Line 4 | configuration/entityTypes/Location/attributes/AddressLine4 |
PREMISE | VARCHAR | | configuration/entityTypes/Location/attributes/Premise |
STREET | VARCHAR | | configuration/entityTypes/Location/attributes/Street |
FLOOR | VARCHAR | N/A | configuration/entityTypes/Location/attributes/Floor |
BUILDING | VARCHAR | N/A | configuration/entityTypes/Location/attributes/Building |
CITY | VARCHAR | | configuration/entityTypes/Location/attributes/City |
STATE_PROVINCE | VARCHAR | | configuration/entityTypes/Location/attributes/StateProvince |
STATE_PROVINCE_CODE | VARCHAR | | configuration/entityTypes/Location/attributes/StateProvinceCode | LKUP_IMS_STATE_CODE
POSTAL_CODE | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/PostalCode |
ZIP5 | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/Zip5 |
ZIP4 | VARCHAR | | configuration/entityTypes/Location/attributes/Zip/attributes/Zip4 |
COUNTRY | VARCHAR | | configuration/entityTypes/Location/attributes/Country | LKUP_IMS_COUNTRY_CODE
CBSA_CODE | VARCHAR | Core Based Statistical Area | configuration/entityTypes/Location/attributes/CBSACode | CBSA_CD
FIPS_COUNTY_CODE | VARCHAR | FIPS County Code | configuration/entityTypes/Location/attributes/FIPSCountyCode |
FIPS_STATE_CODE | VARCHAR | FIPS State Code | configuration/entityTypes/Location/attributes/FIPSStateCode |
DPV | VARCHAR | USPS delivery point validation. R = Range Check; C = Clerk; F = Formally Valid; V = DPV Valid | configuration/entityTypes/Location/attributes/DPV |
MSA | VARCHAR | Metropolitan Statistical Area for a business | configuration/entityTypes/Location/attributes/MSA |
LATITUDE | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/Latitude |
LONGITUDE | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/Longitude |
GEO_ACCURACY | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracy |
GEO_CODING_SYSTEM | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystem |
ADDRESS_INPUT | VARCHAR | | configuration/entityTypes/Location/attributes/AddressInput |
SUB_ADMINISTRATIVE_AREA | VARCHAR | Holds the smallest geographic data element within a country. For instance, USA County. | configuration/entityTypes/Location/attributes/SubAdministrativeArea |
POSTAL_CITY | VARCHAR | | configuration/entityTypes/Location/attributes/PostalCity |
LOCALITY | VARCHAR | Holds the most common population center data element within a country. For instance, USA City, Canadian Municipality. | configuration/entityTypes/Location/attributes/Locality |
VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/Location/attributes/VerificationStatus |
STATUS_CHANGE_DATE | DATE | Status Change Date | configuration/entityTypes/Location/attributes/StatusChangeDate |
ADDRESS_STATUS | VARCHAR | Status of the Address | configuration/entityTypes/Location/attributes/AddressStatus |
ACTIVE_ADDRESS | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/Active |
LOC_CONF_IND | VARCHAR | | configuration/relationTypes/HasAddress/attributes/LocConfInd | LKUP_IMS_LOCATION_CONFIDENCE
BEST_RECORD | VARCHAR | | configuration/relationTypes/HasAddress/attributes/BestRecord |
RELATION_STATUS_CHANGE_DATE | DATE | | configuration/relationTypes/HasAddress/attributes/RelationStatusChangeDate |
VALIDATION_STATUS | VARCHAR | Validation status of the Address. When Addresses are merged, the loser Address is set to INVL. | configuration/relationTypes/HasAddress/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS
STATUS | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Status | LKUP_IMS_ADDR_STATUS
HCO_NAME | VARCHAR | | configuration/relationTypes/HasAddress/attributes/HcoName |
MAIN_HCO_NAME | VARCHAR | | configuration/relationTypes/HasAddress/attributes/MainHcoName |
BUILD_LABEL | VARCHAR | | configuration/relationTypes/HasAddress/attributes/BuildLabel |
PO_BOX | VARCHAR | | configuration/relationTypes/HasAddress/attributes/POBox |
VALIDATION_REASON | VARCHAR | | configuration/relationTypes/HasAddress/attributes/ValidationReason | LKUP_IMS_VAL_STATUS_CHANGE_REASON
VALIDATION_CHANGE_DATE | DATE | | configuration/relationTypes/HasAddress/attributes/ValidationChangeDate |
STATUS_REASON_CODE | VARCHAR | | configuration/relationTypes/HasAddress/attributes/StatusReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
PRIMARY_MAIL | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/PrimaryMail |
VISIT_ACTIVITY | VARCHAR | | configuration/relationTypes/HasAddress/attributes/VisitActivity |
DERIVED_ADDRESS | VARCHAR | | configuration/relationTypes/HasAddress/attributes/derivedAddress |
NEIGHBORHOOD | VARCHAR | | configuration/entityTypes/Location/attributes/Neighborhood |
AVC | VARCHAR | | configuration/entityTypes/Location/attributes/AVC |
COUNTRY_CODE | VARCHAR | | configuration/entityTypes/Location/attributes/Country | LKUP_IMS_COUNTRY_CODE
GEO_LOCATION.LATITUDE | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/Latitude |
GEO_LOCATION.LONGITUDE | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/Longitude |
GEO_LOCATION.GEO_ACCURACY | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracy |
GEO_LOCATION.GEO_CODING_SYSTEM | VARCHAR | | configuration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystem |


ADDRESS_PHONE

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | Generated key | |
PHONE_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TYPE_IMS | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/TypeIMS | LKUP_IMS_COMMUNICATION_TYPE
NUMBER | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/Number |
EXTENSION | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/Extension |
RANK | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/Rank |
ACTIVE_ADDRESS_PHONE | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/Active |
BEST_PHONE_INDICATOR | VARCHAR | | configuration/relationTypes/HasAddress/attributes/Phone/attributes/BestPhoneIndicator |


ADDRESS_DEA

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | Generated key | |
DEA_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NUMBER | VARCHAR | | configuration/relationTypes/HasAddress/attributes/DEA/attributes/Number |
EXPIRATION_DATE | DATE | | configuration/relationTypes/HasAddress/attributes/DEA/attributes/ExpirationDate |
STATUS | VARCHAR | | configuration/relationTypes/HasAddress/attributes/DEA/attributes/Status | LKUP_IMS_IDENTIFIER_STATUS
DRUG_SCHEDULE | VARCHAR | | configuration/relationTypes/HasAddress/attributes/DEA/attributes/DrugSchedule |
BUSINESS_ACTIVITY_CODE | VARCHAR | Business Activity Code | configuration/relationTypes/HasAddress/attributes/DEA/attributes/BusinessActivityCode |
SUB_BUSINESS_ACTIVITY_CODE | VARCHAR | Sub Business Activity Code | configuration/relationTypes/HasAddress/attributes/DEA/attributes/SubBusinessActivityCode |
DEA_CHANGE_REASON_CODE | VARCHAR | DEA Change Reason Code | configuration/relationTypes/HasAddress/attributes/DEA/attributes/DEAChangeReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/relationTypes/HasAddress/attributes/DEA/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS

ADDRESS_OFFICE_INFORMATION

Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | Generated key | |
OFFICE_INFORMATION_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
BEST_TIMES | VARCHAR | | configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/BestTimes |
APPT_REQUIRED | BOOLEAN | | configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/ApptRequired |
OFFICE_NOTES | VARCHAR | | configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/OfficeNotes |


SPECIALITIES

Column | Type | Description | Reltio Attribute URI | LOV Name
SPECIALITIES_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SPECIALTY_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/SpecialtyType, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyType | LKUP_IMS_SPECIALTY_TYPE
SPECIALTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty | LKUP_IMS_SPECIALTY
RANK | VARCHAR | Specialty Rank | configuration/entityTypes/HCP/attributes/Specialities/attributes/Rank, configuration/entityTypes/HCO/attributes/Specialities/attributes/Rank |
DESC | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Specialities/attributes/Desc |
GROUP | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/Group, configuration/entityTypes/HCO/attributes/Specialities/attributes/Group |
SOURCE_CD | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Specialities/attributes/SourceCD |
SPECIALTY_DETAIL | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/SpecialtyDetail, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyDetail |
PROFESSION_CODE | VARCHAR | Profession | configuration/entityTypes/HCP/attributes/Specialities/attributes/ProfessionCode | LKUP_IMS_PROFESSION
PRIMARY_SPECIALTY_FLAG | BOOLEAN | | configuration/entityTypes/HCP/attributes/Specialities/attributes/PrimarySpecialtyFlag, configuration/entityTypes/HCO/attributes/Specialities/attributes/PrimarySpecialtyFlag |
SORT_ORDER | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/SortOrder, configuration/entityTypes/HCO/attributes/Specialities/attributes/SortOrder |
BEST_RECORD | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/BestRecord, configuration/entityTypes/HCO/attributes/Specialities/attributes/BestRecord |
SUB_SPECIALTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/SubSpecialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/SubSpecialty | LKUP_IMS_SPECIALTY
SUB_SPECIALTY_RANK | VARCHAR | SubSpecialty Rank | configuration/entityTypes/HCP/attributes/Specialities/attributes/SubSpecialtyRank, configuration/entityTypes/HCO/attributes/Specialities/attributes/SubSpecialtyRank |
TRUSTED_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/TrustedIndicator, configuration/entityTypes/HCO/attributes/Specialities/attributes/TrustedIndicator | LKUP_IMS_YES_NO
RAW_SPECIALTY | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/RawSpecialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/RawSpecialty |
RAW_SPECIALTY_DESCRIPTION | VARCHAR | | configuration/entityTypes/HCP/attributes/Specialities/attributes/RawSpecialtyDescription, configuration/entityTypes/HCO/attributes/Specialities/attributes/RawSpecialtyDescription |


IDENTIFIERS

Column | Type | Description | Reltio Attribute URI | LOV Name
IDENTIFIERS_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Type | LKUP_IMS_HCP_IDENTIFIER_TYPE, LKUP_IMS_HCO_IDENTIFIER_TYPE
ID | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ID |
ORDER | VARCHAR | Displays the order of priority for an MPN for those facilities that share an MPN. Valid values are: P - the MPN on a business record is the primary identifier for the business, and O - the MPN is a secondary identifier. (Using P for the MPN supports aggregating clinical volumes and avoids double counting.) | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Order, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Order |
CATEGORY | VARCHAR | Additional information about the identifier. For a DDD identifier, the DDD subcategory code (e.g. H4, D1, A2). For a DEA identifier, contains the DEA activity code (e.g. M for Mid Level Practitioner) | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Category, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Category | LKUP_IMS_IDENTIFIERS_CATEGORY
STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Status, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Status | LKUP_IMS_IDENTIFIER_STATUS
AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/entityTypes/HCP/attributes/Identifiers/attributes/AuthorizationStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS
DEACTIVATION_REASON_CODE | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationReasonCode, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
DEACTIVATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationDate |
REACTIVATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ReactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ReactivationDate |
NATIONAL_ID_ATTRIBUTE | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/NationalIdAttribute, configuration/entityTypes/HCO/attributes/Identifiers/attributes/NationalIdAttribute |
AMAMDDO_FLAG | VARCHAR | AMA MD-DO Flag | configuration/entityTypes/HCP/attributes/Identifiers/attributes/AMAMDDOFlag |
MAJOR_PROF_ACT | VARCHAR | Major Professional Activity Code | configuration/entityTypes/HCP/attributes/Identifiers/attributes/MajorProfAct |
HOSPITAL_HOURS | VARCHAR | Hospital Hours | configuration/entityTypes/HCP/attributes/Identifiers/attributes/HospitalHours |
AMA_HOSPITAL_ID | VARCHAR | AMA Hospital ID | configuration/entityTypes/HCP/attributes/Identifiers/attributes/AMAHospitalID |
PRACTICE_TYPE_CODE | VARCHAR | Practice Type Code | configuration/entityTypes/HCP/attributes/Identifiers/attributes/PracticeTypeCode |
EMPLOYMENT_TYPE_CODE | VARCHAR | Employment Type Code | configuration/entityTypes/HCP/attributes/Identifiers/attributes/EmploymentTypeCode |
BIRTH_CITY | VARCHAR | Birth City | configuration/entityTypes/HCP/attributes/Identifiers/attributes/BirthCity |
BIRTH_STATE | VARCHAR | Birth State | configuration/entityTypes/HCP/attributes/Identifiers/attributes/BirthState |
BIRTH_COUNTRY | VARCHAR | Birth Country | configuration/entityTypes/HCP/attributes/Identifiers/attributes/BirthCountry |
MEDICAL_SCHOOL | VARCHAR | Medical School | configuration/entityTypes/HCP/attributes/Identifiers/attributes/MedicalSchool |
GRADUATION_YEAR | VARCHAR | Graduation Year | configuration/entityTypes/HCP/attributes/Identifiers/attributes/GraduationYear |
NUM_OF_PYSICIANS | VARCHAR | Number of physicians | configuration/entityTypes/HCP/attributes/Identifiers/attributes/NumOfPysicians |
STATE | VARCHAR | License State | configuration/entityTypes/HCP/attributes/Identifiers/attributes/State, configuration/entityTypes/HCO/attributes/Identifiers/attributes/State | LKUP_IMS_STATE_CODE
TRUSTED_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/TrustedIndicator, configuration/entityTypes/HCO/attributes/Identifiers/attributes/TrustedIndicator | LKUP_IMS_YES_NO
HARD_LINK_INDICATOR | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/HardLinkIndicator, configuration/entityTypes/HCO/attributes/Identifiers/attributes/HardLinkIndicator | LKUP_IMS_YES_NO
LAST_VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/LastVerificationStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/LastVerificationStatus |
LAST_VERIFICATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/LastVerificationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/LastVerificationDate |
ACTIVATION_DATE | DATE | | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ActivationDate |


SPEAKER

Column | Type | Description | Reltio Attribute URI | LOV Name
SPEAKER_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
IS_SPEAKER | BOOLEAN | | configuration/entityTypes/HCP/attributes/Speaker/attributes/IsSpeaker |
IS_COMPANY_APPROVED_SPEAKER | BOOLEAN | Attribute to track whether an HCP is a COMPANY approved speaker | configuration/entityTypes/HCP/attributes/Speaker/attributes/IsCOMPANYApprovedSpeaker |
LAST_BRIEFING_DATE | DATE | Tracks the last date the HCP received the briefing/training to be certified as an approved COMPANY speaker | configuration/entityTypes/HCP/attributes/Speaker/attributes/LastBriefingDate |
SPEAKER_STATUS | VARCHAR | | configuration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerStatus | LKUP_SPEAKERSTATUS
SPEAKER_TYPE | VARCHAR | | configuration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerType | LKUP_SPEAKERTYPE
SPEAKER_LEVEL | VARCHAR | | configuration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerLevel | LKUP_SPEAKERLEVEL

HCP_WORKPLACE_MAIN_HCO

Column | Type | Description | Reltio Attribute URI | LOV Name
WORKPLACE_URI | VARCHAR | Generated key | |
MAINHCO_URI | VARCHAR | Generated key | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NAME | VARCHAR | Name | configuration/entityTypes/HCO/attributes/Name |
OTHER_NAMES | VARCHAR | Other Names | configuration/entityTypes/HCO/attributes/OtherNames |
TYPE_CODE | VARCHAR | Customer Type | configuration/entityTypes/HCO/attributes/TypeCode | LKUP_IMS_HCO_CUST_TYPE
SOURCE_ID | VARCHAR | Source ID | configuration/entityTypes/HCO/attributes/SourceID |
VALIDATION_STATUS | VARCHAR | | configuration/relationTypes/RLE.MAI/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS
VALIDATION_CHANGE_DATE | DATE | | configuration/relationTypes/RLE.MAI/attributes/ValidationChangeDate |
AFFILIATION_STATUS | VARCHAR | | configuration/relationTypes/RLE.MAI/attributes/AffiliationStatus | LKUP_IMS_STATUS
COUNTRY | VARCHAR | Country Code | configuration/relationTypes/RLE.MAI/attributes/Country | LKUP_IMS_COUNTRY_CODE

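A common question against this table is "what is the main HCO for a given HCP workplace?". A minimal sketch, with the same assumed schema name as above and assuming ENTITY_URI carries the owning HCP profile's URI:

```sql
-- Resolve each active HCP workplace to its main HCO.
SELECT w.entity_uri        AS hcp_uri,
       w.mainhco_uri,
       w.name              AS main_hco_name,
       w.affiliation_status
FROM mdm_dm.hcp_workplace_main_hco w
WHERE w.active = 'Y'       -- assumed flag convention
  AND w.country = 'US';    -- illustrative filter
```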
HCP_WORKPLACE_MAIN_HCO_CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| WORKPLACE_URI | VARCHAR | generated key description |  |  |
| MAINHCO_URI | VARCHAR | generated key description |  |  |
| CLASSOFTRADEN_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority |  |
| CLASSIFICATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION |
| FACILITY_TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE |
| SPECIALTY | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY |

HCP_MAIN_WORKPLACE_CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| MAINWORKPLACE_URI | VARCHAR | generated key description |  |  |
| CLASSOFTRADEN_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority |  |
| CLASSIFICATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION |
| FACILITY_TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE |
| SPECIALTY | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY |

PHONE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| PHONE_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE_IMS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/TypeIMS, configuration/entityTypes/HCO/attributes/Phone/attributes/TypeIMS | LKUP_IMS_COMMUNICATION_TYPE |
| NUMBER | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/Number, configuration/entityTypes/HCO/attributes/Phone/attributes/Number |  |
| EXTENSION | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/Extension, configuration/entityTypes/HCO/attributes/Phone/attributes/Extension |  |
| RANK | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/Rank, configuration/entityTypes/HCO/attributes/Phone/attributes/Rank |  |
| COUNTRY_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/CountryCode, configuration/entityTypes/HCO/attributes/Phone/attributes/CountryCode | LKUP_IMS_COUNTRY_CODE |
| AREA_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/AreaCode, configuration/entityTypes/HCO/attributes/Phone/attributes/AreaCode |  |
| LOCAL_NUMBER | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/LocalNumber |  |
| FORMATTED_NUMBER | VARCHAR | Formatted number of the phone | configuration/entityTypes/HCP/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/FormattedNumber |  |
| VALIDATION_STATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationStatus |  |
| VALIDATION_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Phone/attributes/ValidationDate, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationDate |  |
| LINE_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/LineType, configuration/entityTypes/HCO/attributes/Phone/attributes/LineType |  |
| FORMAT_MASK | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/FormatMask, configuration/entityTypes/HCO/attributes/Phone/attributes/FormatMask |  |
| DIGIT_COUNT | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/DigitCount, configuration/entityTypes/HCO/attributes/Phone/attributes/DigitCount |  |
| GEO_AREA | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/GeoArea, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoArea |  |
| GEO_COUNTRY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoCountry |  |
| DQ_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/DQCode, configuration/entityTypes/HCO/attributes/Phone/attributes/DQCode |  |
| ACTIVE_PHONE | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Phone/attributes/Active |  |
| BEST_PHONE_INDICATOR | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/BestPhoneIndicator, configuration/entityTypes/HCO/attributes/Phone/attributes/BestPhoneIndicator |  |

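To pick one phone per profile, consumers typically prefer the best-phone flag and fall back to rank. A minimal sketch under the same assumed names; the 'Y' literal for BEST_PHONE_INDICATOR is an assumption, and note RANK is stored as VARCHAR, hence the tolerant cast:

```sql
-- One phone per entity: prefer the best-phone flag, then lowest rank.
SELECT entity_uri, formatted_number, type_ims
FROM mdm_dm.phone
WHERE active = 'Y'                       -- assumed flag convention
QUALIFY ROW_NUMBER() OVER (
    PARTITION BY entity_uri
    ORDER BY IFF(best_phone_indicator = 'Y', 0, 1),
             TRY_TO_NUMBER(rank) NULLS LAST
) = 1;
```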
PHONE_SOURCE_DATA

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| PHONE_URI | VARCHAR | generated key description |  |  |
| SOURCE_DATA_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| DATASET_IDENTIFIER | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetIdentifier, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetIdentifier |  |
| DATASET_PARTY_IDENTIFIER | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetPartyIdentifier, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetPartyIdentifier |  |
| DATASET_PHONE_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/DatasetPhoneType, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/DatasetPhoneType | LKUP_IMS_COMMUNICATION_TYPE |
| RAW_DATASET_PHONE_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/RawDatasetPhoneType, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/RawDatasetPhoneType |  |
| BEST_PHONE_INDICATOR | VARCHAR |  | configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData/attributes/BestPhoneIndicator, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData/attributes/BestPhoneIndicator |  |


EMAIL

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| EMAIL_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE_IMS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/TypeIMS, configuration/entityTypes/HCO/attributes/Email/attributes/TypeIMS | LKUP_IMS_EMAIL_TYPE |
| EMAIL | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/Email, configuration/entityTypes/HCO/attributes/Email/attributes/Email |  |
| DOMAIN | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/Domain, configuration/entityTypes/HCO/attributes/Email/attributes/Domain |  |
| DOMAIN_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/DomainType, configuration/entityTypes/HCO/attributes/Email/attributes/DomainType |  |
| USERNAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/Username, configuration/entityTypes/HCO/attributes/Email/attributes/Username |  |
| RANK | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/Rank, configuration/entityTypes/HCO/attributes/Email/attributes/Rank |  |
| VALIDATION_STATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationStatus |  |
| VALIDATION_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Email/attributes/ValidationDate, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationDate |  |
| ACTIVE_EMAIL_HCP | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/Active |  |
| DQ_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Email/attributes/DQCode, configuration/entityTypes/HCO/attributes/Email/attributes/DQCode |  |
| SOURCE_CD | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Email/attributes/SourceCD |  |
| ACTIVE_EMAIL_HCO | BOOLEAN |  | configuration/entityTypes/HCO/attributes/Email/attributes/Active |  |


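Email rows follow the same pattern as phones; note the split between ACTIVE_EMAIL_HCP (VARCHAR) and ACTIVE_EMAIL_HCO (BOOLEAN). A minimal sketch for pulling the top-ranked email per HCP profile; the schema name and the 'Valid' status literal are assumptions:

```sql
-- Top-ranked email per HCP profile, restricted to validated addresses.
SELECT entity_uri, email, domain
FROM mdm_dm.email
WHERE entity_type = 'HCP'
  AND validation_status = 'Valid'        -- assumed LOV literal
QUALIFY ROW_NUMBER() OVER (
    PARTITION BY entity_uri
    ORDER BY TRY_TO_NUMBER(rank) NULLS LAST
) = 1;
```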
DISCLOSURE

Disclosure - Reporting derived attributes

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| DISCLOSURE_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| DGS_CATEGORY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory, configuration/entityTypes/HCO/attributes/Disclosure/attributes/DGSCategory | LKUP_BENEFITCATEGORY_HCP, LKUP_BENEFITCATEGORY_HCO |
| DGS_TITLE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle | LKUP_BENEFITTITLE |
| DGS_QUALITY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality | LKUP_BENEFITQUALITY |
| DGS_SPECIALTY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty | LKUP_BENEFITSPECIALTY |
| CONTRACT_CLASSIFICATION | VARCHAR |  | configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassification | LKUP_CONTRACTCLASSIFICATION |
| CONTRACT_CLASSIFICATION_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationDate |  |
| MILITARY | BOOLEAN |  | configuration/entityTypes/HCP/attributes/Disclosure/attributes/Military |  |
| LEGALSTATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Disclosure/attributes/LEGALSTATUS | LKUP_LEGALSTATUS |

THIRD_PARTY_VERIFY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| THIRD_PARTY_VERIFY_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SEND_FOR_VERIFY | VARCHAR |  | configuration/entityTypes/HCP/attributes/ThirdPartyVerify/attributes/SendForVerify, configuration/entityTypes/HCO/attributes/ThirdPartyVerify/attributes/SendForVerify | LKUP_IMS_SEND_FOR_VALIDATION |
| VERIFY_DATE | VARCHAR |  | configuration/entityTypes/HCP/attributes/ThirdPartyVerify/attributes/VerifyDate, configuration/entityTypes/HCO/attributes/ThirdPartyVerify/attributes/VerifyDate |  |


PRIVACY_PREFERENCES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| PRIVACY_PREFERENCES_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| OPT_OUT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOut |  |
| OPT_OUT_START_DATE | DATE |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutStartDate |  |
| ALLOWED_TO_CONTACT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AllowedToContact |  |
| PHONE_OPT_OUT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PhoneOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/PhoneOptOut |  |
| EMAIL_OPT_OUT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/EmailOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/EmailOptOut |  |
| FAX_OPT_OUT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FaxOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/FaxOptOut |  |
| VISIT_OPT_OUT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/VisitOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/VisitOptOut |  |
| AMA_NO_CONTACT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AMANoContact |  |
| PDRP | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRP |  |
| PDRP_DATE | DATE |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPDate |  |
| TEXT_MESSAGE_OPT_OUT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/TextMessageOptOut |  |
| MAIL_OPT_OUT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/MailOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/MailOptOut |  |
| OPT_OUT_CHANGE_DATE | DATE | The date the opt out indicator was changed | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutChangeDate |  |
| REMOTE_OPT_OUT | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/RemoteOptOut, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/RemoteOptOut |  |
| OPT_OUT_ONE_KEY | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutOneKey, configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/OptOutOneKey |  |
| OPT_OUT_SAFE_HARBOR | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutSafeHarbor |  |
| KEY_OPINION_LEADER | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/KeyOpinionLeader |  |
| RESIDENT_INDICATOR | BOOLEAN |  | configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/ResidentIndicator |  |
| ALLOW_SAFE_HARBOR | BOOLEAN |  | configuration/entityTypes/HCO/attributes/PrivacyPreferences/attributes/AllowSafeHarbor |  |


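Downstream channels usually combine several of these flags before contacting an HCP. A minimal sketch of a channel-aware suppression filter; the schema name is assumed as above, and which flags apply to which channel is a business decision shown here only as an illustration:

```sql
-- HCPs eligible for an email campaign: not globally opted out,
-- not email-opted-out, and flagged as contactable.
SELECT p.entity_uri
FROM mdm_dm.privacy_preferences p
WHERE p.active = 'Y'                            -- assumed flag convention
  AND COALESCE(p.opt_out, FALSE)        = FALSE
  AND COALESCE(p.email_opt_out, FALSE)  = FALSE
  AND COALESCE(p.allowed_to_contact, TRUE) = TRUE;
```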
SANCTION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SANCTION_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SANCTION_ID | VARCHAR | Court sanction Id for any case | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionId |  |
| ACTION_CODE | VARCHAR | Court sanction code for a case | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionCode |  |
| ACTION_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionDescription |  |
| BOARD_CODE | VARCHAR | Court case board id | configuration/entityTypes/HCP/attributes/Sanction/attributes/BoardCode |  |
| BOARD_DESC | VARCHAR | Court case board description | configuration/entityTypes/HCP/attributes/Sanction/attributes/BoardDesc |  |
| ACTION_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionDate |  |
| SANCTION_PERIOD_START_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodStartDate |  |
| SANCTION_PERIOD_END_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodEndDate |  |
| MONTH_DURATION | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/MonthDuration |  |
| FINE_AMOUNT | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/FineAmount |  |
| OFFENSE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseCode |  |
| OFFENSE_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDescription |  |
| OFFENSE_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDate |  |


HCP_SANCTIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SANCTIONS_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| IDENTIFIER_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/IdentifierType | LKUP_IMS_HCP_IDENTIFIER_TYPE |
| IDENTIFIER_ID | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/IdentifierID |  |
| TYPE_CODE | VARCHAR | Type of sanction/restriction for a given provider | configuration/entityTypes/HCP/attributes/Sanctions/attributes/TypeCode | LKUP_IMS_SNCTN_RSTR_ACTN |
| DEACTIVATION_REASON_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/DeactivationReasonCode | LKUP_IMS_SNCTN_RSTR_DACT_RSN |
| DISPOSITION_CATEGORY_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/DispositionCategoryCode | LKUP_IMS_SNCTN_RSTR_DSP_CATG |
| EXCLUSION_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/ExclusionCode | LKUP_IMS_SNCTN_RSTR_EXCL |
| DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/Description |  |
| URL | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/URL |  |
| ISSUED_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/IssuedDate |  |
| EFFECTIVE_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/EffectiveDate |  |
| REINSTATEMENT_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/ReinstatementDate |  |
| IS_STATE_WAIVER | BOOLEAN |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/IsStateWaiver |  |
| STATUS_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/StatusCode | LKUP_IMS_IDENTIFIER_STATUS |
| SOURCE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/SourceCode | LKUP_IMS_SNCTN_RSTR_SRC |
| PUBLICATION_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/PublicationDate |  |
| GOVERNMENT_LEVEL_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Sanctions/attributes/GovernmentLevelCode | LKUP_IMS_GOVT_LVL |

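Because sanction rows carry effective and reinstatement dates, point-in-time filtering is the usual access pattern. A minimal sketch under the same assumed schema name:

```sql
-- Sanctions in force today: effective, and not yet reinstated.
SELECT entity_uri, type_code, exclusion_code, effective_date
FROM mdm_dm.hcp_sanctions
WHERE active = 'Y'                      -- assumed flag convention
  AND effective_date <= CURRENT_DATE
  AND (reinstatement_date IS NULL OR reinstatement_date > CURRENT_DATE);
```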
HCP_GSA_SANCTION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| GSA_SANCTION_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SANCTION_ID | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/SanctionId |  |
| FIRST_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/FirstName |  |
| MIDDLE_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/MiddleName |  |
| LAST_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/LastName |  |
| SUFFIX_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/SuffixName |  |
| CITY | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/City |  |
| STATE | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/State |  |
| ZIP | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Zip |  |
| ACTION_DATE | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/ActionDate |  |
| TERM_DATE | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/TermDate |  |
| AGENCY | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Agency |  |
| CONFIDENCE | VARCHAR |  | configuration/entityTypes/HCP/attributes/GSASanction/attributes/Confidence |  |


DEGREES

DO NOT USE THIS ATTRIBUTE - will be deprecated

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| DEGREES_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| DEGREE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Degrees/attributes/Degree | DEGREE |
| BEST_DEGREE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Degrees/attributes/BestDegree |  |


CERTIFICATES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| CERTIFICATES_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| CERTIFICATE_ID | VARCHAR |  | configuration/entityTypes/HCP/attributes/Certificates/attributes/CertificateId |  |
| NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/Certificates/attributes/Name |  |
| BOARD_ID | VARCHAR |  | configuration/entityTypes/HCP/attributes/Certificates/attributes/BoardId |  |
| BOARD_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/Certificates/attributes/BoardName |  |
| INTERNAL_HCP_STATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalHCPStatus |  |
| INTERNAL_HCP_INACTIVE_REASON_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalHCPInactiveReasonCode |  |
| INTERNAL_SAMPLING_STATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Certificates/attributes/InternalSamplingStatus |  |
| PVS_ELIGIBILTY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Certificates/attributes/PVSEligibilty |  |


EMPLOYMENT

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| EMPLOYMENT_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TITLE | VARCHAR |  | configuration/relationTypes/Employment/attributes/Title |  |
| SUMMARY | VARCHAR |  | configuration/relationTypes/Employment/attributes/Summary |  |
| IS_CURRENT | BOOLEAN |  | configuration/relationTypes/Employment/attributes/IsCurrent |  |
| NAME | VARCHAR | Name | configuration/entityTypes/Organization/attributes/Name |  |


CREDENTIAL

DO NOT USE THIS ATTRIBUTE - will be deprecated

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| CREDENTIAL_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| RANK | VARCHAR |  | configuration/entityTypes/HCP/attributes/Credential/attributes/Rank |  |
| CREDENTIAL | VARCHAR |  | configuration/entityTypes/HCP/attributes/Credential/attributes/Credential | CRED |

PROFESSION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| PROFESSION_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| PROFESSION_CODE | VARCHAR | Profession | configuration/entityTypes/HCP/attributes/Profession/attributes/ProfessionCode | LKUP_IMS_PROFESSION |
| RANK | VARCHAR | Profession Rank | configuration/entityTypes/HCP/attributes/Profession/attributes/Rank |  |


EDUCATION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| EDUCATION_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SCHOOL_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/Education/attributes/SchoolName | LKUP_IMS_SCHOOL_CODE |
| TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Education/attributes/Type |  |
| DEGREE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Education/attributes/Degree |  |
| YEAR_OF_GRADUATION | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduation |  |
| GRADUATED | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/Graduated |  |
| GPA | VARCHAR |  | configuration/entityTypes/HCP/attributes/Education/attributes/GPA |  |
| YEARS_IN_PROGRAM | VARCHAR | Year in Grad Training Program; year in training in the current program | configuration/entityTypes/HCP/attributes/Education/attributes/YearsInProgram |  |
| START_YEAR | VARCHAR |  | configuration/entityTypes/HCP/attributes/Education/attributes/StartYear |  |
| END_YEAR | VARCHAR |  | configuration/entityTypes/HCP/attributes/Education/attributes/EndYear |  |
| FIELDOF_STUDY | VARCHAR | Specialty Focus or Specialty Training | configuration/entityTypes/HCP/attributes/Education/attributes/FieldofStudy |  |
| ELIGIBILITY | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/Eligibility |  |
| EDUCATION_TYPE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCP/attributes/Education/attributes/EducationType |  |
| RANK | VARCHAR |  | configuration/entityTypes/HCP/attributes/Education/attributes/Rank |  |
| MEDICAL_SCHOOL | VARCHAR |  | configuration/entityTypes/HCP/attributes/Education/attributes/MedicalSchool |  |


TAXONOMY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| TAXONOMY_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TAXONOMY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Taxonomy | TAXONOMY_CD, LKUP_IMS_JURIDIC_CATEGORY |
| TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Type, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Type | TAXONOMY_TYPE |
| PROVIDER_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ProviderType |  |
| CLASSIFICATION | VARCHAR |  | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Classification, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Classification |  |
| SPECIALIZATION | VARCHAR |  | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Specialization, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Specialization |  |
| PRIORITY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Priority, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Priority | TAXONOMY_PRIORITY |
| STR_TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/Taxonomy/attributes/StrType | LKUP_IMS_STRUCTURE_TYPE |

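Profiles can carry several taxonomy rows, so reports usually keep only the highest-priority one. A minimal sketch under the same assumed names; PRIORITY is stored as VARCHAR, hence the tolerant cast:

```sql
-- Primary taxonomy per profile: lowest numeric priority wins.
SELECT entity_uri, taxonomy, classification, specialization
FROM mdm_dm.taxonomy
WHERE active = 'Y'                      -- assumed flag convention
QUALIFY ROW_NUMBER() OVER (
    PARTITION BY entity_uri
    ORDER BY TRY_TO_NUMBER(priority) NULLS LAST
) = 1;
```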
DP_PRESENCE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| DP_PRESENCE_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| CHANNEL_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelCode, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelCode | LKUP_IMS_DP_CHANNEL |
| CHANNEL_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelName, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelName |  |
| CHANNEL_URL | VARCHAR |  | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelURL, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelURL |  |
| CHANNEL_REGISTRATION_DATE | DATE |  | configuration/entityTypes/HCP/attributes/DPPresence/attributes/ChannelRegistrationDate, configuration/entityTypes/HCO/attributes/DPPresence/attributes/ChannelRegistrationDate |  |
| PRESENCE_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/DPPresence/attributes/PresenceType, configuration/entityTypes/HCO/attributes/DPPresence/attributes/PresenceType | LKUP_IMS_DP_PRESENCE_TYPE |
| ACTIVITY | VARCHAR |  | configuration/entityTypes/HCP/attributes/DPPresence/attributes/Activity, configuration/entityTypes/HCO/attributes/DPPresence/attributes/Activity | LKUP_IMS_DP_SCORE_CODE |
| AUDIENCE | VARCHAR |  | configuration/entityTypes/HCP/attributes/DPPresence/attributes/Audience, configuration/entityTypes/HCO/attributes/DPPresence/attributes/Audience | LKUP_IMS_DP_SCORE_CODE |

DP_SUMMARY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| DP_SUMMARY_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SUMMARY_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/DPSummary/attributes/SummaryType, configuration/entityTypes/HCO/attributes/DPSummary/attributes/SummaryType | LKUP_IMS_DP_SUMMARY_TYPE |
| SCORE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/DPSummary/attributes/ScoreCode, configuration/entityTypes/HCO/attributes/DPSummary/attributes/ScoreCode | LKUP_IMS_DP_SCORE_CODE |

ADDITIONAL_ATTRIBUTES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ADDITIONAL_ATTRIBUTES_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| ATTRIBUTE_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeName, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeName |  |
| ATTRIBUTE_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeType, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeType | LKUP_IMS_TYPE_CODE |
| ATTRIBUTE_VALUE | VARCHAR |  | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeValue, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeValue |  |
| ATTRIBUTE_RANK | VARCHAR |  | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AttributeRank, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AttributeRank |  |
| ADDITIONAL_INFO | VARCHAR |  | configuration/entityTypes/HCP/attributes/AdditionalAttributes/attributes/AdditionalInfo, configuration/entityTypes/HCO/attributes/AdditionalAttributes/attributes/AdditionalInfo |  |


DATA_QUALITY

Data Quality

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| DATA_QUALITY_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SEVERITY_LEVEL | VARCHAR |  | configuration/entityTypes/HCP/attributes/DataQuality/attributes/SeverityLevel, configuration/entityTypes/HCO/attributes/DataQuality/attributes/SeverityLevel | LKUP_IMS_DQ_SEVERITY |
| SOURCE | VARCHAR |  | configuration/entityTypes/HCP/attributes/DataQuality/attributes/Source, configuration/entityTypes/HCO/attributes/DataQuality/attributes/Source |  |
| SCORE | VARCHAR |  | configuration/entityTypes/HCP/attributes/DataQuality/attributes/Score, configuration/entityTypes/HCO/attributes/DataQuality/attributes/Score |  |


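The DATA_QUALITY block is convenient for monitoring. A minimal sketch that profiles finding counts by source and severity, under the same assumed schema name:

```sql
-- Count data-quality findings by source and severity.
SELECT source, severity_level, COUNT(*) AS findings
FROM mdm_dm.data_quality
GROUP BY source, severity_level
ORDER BY source, findings DESC;
```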
CLASSIFICATION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| CLASSIFICATION_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| CLASSIFICATION_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationType, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE |
| CLASSIFICATION_VALUE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationValue, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationValue |  |
| CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Classification/attributes/ClassificationValueNumericQuantity, configuration/entityTypes/HCO/attributes/Classification/attributes/ClassificationValueNumericQuantity |  |
| STATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Classification/attributes/Status, configuration/entityTypes/HCO/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS |
| EFFECTIVE_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Classification/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Classification/attributes/EffectiveDate |  |
| END_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Classification/attributes/EndDate, configuration/entityTypes/HCO/attributes/Classification/attributes/EndDate |  |
| NOTES | VARCHAR |  | configuration/entityTypes/HCP/attributes/Classification/attributes/Notes, configuration/entityTypes/HCO/attributes/Classification/attributes/Notes |  |


TAG

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| TAG_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TAG_TYPE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Tag/attributes/TagTypeCode, configuration/entityTypes/HCO/attributes/Tag/attributes/TagTypeCode | LKUP_IMS_TAG_TYPE_CODE |
| TAG_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Tag/attributes/TagCode, configuration/entityTypes/HCO/attributes/Tag/attributes/TagCode |  |
| STATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Tag/attributes/Status, configuration/entityTypes/HCO/attributes/Tag/attributes/Status | LKUP_IMS_TAG_STATUS |
| EFFECTIVE_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Tag/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Tag/attributes/EffectiveDate |  |
| END_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Tag/attributes/EndDate, configuration/entityTypes/HCO/attributes/Tag/attributes/EndDate |  |
| NOTES | VARCHAR |  | configuration/entityTypes/HCP/attributes/Tag/attributes/Notes, configuration/entityTypes/HCO/attributes/Tag/attributes/Notes |  |


EXCLUSIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| EXCLUSIONS_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| PRODUCT_ID | VARCHAR |  | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ProductId, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ProductId | LKUP_IMS_PRODUCT_ID |
| EXCLUSION_STATUS_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ExclusionStatusCode, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ExclusionStatusCode | LKUP_IMS_EXCL_STATUS_CODE |
| EFFECTIVE_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Exclusions/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Exclusions/attributes/EffectiveDate |  |
| END_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Exclusions/attributes/EndDate, configuration/entityTypes/HCO/attributes/Exclusions/attributes/EndDate |  |
| NOTES | VARCHAR |  | configuration/entityTypes/HCP/attributes/Exclusions/attributes/Notes, configuration/entityTypes/HCO/attributes/Exclusions/attributes/Notes |  |
| EXCLUSION_RULE_ID | VARCHAR |  | configuration/entityTypes/HCP/attributes/Exclusions/attributes/ExclusionRuleId, configuration/entityTypes/HCO/attributes/Exclusions/attributes/ExclusionRuleId |  |


ACTION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ACTION_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| ACTION_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Action/attributes/ActionCode, configuration/entityTypes/HCO/attributes/Action/attributes/ActionCode | LKUP_IMS_ACTION_CODE |
| ACTION_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/Action/attributes/ActionName, configuration/entityTypes/HCO/attributes/Action/attributes/ActionName |  |
| ACTION_REQUESTED_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Action/attributes/ActionRequestedDate, configuration/entityTypes/HCO/attributes/Action/attributes/ActionRequestedDate |  |
| ACTION_STATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Action/attributes/ActionStatus, configuration/entityTypes/HCO/attributes/Action/attributes/ActionStatus | LKUP_IMS_ACTION_STATUS |
| ACTION_STATUS_DATE | DATE |  | configuration/entityTypes/HCP/attributes/Action/attributes/ActionStatusDate, configuration/entityTypes/HCO/attributes/Action/attributes/ActionStatusDate |  |


ALTERNATE_NAME

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ALTERNATE_NAME_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| NAME_TYPE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/NameTypeCode, configuration/entityTypes/HCO/attributes/AlternateName/attributes/NameTypeCode | LKUP_IMS_NAME_TYPE_CODE |
| NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/Name, configuration/entityTypes/HCO/attributes/AlternateName/attributes/Name |  |
| FIRST_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/FirstName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/FirstName |  |
| MIDDLE_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/MiddleName |  |
| LAST_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/LastName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/LastName |  |
| SUFFIX_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/SuffixName, configuration/entityTypes/HCO/attributes/AlternateName/attributes/SuffixName |  |


LANGUAGE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| LANGUAGE_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| LANGUAGE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Language/attributes/LanguageCode, configuration/entityTypes/HCO/attributes/Language/attributes/LanguageCode |  |
| PROFICIENCY_LEVEL | VARCHAR |  | configuration/entityTypes/HCP/attributes/Language/attributes/ProficiencyLevel, configuration/entityTypes/HCO/attributes/Language/attributes/ProficiencyLevel |  |


SOURCE_DATA

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| SOURCE_DATA_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| CLASS_OF_TRADE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/SourceData/attributes/ClassOfTradeCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/ClassOfTradeCode |  |
| RAW_CLASS_OF_TRADE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/SourceData/attributes/RawClassOfTradeCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/RawClassOfTradeCode |  |
| RAW_CLASS_OF_TRADE_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCP/attributes/SourceData/attributes/RawClassOfTradeDescription, configuration/entityTypes/HCO/attributes/SourceData/attributes/RawClassOfTradeDescription |  |
| DATASET_IDENTIFIER | VARCHAR |  | configuration/entityTypes/HCP/attributes/SourceData/attributes/DatasetIdentifier, configuration/entityTypes/HCO/attributes/SourceData/attributes/DatasetIdentifier |  |
| DATASET_PARTY_IDENTIFIER | VARCHAR |  | configuration/entityTypes/HCP/attributes/SourceData/attributes/DatasetPartyIdentifier, configuration/entityTypes/HCO/attributes/SourceData/attributes/DatasetPartyIdentifier |  |
| PARTY_STATUS_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/SourceData/attributes/PartyStatusCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/PartyStatusCode |  |


NOTES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| NOTES_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| NOTE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Notes/attributes/NoteCode, configuration/entityTypes/HCO/attributes/Notes/attributes/NoteCode | LKUP_IMS_NOTE_CODE |
| NOTE_TEXT | VARCHAR |  | configuration/entityTypes/HCP/attributes/Notes/attributes/NoteText, configuration/entityTypes/HCO/attributes/Notes/attributes/NoteText |  |


HCO

Health care organization

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| NAME | VARCHAR | Name | configuration/entityTypes/HCO/attributes/Name |  |
| TYPE_CODE | VARCHAR | Customer Type | configuration/entityTypes/HCO/attributes/TypeCode | LKUP_IMS_HCO_CUST_TYPE |
| SUB_TYPE_CODE | VARCHAR | Customer Sub Type | configuration/entityTypes/HCO/attributes/SubTypeCode | LKUP_IMS_HCO_SUBTYPE |
| EXCLUDE_FROM_MATCH | VARCHAR |  | configuration/entityTypes/HCO/attributes/ExcludeFromMatch |  |
| OTHER_NAMES | VARCHAR | Other Names | configuration/entityTypes/HCO/attributes/OtherNames |  |
| SOURCE_ID | VARCHAR | Source ID | configuration/entityTypes/HCO/attributes/SourceID |  |
| VALIDATION_STATUS | VARCHAR |  | configuration/entityTypes/HCO/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS |
| ORIGIN_SOURCE | VARCHAR | Originating Source | configuration/entityTypes/HCO/attributes/OriginSource |  |
| COUNTRY_CODE | VARCHAR | Country Code | configuration/entityTypes/HCO/attributes/Country | LKUP_IMS_COUNTRY_CODE |
| FISCAL | VARCHAR |  | configuration/entityTypes/HCO/attributes/Fiscal |  |
| SITE | VARCHAR |  | configuration/entityTypes/HCO/attributes/Site |  |
| GROUP_PRACTICE | BOOLEAN |  | configuration/entityTypes/HCO/attributes/GroupPractice |  |
| GEN_FIRST | VARCHAR | String | configuration/entityTypes/HCO/attributes/GenFirst | LKUP_IMS_HCO_GENFIRST |
| SREP_ACCESS | VARCHAR | String | configuration/entityTypes/HCO/attributes/SrepAccess | LKUP_IMS_HCO_SREPACCESS |
| ACCEPT_MEDICARE | BOOLEAN |  | configuration/entityTypes/HCO/attributes/AcceptMedicare |  |
| ACCEPT_MEDICAID | BOOLEAN |  | configuration/entityTypes/HCO/attributes/AcceptMedicaid |  |
| PERCENT_MEDICARE | VARCHAR |  | configuration/entityTypes/HCO/attributes/PercentMedicare |  |
| PERCENT_MEDICAID | VARCHAR |  | configuration/entityTypes/HCO/attributes/PercentMedicaid |  |
| PARENT_COMPANY | VARCHAR | Replacement Parent Satellite | configuration/entityTypes/HCO/attributes/ParentCompany |  |
| HEALTH_SYSTEM_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/HealthSystemName |  |
| VADOD | BOOLEAN |  | configuration/entityTypes/HCO/attributes/VADOD |  |
| GPO_MEMBERSHIP | BOOLEAN |  | configuration/entityTypes/HCO/attributes/GPOMembership |  |
| ACADEMIC | BOOLEAN |  | configuration/entityTypes/HCO/attributes/Academic |  |
| MKT_SEGMENT_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/MktSegmentCode |  |
| TOTAL_LICENSE_BEDS | VARCHAR |  | configuration/entityTypes/HCO/attributes/TotalLicenseBeds |  |
| TOTAL_CENSUS_BEDS | VARCHAR |  | configuration/entityTypes/HCO/attributes/TotalCensusBeds |  |
| NUM_PATIENTS | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumPatients |  |
| TOTAL_STAFFED_BEDS | VARCHAR |  | configuration/entityTypes/HCO/attributes/TotalStaffedBeds |  |
| TOTAL_SURGERIES | VARCHAR |  | configuration/entityTypes/HCO/attributes/TotalSurgeries |  |
| TOTAL_PROCEDURES | VARCHAR |  | configuration/entityTypes/HCO/attributes/TotalProcedures |  |
| OR_SURGERIES | VARCHAR |  | configuration/entityTypes/HCO/attributes/ORSurgeries |  |
| RESIDENT_PROGRAM | BOOLEAN |  | configuration/entityTypes/HCO/attributes/ResidentProgram |  |
| RESIDENT_COUNT | VARCHAR |  | configuration/entityTypes/HCO/attributes/ResidentCount |  |
| NUMS_OF_PROVIDERS | VARCHAR | Total number of distinct providers affiliated with a business. Current data: values between 1 and 422816 | configuration/entityTypes/HCO/attributes/NumsOfProviders |  |
| CORP_PARENT_NAME | VARCHAR | Corporate Parent Name | configuration/entityTypes/HCO/attributes/CorpParentName |  |
| MANAGER_HCO_ID | VARCHAR | Manager Hco Id | configuration/entityTypes/HCO/attributes/ManagerHcoId |  |
| MANAGER_HCO_NAME | VARCHAR | Manager Hco Name | configuration/entityTypes/HCO/attributes/ManagerHcoName |  |
| OWNER_SUB_NAME | VARCHAR | Owner Sub Name | configuration/entityTypes/HCO/attributes/OwnerSubName |  |
| FORMULARY | VARCHAR |  | configuration/entityTypes/HCO/attributes/Formulary | LKUP_IMS_HCO_FORMULARY |
| E_MEDICAL_RECORD | VARCHAR |  | configuration/entityTypes/HCO/attributes/EMedicalRecord | LKUP_IMS_HCO_EREC |
| E_PRESCRIBE | VARCHAR |  | configuration/entityTypes/HCO/attributes/EPrescribe | LKUP_IMS_HCO_EREC |
| PAY_PERFORM | VARCHAR |  | configuration/entityTypes/HCO/attributes/PayPerform | LKUP_IMS_HCO_PAYPERFORM |
| CMS_COVERED_FOR_TEACHING | BOOLEAN |  | configuration/entityTypes/HCO/attributes/CMSCoveredForTeaching |  |
| COMM_HOSP | BOOLEAN | Indicates whether the facility is a short-term (average length of stay less than 30 days), acute care, non-federal hospital. Values: Yes and Null | configuration/entityTypes/HCO/attributes/CommHosp |  |
| EMAIL_DOMAIN | VARCHAR |  | configuration/entityTypes/HCO/attributes/EmailDomain |  |
| STATUS_IMS | VARCHAR |  | configuration/entityTypes/HCO/attributes/StatusIMS | LKUP_IMS_STATUS |
| DOING_BUSINESS_AS_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/DoingBusinessAsName |  |
| COMPANY_TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/CompanyType | LKUP_IMS_ORG_TYPE |
| CUSIP | VARCHAR |  | configuration/entityTypes/HCO/attributes/CUSIP |  |
| SECTOR_IMS | VARCHAR | Sector | configuration/entityTypes/HCO/attributes/SectorIMS | LKUP_IMS_HCO_SECTORIMS |
| INDUSTRY | VARCHAR |  | configuration/entityTypes/HCO/attributes/Industry |  |
| FOUNDED_YEAR | VARCHAR |  | configuration/entityTypes/HCO/attributes/FoundedYear |  |
| END_YEAR | VARCHAR |  | configuration/entityTypes/HCO/attributes/EndYear |  |
| IPO_YEAR | VARCHAR |  | configuration/entityTypes/HCO/attributes/IPOYear |  |
| LEGAL_DOMICILE | VARCHAR | State of Legal Domicile | configuration/entityTypes/HCO/attributes/LegalDomicile |  |
| OWNERSHIP_STATUS | VARCHAR |  | configuration/entityTypes/HCO/attributes/OwnershipStatus | LKUP_IMS_HCO_OWNERSHIPSTATUS |
| PROFIT_STATUS | VARCHAR | The profit status of the facility. Values include: For Profit, Not For Profit, Government, Armed Forces, or NULL (if data is unknown or not applicable) | configuration/entityTypes/HCO/attributes/ProfitStatus | LKUP_IMS_HCO_PROFITSTATUS |
| CMI | VARCHAR | CMI is the Case Mix Index for an organization. This is a government-assigned measure of the complexity of medical and surgical care provided to Medicare inpatients by a hospital under the prospective payment system (PPS). It factors in a hospital's use of technology for patient care and the level of acuity of medical services required by the patient population. | configuration/entityTypes/HCO/attributes/CMI |  |
| SOURCE_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/SourceName |  |
| SUB_SOURCE_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/SubSourceName |  |
| DEA_BUSINESS_ACTIVITY | VARCHAR |  | configuration/entityTypes/HCO/attributes/DEABusinessActivity |  |
| IMAGE_LINKS | VARCHAR |  | configuration/entityTypes/HCO/attributes/ImageLinks |  |
| VIDEO_LINKS | VARCHAR |  | configuration/entityTypes/HCO/attributes/VideoLinks |  |
| DOCUMENT_LINKS | VARCHAR |  | configuration/entityTypes/HCO/attributes/DocumentLinks |  |
| WEBSITE_URL | VARCHAR |  | configuration/entityTypes/HCO/attributes/WebsiteURL |  |
| TAX_ID | VARCHAR |  | configuration/entityTypes/HCO/attributes/TaxID |  |
| DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCO/attributes/Description |  |
| STATUS_UPDATE_DATE | DATE |  | configuration/entityTypes/HCO/attributes/StatusUpdateDate |  |
| STATUS_REASON_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/StatusReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE |
| COMMENTERS | VARCHAR | Commenters | configuration/entityTypes/HCO/attributes/Commenters |  |
| CLIENT_TYPE_CODE | VARCHAR | Client Customer Type | configuration/entityTypes/HCO/attributes/ClientTypeCode | LKUP_IMS_HCO_CLIENT_CUST_TYPE |
| OFFICIAL_NAME | VARCHAR | Official Name | configuration/entityTypes/HCO/attributes/OfficialName |  |
| VALIDATION_CHANGE_REASON | VARCHAR |  | configuration/entityTypes/HCO/attributes/ValidationChangeReason | LKUP_IMS_VAL_STATUS_CHANGE_REASON |
| VALIDATION_CHANGE_DATE | DATE |  | configuration/entityTypes/HCO/attributes/ValidationChangeDate |  |
| CREATE_DATE | DATE |  | configuration/entityTypes/HCO/attributes/CreateDate |  |
| UPDATE_DATE | DATE |  | configuration/entityTypes/HCO/attributes/UpdateDate |  |
| CHECK_DATE | DATE |  | configuration/entityTypes/HCO/attributes/CheckDate |  |
| STATE_CODE | VARCHAR | Situation of the workplace: Open/Closed | configuration/entityTypes/HCO/attributes/StateCode | LKUP_IMS_PROFILE_STATE |
| STATE_DATE | DATE | Date when the state of the record was last modified | configuration/entityTypes/HCO/attributes/StateDate |  |
| STATUS_CHANGE_REASON | VARCHAR | Reason the status of the Organization changed | configuration/entityTypes/HCO/attributes/StatusChangeReason |  |
| NUM_EMPLOYEES | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumEmployees |  |
| NUM_MED_EMPLOYEES | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumMedEmployees |  |
| TOTAL_BEDS_INTENSIVE_CARE | VARCHAR |  | configuration/entityTypes/HCO/attributes/TotalBedsIntensiveCare |  |
| NUM_EXAMINATION_ROOM | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumExaminationRoom |  |
| NUM_AFFILIATED_SITES | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumAffiliatedSites |  |
| NUM_ENROLLED_MEMBERS | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumEnrolledMembers |  |
| NUM_IN_PATIENTS | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumInPatients |  |
| NUM_OUT_PATIENTS | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumOutPatients |  |
| NUM_OPERATING_ROOMS | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumOperatingRooms |  |
| NUM_PATIENTS_X_WEEK | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumPatientsXWeek |  |
| ACT_TYPE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/ActTypeCode | LKUP_IMS_ACTIVITY_TYPE |
| DISPENSE_DRUGS | BOOLEAN |  | configuration/entityTypes/HCO/attributes/DispenseDrugs |  |
| NUM_PRESCRIBERS | VARCHAR |  | configuration/entityTypes/HCO/attributes/NumPrescribers |  |
| PATIENTS_X_YEAR | VARCHAR |  | configuration/entityTypes/HCO/attributes/PatientsXYear |  |
| ACCEPTS_NEW_PATIENTS | VARCHAR | Y/N field indicating whether the workplace accepts new patients | configuration/entityTypes/HCO/attributes/AcceptsNewPatients |  |
| EXTERNAL_INFORMATION_URL | VARCHAR |  | configuration/entityTypes/HCO/attributes/ExternalInformationURL |  |
| MATCH_STATUS_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/MatchStatusCode | LKUP_IMS_MATCH_STATUS_CODE |
| SUBSCRIPTION_FLAG1 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag1 |  |
| SUBSCRIPTION_FLAG2 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag2 |  |
| SUBSCRIPTION_FLAG3 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag3 |  |
| SUBSCRIPTION_FLAG4 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag4 |  |
| SUBSCRIPTION_FLAG5 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag5 |  |
| SUBSCRIPTION_FLAG6 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag6 |  |
| SUBSCRIPTION_FLAG7 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag7 |  |
| SUBSCRIPTION_FLAG8 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag8 |  |
| SUBSCRIPTION_FLAG9 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag9 |  |
| SUBSCRIPTION_FLAG10 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/entityTypes/HCO/attributes/SubscriptionFlag10 |  |
| ROLE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/RoleCode | LKUP_IMS_ORG_ROLE_CODE |
| ACTIVATION_DATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/ActivationDate |  |
| PARTY_ID | VARCHAR |  | configuration/entityTypes/HCO/attributes/PartyID |  |
| LAST_VERIFICATION_STATUS | VARCHAR |  | configuration/entityTypes/HCO/attributes/LastVerificationStatus |  |
| LAST_VERIFICATION_DATE | DATE |  | configuration/entityTypes/HCO/attributes/LastVerificationDate |  |
| EFFECTIVE_DATE | DATE |  | configuration/entityTypes/HCO/attributes/EffectiveDate |  |
| END_DATE | DATE |  | configuration/entityTypes/HCO/attributes/EndDate |  |
| PARTY_LOCALIZATION_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/PartyLocalizationCode |  |
| MATCH_PARTY_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/MatchPartyName |  |
| DELETE_ENTITY | BOOLEAN | DeleteEntity flag to identify GDPR-compliant data | configuration/entityTypes/HCO/attributes/DeleteEntity |  |
| OK_VR_TRIGGER | VARCHAR |  | configuration/entityTypes/HCO/attributes/OK_VR_Trigger | LKUP_IMS_SEND_FOR_VALIDATION |

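Most HCO volume metrics (bed counts, patient volumes) are stored as VARCHAR, so numeric analysis needs explicit, failure-tolerant casts. A minimal sketch under the same assumed schema name:

```sql
-- Largest active US organizations by staffed beds; TRY_TO_NUMBER
-- returns NULL instead of failing on non-numeric values.
SELECT entity_uri,
       name,
       TRY_TO_NUMBER(total_staffed_beds) AS staffed_beds
FROM mdm_dm.hco
WHERE active = 'Y'                      -- assumed flag convention
  AND country = 'US'                    -- illustrative filter
ORDER BY staffed_beds DESC NULLS LAST
LIMIT 20;
```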
HCO_MAIN_HCO_CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| MAINHCO_URI | VARCHAR | generated key description |  |  |
| CLASSOFTRADEN_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority |  |
| CLASSIFICATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION |
| FACILITY_TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE |
| SPECIALTY | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY |

HCO_ADDRESS_UNIT

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ADDRESS_URI | VARCHAR | generated key description |  |  |
| UNIT_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| UNIT_NAME | VARCHAR |  | configuration/entityTypes/Location/attributes/Unit/attributes/UnitName |  |
| UNIT_VALUE | VARCHAR |  | configuration/entityTypes/Location/attributes/Unit/attributes/UnitValue |  |


HCO_ADDRESS_BRICK

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| ADDRESS_URI | VARCHAR | generated key description |  |  |
| BRICK_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE | VARCHAR |  | configuration/entityTypes/Location/attributes/Brick/attributes/Type | LKUP_IMS_BRICK_TYPE |
| BRICK_VALUE | VARCHAR |  | configuration/entityTypes/Location/attributes/Brick/attributes/BrickValue | LKUP_IMS_BRICK_VALUE |
| SORT_ORDER | VARCHAR |  | configuration/entityTypes/Location/attributes/Brick/attributes/SortOrder |  |


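Bricks attach to addresses, so the natural join key is ADDRESS_URI. A minimal sketch that lists the bricks for each HCO address in source-defined order; the schema name is assumed as above, and SORT_ORDER is VARCHAR, hence the tolerant cast:

```sql
-- Bricks per HCO address, in source-defined order.
SELECT entity_uri,
       address_uri,
       type        AS brick_type,
       brick_value
FROM mdm_dm.hco_address_brick
WHERE active = 'Y'                      -- assumed flag convention
ORDER BY entity_uri, address_uri, TRY_TO_NUMBER(sort_order) NULLS LAST;
```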
KEY_FINANCIAL_FIGURES_OVERVIEW

| Column | Type | Description | Reltio Attribute URI | LOV Name |
|---|---|---|---|---|
| KEY_FINANCIAL_FIGURES_OVERVIEW_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| FINANCIAL_STATEMENT_TO_DATE | DATE |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDate |  |
| FINANCIAL_PERIOD_DURATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDuration |  |
| SALES_REVENUE_CURRENCY | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrency |  |
| SALES_REVENUE_CURRENCY_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCode |  |
| SALES_REVENUE_RELIABILITY_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCode |  |
| SALES_REVENUE_UNIT_OF_SIZE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSize |  |
| SALES_REVENUE_AMOUNT | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmount |  |
| PROFIT_OR_LOSS_CURRENCY | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrency |  |
| PROFIT_OR_LOSS_RELIABILITY_TEXT | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityText |  |
| PROFIT_OR_LOSS_UNIT_OF_SIZE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSize |  |
| PROFIT_OR_LOSS_AMOUNT | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmount |  |
| SALES_TURNOVER_GROWTH_RATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRate |  |
| SALES3YRY_GROWTH_RATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRate |  |
| SALES5YRY_GROWTH_RATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRate |  |
| EMPLOYEE3YRY_GROWTH_RATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRate |  |
| EMPLOYEE5YRY_GROWTH_RATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRate |  |


CLASSOF_TRADE_N

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| CLASSOF_TRADE_N_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| PRIORITY | VARCHAR | Numeric code for the primary class of trade | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority |  |
| CLASSIFICATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION |
| FACILITY_TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE |
| SPECIALTY | VARCHAR |  | configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY |
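
The LOV Name column identifies the list of values that backs each coded column. How the LOVs are exposed in Snowflake is not documented on this page; the sketch below assumes each LOV is queryable as a lookup table with CODE and DESCRIPTION columns, which is purely illustrative:

```sql
-- Minimal sketch: resolve a coded column through its LOV.
-- The lookup layout (CODE/DESCRIPTION) is an assumption.
SELECT ct.ENTITY_URI,
       ct.CLASSIFICATION,
       lov.DESCRIPTION AS CLASSIFICATION_DESC
FROM   CLASSOF_TRADE_N ct
LEFT   JOIN LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION lov
       ON lov.CODE = ct.CLASSIFICATION;
```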

SPECIALTY

DO NOT USE THIS ATTRIBUTE - will be deprecated

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| SPECIALTY_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SPECIALTY | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCO/attributes/Specialty/attributes/Specialty |  |
| TYPE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes/HCO/attributes/Specialty/attributes/Type |  |


GSA_EXCLUSION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| GSA_EXCLUSION_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SANCTION_ID | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/SanctionId |  |
| ORGANIZATION_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/OrganizationName |  |
| ADDRESS_LINE1 | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine1 |  |
| ADDRESS_LINE2 | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine2 |  |
| CITY | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/City |  |
| STATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/State |  |
| ZIP | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Zip |  |
| ACTION_DATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/ActionDate |  |
| TERM_DATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/TermDate |  |
| AGENCY | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Agency |  |
| CONFIDENCE | VARCHAR |  | configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Confidence |  |


OIG_EXCLUSION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| OIG_EXCLUSION_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| SANCTION_ID | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/SanctionId |  |
| ACTION_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionCode |  |
| ACTION_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDescription |  |
| BOARD_CODE | VARCHAR | Court case board id | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardCode |  |
| BOARD_DESC | VARCHAR | Court case board description | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardDesc |  |
| ACTION_DATE | DATE |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDate |  |
| OFFENSE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseCode |  |
| OFFENSE_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseDescription |  |


BRICK

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| BRICK_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/Brick/attributes/Type | LKUP_IMS_BRICK_TYPE |
| BRICK_VALUE | VARCHAR |  | configuration/entityTypes/HCO/attributes/Brick/attributes/BrickValue | LKUP_IMS_BRICK_VALUE |

EMR

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| EMR_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| NOTES | BOOLEAN | Y/N field indicating whether the workplace uses EMR software to write notes | configuration/entityTypes/HCO/attributes/EMR/attributes/Notes |  |
| PRESCRIBES | BOOLEAN | Y/N field indicating whether the workplace uses EMR software to write prescriptions | configuration/entityTypes/HCO/attributes/EMR/attributes/Prescribes | LKUP_IMS_EMR_PRESCRIBES |
| ELABS_X_RAYS | BOOLEAN | Y/N field indicating whether the workplace uses EMR software for eLabs/X-rays | configuration/entityTypes/HCO/attributes/EMR/attributes/ElabsXRays | LKUP_IMS_EMR_ELABS_XRAYS |
| NUMBER_OF_PHYSICIANS | VARCHAR | Number of physicians that use EMR software in the workplace | configuration/entityTypes/HCO/attributes/EMR/attributes/NumberOfPhysicians |  |
| POLICYMAKER | VARCHAR | Individual who makes decisions regarding EMR software | configuration/entityTypes/HCO/attributes/EMR/attributes/Policymaker |  |
| SOFTWARE_TYPE | VARCHAR | Name of the EMR software used at the workplace | configuration/entityTypes/HCO/attributes/EMR/attributes/SoftwareType |  |
| ADOPTION | VARCHAR | When the EMR software was adopted at the workplace | configuration/entityTypes/HCO/attributes/EMR/attributes/Adoption |  |
| BUYING_FACTOR | VARCHAR | Buying factor which influenced the workplace's decision to purchase the EMR | configuration/entityTypes/HCO/attributes/EMR/attributes/BuyingFactor |  |
| OWNER | VARCHAR | Individual who made the decision to purchase EMR software | configuration/entityTypes/HCO/attributes/EMR/attributes/Owner |  |
| AWARE | BOOLEAN |  | configuration/entityTypes/HCO/attributes/EMR/attributes/Aware | LKUP_IMS_EMR_AWARE |
| SOFTWARE | BOOLEAN |  | configuration/entityTypes/HCO/attributes/EMR/attributes/Software | LKUP_IMS_EMR_SOFTWARE |
| VENDOR | VARCHAR |  | configuration/entityTypes/HCO/attributes/EMR/attributes/Vendor | LKUP_IMS_EMR_VENDOR |

BUSINESS_HOURS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| BUSINESS_HOURS_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| DAY | VARCHAR |  | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/Day |  |
| PERIOD | VARCHAR |  | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/Period |  |
| TIME_SLOT | VARCHAR |  | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/TimeSlot |  |
| START_TIME | VARCHAR |  | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/StartTime |  |
| END_TIME | VARCHAR |  | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/EndTime |  |
| APPOINTMENT_ONLY | BOOLEAN |  | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/AppointmentOnly |  |
| PERIOD_START | VARCHAR |  | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodStart |  |
| PERIOD_END | VARCHAR |  | configuration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodEnd |  |


ACO_DETAILS

ACO Details

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACO_DETAILS_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| ACO_TYPE_CODE | VARCHAR | AcoTypeCode | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeCode | LKUP_IMS_ACO_TYPE |
| ACO_TYPE_CATG | VARCHAR | AcoTypeCatg | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeCatg |  |
| ACO_TYPE_MDEL | VARCHAR | AcoTypeMdel | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeMdel |  |
| ACO_DETAIL_ID | VARCHAR | AcoDetailId | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailId |  |
| ACO_DETAIL_CODE | VARCHAR | AcoDetailCode | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailCode | LKUP_IMS_ACO_DETAIL |
| ACO_DETAIL_GROUP_CODE | VARCHAR | AcoDetailGroupCode | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailGroupCode | LKUP_IMS_ACO_DETAIL_GROUP |
| ACO_VAL | VARCHAR | AcoVal | configuration/entityTypes/HCO/attributes/ACODetails/attributes/AcoVal |  |


TRADE_STYLE_NAME

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| TRADE_STYLE_NAME_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| ORGANIZATION_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/OrganizationName |  |
| LANGUAGE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/LanguageCode |  |
| FORMER_ORGANIZATION_PRIMARY_NAME | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/FormerOrganizationPrimaryName |  |
| DISPLAY_SEQUENCE | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/DisplaySequence |  |
| TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/Type |  |


PRIOR_DUNS_NUMBER

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| PRIOR_DUNSN_UMBER_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| TRANSFER_DUNS_NUMBER | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDUNSNumber |  |
| TRANSFER_REASON_TEXT | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonText |  |
| TRANSFER_REASON_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonCode |  |
| TRANSFER_DATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDate |  |
| TRANSFERRED_FROM_DUNS_NUMBER | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumber |  |
| TRANSFERRED_TO_DUNS_NUMBER | VARCHAR |  | configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumber |  |


INDUSTRY_CODE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| INDUSTRY_CODE_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| DNB_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DNBCode |  |
| INDUSTRY_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCode |  |
| INDUSTRY_CODE_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeDescription |  |
| INDUSTRY_CODE_LANGUAGE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeLanguageCode |  |
| INDUSTRY_CODE_WRITING_SCRIPT | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeWritingScript |  |
| DISPLAY_SEQUENCE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DisplaySequence |  |
| SALES_PERCENTAGE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/SalesPercentage |  |
| TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/Type |  |
| INDUSTRY_TYPE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryTypeCode |  |
| IMPORT_EXPORT_AGENT | VARCHAR |  | configuration/entityTypes/HCO/attributes/IndustryCode/attributes/ImportExportAgent |  |


ACTIVITIES_AND_OPERATIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACTIVITIES_AND_OPERATIONS_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| LINE_OF_BUSINESS_DESCRIPTION | VARCHAR |  | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescription |  |
| LANGUAGE_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LanguageCode |  |
| WRITING_SCRIPT_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/WritingScriptCode |  |
| IMPORT_INDICATOR | BOOLEAN |  | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ImportIndicator |  |
| EXPORT_INDICATOR | BOOLEAN |  | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ExportIndicator |  |
| AGENT_INDICATOR | BOOLEAN |  | configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/AgentIndicator |  |


EMPLOYEE_DETAILS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| EMPLOYEE_DETAILS_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| INDIVIDUAL_EMPLOYEE_FIGURES_DATE | VARCHAR |  | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDate |  |
| INDIVIDUAL_TOTAL_EMPLOYEE_QUANTITY | VARCHAR |  | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantity |  |
| INDIVIDUAL_RELIABILITY_TEXT | VARCHAR |  | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualReliabilityText |  |
| TOTAL_EMPLOYEE_QUANTITY | VARCHAR |  | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeQuantity |  |
| TOTAL_EMPLOYEE_RELIABILITY | VARCHAR |  | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeReliability |  |
| PRINCIPALS_INCLUDED | VARCHAR |  | configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/PrincipalsIncluded |  |


MATCH_QUALITY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| MATCH_QUALITY_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| CONFIDENCE_CODE | VARCHAR | DnB Match Quality Confidence Code | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/ConfidenceCode |  |
| DISPLAY_SEQUENCE | VARCHAR | DnB Match Quality Display Sequence | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/DisplaySequence |  |
| MATCH_CODE | VARCHAR |  | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchCode |  |
| BEMFAB | VARCHAR |  | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/BEMFAB |  |
| MATCH_GRADE | VARCHAR |  | configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchGrade |  |


ORGANIZATION_DETAIL

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ORGANIZATION_DETAIL_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| MEMBER_ROLE | VARCHAR |  | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/MemberRole |  |
| STANDALONE | BOOLEAN |  | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/Standalone |  |
| CONTROL_OWNERSHIP_DATE | DATE |  | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/ControlOwnershipDate |  |
| OPERATING_STATUS | VARCHAR |  | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatus |  |
| START_YEAR | VARCHAR |  | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StartYear |  |
| FRANCHISE_OPERATION_TYPE | VARCHAR |  | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/FranchiseOperationType |  |
| BONEYARD_ORGANIZATION | BOOLEAN |  | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/BoneyardOrganization |  |
| OPERATING_STATUS_COMMENT | VARCHAR |  | configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusComment |  |


DUNS_HIERARCHY

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| DUNS_HIERARCHY_URI | VARCHAR | generated key description |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| GLOBAL_ULTIMATE_DUNS | VARCHAR |  | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateDUNS |  |
| GLOBAL_ULTIMATE_ORGANIZATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateOrganization |  |
| DOMESTIC_ULTIMATE_DUNS | VARCHAR |  | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateDUNS |  |
| DOMESTIC_ULTIMATE_ORGANIZATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateOrganization |  |
| PARENT_DUNS | VARCHAR |  | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentDUNS |  |
| PARENT_ORGANIZATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentOrganization |  |
| HEADQUARTERS_DUNS | VARCHAR |  | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersDUNS |  |
| HEADQUARTERS_ORGANIZATION | VARCHAR |  | configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersOrganization |  |
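
Because each DUNS_HIERARCHY row already denormalizes the parent, headquarters, domestic-ultimate and global-ultimate D-U-N-S numbers, corporate families can be grouped without recursion. A minimal sketch, assuming the 'Y'/'N' Active Flag encoding:

```sql
-- Minimal sketch: size of each corporate family by global ultimate.
SELECT GLOBAL_ULTIMATE_DUNS,
       GLOBAL_ULTIMATE_ORGANIZATION,
       COUNT(*) AS FAMILY_MEMBERS
FROM   DUNS_HIERARCHY
WHERE  ACTIVE = 'Y'            -- 'Y'/'N' encoding assumed
GROUP  BY GLOBAL_ULTIMATE_DUNS, GLOBAL_ULTIMATE_ORGANIZATION
ORDER  BY FAMILY_MEMBERS DESC;
```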


AFFILIATIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| RELATION_TYPE | VARCHAR | Reltio Relation Type |  |  |
| START_ENTITY_URI | VARCHAR | Reltio Start Entity URI |  |  |
| END_ENTITY_URI | VARCHAR | Reltio End Entity URI |  |  |
| REL_GROUP | VARCHAR | HCRS relation group from the relationship type; each rel group refers to one relation id | configuration/relationTypes/AffiliatedPurchasing/attributes/RelGroup, configuration/relationTypes/Managed/attributes/RelGroup | LKUP_IMS_RELGROUP_TYPE |
| REL_ORDER_AFFILIATEDPURCHASING | VARCHAR | Order | configuration/relationTypes/AffiliatedPurchasing/attributes/RelOrder |  |
| STATUS_REASON_CODE | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/StatusReasonCode, configuration/relationTypes/Activity/attributes/StatusReasonCode, configuration/relationTypes/Managed/attributes/StatusReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE |
| STATUS_UPDATE_DATE | DATE |  | configuration/relationTypes/AffiliatedPurchasing/attributes/StatusUpdateDate, configuration/relationTypes/Activity/attributes/StatusUpdateDate, configuration/relationTypes/Managed/attributes/StatusUpdateDate |  |
| VALIDATION_CHANGE_REASON | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeReason, configuration/relationTypes/Activity/attributes/ValidationChangeReason, configuration/relationTypes/Managed/attributes/ValidationChangeReason | LKUP_IMS_VAL_STATUS_CHANGE_REASON |
| VALIDATION_CHANGE_DATE | DATE |  | configuration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeDate, configuration/relationTypes/Activity/attributes/ValidationChangeDate, configuration/relationTypes/Managed/attributes/ValidationChangeDate |  |
| VALIDATION_STATUS | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/ValidationStatus, configuration/relationTypes/Activity/attributes/ValidationStatus, configuration/relationTypes/Managed/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS |
| AFFILIATION_STATUS | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/AffiliationStatus, configuration/relationTypes/Activity/attributes/AffiliationStatus, configuration/relationTypes/Managed/attributes/AffiliationStatus | LKUP_IMS_STATUS |
| COUNTRY | VARCHAR | Country Code | configuration/relationTypes/AffiliatedPurchasing/attributes/Country, configuration/relationTypes/Activity/attributes/Country, configuration/relationTypes/Managed/attributes/Country | LKUP_IMS_COUNTRY_CODE |
| AFFILIATION_NAME | VARCHAR | Affiliation Name | configuration/relationTypes/AffiliatedPurchasing/attributes/AffiliationName, configuration/relationTypes/Activity/attributes/AffiliationName |  |
| SUBSCRIPTION_FLAG1 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag1, configuration/relationTypes/Activity/attributes/SubscriptionFlag1, configuration/relationTypes/Managed/attributes/SubscriptionFlag1 |  |
| SUBSCRIPTION_FLAG2 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag2, configuration/relationTypes/Activity/attributes/SubscriptionFlag2, configuration/relationTypes/Managed/attributes/SubscriptionFlag2 |  |
| SUBSCRIPTION_FLAG3 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag3, configuration/relationTypes/Activity/attributes/SubscriptionFlag3, configuration/relationTypes/Managed/attributes/SubscriptionFlag3 |  |
| SUBSCRIPTION_FLAG4 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag4, configuration/relationTypes/Activity/attributes/SubscriptionFlag4, configuration/relationTypes/Managed/attributes/SubscriptionFlag4 |  |
| SUBSCRIPTION_FLAG5 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag5, configuration/relationTypes/Activity/attributes/SubscriptionFlag5, configuration/relationTypes/Managed/attributes/SubscriptionFlag5 |  |
| SUBSCRIPTION_FLAG6 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag6, configuration/relationTypes/Activity/attributes/SubscriptionFlag6, configuration/relationTypes/Managed/attributes/SubscriptionFlag6 |  |
| SUBSCRIPTION_FLAG7 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag7, configuration/relationTypes/Activity/attributes/SubscriptionFlag7, configuration/relationTypes/Managed/attributes/SubscriptionFlag7 |  |
| SUBSCRIPTION_FLAG8 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag8, configuration/relationTypes/Activity/attributes/SubscriptionFlag8, configuration/relationTypes/Managed/attributes/SubscriptionFlag8 |  |
| SUBSCRIPTION_FLAG9 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag9, configuration/relationTypes/Activity/attributes/SubscriptionFlag9, configuration/relationTypes/Managed/attributes/SubscriptionFlag9 |  |
| SUBSCRIPTION_FLAG10 | BOOLEAN | Used to mark a profile as eligible for a certain subscription | configuration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag10, configuration/relationTypes/Activity/attributes/SubscriptionFlag10, configuration/relationTypes/Managed/attributes/SubscriptionFlag10 |  |
| BEST_RELATIONSHIP_INDICATOR | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/BestRelationshipIndicator, configuration/relationTypes/Activity/attributes/BestRelationshipIndicator, configuration/relationTypes/Managed/attributes/BestRelationshipIndicator | LKUP_IMS_YES_NO |
| RELATIONSHIP_RANK | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipRank, configuration/relationTypes/Activity/attributes/RelationshipRank, configuration/relationTypes/Managed/attributes/RelationshipRank |  |
| RELATIONSHIP_VIEW_CODE | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewCode, configuration/relationTypes/Activity/attributes/RelationshipViewCode, configuration/relationTypes/Managed/attributes/RelationshipViewCode |  |
| RELATIONSHIP_VIEW_TYPE_CODE | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewTypeCode, configuration/relationTypes/Activity/attributes/RelationshipViewTypeCode, configuration/relationTypes/Managed/attributes/RelationshipViewTypeCode |  |
| RELATIONSHIP_STATUS | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStatus, configuration/relationTypes/Activity/attributes/RelationshipStatus, configuration/relationTypes/Managed/attributes/RelationshipStatus | LKUP_IMS_RELATIONSHIP_STATUS |
| RELATIONSHIP_CREATE_DATE | DATE |  | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipCreateDate, configuration/relationTypes/Activity/attributes/RelationshipCreateDate, configuration/relationTypes/Managed/attributes/RelationshipCreateDate |  |
| UPDATE_DATE | DATE |  | configuration/relationTypes/AffiliatedPurchasing/attributes/UpdateDate, configuration/relationTypes/Activity/attributes/UpdateDate, configuration/relationTypes/Managed/attributes/UpdateDate |  |
| RELATIONSHIP_START_DATE | DATE |  | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStartDate, configuration/relationTypes/Activity/attributes/RelationshipStartDate, configuration/relationTypes/Managed/attributes/RelationshipStartDate |  |
| RELATIONSHIP_END_DATE | DATE |  | configuration/relationTypes/AffiliatedPurchasing/attributes/RelationshipEndDate, configuration/relationTypes/Activity/attributes/RelationshipEndDate, configuration/relationTypes/Managed/attributes/RelationshipEndDate |  |
| CHECKED_DATE | DATE |  | configuration/relationTypes/Activity/attributes/CheckedDate |  |
| PREFERRED_MAIL_INDICATOR | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PreferredMailIndicator |  |
| PREFERRED_VISIT_INDICATOR | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PreferredVisitIndicator |  |
| COMMITTEE_MEMBER | VARCHAR |  | configuration/relationTypes/Activity/attributes/CommitteeMember | LKUP_IMS_MEMBER_MED_COMMITTEE |
| APPOINTMENT_REQUIRED | BOOLEAN |  | configuration/relationTypes/Activity/attributes/AppointmentRequired |  |
| AFFILIATION_TYPE_CODE | VARCHAR | Affiliation Type Code | configuration/relationTypes/Activity/attributes/AffiliationTypeCode |  |
| WORKING_STATUS | VARCHAR |  | configuration/relationTypes/Activity/attributes/WorkingStatus | LKUP_IMS_WORKING_STATUS |
| TITLE | VARCHAR |  | configuration/relationTypes/Activity/attributes/Title | LKUP_IMS_PROF_TITLE |
| RANK | VARCHAR |  | configuration/relationTypes/Activity/attributes/Rank |  |
| PRIMARY_AFFILIATION_INDICATOR | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrimaryAffiliationIndicator |  |
| ACT_WEBSITE_URL | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActWebsiteURL |  |
| ACT_VALIDATION_STATUS | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActValidationStatus | LKUP_IMS_VAL_STATUS |
| PREF_OR_ACTIVE | VARCHAR |  | configuration/relationTypes/Activity/attributes/PrefOrActive |  |
| COMMENTERS | VARCHAR | Commenters | configuration/relationTypes/Activity/attributes/Commenters |  |
| REL_ORDER_MANAGED | BOOLEAN | Order | configuration/relationTypes/Managed/attributes/RelOrder |  |
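
AFFILIATIONS carries all three relation types (AffiliatedPurchasing, Activity, Managed) in one view, with START_ENTITY_URI and END_ENTITY_URI pointing at the related profiles. A minimal sketch that resolves Activity affiliations to HCP names; the HCP view name, the start/end direction, and the RELATION_TYPE encoding are assumptions:

```sql
-- Minimal sketch: active Activity affiliations with the HCP side resolved.
SELECT p.FIRST_NAME,
       p.LAST_NAME,
       a.AFFILIATION_NAME,
       a.PRIMARY_AFFILIATION_INDICATOR
FROM   AFFILIATIONS a
JOIN   HCP p ON p.ENTITY_URI = a.START_ENTITY_URI  -- direction assumed
WHERE  a.RELATION_TYPE LIKE '%Activity%'           -- encoding assumed
  AND  a.ACTIVE = 'Y';
```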


PURCHASING_CLASSIFICATION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| CLASSIFICATION_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| CLASSIFICATION_TYPE | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE |
| CLASSIFICATION_INDICATOR | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationIndicator | LKUP_IMS_CLASSIFICATION_INDICATOR |
| CLASSIFICATION_VALUE | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValue |  |
| CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValueNumericQuantity |  |
| STATUS | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS |
| EFFECTIVE_DATE | DATE |  | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EffectiveDate |  |
| END_DATE | DATE |  | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EndDate |  |
| NOTES | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/Notes |  |


PURCHASING_SOURCE_DATA

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| SOURCE_DATA_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| DATASET_IDENTIFIER | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/DatasetIdentifier |  |
| START_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifier |  |
| END_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifier |  |
| RANK | VARCHAR |  | configuration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/Rank |  |


ACTIVITY_PHONE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACT_PHONE_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| TYPE_IMS | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/TypeIMS | LKUP_IMS_COMMUNICATION_TYPE |
| NUMBER | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/Number |  |
| EXTENSION | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/Extension |  |
| RANK | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/Rank |  |
| COUNTRY_CODE | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/CountryCode | LKUP_IMS_COUNTRY_CODE |
| AREA_CODE | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/AreaCode |  |
| LOCAL_NUMBER | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/LocalNumber |  |
| FORMATTED_NUMBER | VARCHAR | Formatted number of the phone | configuration/relationTypes/Activity/attributes/ActPhone/attributes/FormattedNumber |  |
| VALIDATION_STATUS | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/ValidationStatus |  |
| LINE_TYPE | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/LineType |  |
| FORMAT_MASK | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/FormatMask |  |
| DIGIT_COUNT | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/DigitCount |  |
| GEO_AREA | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/GeoArea |  |
| GEO_COUNTRY | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActPhone/attributes/GeoCountry |  |
| ACTIVE | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/relationTypes/Activity/attributes/ActPhone/attributes/Active |  |


ACTIVITY_PRIVACY_PREFERENCES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| PRIVACY_PREFERENCES_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| PHONE_OPT_OUT | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/PhoneOptOut |  |
| ALLOWED_TO_CONTACT | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/AllowedToContact |  |
| EMAIL_OPT_OUT | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/EmailOptOut |  |
| MAIL_OPT_OUT | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/MailOptOut |  |
| FAX_OPT_OUT | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/FaxOptOut |  |
| REMOTE_OPT_OUT | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/RemoteOptOut |  |
| OPT_OUT_ONEKEY | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/OptOutOnekey |  |
| VISIT_OPT_OUT | BOOLEAN |  | configuration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/VisitOptOut |  |


ACTIVITY_SPECIALITIES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| SPECIALITIES_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| SPECIALTY_TYPE | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyType | LKUP_IMS_SPECIALTY_TYPE |
| SPECIALTY | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/Specialty | LKUP_IMS_SPECIALTY |
| EMAIL_OPT_OUT | BOOLEAN |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/EmailOptOut |  |
| DESC | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/Desc |  |
| GROUP | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/Group |  |
| SOURCE_CD | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/SourceCD |  |
| SPECIALTY_DETAIL | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyDetail |  |
| PROFESSION_CODE | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/ProfessionCode |  |
| RANK | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/Rank |  |
| PRIMARY_SPECIALTY_FLAG | BOOLEAN | Primary Specialty flag to be populated by client teams according to business rules | configuration/relationTypes/Activity/attributes/Specialities/attributes/PrimarySpecialtyFlag |  |
| SORT_ORDER | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/SortOrder |  |
| BEST_RECORD | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/BestRecord |  |
| SUB_SPECIALTY | VARCHAR |  | configuration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialty | LKUP_IMS_SPECIALTY |
| SUB_SPECIALTY_RANK | VARCHAR | SubSpecialty Rank | configuration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialtyRank |  |
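
Since an affiliation can carry several specialties, consumers often reduce this view to one row per relation, preferring the client-maintained PRIMARY_SPECIALTY_FLAG and then the rank. A minimal sketch using Snowflake's QUALIFY clause; note that RANK is stored as VARCHAR, hence the cast:

```sql
-- Minimal sketch: one specialty per affiliation.
SELECT RELATION_URI,
       SPECIALTY,
       SPECIALTY_TYPE
FROM   ACTIVITY_SPECIALITIES
QUALIFY ROW_NUMBER() OVER (
          PARTITION BY RELATION_URI
          ORDER BY PRIMARY_SPECIALTY_FLAG DESC,       -- TRUE sorts first
                   TRY_TO_NUMBER(RANK) ASC NULLS LAST -- VARCHAR rank cast
        ) = 1;
```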


ACTIVITY_IDENTIFIERS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACT_IDENTIFIERS_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| ID | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/ID |  |
| TYPE | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/Type | LKUP_IMS_HCP_IDENTIFIER_TYPE |
| ORDER | VARCHAR | Displays the order of priority for an MPN for those facilities that share an MPN. Valid values are: P (the MPN on a business record is the primary identifier for the business) and O (the MPN is a secondary identifier). Using P for the MPN supports aggregating clinical volumes and avoids double counting. | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/Order |  |
| AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS |
| NATIONAL_ID_ATTRIBUTE | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/NationalIdAttribute |  |
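
The ORDER rule above (P = primary, O = secondary) is what lets consumers aggregate clinical volumes without double counting facilities that share an MPN. A minimal sketch; the 'MPN' type code is illustrative, actual codes live in LKUP_IMS_HCP_IDENTIFIER_TYPE:

```sql
-- Minimal sketch: primary MPN per affiliation, avoiding double counting.
SELECT RELATION_URI,
       ID AS MPN
FROM   ACTIVITY_IDENTIFIERS
WHERE  TYPE    = 'MPN'   -- identifier type code assumed
  AND  "ORDER" = 'P';    -- ORDER is a reserved word, hence quoted
```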


ACTIVITY_ADDITIONAL_ATTRIBUTES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ADDITIONAL_ATTRIBUTES_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| ATTRIBUTE_NAME | VARCHAR |  | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeName |  |
| ATTRIBUTE_TYPE | VARCHAR |  | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeType | LKUP_IMS_TYPE_CODE |
| ATTRIBUTE_VALUE | VARCHAR |  | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeValue |  |
| ATTRIBUTE_RANK | VARCHAR |  | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeRank |  |
| ADDITIONAL_INFO | VARCHAR |  | configuration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AdditionalInfo |  |


ACTIVITY_BUSINESS_HOURS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| BUSINESS_HOURS_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| DAY | VARCHAR |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/Day |  |
| PERIOD | VARCHAR |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/Period |  |
| TIME_SLOT | VARCHAR |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/TimeSlot |  |
| START_TIME | VARCHAR |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/StartTime |  |
| END_TIME | VARCHAR |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/EndTime |  |
| APPOINTMENT_ONLY | BOOLEAN |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/AppointmentOnly |  |
| PERIOD_START | VARCHAR |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodStart |  |
| PERIOD_END | VARCHAR |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodEnd |  |
| PERIOD_OF_DAY | VARCHAR |  | configuration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodOfDay |  |


ACTIVITY_AFFILIATION_ROLE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| AFFILIATION_ROLE_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| ROLE_RANK | VARCHAR |  | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleRank |  |
| ROLE_NAME | VARCHAR |  | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleName | LKUP_IMS_ROLE |
| ROLE_ATTRIBUTE | VARCHAR |  | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleAttribute |  |
| ROLE_TYPE_ATTRIBUTE | VARCHAR |  | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleTypeAttribute |  |
| ROLE_STATUS | VARCHAR |  | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleStatus |  |
| BEST_ROLE_INDICATOR | VARCHAR |  | configuration/relationTypes/Activity/attributes/AffiliationRole/attributes/BestRoleIndicator |  |


ACTIVITY_EMAIL

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ACT_EMAIL_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| TYPE_IMS | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActEmail/attributes/TypeIMS | LKUP_IMS_COMMUNICATION_TYPE |
| EMAIL | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Email |  |
| DOMAIN | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Domain |  |
| DOMAIN_TYPE | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActEmail/attributes/DomainType |  |
| USERNAME | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Username |  |
| RANK | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Rank |  |
| VALIDATION_STATUS | VARCHAR |  | configuration/relationTypes/Activity/attributes/ActEmail/attributes/ValidationStatus |  |
| ACTIVE | BOOLEAN |  | configuration/relationTypes/Activity/attributes/ActEmail/attributes/Active |  |


ACTIVITY_BRICK

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| BRICK_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| TYPE | VARCHAR |  | configuration/relationTypes/Activity/attributes/Brick/attributes/Type | LKUP_IMS_BRICK_TYPE |
| BRICK_VALUE | VARCHAR |  | configuration/relationTypes/Activity/attributes/Brick/attributes/BrickValue | LKUP_IMS_BRICK_VALUE |
| SORT_ORDER | VARCHAR |  | configuration/relationTypes/Activity/attributes/Brick/attributes/SortOrder |  |


ACTIVITY_CLASSIFICATION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| CLASSIFICATION_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| CLASSIFICATION_TYPE | VARCHAR |  | configuration/relationTypes/Activity/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE |
| CLASSIFICATION_INDICATOR | VARCHAR |  | configuration/relationTypes/Activity/attributes/Classification/attributes/ClassificationIndicator | LKUP_IMS_CLASSIFICATION_INDICATOR |
| CLASSIFICATION_VALUE | VARCHAR |  | configuration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValue |  |
| CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR |  | configuration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValueNumericQuantity |  |
| STATUS | VARCHAR |  | configuration/relationTypes/Activity/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS |
| EFFECTIVE_DATE | DATE |  | configuration/relationTypes/Activity/attributes/Classification/attributes/EffectiveDate |  |
| END_DATE | DATE |  | configuration/relationTypes/Activity/attributes/Classification/attributes/EndDate |  |
| NOTES | VARCHAR |  | configuration/relationTypes/Activity/attributes/Classification/attributes/Notes |  |


ACTIVITY_SOURCE_DATA

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| SOURCE_DATA_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| DATASET_IDENTIFIER | VARCHAR |  | configuration/relationTypes/Activity/attributes/SourceData/attributes/DatasetIdentifier |  |
| START_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR |  | configuration/relationTypes/Activity/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifier |  |
| END_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR |  | configuration/relationTypes/Activity/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifier |  |
| RANK | VARCHAR |  | configuration/relationTypes/Activity/attributes/SourceData/attributes/Rank |  |


MANAGED_CLASSIFICATION

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| CLASSIFICATION_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| CLASSIFICATION_TYPE | VARCHAR |  | configuration/relationTypes/Managed/attributes/Classification/attributes/ClassificationType | LKUP_IMS_CLASSIFICATION_TYPE |
| CLASSIFICATION_INDICATOR | VARCHAR |  | configuration/relationTypes/Managed/attributes/Classification/attributes/ClassificationIndicator | LKUP_IMS_CLASSIFICATION_INDICATOR |
| CLASSIFICATION_VALUE | VARCHAR |  | configuration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValue |  |
| CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR |  | configuration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValueNumericQuantity |  |
| STATUS | VARCHAR |  | configuration/relationTypes/Managed/attributes/Classification/attributes/Status | LKUP_IMS_CLASSIFICATION_STATUS |
| EFFECTIVE_DATE | DATE |  | configuration/relationTypes/Managed/attributes/Classification/attributes/EffectiveDate |  |
| END_DATE | DATE |  | configuration/relationTypes/Managed/attributes/Classification/attributes/EndDate |  |
| NOTES | VARCHAR |  | configuration/relationTypes/Managed/attributes/Classification/attributes/Notes |  |


MANAGED_SOURCE_DATA

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| SOURCE_DATA_URI | VARCHAR | generated key description |  |  |
| RELATION_URI | VARCHAR | Reltio Relation URI |  |  |
| DATASET_IDENTIFIER | VARCHAR |  | configuration/relationTypes/Managed/attributes/SourceData/attributes/DatasetIdentifier |  |
| START_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR |  | configuration/relationTypes/Managed/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifier |  |
| END_OBJECT_DATASET_PARTY_IDENTIFIER | VARCHAR |  | configuration/relationTypes/Managed/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifier |  |
| RANK | VARCHAR |  | configuration/relationTypes/Managed/attributes/SourceData/attributes/Rank |  |


" }, { "title": "Dynamic views for COMPANY MDM Model", "pageID": "163917858", "pageLink": "/display/GMDM/Dynamic+views+for+COMPANY+MDM+Model", "content": "

HCP

Health care provider

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| COUNTRY_HCP | VARCHAR | Country | configuration/entityTypes/HCP/attributes/Country |  |
| COMPANY_CUST_ID | VARCHAR | An auto-generated unique COMPANY id assigned to an HCP | configuration/entityTypes/HCP/attributes/COMPANYCustID |  |
| PREFIX | VARCHAR | Prefix added before the name, e.g., Mr, Ms, Dr | configuration/entityTypes/HCP/attributes/Prefix | HCPPrefix |
| NAME | VARCHAR | Name | configuration/entityTypes/HCP/attributes/Name |  |
| FIRST_NAME | VARCHAR | First Name | configuration/entityTypes/HCP/attributes/FirstName |  |
| LAST_NAME | VARCHAR | Last Name | configuration/entityTypes/HCP/attributes/LastName |  |
| MIDDLE_NAME | VARCHAR | Middle Name | configuration/entityTypes/HCP/attributes/MiddleName |  |
| CLEANSED_MIDDLE_NAME | VARCHAR | Cleansed Middle Name | configuration/entityTypes/HCP/attributes/CleansedMiddleName |  |
| STATUS | VARCHAR | Status, e.g., Active or Inactive | configuration/entityTypes/HCP/attributes/Status | HCPStatus |
| STATUS_DETAIL | VARCHAR | Deactivation reason | configuration/entityTypes/HCP/attributes/StatusDetail | HCPStatusDetail |
| DEACTIVATION_CODE | VARCHAR | Deactivation reason | configuration/entityTypes/HCP/attributes/DeactivationCode | HCPDeactivationReasonCode |
| SUFFIX_NAME | VARCHAR | Generation Suffix | configuration/entityTypes/HCP/attributes/SuffixName | SuffixName |
| GENDER | VARCHAR | Gender | configuration/entityTypes/HCP/attributes/Gender | Gender |
| NICKNAME | VARCHAR | Nickname | configuration/entityTypes/HCP/attributes/Nickname |  |
| PREFERRED_NAME | VARCHAR | Preferred Name | configuration/entityTypes/HCP/attributes/PreferredName |  |
| FORMATTED_NAME | VARCHAR | Formatted Name | configuration/entityTypes/HCP/attributes/FormattedName |  |
| TYPE_CODE | VARCHAR | HCP Type Code | configuration/entityTypes/HCP/attributes/TypeCode | HCPType |
| SUB_TYPE_CODE | VARCHAR | HCP SubType Code | configuration/entityTypes/HCP/attributes/SubTypeCode | HCPSubTypeCode |
| IS_COMPANY_APPROVED_SPEAKER | BOOLEAN | Is COMPANY Approved Speaker | configuration/entityTypes/HCP/attributes/IsCOMPANYApprovedSpeaker |  |
| SPEAKER_LAST_BRIEFING_DATE | DATE | Last Briefing Date | configuration/entityTypes/HCP/attributes/SpeakerLastBriefingDate |  |
| SPEAKER_TYPE | VARCHAR | Speaker type | configuration/entityTypes/HCP/attributes/SpeakerType |  |
| SPEAKER_STATUS | VARCHAR | Speaker Status | configuration/entityTypes/HCP/attributes/SpeakerStatus | HCPSpeakerStatus |
| SPEAKER_LEVEL | VARCHAR | Speaker Level | configuration/entityTypes/HCP/attributes/SpeakerLevel |  |
| SPEAKER_EFFECTIVE_DATE | DATE | Speaker Effective Date | configuration/entityTypes/HCP/attributes/SpeakerEffectiveDate |  |
| SPEAKER_DEACTIVATE_REASON | VARCHAR | Speaker Deactivate Reason | configuration/entityTypes/HCP/attributes/SpeakerDeactivateReason |  |
| DELETION_DATE | DATE | Deletion Date | configuration/entityTypes/HCP/attributes/DeletionDate |  |
| ACCOUNT_BLOCKED | BOOLEAN | Indicates whether the account is blocked | configuration/entityTypes/HCP/attributes/AccountBlocked |  |
| Y_O_B | VARCHAR | Birth Year | configuration/entityTypes/HCP/attributes/YoB |  |
| D_O_D | DATE |  | configuration/entityTypes/HCP/attributes/DoD |  |
| Y_O_D | VARCHAR |  | configuration/entityTypes/HCP/attributes/YoD |  |
| TERRITORY_NUMBER | VARCHAR | Territory Number | configuration/entityTypes/HCP/attributes/TerritoryNumber |  |
| WEBSITE_URL | VARCHAR | Website URL | configuration/entityTypes/HCP/attributes/WebsiteURL |  |
| TITLE | VARCHAR | Title of HCP | configuration/entityTypes/HCP/attributes/Title | HCPTitle |
| EFFECTIVE_END_DATE | DATE |  | configuration/entityTypes/HCP/attributes/EffectiveEndDate |  |
| COMPANY_WATCH_IND | BOOLEAN | COMPANY Watch Ind | configuration/entityTypes/HCP/attributes/COMPANYWatchInd |  |
| KOL_STATUS | BOOLEAN | KOL Status | configuration/entityTypes/HCP/attributes/KOLStatus |  |
| THIRD_PARTY_DECIL | VARCHAR | Third Party Decile | configuration/entityTypes/HCP/attributes/ThirdPartyDecil |  |
| FEDERAL_EMP_LETTER_DATE | DATE | Federal Emp Letter Date | configuration/entityTypes/HCP/attributes/FederalEmpLetterDate |  |
| MARKETING_CONTRACT_CODE | VARCHAR | Marketing Contract Code | configuration/entityTypes/HCP/attributes/MarketingContractCode |  |
| CURRICULUM_VITAE_LINK | VARCHAR | Curriculum Vitae Link | configuration/entityTypes/HCP/attributes/CurriculumVitaeLink |  |
| SPEAKER_TRAVEL_INDICATOR | VARCHAR | Speaker Travel Indicator | configuration/entityTypes/HCP/attributes/SpeakerTravelIndicator |  |
| SPEAKER_INFO | VARCHAR | Speaker Information | configuration/entityTypes/HCP/attributes/SpeakerInfo |  |
| DEGREE | VARCHAR | Degree Information | configuration/entityTypes/HCP/attributes/Degree |  |
| PRESENT_EMPLOYMENT | VARCHAR | Present Employment | configuration/entityTypes/HCP/attributes/PresentEmployment | PE_CD |
| EMPLOYMENT_TYPE_CODE | VARCHAR | Employment Type Code | configuration/entityTypes/HCP/attributes/EmploymentTypeCode |  |
| EMPLOYMENT_TYPE_DESC | VARCHAR | Employment Type Description | configuration/entityTypes/HCP/attributes/EmploymentTypeDesc |  |
| TYPE_OF_PRACTICE | VARCHAR | Type Of Practice | configuration/entityTypes/HCP/attributes/TypeOfPractice | TOP_CD |
| TYPE_OF_PRACTICE_DESC | VARCHAR | Type Of Practice Description | configuration/entityTypes/HCP/attributes/TypeOfPracticeDesc |  |
| SCHOOL_SEQ_NUMBER | VARCHAR | School Sequence Number | configuration/entityTypes/HCP/attributes/SchoolSeqNumber |  |
| MRM_DELETE_FLAG | BOOLEAN | MRM Delete Flag | configuration/entityTypes/HCP/attributes/MRMDeleteFlag |  |
| MRM_DELETE_DATE | DATE | MRM Delete Date | configuration/entityTypes/HCP/attributes/MRMDeleteDate |  |
| CNCY_DATE | DATE | CNCY Date | configuration/entityTypes/HCP/attributes/CNCYDate |  |
| AMA_HOSPITAL | VARCHAR | AMA Hospital Info | configuration/entityTypes/HCP/attributes/AMAHospital |  |
| AMA_HOSPITAL_DESC | VARCHAR | AMA Hospital Desc | configuration/entityTypes/HCP/attributes/AMAHospitalDesc |  |
| PRACTISE_AT_HOSPITAL | VARCHAR | Practise At Hospital | configuration/entityTypes/HCP/attributes/PractiseAtHospital |  |
| SEGMENT_ID | VARCHAR | Segment ID | configuration/entityTypes/HCP/attributes/SegmentID |  |
| SEGMENT_DESC | VARCHAR | Segment Desc | configuration/entityTypes/HCP/attributes/SegmentDesc |  |
| DCR_STATUS | VARCHAR | Status of HCP profile | configuration/entityTypes/HCP/attributes/DCRStatus | DCRStatus |
| PREFERRED_LANGUAGE | VARCHAR | Language preference | configuration/entityTypes/HCP/attributes/PreferredLanguage |  |
| SOURCE_TYPE | VARCHAR | Type of the source | configuration/entityTypes/HCP/attributes/SourceType |  |
| STATE_UPDATE_DATE | DATE | Update date of state | configuration/entityTypes/HCP/attributes/StateUpdateDate |  |
| SOURCE_UPDATE_DATE | DATE | Update date at source | configuration/entityTypes/HCP/attributes/SourceUpdateDate |  |
| COMMENTERS | VARCHAR | Commenters | configuration/entityTypes/HCP/attributes/Commenters |  |
| IMAGE_GALLERY | VARCHAR |  | configuration/entityTypes/HCP/attributes/ImageGallery |  |
| BIRTH_CITY | VARCHAR | Birth City | configuration/entityTypes/HCP/attributes/BirthCity |  |
| BIRTH_STATE | VARCHAR | Birth State | configuration/entityTypes/HCP/attributes/BirthState | State |
| BIRTH_COUNTRY | VARCHAR | Birth Country | configuration/entityTypes/HCP/attributes/BirthCountry | Country |
| D_O_B | DATE | Date of Birth | configuration/entityTypes/HCP/attributes/DoB |  |
| ORIGINAL_SOURCE_NAME | VARCHAR | Original Source Name | configuration/entityTypes/HCP/attributes/OriginalSourceName |  |
| SOURCE_MATCH_CATEGORY | VARCHAR | Source Match Category | configuration/entityTypes/HCP/attributes/SourceMatchCategory |  |
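
A minimal sketch of a common consumer query over this view: active, COMPANY-approved speakers and their speaker attributes. The view name HCP and the 'Y'/'N' Active Flag encoding are assumptions:

```sql
-- Minimal sketch: active COMPANY-approved speakers.
SELECT COMPANY_CUST_ID,
       FIRST_NAME,
       LAST_NAME,
       SPEAKER_STATUS,
       SPEAKER_LEVEL
FROM   HCP
WHERE  ACTIVE = 'Y'
  AND  IS_COMPANY_APPROVED_SPEAKER = TRUE;
```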


ALTERNATE_NAME

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ALTERNATE_NAME_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| NAME_TYPE_CODE | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/NameTypeCode | HCPAlternateNameType |
| FULL_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/FullName |  |
| FIRST_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/FirstName |  |
| MIDDLE_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleName |  |
| LAST_NAME | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/LastName |  |
| VERSION | VARCHAR |  | configuration/entityTypes/HCP/attributes/AlternateName/attributes/Version |  |


ADDRESSES

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| --- | --- | --- | --- | --- |
| ADDRESSES_URI | VARCHAR | Generated Key |  |  |
| ENTITY_URI | VARCHAR | Reltio Entity URI |  |  |
| COUNTRY | VARCHAR | Country Code |  |  |
| ACTIVE | VARCHAR | Active Flag |  |  |
| ENTITY_TYPE | VARCHAR | Reltio Entity Type |  |  |
| ADDRESS_TYPE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressType, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressType, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressType | AddressType |
| COMPANY_ADDRESS_ID | VARCHAR | COMPANY Address ID | configuration/entityTypes/HCP/attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes/MCO/attributes/Addresses/attributes/COMPANYAddressID |  |
| ADDRESS_LINE1 | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine1, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine1 |  |
| ADDRESS_LINE2 | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine2, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine2 |  |
| ADDRESS_LINE3 | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine3, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine3, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine3 |  |
| ADDRESS_LINE4 | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine4, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine4, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine4 |  |
| CITY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/City, configuration/entityTypes/HCO/attributes/Addresses/attributes/City, configuration/entityTypes/MCO/attributes/Addresses/attributes/City |  |
| STATE_PROVINCE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvince, configuration/entityTypes/HCO/attributes/Addresses/attributes/StateProvince, configuration/entityTypes/MCO/attributes/Addresses/attributes/StateProvince | State |
| COUNTRY_ADDRESSES | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/Country, configuration/entityTypes/HCO/attributes/Addresses/attributes/Country, configuration/entityTypes/MCO/attributes/Addresses/attributes/Country | Country |
| PO_BOX | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/POBox, configuration/entityTypes/HCO/attributes/Addresses/attributes/POBox, configuration/entityTypes/MCO/attributes/Addresses/attributes/POBox |  |
| ZIP5 | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5, configuration/entityTypes/HCO/attributes/Addresses/attributes/Zip5, configuration/entityTypes/MCO/attributes/Addresses/attributes/Zip5 |  |
| ZIP4 | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip4, configuration/entityTypes/HCO/attributes/Addresses/attributes/Zip4, configuration/entityTypes/MCO/attributes/Addresses/attributes/Zip4 |  |
| STREET | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/Street, configuration/entityTypes/HCO/attributes/Addresses/attributes/Street, configuration/entityTypes/MCO/attributes/Addresses/attributes/Street |  |
| POSTAL_CODE_EXTENSION | VARCHAR | Postal Code Extension | configuration/entityTypes/HCP/attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes/HCO/attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes/MCO/attributes/Addresses/attributes/PostalCodeExtension |  |
| ADDRESS_USAGE_TAG | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressUsageTag, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressUsageTag | AddressUsageTag |
| CNCY_DATE | DATE | CNCY Date | configuration/entityTypes/HCP/attributes/Addresses/attributes/CNCYDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/CNCYDate |  |
| CBSA_CODE | VARCHAR | Core Based Statistical Area | configuration/entityTypes/HCP/attributes/Addresses/attributes/CBSACode, configuration/entityTypes/HCO/attributes/Addresses/attributes/CBSACode, configuration/entityTypes/MCO/attributes/Addresses/attributes/CBSACode |  |
| PREMISE | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/Premise, configuration/entityTypes/HCO/attributes/Addresses/attributes/Premise |  |
| ISO3166-2 | VARCHAR | This field holds the ISO 3166 2-character country code. | configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-2 |  |
| ISO3166-3 | VARCHAR | This field holds the ISO 3166 3-character country code. | configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-3 |  |
| ISO3166-N | VARCHAR | This field holds the ISO 3166 N-digit numeric country code. | configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-N |  |
| LATITUDE | VARCHAR | Latitude | configuration/entityTypes/HCP/attributes/Addresses/attributes/Latitude, configuration/entityTypes/HCO/attributes/Addresses/attributes/Latitude, configuration/entityTypes/MCO/attributes/Addresses/attributes/Latitude |  |
| LONGITUDE | VARCHAR | Longitude | configuration/entityTypes/HCP/attributes/Addresses/attributes/Longitude, configuration/entityTypes/HCO/attributes/Addresses/attributes/Longitude, configuration/entityTypes/MCO/attributes/Addresses/attributes/Longitude |  |
| GEO_ACCURACY | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/GeoAccuracy, configuration/entityTypes/HCO/attributes/Addresses/attributes/GeoAccuracy, configuration/entityTypes/MCO/attributes/Addresses/attributes/GeoAccuracy |  |
| VERIFICATION_STATUS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus, configuration/entityTypes/HCO/attributes/Addresses/attributes/VerificationStatus, configuration/entityTypes/MCO/attributes/Addresses/attributes/VerificationStatus |  |
| VERIFICATION_STATUS_DETAILS | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes/HCO/attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes/MCO/attributes/Addresses/attributes/VerificationStatusDetails |  |
| AVC | VARCHAR |  | configuration/entityTypes/HCP/attributes/Addresses/attributes/AVC, configuration/entityTypes/HCO/attributes/Addresses/attributes/AVC, configuration/entityTypes/MCO/attributes/Addresses/attributes/AVC |  |


SETTING_TYPE

VARCHAR

Setting Type

configuration/entityTypes/HCP/attributes/Addresses/attributes/SettingType, configuration/entityTypes/HCO/attributes/Addresses/attributes/SettingType


ADDRESS_SETTING_TYPE_DESC

VARCHAR

Address Setting Type Desc

configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressSettingTypeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressSettingTypeDesc


CATEGORY

VARCHAR

Category

configuration/entityTypes/HCP/attributes/Addresses/attributes/Category, configuration/entityTypes/HCO/attributes/Addresses/attributes/Category

AddressCategory

FIPS_CODE

VARCHAR


configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCode


FIPS_COUNTY_CODE

VARCHAR


configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCountyCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCountyCode


FIPS_COUNTY_CODE_DESC

VARCHAR


configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCountyCodeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCountyCodeDesc


FIPS_STATE_CODE

VARCHAR


configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSStateCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSStateCode


FIPS_STATE_CODE_DESC

VARCHAR


configuration/entityTypes/HCP/attributes/Addresses/attributes/FIPSStateCodeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSStateCodeDesc


CARE_OF

VARCHAR

Care Of

configuration/entityTypes/HCP/attributes/Addresses/attributes/CareOf, configuration/entityTypes/HCO/attributes/Addresses/attributes/CareOf


MAIN_PHYSICAL_OFFICE

VARCHAR

Main Physical Office

configuration/entityTypes/HCP/attributes/Addresses/attributes/MainPhysicalOffice, configuration/entityTypes/HCO/attributes/Addresses/attributes/MainPhysicalOffice


DELIVERABILITY_CONFIDENCE

VARCHAR

Deliverability Confidence

configuration/entityTypes/HCP/attributes/Addresses/attributes/DeliverabilityConfidence, configuration/entityTypes/HCO/attributes/Addresses/attributes/DeliverabilityConfidence


APPLID

VARCHAR

APPLID

configuration/entityTypes/HCP/attributes/Addresses/attributes/APPLID, configuration/entityTypes/HCO/attributes/Addresses/attributes/APPLID


SMPLDLV_IND

BOOLEAN

SMPLDLV Ind

configuration/entityTypes/HCP/attributes/Addresses/attributes/SMPLDLVInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/SMPLDLVInd


STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Addresses/attributes/Status, configuration/entityTypes/HCO/attributes/Addresses/attributes/Status

AddressStatus

STARTER_ELIGIBLE_FLAG

VARCHAR

StarterEligibleFlag

configuration/entityTypes/HCP/attributes/Addresses/attributes/StarterEligibleFlag, configuration/entityTypes/HCO/attributes/Addresses/attributes/StarterEligibleFlag


DEA_FLAG

BOOLEAN

DEA Flag

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEAFlag, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEAFlag


USAGE_TYPE

VARCHAR

Usage Type

configuration/entityTypes/HCP/attributes/Addresses/attributes/UsageType, configuration/entityTypes/HCO/attributes/Addresses/attributes/UsageType


PRIMARY

BOOLEAN

Primary Address

configuration/entityTypes/HCP/attributes/Addresses/attributes/Primary, configuration/entityTypes/HCO/attributes/Addresses/attributes/Primary


EFFECTIVE_START_DATE

DATE

Effective Start Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/EffectiveStartDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/EffectiveStartDate


EFFECTIVE_END_DATE

DATE

Effective End Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/EffectiveEndDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/EffectiveEndDate


ADDRESS_RANK

VARCHAR

Address Rank for priority

configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressRank, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressRank


SOURCE_SEGMENT_CODE

VARCHAR

Source Segment Code

configuration/entityTypes/HCP/attributes/Addresses/attributes/SourceSegmentCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/SourceSegmentCode


SEGMENT1

VARCHAR

Segment1

configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment1, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment1


SEGMENT2

VARCHAR

Segment2

configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment2, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment2


SEGMENT3

VARCHAR

Segment3

configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment3, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment3


ADDRESS_IND

BOOLEAN

AddressInd

configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressInd


SCRIPT_UTILIZATION_WEIGHT

VARCHAR

Script Utilization Weight

configuration/entityTypes/HCP/attributes/Addresses/attributes/ScriptUtilizationWeight, configuration/entityTypes/HCO/attributes/Addresses/attributes/ScriptUtilizationWeight


BUSINESS_ACTIVITY_CODE

VARCHAR

Business Activity Code

configuration/entityTypes/HCP/attributes/Addresses/attributes/BusinessActivityCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/BusinessActivityCode


BUSINESS_ACTIVITY_DESC

VARCHAR

Business Activity Desc

configuration/entityTypes/HCP/attributes/Addresses/attributes/BusinessActivityDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/BusinessActivityDesc


PRACTICE_LOCATION_RANK

VARCHAR

Practice Location Rank

configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationRank

PracticeLocationRank

PRACTICE_LOCATION_CONFIDENCE_IND

VARCHAR

Practice Location Confidence Ind

configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationConfidenceInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationConfidenceInd


PRACTICE_LOCATION_CONFIDENCE_DESC

VARCHAR

Practice Location Confidence Desc

configuration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationConfidenceDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationConfidenceDesc


SINGLE_ADDRESS_IND

BOOLEAN

Single Address Ind

configuration/entityTypes/HCP/attributes/Addresses/attributes/SingleAddressInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/SingleAddressInd


SUB_ADMINISTRATIVE_AREA

VARCHAR

This field holds the smallest geographic data element within a country. For instance, USA County.

configuration/entityTypes/HCP/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes/HCO/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes/MCO/attributes/Addresses/attributes/SubAdministrativeArea


SUPER_ADMINISTRATIVE_AREA

VARCHAR

This field holds the largest geographic data element within a country.

configuration/entityTypes/HCO/attributes/Addresses/attributes/SuperAdministrativeArea


ADMINISTRATIVE_AREA

VARCHAR

This field holds the most common geographic data element within a country. For instance, USA State, and Canadian Province.

configuration/entityTypes/HCO/attributes/Addresses/attributes/AdministrativeArea


UNIT_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/UnitName


UNIT_VALUE

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/UnitValue


FLOOR

VARCHAR

N/A

configuration/entityTypes/HCO/attributes/Addresses/attributes/Floor


BUILDING

VARCHAR

N/A

configuration/entityTypes/HCO/attributes/Addresses/attributes/Building


SUB_BUILDING

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/SubBuilding


NEIGHBORHOOD

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/Neighborhood


PREMISE_NUMBER

VARCHAR


configuration/entityTypes/HCO/attributes/Addresses/attributes/PremiseNumber


ADDRESSES_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESSES_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceRank


SOURCE_ADDRESS_ID

VARCHAR

Source Address ID

configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceAddressID


LEGACY_IQVIA_ADDRESS_ID

VARCHAR

Legacy address id

configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressID


ADDRESSES_DEA

DEA

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESSES_URI

VARCHAR

Generated Key



DEA_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



NUMBER

VARCHAR

Number

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Number, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/Number


EXPIRATION_DATE

DATE

Expiration Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/ExpirationDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/ExpirationDate


STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/Status

AddressDEAStatus

STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/Status


STATUS_DETAIL

VARCHAR

Deactivation Reason Code

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDetail, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDetail

HCPDEAStatusDetail

STATUS_DETAIL

VARCHAR

Deactivation Reason Code

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDetail, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDetail


DRUG_SCHEDULE

VARCHAR

Drug Schedule

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DrugSchedule


DRUG_SCHEDULE

VARCHAR

Drug Schedule

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DrugSchedule

App-LSCustomer360DEADrugSchedule

EFFECTIVE_DATE

DATE

Effective Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/EffectiveDate


STATUS_DATE

DATE

Status Date

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDate


DEA_BUSINESS_ACTIVITY

VARCHAR

Business Activity

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity

DEABusinessActivity

DEA_BUSINESS_ACTIVITY

VARCHAR

Business Activity

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity


SUB_BUSINESS_ACTIVITY

VARCHAR

Sub Business Activity

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity

DEABusinessSubActivity

SUB_BUSINESS_ACTIVITY

VARCHAR

Sub Business Activity

configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity


BUSINESS_ACTIVITY_DESC

VARCHAR

Business Activity Desc

configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/BusinessActivityDesc


SUB_BUSINESS_ACTIVITY_DESC

VARCHAR

Sub Business Activity Desc

configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivityDesc


ADDRESSES_OFFICE_INFORMATION

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESSES_URI

VARCHAR

Generated Key



OFFICE_INFORMATION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



BEST_TIMES

VARCHAR

Best Times

configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/BestTimes, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/BestTimes


APPT_REQUIRED

BOOLEAN

Appointment Required or not

configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequired, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequired


OFFICE_NOTES

VARCHAR

Office Notes

configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotes, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotes


COMPLIANCE

Compliance

Column

Type

Description

Reltio Attribute URI

LOV Name

COMPLIANCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



GO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/GOStatus

HCPComplianceGOStatus

PIGO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/PIGOStatus

HCPPIGOStatus

NIPPIGO_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/NIPPIGOStatus

HCPNIPPIGOStatus

PRIMARY_PIGO_RATIONALE

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/PrimaryPIGORationale

HCPPIGORationale

SECONDARY_PIGO_RATIONALE

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/SecondaryPIGORationale

HCPPIGORationale

PIGOSME_REVIEW

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/PIGOSMEReview

HCPPIGOSMEReview

GSQ_DATE

DATE


configuration/entityTypes/HCP/attributes/Compliance/attributes/GSQDate


DO_NOT_USE

BOOLEAN


configuration/entityTypes/HCP/attributes/Compliance/attributes/DoNotUse


CHANGE_DATE

DATE


configuration/entityTypes/HCP/attributes/Compliance/attributes/ChangeDate


CHANGE_REASON

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/ChangeReason


MAPPHCP_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/MAPPHCPStatus


MAPP_MAIL

VARCHAR


configuration/entityTypes/HCP/attributes/Compliance/attributes/MAPPMail


DISCLOSURE

Disclosure

Column

Type

Description

Reltio Attribute URI

LOV Name

DISCLOSURE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



BENEFIT_CATEGORY

VARCHAR

Benefit Category

configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitCategory

HCPBenefitCategory

BENEFIT_TITLE

VARCHAR

Benefit Title

configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitTitle

HCPBenefitTitle

BENEFIT_QUALITY

VARCHAR

Benefit Quality

configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitQuality

HCPBenefitQuality

BENEFIT_SPECIALTY

VARCHAR

Benefit Specialty

configuration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitSpecialty

HCPBenefitSpecialty

CONTRACT_CLASSIFICATION

VARCHAR

Contract Classification

configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassification


CONTRACT_CLASSIFICATION_DATE

DATE

Contract Classification Date

configuration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationDate


MILITARY

BOOLEAN

Military

configuration/entityTypes/HCP/attributes/Disclosure/attributes/Military


CIVIL_SERVANT

BOOLEAN

Civil Servant

configuration/entityTypes/HCP/attributes/Disclosure/attributes/CivilServant


CREDENTIAL

Credential Information

Column

Type

Description

Reltio Attribute URI

LOV Name

CREDENTIAL_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



CREDENTIAL

VARCHAR


configuration/entityTypes/HCP/attributes/Credential/attributes/Credential

Credential

OTHER_CDTL_TXT

VARCHAR

Other Credential Text

configuration/entityTypes/HCP/attributes/Credential/attributes/OtherCdtlTxt


PRIMARY_FLAG

BOOLEAN

Primary Flag

configuration/entityTypes/HCP/attributes/Credential/attributes/PrimaryFlag


EFFECTIVE_END_DATE

DATE

Effective End Date

configuration/entityTypes/HCP/attributes/Credential/attributes/EffectiveEndDate


PROFESSION

Profession Information

Column

Type

Description

Reltio Attribute URI

LOV Name

PROFESSION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



PROFESSION

VARCHAR


configuration/entityTypes/HCP/attributes/Profession/attributes/Profession

HCPSpecialtyProfession

PROFESSION_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

PROFESSION_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Profession/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/Profession/attributes/Source/attributes/SourceRank


SPECIALITIES

Column

Type

Description

Reltio Attribute URI

LOV Name

SPECIALITIES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SPECIALTY

VARCHAR

Specialty of the entity, e.g., Adult Congenital Heart Disease

configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

HCPSpecialty,App-LSCustomer360Specialty

PROFESSION

VARCHAR


configuration/entityTypes/HCP/attributes/Specialities/attributes/Profession

HCPSpecialtyProfession

PRIMARY

BOOLEAN

Whether Primary Specialty or not

configuration/entityTypes/HCP/attributes/Specialities/attributes/Primary, configuration/entityTypes/HCO/attributes/Specialities/attributes/Primary


RANK

VARCHAR

Rank

configuration/entityTypes/HCP/attributes/Specialities/attributes/Rank


TRUST_INDICATOR

VARCHAR


configuration/entityTypes/HCP/attributes/Specialities/attributes/TrustIndicator


DESC

VARCHAR

DO NOT USE THIS ATTRIBUTE - will be deprecated

configuration/entityTypes/HCP/attributes/Specialities/attributes/Desc


SPECIALTY_TYPE

VARCHAR

Type of Specialty, e.g. Secondary

configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyType

App-LSCustomer360SpecialtyType

GROUP

VARCHAR

Group, Specialty belongs to

configuration/entityTypes/HCO/attributes/Specialities/attributes/Group


SPECIALTY_DETAIL

VARCHAR

Description of Specialty

configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyDetail


SPECIALITIES_SOURCE

Column

Type

Description

Reltio Attribute URI

LOV Name

SPECIALITIES_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Specialities/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

Rank

configuration/entityTypes/HCP/attributes/Specialities/attributes/Source/attributes/SourceRank


SUB_SPECIALITIES

Column

Type

Description

Reltio Attribute URI

LOV Name

SUB_SPECIALITIES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SPECIALTY_CODE

VARCHAR

Sub specialty code of the entity

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/SpecialtyCode


SUB_SPECIALTY

VARCHAR

Sub specialty of the entity

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/SubSpecialty


PROFESSION_CODE

VARCHAR

Profession Code

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/ProfessionCode


SUB_SPECIALITIES_SOURCE

Column

Type

Description

Reltio Attribute URI

LOV Name

SUB_SPECIALITIES_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

Rank

configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source/attributes/SourceRank


EDUCATION

Column

Type

Description

Reltio Attribute URI

LOV Name

EDUCATION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SCHOOL_CD

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/SchoolCD


SCHOOL_NAME

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/SchoolName


YEAR_OF_GRADUATION

VARCHAR

DO NOT USE THIS ATTRIBUTE - will be deprecated

configuration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduation


STATE

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/State


COUNTRY_EDUCATION

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/Country


TYPE

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/Type


GPA

VARCHAR


configuration/entityTypes/HCP/attributes/Education/attributes/GPA


GRADUATED

BOOLEAN

DO NOT USE THIS ATTRIBUTE - will be deprecated

configuration/entityTypes/HCP/attributes/Education/attributes/Graduated


EMAIL

Column

Type

Description

Reltio Attribute URI

LOV Name

EMAIL_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR

Type of Email, e.g., Home

configuration/entityTypes/HCP/attributes/Email/attributes/Type, configuration/entityTypes/HCO/attributes/Email/attributes/Type, configuration/entityTypes/MCO/attributes/Email/attributes/Type

EmailType

EMAIL

VARCHAR

Email address

configuration/entityTypes/HCP/attributes/Email/attributes/Email, configuration/entityTypes/HCO/attributes/Email/attributes/Email, configuration/entityTypes/MCO/attributes/Email/attributes/Email


RANK

VARCHAR

Rank used to assign priority to a Email

configuration/entityTypes/HCP/attributes/Email/attributes/Rank, configuration/entityTypes/HCO/attributes/Email/attributes/Rank, configuration/entityTypes/MCO/attributes/Email/attributes/Rank


EMAIL_USAGE_TAG

VARCHAR


configuration/entityTypes/HCP/attributes/Email/attributes/EmailUsageTag, configuration/entityTypes/HCO/attributes/Email/attributes/EmailUsageTag, configuration/entityTypes/MCO/attributes/Email/attributes/EmailUsageTag

EmailUsageTag

USAGE_TYPE

VARCHAR

Usage Type of an Email

configuration/entityTypes/HCP/attributes/Email/attributes/UsageType, configuration/entityTypes/HCO/attributes/Email/attributes/UsageType, configuration/entityTypes/MCO/attributes/Email/attributes/UsageType


DOMAIN

VARCHAR


configuration/entityTypes/HCP/attributes/Email/attributes/Domain, configuration/entityTypes/HCO/attributes/Email/attributes/Domain, configuration/entityTypes/MCO/attributes/Email/attributes/Domain


VALIDATION_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/MCO/attributes/Email/attributes/ValidationStatus


DOMAIN_TYPE

VARCHAR

Status of Email

configuration/entityTypes/HCO/attributes/Email/attributes/DomainType, configuration/entityTypes/MCO/attributes/Email/attributes/DomainType


USERNAME

VARCHAR

Domain on which Email is created

configuration/entityTypes/HCO/attributes/Email/attributes/Username, configuration/entityTypes/MCO/attributes/Email/attributes/Username


EMAIL_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

EMAIL_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Email/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Email/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Email/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/Email/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Email/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Email/attributes/Source/attributes/SourceRank


IDENTIFIERS

Column

Type

Description

Reltio Attribute URI

LOV Name

IDENTIFIERS_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR

Identifier Type

configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Type, configuration/entityTypes/MCO/attributes/Identifiers/attributes/Type

HCPIdentifierType,HCOIdentifierType

ID

VARCHAR

Identifier ID

configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ID, configuration/entityTypes/MCO/attributes/Identifiers/attributes/ID


EXTL_DATE

DATE

External Date

configuration/entityTypes/HCP/attributes/Identifiers/attributes/EXTLDate


ACTIVATION_DATE

DATE

Activation Date

configuration/entityTypes/HCP/attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ActivationDate


REFER_BACK_ID_STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Identifiers/attributes/ReferBackIDStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ReferBackIDStatus


DEACTIVATION_DATE

DATE

Identifier Deactivation Date

configuration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationDate


STATE

VARCHAR

Identifier State

configuration/entityTypes/HCP/attributes/Identifiers/attributes/State

State

SOURCE_NAME

VARCHAR

Name of the Identifier source

configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceName, configuration/entityTypes/HCO/attributes/Identifiers/attributes/SourceName, configuration/entityTypes/MCO/attributes/Identifiers/attributes/SourceName


TRUST

VARCHAR

Trust

configuration/entityTypes/HCP/attributes/Identifiers/attributes/Trust, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Trust, configuration/entityTypes/MCO/attributes/Identifiers/attributes/Trust


SOURCE_START_DATE

DATE

Start date at source

configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceStartDate


SOURCE_UPDATE_DATE

DATE

Update date at source

configuration/entityTypes/HCP/attributes/Identifiers/attributes/SourceUpdateDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/SourceUpdateDate


STATUS

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Identifiers/attributes/Status, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Status

HCPIdentifierStatus,HCOIdentifierStatus

STATUS_DETAIL

VARCHAR

Identifier Deactivation Reason Code

configuration/entityTypes/HCP/attributes/Identifiers/attributes/StatusDetail, configuration/entityTypes/HCO/attributes/Identifiers/attributes/StatusDetail

HCPIdentifierStatusDetail,HCOIdentifierStatusDetail

DRUG_SCHEDULE

VARCHAR

Status

configuration/entityTypes/HCP/attributes/Identifiers/attributes/DrugSchedule


TAXONOMY

VARCHAR


configuration/entityTypes/HCP/attributes/Identifiers/attributes/Taxonomy


SEQUENCE_NUMBER

VARCHAR


configuration/entityTypes/HCP/attributes/Identifiers/attributes/SequenceNumber


MCRPE_CODE

VARCHAR


configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPECode


MCRPE_START_DATE

DATE


configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEStartDate


MCRPE_END_DATE

DATE


configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEEndDate


MCRPE_IS_OPTED

BOOLEAN


configuration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEIsOpted


EXPIRATION_DATE

DATE


configuration/entityTypes/HCP/attributes/Identifiers/attributes/ExpirationDate


ORDER

VARCHAR

Order

configuration/entityTypes/HCO/attributes/Identifiers/attributes/Order


REASON

VARCHAR

Reason

configuration/entityTypes/HCO/attributes/Identifiers/attributes/Reason


START_DATE

DATE

Identifier Start Date

configuration/entityTypes/HCO/attributes/Identifiers/attributes/StartDate


END_DATE

DATE

Identifier End Date

configuration/entityTypes/HCO/attributes/Identifiers/attributes/EndDate


DATA_QUALITY

Column

Type

Description

Reltio Attribute URI

LOV Name

DATA_QUALITY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



DQ_DESCRIPTION

VARCHAR

DQ Description

configuration/entityTypes/HCP/attributes/DataQuality/attributes/DQDescription, configuration/entityTypes/HCO/attributes/DataQuality/attributes/DQDescription, configuration/entityTypes/MCO/attributes/DataQuality/attributes/DQDescription

DQDescription

LICENSE

Column

Type

Description

Reltio Attribute URI

LOV Name

LICENSE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



CATEGORY

VARCHAR

Category License belongs to, e.g., International

configuration/entityTypes/HCP/attributes/License/attributes/Category


PROFESSION_CODE

VARCHAR

Profession Information

configuration/entityTypes/HCP/attributes/License/attributes/ProfessionCode

HCPProfession

NUMBER

VARCHAR

State License INTEGER. A unique license INTEGER is listed for each license the physician holds. There is no standard format syntax. Format examples: 18986, 4301079019, BX1464089. There is also no limit to the INTEGER of licenses a physician can hold in a state. Example: A physician can have an inactive resident license plus unlimited active licenses. Residents can have as many as four licenses since some states issue licenses every year

configuration/entityTypes/HCP/attributes/License/attributes/Number, configuration/entityTypes/HCO/attributes/License/attributes/Number


REG_AUTH_ID

VARCHAR

RegAuthID

configuration/entityTypes/HCP/attributes/License/attributes/RegAuthID


STATE_BOARD

VARCHAR

State Board

configuration/entityTypes/HCP/attributes/License/attributes/StateBoard


STATE_BOARD_NAME

VARCHAR

State Board Name

configuration/entityTypes/HCP/attributes/License/attributes/StateBoardName


STATE

VARCHAR

State License State. Two character field. USPS standard abbreviations.

configuration/entityTypes/HCP/attributes/License/attributes/State, configuration/entityTypes/HCO/attributes/License/attributes/State


TYPE

VARCHAR

State License Type. U = Unlimited there is no restriction on the physician to practice medicine; L = Limited implies restrictions of some sort. For example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license in state or private hospitals or other settings; T = Temporary issued to a physician temporarily practicing in an underserved area outside his/her state of licensure. Also granted between board meetings when new licenses are issued. Time span for a temporary license varies from state to state. Temporary licenses typically expire 6-9 months from the date they are issued; R = Resident License granted to a physician in graduate medical education (e.g., residency training).

configuration/entityTypes/HCP/attributes/License/attributes/Type

ST_LIC_TYPE

STATUS

VARCHAR

State License Status. A = Active. Physician is licensed to practice within the state; I = Inactive. If the physician has not reregistered a state license OR if the license has been suspended or revoked by the state board; X = unknown. If the state has not provided current information Note: Some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in another state.

configuration/entityTypes/HCP/attributes/License/attributes/Status

HCPLicenseStatus

STATUS_DETAIL

VARCHAR

Deactivation Reason Code

configuration/entityTypes/HCP/attributes/License/attributes/StatusDetail

HCPLicenseStatusDetail

TRUST

VARCHAR

Trust flag

configuration/entityTypes/HCP/attributes/License/attributes/Trust


DEACTIVATION_REASON_CODE

VARCHAR

Deactivation Reason Code

configuration/entityTypes/HCP/attributes/License/attributes/DeactivationReasonCode

HCPLicenseDeactivationReasonCode

EXPIRATION_DATE

DATE

License Expiration Date

configuration/entityTypes/HCP/attributes/License/attributes/ExpirationDate


ISSUE_DATE

DATE

State License Issue Date

configuration/entityTypes/HCP/attributes/License/attributes/IssueDate


STATE_LICENSE_PRIVILEGE

VARCHAR

State License Privilege

configuration/entityTypes/HCP/attributes/License/attributes/StateLicensePrivilege


STATE_LICENSE_PRIVILEGE_NAME

VARCHAR

State License Privilege Name

configuration/entityTypes/HCP/attributes/License/attributes/StateLicensePrivilegeName


STATE_LICENSE_STATUS_DATE

DATE

State License Status Date

configuration/entityTypes/HCP/attributes/License/attributes/StateLicenseStatusDate


RANK

VARCHAR

Rank of License

configuration/entityTypes/HCP/attributes/License/attributes/Rank


CERTIFICATION_CODE

VARCHAR

Certification Code

configuration/entityTypes/HCP/attributes/License/attributes/CertificationCode

HCPLicenseCertification

LICENSE_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

LICENSE_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/License/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/License/attributes/Source/attributes/SourceRank


LICENSE_REGULATORY

License Regulatory

Column

Type

Description

Reltio Attribute URI

LOV Name

LICENSE_URI

VARCHAR

Generated Key



REGULATORY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



REQ_SAMPL_NON_CTRL

VARCHAR

Req Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ReqSamplNonCtrl


REQ_SAMPL_CTRL

VARCHAR

Req Sampl Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ReqSamplCtrl


RECV_SAMPL_NON_CTRL

VARCHAR

Recv Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/RecvSamplNonCtrl


RECV_SAMPL_CTRL

VARCHAR

Recv Sampl Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/RecvSamplCtrl


DISTR_SAMPL_NON_CTRL

VARCHAR

Distr Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DistrSamplNonCtrl


DISTR_SAMPL_CTRL

VARCHAR

Distr Sampl Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DistrSamplCtrl


SAMP_DRUG_SCHED_I_FLAG

VARCHAR

Samp Drug Sched I Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIFlag


SAMP_DRUG_SCHED_II_FLAG

VARCHAR

Samp Drug Sched II Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIFlag


SAMP_DRUG_SCHED_III_FLAG

VARCHAR

Samp Drug Sched III Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIIFlag


SAMP_DRUG_SCHED_IV_FLAG

VARCHAR

Samp Drug Sched IV Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIVFlag


SAMP_DRUG_SCHED_V_FLAG

VARCHAR

Samp Drug Sched V Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedVFlag


SAMP_DRUG_SCHED_VI_FLAG

VARCHAR

Samp Drug Sched VI Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedVIFlag


PRESCR_NON_CTRL_FLAG

VARCHAR

Prescr Non Ctrl Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrNonCtrlFlag


PRESCR_APP_REQ_NON_CTRL_FLAG

VARCHAR

Prescr App Req Non Ctrl Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrAppReqNonCtrlFlag


PRESCR_CTRL_FLAG

VARCHAR

Prescr Ctrl Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrCtrlFlag


PRESCR_APP_REQ_CTRL_FLAG

VARCHAR

Prescr App Req Ctrl Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrAppReqCtrlFlag


PRESCR_DRUG_SCHED_I_FLAG

VARCHAR

Prescr Drug Sched I Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIFlag


PRESCR_DRUG_SCHED_II_FLAG

VARCHAR

Prescr Drug Sched II Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIFlag


PRESCR_DRUG_SCHED_III_FLAG

VARCHAR

Prescr Drug Sched III Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlag


PRESCR_DRUG_SCHED_IV_FLAG

VARCHAR

Prescr Drug Sched IV Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIVFlag


PRESCR_DRUG_SCHED_V_FLAG

VARCHAR

Prescr Drug Sched V Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVFlag


PRESCR_DRUG_SCHED_VI_FLAG

VARCHAR

Prescr Drug Sched VI Flag

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVIFlag


SUPERVISORY_REL_CD_NON_CTRL

VARCHAR

Supervisory Rel Cd Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrl


SUPERVISORY_REL_CD_CTRL

VARCHAR

Supervisory Rel Cd Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdCtrl


COLLABORATIVE_NONCTRL

VARCHAR

Collaborative Non ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/CollaborativeNonctrl


COLLABORATIVE_CTRL

VARCHAR

Collaborative ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/CollaborativeCtrl


INCLUSIONARY

VARCHAR

Inclusionary

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/Inclusionary


EXCLUSIONARY

VARCHAR

Exclusionary

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/Exclusionary


DELEGATION_NON_CTRL

VARCHAR

Delegation Non Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DelegationNonCtrl


DELEGATION_CTRL

VARCHAR

Delegation Ctrl

configuration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DelegationCtrl


CSR

Column

Type

Description

Reltio Attribute URI

LOV Name

CSR_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



PROFESSION_CODE

VARCHAR

Profession Information

configuration/entityTypes/HCP/attributes/CSR/attributes/ProfessionCode

HCPProfession

AUTHORIZATION_NUMBER

VARCHAR

Autorization number of CSR

configuration/entityTypes/HCP/attributes/CSR/attributes/AuthorizationNumber


REG_AUTH_ID

VARCHAR

RegAuthID

configuration/entityTypes/HCP/attributes/CSR/attributes/RegAuthID


STATE_BOARD

VARCHAR

State Board

configuration/entityTypes/HCP/attributes/CSR/attributes/StateBoard


STATE_BOARD_NAME

VARCHAR

State Board Name

configuration/entityTypes/HCP/attributes/CSR/attributes/StateBoardName


STATE

VARCHAR

State of CSR.

configuration/entityTypes/HCP/attributes/CSR/attributes/State


CSR_LICENSE_TYPE

VARCHAR

CSR License Type

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseType


CSR_LICENSE_TYPE_NAME

VARCHAR

CSR License Type Name

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypeName


CSR_LICENSE_PRIVILEGE

VARCHAR

CSR License Privilege

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicensePrivilege


CSR_LICENSE_PRIVILEGE_NAME

VARCHAR

CSR License Privilege Name

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicensePrivilegeName


CSR_LICENSE_EFFECTIVE_DATE

DATE

CSR License Effective Date

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseEffectiveDate


CSR_LICENSE_EXPIRATION_DATE

DATE

CSR License Expiration Date

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseExpirationDate


CSR_LICENSE_STATUS

VARCHAR

CSR License Status

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseStatus

HCPLicenseStatus

STATUS_DETAIL

VARCHAR

CSRLicenseDeactivationReason

configuration/entityTypes/HCP/attributes/CSR/attributes/StatusDetail

HCPLicenseStatusDetail

CSR_LICENSE_DEACTIVATION_REASON

VARCHAR

CSR License Deactivation Reason

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseDeactivationReason

HCPCSRLicenseDeactivationReason

CSR_LICENSE_CERTIFICATION

VARCHAR

CSR License Certification

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseCertification

HCPLicenseCertification

CSR_LICENSE_TYPE_PRIVILEGE_RANK

VARCHAR

CSR License Type Privilege Rank

configuration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypePrivilegeRank


CSR_REGULATORY

CSR Regulatory

Column

Type

Description

Reltio Attribute URI

LOV Name

CSR_URI

VARCHAR

Generated Key



REGULATORY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



REQ_SAMPL_NON_CTRL

VARCHAR

Req Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ReqSamplNonCtrl


REQ_SAMPL_CTRL

VARCHAR

Req Sampl Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ReqSamplCtrl


RECV_SAMPL_NON_CTRL

VARCHAR

Recv Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/RecvSamplNonCtrl


RECV_SAMPL_CTRL

VARCHAR

Recv Sampl Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/RecvSamplCtrl


DISTR_SAMPL_NON_CTRL

VARCHAR

Distr Sampl Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DistrSamplNonCtrl


DISTR_SAMPL_CTRL

VARCHAR

Distr Sampl Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DistrSamplCtrl


SAMP_DRUG_SCHED_I_FLAG

VARCHAR

Samp Drug Sched I Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIFlag


SAMP_DRUG_SCHED_II_FLAG

VARCHAR

Samp Drug Sched II Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIIFlag


SAMP_DRUG_SCHED_III_FLAG

VARCHAR

Samp Drug Sched III Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIIIFlag


SAMP_DRUG_SCHED_IV_FLAG

VARCHAR

Samp Drug Sched IV Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIVFlag


SAMP_DRUG_SCHED_V_FLAG

VARCHAR

Samp Drug Sched V Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedVFlag


SAMP_DRUG_SCHED_VI_FLAG

VARCHAR

Samp Drug Sched VI Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedVIFlag


PRESCR_NON_CTRL_FLAG

VARCHAR

Prescr Non Ctrl Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrNonCtrlFlag


PRESCR_APP_REQ_NON_CTRL_FLAG

VARCHAR

Prescr App Req Non Ctrl Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrAppReqNonCtrlFlag


PRESCR_CTRL_FLAG

VARCHAR

Prescr Ctrl Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrCtrlFlag


PRESCR_APP_REQ_CTRL_FLAG

VARCHAR

Prescr App Req Ctrl Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrAppReqCtrlFlag


PRESCR_DRUG_SCHED_I_FLAG

VARCHAR

Prescr Drug Sched I Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIFlag


PRESCR_DRUG_SCHED_II_FLAG

VARCHAR

Prescr Drug Sched II Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIIFlag


PRESCR_DRUG_SCHED_III_FLAG

VARCHAR

Prescr Drug Sched III Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlag


PRESCR_DRUG_SCHED_IV_FLAG

VARCHAR

Prescr Drug Sched IV Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIVFlag


PRESCR_DRUG_SCHED_V_FLAG

VARCHAR

Prescr Drug Sched V Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedVFlag


PRESCR_DRUG_SCHED_VI_FLAG

VARCHAR

Prescr Drug Sched VI Flag

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedVIFlag


SUPERVISORY_REL_CD_NON_CTRL

VARCHAR

Supervisory Rel Cd Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrl


SUPERVISORY_REL_CD_CTRL

VARCHAR

Supervisory Rel Cd Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SupervisoryRelCdCtrl


COLLABORATIVE_NONCTRL

VARCHAR

Collaborative Non ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/CollaborativeNonctrl


COLLABORATIVE_CTRL

VARCHAR

Collaborative ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/CollaborativeCtrl


INCLUSIONARY

VARCHAR

Inclusionary

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/Inclusionary


EXCLUSIONARY

VARCHAR

Exclusionary

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/Exclusionary


DELEGATION_NON_CTRL

VARCHAR

Delegation Non Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DelegationNonCtrl


DELEGATION_CTRL

VARCHAR

Delegation Ctrl

configuration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DelegationCtrl


PRIVACY_PREFERENCES

Column

Type

Description

Reltio Attribute URI

LOV Name

PRIVACY_PREFERENCES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



AMA_NO_CONTACT

BOOLEAN

Can be Contacted through AMA or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AMANoContact


FTC_NO_CONTACT

BOOLEAN

Can be Contacted through FTC or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FTCNoContact


PDRP

BOOLEAN

Physician Data Restriction Program enrolled or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRP


PDRP_DATE

DATE

Physician Data Restriction Program enrolment date

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPDate


OPT_OUT_START_DATE

DATE

Opt Out Start Date

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutStartDate


ALLOWED_TO_CONTACT

BOOLEAN

Indicator whether allowed to contact

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AllowedToContact


PHONE_OPT_OUT

BOOLEAN

Opted Out for being contacted on Phone or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PhoneOptOut


EMAIL_OPT_OUT

BOOLEAN

Opted Out for being contacted through Email or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/EmailOptOut


FAX_OPT_OUT

BOOLEAN

Opted Out for being contacted through Fax or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FaxOptOut


MAIL_OPT_OUT

BOOLEAN

Opted Out for being contacted through Mail or not

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/MailOptOut


NO_CONTACT_REASON

VARCHAR

Reason for no contact

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/NoContactReason


NO_CONTACT_EFFECTIVE_DATE

DATE

Effective date of no contact

configuration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/NoContactEffectiveDate


CERTIFICATES

Column

Type

Description

Reltio Attribute URI

LOV Name

CERTIFICATES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



CERTIFICATE_ID

VARCHAR

Certificate Id of Certificate received by HCP

configuration/entityTypes/HCP/attributes/Certificates/attributes/CertificateId


SPEAKER

Column

Type

Description

Reltio Attribute URI

LOV Name

SPEAKER_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



LEVEL

VARCHAR

Level

configuration/entityTypes/HCP/attributes/Speaker/attributes/Level

HCPTierLevel

TIER_STATUS

VARCHAR

Tier Status

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierStatus

HCPTierStatus

TIER_APPROVAL_DATE

DATE

Tier Approval Date

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierApprovalDate


TIER_UPDATED_DATE

DATE

Tier Updated Date

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierUpdatedDate


TIER_APPROVER

VARCHAR

Tier Approver

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierApprover


EFFECTIVE_DATE

DATE

Speaker Effective Date

configuration/entityTypes/HCP/attributes/Speaker/attributes/EffectiveDate


DEACTIVATE_REASON

VARCHAR

Speaker Deactivate Reason

configuration/entityTypes/HCP/attributes/Speaker/attributes/DeactivateReason


IS_SPEAKER

BOOLEAN


configuration/entityTypes/HCP/attributes/Speaker/attributes/IsSpeaker


SPEAKER_TIER_RATIONALE

Tier Rationale

Column

Type

Description

Reltio Attribute URI

LOV Name

SPEAKER_URI

VARCHAR

Generated Key



TIER_RATIONALE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TIER_RATIONALE

VARCHAR

Tier Rationale

configuration/entityTypes/HCP/attributes/Speaker/attributes/TierRationale/attributes/TierRationale

HCPTierRational

RAWDEA

Column

Type

Description

Reltio Attribute URI

LOV Name

RAWDEA_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



DEA_NUMBER

VARCHAR

RAW DEA Number

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/DEANumber


DEA_BUSINESS_ACTIVITY

VARCHAR

DEA Business Activity

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/DEABusinessActivity


EFFECTIVE_DATE

DATE

RAW DEA Effective Date

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/EffectiveDate


EXPIRATION_DATE

DATE

RAW DEA Expiration Date

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/ExpirationDate


NAME

VARCHAR

RAW DEA Name

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Name


ADDITIONAL_COMPANY_INFO

VARCHAR

Additional Company Info

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/AdditionalCompanyInfo


ADDRESS1

VARCHAR

RAW DEA Address 1

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Address1


ADDRESS2

VARCHAR

RAW DEA Address 2

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Address2


CITY

VARCHAR

RAW DEA City

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/City


STATE

VARCHAR

RAW DEA State

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/State


ZIP

VARCHAR

RAW DEA Zip

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Zip


BUSINESS_ACTIVITY_SUB_CD

VARCHAR

Business Activity Sub Cd

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/BusinessActivitySubCd


PAYMT_IND

VARCHAR

Paymt Indicator

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/PaymtInd

HCPRAWDEAPaymtInd

RAW_DEA_SCHD_CLAS_CD

VARCHAR

Raw Dea Schd Clas Cd

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/RawDeaSchdClasCd


STATUS

VARCHAR

Raw Dea Status

configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Status


PHONE

Column

Type

Description

Reltio Attribute URI

LOV Name

PHONE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/Type, configuration/entityTypes/HCO/attributes/Phone/attributes/Type, configuration/entityTypes/MCO/attributes/Phone/attributes/Type

PhoneType

NUMBER

VARCHAR

Phone number

configuration/entityTypes/HCP/attributes/Phone/attributes/Number, configuration/entityTypes/HCO/attributes/Phone/attributes/Number, configuration/entityTypes/MCO/attributes/Phone/attributes/Number


FORMATTED_NUMBER

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/MCO/attributes/Phone/attributes/FormattedNumber


EXTENSION

VARCHAR

Extension, if any

configuration/entityTypes/HCP/attributes/Phone/attributes/Extension, configuration/entityTypes/HCO/attributes/Phone/attributes/Extension, configuration/entityTypes/MCO/attributes/Phone/attributes/Extension


RANK

VARCHAR

Rank used to assign priority to a Phone number

configuration/entityTypes/HCP/attributes/Phone/attributes/Rank, configuration/entityTypes/HCO/attributes/Phone/attributes/Rank, configuration/entityTypes/MCO/attributes/Phone/attributes/Rank


PHONE_USAGE_TAG

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes/HCO/attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes/MCO/attributes/Phone/attributes/PhoneUsageTag

PhoneUsageTag

USAGE_TYPE

VARCHAR

Usage Type of a Phone number

configuration/entityTypes/HCP/attributes/Phone/attributes/UsageType, configuration/entityTypes/HCO/attributes/Phone/attributes/UsageType, configuration/entityTypes/MCO/attributes/Phone/attributes/UsageType


AREA_CODE

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/AreaCode, configuration/entityTypes/HCO/attributes/Phone/attributes/AreaCode, configuration/entityTypes/MCO/attributes/Phone/attributes/AreaCode


LOCAL_NUMBER

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/MCO/attributes/Phone/attributes/LocalNumber


VALIDATION_STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/MCO/attributes/Phone/attributes/ValidationStatus


LINE_TYPE

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/LineType, configuration/entityTypes/HCO/attributes/Phone/attributes/LineType, configuration/entityTypes/MCO/attributes/Phone/attributes/LineType


FORMAT_MASK

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/FormatMask, configuration/entityTypes/HCO/attributes/Phone/attributes/FormatMask, configuration/entityTypes/MCO/attributes/Phone/attributes/FormatMask


DIGIT_COUNT

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/DigitCount, configuration/entityTypes/HCO/attributes/Phone/attributes/DigitCount, configuration/entityTypes/MCO/attributes/Phone/attributes/DigitCount


GEO_AREA

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/GeoArea, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoArea, configuration/entityTypes/MCO/attributes/Phone/attributes/GeoArea


GEO_COUNTRY

VARCHAR


configuration/entityTypes/HCP/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/MCO/attributes/Phone/attributes/GeoCountry


COUNTRY_CODE

VARCHAR

Two digit code for a Country

configuration/entityTypes/HCO/attributes/Phone/attributes/CountryCode, configuration/entityTypes/MCO/attributes/Phone/attributes/CountryCode


PHONE_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

PHONE_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceRank


SOURCE_ADDRESS_ID

VARCHAR

SourceAddressID

configuration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceAddressID


HCP_ADDRESS_ZIP

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESS_URI

VARCHAR

Generated Key



ZIP_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



POSTAL_CODE

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/PostalCode


ZIP5

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/Zip5


ZIP4

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/Zip4


DEA

Column

Type

Description

Reltio Attribute URI

LOV Name

DEA_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



NUMBER

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/Number, configuration/entityTypes/HCO/attributes/DEA/attributes/Number


STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/DEA/attributes/Status


STATUS

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/DEA/attributes/Status

App-LSCustomer360DEAStatus

EXPIRATION_DATE

DATE


configuration/entityTypes/HCP/attributes/DEA/attributes/ExpirationDate, configuration/entityTypes/HCO/attributes/DEA/attributes/ExpirationDate


DRUG_SCHEDULE

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/DEA/attributes/DrugSchedule

App-LSCustomer360DEADrugSchedule

DRUG_SCHEDULE_DESCRIPTION

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/DrugScheduleDescription, configuration/entityTypes/HCO/attributes/DEA/attributes/DrugScheduleDescription


BUSINESS_ACTIVITY

VARCHAR


configuration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivity, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivity

App-LSCustomer360DEABusinessActivity

BUSINESS_ACTIVITY_PLUS_SUB_CODE

VARCHAR

Business Activity SubCode

configuration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivityPlusSubCode, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityPlusSubCode

App-LSCustomer360DEABusinessActivitySubcode

BUSINESS_ACTIVITY_DESCRIPTION

VARCHAR

String

configuration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivityDescription, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityDescription

App-LSCustomer360DEABusinessActivityDescription

PAYMENT_INDICATOR

VARCHAR

String

configuration/entityTypes/HCP/attributes/DEA/attributes/PaymentIndicator, configuration/entityTypes/HCO/attributes/DEA/attributes/PaymentIndicator

App-LSCustomer360DEAPaymentIndicator

TAXONOMY

Column

Type

Description

Reltio Attribute URI

LOV Name

TAXONOMY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TAXONOMY

VARCHAR

Taxonomy related to HCP, e.g., Obstetrics & Gynecology

configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Taxonomy

App-LSCustomer360Taxonomy,TAXONOMY_CD

TYPE

VARCHAR

Type of Taxonomy, e.g., Primary

configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Type, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Type

App-LSCustomer360TaxonomyType,TAXONOMY_TYPE

STATE_CODE

VARCHAR


configuration/entityTypes/HCP/attributes/Taxonomy/attributes/StateCode


GROUP

VARCHAR

Group Taxonomy belongs to

configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Group


PROVIDER_TYPE

VARCHAR

Taxonomy Provider Type

configuration/entityTypes/HCP/attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ProviderType


CLASSIFICATION

VARCHAR

Classification of Taxonomy

configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Classification, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Classification


SPECIALIZATION

VARCHAR

Specialization of Taxonomy

configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Specialization, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Specialization


PRIORITY

VARCHAR

Taxonomy Priority

configuration/entityTypes/HCP/attributes/Taxonomy/attributes/Priority, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/Priority

TAXONOMY_PRIORITY

SANCTION

Column

Type

Description

Reltio Attribute URI

LOV Name

SANCTION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SANCTION_ID

VARCHAR

Court sanction Id for any case.

configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionId


ACTION_CODE

VARCHAR

Court sanction code for a case

configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionCode


ACTION_DESCRIPTION

VARCHAR

Court sanction Action Description

configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionDescription


BOARD_CODE

VARCHAR

Court case board id

configuration/entityTypes/HCP/attributes/Sanction/attributes/BoardCode


BOARD_DESC

VARCHAR

court case board description

configuration/entityTypes/HCP/attributes/Sanction/attributes/BoardDesc


ACTION_DATE

DATE

Court sanction Action Date

configuration/entityTypes/HCP/attributes/Sanction/attributes/ActionDate


SANCTION_PERIOD_START_DATE

DATE

Sanction Period Start Date

configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodStartDate


SANCTION_PERIOD_END_DATE

DATE

Sanction Period End Date

configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodEndDate


MONTH_DURATION

VARCHAR

Sanction Duration in Months

configuration/entityTypes/HCP/attributes/Sanction/attributes/MonthDuration


FINE_AMOUNT

VARCHAR

Fine Amount for Sanction

configuration/entityTypes/HCP/attributes/Sanction/attributes/FineAmount


OFFENSE_CODE

VARCHAR

Offense Code for Sanction

configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseCode


OFFENSE_DESCRIPTION

VARCHAR

Offense Description for Sanction

configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDescription


OFFENSE_DATE

DATE

Offense Date for Sanction

configuration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDate


GSA_SANCTION

Column

Type

Description

Reltio Attribute URI

LOV Name

GSA_SANCTION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SANCTION_ID

VARCHAR

Sanction Id of HCP as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/SanctionId


FIRST_NAME

VARCHAR

First Name of HCP as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/FirstName


MIDDLE_NAME

VARCHAR

Middle Name of HCP as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/MiddleName


LAST_NAME

VARCHAR

Last Name of HCP as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/LastName


SUFFIX_NAME

VARCHAR

Suffix Name of HCP as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/SuffixName


CITY

VARCHAR

City of HCP as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/City


STATE

VARCHAR

State of HCP as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/State


ZIP

VARCHAR

Zip of HCP as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/Zip


ACTION_DATE

VARCHAR

Action Date for GSA Saction

configuration/entityTypes/HCP/attributes/GSASanction/attributes/ActionDate


TERM_DATE

VARCHAR

Term Date for GSA Saction

configuration/entityTypes/HCP/attributes/GSASanction/attributes/TermDate


AGENCY

VARCHAR

Agency that imposed Sanction

configuration/entityTypes/HCP/attributes/GSASanction/attributes/Agency


CONFIDENCE

VARCHAR

Confidence as per GSA Saction list

configuration/entityTypes/HCP/attributes/GSASanction/attributes/Confidence


MULTI_CHANNEL_COMMUNICATION_CONSENT

Column

Type

Description

Reltio Attribute URI

LOV Name

MULTI_CHANNEL_COMMUNICATION_CONSENT_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



CHANNEL_TYPE

VARCHAR

Channel type for the consent, e.g. email, SMS, etc.

configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelType


CHANNEL_VALUE

VARCHAR

Value of the channel for consent - john.doe@email.com

configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelValue


CHANNEL_CONSENT

VARCHAR

The consent for the corresponding channel and the id - yes or no

configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelConsent

ChannelConsent

START_DATE

DATE

Start date of the consent

configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/StartDate


EXPIRATION_DATE

DATE

Expiration date of the consent

configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ExpirationDate


COMMUNICATION_TYPE

VARCHAR

Different communication type that the individual prefers, for e.g. - New Product Launches, Sales/Discounts, Brand-level News

configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/CommunicationType


COMMUNICATION_FREQUENCY

VARCHAR

How frequently can the individual be communicated to. Example - Daily/monthly/weekly

configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/CommunicationFrequency


CHANNEL_PREFERENCE_FLAG

BOOLEAN

When checked denotes the preferred channel of communication

configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelPreferenceFlag


EMPLOYMENT

Column

Type

Description

Reltio Attribute URI

LOV Name

EMPLOYMENT_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



NAME

VARCHAR

Name

configuration/entityTypes/Organization/attributes/Name


TITLE

VARCHAR


configuration/relationTypes/Employment/attributes/Title


SUMMARY

VARCHAR


configuration/relationTypes/Employment/attributes/Summary


IS_CURRENT

BOOLEAN


configuration/relationTypes/Employment/attributes/IsCurrent


HCO

Health care organization

Column

Type

Description

Reltio Attribute URI

LOV Name

ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE_CODE

VARCHAR

Type Code

configuration/entityTypes/HCO/attributes/TypeCode

HCOType

COMPANY_CUST_ID

VARCHAR

COMPANY Customer ID

configuration/entityTypes/HCO/attributes/COMPANYCustID


SUB_TYPE_CODE

VARCHAR

SubType Code

configuration/entityTypes/HCO/attributes/SubTypeCode

HCOSubType

SUB_CATEGORY

VARCHAR

SubCategory

configuration/entityTypes/HCO/attributes/SubCategory

HCOSubCategory

STRUCTURE_TYPE_CODE

VARCHAR

SubType Code

configuration/entityTypes/HCO/attributes/StructureTypeCode

HCOStructureTypeCode

NAME

VARCHAR

Name

configuration/entityTypes/HCO/attributes/Name


DOING_BUSINESS_AS_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/DoingBusinessAsName


FLEX_RESTRICTED_PARTY_IND

VARCHAR

party indicator for FLEX

configuration/entityTypes/HCO/attributes/FlexRestrictedPartyInd


TRADE_PARTNER

VARCHAR

String

configuration/entityTypes/HCO/attributes/TradePartner


SHIP_TO_SR_PARENT_NAME

VARCHAR

String

configuration/entityTypes/HCO/attributes/ShipToSrParentName


SHIP_TO_JR_PARENT_NAME

VARCHAR

String

configuration/entityTypes/HCO/attributes/ShipToJrParentName


SHIP_FROM_JR_PARENT_NAME

VARCHAR

String

configuration/entityTypes/HCO/attributes/ShipFromJrParentName


TEACHING_HOSPITAL

VARCHAR

Teaching Hospital

configuration/entityTypes/HCO/attributes/TeachingHospital


OWNERSHIP_STATUS

VARCHAR


configuration/entityTypes/HCO/attributes/OwnershipStatus

HCOOwnershipStatus

PROFIT_STATUS

VARCHAR

Profit Status

configuration/entityTypes/HCO/attributes/ProfitStatus

HCOProfitStatus

CMI

VARCHAR

CMI

configuration/entityTypes/HCO/attributes/CMI


COMPANY_HCOS_FLAG

VARCHAR

COMPANY HCOS Flag

configuration/entityTypes/HCO/attributes/COMPANYHCOSFlag


SOURCE_MATCH_CATEGORY

VARCHAR

Source Match Category

configuration/entityTypes/HCO/attributes/SourceMatchCategory


COMM_HOSP

VARCHAR

CommHosp

configuration/entityTypes/HCO/attributes/CommHosp


GEN_FIRST

VARCHAR

String

configuration/entityTypes/HCO/attributes/GenFirst

HCOGenFirst

SREP_ACCESS

VARCHAR

String

configuration/entityTypes/HCO/attributes/SrepAccess

HCOSrepAccess

OUT_PATIENTS_NUMBERS

VARCHAR


configuration/entityTypes/HCO/attributes/OutPatientsNumbers


UNIT_OPER_ROOM_NUMBER

VARCHAR


configuration/entityTypes/HCO/attributes/UnitOperRoomNumber


PRIMARY_GPO

VARCHAR

Primary GPO

configuration/entityTypes/HCO/attributes/PrimaryGPO


TOTAL_PRESCRIBERS

VARCHAR

Total Prescribers

configuration/entityTypes/HCO/attributes/TotalPrescribers


NUM_IN_PATIENTS

VARCHAR

Total InPatients

configuration/entityTypes/HCO/attributes/NumInPatients


TOTAL_LIVES

VARCHAR

Total Lives

configuration/entityTypes/HCO/attributes/TotalLives


TOTAL_PHARMACISTS

VARCHAR

Total Pharmacists

configuration/entityTypes/HCO/attributes/TotalPharmacists


TOTAL_M_DS

VARCHAR

Total MDs

configuration/entityTypes/HCO/attributes/TotalMDs


TOTAL_REVENUE

VARCHAR

Total Revenue

configuration/entityTypes/HCO/attributes/TotalRevenue


STATUS

VARCHAR


configuration/entityTypes/HCO/attributes/Status

HCOStatus

STATUS_DETAIL

VARCHAR

Deactivation Reason

configuration/entityTypes/HCO/attributes/StatusDetail

HCOStatusDetail

ACCOUNT_BLOCK_CODE

VARCHAR

Account Block Code

configuration/entityTypes/HCO/attributes/AccountBlockCode


TOTAL_LICENSE_BEDS

VARCHAR

Total License Beds

configuration/entityTypes/HCO/attributes/TotalLicenseBeds


TOTAL_CENSUS_BEDS

VARCHAR


configuration/entityTypes/HCO/attributes/TotalCensusBeds


TOTAL_STAFFED_BEDS

VARCHAR


configuration/entityTypes/HCO/attributes/TotalStaffedBeds


TOTAL_SURGERIES

VARCHAR

Total Surgeries

configuration/entityTypes/HCO/attributes/TotalSurgeries


TOTAL_PROCEDURES

VARCHAR

Total Procedures

configuration/entityTypes/HCO/attributes/TotalProcedures


NUM_EMPLOYEES

VARCHAR

Number of Procedures

configuration/entityTypes/HCO/attributes/NumEmployees


RESIDENT_COUNT

VARCHAR

Resident Count

configuration/entityTypes/HCO/attributes/ResidentCount


FORMULARY

VARCHAR

Formulary

configuration/entityTypes/HCO/attributes/Formulary

HCOFormulary

E_MEDICAL_RECORD

VARCHAR

e-Medical Record

configuration/entityTypes/HCO/attributes/EMedicalRecord


E_PRESCRIBE

VARCHAR

e-Prescribe

configuration/entityTypes/HCO/attributes/EPrescribe

HCOEPrescribe

PAY_PERFORM

VARCHAR

Pay Perform

configuration/entityTypes/HCO/attributes/PayPerform

HCOPayPerform

DEACTIVATION_REASON

VARCHAR

Deactivation Reason

configuration/entityTypes/HCO/attributes/DeactivationReason

HCODeactivationReason

INTERNATIONAL_LOCATION_NUMBER

VARCHAR

International location number (part 1)

configuration/entityTypes/HCO/attributes/InternationalLocationNumber


DCR_STATUS

VARCHAR

Status of HCO profile

configuration/entityTypes/HCO/attributes/DCRStatus

DCRStatus

COUNTRY_HCO

VARCHAR

Country

configuration/entityTypes/HCO/attributes/Country


ORIGINAL_SOURCE_NAME

VARCHAR

Original Source

configuration/entityTypes/HCO/attributes/OriginalSourceName


SOURCE_UPDATE_DATE

DATE


configuration/entityTypes/HCO/attributes/SourceUpdateDate


CLASSOF_TRADE_N

Column

Type

Description

Reltio Attribute URI

LOV Name

CLASSOF_TRADE_N_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_COTID

VARCHAR

Source COT ID

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SourceCOTID

COT

PRIORITY

VARCHAR

Priority

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Priority


SPECIALTY

VARCHAR

Specialty of Class of Trade

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty

COTSpecialty

CLASSIFICATION

VARCHAR

Classification of Class of Trade

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Classification

COTClassification

FACILITY_TYPE

VARCHAR

Facility Type of Class of Trade

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType

COTFacilityType

COT_ORDER

VARCHAR

COT Order

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/COTOrder


START_DATE

DATE

Start Date

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/StartDate


SOURCE

VARCHAR

Source

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Source


PRIMARY

VARCHAR

Primary

configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Primary


HCO_ADDRESS_ZIP

Column

Type

Description

Reltio Attribute URI

LOV Name

ADDRESS_URI

VARCHAR

Generated Key



ZIP_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



POSTAL_CODE

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/PostalCode


ZIP5

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/Zip5


ZIP4

VARCHAR


configuration/entityTypes/Location/attributes/Zip/attributes/Zip4


340B

Column

Type

Description

Reltio Attribute URI

LOV Name

340B_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



340BID

VARCHAR

340B ID

configuration/entityTypes/HCO/attributes/340b/attributes/340BID


ENTITY_SUB_DIVISION_NAME

VARCHAR

Entity Sub-Division Name

configuration/entityTypes/HCO/attributes/340b/attributes/EntitySubDivisionName


PROGRAM_CODE

VARCHAR

Program Code

configuration/entityTypes/HCO/attributes/340b/attributes/ProgramCode

340BProgramCode

PARTICIPATING

BOOLEAN

Participating

configuration/entityTypes/HCO/attributes/340b/attributes/Participating


AUTHORIZING_OFFICIAL_NAME

VARCHAR

Authorizing Official Name

configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialName


AUTHORIZING_OFFICIAL_TITLE

VARCHAR

Authorizing Official Title

configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTitle


AUTHORIZING_OFFICIAL_TEL

VARCHAR

Authorizing Official Tel

configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTel


AUTHORIZING_OFFICIAL_TEL_EXT

VARCHAR

Authorizing Official Tel Ext

configuration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTelExt


CONTACT_NAME

VARCHAR

Contact Name

configuration/entityTypes/HCO/attributes/340b/attributes/ContactName


CONTACT_TITLE

VARCHAR

Contact Title

configuration/entityTypes/HCO/attributes/340b/attributes/ContactTitle


CONTACT_TELEPHONE

VARCHAR

Contact Telephone

configuration/entityTypes/HCO/attributes/340b/attributes/ContactTelephone


CONTACT_TELEPHONE_EXT

VARCHAR

Contact Telephone Ext

configuration/entityTypes/HCO/attributes/340b/attributes/ContactTelephoneExt


SIGNED_BY_NAME

VARCHAR

Signed By Name

configuration/entityTypes/HCO/attributes/340b/attributes/SignedByName


SIGNED_BY_TITLE

VARCHAR

Signed By Title

configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTitle


SIGNED_BY_TELEPHONE

VARCHAR

Signed By Telephone

configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTelephone


SIGNED_BY_TELEPHONE_EXT

VARCHAR

Signed By Telephone Ext

configuration/entityTypes/HCO/attributes/340b/attributes/SignedByTelephoneExt


SIGNED_BY_DATE

DATE

Signed By Date

configuration/entityTypes/HCO/attributes/340b/attributes/SignedByDate


CERTIFIED_DECERTIFIED_DATE

DATE

Certified/Decertified Date

configuration/entityTypes/HCO/attributes/340b/attributes/CertifiedDecertifiedDate


RURAL

VARCHAR

Rural

configuration/entityTypes/HCO/attributes/340b/attributes/Rural


ENTRY_COMMENTS

VARCHAR

Entry Comments

configuration/entityTypes/HCO/attributes/340b/attributes/EntryComments


NATURE_OF_SUPPORT

VARCHAR

Nature Of Support

configuration/entityTypes/HCO/attributes/340b/attributes/NatureOfSupport


EDIT_DATE

VARCHAR

Edit Date

configuration/entityTypes/HCO/attributes/340b/attributes/EditDate


340B_PARTICIPATION_DATES

Column

Type

Description

Reltio Attribute URI

LOV Name

340B_URI

VARCHAR

Generated Key



PARTICIPATION_DATES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



PARTICIPATING_START_DATE

DATE

Participating Start Date

configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/ParticipatingStartDate


TERMINATION_DATE

DATE

Termination Date

configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/TerminationDate


TERMINATION_CODE

VARCHAR

Termination Code

configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/TerminationCode

340BTerminationCode

OTHER_NAMES

Column

Type

Description

Reltio Attribute URI

LOV Name

OTHER_NAMES_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR

Type

configuration/entityTypes/HCO/attributes/OtherNames/attributes/Type


NAME

VARCHAR

Name

configuration/entityTypes/HCO/attributes/OtherNames/attributes/Name


ACO

Column

Type

Description

Reltio Attribute URI

LOV Name

ACO_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR

Type

configuration/entityTypes/HCO/attributes/ACO/attributes/Type

HCOACOType

ACO_TYPE_CATEGORY

VARCHAR

Type Category

configuration/entityTypes/HCO/attributes/ACO/attributes/ACOTypeCategory

HCOACOTypeCategory

ACO_TYPE_GROUP

VARCHAR

Type Group of ACO

configuration/entityTypes/HCO/attributes/ACO/attributes/ACOTypeGroup

HCOACOTypeGroup

ACO_ACODETAIL

Column

Type

Description

Reltio Attribute URI

LOV Name

ACO_URI

VARCHAR

Generated Key



ACO_DETAIL_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



ACO_DETAIL_CODE

VARCHAR

Detail Code for ACO

configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailCode

HCOACODetail

ACO_DETAIL_VALUE

VARCHAR

Detail Value for ACO

configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailValue


ACO_DETAIL_GROUP_CODE

VARCHAR

Detail Value for ACO

configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailGroupCode

HCOACODetailGroup

WEBSITE

Column

Type

Description

Reltio Attribute URI

LOV Name

WEBSITE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



WEBSITE_URL

VARCHAR

Url of the website

configuration/entityTypes/HCO/attributes/Website/attributes/WebsiteURL


WEBSITE_SOURCE

Source

Column

Type

Description

Reltio Attribute URI

LOV Name

WEBSITE_URI

VARCHAR

Generated Key



SOURCE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SOURCE_NAME

VARCHAR

SourceName

configuration/entityTypes/HCO/attributes/Website/attributes/Source/attributes/SourceName


SOURCE_RANK

VARCHAR

SourceRank

configuration/entityTypes/HCO/attributes/Website/attributes/Source/attributes/SourceRank


SALES_ORGANIZATION

Sales Organization

Column

Type

Description

Reltio Attribute URI

LOV Name

SALES_ORGANIZATION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SALES_ORGANIZATION_CODE

VARCHAR

Sales Organization Code

configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/SalesOrganizationCode


CUSTOMER_ORDER_BLOCK

VARCHAR

Customer Order Block

configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/CustomerOrderBlock


CUSTOMER_GROUP

VARCHAR

Customer Group

configuration/entityTypes/HCO/attributes/SalesOrganization/attributes/CustomerGroup


HCO_BUSINESS_UNIT_TAG

Column

Type

Description

Reltio Attribute URI

LOV Name

BUSINESSUNITTAG_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



BUSINESS_UNIT

VARCHAR

Business Unit

configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/BusinessUnit


SEGMENT

VARCHAR

Segment

configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/Segment


CONTRACT_TYPE

VARCHAR

Contract Type

configuration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/ContractType


GLN

Column

Type

Description

Reltio Attribute URI

LOV Name

GLN_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR

GLN Type

configuration/entityTypes/HCO/attributes/GLN/attributes/Type


ID

VARCHAR

GLN ID

configuration/entityTypes/HCO/attributes/GLN/attributes/ID


STATUS

VARCHAR

GLN Status

configuration/entityTypes/HCO/attributes/GLN/attributes/Status

HCOGLNStatus

STATUS_DETAIL

VARCHAR

GLN Status

configuration/entityTypes/HCO/attributes/GLN/attributes/StatusDetail

HCOGLNStatusDetail

HCO_REFER_BACK

Column

Type

Description

Reltio Attribute URI

LOV Name

REFERBACK_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



REFER_BACK_ID

VARCHAR

Refer Back ID

configuration/entityTypes/HCO/attributes/ReferBack/attributes/ReferBackID


REFER_BACK_HCOSID

VARCHAR

GLN ID

configuration/entityTypes/HCO/attributes/ReferBack/attributes/ReferBackHCOSID


DEACTIVATION_REASON

VARCHAR

Deactivation Reason

configuration/entityTypes/HCO/attributes/ReferBack/attributes/DeactivationReason


BED

Column

Type

Description

Reltio Attribute URI

LOV Name

BED_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TYPE

VARCHAR

Type

configuration/entityTypes/HCO/attributes/Bed/attributes/Type

HCOBedType

LICENSE_BEDS

VARCHAR

License Beds

configuration/entityTypes/HCO/attributes/Bed/attributes/LicenseBeds


CENSUS_BEDS

VARCHAR

Census Beds

configuration/entityTypes/HCO/attributes/Bed/attributes/CensusBeds


STAFFED_BEDS

VARCHAR

Staffed Beds

configuration/entityTypes/HCO/attributes/Bed/attributes/StaffedBeds


GSA_EXCLUSION

Column

Type

Description

Reltio Attribute URI

LOV Name

GSA_EXCLUSION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SANCTION_ID

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/SanctionId


ORGANIZATION_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/OrganizationName


ADDRESS_LINE1

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine1


ADDRESS_LINE2

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine2


CITY

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/City


STATE

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/State


ZIP

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Zip


ACTION_DATE

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/ActionDate


TERM_DATE

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/TermDate


AGENCY

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Agency


CONFIDENCE

VARCHAR


configuration/entityTypes/HCO/attributes/GSAExclusion/attributes/Confidence


OIG_EXCLUSION

Column

Type

Description

Reltio Attribute URI

LOV Name

OIG_EXCLUSION_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SANCTION_ID

VARCHAR


configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/SanctionId


ACTION_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionCode


ACTION_DESCRIPTION

VARCHAR


configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDescription


BOARD_CODE

VARCHAR

Court case board id

configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardCode


BOARD_DESC

VARCHAR

court case board description

configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardDesc


ACTION_DATE

DATE


configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDate


OFFENSE_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseCode


OFFENSE_DESCRIPTION

VARCHAR


configuration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseDescription


BUSINESS_DETAIL

Column

Type

Description

Reltio Attribute URI

LOV Name

BUSINESS_DETAIL_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



DETAIL

VARCHAR

Detail

configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/Detail

HCOBusinessDetail

GROUP

VARCHAR

Group

configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/Group

HCOBusinessDetailGroup

DETAIL_VALUE

VARCHAR

Detail Value

configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailValue


DETAIL_COUNT

VARCHAR

Detail Count

configuration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailCount


HIN

HIN

Column

Type

Description

Reltio Attribute URI

LOV Name

HIN_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



HIN

VARCHAR

HIN

configuration/entityTypes/HCO/attributes/HIN/attributes/HIN


TICKER

Column

Type

Description

Reltio Attribute URI

LOV Name

TICKER_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



SYMBOL

VARCHAR


configuration/entityTypes/HCO/attributes/Ticker/attributes/Symbol


STOCK_EXCHANGE

VARCHAR


configuration/entityTypes/HCO/attributes/Ticker/attributes/StockExchange


TRADE_STYLE_NAME

Column

Type

Description

Reltio Attribute URI

LOV Name

TRADE_STYLE_NAME_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



ORGANIZATION_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/OrganizationName


LANGUAGE_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/LanguageCode


FORMER_ORGANIZATION_PRIMARY_NAME

VARCHAR


configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/FormerOrganizationPrimaryName


DISPLAY_SEQUENCE

VARCHAR


configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/DisplaySequence


TYPE

VARCHAR


configuration/entityTypes/HCO/attributes/TradeStyleName/attributes/Type


HRIOR_DUNS_NUMBER

Column

Type

Description

Reltio Attribute URI

LOV Name

PRIOR_DUNS_NUMBER_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



TRANSFER_DUNS_NUMBER

VARCHAR


configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDUNSNumber


TRANSFER_REASON_TEXT

VARCHAR


configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonText


TRANSFER_REASON_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonCode


TRANSFER_DATE

VARCHAR


configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDate


TRANSFERRED_FROM_DUNS_NUMBER

VARCHAR


configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumber


TRANSFERRED_TO_DUNS_NUMBER

VARCHAR


configuration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumber


INDUSTRY_CODE

Column

Type

Description

Reltio Attribute URI

LOV Name

INDUSTRY_CODE_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



DNB_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DNBCode


INDUSTRY_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCode


INDUSTRY_CODE_DESCRIPTION

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeDescription


INDUSTRY_CODE_LANGUAGE_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeLanguageCode


INDUSTRY_CODE_WRITING_SCRIPT

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeWritingScript


DISPLAY_SEQUENCE

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/DisplaySequence


SALES_PERCENTAGE

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/SalesPercentage


TYPE

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/Type


INDUSTRY_TYPE_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryTypeCode


IMPORT_EXPORT_AGENT

VARCHAR


configuration/entityTypes/HCO/attributes/IndustryCode/attributes/ImportExportAgent


ACTIVITIES_AND_OPERATIONS

Column

Type

Description

Reltio Attribute URI

LOV Name

ACTIVITIES_AND_OPERATIONS_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



LINE_OF_BUSINESS_DESCRIPTION

VARCHAR


configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescription


LANGUAGE_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LanguageCode


WRITING_SCRIPT_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/WritingScriptCode


IMPORT_INDICATOR

BOOLEAN


configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ImportIndicator


EXPORT_INDICATOR

BOOLEAN


configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ExportIndicator


AGENT_INDICATOR

BOOLEAN


configuration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/AgentIndicator


EMPLOYEE_DETAILS

Column

Type

Description

Reltio Attribute URI

LOV Name

EMPLOYEE_DETAILS_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



INDIVIDUAL_EMPLOYEE_FIGURES_DATE

VARCHAR


configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDate


INDIVIDUAL_TOTAL_EMPLOYEE_QUANTITY

VARCHAR


configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantity


INDIVIDUAL_RELIABILITY_TEXT

VARCHAR


configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualReliabilityText


TOTAL_EMPLOYEE_QUANTITY

VARCHAR


configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeQuantity


TOTAL_EMPLOYEE_RELIABILITY

VARCHAR


configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeReliability


PRINCIPALS_INCLUDED

VARCHAR


configuration/entityTypes/HCO/attributes/EmployeeDetails/attributes/PrincipalsIncluded


KEY_FINANCIAL_FIGURES_OVERVIEW

Column

Type

Description

Reltio Attribute URI

LOV Name

KEY_FINANCIAL_FIGURES_OVERVIEW_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



FINANCIAL_STATEMENT_TO_DATE

DATE


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDate


FINANCIAL_PERIOD_DURATION

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDuration


SALES_REVENUE_CURRENCY

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrency


SALES_REVENUE_CURRENCY_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCode


SALES_REVENUE_RELIABILITY_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCode


SALES_REVENUE_UNIT_OF_SIZE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSize


SALES_REVENUE_AMOUNT

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmount


PROFIT_OR_LOSS_CURRENCY

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrency


PROFIT_OR_LOSS_RELIABILITY_TEXT

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityText


PROFIT_OR_LOSS_UNIT_OF_SIZE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSize


PROFIT_OR_LOSS_AMOUNT

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmount


SALES_TURNOVER_GROWTH_RATE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRate


SALES3YRY_GROWTH_RATE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRate


SALES5YRY_GROWTH_RATE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRate


EMPLOYEE3YRY_GROWTH_RATE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRate


EMPLOYEE5YRY_GROWTH_RATE

VARCHAR


configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRate


MATCH_QUALITY

Column

Type

Description

Reltio Attribute URI

LOV Name

MATCH_QUALITY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



CONFIDENCE_CODE

VARCHAR

DnB Match Quality Confidence Code

configuration/entityTypes/HCO/attributes/MatchQuality/attributes/ConfidenceCode


DISPLAY_SEQUENCE

VARCHAR

DnB Match Quality Display Sequence

configuration/entityTypes/HCO/attributes/MatchQuality/attributes/DisplaySequence


MATCH_CODE

VARCHAR


configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchCode


BEMFAB

VARCHAR


configuration/entityTypes/HCO/attributes/MatchQuality/attributes/BEMFAB


MATCH_GRADE

VARCHAR


configuration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchGrade


ORGANIZATION_DETAIL

Column

Type

Description

Reltio Attribute URI

LOV Name

ORGANIZATION_DETAIL_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



MEMBER_ROLE

VARCHAR


configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/MemberRole


STANDALONE

BOOLEAN


configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/Standalone


CONTROL_OWNERSHIP_DATE

DATE


configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/ControlOwnershipDate


OPERATING_STATUS

VARCHAR


configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatus


START_YEAR

VARCHAR


configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StartYear


FRANCHISE_OPERATION_TYPE

VARCHAR


configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/FranchiseOperationType


BONEYARD_ORGANIZATION

BOOLEAN


configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/BoneyardOrganization


OPERATING_STATUS_COMMENT

VARCHAR


configuration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusComment


DUNS_HIERARCHY

Column

Type

Description

Reltio Attribute URI

LOV Name

DUNS_HIERARCHY_URI

VARCHAR

Generated Key



ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



GLOBAL_ULTIMATE_DUNS

VARCHAR


configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateDUNS


GLOBAL_ULTIMATE_ORGANIZATION

VARCHAR


configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateOrganization


DOMESTIC_ULTIMATE_DUNS

VARCHAR


configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateDUNS


DOMESTIC_ULTIMATE_ORGANIZATION

VARCHAR


configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateOrganization


PARENT_DUNS

VARCHAR


configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentDUNS


PARENT_ORGANIZATION

VARCHAR


configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentOrganization


HEADQUARTERS_DUNS

VARCHAR


configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersDUNS


HEADQUARTERS_ORGANIZATION

VARCHAR


configuration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersOrganization


MCO

Managed Care Organization

Column

Type

Description

Reltio Attribute URI

LOV Name

ENTITY_URI

VARCHAR

Reltio Entity URI



COUNTRY

VARCHAR

Country Code



ACTIVE

VARCHAR

Active Flag



ENTITY_TYPE

VARCHAR

Reltio Entity Type



COMPANY_CUST_ID

VARCHAR

COMPANY Customer ID

configuration/entityTypes/MCO/attributes/COMPANYCustID


NAME

VARCHAR

Name

configuration/entityTypes/MCO/attributes/Name


TYPE

VARCHAR

Type

configuration/entityTypes/MCO/attributes/Type

MCOType

MANAGED_CARE_CHANNEL

VARCHAR

Managed Care Channel

configuration/entityTypes/MCO/attributes/ManagedCareChannel

MCOManagedCareChannel

PLAN_MODEL_TYPE

VARCHAR

PlanModelType

configuration/entityTypes/MCO/attributes/PlanModelType

MCOPlanModelType

SUB_TYPE

VARCHAR

SubType

configuration/entityTypes/MCO/attributes/SubType

MCOSubType

SUB_TYPE2

VARCHAR

SubType2

configuration/entityTypes/MCO/attributes/SubType2


SUB_TYPE3

VARCHAR

Sub Type 3

configuration/entityTypes/MCO/attributes/SubType3


NUM_LIVES_MEDICARE

VARCHAR

Medicare Number of Lives

configuration/entityTypes/MCO/attributes/NumLives_Medicare


NUM_LIVES_MEDICAL

VARCHAR

Medical Number of Lives

configuration/entityTypes/MCO/attributes/NumLives_Medical


NUM_LIVES_PHARMACY

VARCHAR

Pharmacy Number of Lives

configuration/entityTypes/MCO/attributes/NumLives_Pharmacy


OPERATING_STATE

VARCHAR

State Operating from

configuration/entityTypes/MCO/attributes/Operating_State


ORIGINAL_SOURCE_NAME

VARCHAR

Original Source Name

configuration/entityTypes/MCO/attributes/OriginalSourceName


DISTRIBUTION_CHANNEL

VARCHAR

Distribution Channel

configuration/entityTypes/MCO/attributes/DistributionChannel


ACCESS_LANDSCAPE_FORMULARY_CHANNEL

VARCHAR

Access Landscape Formulary Channel

configuration/entityTypes/MCO/attributes/AccessLandscapeFormularyChannel


EFFECTIVE_START_DATE

DATE

Effective Start Date

configuration/entityTypes/MCO/attributes/EffectiveStartDate


EFFECTIVE_END_DATE

DATE

Effective End Date

configuration/entityTypes/MCO/attributes/EffectiveEndDate


STATUS

VARCHAR

Status

configuration/entityTypes/MCO/attributes/Status

MCOStatus

SOURCE_MATCH_CATEGORY

VARCHAR

Source Match Category

configuration/entityTypes/MCO/attributes/SourceMatchCategory


COUNTRY_MCO

VARCHAR

Country

configuration/entityTypes/MCO/attributes/Country


AFFILIATIONS

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| COUNTRY | VARCHAR | Country Code | | |
| ACTIVE | VARCHAR | Active Flag | | |
| RELATION_TYPE | VARCHAR | Reltio Relation Type | | |
| START_ENTITY_URI | VARCHAR | Reltio Start Entity URI | | |
| END_ENTITY_URI | VARCHAR | Reltio End Entity URI | | |
| SOURCE | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/Source, configuration/relationTypes/Ownership/attributes/Source, configuration/relationTypes/PAYERtoPLAN/attributes/Source, configuration/relationTypes/PBMVendortoMCO/attributes/Source, configuration/relationTypes/ACOAffiliations/attributes/Source, configuration/relationTypes/MCOtoPLAN/attributes/Source, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Source, configuration/relationTypes/FlextoSAPAffiliations/attributes/Source, configuration/relationTypes/MCOtoMMITORG/attributes/Source, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Source, configuration/relationTypes/EnterprisetoBOB/attributes/Source, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Source, configuration/relationTypes/ContactAffiliations/attributes/Source, configuration/relationTypes/VAAffiliations/attributes/Source, configuration/relationTypes/PBMtoPLAN/attributes/Source, configuration/relationTypes/Purchasing/attributes/Source, configuration/relationTypes/BOBtoMCO/attributes/Source, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Source, configuration/relationTypes/Distribution/attributes/Source, configuration/relationTypes/ProviderAffiliations/attributes/Source, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/Source | |
| LINKED_BY | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoHCOSAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoSAPAffiliations/attributes/LinkedBy, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/LinkedBy | |
| COUNTRY_AFFILIATIONS | VARCHAR | | configuration/relationTypes/FlextoDDDAffiliations/attributes/Country, configuration/relationTypes/Ownership/attributes/Country, configuration/relationTypes/PAYERtoPLAN/attributes/Country, configuration/relationTypes/PBMVendortoMCO/attributes/Country, configuration/relationTypes/ACOAffiliations/attributes/Country, configuration/relationTypes/MCOtoPLAN/attributes/Country, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Country, configuration/relationTypes/FlextoSAPAffiliations/attributes/Country, configuration/relationTypes/MCOtoMMITORG/attributes/Country, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Country, configuration/relationTypes/EnterprisetoBOB/attributes/Country, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Country, configuration/relationTypes/ContactAffiliations/attributes/Country, configuration/relationTypes/VAAffiliations/attributes/Country, configuration/relationTypes/PBMtoPLAN/attributes/Country, configuration/relationTypes/Purchasing/attributes/Country, configuration/relationTypes/BOBtoMCO/attributes/Country, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Country, configuration/relationTypes/Distribution/attributes/Country, configuration/relationTypes/ProviderAffiliations/attributes/Country, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/Country | |
| AFFILIATION_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/AffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/AffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/AffiliationType, configuration/relationTypes/MCOtoMMITORG/attributes/AffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/AffiliationType, configuration/relationTypes/VAAffiliations/attributes/AffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/AffiliationType, configuration/relationTypes/BOBtoMCO/attributes/AffiliationType | |
| PBM_AFFILIATION_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoMMITORG/attributes/PBMAffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/PBMAffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/BOBtoMCO/attributes/PBMAffiliationType | |
| PLAN_MODEL_TYPE | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/PlanModelType, configuration/relationTypes/PBMVendortoMCO/attributes/PlanModelType, configuration/relationTypes/MCOtoPLAN/attributes/PlanModelType, configuration/relationTypes/MCOtoMMITORG/attributes/PlanModelType, configuration/relationTypes/EnterprisetoBOB/attributes/PlanModelType, configuration/relationTypes/PBMtoPLAN/attributes/PlanModelType, configuration/relationTypes/BOBtoMCO/attributes/PlanModelType | MCOPlanModelType |
| MANAGED_CARE_CHANNEL | VARCHAR | | configuration/relationTypes/PAYERtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/PBMVendortoMCO/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoMMITORG/attributes/ManagedCareChannel, configuration/relationTypes/EnterprisetoBOB/attributes/ManagedCareChannel, configuration/relationTypes/PBMtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/BOBtoMCO/attributes/ManagedCareChannel | MCOManagedCareChannel |
| EFFECTIVE_START_DATE | DATE | | configuration/relationTypes/MCOtoPLAN/attributes/EffectiveStartDate | |
| EFFECTIVE_END_DATE | DATE | | configuration/relationTypes/MCOtoPLAN/attributes/EffectiveEndDate | |
| STATUS | VARCHAR | | configuration/relationTypes/VAAffiliations/attributes/Status | |

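For orientation, a minimal sketch of how these columns are typically queried; the schema qualifier (customer) and the lowercase object names are assumptions for the example, not confirmed names:

select relation_type, count(*) as affiliation_count
from customer.affiliations
where active = 'TRUE'
group by relation_type
order by affiliation_count desc;
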
AFFIL_RELATION_TYPE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| RELATION_TYPE_URI | VARCHAR | Generated Key | | |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| RELATIONSHIP_GROUP_OWNERSHIP | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_OWNERSHIP | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_ORDER | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipOrder | |
| RANK | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/Rank, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/Rank, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/Distribution/attributes/RelationType/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Rank | |
| AMA_HOSPITAL_ID | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AMAHospitalID | |
| AMA_HOSPITAL_HOURS | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AMAHospitalHours | |
| EFFECTIVE_START_DATE | DATE | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/Distribution/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/EffectiveStartDate | |
| EFFECTIVE_END_DATE | DATE | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/Distribution/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/EffectiveEndDate | |
| ACTIVE_FLAG | BOOLEAN | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/Distribution/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/ActiveFlag | |
| PRIMARY_AFFILIATION | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/Distribution/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/PrimaryAffiliation | |
| AFFILIATION_CONFIDENCE_CODE | VARCHAR | | configuration/relationTypes/Ownership/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode | |
| RELATIONSHIP_GROUP_ACOAFFILIATIONS | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCPRelationGroup |
| RELATIONSHIP_DESCRIPTION_ACOAFFILIATIONS | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCPRelationshipDescription |
| RELATIONSHIP_STATUS_CODE | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipStatusCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipStatusCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipStatusCode | HCPtoHCORelationshipStatus |
| RELATIONSHIP_STATUS_REASON_CODE | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode | HCPtoHCORelationshipStatusReasonCode |
| WORKING_STATUS | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/WorkingStatus, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/WorkingStatus, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/WorkingStatus | WorkingStatus |
| RELATIONSHIP_GROUP_HCOSTODDDAFFILIATIONS | VARCHAR | | configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_HCOSTODDDAFFILIATIONS | VARCHAR | | configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_OTHERHCOTOHCOAFFILIATIONS | VARCHAR | | configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_OTHERHCOTOHCOAFFILIATIONS | VARCHAR | | configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_CONTACTAFFILIATIONS | VARCHAR | | configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCPRelationGroup |
| RELATIONSHIP_DESCRIPTION_CONTACTAFFILIATIONS | VARCHAR | | configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCPRelationshipDescription |
| RELATIONSHIP_GROUP_PURCHASING | VARCHAR | | configuration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_PURCHASING | VARCHAR | | configuration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_DDDTOSAPAFFILIATIONS | VARCHAR | | configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_DDDTOSAPAFFILIATIONS | VARCHAR | | configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_DISTRIBUTION | VARCHAR | | configuration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipGroup | HCORelationGroup |
| RELATIONSHIP_DESCRIPTION_DISTRIBUTION | VARCHAR | | configuration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipDescription | HCORelationDescription |
| RELATIONSHIP_GROUP_PROVIDERAFFILIATIONS | VARCHAR | | configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipGroup | HCPRelationGroup |
| RELATIONSHIP_DESCRIPTION_PROVIDERAFFILIATIONS | VARCHAR | | configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipDescription | HCPRelationshipDescription |

AFFIL_ACO

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| ACO_URI | VARCHAR | Generated Key | | |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| ACO_TYPE | VARCHAR | | configuration/relationTypes/Ownership/attributes/ACO/attributes/ACOType, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOType, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOType, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOType | HCOACOType |
| ACO_TYPE_CATEGORY | VARCHAR | | configuration/relationTypes/Ownership/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeCategory | HCOACOTypeCategory |
| ACO_TYPE_GROUP | VARCHAR | | configuration/relationTypes/Ownership/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeGroup | HCOACOTypeGroup |

AFFIL_RELATION_TYPE_ROLE

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| RELATION_TYPE_URI | VARCHAR | Generated Key | | |
| ROLE_URI | VARCHAR | Generated Key | | |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| ROLE | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role/attributes/Role, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role/attributes/Role, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Role/attributes/Role | RoleType |
| RANK | VARCHAR | | configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role/attributes/Rank, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Role/attributes/Rank | |

AFFIL_USAGE_TAG

| Column | Type | Description | Reltio Attribute URI | LOV Name |
| USAGE_TAG_URI | VARCHAR | Generated Key | | |
| RELATION_URI | VARCHAR | Reltio Relation URI | | |
| USAGE_TAG | VARCHAR | | configuration/relationTypes/ProviderAffiliations/attributes/UsageTag/attributes/UsageTag | |

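The AFFIL_* tables attach to AFFILIATIONS through RELATION_URI; a minimal join sketch (schema qualifier assumed, as in the earlier example):

select af.relation_type,
       rt.rank,
       rt.active_flag,
       rt.primary_affiliation
from customer.affiliations af
join customer.affil_relation_type rt
  on rt.relation_uri = af.relation_uri
where af.active = 'TRUE';
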
" }, { "title": "CUSTOMER_SL schema", "pageID": "163924327", "pageLink": "/display/GMDM/CUSTOMER_SL+schema", "content": "

The schema plays the role of an access layer for clients reading MDM data. It includes a set of views that are directly inherited from the CUSTOMER schema.

Views have the same structure as the views in the CUSTOMER schema. To learn about view definitions, please see CUSTOMER schema.

In regional data marts, the schema views have the MDM prefix.

In the CUSTOMER_SL schema in the Global Data Mart, views are prefixed with 'P' for the COMPANY Reltio model, 'I' for the IQVIA Reltio model, and 'P_HI' for Historical Inactive data for the COMPANY Reltio model.

To speed up access, most views are materialized to physical tables. The process is transparent to users: access views are switched to the physical tables automatically when they are available. The refresh process is incremental and tied to the loading process.
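To illustrate the prefixing convention, the same logical HCP view would be addressed differently in the global and regional marts; a sketch (the P_HCP name is taken from the specification pages further below, while the regional MDM-prefixed form is an assumption):

-- Global Data Mart, COMPANY model ('P' prefix)
select entity_uri, first_name, last_name
from customer_sl.p_hcp
where active = 'TRUE';

-- Regional data mart ('MDM' prefix)
select entity_uri, first_name, last_name
from customer_sl.mdm_hcp
where active = 'TRUE';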




" }, { "title": "LANDING schema", "pageID": "163920137", "pageLink": "/display/GMDM/LANDING+schema", "content": "

LANDING schema plays the role of the staging database for publishing MDM data from Reltio tenants through MDM HUB.

HUB_KAFKA_DATA


Target table for KAFKA events published through Snowflake pipe.


| Column | Type | Description |
| RECORD_METADATA | VARIANT | Metadata of the KAFKA event: KAFKA key, topic, partition, create time |
| RECORD_CONTENT | VARIANT | Event payload |
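Both columns are of Snowflake's VARIANT type, so consumers extract individual fields with the path syntax; a minimal sketch (the metadata field names follow the Kafka connector's usual layout and are assumptions here):

select record_metadata:topic::string   as kafka_topic,
       record_metadata:partition::int  as kafka_partition,
       record_metadata:key::string     as kafka_key,
       record_content                  as payload
from landing.hub_kafka_data
limit 10;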

LOV_DATA

Target table for LOV data publishing

| Column | Type | Description |
| ID | TEXT | LOV object id |
| OBJECT | VARIANT | Reltio RDM JSON object |

MERGE_TREE_DATA

Target table for merge_tree exports from Reltio

| Column | Type | Description |
| FILENAME | TEXT | Full S3 file path |
| OBJECT | VARIANT | Reltio MERGE_TREE JSON object |

HI_DATA

Target table for ad-hoc historical inactive data

| Column | Type | Description |
| OBJECT | VARIANT | Historical Inactive JSON object |
" }, { "title": "PTE_SL", "pageID": "302687546", "pageLink": "/display/GMDM/PTE_SL", "content": "

The schema plays the role of an access layer for clients reading data required for PT&E reports; the views mimic the reports' structure and logic.

To make a connection to the PTE_SL schema, you need to have a proper role assigned:

COMM_GBL_MDM_DMART_DEV_PTE_ROLE
COMM_GBL_MDM_DMART_QA_PTE_ROLE
COMM_GBL_MDM_DMART_STG_PTE_ROLE
COMM_GBL_MDM_DMART_PROD_PTE_ROLE

that are connected with the following AD groups:

sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_PTE_ROLE
sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_PTE_ROLE
sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_PTE_ROLE
sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLE

Information on how to request access is described here: Snowflake - connection guide

Snowflake path to the client report: "COMM_GBL_MDM_DMART_PROD_DB"."PTE_SL"."PTE_REPORT"

General assumptions for view creation:

  1. The views integrate both data models, COMPANY and IQVIA, via a UNION: they are calculated separately and then joined together (see the sketch after this list).
  2. driven_table1.iso_code = entity_uri.country
  3. The lang_code from the code translations is always 'en'.
  4. If the HCP identifiers are not provided by the client, there is an option to calculate them dynamically by the number of HCPs having each identifier.
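A simplified sketch of assumption 1, with the per-model halves computed separately and then unioned (column list abbreviated; view names follow the CUSTOMER_SL prefixing convention):

select 'IQVIA' as model, hcp.entity_uri, hcp.first_name, hcp.last_name
from customer_sl.i_hcp hcp
where hcp.active = 'TRUE'
union all
select 'COMPANY' as model, hcp.entity_uri, hcp.first_name, hcp.last_name
from customer_sl.p_hcp hcp
where hcp.active = 'TRUE';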


Driven tables:

DRIVEN_TABLE1

This is a view selecting data from the country_config table for the countries that need to be added to the PTE_REPORT.

| Column name | Description |
| ISO_CODE | ISO2 code of the country |
| NAME | Country name |
| LABEL | Country label (name + iso_code) |
| RELTIO_TENANT | Either 'IQVIA' or the region of the Reltio tenant (EMEA/AMER...) |
| HUB_TENANT | Indicator of the HUB database the data comes from |
| SF_INSTANCE | Name of the Snowflake instance the data comes from (emeaprod01.eu-west-1...) |
| SF_TENANTDATABASE | Full name of the database from which the data comes |
| CUSTOMERSL_PREFIX | Either 'i_' for the IQVIA data model or 'p_' for the COMPANY data model |

DRIVEN_TABLEV2 / DRIVEN_TABLE2_STATIC

DRIVEN_TABLEV2 is a view used to get the HCP identifiers and sort them by the count of HCPs that have each identifier. DRIVEN_TABLE2_STATIC is a table containing the list of identifiers used per country and the order in which they are placed in the PTE_REPORT view. If a country is not available in DRIVEN_TABLE2_STATIC, the report uses DRIVEN_TABLEV2 to calculate the identifiers dynamically every time the report is used, as sketched below.
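A sketch of that dynamic calculation, ranking identifier types by the number of HCPs that carry them (the join shape is an assumption based on the identifier queries later in this documentation):

select ct.country,
       ct.canonical_code,
       count(distinct d.entity_uri) as hcp_count
from customer_sl.i_identifiers d
join customer_sl.i_code_translations ct
  on ct.code_id = d.type_lkp
where ct.lang_code = 'en'
group by ct.country, ct.canonical_code
order by ct.country, hcp_count desc;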

| Column name | Description |
| ISO_CODE | ISO2 code of the country |
| CANONICAL_CODE | Canonical code of the identifier |
| LANG_DESC | Code description in English |
| CODE_ID | Code id |
| MODEL | Either 'i' for the IQVIA data model or 'p' for the COMPANY data model |
| ORDER_ID | Order in which the identifier will be available in the PTE_REPORT view. Only identifiers from 1 to 5 will be used. |

DRIVEN_TABLE3

Specialty dictionary provided by the client, for the IQVIA data model only. Used for calculating the is_prescriber data; see 'IS PRESCRIBER' calculation method for IQIVIA model.

The path to the dictionary files on S3: pfe-baiaes-eu-w1-project/mdm/config/PTE_Dictionaries

| Column name | Description |
| COUNTRY_CODE | ISO2 code of the country |
| HEADER_NAME | Code name |
| MDM_CODE | Code id |
| CANONICAL_CODE | Canonical code of the identifier |
| LONG_DESCRIPTION | Code description in English |
| PROFESSIONAL_TYPE | If the specialty is a prescriber or not |

PTE_REPORT:

The PTE_REPORT is the view from which clients should get their data. It is a UNION of the reports for the IQVIA data model and the COMPANY data model. Calculation details may be found in the respective articles:

IQVIA: PTE_SL IQVIA MODEL

COMPANY: PTE_SL COMPANY MODEL
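A typical client read against the production path given above, filtered to one market (the COUNTRY column comes from the model specifications):

select *
from "COMM_GBL_MDM_DMART_PROD_DB"."PTE_SL"."PTE_REPORT"
where COUNTRY = 'FR';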


" }, { "title": "Data Sourcing", "pageID": "347664788", "pageLink": "/display/GMDM/Data+Sourcing", "content": "
| Country | Iso Code | MDM Region | Data Model | Snowflake View |
| France | FR | EMEA | COMPANY | PTE_REPORT |
| Argentina | AR | GBL | IQVIA | PTE_REPORT |
| Brazil | BR | AMER | COMPANY | PTE_REPORT |
| Mexico | MX | GBL | IQVIA | PTE_REPORT |
| Chile | CL | GBL | IQVIA | PTE_REPORT |
| Colombia | CO | GBL | IQVIA | PTE_REPORT |
| Slovakia | SK | GBL | IQVIA | PTE_REPORT |
| Philippines | PH | GBL | IQVIA | PTE_REPORT |
| Réunion | RE | EMEA | COMPANY | PTE_REPORT |
| Saint Pierre and Miquelon | PM | EMEA | COMPANY | PTE_REPORT |
| Mayotte | YT | EMEA | COMPANY | PTE_REPORT |
| French Polynesia | PF | EMEA | COMPANY | PTE_REPORT |
| French Guiana | GF | EMEA | COMPANY | PTE_REPORT |
| Wallis and Futuna | WF | EMEA | COMPANY | PTE_REPORT |
| Guadeloupe | GP | EMEA | COMPANY | PTE_REPORT |
| New Caledonia | NC | EMEA | COMPANY | PTE_REPORT |
| Martinique | MQ | EMEA | COMPANY | PTE_REPORT |
| Mauritius | MU | EMEA | COMPANY | PTE_REPORT |
| Monaco | MC | EMEA | COMPANY | PTE_REPORT |
| Andorra | AD | EMEA | COMPANY | PTE_REPORT |
| Turkey | TR | EMEA | COMPANY | PTE_REPORT_TR |
| South Korea | KR | APAC | COMPANY | PTE_REPORT_KR |

All views are available in the global database in the PTE_SL schema.

" }, { "title": "PTE_SL IQVIA MODEL", "pageID": "218432348", "pageLink": "/display/GMDM/PTE_SL+IQVIA+MODEL", "content": "

IQVIA data model specification:

Each field below is described by its name, type, description, Reltio attribute URI, LOV Name, and the additional query conditions for the IQVIA and COMPANY models.
HCP_ID | VARCHAR | Reltio Entity URI

IQVIA model: i_hcp.entity_uri or i_affiliations.start_entity_uri; only active HCPs are returned (customer_sl.i_hcp.active = 'TRUE').

COMPANY model: i_hcp.entity_uri or i_affiliations.start_entity_uri; only active HCPs are returned.

HCO_ID | VARCHAR | Reltio Entity URI

IQVIA model: all affiliations with i_affiliation.active = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned; i_hco.entity_uri.

select END_ENTITY_URI from customer_sl.i_affiliations where start_entity_uri = 'T9u7Ej4' and active = 'TRUE' and relation_type in ('Activity','HasHealthCareRole');

COMPANY model:

select * from customer_sl.p_affiliations where active = TRUE and relation_type = 'ContactAffiliations';

WORKPLACE_NAME | VARCHAR | Reltio workplace name or Reltio workplace parent name. | configuration/entityTypes/HCO/attributes/Name

IQVIA model: all affiliations with i_affiliation.active = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned; i_hco.name must be returned.

select hco.name from
customer_sl.i_affiliations a,
customer_sl.i_hco hco
where a.end_entity_uri = hco.entity_uri
and a.start_entity_uri = 'T9u7Ej4' and a.active = 'TRUE' and a.relation_type in ('Activity','HasHealthCareRole');

COMPANY model: all affiliations with p_affiliation.active = TRUE and relation_type = 'ContactAffiliations'; i_hco.name.

STATUS | BOOLEAN | Reltio Entity status

IQVIA model: customer_sl.i_hcp.active; mapping rule TRUE = ACTIVE

COMPANY model: customer_sl.p_hcp.active; mapping rule TRUE = ACTIVE

LAST_MODIFICATION_DATE | TIMESTAMP_LTZ | Entity update time in Snowflake | configuration/entityTypes/HCP/updateTime

IQVIA model: customer_sl.i_entity_update_dates.SF_UPDATE_TIME

COMPANY model: customer_sl.p_entity_update.SF_UPDATE_TIME
FIRST_NAME | VARCHAR | configuration/entityTypes/HCP/attributes/FirstName

IQVIA model: customer_sl.i_hcp.first_name; COMPANY model: customer_sl.p_hcp.first_name

LAST_NAME | VARCHAR | configuration/entityTypes/HCP/attributes/LastName

IQVIA model: customer_sl.i_hcp.last_name; COMPANY model: customer_sl.p_hcp.last_name
TITLE_CODE | VARCHAR | configuration/entityTypes/HCP/attributes/Title

LOV Name COMPANY = HCPTitle
LOV Name IQVIA = LKUP_IMS_PROF_TITLE

IQVIA model:

select c.canonical_code from
customer_sl.i_hcp hcp,
customer_sl.i_code_translations c
where
hcp.title_lkp = c.code_id

e.g.

select c.canonical_code from
customer_sl.i_hcp hcp,
customer_sl.i_code_translations c
where
hcp.title_lkp = c.code_id
and hcp.entity_uri = 'T9u7Ej4'
and c.country = 'FR';

COMPANY model:

select c.canonical_code from
customer_sl.p_hcp hcp,
customer_sl.p_codes c
where
hcp.title_lkp = c.code_id
TITLE_DESC | VARCHAR | configuration/entityTypes/HCP/attributes/Title

LOV Name COMPANY = THCPTitle
LOV Name IQVIA = LKUP_IMS_PROF_TITLE

IQVIA model:

select c.lang_desc from
customer_sl.i_hcp hcp,
customer_sl.i_code_translations c
where
hcp.title_lkp = c.code_id

e.g.

select c.lang_desc from
customer_sl.i_hcp hcp,
customer_sl.i_code_translations c
where
hcp.title_lkp = c.code_id
and hcp.entity_uri = 'T9u7Ej4'
and c.country = 'FR';

COMPANY model:

select c.desc from
customer_sl.p_hcp hcp,
customer_sl.p_codes c
where
hcp.title_lkp = c.code_id
IS_PRESCRIBER

IQVIA model: see 'IS PRESCRIBER' calculation method for IQIVIA model.

COMPANY model:

CASE
WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.PRES' THEN 'Y'
WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.NPRS' THEN 'N'
ELSE -- to be defined
END

COUNTRY | Country code | configuration/entityTypes/Location/attributes/country

IQVIA model: customer_sl.i_hcp.country; COMPANY model: customer_sl.p_hcp.country
PRIMARY_ADDRESS_LINE_1

IQVIA: configuration/entityTypes/Location/attributes/AddressLine1
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1

IQVIA model: select address_line1 from customer_sl.i_address where address_rank=1

e.g. select address_line1 from customer_sl.i_address where address_rank=1 and entity_uri='T9u7Ej4';

COMPANY model: select a.address_line1 from customer_sl.p_addresses a where a.address_rank = 1

PRIMARY_ADDRESS_LINE_2

IQVIA: configuration/entityTypes/Location/attributes/AddressLine2
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2

IQVIA model: select address_line2 from customer_sl.i_address where address_rank=1

COMPANY model: select a.address_line2 from customer_sl.p_addresses a where a.address_rank = 1

PRIMARY_ADDRESS_CITY

IQVIA: configuration/entityTypes/Location/attributes/City
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/City

IQVIA model: select city from customer_sl.i_address where address_rank=1

COMPANY model: select a.city from customer_sl.p_addresses a where a.address_rank = 1

PRIMARY_ADDRESS_POSTAL_CODE

IQVIA: configuration/entityTypes/Location/attributes/Zip/attributes/ZIP5
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5

IQVIA model: select ZIP5 from customer_sl.i_address where address_rank=1

COMPANY model: select a.ZIP5 from customer_sl.p_addresses a where a.address_rank = 1

PRIMARY_ADDRESS_STATE

IQVIA: configuration/entityTypes/Location/attributes/StateProvince
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvince

LOV Name COMPANY = State

IQVIA model: select state_province from customer_sl.i_address where address_rank=1

COMPANY model:

select c.desc from
customer_sl.p_codes c,
customer_sl.p_addresses a
where
a.address_rank = 1
and a.STATE_PROVINCE_LKP = c.code_id

PRIMARY_ADDR_STATUS

IQVIA: configuration/entityTypes/Location/attributes/VerificationStatus
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus

IQVIA model: customer_sl.i_address.verification_status; COMPANY model: customer_sl.p_addresses.verification_status
PRIMARY_SPECIALTY_CODE | configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

LOV Name COMPANY = HCPSpecialty
LOV Name IQVIA = LKUP_IMS_SPECIALTY

IQVIA model, e.g.:

select c.canonical_code from
customer_sl.i_specialities s,
customer_sl.i_code_translations c
where
s.specialty_lkp = c.code_id
and s.entity_uri = 'T9liLpi'
and s.SPECIALTY_TYPE_LKP = 'LKUP_IMS_SPECIALTY_TYPE:SPEC'
and c.lang_code = 'en'
and c.country = 'FR';

COMPANY model:

select c.canonical_code from
customer_sl.p_specialities s,
customer_sl.p_codes c
where s.specialty_lkp = c.code_id
and s.rank = 1
;

There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value.

PRIMARY_SPECIALTY_DESC | configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

LOV Name COMPANY = LKUP_IMS_SPECIALTY
LOV Name IQVIA = LKUP_IMS_SPECIALTY

IQVIA model, e.g.:

select c.lang_desc from
customer_sl.i_specialities s,
customer_sl.i_code_translations c
where
s.specialty_lkp = c.code_id
and s.entity_uri = 'T9liLpi'
and s.SPECIALTY_TYPE_LKP = 'LKUP_IMS_SPECIALTY_TYPE:SPEC'
and c.lang_code = 'en'
and c.country = 'FR';

COMPANY model:

select c.desc from
customer_sl.p_specialities s,
customer_sl.p_codes c
where s.specialty_lkp = c.code_id
and s.rank = 1
;

There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value.

GO_STATUS | VARCHAR | configuration/entityTypes/HCP/attributes/Compliance/attributes/GOStatus

IQVIA model (go_status <> ''):

CASE
WHEN i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'Yes'
WHEN i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'No'
ELSE NULL
END

COMPANY model (go_status <> ''):

CASE
WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'Y'
WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'N'
ELSE 'Not defined'
END

\"(lightbulb)\" (currently this is an empty table)

IDENTIFIER1_CODE | VARCHAR | Reltio identifier code. | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

IQVIA model:

select ct.canonical_code from
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP

There is a need to set steering parameters that match the country code with the proper code identifiers, according to the DRIVEN_TABLE2 tables described on the PTE_SL page. This is the place for the first one.

e.g.

select ct.canonical_code, ct.lang_desc, d.id, ct.*, d.* from
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP
and d.entity_uri = 'T9v0e54'
and ct.lang_code = 'en'
and ct.country = 'FR'
;

COMPANY model:

select ct.canonical_code from
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

There is a need to set steering parameters that match the country code with the proper code identifiers, according to the DRIVEN_TABLE2 tables described on the PTE_SL page. This is the place for the first one.
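A sketch of such a steering join, restricting the identifier lookup to the type configured as the first one for the HCP's country (the DRIVEN_TABLE2_STATIC columns are documented on the PTE_SL page; the join itself is an assumption):

select ct.canonical_code
from customer_sl.i_code_translations ct
join customer_sl.i_identifiers d
  on ct.code_id = d.type_lkp
join pte_sl.driven_table2_static dt
  on dt.iso_code = ct.country
 and dt.canonical_code = ct.canonical_code
 and dt.model = 'i'
where dt.order_id = 1;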

IDENTIFIER1_CODE_DESC | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

IQVIA model:

select ct.lang_desc from
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP

COMPANY model:

select ct.desc from
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

IDENTIFIER1_VALUE | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID

IQVIA model: select id from customer_sl.i_identifiers

COMPANY model: select id from customer_sl.p_identifiers

IDENTIFIER2_CODE | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

IQVIA model:

select ct.canonical_code from
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP

A maximum of two identifiers can be returned.

There is a need to set steering parameters that match the country code with the proper code identifiers, according to the DRIVEN_TABLE2 tables described on the PTE_SL page. This is the place for the second one.

COMPANY model:

select ct.canonical_code from
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

A maximum of two identifiers can be returned.

There is a need to set steering parameters that match the country code with the proper code identifiers, according to the DRIVEN_TABLE2 tables described on the PTE_SL page. This is the place for the second one.

IDENTIFIER2_CODE_DESC | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

IQVIA model:

select ct.lang_desc from
customer_sl.i_code_translations ct,
customer_sl.i_identifiers d
where
ct.code_id = d.TYPE_LKP

COMPANY model:

select ct.desc from
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

IDENTIFIER2_VALUE | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID

IQVIA model: select id from customer_sl.i_identifiers

COMPANY model: select id from customer_sl.p_identifiers
DGSCATEGORY | VARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory
COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitCategory

LOV Names: LKUP_BENEFITCATEGORY_HCP, LKUP_BENEFITCATEGORY_HCO

IQVIA model:

select ct.lang_desc from
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.dgs_category_lkp

COMPANY model: select DisclosureBenefitCategory from p_hcp

DGSCATEGORY_CODE | VARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory

LOV Names: LKUP_BENEFITCATEGORY_HCP, LKUP_BENEFITCATEGORY_HCO

IQVIA model:

select ct.canonical_code from
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.dgs_category_lkp

COMPANY model, comment: select i_code.canonical_code for the value returned from DisclosureBenefitCategory

DGSTITLE | VARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle
COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitTitle

LOV Name: LKUP_BENEFITTITLE

IQVIA model:

select ct.lang_desc from
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_TITLE_LKP

COMPANY model: select DisclosureBenefitTitle from p_hcp

DGSTITLE_CODE | VARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle

LOV Name: LKUP_BENEFITTITLE

IQVIA model:

select ct.canonical_code from
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_TITLE_LKP

COMPANY model, comment: select i_code.canonical_code for the value returned from DisclosureBenefitTitle

DGSQUALITY | VARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality
COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitQuality

LOV Name: LKUP_BENEFITQUALITY

IQVIA model:

select ct.lang_desc from
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_QUALITY_LKP

COMPANY model: select DisclosureBenefitQuality from p_hcp

DGSQUALITY_CODE | VARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality

LOV Name: LKUP_BENEFITQUALITY

IQVIA model:

select ct.canonical_code from
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_QUALITY_LKP

COMPANY model, comment: select i_code.canonical_code for the value returned from DisclosureBenefitQuality

DGSSPECIALTY | VARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty
COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitSpecialty

LOV Name: LKUP_BENEFITSPECIALTY

IQVIA model:

select ct.lang_desc from
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_SPECIALTY_LKP

COMPANY model: DisclosureBenefitSpecialty

DGSSPECIALTY_CODE | VARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty

LOV Name: LKUP_BENEFITSPECIALTY

IQVIA model:

select ct.canonical_code from
customer_sl.i_code_translations ct,
customer_sl.i_disclosure d
where
ct.code_id = d.DGS_SPECIALTY_LKP

COMPANY model, comment: select i_code.canonical_code for the value returned from DisclosureBenefitSpecialty
SECONDARY_SPECIALTY_DESC | VARCHAR

A query should return values like:

select c.LANG_DESC from
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_SPECIALITIES" s,
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_CODE_TRANSLATIONS" c
where s.SPECIALTY_LKP = c.CODE_ID
and s.RANK = 2
and s.SPECIALTY_TYPE_LKP = 'LKUP_IMS_SPECIALTY_TYPE:SPEC'
and c.LANG_CODE = 'en' ← lang code condition
and c.country = 'PH' ← country condition
and s.ENTITY_URI = 'ENTITI_URI'; ← entity uri condition

EMAIL | VARCHAR

A query should return values like:

select EMAIL from
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_EMAIL"
where rank = 1
and entity_uri = 'ENTITI_URI'; ← entity uri condition

CAUTION: when multiple values are returned, the first one must be used as the query result.

PHONE | VARCHAR

A query should return values like:

select FORMATTED_NUMBER from
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_PHONE"
where RANK = 1
and entity_uri = 'ENTITI_URI'; ← entity uri condition

CAUTION: when multiple values are returned, the first one must be used as the query result.

" }, { "title": "'IS PRESCRIBER' calculation method for IQIVIA model", "pageID": "218434836", "pageLink": "/display/GMDM/%27IS+PRESCRIBER%27+calculation+method+for+IQIVIA+model", "content": "

Parameters contained in the SF model:

| SF | xml | parameter name in calculation method | e.g. value from SF model |
| customer_sl.i_hcp.type_code_lkp | hcp.professional_type_cd | i_hcp.type_code_lkp | LKUP_IMS_HCP_CUST_TYPE:PRES |
| select c.canonical_code from customer_sl.i_hcp s, customer_sl.i_codes c where s.SUB_TYPE_CODE_LKP = c.code_id | hcp.professional_subtype_cd | prof_subtype_code | WFR.TYP.I |
| select c.canonical_code from customer_sl.i_specialities s, customer_sl.i_codes c where s.specialty_lkp = c.code_id and s.rank=1 and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.parents='SPEC' | spec.specialty_code | spec_code | WFR.SP.IE |
| customer_sl.i_hcp.country | hcp.country | i_hcp.country | FR |

Dictionaries parameters:

profesion_type_subtype.csv as dict_subtypes
profesion_type_subtype_fr.csv as dict_subtypes

| professions_type_subtype.xlsx | xml | value from file to calculate SF view | e.g. value to calculate SF view |
| mdm_code | dict_subtypes.mdm_code | canonical_code | WAR.TYP.A |
| professional_type | dict_subtypes.professional_type | professional_type | Non-Prescriber, Prescriber |
| country_code | dict_subtypes.country_code | country_code | FR |

profesion_type_speciality.csv as dict_specialties
profesion_type_speciality_fr.csv as dict_specialties

| professions_type_subtype.xlsx | xml | value from file to calculate SF view | e.g. value to calculate SF view |
| mdm_code | dict_subtypes.mdm_code | canonical_code | WAC.SP.24 |
| professional_type | dict_subtypes.professional_type | professional_type | Non-Prescriber, Prescriber |
| country_code | dict_subtypes.country_code | country_code | FR |

In the new PTE_SL view, the files mentioned above are migrated to DRIVEN_TABLE3, so the method description includes an extra condition that matches on the profession subtype or specialty.

Method description:

Query conditions:

  1. driven_table3.country_code = i_hcp.country and driven_table3.canonical_code = prof_subtype_code and driven_table3.header_name = 'LKUP_IMS_HCP_SUBTYPE'
  2. driven_table3.country_code = i_hcp.country and driven_table3.canonical_code = spec_code and driven_table3.header_name = 'LKUP_IMS_SPECIALTY'

CASE
    WHEN i_hcp.type_code_lkp = 'LKUP_IMS_HCP_CUST_TYPE:PRES' THEN 'Y'
    WHEN coalesce(prof_subtype_code, spec_code, '') = '' THEN 'N'
    WHEN coalesce(prof_subtype_code, '') <> '' THEN
        -- for driven_table3.header_name = 'LKUP_IMS_HCP_SUBTYPE': profession subtype check
        CASE
            WHEN coalesce(driven_table3.canonical_code, '') = '' THEN 'N@1'
            WHEN coalesce(driven_table3.canonical_code, '') <> '' THEN
                CASE
                    WHEN driven_table3.professional_type = 'Prescriber' THEN 'Y'
                    WHEN driven_table3.professional_type = 'Non-Prescriber' THEN 'N'
                    ELSE 'N@2'
                END
        END
    WHEN coalesce(spec_code, '') <> '' THEN
        -- for driven_table3.header_name = 'LKUP_IMS_SPECIALTY': specialty check
        CASE
            WHEN coalesce(driven_table3.canonical_code, '') = '' THEN 'N@3'
            WHEN coalesce(driven_table3.canonical_code, '') <> '' THEN
                CASE
                    WHEN driven_table3.professional_type = 'Prescriber' THEN 'Y'
                    WHEN driven_table3.professional_type = 'Non-Prescriber' THEN 'N'
                    ELSE 'N@4'
                END
        END
    ELSE 'N@99'
END AS IS_PRESCRIBER
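Putting the pieces together: the CASE above would typically sit in a select that LEFT JOINs DRIVEN_TABLE3 once per query condition, roughly as below (aliases and join shape are an assumption, not the deployed view definition; prof_subtype_code and spec_code come from the code-translation queries in the parameters table above):

select hcp.entity_uri,
       sub.professional_type  as subtype_professional_type,   -- feeds the 'LKUP_IMS_HCP_SUBTYPE' branch
       spec.professional_type as specialty_professional_type  -- feeds the 'LKUP_IMS_SPECIALTY' branch
from customer_sl.i_hcp hcp
left join pte_sl.driven_table3 sub
  on  sub.country_code = hcp.country
  and sub.header_name  = 'LKUP_IMS_HCP_SUBTYPE'
left join pte_sl.driven_table3 spec
  on  spec.country_code = hcp.country
  and spec.header_name  = 'LKUP_IMS_SPECIALTY';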

" }, { "title": "PTE_SL COMPANY MODEL", "pageID": "234711638", "pageLink": "/display/GMDM/PTE_SL+COMPANY+MODEL", "content": "


COMPANY data model specification:

Each field below is described by its name, type, description, Reltio attribute URI, LOV Name, and the additional query conditions for the COMPANY model.
HCP_ID | VARCHAR | Reltio Entity URI

\"(tick)\" i_hcp.entity_uri or i_affiliations.start_entity_uri

Only active HCPs are returned (customer_sl.i_hcp.active = 'TRUE').

HCO_ID | VARCHAR | Reltio Entity URI

\"(warning)\"
SELECT HCO.ENTITY_URI
FROM CUSTOMER_SL.P_HCP HCP
INNER JOIN CUSTOMER_SL.P_AFFILIATIONS AF
    ON HCP.ENTITY_URI = AF.START_ENTITY_URI
INNER JOIN CUSTOMER_SL.P_HCO HCO
    ON AF.END_ENTITY_URI = HCO.ENTITY_URI
WHERE AF.relation_type = 'ContactAffiliations'
AND AF.ACTIVE = 'TRUE';

TO DO: an additional condition that should be included:

  • the query needs to return only HCP-HCO pairs for which "P_AFFIL_RELATION_TYPE.RELATIONSHIPDESCRIPTION_LKP" = 'HCPRelationshipDescription:CON' \"(question)\"

A pair HCP plus HCO must be unique.

WORKPLACE_NAME | VARCHAR | Reltio workplace name or Reltio workplace parent name. | configuration/entityTypes/HCO/attributes/Name

\"(tick)\"
SELECT HCO.NAME
FROM CUSTOMER_SL.P_HCP HCP
INNER JOIN CUSTOMER_SL.P_AFFILIATIONS AF
    ON HCP.ENTITY_URI = AF.START_ENTITY_URI
INNER JOIN CUSTOMER_SL.P_HCO HCO
    ON AF.END_ENTITY_URI = HCO.ENTITY_URI
WHERE AF.relation_type = 'ContactAffiliations'
AND AF.ACTIVE = 'TRUE';

A pair HCP plus HCO must be unique.

STATUS | BOOLEAN | Reltio Entity status

\"(tick)\" customer_sl.p_hcp.active; mapping rule TRUE = ACTIVE

LAST_MODIFICATION_DATE | TIMESTAMP_LTZ | Entity update time in Snowflake | configuration/entityTypes/HCP/updateTime

\"(tick)\" p_entity_update.SF_UPDATE_TIME

FIRST_NAME | VARCHAR | configuration/entityTypes/HCP/attributes/FirstName

\"(tick)\" customer_sl.p_hcp.first_name

LAST_NAME | VARCHAR | configuration/entityTypes/HCP/attributes/LastName

\"(tick)\" customer_sl.p_hcp.last_name
TITLE_CODE | VARCHAR | configuration/entityTypes/HCP/attributes/Title

LOV Name COMPANY = HCPTitle
LOV Name IQVIA = LKUP_IMS_PROF_TITLE

select c.canonical_code from
customer_sl.p_hcp hcp,
customer_sl.p_codes c
where
hcp.title_lkp = c.code_id

TITLE_DESC | VARCHAR | configuration/entityTypes/HCP/attributes/Title

LOV Name COMPANY = THCPTitle
LOV Name IQVIA = LKUP_IMS_PROF_TITLE

select c.desc from
customer_sl.p_hcp hcp,
customer_sl.p_codes c
where
hcp.title_lkp = c.code_id
IS_PRESCRIBER

CASE
WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.PRES' THEN 'Y'
WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.NPRS' THEN 'N'
ELSE -- to be defined
END

COUNTRY | Country code | configuration/entityTypes/Location/attributes/country

customer_sl.p_hcp.country
PRIMARY_ADDRESS_LINE_1

IQVIA: configuration/entityTypes/Location/attributes/AddressLine1
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1

select a.address_line1 from customer_sl.p_addresses a where a.address_rank = 1

PRIMARY_ADDRESS_LINE_2

IQVIA: configuration/entityTypes/Location/attributes/AddressLine2
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2

select a.address_line2 from customer_sl.p_addresses a where a.address_rank = 1

PRIMARY_ADDRESS_CITY

IQVIA: configuration/entityTypes/Location/attributes/City
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/City

select a.city from customer_sl.p_addresses a where a.address_rank = 1

PRIMARY_ADDRESS_POSTAL_CODE

IQVIA: configuration/entityTypes/Location/attributes/Zip/attributes/ZIP5
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5

select a.ZIP5 from customer_sl.p_addresses a where a.address_rank = 1

PRIMARY_ADDRESS_STATE

IQVIA: configuration/entityTypes/Location/attributes/StateProvince
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvince

LOV Name COMPANY = State

select c.desc from
customer_sl.p_codes c,
customer_sl.p_addresses a
where
a.address_rank = 1
and a.STATE_PROVINCE_LKP = c.code_id

PRIMARY_ADDR_STATUS

IQVIA: configuration/entityTypes/Location/attributes/VerificationStatus
COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus

customer_sl.p_addresses.verification_status
PRIMARY_SPECIALTY_CODE | configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

LOV Name COMPANY = HCPSpecialty
LOV Name IQVIA = LKUP_IMS_SPECIALTY

select c.canonical_code from
customer_sl.p_specialities s,
customer_sl.p_codes c
where s.specialty_lkp = c.code_id
and s.rank = 1
;

There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value.

PRIMARY_SPECIALTY_DESC | configuration/entityTypes/HCO/attributes/Specialities/attributes/Specialty

LOV Name COMPANY = LKUP_IMS_SPECIALTY
LOV Name IQVIA = LKUP_IMS_SPECIALTY

select c.desc from
customer_sl.p_specialities s,
customer_sl.p_codes c
where s.specialty_lkp = c.code_id
and s.rank = 1
;

There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value.

GO_STATUS | VARCHAR | configuration/entityTypes/HCP/attributes/Compliance/attributes/GOStatus

go_status <> ''

CASE
WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'Y'
WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'N'
ELSE 'Not defined'
END

\"(lightbulb)\" (currently this is an empty table)

IDENTIFIER1_CODE | VARCHAR | Reltio identifier code. | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

select ct.canonical_code from
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

There is a need to set steering parameters that match the country code with the proper code identifiers, according to the DRIVEN_TABLE2 tables described on the PTE_SL page. This is the place for the first one.

IDENTIFIER1_CODE_DESC | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

select ct.desc from
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

IDENTIFIER1_VALUE | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID

select id from customer_sl.p_identifiers

IDENTIFIER2_CODE | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

select ct.canonical_code from
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

A maximum of two identifiers can be returned.

There is a need to set steering parameters that match the country code with the proper code identifiers, according to the DRIVEN_TABLE2 tables described on the PTE_SL page. This is the place for the second one.

IDENTIFIER2_CODE_DESC | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type

select ct.desc from
customer_sl.p_codes ct,
customer_sl.p_identifiers d
where
ct.code_id = d.TYPE_LKP

IDENTIFIER2_VALUE | VARCHAR | configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID

select id from customer_sl.p_identifiers
DGSCATEGORY | VARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory
COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitCategory

LOV Names: LKUP_BENEFITCATEGORY_HCP, LKUP_BENEFITCATEGORY_HCO

select DisclosureBenefitCategory from p_hcp

DGSCATEGORY_CODE | VARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategory

LOV Names: LKUP_BENEFITCATEGORY_HCP, LKUP_BENEFITCATEGORY_HCO

comment: select i_code.canonical_code for the value returned from DisclosureBenefitCategory

DGSTITLE | VARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle
COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitTitle

LOV Name: LKUP_BENEFITTITLE

select DisclosureBenefitTitle from p_hcp

DGSTITLE_CODE | VARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitle

LOV Name: LKUP_BENEFITTITLE

comment: select i_code.canonical_code for the value returned from DisclosureBenefitTitle

DGSQUALITY | VARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality
COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitQuality

LOV Name: LKUP_BENEFITQUALITY

select DisclosureBenefitQuality from p_hcp

DGSQUALITY_CODE | VARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQuality

LOV Name: LKUP_BENEFITQUALITY

comment: select i_code.canonical_code for the value returned from DisclosureBenefitQuality

DGSSPECIALTY | VARCHAR

IQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty
COMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitSpecialty

LOV Name: LKUP_BENEFITSPECIALTY

DisclosureBenefitSpecialty

DGSSPECIALTY_CODE | VARCHAR

configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialty

LOV Name: LKUP_BENEFITSPECIALTY

comment: select i_code.canonical_code for the value returned from DisclosureBenefitSpecialty
SECONDARY_SPECIALTY_DESC | VARCHAR

EMAIL | VARCHAR

PHONE | VARCHAR




" }, { "title": "Global Data Mart", "pageID": "196886082", "pageLink": "/display/GMDM/Global+Data+Mart", "content": "

The section describes the structure of the MDM GLOBAL Data Mart in Snowflake. The GLOBAL Data Mart contains consolidated data from multiple regional data marts.

\"\"

Databases:

The Global MDM Data Mart connects all markets using Snowflake DB Replication (if in a different zone) or a local DB (if in the same zone).

<ENV>: DEV/QA/STG/PROD

| MDM_REGION | MDM Region details | Snowflake Instance | Snowflake DB name | Type | Model |
| EMEA | link | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com, https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com | COMM_EMEA_MDM_DMART_<ENV>_DB | local | P / P_HI |
| AMER | link | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com, https://amerprod01.us-east-1.privatelink.snowflakecomputing.com | COMM_AMER_MDM_DMART_<ENV>_DB | replica | P / P_HI |
| US | link | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com, https://amerprod01.us-east-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_<ENV> | replica | P / P_HI |
| APAC | link | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com, https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com | COMM_APAC_MDM_DMART_<ENV>_DB | local | P / P_HI |
| EU | link | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com, https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com | COMM_EU_MDM_DMART_<ENV>_DB | local | I |

Consolidated GLOBAL Schema:

The COMM_GBL_MDM_DMART_<ENV>_DB database includes the following schema:

Users accessing the CUSTOMER_SL schema can query across all markets, keeping in mind the following details:

P_ prefixed views

Consolidated view from all markets that are on the "P" Model.

The first column in each view is MDM_REGION, representing which market the specific row belongs to.

Each market may contain a different number of columns, and some columns that exist in one market may not be available in another. The consolidated views aggregate all columns from all markets.

Corresponding data model: Dynamic views for COMPANY MDM Model

P_HI prefixed views

Consolidated view from all markets that are on the "P_HI" Model.

The first column in each view is MDM_REGION, representing which market the specific row belongs to.

Each market may contain a different number of columns, and some columns that exist in one market may not be available in another. The consolidated views aggregate all columns from all markets.

I_ prefixed views

Views built on the Legacy IQVIA Reltio Model, from the EU market, which uses the "I" Model.

Corresponding data model: Dynamic views for IQIVIA MDM Model
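For example, a cross-market query against a consolidated COMPANY-model view can be filtered on that first column (view and column names as used elsewhere in this documentation):

select mdm_region, entity_uri, first_name, last_name
from customer_sl.p_hcp
where mdm_region in ('EMEA', 'AMER')
  and active = 'TRUE';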


GLOBAL

Instance details

| ENV | Snowflake Instance | Snowflake DB Name | Reltio Tenant | Refresh time |
| DEV | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_DEV_DB | EMEA + AMER + US + APAC + EU | once per day |
| QA | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_QA_DB | EMEA + AMER + US + APAC + EU | once per day |
| STG | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_STG_DB | EMEA + AMER + US + APAC + EU | once per day |
| PROD | https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_PROD_DB | EMEA + AMER + US + APAC + EU | every 2h |

Roles

NPROD

<ENV> = DEV/QA/STG

Role Name

Landing

Customer

Customer SL

AES RS SL

Account Mapping

Metrics

Sandbox

PTE_SL

Warehouse

AD Group Name

COMM_GBL_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)
COMM_MDM_DMART_M_WH(M)
COMM_MDM_DMART_L_WH(L)
sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_DEVOPS_ROLE
COMM_GBL_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE
COMM_GBL_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_METRIC_ROLE
COMM_GBL_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFull
COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_MDM_ROLE
COMM_GBL_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_READ_ROLE
COMM_GBL_MDM_DMART_<ENV>_DATA_ROLE

Read-Only

Read-Only

COMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_DATA_ROLE
COMM_GBL_MDM_DMART_<ENV>_PTE_ROLE

Read-Only



Read-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_PTE_ROLE

PROD

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | PTE_SL | Warehouse | AD Group Name
COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLE
COMM_GBL_MDM_DMART_PROD_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PRD_MTCHAFFIL_ROLE
COMM_GBL_MDM_DMART_PROD_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_METRIC_ROLE
COMM_GBL_MDM_DMART_PROD_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_MDM_ROLE
COMM_GBL_MDM_DMART_PROD_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_READ_ROLE
COMM_GBL_MDM_DMART_PROD_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLE
COMM_GBL_MDM_DMART_PROD_PTE_ROLE | Read-Only | - | - | - | - | - | - | Read-Only | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLE




" }, { "title": "Global Data Materialization Process", "pageID": "356800042", "pageLink": "/display/GMDM/Global+Data+Materialization+Process", "content": "

\"\"

" }, { "title": "Regional Data Marts", "pageID": "196886987", "pageLink": "/display/GMDM/Regional+Data+Marts", "content": "

A regional data mart presents MDM data from a single region. Data is loaded from one selected Reltio instance.

Regional marts are refreshed more frequently than the global mart, which makes them a good choice for clients operating in local markets.


EMEA

Instance details

ENV | Snowflake Instance | Snowflake DB Name | Reltio Tenant | Refresh time
DEV | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_EMEA_MDM_DMART_DEV_DB | wn60kG248ziQSMW | every day between 2 am - 4 am EST
QA | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_EMEA_MDM_DMART_QA_DB | vke5zyYwTifyeJS | every day between 2 am - 4 am EST
STG | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_EMEA_MDM_DMART_STG_DB | Dzueqzlld107BVW | every day between 2 am - 4 am EST (*due to many projects running on the environment, the refresh time has been temporarily changed to "every 2 hours" for the clients' convenience)
PROD | https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/ | COMM_EMEA_MDM_DMART_PROD_DB | Xy67R0nDA10RUV6 | every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_EMEA_MDM_DMART_<ENV>_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_DEVOPS_ROLE
COMM_EMEA_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE
COMM_EMEA_MDM_DMART_<ENV>_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_METRIC_ROLE
COMM_EMEA_MDM_DMART_<ENV>_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_MDM_ROLE
COMM_EMEA_MDM_DMART_<ENV>_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_READ_ROLE
COMM_EMEA_MDM_DMART_<ENV>_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_DATA_ROLE

PROD

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLE
COMM_EMEA_MDM_DMART_PROD_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PRD_MTCHAFFIL_ROLE
COMM_EMEA_MDM_DMART_PROD_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_METRIC_ROLE
COMM_EMEA_MDM_DMART_PROD_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_MDM_ROLE
COMM_EMEA_MDM_DMART_PROD_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_READ_ROLE
COMM_EMEA_MDM_DMART_PROD_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLE



AMER

Instance details

ENV | Snowflake Instance | Snowflake DB Name | Reltio Tenant | Refresh time
DEV | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/ | COMM_AMER_MDM_DMART_DEV_DB | wJmSQ8GWI8Q6Fl1 | every day between 2 am - 4 am EST
QA | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/ | COMM_AMER_MDM_DMART_QA_DB | 805QOf1Xnm96SPj | every day between 2 am - 4 am EST
STG | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com/ | COMM_AMER_MDM_DMART_STG_DB | K7I3W3xjg98Dy30 | every day between 2 am - 4 am EST
PROD | https://amerprod01.us-east-1.privatelink.snowflakecomputing.com | COMM_AMER_MDM_DMART_PROD_DB | Ys7joaPjhr9DwBJ | every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_AMER_MDM_DMART_<ENV>_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_DEVOPS_ROLE
COMM_AMER_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE
COMM_AMER_MDM_DMART_<ENV>_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_METRIC_ROLE
COMM_AMER_MDM_DMART_<ENV>_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_MDM_ROLE
COMM_AMER_MDM_DMART_<ENV>_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_READ_ROLE
COMM_AMER_MDM_DMART_<ENV>_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_DATA_ROLE

PROD

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLE
COMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_RO | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_RO
COMM_AMER_MDM_DMART_PROD_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_METRIC_ROLE
COMM_AMER_MDM_DMART_PROD_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MDM_ROLE
COMM_AMER_MDM_DMART_PROD_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_READ_ROLE
COMM_AMER_MDM_DMART_PROD_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLE




US

Instance details

ENV | Snowflake Instance | Snowflake DB Name | Reltio Tenant | Refresh time
DEV | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_DEV | sw8BkTZqjzGr7hn | every day between 2 am - 4 am EST
QA | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_QA | rEAXRHas2ovllvT | every day between 2 am - 4 am EST
STG | https://amerdev01.us-east-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_STG | 48ElTIteZz05XwT | every day between 2 am - 4 am EST
PROD | https://amerprod01.us-east-1.privatelink.snowflakecomputing.com | COMM_GBL_MDM_DMART_PROD | 9kL30u7lFoDHp6X | every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_<ENV>_MDM_DMART_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_DEVOPS_ROLE
COMM_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_MTCH_AFFIL_ROLE
COMM_<ENV>_MDM_DMART_ANALYSIS_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | - | sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_ANALYSIS_ROLE
COMM_<ENV>_MDM_DMART_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_METRIC_ROLE
COMM_MDM_DMART_<ENV>_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_MDM_ROLE
COMM_<ENV>_MDM_DMART_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_READ_ROLE
COMM_MDM_DMART_<ENV>_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_DATA_ROLE


PROD

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_PROD_MDM_DMART_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLE
COMM_MDM_DMART_PROD_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLE
COMM_PROD_MDM_DMART_ANALYSIS_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_ANALYSIS_ROLE
COMM_PROD_MDM_DMART_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_METRIC_ROLE
COMM_MDM_DMART_PROD_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MDM_ROLE
COMM_PROD_MDM_DMART_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_READ_ROLE
COMM_MDM_DMART_PROD_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLE




APAC

Instance details

ENV | Snowflake Instance | Snowflake DB Name | Reltio Tenant | Refresh time
DEV | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_APAC_MDM_DMART_DEV_DB | w2NBAwv1z2AvlkgS | every day between 2 am - 4 am EST
QA | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_APAC_MDM_DMART_QA_DB | xs4oRCXpCKewNDK | every day between 2 am - 4 am EST
STG | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_APAC_MDM_DMART_STG_DB | Y4StMNK3b0AGDf6 | every day between 2 am - 4 am EST
PROD | https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/ | COMM_APAC_MDM_DMART_PROD_DB | sew6PfkTtSZhLdW | every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_APAC_MDM_DMART_<ENV>_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_DEVOPS_ROLE
COMM_APAC_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE
COMM_APAC_MDM_DMART_<ENV>_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_METRIC_ROLE
COMM_APAC_MDM_DMART_<ENV>_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_MDM_ROLE
COMM_APAC_MDM_DMART_<ENV>_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_READ_ROLE
COMM_APAC_MDM_DMART_<ENV>_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_DATA_ROLE

PROD

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLE
COMM_APAC_MDM_DMART_PROD_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PRD_MTCHAFFIL_ROLE
COMM_APAC_MDM_DMART_PROD_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_METRIC_ROLE
COMM_APAC_MDM_DMART_PROD_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_MDM_ROLE
COMM_APAC_MDM_DMART_PROD_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_READ_ROLE
COMM_APAC_MDM_DMART_PROD_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLE




EU (ex-us)

Instance details

ENV | Snowflake Instance | Snowflake DB Name | Reltio Tenant | Refresh time
DEV | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_EU_MDM_DMART_DEV_DB | FLy4mo0XAh0YEbN | every day between 2 am - 4 am EST
QA | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_EU_MDM_DMART_QA_DB | AwFwKWinxbarC0Z | every day between 2 am - 4 am EST
STG | https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com | COMM_EU_MDM_DMART_STG_DB | FW4YTaNQTJEcN2g | every day between 2 am - 4 am EST
PROD | https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/ | COMM_EU_MDM_DMART_PROD_DB | FW2ZTF8K3JpdfFl | every 2 hours

Roles

NPROD

<ENV> = DEV/QA/STG

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_<ENV>_MDM_DMART_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_DEVOPS_ROLE
COMM_MDM_DMART_<ENV>_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_MTCH_AFFIL_ROLE
COMM_EU_<ENV>_MDM_DMART_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_EU_<ENV>_MDM_DMART_METRIC_ROLE
COMM_MDM_DMART_<ENV>_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_MDM_ROLE
COMM_EU_MDM_DMART_<ENV>_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_READ_ROLE
COMM_MDM_DMART_<ENV>_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_DATA_ROLE

PROD

Role Name | Landing | Customer | Customer SL | AES RS SL | Account Mapping | Metrics | Sandbox | Warehouse | AD Group Name
COMM_PROD_MDM_DMART_DEVOPS_ROLE | Full | Full | Full | Full | Full | Full | Full | COMM_MDM_DMART_WH(S), COMM_MDM_DMART_M_WH(M), COMM_MDM_DMART_L_WH(L) | sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLE
COMM_MDM_DMART_PROD_MTCH_AFFIL_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Full | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLE
COMM_EU_MDM_DMART_PROD_METRIC_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_EU_PROD_MDM_DMART_METRIC_ROLE
COMM_MDM_DMART_PROD_MDM_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Full | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MDM_ROLE
COMM_PROD_MDM_DMART_READ_ROLE | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | Read-Only | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_READ_ROLE
COMM_MDM_DMART_PROD_DATA_ROLE | Read-Only | Read-Only | - | - | - | - | - | COMM_MDM_DMART_WH(S) | sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLE





" }, { "title": "MDM Admin Management API", "pageID": "294663752", "pageLink": "/display/GMDM/MDM+Admin+Management+API", "content": "" }, { "title": "Description", "pageID": "294663759", "pageLink": "/display/GMDM/Description", "content": "

MDM Admin is a management API that automates numerous repeatable tasks and enables the end user to perform them without raising a request and waiting for one of MDM Hub's engineers to pick it up.

In its current state, MDM Hub provides the following services:

Each functionality is described in detail in the following chapters.

API URL list

Tenant | Environment | MDM Admin API Base URL | Swagger URL - API Documentation
GBL (EX-US) | DEV | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/ | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-dev/swagger-ui/index.html
GBL (EX-US) | QA | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-qa/ | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-qa/swagger-ui/index.html
GBL (EX-US) | STAGE | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-stage/ | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-stage/swagger-ui/index.html
GBL (EX-US) | PROD | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-prod/ | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-prod/swagger-ui/index.html
GBLUS | DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/ | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-dev/swagger-ui/index.html
GBLUS | QA | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-qa/ | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-qa/swagger-ui/index.html
GBLUS | STAGE | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-stage/ | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-stage/swagger-ui/index.html
GBLUS | PROD | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-prod/ | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-prod/swagger-ui/index.html
EMEA | DEV | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/ | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html
EMEA | QA | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-qa/ | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-qa/swagger-ui/index.html
EMEA | STAGE | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-stage/ | https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-stage/swagger-ui/index.html
EMEA | PROD | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-emea-prod/ | https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-prod/swagger-ui/index.html
AMER | DEV | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-dev/ | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-dev/swagger-ui/index.html
AMER | QA | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-qa/ | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-qa/swagger-ui/index.html
AMER | STAGE | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-stage/ | https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-stage/swagger-ui/index.html
AMER | PROD | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-amer-prod/ | https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-prod/swagger-ui/index.html
APAC | DEV | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-dev/ | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-dev/swagger-ui/index.html
APAC | QA | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-qa/ | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-qa/swagger-ui/index.html
APAC | STAGE | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-stage/ | https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-stage/swagger-ui/index.html
APAC | PROD | https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-apac-prod/ | https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-prod/swagger-ui/index.html

Modify Kafka offset

If you are consuming from MDM Hub's outbound topic, you can now modify the offsets to skip/re-send messages. Please refer to the Swagger Documentation for additional details.

Example 1

The environment is EMEA DEV. The user wants to consume the last 100 messages from their topic again, using topic "emea-dev-out-full-test-topic-1" and consumer group "emea-dev-consumergroup-1".

Steps:

  1. Disable the consumer. Kafka will not allow offset manipulation if the topic/consumer group is in use.
  2. Send the below request:

    POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset
    {
      "topic": "emea-dev-out-full-test-topic-1",
      "groupId": "emea-dev-consumergroup-1",
      "shiftBy": -100
    }

  3. Enable the consumer. The last 100 events will be re-consumed.

Example 2

The user wants to consume all available messages from the topic again.

Steps:

  1. Disable the consumer. Kafka will not allow offset manipulation if the topic/consumer group is in use.
  2. Send the below request:

    POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset
    {
      "topic": "emea-dev-out-full-test-topic-1",
      "groupId": "emea-dev-consumergroup-1",
      "offset": "earliest"
    }

  3. Enable the consumer. All events from the topic will be available for consumption again.
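For scripted use, a minimal sketch of the same call in Python (assuming the requests package and HTTP basic auth; the actual authentication mechanism depends on your onboarding):

import requests

BASE = "https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev"
AUTH = ("mdm-hub-user", "********")  # assumption: basic auth; replace with your credentials

payload = {
    "topic": "emea-dev-out-full-test-topic-1",
    "groupId": "emea-dev-consumergroup-1",
    "shiftBy": -100,  # negative shift rewinds the consumer group by 100 messages
}
resp = requests.post(f"{BASE}/kafka/offset", json=payload, auth=AUTH, timeout=30)
resp.raise_for_status()
print(resp.status_code, resp.text)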

Resend Events

Allows re-sending events to MDM Hub's outbound Kafka topics, with filtering by Entity Type (entity or relation), modification date, country and source. Please refer to the Swagger Documentation for more details. An example scenario is described below.

Generated events are filtered by the topic routing rule (by country, event type, etc.). Generating events for some country may not result in anything being produced on the topic if this country is not added to the filter.

Before starting a Resend Events job, please make sure that the country is already added to the routing rule. Otherwise, request the additional country to be added (TODO: link to the instruction).

Example

For development purposes, the user needs to generate 10k events on their "emea-dev-out-full-test-topic-1" topic for the new market - Belgium (BE).

Steps:

  1. Send the below request:

    POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend
    {
      "countries": [
        "be"
      ],
      "objectType": "ENTITY",
      "limit": 10000,
      "reconciliationTarget": "emea-dev-out-full-test-topic-1"
    }

  2. A process will start on MDM Hub's side, generating events on this topic. The response to the request will contain the process ID (dag_run_id):

    {
      "dag_id": "reconciliation_system_amer_dev",
      "dag_run_id": "manual__2022-11-30T14:12:07.780320+00:00",
      "execution_date": "2022-11-30T14:12:07.780320+00:00",
      "state": "queued"
    }

  3. You can check the status of this process by sending the below request:

    GET https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/status/manual__2022-11-30T14:12:07.780320+00:00

    Response:

    {
      "dag_id": "reconciliation_system_amer_dev",
      "dag_run_id": "manual__2022-11-30T14:12:07.780320+00:00",
      "execution_date": "2022-11-30T14:12:07.780320+00:00",
      "state": "started"
    }

  4. Once the process is completed, all the requested events will have been sent to the topic.
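The resend request and status polling can be scripted as well; a minimal sketch under the same assumptions as above (the terminal states checked here follow the usual Airflow DAG-run naming and are an assumption):

import time
import requests

BASE = "https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev"
AUTH = ("mdm-hub-user", "********")  # assumption: basic auth

resend = {
    "countries": ["be"],
    "objectType": "ENTITY",
    "limit": 10000,
    "reconciliationTarget": "emea-dev-out-full-test-topic-1",
}
run = requests.post(f"{BASE}/events/resend", json=resend, auth=AUTH, timeout=30).json()
run_id = run["dag_run_id"]  # e.g. manual__2022-11-30T14:12:07.780320+00:00

while True:
    status = requests.get(f"{BASE}/events/resend/status/{run_id}", auth=AUTH, timeout=30).json()
    if status["state"] not in ("queued", "started", "running"):  # stop on a terminal state
        break
    time.sleep(60)  # the job may run for a while; poll once a minute
print(status["state"])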


" }, { "title": "Requesting Access", "pageID": "294663762", "pageLink": "/display/GMDM/Requesting+Access", "content": "

Access to the MDM Admin Management API should be requested via email sent to MDM Hub's DL: DL-ATP_MDMHUB_SUPPORT@COMPANY.com.

The chapters below contain the required details and email templates.

Modify Kafka Offset

Required details:

Email template:


Hi Team,

Please provide us with access to the MDM Admin API. Details below:

API: Kafka Offset
Team name: MDM Hub
Topics:
  - emea-dev-out-full-test-topic
  - emea-qa-out-full-test-topic
  - emea-stage-out-full-test-topic
Consumergroups:
  - emea-dev-hub
  - emea-qa-hub
  - emea-stage-hub
Username: mdm-hub-user

Best Regards,
Piotr


Resend Events

Required details:

Email template:


Hi Team,

Please provide us with access to the MDM Admin API. Details below:

API: Resend Events
Team name: MDM Hub
Topics:
  - emea-dev-out-full-test-topic
Username: mdm-hub-user

Best Regards,
Piotr


" }, { "title": "Flows", "pageID": "164470069", "pageLink": "/display/GMDM/Flows", "content": "


" }, { "title": "Batch clear ETL data load cache", "pageID": "333154693", "pageLink": "/display/GMDM/Batch+clear+ETL+data+load+cache", "content": "

Description

This is the batch operation to clear the batch cache. The process was designed to clear the Mongo cache (it removes records from batchEntityProcessStatus) for the specified batch name, sourceId type and value. This process is an adapter to the /batchController/{batchName}/_clearCache operation exposed by the MDM Hub batch service, which allows the user to clear the cache.

Link to the clear-batch-cache-by-crosswalk documentation exposed by the Batch Service: Clear Cache by crosswalks

Link to HUB UI documentation: HUB UI User Guide

 Flow: 

File load through UI details:

MAX Size

Max file size is 128MB

How to prepare the file to avoid unexpected errors:

File format description

The file needs to be encoded as UTF-8 without BOM.

Input file

File format: CSV 

Encoding: UTF-8

EOL: Unix

How to set this up using Notepad++:

Set encoding:

\"\"

Set EOL to Unix:

\"\"

Check (bottom right corner):

\"\"



Column headers:


Input file example

SourceType;SourceValue
Reltio;upIP01W
SAP;3000201428

\"\"clear_cache_ex.csv

Internals

Airflow process name: clear_batch_service_cache_{{ env }}

" }, { "title": "Batch merge & unmerge", "pageID": "164470091", "pageLink": "/pages/viewpage.action?pageId=164470091", "content": "

Description

This is the batch operation to merge/unmerge entities in Reltio. The process was designed to execute the force merge operation between Reltio objects. In Reltio there are merge rules that automatically merge objects, but the user may also explicitly define a merge between objects. This process is an adapter to the _merge or _unmerge operation that allows the user to provide a CSV file with multiple entries, so there is no need to execute the API multiple times.

 Flow: 

File load through UI details:

MAX Size

Max file size is 128MB or 10k records

How to prepare the file to avoid unexpected errors:

File format description

The file needs to be encoded as UTF-8 without BOM.

Merge operation 

Input file

File format: CSV 

Encoding: UTF-8

EOL: Unix

How to set this up using Notepad++:

Set encoding:

\"\"

Set EOL to Unix:

\"\"

Check (bottom right corner):

\"\"

File name format: merge_YYYYMMDD.csv


Drop location: 


Column headers:

The column names are kept for backward compatibility. The winner of the merge is always the entity that was created earlier. There is currently no possibility to select an explicit winner via the merge_unmerge batch.

 In the output file there are two additional fields:


Merge input file example
WinnerSourceName;WinnerId;LoserSourceName;LoserId
RELTIO;15hgDlsd;RELTIO;1JRPpffH
RELTI;15hgDlsd;RELTIO;1JRPpffH

Output file

File format: CSV 

Encoding: UTF-8

File name format: status_merge_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1.

Drop location: 

Column headers:

Merge output file example
sourceId.type,sourceId.value,status,errorCode,errorMessage
merge_RELTIO_RELTIO,0009e93_00Ff82E,updated,,
merge_GRV_GRV,6422af22f7c95392db313216_23f45427-8cdc-43e6-9aea-0896d4cae5f8,updated,,
merge_RELTI_RELTIO,15hgDlsd_1JRPpffH,notFound,EntityNotFoundByCrosswalk,Entity not found by crosswalk in getEntityByCrosswalk [Type:RELTI Value:15hgDlsd]

Unmerge operation 

Input file

File format: CSV 

Encoding: UTF-8

File name format: unmerge_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1.

Drop location: 

Column headers:


Unmerge input file example
SourceURI;TargetURI
15hgG6nP;15hgG6nQ1
15hgG6qc;15hgG6rq

Output file

File format: CSV 

Encoding: UTF-8

File name format: status_unmerge_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1.

Column headers:


Unmerge output file example
sourceId.type,sourceId.value,status,errorCode,errorMessage
unmerge_RELTIO_RELTIO,01lAEll_01jIfxx,updated,,
unmerge_RELTIO_RELTIO,0144V4D_01EFVyb,updated,,

Internals

Airflow process name: merge_unmerge_entities


" }, { "title": "Batch reload MapChannel data", "pageID": "407896553", "pageLink": "/display/GMDM/Batch+reload+MapChannel+data", "content": "


Description

This process is used to reload source data from the GCP/GRV systems. The user has two ways to indicate the data they want to reload:

An Airflow DAG is used to control the flow.


 Flow: 


File load through UI details:

MAX Size

Max file size is 128MB



Input file example

\"\"reload_map_channel_data.csv

Output file

File format: CSV 

Encoding: UTF-8

File name format: report__reload_map_channel_data_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1.

Column headers: TODO




Output file example TODO

SourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,status,errorCode,errorMessage
Reltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)
SAP,3000201428,HCOIT.SAP,3000201428,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:3000201428]


Internals

Airflow process name: reload_map_channel_data_{{ env }}

" }, { "title": "Batch Reltio Reindex", "pageID": "337846347", "pageLink": "/display/GMDM/Batch+Reltio+Reindex", "content": "

Description

This is the operation to execute the Reltio Reindex API. The process was designed to take an input CSV file with entity URIs and schedule the Reltio Reindex API.

More details about the Reltio API are available here: 5. Reltio Reindex

HUB wraps the entity URIs and schedules a Reltio Task.

 Flow: 

File load through UI details:

MAX Size

Max file size is 128MB. At roughly 18 bytes per "entities/<id>" line, around 7.4M entity URI lines fit into one 128MB file. Please check the file size before uploading; larger files will be rejected.

Please be aware that a 128MB file upload may take a few minutes depending on the user's network performance. Please wait until processing is finished and the response appears.

How to prepare the file to avoid unexpected errors:

File format description

The file needs to be encoded as UTF-8 without BOM.

Input file

File format: CSV 

Encoding: UTF-8

EOL: Unix

How to set this up using Notepad++:

Set encoding:

\"\"

Set EOL to Unix:

\"\"

Check (bottom right corner):

\"\"

Column headers:


Input file example

entities/E0pV5Xm
entities/1CsgdXN4
entities/2O5RmRi

\"\"reltio_reindex.csv

Internals

Airflow process name: reindex_entities_mdm_{{ env }}



" }, { "title": "Batch update identifiers", "pageID": "234704200", "pageLink": "/display/GMDM/Batch+update+identifiers", "content": "

Description

This is the batch operation to update identifiers in Reltio. The process was designed to update identifiers selected by identifier lookup code. This process is an adapter to the /entities/_updateAttributes operation exposed by the MDM Hub manager service, which allows the user to modify nested attributes using specific filters.

The source for the batch process is a CSV file in which each row corresponds to a single identifier that should be changed.

The batch service is used to control the flow.


 Flow: 


File load through UI details:

MAX Size

Max file size is 128MB or 10k records

How to prepare the file to avoid unexpected errors:

File format description

The file needs to be encoded as UTF-8 without BOM.

Input file

File format: CSV 

Encoding: UTF-8

EOL: Unix

How to set this up using Notepad++:

Set encoding:

\"\"

Set EOL to Unix:

\"\"

Check (bottom right corner):

\"\"

File name format: update_identifiers_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1.

Drop location: 

GBL:

EMEA:


Column headers:



Input file example

SourceCrosswalkType;SourceCrosswalkValue;IdentifierType;IdentifierValue;IdentifierTrust;IdentifierSourceName;Action;TargetCrosswalkType
Reltio;upIP01W;HCOIT.PFORCERX;TEST9_OEG_1000005218888;;;update;
SAP;3000201428;HCOIT.SAP;3000201428;Yes;SAP;update;

\"\"update_identifier_20220323.csv

Output file

File format: CSV 

Encoding: UTF-8

File name format: report__update_identifiers_YYYYMMDD_<seqNr>.csv

  <seqNr> - the sequence number of the file processed in the current day, starting from 1.

Column headers:




Output file example

SourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,status,errorCode,errorMessage
Reltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)
SAP,3000201428,HCOIT.SAP,3000201428,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:3000201428]


Internals

Airflow process name: update_identifiers_{{ env }}

" }, { "title": "Callbacks", "pageID": "164469861", "pageLink": "/display/GMDM/Callbacks", "content": "

Description

The HUB Callbacks are divided into the following two sections:

  1. PreCallback process is responsible for ranking the selected attributes (RankSorters). This callback is based on the full enriched events from the "${env}-internal-reltio-full-events" topic. Only events that do not require additional ranking updates in Reltio are published to the next processing stage. Some ranking calculations - like OtherHCOtoHCO - are delayed and processed in PreDylayCallbackService; this functionality was required to gather all relation changes in time windows and send events to Reltio only after the aggregation window is closed, which limits the number of events and updates sent to Reltio.
    1. OtherHCOtoHCOAffiliations Rankings - more details related to the OtherHCOtoHCO relation ranking with all PreDylayCallbackService and DelayRankActivationProcessor
      1. rank details OtherHCOtoHCOAffiliations RankSorter
  2. "Post" Callback process is responsible for the specific logic and is based on the events published by the Event Publisher component. The processes executed in the post callback stage are:
    1. AttributeSetter Callback - based on the "${env}-internal-callback-attributes-setter-in" events. Sets additional attributes for the EMEA COMPANY France market, e.g. ComplianceMAPPHCPStatus
    2. CrosswalkActivator Callback - based on the "${env}-internal-callback-activator-in" events. Activates selected crosswalks or soft-deletes specific crosswalks based on the configuration.
    3. CrosswalkCleaner Callback - based on the "${env}-internal-callback-cleaner-in" events. Cleans orphan HUB_Callback crosswalks or soft-deletes specific crosswalks based on the configuration.
    4. CrosswalkCleanerWithDelay Callback - based on the "${env}-internal-callback-cleaner-with-delay-in" events. Cleans orphan HUB_Callback crosswalks or soft-deletes specific crosswalks based on the configuration, with a delay (aggregates events in a time window)
    5. DanglingAffiliations Callback - based on the "${env}-internal-callback-orphan-clean-in" events. Removes orphan affiliations once one of the start or end objects has been removed.
    6. Derived Addresses Callback - based on the "${env}-internal-callback-derived-addresses-in" events. Rewrites an Address from HCO to HCP, connected to each other with some type of Relationship. Used on the IQVIA tenant
    7. HCONames Callback for IQVIA model - based on the "${env}-internal-callback-hconame-in" events. Calculates HCO Names.
    8. HCONames Callback for COMPANY model - based on the "${env}-internal-callback-hconame-in" events. Calculates HCO Names in the COMPANY Model.
    9. NotMatch Callback - based on the "${env}-internal-callback-potential-match-cleaner-in" events. Based on the created relationships between two matched objects, removes the match using the _notMatch operation.

More details about the HUB callbacks are described in the sub-pages. 

Flow diagram



\"\"



" }, { "title": "AttributeSetter Callback", "pageID": "250150261", "pageLink": "/display/GMDM/AttributeSetter+Callback", "content": "

Description

Callback auto-fills configured static Attributes, as long as the profile's attribute values meet the requirements. If no requirement (rule) is met, an optional cleaner deletes the existing, Hub-provided value for this attribute. AttributeSetter uses Manager's Update Attributes async interface.

Flow Diagram

\"\"

Steps


  1. After an event has been routed from EventPublisher, check the following:
    1. Entity must be active and have at least one active crosswalk
    2. Event Type must match the configured allowedEventTypes
    3. Country must match the configured allowedCountries
  2. For each configured setAttribute do the following:
    1. Check if the entityType matches
    2. For each rule do the following:
      1. Check if the criteria are met
      2. If the criteria are met:
        1. Check if the Hub crosswalk already provides the AutoFill value (either the Attribute's value or the lookupCode must match)
        2. If the attribute value is already present, do nothing
        3. If the attribute is not present:
          1. Add inserting the AutoFill attribute to the list of changes
          2. Check if the Hub crosswalk provides another value for this attribute
          3. If the Hub crosswalk provides another value, add deleting that attribute value to the list of changes
    3. If no rules were matched for this setAttribute and the cleaner is enabled:
      1. Find the Hub-provided value of this attribute and add deleting this value to the list of changes (if it exists)
    4. Map the list of changes into a single AttributeUpdateRequest object and send it to the Manager inbound topic. (A simplified sketch of the rule selection is shown below.)
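A simplified, illustrative sketch of the first-match rule selection in Python (nested criteria such as Specialities and lookupCode matching are omitted; the rule names and values are taken from the configuration example below):

def pick_value(rules, attrs):
    """Return the setValue of the first rule whose `where` criteria all match, else None."""
    for rule in rules:
        if all(attrs.get(c["attribute"]) in c["values"] for c in rule.get("where", [])):
            return rule["setValue"]
    return None  # no rule matched -> the cleaner may delete the Hub-provided value

rules = [
    {"setValue": "HCPMHS.Non-HCP",
     "where": [{"attribute": "SubTypeCode", "values": ["HCPST.A", "HCPST.C", "HCPST.CO", "HCPST.TC"]}]},
    {"setValue": "HCPMHS.HCP", "where": []},  # catch-all: matches every HCP
]
print(pick_value(rules, {"SubTypeCode": "HCPST.A"}))  # -> HCPMHS.Non-HCP
print(pick_value(rules, {"SubTypeCode": "HCPST.X"}))  # -> HCPMHS.HCP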

Configuration

Example AttributeSetter rule (multiple allowed):

- setAttribute: "ComplianceMAPPHCPStatus"
  entityType: "HCP"
  cleanerEnabled: true
  rules:
    - name: "AutoFill HCPMHS.Non-HCP IF SubTypeCode = Administrator (HCPST.A) / Researcher/Scientist (HCPST.C) / Counselor/Social Worker (HCPST.CO) / Technician/Technologist (HCPST.TC)"
      setValue: "HCPMHS.Non-HCP"
      where:
        - attribute: "SubTypeCode"
          values: [ "HCPST.A", "HCPST.C", "HCPST.CO", "HCPST.TC" ]

    - name: "AutoFill HCPMHS.Non-HCP IF SubTypeCode = Allied Health Professionals (HCPST.R) AND PrimarySpecialty = Psychology (SP.PSY)"
      setValue: "HCPMHS.Non-HCP"
      where:
        - attribute: "SubTypeCode"
          values: [ "HCPST.R" ]
        - attribute: "Specialities"
          nested:
            - attribute: "Primary"
              values: [ "true" ]
            - attribute: "Specialty"
              values: [ "SP.PSY" ]

    - name: "AutoFill HCPMHS.HCP for all others"
      setValue: "HCPMHS.HCP"

Rule inserts ComplianceMAPPHCPStatus attribute for every HCP:

Dependent Components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Generation of incoming events
Manager | Asynchronous processing of generated AttributeUpdateRequest events



" }, { "title": "CrosswalkActivator Callback", "pageID": "302701827", "pageLink": "/display/GMDM/CrosswalkActivator+Callback", "content": "

Description

CrosswalkActivator is the opposite of CrosswalkCleaner. There are 4 main processing branches (described in more detail in the "Algorithm" section):

Algorithm

For each event from ${env}-internal-callback-activator-in topic, do:

  1. filter by event country (configured),
  2. filter by event type (configured, usually only CHANGED events),
  3. Processing: WhenOneKeyExistsAndActive
    1. find all active Onekey crosswalks (exact Onekey source name is fetched from configuration)
    2. for each crosswalk in the input event entity do:
      1. if crosswalk type is in the configured list (getWhenOneKeyExistsAndActive) and crosswalk value is the same as one of active Onekey crosswalks, send activator request to Manager,
      2. activator request contains
        • entityType,
        • activated crosswalk with empty string ("") in deleteDate,
        • Country attribute rewritten from the input event,
      3. Manager processes the request as partialOverride.

  4. Processing: WhenAnyOneKeyExistsAndActive
    1. find all active Onekey crosswalks (exact Onekey source name is fetched from configuration)
    2. for each crosswalk in the input event entity do:
      1. if crosswalk type is in the configured list (getWhenAnyOneKeyExistsAndActive) and active Onekey crosswalks list is not empty, send activator request to Manager,
      2. activator request contains
        • entityType,
        • activated crosswalk with empty string ("") in deleteDate,
        • Country attribute rewritten from the input event,
      3. Manager processes the request as partialOverride.

  5. Processing: WhenAnyCrosswalksExistsAndActive
    1. find all active crosswalks (sources in the configuration except list are filtered out)
    2. for each crosswalk in the input event entity do:
      1. if the crosswalk type is in the configured list (getWhenAnyCrosswalksExistsAndActive) and the active crosswalks list is not empty, send an activator request to Manager,
      2. activator request contains
        • entityType,
        • activated crosswalk with empty string ("") in deleteDate,
        • Country attribute rewritten from the input event,
      3. Manager processes the request as partialOverride.
  6. Processing: ActivateOneKeyReferbackCrosswalkWhenRelatedOneKeyCrosswalkExistsAndActive
    1. find all OneKey crosswalks,
    2. check for active OneKey crosswalk with lookupCode included in the configured list oneKeyLookupCodes,
    3. check for related inactive OneKey referback crosswalk with lookupCode included in the configured list referbackLookupCodes,
    4. if above conditions are met, send activator request to Manager,
    5. activator request contains:
      • entityType,
      • activated OneKey referback crosswalk with empty string ("") in deleteDate,
      • Country attribute rewritten from the input event,
    6. Manager processes the request as partialOverride.

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Routes incoming events
Manager | Async processing of generated activator requests
" }, { "title": "CrosswalkCleaner Callback", "pageID": "164469744", "pageLink": "/display/GMDM/CrosswalkCleaner+Callback", "content": "

Description

This process removes crosswalks on Entity or Relation objects using the hard-delete or soft-delete operation. The process has the following sections.

  1. Hard Delete Crosswalks - Entities
    1. Based on the input configuration, removes the crosswalk from Reltio once all other crosswalks have been removed or inactivated. Once the source decides to inactivate the crosswalk, the associated attributes are removed from the Golden Profile (OV), and in that case the Rank attributes delivered by the HUB have to be removed. The process is used to remove orphan HUB_CALLBACK crosswalks that are used in the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) process
  2. Hard Delete Crosswalks - Relationships
    1. This is similar to the above. The only difference here is that the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) process adds new Rank attributes to the relationship between two objects. Once the relationship is deactivated by the source, the orphan HUB_CALLBACK crosswalk is removed.
  3. Soft Delete Crosswalks
    1. This process does not remove the crosswalk from Reltio. It updates the existing crosswalk, providing an additional deleteDate attribute on the soft-deleted crosswalk, so that the corresponding crosswalk becomes inactive in Reltio. There are three types of soft-deletes:
      1. always - soft-delete crosswalks based on the configuration once all other crosswalks are removed or inactivated,
      2. whenOneKeyNotExists - soft-delete crosswalks based on the configuration once the ONEKEY crosswalk is removed or inactivated. This process is similar to the "always" process, but the activation is only based on the ONEKEY crosswalk inactivation,
      3. softDeleteOneKeyReferbackCrosswalkWhenOneKeyCrosswalkIsInactive - soft-delete the ONEKEY referback crosswalk (lookupCode in configuration) once the ONEKEY crosswalk is inactivated.

Flow diagram


\"\"

Steps


Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service: CrosswalkCleanerStream (callback package) | Process events, calculate hard or soft-delete requests and publish to the next processing stage. | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Asynchronous process of generated events
" }, { "title": "CrosswalkCleanerWithDelay Callback", "pageID": "302701874", "pageLink": "/display/GMDM/CrosswalkCleanerWithDelay+Callback", "content": "

Description

CrosswalkCleanerWithDelay works similarly to CrosswalkCleaner. It uses the same Kafka Streams topology, but events are trimmed (eliminateNeedlessData parameter - all fields other than crosswalks are removed) and, most importantly, a deduplication window is added.

The deduplication window's parameters are configurable; there are no default values. EMEA PROD example:

This means the delay is equal to 8-9 hours.
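A toy sketch of the time-window deduplication idea in Python (the real implementation uses Kafka Streams state stores, not an in-memory dict):

import time

class DedupWindow:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.seen: dict[str, float] = {}  # key -> time first seen in the current window

    def accept(self, key: str, now: float | None = None) -> bool:
        """Return True if the event should pass; duplicates inside the window are dropped."""
        now = time.time() if now is None else now
        first = self.seen.get(key)
        if first is not None and now - first < self.window:
            return False
        self.seen[key] = now
        return True

w = DedupWindow(8 * 3600)  # an 8h window, matching the EMEA PROD delay noted above
print(w.accept("entities/abc", now=0))     # True  - first occurrence passes
print(w.accept("entities/abc", now=3600))  # False - duplicate inside the window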

Algorithm

For more details on algorithm steps, see CrosswalkCleaner Callback.

Dependencies

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Routes incoming events
Manager | Async processing of generated requests
" }, { "title": "DanglingAffiliations Callback", "pageID": "164469754", "pageLink": "/display/GMDM/DanglingAffiliations+Callback", "content": "

Description


DanglingAffiliation Callback consists of two sub-processes:

" }, { "title": "DanglingAffiliations Based On Inactive Objects", "pageID": "347635836", "pageLink": "/display/GMDM/DanglingAffiliations+Based+On+Inactive+Objects", "content": "

Description

The process soft-deletes active relationships whose start or end objects have been inactivated. Based on the configuration, only REMOVED or INACTIVATED events are processed. This means that once the start or end object becomes inactive, the process checks for the orphan relationship and sends the soft-delete request to the next processing stage.

Flow diagram

\"\"

Steps


Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service: DanglingAffiliationsStream (callback package) | Process events for inactive entities, calculate soft-delete requests and publish to the next processing stage. | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Asynchronous process of generated events
Hub Store | Relationship Cache
" }, { "title": "DanglingAffiliations Based On Same Start And End Objects", "pageID": "347635839", "pageLink": "/display/GMDM/DanglingAffiliations+Based+On+Same+Start+And+End+Objects", "content": "

Description

This process soft-deletes looping relations - active relations having the same startObject and endObject.

Such loops can be created in one of two ways:

Both of these create a RELATIONSHIP_CHANGED event, so the process is based on RELATIONSHIP_CREATED and RELATIONSHIP_CHANGED events.

Unlike the other DanglingAffiliations sub-process, this one does not query the cache for relations, because all the required information is in the processed event.

Flow diagram

\"\"

Steps


Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service: DanglingAffiliationsStream (callback package) | Process events for relations, calculate soft-delete requests and publish to the next processing stage. | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Asynchronous process of generated events
" }, { "title": "Derived Addresses Callback", "pageID": "294677441", "pageLink": "/display/GMDM/Derived+Addresses+Callback", "content": "

Description

The Callback is a tool for rewriting an Address from HCO to HCP, connected to each other with some type of Relationship.

Sequence Diagram

\"\"

Flow

The process is a callback. It operates on four Kafka topics:

Steps

The algorithm has 3 stages:

  1. Stage I – Event Publisher
    1. Event Publisher routes all above event types to ${env}-internal-callback-derived-addresses-in topic, optional filtering by country/source.
  2. Stage II – Callback Service – Preprocessing Stage
    1. If event subType ~ HCP_*: pass the targetEntity URI to ${env}-internal-callback-derived-addresses-hcp4calc
    2. If event subtype ~ HCO_*:
      1. Find all ACTIVE relations of types ${walkRelationType} ending at this HCO in the entityRelations collection.
      2. Extract the URIs of all HCPs at the starts of these relations and send them to topic ${env}-internal-callback-derived-addresses-hcp4calc
    3. If event subtype ~ RELATIONSHIP_*:
      1. Find the relation by URI in entityRelations collection.
      2. Check if relation type matches the configured ${walkRelationType}
      3. Extract URI of the startObject (HCP) and send it to the topic ${env}-internal-callback-derived-addresses-hcp4calc
  3. Stage III – Callback Service – Main Stage
    1. Input is HCP URI.
    2. Find HCP by URI in entityHistory collection.
    3.  Check:
      1. If we cannot find entity in entityHistory, log error and skip
      2. If found entity has other type than “configuration/entityTypes/HCP”, log error and skip
      3. If entity has status LOST_MERGE/DELETED/INACTIVE, skip
    4. In entityHistory, find all relations of types ${walkRelationType} starting at this HCP, extract HCO at the end of relation
    5. For each extracted HCO (Hospital) do:
      1. Find HCO in entityHistory collection
      2. Wrap HCO Addresses in a Create HCP Request:
        1. Rewrite all sub-attributes from each ov==true Hospital’s Address
        2. Add attributes from ${staticAddedFields}, according to strategy: overwrite or underwrite (add if missing)
        3. Add the required Country attribute (rewrite from HCP)
        4. Add two crosswalks:
          1. Data provider ${hubCrosswalk} with value: ${hcpId}_${hcoId}.
          2. Contributor provider Reltio type with HCP uri.
        5. Send the Create HCP Request to Manager through the bundle topic
    6. If HCP has a crosswalk of type and sourceTable as below:

      type: ${hubCrosswalk.type}
      sourceTable: ${hubCrosswalk.sourceTable}
      value: ${hcpId}_${hcoId}

      but its hcoUri suffix does not match any Hospital found, send a request to MDM Manager to delete the crosswalk.
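An illustrative sketch in Python of the ${hcpId}_${hcoId} crosswalk value convention and the orphan check from step 6 (the identifiers are invented):

def crosswalk_value(hcp_id: str, hco_id: str) -> str:
    return f"{hcp_id}_{hco_id}"  # e.g. "1abc_2def"

def is_orphan(value: str, hcp_id: str, found_hco_ids: set[str]) -> bool:
    """True if the crosswalk points at an HCO that is no longer among the HCP's Hospitals."""
    prefix = f"{hcp_id}_"
    return value.startswith(prefix) and value[len(prefix):] not in found_hco_ids

print(is_orphan("1abc_2def", "1abc", {"9xyz"}))  # True -> a delete-crosswalk request is sent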

Configuration

The following configurations have to be made (examples are for GBL tenants).

Callback Service

Add and handle the following section in the CallbackService application.yml in GBL:

callback:
...
  derivedAddresses:
    enabled: true
    walkRelationType:
      - configuration/relationTypes/HasHealthCareRole
    hubCrosswalk:
      type: HUB_Callback
      sourceTable: DerivedAddresses
    staticAddedFields:
      - attributeName: AddressType
        attributeValue: TYS.P
        strategy: over
    inputTopic: ${env}-internal-callback-derived-addresses-in
    hcp4calcTopic: ${env}-internal-callback-derived-addresses-hcp4calc
    outputTopic: ${env}-internal-derived-addresses-hcp-create
    cleanerTopic: ${env}-internal-async-all-cleaner-callbacks

Since we are adding a new crosswalk whose cleaning is handled by the Derived Addresses callback itself, we should exclude this crosswalk from the Crosswalk Cleaner config (similar to the HcoNames one):

callback:
  crosswalkCleaner:
    ...
    hardDeleteCrosswalkTypes:
      ...
      exclude:
        - type: configuration/sources/HUB_Callback
          sourceTable: DerivedAddresses

Manager

Add below to the MDM Manager bundle config:

bundle:
...
  inputs:
...
    - topic: "${env}-internal-derived-addresses-hcp-create"
      username: "mdm_callback_service_user"
      defaultOperation: hcp-create


Check DQ Rules configuration.

Event Publisher

A routing rule has to be added:

- id: derived_addresses_callback
  destination: "${env}-internal-derived-addresses-in"
  selector: "(exchange.in.headers.reconciliationTarget==null)
              && exchange.in.headers.eventType in ['simple']
              && exchange.in.headers.country in ['cn']
              && exchange.in.headers.eventSubtype in ['HCP_CREATED', 'HCP_CHANGED', 'HCO_CREATED', 'HCO_CHANGED', 'HCO_REMOVED', 'HCO_INACTIVATED', 'RELATIONSHIP_CREATED', 'RELATIONSHIP_CHANGED', 'RELATIONSHIP_REMOVED']"

Dependent Components

Component | Usage
Callback Service | Main component with flow implementation
Manager | Processing HCP Create, Crosswalk Delete operations
Event Publisher | Generation of incoming events
" }, { "title": "HCONames Callback for IQVIA model", "pageID": "164469742", "pageLink": "/display/GMDM/HCONames+Callback+for+IQVIA+model", "content": "

Description

The HCO names callback is responsible for calculating HCO Names. First, events are filtered and deduplicated, and the list of impacted HCPs is evaluated. Then the new HCO names are calculated. Finally, if an update is needed, the updates are sent for asynchronous processing under the HUB Callback source.

Flow diagram

\"\"

Steps

1. Impacted HCP Generator

  1. Listen for the events on the ${env}-internal-callback-hconame-in topic.
  2. Filter out against the list of predefined countries (AI, AN, AG, AR, AW, BS, BB, BZ, BM, BO, BR,
    CL, CO, CR, CW, DO, EC, GT, GY, HN, JM, KY, LC,
    MX, NI, PA, PY, PE, PN, SV, SX, TT, UY, VG, VE).
  3. Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED,
    RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED).
  4. Split into the two following branches. Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc topic.

Entity Event Stream

1. Extract the "Name" attribute from the target entity.

2. Reject the event if "Name" does not exist.

3. Check if there was already a record with the identical Key + Name pair (a duplicate).

4. Reject the duplicate.

5. Find the list of impacted HCPs based on the key.

6. Return a flat stream of the key and the list,
e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 returns (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)

Relation Event Stream

1. Map the Event to RelationWrapper(type, uRI, country, startURI, endURI, active, startObjectType, endObjectType).

2. Reject if any of the fields are missing.

3. Check if there was already a record with the identical Key + Name pair (a duplicate).

4. Reject the duplicate.

5. Find the list of impacted HCPs based on the key.

6. Return a flat stream of the key and the list,
e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 returns (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)

2. HCO Names Update Stream

  1. Listen for the events on the ${env}-internal-callback-hconame-hcp4calc topic.
  2. The incoming list of HCPs is passed to the calculator (described below).
  3. The HcoMainCalculatorResult contains the hcpUri, a list of entityAddresses and the mainWorkplaceUri (to update)
  4. The result is mapped to a RelationRequest
  5. The RelationRequest is published to the "${env}-internal-hconames-rel-create" topic.

3. HCP Calc Algorithm

calculate HCO Name

  1. HCOL1: get HCO from mongo where uri equals HCP.attributes.Workplace.refEntity.uri
  2. return HCOL1.Name

calculate MainHCOName

  1. get all target HCOs for relations (parameter traverseRelationTypes) where the start object id equals the HCOL1 uri.
  2. for each target HCO (curHCO) do
    1. if target HCO is last in hierarchy then
      1. return HCO.attributes.Name
    2. else if target HCO.attributes.TypeCode.lookupCode is on the configured list defined by parameter mainHCOTypeCodes for selected country
      1. return HCO.attributes.Name
    3. else if target HCO.attributes.Taxonomy.StrType.lookupCode is on the configured list defined by parameter mainHCOStructurTypeCodes for selected country
      1. return HCO.attributes.Name
    4. else if target HCO.attributes.ClassofTradeN.FacilityType.lookupCode is on the configured list defined by parameter mainHCOFacilityTypeCodes for selected country
      1. return HCO.attributes.Name
    5. else
      1. get all target HCO when start object id is curHCO.uri (recursive call)
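
The recursive traversal above can be pictured with the following Kotlin sketch; the in-memory relations map and the example config values stand in for the MongoDB lookup and the per-country parameters, so treat it as an illustration of the stop conditions rather than the actual implementation:

// Simplified HCO node; empty strings mean "attribute not set".
data class Hco(
    val uri: String,
    val name: String,
    val typeCode: String = "",
    val strType: String = "",
    val facilityType: String = "",
)

// Stand-ins for configuration (mainHCOTypeCodes etc.) and the relation lookup
// (parameter traverseRelationTypes); the values here are only examples.
val mainHCOTypeCodes = setOf("HOSP")
val mainHCOStructurTypeCodes = setOf("DEPT")
val mainHCOFacilityTypeCodes = setOf("35")
val relations = mutableMapOf<String, List<Hco>>()   // start uri -> target HCOs

fun calcMainHcoName(startUri: String): String? {
    for (cur in relations[startUri].orEmpty()) {
        val isLastInHierarchy = relations[cur.uri].orEmpty().isEmpty()   // stop: 2.1
        val matchesConfig = cur.typeCode in mainHCOTypeCodes ||          // stop: 2.2
            cur.strType in mainHCOStructurTypeCodes ||                   // stop: 2.3
            cur.facilityType in mainHCOFacilityTypeCodes                 // stop: 2.4
        if (isLastInHierarchy || matchesConfig) return cur.name
        calcMainHcoName(cur.uri)?.let { return it }                      // recurse: 2.5
    }
    return null
}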

update HCP addresses

  1. find the address in HCP.attributes.Address where Address.refEntity.uri = HCOL1.uri
  2. if found, and address.HCOName<>calcHCOName or address.MainHcoName<>calcMainHCOName, then
  3. create/update the HasAddress relation using the HUBCallback source

Triggers


Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service:HCONamesUpdateStream (callback package) | Evaluates the list of affected HCPs; based on that, HCO updates are sent when needed. | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Asynchronous processing of generated events
Hub Store | Cache




" }, { "title": "HCONames Callback for COMPANY model", "pageID": "243863711", "pageLink": "/display/GMDM/HCONames+Callback+for+COMPANY+model", "content": "

Description

The HCONames Callback for the COMPANY data model differs from the one for the IQVIA model.

The Callback consists of two stages: preprocessing and main processing. The main processing stage takes in HCP URIs, so the preprocessing stage extracts the affected HCPs from HCO, HCP and RELATIONSHIP events.

During main processing, the Callback calculates trees where nodes are HCOs (the tree root is always the input HCP) and edges are Relationships. HCOs and MainHCOs are extracted from this tree. MainHCOs are chosen following the business specification from the Callback config. Direct Relationships from HCPs to MainHCOs are created (or cleaned up if no longer applicable). If any of the HCP's Addresses matches an HCO/MainHCO Address, an adequate sub-attribute is added to this Address.

Algorithm

Stage I - preprocessing

Input topic: ${env}-internal-callback-hconame-in

Input event types:

For each HCO event from the topic:

  1. Deduplicate events by key (deduplication window size is configurable),
  2. using the MongoDB entityRelations collection, build the maximal dependency tree (recursive algorithm; see the sketch below) consisting of HCPs and HCOs connected with:
    1. relations of type equal to hcoHcoTraverseRelationTypes from configuration,
    2. relations of type equal to hcoHcpTraverseRelationTypes from configuration,
  3. return all HCPs from the dependency tree (all visited HCPs),
  4. generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).
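
A compact Kotlin sketch of this dependency-tree walk, assuming an in-memory edge list in place of the entityRelations collection (Edge, impactedHcps and the example relation-type values are illustrative only):

// Each edge is a simplified entityRelations record: (startUri, endUri, type).
data class Edge(val start: String, val end: String, val type: String)

val hcoHcoTraverseRelationTypes = setOf("configuration/relationTypes/OtherHCOtoHCOAffiliations")
val hcoHcpTraverseRelationTypes = setOf("configuration/relationTypes/ContactAffiliations")

fun impactedHcps(rootHco: String, edges: List<Edge>): Set<String> {
    val visited = mutableSetOf<String>()
    val hcps = mutableSetOf<String>()
    fun walk(uri: String) {
        if (!visited.add(uri)) return                           // cycle guard
        for (e in edges.filter { it.start == uri || it.end == uri }) {
            val other = if (e.start == uri) e.end else e.start
            when (e.type) {
                in hcoHcpTraverseRelationTypes -> hcps += other // collect connected HCPs
                in hcoHcoTraverseRelationTypes -> walk(other)   // keep walking the HCO tree
            }
        }
    }
    walk(rootHco)
    return hcps                                                 // all visited HCPs
}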

For each RELATIONSHIP event from the topic:

  1. Deduplicate events by key (deduplication window size is configurable),
  2. if relation's startObject is HCP:
    1. add HCP's entityURI to result list,
  3. if relation's startObject is HCO: 
    1. similarly to HCO events preprocessing, build dependency tree and return all HCPs from the tree. HCP URIs are added to the result list,
  4. for each HCP on the result list, generate an event and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).

For each HCP event from the topic:

  1. Deduplicate events by key (deduplication window size is configurable),
  2. generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).

Stage II - main processing

Input topic: ${env}-internal-callback-hconame-hcp4calc

For each HCP from the topic:

  1. Deduplicate by entity URI (deduplication window size is configurable),
  2. fetch current state of HCP from MongoDB, entityHistory collection,
  3. traversing by HCP-HCO relation type from config, find all affiliated HCOs with "CON" descriptors,
  4. traversing by HCO-HCO relation type from config, find all affiliated HCOs with MainHCO: "REL.MAI" or "REL.HIE" descriptors,
  5. from the "CON" HCO list, find all MainHCO candidates - a MainHCO candidate must pass the configured specification. Below is the MainHCO spec in EMEA PROD:
    \"\"
  6. if not yet existing, create new HcoNames relationship to MainHCO candidates by generating a request and sending to Manager async topic: ${env}-internal-hconames-rel-create,
  7. if existing, but not on candidates list, delete the relationship by generating a request and sending to Manager async topic: ${env}-internal-async-all-cleaner-callbacks,
  8. if one of input HCP's Addresses matches HCO Address or MainHCO Address, generate a request adding "HCO" or "MainHCO" sub-attribute to the Address and send to Manager async topic: ${env}-internal-hconames-hcp-create.
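
Steps 6 and 7 amount to a set difference between the existing MainHCO relationships and the freshly calculated candidates. A hedged Kotlin sketch (RelationAction and reconcile are illustrative names; the topics are the ones named above):

// Reconcile existing HcoNames relationships against the calculated candidates:
// missing ones are created, stale ones are deleted.
data class RelationAction(val hcpUri: String, val mainHcoUri: String, val topic: String)

fun reconcile(
    hcpUri: String,
    existingMainHcos: Set<String>,
    candidateMainHcos: Set<String>,
): List<RelationAction> {
    val create = (candidateMainHcos - existingMainHcos)
        .map { RelationAction(hcpUri, it, "\${env}-internal-hconames-rel-create") }
    val delete = (existingMainHcos - candidateMainHcos)
        .map { RelationAction(hcpUri, it, "\${env}-internal-async-all-cleaner-callbacks") }
    return create + delete
}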

Processing events

\"\"

1. Find Impacted HCP

  1. Listen for the events on the ${env}-internal-callback-hconame-in topic.
  2. Filter out against the list of predefined countries (GB, IE).
  3. Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED,
    RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED).
  4. Split into the two following branches. Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc topic.

Entity Event Stream

1. Extract the "Name" attribute from the target entity.
2. Reject the event if "Name" does not exist.
3. Check if there was already a record with the identical Key + Name pair (a duplicate).
4. Reject the duplicate.
5. Find the list of impacted HCPs based on the key.
6. Return a flat stream of the key and the list,
e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 returns (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)

Relation Event Stream

1. Map the Event to RelationWrapper(type, uri, country, startURI, endURI, active, startObjectType, endObjectType).
2. Reject the event if any of the fields is missing.
3. Check if there was already a record with the identical Key + Name pair (a duplicate).
4. Reject the duplicate.
5. Find the list of impacted HCPs based on the key.
6. Return a flat stream of the key and the list,
e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 returns (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)

2. Select HCOs affiliated with HCP

  1. Listen for incoming list of HCPs on the ${env}-internal-callback-hconame-hcp4calc.
  2. For each HCP a list of affiliated HCOs is retrieved from a database. HCP-HCO relation is based on type:
    configuration/relationTypes/ContactAffiliations
    and description:
    "CON"

3. Find Main HCO traversing HCO-HCO hierarchy

  1. For each HCO from the list of selected HCOs above a list of HCO is retrieved from the database.  HCO-HCO relation is based on type:
    configuration/relationTypes/OtherHCOtoHCOAffiliations
    and description:
    "RLE.MAI", "RLE.HIE"
    The step is repeated recursively until there are no affiliated HCOs or the Subtype matches the one provided in the configuration:
    mainHcoIndicator.subTypeCode (STOP condition)
  2. The result is mapped to a RelationRequest.
  3. The RelationRequest is published to the "${env}-internal-hconames-rel-create" topic.

4. Populate HcoName / Main HCO Name in HCP addresses if required 

  1. So far there are two HCO lists: HCOs affiliated with the HCP and Main HCOs.
  2. There's a check whether the HCP fields HCOName and MainHCOName, which are also two lists, match the HCO names.
  3. If not, then an HCP update event is generated.
  4. Address is a nested attribute in the model.
    Matching by uri must be replaced by matching by a key on attribute values.
    The match key will include AddressType, AddressLine1, AddressLine2, City, StateProvince, Zip5.
    The same key is configured in Reltio for address deduping.
    Changes to the address key in Reltio must be consulted with the HUB team.

    The target attributes in the addresses will be populated by creating a new HCP address having the same match key + HCOName and MainHCOName by the HubCallback source. Reltio will match the new address with the existing one based on the match key.

    Each HCP address will have its own HUBCallback crosswalk {type=HUB_Callback, value={Address Attribute URI}, sourceTable=HCO_NAME}


5. Create HCO -> Main HCO affiliation if it does not exist

  1. There's also a check whether the HCP's outgoing relations point to Main HCOs. Only relations with the type
    "configuration/relationTypes/ContactAffiliations"
    and description
    "MainHCO"
    are considered.
  2. Appropriate relations need to be created and inappropriate ones removed.


Data model

 \"\"

Dependencies

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Routes incoming events
Manager | Async processing of generated requests
" }, { "title": "NotMatch Callback", "pageID": "164469859", "pageLink": "/display/GMDM/NotMatch+Callback", "content": "

Description

The NotMatch callback was created to clear the potential match queue of the suspect matches when the linkage has been created by the DerivedAffiliations batch process. During this batch process, affiliations are created between COV and ONEKEY HCO objects. The potential match queue is not cleared, and this impacts the Data Steward process because the DS does not know which matches have to be processed through the UI. The potential match queue is cleared during RELATIONSHIP events processing using the "NotMatch callback" process. The process invokes the _notMatch operation in MDM and removes these matches from Reltio. All "_notMatch" matches are visible in the UI in the "Potential Matches" → "Not a Match" tab.

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service:PotentialMatchLinkCleanerStream | Processes relationship events in streaming mode and sets _notMatch in MDM | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Reltio Adapter for the _notMatch operation in asynchronous mode
Hub Store | Matches Store
" }, { "title": "PotentialMatchLinkCleaner Callback", "pageID": "302702435", "pageLink": "/display/GMDM/PotentialMatchLinkCleaner+Callback", "content": "

Description

Algorithm

Callback accepts relationship events - this is configurable, usually:

For each event from inbound topic (${env}-internal-callback-potential-match-cleaner-in):

  1. event is filtered by eventType (acceptedRelationEventTypes list in configuration),
  2. event is filtered by relationship type (acceptedRelationObjectTypes list in configuration),
  3. extract startObjectURI and endObjectURI from event targetRelation,
  4. search MongoDB, collection entityMatchesHistory, for records having both URIs in matches and having same matchType (matchTypesInCache list in configuration),
  5. if a record is found in the cache, check if it has already been sent (a boolean field in the document),
  6. if the record has not yet been sent, generate an EntitiesNotMatchRequest containing two fields:
  7. add the operation header and send the Request to Manager.
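
A condensed Kotlin sketch of this loop; RelationEvent, MatchCacheEntry and the sendToManager callback are simplified stand-ins for the real event model, the entityMatchesHistory collection and the Manager topic producer:

data class RelationEvent(val eventType: String, val relationType: String,
                         val startObjectURI: String, val endObjectURI: String)
data class MatchCacheEntry(val uris: Set<String>, val matchType: String, var sent: Boolean)
data class EntitiesNotMatchRequest(val firstUri: String, val secondUri: String)

// Example configuration values; the real lists come from the service config.
val acceptedRelationEventTypes = setOf("RELATIONSHIP_CREATED")
val acceptedRelationObjectTypes = setOf("configuration/relationTypes/ContactAffiliations")
val matchTypesInCache = setOf("potentialMatch")

fun process(event: RelationEvent, cache: List<MatchCacheEntry>,
            sendToManager: (EntitiesNotMatchRequest) -> Unit) {
    if (event.eventType !in acceptedRelationEventTypes) return      // step 1
    if (event.relationType !in acceptedRelationObjectTypes) return  // step 2
    val uris = setOf(event.startObjectURI, event.endObjectURI)      // step 3
    cache.filter { it.uris == uris && it.matchType in matchTypesInCache } // step 4
        .filterNot { it.sent }                                      // step 5
        .forEach {
            it.sent = true                                          // mark as sent
            sendToManager(EntitiesNotMatchRequest(event.startObjectURI,
                                                  event.endObjectURI)) // steps 6-7
        }
}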

Dependencies

Component | Usage
Callback Service | Main component with flow implementation
Publisher | Routes incoming events
Manager | Async processing of generated requests
" }, { "title": "PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType)", "pageID": "164469756", "pageLink": "/pages/viewpage.action?pageId=164469756", "content": "

Description

The main part of the process is responsible for setting up the Rank attributes on specific attributes in Reltio. Based on the input JSON events, the difference between the RAW entity and the Ranked entity is calculated, and the changes are shared through the asynchronous topic to Manager. Only events that contain no changes are published to the next processing stage; this limits the number of events sent to external Clients. Only data that is ranked and contains the correct callback is shared further. During processing, if changes are detected, the main events are skipped and a callback is executed. This causes the generation of new events in Reltio and the next calculation. The next calculation should detect 0 changes, but it may occur that the process falls into an infinite loop. Due to this, an MD5 checksum is implemented on the Entity and AttributeUpdate request to prevent such a situation.
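
The checksum guard can be illustrated with a minimal Kotlin sketch (the in-memory lastChecksum map and shouldProcess are illustrative; the real service computes the checksum over the Entity and AttributeUpdate request):

import java.security.MessageDigest

// entityUri -> MD5 of the last processed payload; skips a re-triggered
// update whose content did not change, breaking the potential infinite loop.
val lastChecksum = mutableMapOf<String, String>()

fun md5(payload: String): String =
    MessageDigest.getInstance("MD5")
        .digest(payload.toByteArray())
        .joinToString("") { "%02x".format(it) }

fun shouldProcess(entityUri: String, payload: String): Boolean {
    val sum = md5(payload)
    if (lastChecksum[entityUri] == sum) return false // identical state: stop here
    lastChecksum[entityUri] = sum
    return true
}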

The PreCallback is set up as a chain of responsibility with the following steps:

  1. Enricher Processor - enrich the object with the RefLookup service
  2. MultMergeProcessor - change the ID of the main entity to the loser ID when the Main Entity is different from the Target Entity - it means that a merge happened between the timestamp when Reltio generated the EVENT and when HUB retrieved the Entity from Reltio. In that case the outcome entity contains 3 IDs <New Winner, Old Winner as loser, loser>
  3. RankSorters - calculate rankings - transform the entity with correct Rank attributes
  4. Based on the calculated rank, generate pre-callback events that will be sent to Manager
  5. Global COMPANY ID callback - generation of changes on COMPANYGlobalCustomerIDs <if required when there is a need to fix the ID>
  6. Canada Micro-Bricks - autofill Canada Micro-Bricks
  7. HCPType Callback - calculate the HCPType attribute based on Speciality and SubTypeCode canonical Reltio codes.
  8. Cleaner Processor - clean reference attributes enriched in the first step (save in mongo only when cleanAdditionalRefAttributes is false)
  9. Inactivation Generator - generation of inactivated events (for each changed event)
  10. OtherHCOtoHCOAffiliations Rankings - generation of the event to the full-delay topic to process Ranking changes on relationship objects


Flow diagram

\"\"


Steps


Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-callback-service:PrecallbackStream (precallback package) | Processes full events, executes ranking services, generates callbacks, and publishes calculated events to the EventPublisher component | realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component with flow implementation
Entity Enricher | Generates incoming full events
Manager | Processes callbacks generated by this service
Hub Store | Cache-Store
" }, { "title": "Global COMPANY ID callback", "pageID": "218447103", "pageLink": "/display/GMDM/Global+COMPANY+ID+callback", "content": "

The process provides a unique Global COMPANY ID to each entity. The current solution on the Reltio side overwrites an entity's Global COMPANY ID when it loses a merge.

The Global COMPANY ID pre-callback solution was created to keep the Global COMPANY ID as a unique value per entity URI.

To fulfill the requirement, a solution based on the COMPANY Global ID Registry is prepared. It includes the elements below:

  1. Modification on the Orchestrator/Manager side - during the entity creation process
  2. Creation of the COMPANYGlobalId Pre-callback
  3. Modification of entity history to enrich the search process


Logical Architecture

\"\"


Modification on Orchestrator/Manager side - during the entity creation process

  1. Process description
    1. The request is sent to the HUB Manager - it may come from any allowed source, like ETL loading or the direct channel.
    2. The getCOMPANYIdOrRegister service is called, and the entityURI with the COMPANYGlobalId is stored in the COMPANYIdRegistry.
  2. From an external system point of view, the response to a client is modified. The COMPANY Global Id is part of the main attributes section in the JSON file (not in a nest).
    1. The response contains information about ov true and false.

{
    "uri": "entities/19EaDJ5L",
    "status": "created",
    "errorCode": null,
    "errorMessage": null,
    "COMPANYGlobalCustomerID": "04-125652694",
    "crosswalk": {
        "type": "configuration/sources/RX_AUDIT",
        "value": "test1_104421022022_RX_AUDIT_1",
        "deleteDate": ""
    }
}



{
    "uri": "entities/entityURI",
    "type": "configuration/entityTypes/HCP",
    "createdBy": "username",
    "createdTime": 1000000000000,
    "updatedBy": "username",
    "updatedTime": 1000000000000,

    "attributes": {
        "COMPANYGlobalCustomerID": [
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": true,
                "value": "04-111855581",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrkG2D"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-123653905",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrosrm"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-124022162",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrhcNY"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-117260591",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrnM10"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-129895294",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1mrOsvf6P"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-112615849",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2ZNzEowk3"
            },
            {
                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",
                "ov": false,
                "value": "04-111851893",
                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2LG7Grmul"
            }
        ],


3. How to store the GlobalCOMPANYId - process diagram (business level).

\"\"


Creation of COMPANYGlobalId Pre-callback

A publisher event model is extended with two new values:

    1. COMPANYGlobalCustomerIDs - list of ID. For some merge events, there is two entityURI ID. The order of the IDs must match the order of the IDs in entitiURI field.
    2. parentCOMPANYGlobalCustomerID - it has value only for the LOST_MERGE event type. It contains winner entityURI.

data class PublisherEvent(val eventType: EventType?,
                          val eventTime: Long? = null,
                          val entityModificationTime: Long? = null,
                          val countryCode: String? = null,
                          val entitiesURIs: List<String> = emptyList(),
                          val targetEntity: Entity? = null,
                          val targetRelation: Relation? = null,
                          val targetChangeRequest: ChangeRequest? = null,
                          val dictionaryItem: DictionaryItem? = null,
                          val mdmSource: String?,
                          val viewName: String? = DEFAULT_VIEW_NAME,
                          val matches: List<MatchItem>? = null,
                          // new fields for the Global COMPANY ID pre-callback:
                          val COMPANYGlobalCustomerIDs: List<String> = emptyList(),
                          val parentCOMPANYGlobalCustomerID: String? = null,
                          @JsonIgnore
                          val checksumChanged: Boolean = false,
                          @JsonIgnore
                          val isPartialUpdate: Boolean = false,
                          @JsonIgnore
                          val isReconciliation: Boolean = false
)

Changes were made to the entityHistory collection on the MongoDB side.

For each object in the collection, we also store the COMPANYGlobalCustomerID:

Additionally, new fields are stored in the Snowflake structure in the %_HCP and %_HCO views in the CUSTOMER_SL schema, like:

From an external system point of view, those internal changes are prepared to make the GlobalCOMPANYID field unique.

In case of overwriting the GlobalCOMPANYID on the Reltio MDM side (lost merge), the pre-callback's main task is to search for the original value in the COMPANYIdRegistry. It will then insert this value into the entity in Reltio MDM that has been overwritten due to the lost merge.

Process diagram:

\"\" 


Search LOST_MERGE entity with its first Global COMPANY ID

Process diagram:

\"\"

Process description:

  1. MDM HUB gets SEARCH calls from an external system. The search parameter is the Global COMPANY ID.
  2. Verify the entity status.
  3. If the entity status is 'LOST_MERGE', then replace COMPANYGlobalCustomerId with parentCOMPANYGlobalCustomerId in the search request.
  4. Make a search call to Reltio with the enriched data.


Dependent components

" }, { "title": "Canada Micro-Bricks", "pageID": "250138445", "pageLink": "/display/GMDM/Canada+Micro-Bricks", "content": "

Description

The process was designed to auto-fill the Micro Brick values on Addresses for Canadian market entities. The process is based on events streaming: the main event is recalculated based on the current state, and during comparison with the current mapping file the changes are generated. The generated change (partial event) updates Reltio, which leads to another change. Only when the entity is fully updated is the main event published to the output topic and processed in the next stage in the event publisher. The process also registers Changelog events on the topic; the Changelog events are saved only when the state of the entity is not partial. The Changelog events are required by the ReloadService, which is triggered by the Airflow DAG. Business users may change the mapping file; this triggers the reload process, the changelog events are processed, and the updates are generated in Reltio.

For Canada, we created a new brick type "Micro Brick" and implemented a new pre-callback service to populate the brick codes based on the postal code mapping file:

The mapping file will be delivered monthly, usually with no change. However, 1-2 times a year the Business will go through a re-mapping exercise that could cause significant change. Also, a few minor changes may happen (e.g., adding a new pair).

A monthly change process will be added to the Airflow scheduler as a DAG. This DAG will be scheduled and will generate the export from Snowflake; when there are mapping changes, changelog events will trigger updates to the existing MicroBrick codes in Reltio.

A new BrickType code has been added for Micro Brick - "UGM"

Flow diagram

Logical Architecture

\"\"


PreCallback Logic

\"\"


Reload Logic

\"\"

Steps

Overview of Reltio attributes

Brick:
"uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Brick"

Brick Type (RDM: a new BrickType code has been added for Micro Brick - "UGM"):
"uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Brick/attributes/Type"
"lookupCode": "rdm/lookupTypes/BrickType"

Brick Value:
"uri": "configuration/entityTypes/HCO/attributes/Addresses/attributes/Brick/attributes/Value"
"lookupCode": "rdm/lookupTypes/BrickValue"

PostalCode:
"uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5"

Canada postal code format, e.g.: K1A 0B1



PreCallback Logic

Flow:

  1. Activation:
    1. Check if the feature flag activation is true and the acceptedCountires list contains the entity country
    2. Take into account only the CHANGED and CREATED events in this pre-callback implementation
  2. Steps:
    1. For each address in the entity check:
      1. Check if the Address contains BrickType=microBrickType and BrickValue!=null and PostalCode!=null
        1. Check if the PostalCode is in the micro-bricks-mapping.csv file
          1. if true, compare
            1. if different, generate UPDATE_ATTRIBUTE
            2. if in sync, add an AddressChange with all attributes to the MicroBrickChangelog
          2. if false, compare BrickValue with “numberOfPostalCodeCharacters” from the PostalCode
            1. if different, generate UPDATE_ATTRIBUTE
            2. if in sync, add an AddressChange with all attributes to the MicroBrickChangelog
      2. Check if the Address does not contain BrickType=microBrickType and BrickValue==null and PostalCode!=null
        1. check if the PostalCode is in the micro-bricks-mapping.csv file
          1. if true, generate INSERT_ATTRIBUTE
          2. if false, take “numberOfPostalCodeCharacters” from the PostalCode and generate INSERT_ATTRIBUTE


  1. After the Addresses array is checked, the main event is blocked when partial. Only when there are 0 changes is the main event forwarded
    1. if there are changes, send a partialUpdate and skip the main event, depending on the forwardMainEventsDuringPartialUpdate flag
    2. if there are 0 changes, send the MainEvent and push the MicroBrickChangelog to the changelog topic

Note: The service has two roles. The main role is to check the PostalCode of each address against the mapping file and generate MicroBrick changes (INSERT (initial), UPDATE (changes)). The second role is to push MicroBrickChangelog events when 0 changes are detected. This flow should keep the changelog topic in sync with all changes happening in Reltio (address added/removed/changed). Because the ReloadService works on these changelog events and requires the exact URI of the BrickValue, this service needs to push all MicroBrickChangelog events with calculatedMicroBrickUri, calculatedMicroBrickValue and the current postalCode value for the specific address represented by the address URI.
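
The per-address decision described in the Flow above boils down to a small decision table. A hedged Kotlin sketch, where the mapping map stands in for the parsed micro-bricks-mapping.csv and the prefix length mirrors the "numberOfPostalCodeCharacters" parameter (all values are examples):

enum class ChangeType { INSERT_ATTRIBUTE, UPDATE_ATTRIBUTE, NONE }

val mapping = mapOf("K1A 0B1" to "MB-001")       // postal code -> micro brick value
const val numberOfPostalCodeCharacters = 3

// Expected brick: mapping hit, otherwise the configured postal-code prefix.
fun expectedBrick(postalCode: String): String =
    mapping[postalCode] ?: postalCode.take(numberOfPostalCodeCharacters)

fun decide(currentBrickValue: String?, postalCode: String?): ChangeType = when {
    postalCode == null -> ChangeType.NONE                        // nothing to derive from
    currentBrickValue == null -> ChangeType.INSERT_ATTRIBUTE     // no micro brick yet
    currentBrickValue != expectedBrick(postalCode) -> ChangeType.UPDATE_ATTRIBUTE
    else -> ChangeType.NONE                                      // in sync: changelog only
}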



Reload Logic (Airflow DAG)

Flow: 

  1. Activation
    1. Business users make changes on the Snowflake side to micro bricks mapping.
  2. Steps
    1. The DAG is scheduled once a month and processes changes made by the Business users; this triggers the Reload Logic on the Callback-Service components
    2. Get changes from Snowflake and generate the micro-bricks-mapping.csv file
    3. If there are 0 changes, END the process
    4. If there are changes in the micro-bricks-mapping.csv file, push the changes to Consul. Load the current configuration to GIT and push micro-bricks-mapping.csv to Consul.
    5. Trigger an API call on Callback-Service to reload the Consul configuration - this will cause the Pre-Callback processors and the ReloadService to use the new mapping files. Only after this operation is successful, go to the next step:
    6. Copy events from the current topic to the reload topic using a temporary file
      1. Note: the micro-brick process is divided into 2 steps
        1. The Pre-Callback generates ChangeLog events to $env-internal-microbricks-changelog-events
        2. The Reload service reads the events from $env-internal-microbricks-changelog-reload-events
      2. The main goal here is to copy events from one topic to another using the Kafka Console Producer and Consumer. The copy is made by the Kafka Console Consumer: we generate a temporary file with all events; the Consumer has to poll all events and wait 2 min until no new events appear in the topic. After this time the Kafka Console Producer sends all events to the target topic.
    7. After the events are in the target $env-internal-microbricks-changelog-reload-events topic, the next step described below starts automatically.

Reload Logic (Callback-Service)

Flow:

  1. Activation:
    1. Callback-Service exposes an API to reload the Consul configuration - because these changes are made once per month at most, there is no need to schedule this process internally in the service. The reload is made by the DAG and reloads the mapping file inside callback-service.
    2. Only after the Consul configuration is reloaded are the events pushed from $env-internal-microbricks-changelog-events to $env-internal-microbricks-changelog-reload-events.
    3. This triggers the MicroBrickReloadService: because it is based on Kafka Streams, the service subscribes to events in real time
  2. Steps:
    1. New events on the $env-internal-microbricks-changelog-reload-events topic will trigger the following:
    2. a Kafka Streams consumer reads the changelog topic
    3. for each MicroBrickChangelog event check:
      1. for each address in the addresses changes check:
        1. check if the PostalCode is in the micro-bricks-mapping.csv file
          1. if true and the current mapping value is different than calculatedMicroBrickValue → generate UPDATE_ATTRIBUTE
          2. if false and calculatedMicroBrickValue is different than “numberOfPostalCodeCharacters” from the PostalCode → generate UPDATE_ATTRIBUTE
      2. Gather all changes and push them to $env-internal-async-all-bulk-callbacks


The reload is required because it may happen that:


Note: The data model requires the calculatedMicroBrickUri because we need to trigger UPDATE_ATTRIBUTE on the specified BrickValue on a specific Address, so an exact URI is required to work properly with the Reltio UPDATE_ATTRIBUTE operation. Only INSERT_ATTRIBUTE requires the URI only on the address attribute, and the body will contain BrickType and BrickValue (this insert is handled in the pre-callback implementation). The changes made by the ReloadService will generate the next changes after the mapping file is updated. Once we trigger this event, Reltio will generate the change; this change will be processed by the pre-callback service (MicroBrickProcessor). The result of this processor will be no-change-detected (the entity and the mapping file are in sync) and new CHANGELOG event generation. It may happen that during a ReloadService run new Changelog events are constantly generated, but this will not impact the current process, because events are moved from the original topic to the target topic only by the manual copy during reloading. Additionally, the 24h compaction window on Kafka will overwrite old changes with new changes generated by the pre-callback. So after this time we will have only the newest record per key on the kafka topic, and these changes will be copied to the reload process after the next business change (1-2 times a year).


Attachment docs with more details:

IMPL:\"\" TEST:\"\"



Data Model and Configuration


ChangeLog Event
CHANGELOG Event:

Kafka KEY: entityUri

Body:

data class MicroBrickChangelog(
        val entityUri: String,
        val addressesChanges: List<AddressChange>,
)
data class AddressChange(
        val addressUri: String,
        val postalCode: String,
        val calculatedMicroBrickUri: String,
        val calculatedMicroBrickValue: String,
)




Triggers

Trigger action | Component | Action | Default time
IN Events incoming | Callback Service: Pre-Callback: Canada Micro-Brick Logic | Full events trigger the pre-callback stream; during processing, partial events are processed with generated changes. If data is in sync, a partial event is not generated and the main event is forwarded to external clients | realtime - events stream
User triggers a change in the mapping | API: Callback-service - sync Consul configuration; Pre-Callback: ReloadService - streaming | The business user changes the mapping file. The process refreshes the Consul store and copies data to the changelog topic, which triggers real-time processing on the Reload service | Manual trigger by Business User; realtime - events stream

Dependent components

Component | Usage
Callback Service | Main component of the flow implementation
Entity Enricher | Generates incoming full events
Manager | Processes callbacks generated by this service
" }, { "title": "RankSorters", "pageID": "302687133", "pageLink": "/display/GMDM/RankSorters", "content": "" }, { "title": "Address RankSorter", "pageID": "164469761", "pageLink": "/display/GMDM/Address+RankSorter", "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. an Address provided by the source "Reltio" is higher in the hierarchy than an Address provided by the "CRMMI" source. Based on this configuration, each address will be sorted in the following order:

addressSource:
"Reltio": 1
"EVR": 2
"OK": 3
"AMPCO": 4
"JPDWH": 5
"NUCLEUS": 6
"CMM": 7
"MDE": 8
"LocalMDM": 9
"PFORCERX": 10
"VEEVA_NZ": 11
"VEEVA_AU": 12
"VEEVA_PHARMACY_AU": 13
"CRMMI": 14
"FACE": 15
"KOL_OneView": 16
"GRV": 17
"GCP": 18
"MAPP": 19
"CN3RDPARTY": 20
"Rx_Audit": 21
"PCMS": 22
"CICR": 23

Additionally, Address Rank Sorting is based on the following configuration:

addressType:
"[TYS.P]": 1
"[TYS.PHYS]": 2
"[TYS.S]": 3
"[TYS.L]": 4
"[TYS.M]": 5
"[Mailing]": 6
"[TYS.F]": 7
"[TYS.HEAD]": 8
"[TYS.PHAR]": 9
"[Unknown]": 10
addressValidationStatus:
"[STA.3]": 1
"[validated]": 2
"[Y]": 3
"[STA.0]": 4
"[pending]": 5
"[NEW]": 6
"[RNEW]": 7
"[selfvalidated]": 8
"[SVALD]": 9
"[preregister]": 10
"[notapplicable]": 11
"[N]": 97
"[notvalidated]": 98
"[STA.9]": 99
addressStatus:
"[VALD]": 1
"[ACTV]": 2
"[INAC]": 98
"[INVL]": 99


Address rank sort process operates under the following conditions:

First, before address ranking, the Affiliation RankSorter has to be executed. It is required to get the appropriate value of the Workplace.PrimaryAffiliationIndicator attribute.

  1. Each address is sorted with the following rules:
    1. sort by the PrimaryAffiliationIndicator value. The address with "true" values is ranked higher in the hierarchy. The attribute used in this step is taken from the Workplace.PrimaryAffiliationIndicator
    2. sort by Validation Status (lowest rank from the configuration on TOP) - attribute Address.ValidationStatus
    3. sort by Status (lowest rank from the configuration on TOP) - attribute Address.Status
    4. sort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the Address.RefEntity.crosswalks, which means that each address is associated with the appropriate crosswalk, and based on the input configuration the order is calculated.
    5. sort by Primary Affiliation (a true value wins against a false value) - attribute Address.PrimaryAffiliation
    6. sort by Address Type (lowest rank from the configuration on TOP) - attribute Address.AddressType
    7. sort by Rank (lowest rank on TOP) in ascending order 1 -> 99 - attribute Address.AddressRank
    8. sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute Address.RefEntity.crosswalks.updateDate
    9. sort by Label value alphabetically in ascending order A -> Z - attribute Address.label
  2. Sorted addresses are recalculated for the new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest.

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting process

When recalculated Address Rank has a value equal to "1" then BestRecord attribute is added with the value set to "true"


Address rank sort process fallback operates under the following conditions:

  1. During sorting by Validation Status from configuration (1.b), when the ValidationStatus attribute is missing, the address is placed on position 90 (which means that an empty validation status is higher in the ranking than e.g. the STA.9 status)
  2. During sorting by Status from configuration (1.c), when the Status attribute is missing, the address is placed on position 90 (which means that an empty status is higher in the ranking than e.g. the INAC status)
  3. When the Source system name (1.d) is missing, the address is placed on position 99
  4. When the Address Type (1.e) is empty, the address is placed on position 99
  5. When the Rank (1.f) is empty, the address is placed on position 99
  6. For multiple Address Types for the same relation – the address with the higher rank is taken
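
The sort chain and its fallbacks map naturally onto Kotlin's comparator composition. A condensed sketch covering a subset of the keys above (the Address fields and the rank-map excerpts are illustrative):

data class Address(
    val primaryAffiliationIndicator: Boolean,
    val validationStatus: String = "",
    val status: String = "",
    val source: String = "",
    val addressType: String = "",
    val label: String = "",
)

// Excerpts of the configuration maps shown above.
val addressSourceRank = mapOf("Reltio" to 1, "EVR" to 2, "OK" to 3)
val validationStatusRank = mapOf("[STA.3]" to 1, "[validated]" to 2)
val statusRank = mapOf("[VALD]" to 1, "[ACTV]" to 2)
val addressTypeRank = mapOf("[TYS.P]" to 1, "[TYS.PHYS]" to 2)

val addressComparator: Comparator<Address> =
    compareByDescending<Address> { it.primaryAffiliationIndicator }   // rule 1.a
        .thenBy { validationStatusRank[it.validationStatus] ?: 90 }   // rule 1.b, fallback 90
        .thenBy { statusRank[it.status] ?: 90 }                       // rule 1.c, fallback 90
        .thenBy { addressSourceRank[it.source] ?: 99 }                // rule 1.d, fallback 99
        .thenBy { addressTypeRank[it.addressType] ?: 99 }             // address type, fallback 99
        .thenBy { it.label }                                          // final tie-breaker

// Step 2: reassign AddressRank 1..n after sorting.
fun rank(addresses: List<Address>): List<Pair<Int, Address>> =
    addresses.sortedWith(addressComparator).mapIndexed { i, a -> (i + 1) to a }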


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*



" }, { "title": "Addresses RankSorter", "pageID": "164469759", "pageLink": "/display/GMDM/Addresses+RankSorter", "content": "

GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Address provided by source "ONEKEY" is higher in the hierarchy than the Address provided by "COV" source. Configuration is divided by country and source lists, for which this order is applicable.  Based on this configuration, each address will be sorted in the following order:

addressesSource:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"Reltio" : 1
"ONEKEY" : 2
"IQVIA_RAWDEA" : 3
"IQVIA_DDD" : 4
"HCOS" : 5
"SAP" : 6
"SAPVENDOR" : 7
"COV" : 8
"DVA" : 9
"ENGAGE" : 10
"KOL_OneView" : 11
"ONEMED" : 11
"ICUE" : 12
"DDDV" : 13
"MMIT" : 14
"MILLIMAN_MCO" : 15
"SHS": 16
"COMPANY_ACCTS" : 17
"IQVIA_RX" : 18
"SEAGEN": 19
"CENTRIS" : 20
"ASTELAS" : 21
"EMD_SERONO" : 22
"MAPP" : 23
"VEEVALINK" : 24
"VALKRE" : 25
"THUB" : 26
"PTRS" : 27
"MEDISPEND" : 28
"PORZIO" : 29

 Additionally, Addresses Rank Sorting is based on the following configuration:

addressType:
"[OFFICE]": 1
"[PHYSICAL]": 2
"[MAIN]": 3
"[SHIPPING]": 4
"[MAILING]": 5
"[BILLING]": 6
"[SOLD_TO]": 7
"[HOME]": 8
"[PO_BOX]": 9


Address rank sort process operates under the following conditions:

  1. Each address is sorted with the following rules:
    1. sort by address status (active addresses on top) - attribute Status (is Active)
    2. sort by the source order number from the input source order configuration (lowest rank from the configuration on TOP) - the source is taken from the last updated crosswalk Addresses.RefEntity.crosswalks.updateDate when there are multiple from the same source
    3. sort by DEA flag (HCP only, with DEA flag set to true on top) - attribute DEAFlag
    4. sort by SingleAddressIndicator (true on top) - attribute SingleAddressInd
    5. sort by Source Rank (lowest rank on TOP) in ascending order 1 -> 99 - for ONEKEY the rank is calculated with a minus sign - attribute Source.SourceRank
    6. sort by address type of HCO and MCO only (lowest rank from the configuration on TOP) - attribute AddressType
    7. sort by COMPANYAddressId (addresses with this attribute are on top) - attribute COMPANYAddressID
  2. Sorted addresses are recalculated for new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRank

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting process


MORAWM03 explaining the reverse ranking for ONEKEY Addresses:

Here is the clarification:

The minus rank can be related only to the ONEKEY source and will be related to the lowest precedence address.

All other sources, different than ONEKEY, contain the normal SourceRank source precedence - it means that SourceRank 1 will be on top. We sort the SourceRank attribute in ascending order 1 -> 99 (lowest source rank on TOP), so SourceRank 1 will be first, SourceRank 2 second and so on.

Due to the ONEKEY data in the US - that rank code is a number from 10 to -10, with the larger number (i.e., 10) being the top ranked. We have logic that applies an opposite ranking to the ONEKEY SourceRank attribute. We sort in descending order …10 -> -10…, meaning that rank 10 will be on TOP (highest source rank on TOP).

We have reversed the SourceRank logic for ONEKEY; otherwise it led to the -10 SourceRank being ranked on TOP.

In the US, ONEKEY Addresses contain minus signs and are ranked in descending order (10, 9, 8 … -1, -2 .. -10).

I am sorry for the confusion that was made in the previous explanation.


This opposite logic for ONEKEY SourceRank data is in:

Addresses: https://confluence.COMPANY.com/display/GMDM/Addresses+RankSorter
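
The sign handling can be reduced to a one-line normalization before the usual ascending sort. A hedged Kotlin sketch (normalizedSourceRank is an illustrative name; the real sorter applies this only within the source-rank step):

// ONEKEY SourceRank runs 10..-10 with 10 best, so its sign is flipped;
// every other source already uses 1 as the best rank.
fun normalizedSourceRank(source: String, sourceRank: Int): Int =
    if (source == "ONEKEY") -sourceRank else sourceRank

fun main() {
    val ranks = listOf("ONEKEY" to 10, "ONEKEY" to -10, "COV" to 1, "COV" to 5)
    println(ranks.sortedBy { (src, r) -> normalizedSourceRank(src, r) })
    // prints [(ONEKEY, 10), (COV, 1), (COV, 5), (ONEKEY, -10)]
}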



DOC:

\"\"



EMEA/AMER/APAC


This feature requires the following configuration:

This map contains sources with appropriate sort numbers. Configuration is divided by country and source lists, for which this order is applicable. For example, an Address provided by the source "Reltio" is higher in the hierarchy than an Address provided by the "ONEKEY" source. Based on this configuration, each address will be sorted in the following order:


EMEA

addressesSource:
- countries:
- GB
- IE
- FK
- FR
- BL
- GP
- MF
- MQ
- NC
- PF
- PM
- RE
- TF
- WF
- ES
- DE
- IT
- VA
- SM
- TR
- RU
rankSortOrder:
Reltio: 1
ONEKEY: 2
SAP: 3
SAPVENDOR: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
SEAGEN: 9
GRV: 10
GCP: 11
SSE: 12
BIODOSE: 13
BUPA: 14
CH: 15
HCH: 16
CSL: 17
1CKOL: 18
VEEVALINK: 19
VALKRE: 20
THUB: 21
PTRS: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
MEDPAGESHCP: 3
MEDPAGESHCO: 3
SAP: 4
SAPVENDOR: 5
ENGAGE: 6
MAPP: 7
PFORCERX: 8
PFORCERX_ODS: 8
KOL_OneView: 9
ONEMED: 9
SEAGEN: 10
GRV: 11
GCP: 12
SSE: 13
SDM: 14
PULSE_KAM: 15
WEBINAR: 16
DREAMWEAVER: 17
EVENTHUB: 18
SPRINKLR: 19
VEEVALINK: 20
VALKRE: 21
THUB: 22
PTRS: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL
AMER
addressesSource:
- countries:
- ALL
rankSortOrder:
Reltio: 1
DCR_SYNC: 2
ONEKEY: 3
IMSO: 4
CS: 5
PFCA: 6
WSR: 7
PFORCERX: 8
PFORCERX_ODS: 8
SAP: 9
SAPVENDOR: 10
LEGACY_SFA_IDL: 11
ENGAGE: 12
MAPP: 13
SEAGEN: 14
GRV: 15
KOL_OneView: 16
ONEMED: 16
GCP: 17
SSE: 18
RX_AUDIT: 19
VEEVALINK: 20
VALKRE: 21
THUB: 22
PTRS: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL


APAC

addressesSource:
- countries:
- CN
rankSortOrder:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
PFORCERX: 7
PFORCERX_ODS: 7
KOL_OneView: 8
ONEMED: 8
ENGAGE: 9
MAPP: 10
GCP: 11
SSE: 12
VEEVALINK: 13
THUB: 14
PTRS: 15
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
JPDWH: 3
VOD: 4
PFORCERX: 5
PFORCERX_ODS: 5
SAP: 6
SAPVENDOR: 7
KOL_OneView: 8
ONEMED: 8
ENGAGE: 9
MAPP: 10
SEAGEN: 11
GRV: 12
GCP: 13
SSE: 14
PCMS: 15
WEBINAR: 16
DREAMWEAVER: 17
EVENTHUB: 18
SPRINKLR: 19
VEEVALINK: 20
VALKRE: 21
THUB: 22
PTRS: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL


This map contains AddressType attribute values with appropriate sort numbers, which means e.g. Address Type AT.OFF is higher in the hierarchy than the AddressType AT.MAIL. Based on this configuration, each address will be sorted in the following order:

addressType:
"[OFF]": 1
"[BUS]": 2
"[DEL]": 3
"[LGL]": 4
"[MAIL]": 5
"[BILL]": 6
"[HOM]": 7
"[UNSP]": 99

 
  1. Each address is sorted with the following rules: 
    1. sort by Primary affiliation indicator - address related to affiliation with primary usage tag on top, HCP and HCO addresses are compared by fields: AddressType, AddressLine1, AddressLine2, City, StateProvince and Zip5
    2. sort by Addresses.Primary attribute - primary addresses on TOP - applicable only for HCO entities
    3. sort by address status Addresses.Status (contains the AddressStatus configuration)
    4. sort by the source order number from the input source order configuration (lowest rank from the configuration on TOP) - the source is taken from the last updated crosswalk Addresses.RefEntity.crosswalks.updateDate when there are multiple from the same source
    5. sort by address type (lowest rank from the configuration on TOP) - attribute Addresses.AddressType
    6. sort by Source Rank (lowest rank on TOP) in ascending order 1 -> 99 - attribute Addresses.Source.SourceRank
    7. sort by COMPANYAddressId (addresses with this attribute are on top) - attribute Addresses.COMPANYAddressID
    8. sort by address label (alphabetically from A to Z)
  2. Sorted addresses are recalculated for new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRank

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting process


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"


" }, { "title": "Affiliation RankSorter", "pageID": "164469770", "pageLink": "/display/GMDM/Affiliation+RankSorter", "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Workplace provided by the source "Reltio" is higher in the hierarchy than a Workplace provided by the "CRMMI" source. Based on this configuration, each workplace will be sorted in the following order:

affiliation:
"Reltio": 1
"EVR": 2
"OK": 3
"AMPCO": 4
"JPDWH": 5
"NUCLEUS": 6
"CMM": 7
"MDE": 8
"LocalMDM": 9
"PFORCERX": 10
"VEEVA_NZ": 11
"VEEVA_AU": 12
"VEEVA_PHARMACY_AU": 13
"CRMMI": 14
"FACE": 15
"KOL_OneView": 16
"GRV": 17
"GCP": 18
"MAPP": 19
"CN3RDPARTY": 20
"Rx_Audit": 21
"PCMS": 22
"CICR": 23

The affiliation rank sort process operates under the following conditions:

  1. Each workplace is sorted with the following rules:
    1. sort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the Workplace.RefEntity.crosswalks, which means that each workplace is associated with the appropriate crosswalk, and based on the input configuration the order is calculated.
    2. sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute Workplace.RefEntity.crosswalks.updateDate
    3. sort by Label value alphabetically in ascending order A -> Z - attribute Workplace.label
  2. Sorted workplaces are recalculated for the new PrimaryAffiliationIndicator attribute – each Workplace is reassigned with an appropriate value. The winner gets "true" on the PrimaryAffiliationIndicator. Any loser, if it exists, is reassigned to "false".

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the workplace is excluded from the sorting process



GLOBAL US

This feature requires the following configuration. This map contains facility types with appropriate sort numbers, which means e.g. a FacilityType with the name "35" is higher in the hierarchy than a FacilityType with the name "27". Based on this configuration, each affiliation will be sorted in the following order:

facilityType:
"35": 1
"MHS": 1
"34": 1
"27": 2

Before sorting, each affiliation is enriched with the ProviderAffiliation attribute, which contains information about the HCO, because some of its attributes are needed during sorting.

Affiliation rank sort process operates under the following conditions:

  1. Each affiliation is sorted with the following rules
    1. sort by facility type (the lower number is on top) - attribute ClassofTradeN.FacilityType
    2. sort by affiliation confidence code in descending order (the higher number, or the one where it exists, is on top) - attribute RelationType.AffiliationConfidenceCode
    3. sort by staffed beds (an affiliation where it exists ranks higher; higher number on top) - attribute Bed.Type("StaffedBeds").Total
    4. sort by total prescribers (an affiliation where it exists ranks higher; higher number on top) - attribute TotalPrescribers
    5. sort by org identifier (an affiliation where it exists ranks higher; values are compared as strings) - attribute Identifiers.Type("HCOS_ORG_ID").ID
  2. Sorted affiliations are recalculated for the new Rank - each Affiliation Rank is reassigned with an appropriate number from lowest to highest - attribute Rank
    1. The affiliation with Rank = "1" is enriched with the UsageTag attribute with the "Primary" value.

Additionally:

  1. If facility type is not found it is set to 99


EMEA/AMER/APAC


This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Contact Affiliation provided by the source "Reltio" is higher in the hierarchy than a Contact Affiliation provided by the "ONEKEY" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each contact affiliation will be sorted in the following order:

EMEA



affiliation:
- countries:
- GB
- IE
- FK
- FR
- BL
- GP
- MF
- MQ
- NC
- PF
- PM
- RE
- TF
- WF
- ES
- DE
- IT
- VA
- SM
- TR
- RU
rankSortOrder:
Reltio: 1
ONEKEY: 2
SAP: 3
SAPVENDOR: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
SEAGEN: 9
VALKRE: 10
GRV: 11
GCP: 12
SSE: 13
BIODOSE: 14
BUPA: 15
CH: 16
HCH: 17
CSL: 18
THUB: 19
PTRS: 20
1CKOL: 21
MEDISPEND: 22
VEEVALINK: 23
PORZIO: 24
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
MEDPAGESHCP: 3
MEDPAGESHCO: 3
SAP: 4
SAPVENDOR: 5
PFORCERX: 6
PFORCERX_ODS: 6
KOL_OneView: 7
ONEMED: 7
ENGAGE: 8
MAPP: 9
SEAGEN: 10
VALKRE: 11
GRV: 12
GCP: 13
SSE: 14
SDM: 15
PULSE_KAM: 16
WEBINAR: 17
DREAMWEAVER: 18
EVENTHUB: 19
SPRINKLR: 20
THUB: 21
PTRS: 22
VEEVALINK: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL
 

AMER

affiliation:
- countries:
- ALL
rankSortOrder:
Reltio: 1
DCR_SYNC: 2
ONEKEY: 3
SAP: 4
SAPVENDOR: 5
PFORCERX: 6
PFORCERX_ODS: 6
KOL_OneView: 7
ONEMED: 7
LEGACY_SFA_IDL: 8
ENGAGE: 9
MAPP: 10
SEAGEN: 11
VALKRE: 12
GRV: 13
GCP: 14
SSE: 15
IMSO: 16
CS: 17
PFCA: 18
WSR: 19
THUB: 20
PTRS: 21
RX_AUDIT: 22
VEEVALINK: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL

APAC

affiliation:
- countries:
- CN
rankSortOrder:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
GCP: 7
SSE: 8
PFORCERX: 9
PFORCERX_ODS: 9
KOL_OneView: 10
ONEMED: 10
ENGAGE: 11
MAPP: 12
VALKRE: 13
THUB: 14
PTRS: 15
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
JPDWH: 3
VOD: 4
SAP: 5
SAPVENDOR: 6
PFORCERX: 7
PFORCERX_ODS: 7
KOL_OneView: 8
ONEMED: 8
ENGAGE: 9
MAPP: 10
SEAGEN: 11
VALKRE: 12
GRV: 13
GCP: 14
SSE: 15
PCMS: 16
WEBINAR: 17
DREAMWEAVER: 18
EVENTHUB: 19
SPRINKLR: 20
THUB: 21
PTRS: 22
VEEVALINK: 23
MEDISPEND: 24
PORZIO: 25
sources:
- ALL


The affiliation rank sort process operates under the following conditions:

  1. Each contact affiliation is sorted with the following rules:
    1. sort by affiliation status - active on top
    2. sort by source priority
    3. sort by source rank - attribute ContactAffiliation.RelationType.Source.SourceRank, ascending
    4. sort by confidence level - attribute ContactAffiliation.RelationType.AffiliationConfidenceCode
    5. sort by attribute last updated date - newest at the top
    6. sort by Label value alphabetically in ascending order A -> Z - attribute ContactAffiliation.label
  2. Sorted contact affiliations are recalculated for the new primary usage tag attribute – each contact affiliation is reassigned with an appropriate value. The winner gets the "true" on the primary usage tag.

Additionally:

  1. When refRelation.crosswalk.deleteDate exists, then the contact affiliation is excluded from the sorting process


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"

" }, { "title": "Email RankSorter", "pageID": "164469768", "pageLink": "/display/GMDM/Email+RankSorter", "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "1CKOL" is higher in the hierarchy than Email provided by any other source. Based on this configuration, each email address will be sorted in the following order:

email:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"1CKOL": 1

Email rank sort process operates under the following conditions:

  1. Each email is sorted with the following rules
  2. Group by the TypeIMS attribute and sort each group:
    1. sort by source rank (the lower number on top; an email having this attribute ranks above one without)
    2. sort by the validation status (the VALID value is the winner) - attribute ValidationStatus
    3. sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDate
    4. sort by email value alphabetically in ascending order A -> Z - attribute Email.email
  3. Sorted emails are recalculated for the new Rank - each Email Rank is reassigned with an appropriate number



GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "GRV" is higher in the hierarchy than Email provided by "ONEKEY" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each email address will be sorted in the following order:

email:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"Reltio" : 1
"GRV" : 2
"ENGAGE" : 3
"KOL_OneView" : 4
"ONEMED" : 4
"ICUE" : 5
"MAPP" : 6
"ONEKEY" : 7
"SHS" : 8
"VEEVALINK": 9
"SEAGEN": 10
"CENTRIS" : 11
"ASTELAS" : 12
"EMD_SERONO" : 13
"IQVIA_RX" : 14
"IQVIA_RAWDEA" : 15
"COV" : 16
"THUB" : 17
"PTRS" : 18
"SAP" : 19
"SAPVENDOR": 20
"IQVIA_DDD" : 22
"VALKRE": 23
"MEDISPEND" : 24
"PORZIO" : 25

Email rank sort process operates under the following conditions:

  1. Each email is sorted with the following rules
    1. sort by source order (the lower number on top)
    2. sort by source rank (the lower number on top; an email having this attribute ranks above one without)
  2. Sorted emails are recalculated for the new Rank - each Email Rank is reassigned with an appropriate number




EMEA/AMER/APAC

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "Reltio" is higher in the hierarchy than Email provided by "GCP" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each email address will be sorted in the following order:


EMEA

email:
- countries:
- GB
- IE
- FK
- FR
- BL
- GP
- MF
- MQ
- NC
- PF
- PM
- RE
- TF
- WF
- ES
- DE
- IT
- VA
- SM
- TR
- RU
rankSortOrder:
Reltio: 1
1CKOL: 2
GCP: 3
GRV: 4
SSE: 5
ENGAGE: 6
MAPP: 7
VEEVALINK: 8
SEAGEN: 9
KOL_OneView: 10
ONEMED: 10
PFORCERX: 11
PFORCERX_ODS: 11
THUB: 12
PTRS: 13
ONEKEY: 14
SAP: 15
SAPVENDOR: 16
SDM: 17
BIODOSE: 18
BUPA: 19
CH: 20
HCH: 21
CSL: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
GCP: 2
GRV: 3
SSE: 4
ENGAGE: 5
MAPP: 6
VEEVALINK: 7
SEAGEN: 8
KOL_OneView: 9
ONEMED: 9
PULSE_KAM: 10
SPRINKLR: 11
WEBINAR: 12
DREAMWEAVER: 13
EVENTHUB: 14
PFORCERX: 15
PFORCERX_ODS: 15
THUB: 16
PTRS: 17
ONEKEY: 18
MEDPAGESHCP: 19
MEDPAGESHCO: 19
SAP: 20
SAPVENDOR: 21
SDM: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL

AMER

email:
- countries:
- ALL
rankSortOrder:
Reltio: 1
DCR_SYNC: 2
GCP: 3
GRV: 4
SSE: 5
ENGAGE: 6
MAPP: 7
VEEVALINK: 8
SEAGEN: 9
KOL_OneView: 10
ONEMED: 10
PFORCERX: 11
PFORCERX_ODS: 11
ONEKEY: 12
IMSO: 13
CS: 14
PFCA: 15
WSR: 16
THUB: 17
PTRS: 18
SAP: 19
SAPVENDOR: 20
LEGACY_SFA_IDL: 21
RX_AUDIT: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL

APAC

email:
- countries:
- CN
rankSortOrder:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
ENGAGE: 7
MAPP: 8
VEEVALINK: 9
KOL_OneView: 10
ONEMED: 10
PFORCERX: 11
PFORCERX_ODS: 11
THUB: 12
PTRS: 13
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
JPDWH: 2
PCMS: 3
GCP: 4
GRV: 5
SSE: 6
ENGAGE: 7
MAPP: 8
VEEVALINK: 9
SEAGEN: 10
KOL_OneView: 11
ONEMED: 11
SPRINKLR: 12
WEBINAR: 13
DREAMWEAVER: 14
EVENTHUB: 15
PFORCERX: 16
PFORCERX_ODS: 16
THUB: 17
PTRS: 18
ONEKEY: 19
VOD: 20
SAP: 21
SAPVENDOR: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL


Email rank sort process operates under the following conditions:

  1. Each email is sorted with the following rules
    1. sort by cleanser status - valid/invalid
    2. sort by source order (the lower number on top)
    3. sort by source rank (the lower number on top; an email having this attribute ranks above one without)
    4. sort by last updated date - newest at the top
    5. sort by email value alphabetically in ascending order A -> Z - attribute Email.label
  2. Sorted emails are recalculated for the new Rank - each Email Rank is reassigned with an appropriate number



Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"

" }, { "title": "Identifier RankSorter", "pageID": "164469766", "pageLink": "/display/GMDM/Identifier+RankSorter", "content": "

IQVIA Model (Global)

Algorithm

The identifier rank sort process operates under the following conditions:

  1. Each Identifier is grouped by Identifier Type, e.g. GRV_ID / GCP ID / MI_ID / Physician_Code / ... – each group is sorted separately.
  2. Each group is sorted with the following rules:
    1. By the identifier "Source System order configuration" (lowest rank from the configuration on TOP)
    2. By identifier Order (lowest rank on TOP) in ascending order 1 -> 99 - attribute Order
    3. By update date (LUD) (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDate
    4. By Identifier value (alphabetically in ascending order A -> Z)
  3. Sorted identifiers are optionally deduplicated (by Identifier Type in each group) – from each group, the lowest-ranked duplicated identifier is removed. Currently isIgnoreAndRemoveDuplicates = False, which means that groups are not deduplicated; duplicates are removed by Reltio.
  4. Sorted identifiers are recalculated for the new Rank – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest - attribute Order

Identifier rank sort process fallback operates under the following conditions (a grouping sketch follows this list):

  1. When the Identifier Type is empty – all such identifiers are grouped together: each identifier with an empty type is added to the "EMPTY" group and sorted and deduplicated separately.
  2. During source-system sorting (rule 2.a), an identifier whose Source system is missing from the configuration is placed at position 99.
  3. During Order sorting (rule 2.b), an identifier with a missing value for the sort key is placed at position 99.
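A short sketch of the grouping and 99-fallback behaviour, under the same caveat that the Identifier record and method shapes are illustrative stand-ins:

import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical flattened view of one Identifier nest.
record Identifier(String type, String source, Integer order,
                  long updateDate, String value) {}

class IdentifierRankSorter {
    private static final int FALLBACK = 99;  // missing source / missing Order

    static Map<String, List<Identifier>> sortGroups(List<Identifier> ids,
                                                    Map<String, Integer> sourceOrder) {
        // 1. Group by Identifier Type; empty types go to the "EMPTY" group.
        Map<String, List<Identifier>> groups = ids.stream().collect(
            Collectors.groupingBy(i ->
                (i.type() == null || i.type().isEmpty()) ? "EMPTY" : i.type()));

        // 2. Sort each group: source order, Order, LUD descending, value A -> Z.
        Comparator<Identifier> rules = Comparator
            .comparing((Identifier i) -> sourceOrder.getOrDefault(i.source(), FALLBACK))
            .thenComparing(i -> i.order() == null ? FALLBACK : i.order())
            .thenComparing(Comparator.comparingLong(Identifier::updateDate).reversed())
            .thenComparing(Identifier::value);
        groups.values().forEach(g -> g.sort(rules));

        // 3.-4. Deduplication is skipped (isIgnoreAndRemoveDuplicates = false);
        // the Order attribute is then reassigned per group from 1 upward.
        return groups;
    }
}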

Source Order Configuration 

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Identifier provided by source "Reltio" is higher in the hierarchy than the Identifier provided by the "CRMMI" source. Based on this configuration each identifier will be sorted in the following order:

Updated: 2023-12-29

Environment: Global (EX-US)

Countries (in environment): CN
Source Order:
Reltio: 1
EVR: 2
MDE: 3
MAPP: 4
FACE: 5
CRMMI: 6
KOL_OneView: 7
GRV: 8
CN3RDPARTY: 9

Countries (in environment): Others
Source Order:
Reltio: 1
EVR: 2
OK: 3
AMPCO: 4
JPDWH: 5
NUCLEUS: 6
CMM: 7
MDE: 8
LocalMDM: 9
PFORCERX: 10
VEEVA_NZ: 11
VEEVA_AU: 12
VEEVA_PHARMACY_AU: 13
CRMMI: 14
FACE: 15
KOL_OneView: 16
GRV: 17
GCP: 18
MAPP: 19
CN3RDPARTY: 20
Rx_Audit: 21
PCMS: 22
CICR: 23

COMPANY Model

Algorithm

The Identifier Rank sort algorithm varies slightly from the IQVIA model one:

  1. Identifiers are grouped by Type (Identifiers.Type field). Identifiers without a Type count as a separate group.
  2. Each group is sorted separately according to following rules:
    1. By Trust flag (Identifiers.Trust field). "Yes" takes precedence over "No". If Trust flag is missing, it's as if it was equal to "No".
    2. By Source Order (table below). Lowest rank from configuration takes precedence. If a Source is missing in configuration, it gets the lowest possible order (99).
    3. By Status (Identifiers.Status). Valid/Active status takes precedence over Invalid/Inactive/missing status. List of status codes is configurable. Currently (2023-12-29), the following codes are configured in all COMPANY environments:
      1. Valid codes: [HCPIS.VLD], [HCPIS.ACTV], [HCOIS.VLD], [HCOIS.ACTV]
      2. Invalid codes: [HCPIS.INAC], [HCPIS.INVLD], [HCOIS.INAC], [HCOIS.INVLD]
    4. By Source Rank (Identifiers.SourceRank field). Lowest rank takes precedence.

    5. By LUD. Latest LUD takes precedence (see the sketch after this list). LUD is equal to the highest of 3 dates: 
      1. providing crosswalk's createDate
      2. providing crosswalk's updateDate
      3. providing crosswalk's singleAttributeUpdateDate for this Identifier (if present)
    6. By ID alphabetically. This is a fallback mechanism.
  3. Sorted identifiers are recalculated for the new Rank – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. - attribute - Rank.
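The LUD rule (2.e) is the main departure from the IQVIA model, so here is a minimal sketch of it alone; the Crosswalk record is an assumption for illustration:

import java.time.Instant;

// Hypothetical view of the providing crosswalk.
record Crosswalk(Instant createDate, Instant updateDate,
                 Instant singleAttributeUpdateDate) {}  // may be null

class LudResolver {
    // LUD = highest of createDate, updateDate and, if present,
    // singleAttributeUpdateDate for this Identifier.
    static Instant lud(Crosswalk cw) {
        Instant lud = cw.createDate();
        if (cw.updateDate() != null && cw.updateDate().isAfter(lud)) {
            lud = cw.updateDate();
        }
        if (cw.singleAttributeUpdateDate() != null
                && cw.singleAttributeUpdateDate().isAfter(lud)) {
            lud = cw.singleAttributeUpdateDate();
        }
        return lud;
    }
}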

Source Order Configuration

Updated: 2023-12-29

Environment: US
Countries (in environment): ALL
Source Order:
Reltio: 1
ONEKEY: 2
ICUE: 3
ENGAGE: 4
KOL_OneView: 5
ONEMED: 5
GRV: 6
SHS: 7
IQVIA_RX: 8
IQVIA_RAWDEA: 9
SEAGEN: 10
CENTRIS: 11
MAPP: 12
ASTELAS: 13
EMD_SERONO: 14
COV: 15
SAP: 16
SAPVENDOR: 17
IQVIA_DDD: 18
PTRS: 19

Environment: AMER
Countries (in environment): ALL
Source Order:
Reltio: 1
ONEKEY: 2
PFORCERX: 3
PFORCERX_ODS: 3
KOL_OneView: 4
ONEMED: 4
LEGACY_SFA_IDL: 5
ENGAGE: 6
MAPP: 7
SEAGEN: 8
GRV: 9
GCP: 10
SSE: 11
IMSO: 12
CS: 13
PFCA: 14
SAP: 15
SAPVENDOR: 16
PTRS: 17
RX_AUDIT: 18

Environment: EMEA
Countries (in environment): EU (GB, IE, FR, BL, GP, MF, MQ, NC, PF, PM, RE, TF, WF, ES, DE, IT, VA, SM, TR, RU)
Source Order:
Reltio: 1
ONEKEY: 2
PFORCERX: 3
PFORCERX_ODS: 3
KOL_ONEVIEW: 4
ENGAGE: 5
MAPP: 6
SEAGEN: 7
GRV: 8
GCP: 9
SSE: 10
1CKOL: 11
SAP: 12
SAPVENDOR: 13
BIODOSE: 14
BUPA: 15
CH: 16
HCH: 17
CSL: 18

Environment: EMEA
Countries (in environment): Others (AfME)
Source Order:
Reltio: 1
ONEKEY: 2
MEDPAGES: 3
MEDPAGESHCP: 3
MEDPAGESHCO: 3
PFORCERX: 4
PFORCERX_ODS: 4
KOL_ONEVIEW: 5
ENGAGE: 6
MAPP: 7
SEAGEN: 8
GRV: 9
GCP: 10
SSE: 11
PULSE_KAM: 12
WEBINAR: 13
SAP: 14
SAPVENDOR: 15
SDM: 16
PTRS: 17

Environment: APAC
Countries (in environment): CN
Source Order:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
GCP: 7
PFORCERX: 8
PFORCERX_ODS: 8
KOL_OneView: 9
ONEMED: 9
ENGAGE: 10
MAPP: 11
PTRS: 12

Environment: APAC
Countries (in environment): Others
Source Order:
Reltio: 1
ONEKEY: 2
JPDWH: 3
VOD: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
SEAGEN: 9
GRV: 10
GCP: 11
SSE: 12
PCMS: 13
PTRS: 14
SAP: 15
SAPVENDOR: 16



Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"

" }, { "title": "OtherHCOtoHCOAffiliations RankSorter", "pageID": "319291956", "pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+RankSorter", "content": "

APAC COMPANY (currently for AU and NZ)


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"


The functionality is configured in the callback delay service; the configuration allows a different type of sorting for each country. The configuration for AU and NZ is shown below.


rankSortOrder:
affiliation:
- countries:
- AU
- NZ
rankExecutionOrder:
- type: ATTRIBUTE
attributeName: RelationType/RelationshipDescription
lookupCode: true
order:
REL.HIE: 1
REL.MAI: 2
REL.FPA: 3
REL.BNG: 4
REL.BUY: 5
REL.PHN: 6
REL.GPR: 7
REL.MBR: 8
REL.REM: 9
REL.GPSS: 10
REL.WPC: 11
REL.WPIC: 12
REL.DOU: 13
- type: ACTIVE
- type: SOURCE
order:
Reltio: 1
ONEKEY: 2
JPDWH: 3
SAP: 4
PFORCERX: 5
PFORCERX_ODS: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
GRV: 9
GCP: 10
SSE: 11
PCMS: 12
PTRS: 13
- type: LUD

Relationships are grouped by endObjectId, then the whole bundle is sorted and ranked. For AU and NZ, a relationship's position on the list (its rank) is calculated from the configured rankExecutionOrder, as sketched below.
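The configured rankExecutionOrder above implies a comparator chain evaluated entry by entry: ATTRIBUTE (RelationshipDescription lookup order), then ACTIVE, then SOURCE, then LUD. A minimal sketch under that reading; the Relation record and the map parameters are illustrative assumptions:

import java.util.Comparator;
import java.util.Map;

// Hypothetical flattened view of one relationship, mirroring the cache model.
record Relation(String relationshipDescription,  // lookup code, e.g. "REL.HIE"
                boolean active,                  // Status contains ACTIVE
                String source,                   // providing crosswalk source
                long lud) {}                     // last update date

class OtherHcoToHcoRanker {
    private static final int MISSING = 99;

    // ATTRIBUTE (RelationshipDescription order) -> ACTIVE -> SOURCE -> LUD.
    static Comparator<Relation> comparator(Map<String, Integer> attributeOrder,
                                           Map<String, Integer> sourceOrder) {
        return Comparator
            .comparing((Relation r) ->
                attributeOrder.getOrDefault(r.relationshipDescription(), MISSING))
            .thenComparing(r -> !r.active())                                     // active first
            .thenComparing(r -> sourceOrder.getOrDefault(r.source(), MISSING))
            .thenComparing(Comparator.comparingLong(Relation::lud).reversed()); // newest first
    }
}

After sorting, each endObjectId bundle is re-ranked from 1 upward.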

" }, { "title": "Phone RankSorter", "pageID": "164469748", "pageLink": "/display/GMDM/Phone+RankSorter", "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Phone provided by the source "Reltio" is higher in the hierarchy than a Phone provided by the "EVR" source. Based on this configuration, each phone will be sorted in the following order:

phone:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"Reltio": 1
"EVR": 2
"OK": 3
"AMPCO": 4
"JPDWH": 5
"NUCLEUS": 6
"CMM": 7
"MDE": 8
"LocalMDM": 9
"PFORCERX": 10
"VEEVA_NZ": 11
"VEEVA_AU": 12
"VEEVA_PHARMACY_AU": 13
"CRMMI": 14
"FACE": 15
"KOL_OneView": 16
"GRV": 17
"GCP": 18
"MAPP": 19
"CN3RDPARTY": 20
"Rx_Audit": 21
"PCMS": 22
"CICR": 23

Phone rank sort process operates under the following conditions:

  1. Each phone is sorted with the following rules
  2. Group by the TypeIMS attribute and sort each group:
    1. sort by "Source System order configuration" (lowest rank from the configuration on TOP)
    2. sort by source rank (the lower number on top of the one with this attribute)
    3. sort by the validation status (VALID value is the winner) - attribute ValidationStatus
    4. sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDate
    5. sort by number value alphabetically in ascending order A -> Z - attribute Phone.number
  3. Sorted phones are recalculated for the new Rank - each Phone Rank is reassigned with an appropriate number

GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phone provided by source "ONEKEY" is higher in the hierarchy than the Phone provided by "ENGAGE" source.  Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each phone number will be sorted in the following order:

phone:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"Reltio" : 1
"ONEKEY" : 2
"ICUE" : 3
"VEEVALINK" : 4
"ENGAGE" : 5
"KOL_OneView" : 6
"ONEMED" : 6
"GRV" : 7
"SHS" : 8
"IQVIA_RX" : 9
"IQVIA_RAWDEA" : 10
"SEAGEN": 11
"CENTRIS" : 12
"MAPP" : 13
"ASTELAS" : 14
"EMD_SERONO" : 15
"COV" : 16
"SAP" : 17
"SAPVENDOR": 18
"IQVIA_DDD" : 19
"VALKRE" : 20
"THUB" : 21
"PTRS" : 22
"MEDISPEND" : 23
"PORZIO" : 24



Phone number rank sort process operates under the following conditions:

  1. Each phone number is sorted with the following rules; first, phone numbers are grouped by type.
  2. Group by the Type attribute and sort each group:
    1. sort by source order (the lower number on top) - the source name is taken from the last updated crosswalk for this Phone attribute
    2. sort by source rank (the lower number on top; phones carrying this attribute rank above those without it) - attribute Source.SourceRank for this Phone attribute
  3. Sorted phone numbers are recalculated for the new Rank - each Phone Rank is reassigned with an appropriate number - attribute Rank for the Phone attribute



EMEA/AMER/APAC

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phone provided by source "ONEKEY" is higher in the hierarchy than the Phone provided by "ENGAGE" source.  Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each phone number will be sorted in the following order:


EMEA

phone:
- countries:
- GB
- IE
- FK
- FR
- BL
- GP
- MF
- MQ
- NC
- PF
- PM
- RE
- TF
- WF
- ES
- DE
- IT
- VA
- SM
- TR
- RU
rankSortOrder:
Reltio: 1
ONEKEY: 2
PFORCERX: 3
PFORCERX_ODS: 3
VEEVALINK: 4
KOL_OneView: 5
ONEMED: 5
ENGAGE: 6
MAPP: 7
SEAGEN: 8
GRV: 9
GCP: 10
SSE: 11
1CKOL: 12
THUB: 13
PTRS: 14
SAP: 15
SAPVENDOR: 16
BIODOSE: 17
BUPA: 18
CH: 19
HCH: 20
CSL: 21
MEDISPEND: 22
PORZIO: 23
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
MEDPAGESHCP: 3
MEDPAGESHCO: 3
PFORCERX: 4
PFORCERX_ODS: 4
VEEVALINK: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
SEAGEN: 9
GRV: 10
GCP: 11
SSE: 12
PULSE_KAM: 13
SPRINKLR: 14
WEBINAR: 15
DREAMWEAVER: 16
EVENTHUB: 17
SAP: 18
SAPVENDOR: 19
SDM: 20
THUB: 21
PTRS: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL


AMER


phone:
- countries:
- ALL
rankSortOrder:
Reltio: 1
DCR_SYNC: 2
ONEKEY: 3
PFORCERX: 4
PFORCERX_ODS: 4
VEEVALINK: 5
KOL_OneView: 6
ONEMED: 6
LEGACY_SFA_IDL: 7
ENGAGE: 8
MAPP: 8
SEAGEN: 9
GRV: 10
GCP: 11
SSE: 12
IMSO: 13
CS: 14
PFCA: 15
WSR: 16
SAP: 17
SAPVENDOR: 18
THUB: 19
PTRS: 20
RX_AUDIT: 21
MEDISPEND: 22
PORZIO: 23
sources:
- ALL

APAC


phone:
- countries:
- CN
rankSortOrder:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
GCP: 7
PFORCERX: 8
PFORCERX_ODS: 8
VEEVALINK: 9
KOL_OneView: 10
ONEMED: 10
ENGAGE: 11
MAPP: 12
PTRS: 13
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
JPDWH: 3
VOD: 4
PFORCERX: 5
PFORCERX_ODS: 5
VEEVALINK: 6
KOL_OneView: 7
ONEMED: 7
ENGAGE: 8
MAPP: 9
SEAGEN: 10
GRV: 11
GCP: 12
SSE: 13
PCMS: 14
THUB: 15
PTRS: 16
SAP: 17
SAPVENDOR: 18
SPRINKLR: 19
WEBINAR: 20
DREAMWEAVER: 21
EVENTHUB: 22
MEDISPEND: 23
PORZIO: 24
sources:
- ALL



Phone number rank sort process operates under the following conditions:

  1. Each phone number is sorted with the following rules; first, phone numbers are grouped by type.
  2. Group by the Type attribute and sort each group:
    1. sort by cleanser status - valid before invalid
    2. sort by source order (the lower number on top) - the source name is taken from the last updated crosswalk for this Phone attribute
    3. sort by source rank (the lower number on top; phones carrying this attribute rank above those without it) - attribute Source.SourceRank for this Phone attribute
    4. sort by last update date - newest to oldest
    5. sort by label - alphabetical order A-Z
  3. Sorted phone numbers are recalculated for the new Rank - each Phone Rank is reassigned with an appropriate number - attribute Rank for the Phone attribute


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


\"\"

" }, { "title": "Speaker RankSorter", "pageID": "337862629", "pageLink": "/display/GMDM/Speaker+RankSorter", "content": "

Description

Unlike other RankSorters, Speaker Rank is expressed not by a nested "Rank" or "Order" field, but by the "ignore" flag.

The "ignore" flag sets the attribute's "ov" to false. By operating this flag, we ensure that only the most valuable attribute is visible and sent downstream from the Hub.

Algorithm

  1. Sort all Speaker nests
    1. Sort by source hierarchy
    2. If same source, sort by Last Update Date (higher of crosswalk.updateDate / crosswalk.singleAttributeUpdateDates/{speaker attribute uri})
    3. If same source and LUD, sort by attribute URI (fallback strategy)
  2. Process the sorted group (sketched below)
    1. If the first Speaker nest has ignored == true, set ignored := false for that nest
    2. For every subsequent Speaker nest that does not have ignored == true, set ignored := true for that nest
    3. Post the list of changes to Manager's async interface using a Kafka topic
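A minimal sketch of step 2 (the flag flipping) on an already sorted list; SpeakerNest is a simplified mutable stand-in, and in the real flow the returned changes are posted to Manager's async interface via Kafka:

import java.util.ArrayList;
import java.util.List;

// Hypothetical mutable view of one Speaker nest.
class SpeakerNest {
    String uri;
    boolean ignored;
}

class SpeakerRankSorter {
    // After sorting, only the top nest may be visible (ignored == false);
    // every other nest must be ignored. Returns the nests whose flag changed.
    static List<SpeakerNest> applyIgnoreFlags(List<SpeakerNest> sorted) {
        List<SpeakerNest> changes = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i++) {
            boolean shouldIgnore = i > 0;   // index 0 is the winner
            SpeakerNest nest = sorted.get(i);
            if (nest.ignored != shouldIgnore) {
                nest.ignored = shouldIgnore;
                changes.add(nest);          // posted to Manager via Kafka
            }
        }
        return changes;
    }
}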

Global - IQVIA Model

Speaker RankSorter is active only for China. Source hierarchy is as follows:

speaker:
"Reltio": 1
"MAPP": 2
"FACE": 3
"EVR": 4
"MDE": 5
"CRMMI": 6
"KOL_OneView": 7
"GRV": 8
"CN3RDPARTY": 9

Specific Configuration

Unlike other PreCallback flows, Speaker RankSorter requires both ov=true and ov=false attribute values to work correctly: the sorter must see the currently ignored nests (ov=false) as well as the visible one (ov=true) in order to re-evaluate and flip the "ignore" flags.


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*


" }, { "title": "Specialty RankSorter", "pageID": "164469746", "pageLink": "/display/GMDM/Specialty+RankSorter", "content": "

GLOBAL - IQVIA model

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. a Specialty provided by the source "Reltio" is higher in the hierarchy than a Specialty provided by the "CRMMI" source. Additionally, for Specialities there is a difference between countries: the configuration for RU and TR contains only 4 sources and differs from the base configuration. Based on this configuration each specialty will be sorted in the following order:

specialities:
-
countries:
- "RU"
- "TR"
sources:
- "ALL"
rankSortOrder:
"GRV": 1
"GCP": 2
"OK": 3
"KOL_OneView": 4
-
countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"Reltio": 1
"EVR": 2
"OK": 3
"AMPCO": 4
"JPDWH": 5
"NUCLEUS": 6
"CMM": 7
"MDE": 8
"LocalMDM": 9
"PFORCERX": 10
"VEEVA_NZ": 11
"VEEVA_AU": 12
"VEEVA_PHARMACY_AU": 13
"CRMMI": 14
"FACE": 15
"KOL_OneView": 16
"GRV": 17
"GCP": 18
"MAPP": 19
"CN3RDPARTY": 20
"Rx_Audit": 21
"PCMS": 22
"CICR": 23


The specialty rank sort process operates under the following conditions:

  1. Each Specialty is grouped by Specialty Type: SPEC/TEND/QUAL/EDUC – each group is sorted separately.
  2. Each group is sorted with the following rules:
    1. By specialty "Source System order configuration" (lowest rank from the configuration on TOP)
    2. By specialty Rank (lower ranks on TOP) in descending order 1 -> 99
    3. By update date (LUD) (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDate
    4. By Specialty Value (alphabetically in ascending order A -> Z)
  3. Sorted specialties are optionally deduplicated (by Specialty Type in each group) – the lower-ranked duplicated specialties are removed from each group. Currently isIgnoreAndRemoveDuplicates is set to False, which means that groups are not deduplicated; duplicates are removed by Reltio.
  4. Sorted specialties are recalculated for the new Ranks – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest.
  5. Additionally, for the specialty with Rank = 1 the best record is set to true - attribute PrimarySpecialtyFlag (steps 4-5 are sketched after the fallback rules below).

Specialty rank sort process fallback operates under the following conditions:

  1. When the Specialty Type is empty – all such specialties are grouped together: each specialty with an empty type is added to the "EMPTY" group and sorted and deduplicated separately.
  2. During source-system sorting (rule 2.a), a specialty whose Source system is missing from the configuration is placed at position 99.
  3. During Rank sorting (rule 2.b), a specialty with a missing value for the sort key is placed at position 99.
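A minimal sketch of steps 4-5 for one already sorted group; the Specialty class is an illustrative stand-in:

import java.util.List;

// Hypothetical mutable view of one Specialty nest.
class Specialty {
    int rank;
    boolean primarySpecialtyFlag;
}

class SpecialtyRankAssigner {
    // Reassigns Rank from 1 upward and flags the top record as primary.
    static void assign(List<Specialty> sortedGroup) {
        for (int i = 0; i < sortedGroup.size(); i++) {
            Specialty s = sortedGroup.get(i);
            s.rank = i + 1;                          // step 4: lowest to highest
            s.primarySpecialtyFlag = (s.rank == 1);  // step 5: best record
        }
    }
}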



GLOBAL US

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Speciality provided by source "ONEKEY" is higher in the hierarchy than the Speciality provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each Speciality will be sorted in the following order:

specialities:
- countries:
- "ALL"
sources:
- "ALL"
rankSortOrder:
"Reltio" : 1
"ONEKEY" : 2
"IQVIA_RAWDEA" : 3
"VEEVALINK" : 4
"ENGAGE" : 5
"KOL_OneView" : 6
"ONEMED" : 6
"SPEAKER" : 7
"ICUE" : 8
"SHS" : 9
"IQVIA_RX" : 10
"SEAGEN": 11
"CENTRIS" : 12
"ASTELAS" : 13
"EMD_SERONO" : 14
"MAPP" : 15
"GRV" : 16
"THUB" : 17
"PTRS" : 18
"VALKRE" : 19
"MEDISPEND" : 20
"PORZIO" : 21


The specialty rank sort process operates under the following conditions:

  1. Each specialty is sorted with the following rules; first, specialties are grouped by the Speciality.SpecialityType attribute:
  2. Group by the Speciality.SpecialityType attribute and sort each group: 
    1. sort by the specialty unspecified status value (higher value on the top) - attribute Specialty with value Unspecified
    2. sort by source order number (the lower number on the top) - the source name is taken from the most recently updated crosswalk
    3. sort by source rank (the lower on the top) - attribute Source.SourceRank
    4. sort by last update date (the earliest on the top) - the last update date is taken from the most recently updated crosswalk
    5. sort by specialty attribute value (string comparison) - attribute Specialty
  3. Sorted specialties are recalculated for new Rank - each Specialty Rank is reassigned with an appropriate number - attribute Rank

Additionally:

  1. If the source is not found it is set to 99
  2. If specialty unspecified attribute name or value is not set it is set to 99



EMEA/AMER/APAC

This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Speciality provided by source "ONEKEY" is higher in the hierarchy than the Speciality provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each Speciality will be sorted in the following order:

EMEA


specialities:
- countries:
- GB
- IE
- FK
- FR
- BL
- GP
- MF
- MQ
- NC
- PF
- PM
- RE
- TF
- WF
- ES
- DE
- IT
- VA
- SM
- TR
- RU
rankSortOrder:
Reltio: 1
ONEKEY: 2
PFORCERX: 3
PFORCERX_ODS: 3
VEEVALINK: 4
KOL_OneView: 5
ONEMED: 5
ENGAGE: 6
MAPP: 7
SEAGEN: 8
GRV: 9
GCP: 10
SSE: 11
THUB: 12
PTRS: 13
1CKOL: 14
MEDISPEND: 15
PORZIO: 16
sources:
- ALL
- countries:
- ALL
sources:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
MEDPAGESHCP: 3
MEDPAGESHCO: 3
PFORCERX: 4
PFORCERX_ODS: 4
VEEVALINK: 5
KOL_OneView: 6
ONEMED: 6
ENGAGE: 7
MAPP: 8
SEAGEN: 9
GRV: 10
GCP: 11
SSE: 12
PULSE_KAM: 13
WEBINAR: 14
DREAMWEAVER: 15
EVENTHUB: 16
SPRINKLR: 17
THUB: 18
PTRS: 19
MEDISPEND: 20
PORZIO: 21


AMER


specialities:
- countries:
- ALL
rankSortOrder:
Reltio: 1
DCR_SYNC: 2
ONEKEY: 3
PFORCERX: 4
PFORCERX_ODS: 4
VEEVALINK: 5
KOL_OneView: 6
ONEMED: 6
LEGACY_SFA_IDL: 7
ENGAGE: 8
MAPP: 9
SEAGEN: 10
GRV: 11
GCP: 12
SSE: 13
THUB: 14
PTRS: 15
RX_AUDIT: 16
PFCA: 17
WSR: 18
MEDISPEND: 19
PORZIO: 20
sources:
- ALL

APAC


specialities:
- countries:
- CN
rankSortOrder:
Reltio: 1
EVR: 2
MDE: 3
FACE: 4
GRV: 5
CN3RDPARTY: 6
GCP: 7
SSE: 8
PFORCERX: 9
PFORCERX_ODS: 9
VEEVALINK: 10
KOL_OneView: 11
ONEMED: 11
ENGAGE: 12
MAPP: 13
THUB: 14
PTRS: 15
sources:
- ALL
- countries:
- ALL
rankSortOrder:
Reltio: 1
ONEKEY: 2
JPDWH: 3
VOD: 4
PFORCERX: 5
PFORCERX_ODS: 5
VEEVALINK: 6
KOL_OneView: 7
ONEMED: 7
ENGAGE: 8
MAPP: 9
SEAGEN: 10
GRV: 11
GCP: 12
SSE: 13
PCMS: 14
WEBINAR: 15
DREAMWEAVER: 16
EVENTHUB: 17
SPRINKLR: 18
THUB: 19
PTRS: 20
MEDISPEND: 21
PORZIO: 22
sources:
- ALL


The specialty rank sort process operates under the following conditions:

  1. Each specialty is sorted with the following rules; first, specialties are grouped by the Speciality.SpecialityType attribute:
  2. Group by the Speciality.SpecialityType attribute and sort each group: 
    1. sort by the specialty unspecified status value (higher value on the top) - attribute Specialty with value Unspecified
    2. sort by source order number (the lower number on the top) - the source name is taken from the most recently updated crosswalk
    3. sort by source rank (the lower on the top) - attribute Source.SourceRank
    4. sort by last update date (the earliest on the top) - the last update date is taken from the most recently updated crosswalk
    5. sort by specialty attribute value (string comparison) - attribute Specialty
  3. Sorted specialties are recalculated for new Rank - each Specialty Rank is reassigned with an appropriate number - attribute Rank. The primary flag is set for the top ranked specialty.

Additionally:

  1. If the source is not found it is set to 99
  2. If specialty unspecified attribute name or value is not set it is set to 99


Business requirements (provided by AJ)

COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*



\"\"

" }, { "title": "Enricher Processor", "pageID": "302687243", "pageLink": "/display/GMDM/Enricher+Processor", "content": "

EnricherProcessor is the first PreCallback processor applied to incoming events. It enriches reference attributes with refEntity attributes for Rank calculation purposes. Usually, the enriched attributes are removed after applying all PreCallbacks - this is configurable using the cleanAdditionalRefAttributes flag. The only exception is GBL (EX-US), where the attributes remain for CN. Removing the "borrowed" attributes is carried out by the Cleaner Processor.

Algorithm

For targetEntity:

  1. Find reference attributes matching configuration
  2. For each such attribute:
    1. Walk the relation to get endObject entity
    2. Fetch endObject entity's current state through Manager (using cache)
    3. Rewrite entity's attributes to this reference attribute, inserting them in <Attribute>.refEntity.attributes path
      steps a-b are applied recursively, according to the configured maxDepth (a minimal sketch follows).
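A pseudocode-level Java sketch of this walk; the ManagerClient interface and the map keys ("objectUri", "attributes", "referenceAttributes") are simplified stand-ins, not the real HUB types:

import java.util.List;
import java.util.Map;

class EnricherSketch {
    // Stand-in for the Manager lookup (cached in the real service).
    interface ManagerClient { Map<String, Object> fetchEntity(String uri); }

    @SuppressWarnings("unchecked")
    static void enrich(Map<String, Object> refAttribute,
                       ManagerClient manager, int depth, int maxDepth) {
        if (depth >= maxDepth) return;
        // a. walk the relation to the endObject; b. fetch its current state
        String endObjectUri = (String) refAttribute.get("objectUri");
        if (endObjectUri == null) return;
        Map<String, Object> endObject = manager.fetchEntity(endObjectUri);
        // c. rewrite the entity's attributes under <Attribute>.refEntity.attributes
        Object attributes = endObject.get("attributes");
        if (attributes != null) {
            refAttribute.put("refEntity", Map.of("attributes", attributes));
        }
        // steps a-b recurse into nested reference attributes, bounded by maxDepth
        List<Map<String, Object>> nested = (List<Map<String, Object>>)
                endObject.getOrDefault("referenceAttributes", List.of());
        for (Map<String, Object> n : nested) {
            enrich(n, manager, depth + 1, maxDepth);
        }
    }
}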

Example

Below is EnricherProcessor config from APAC PROD's Precallback Service:

refLookupConfig:
    - cleanAdditionalRefAttributes: true
      country:
          - AU
          - IN
          - JP
          - KR
          - NZ
      entities:
          - attributes:
                - ContactAffiliations
            type: HCP
      maxDepth: 2

How to read the config:
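  • cleanAdditionalRefAttributes: true – the enriched refEntity.attributes are removed again by the Cleaner Processor once all PreCallbacks have run,
  • country – enrichment applies only to events for AU, IN, JP, KR and NZ,
  • entities – for HCP entities, the ContactAffiliations reference attributes are enriched,
  • maxDepth: 2 – the recursive relation walk (steps a-b) goes at most two levels deep.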

" }, { "title": "Cleaner Processor", "pageID": "302687603", "pageLink": "/display/GMDM/Cleaner+Processor", "content": "

Cleaner Processor removes the attributes enriched by the Enricher Processor. It is one of the last processors in the Precallback Service's execution order. The processor checks the cleanAdditionalRefAttributes flag in the config.

Algorithm

For targetEntity:

  1. Find all refLookupConfig entries applicable for this Country.
  2. For all attributes in found entries, remove refEntity.attributes map.
" }, { "title": "Inactivation Generator", "pageID": "302697554", "pageLink": "/display/GMDM/Inactivation+Generator", "content": "

Inactivation Generator is one of Precallback Service's event Processors. It checks the input event's targetEntity (or targetRelation) and changes the event type to INACTIVATED when it detects an inactivation condition, per the algorithm below:

Algorithm

For each event:

  1. If targetEntity not null and targetEntity.endDate is null, skip event,
  2. If targetRelation not null:
    1. If targetRelation.endDate is null or targetRelation.startRefIgnored is null or targetRelation.endRefIgnored is null, skip event,
  3. Search the mapping for adequate output event type, according to table below. If no match found, skip event,

    Inbound event type -> Outbound event type
    HCP_CREATED -> HCP_INACTIVATED
    HCP_CHANGED -> HCP_INACTIVATED
    HCO_CREATED -> HCO_INACTIVATED
    HCO_CHANGED -> HCO_INACTIVATED
    MCO_CREATED -> MCO_INACTIVATED
    MCO_CHANGED -> MCO_INACTIVATED
    RELATIONSHIP_CREATED -> RELATIONSHIP_INACTIVATED
    RELATIONSHIP_CHANGED -> RELATIONSHIP_INACTIVATED
  4. Return the same event with the new event type, according to the table above (shown as a code-level lookup below).
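The same mapping, held as a constant lookup the way a processor might implement it (illustrative only):

import java.util.Map;
import java.util.Optional;

class InactivationMapping {
    static final Map<String, String> OUTBOUND = Map.of(
        "HCP_CREATED", "HCP_INACTIVATED",
        "HCP_CHANGED", "HCP_INACTIVATED",
        "HCO_CREATED", "HCO_INACTIVATED",
        "HCO_CHANGED", "HCO_INACTIVATED",
        "MCO_CREATED", "MCO_INACTIVATED",
        "MCO_CHANGED", "MCO_INACTIVATED",
        "RELATIONSHIP_CREATED", "RELATIONSHIP_INACTIVATED",
        "RELATIONSHIP_CHANGED", "RELATIONSHIP_INACTIVATED");

    // Step 3: no match means the event is skipped unchanged.
    static Optional<String> outboundType(String inboundType) {
        return Optional.ofNullable(OUTBOUND.get(inboundType));
    }
}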
" }, { "title": "MultiMerge Processor", "pageID": "302697588", "pageLink": "/display/GMDM/MultiMerge+Processor", "content": "

MultiMerge Processor is one of Precallback Service's event Processors.

For MERGED events, it checks whether targetEntity.uri is equal to the first URI from entitiesURIs. If it is different, entitiesURIs is adjusted by inserting targetEntity.uri at the beginning. This assures that entitiesURIs[0] always contains the merge winner, even in cases of multiple merges (sketched below).

Algorithm

For each MERGED event, do:

  1. if targetEntity.uri is null, skip event,
  2. if entitiesURIs[0] and targetEntity.uri are equal, skip event,
  3. insert targetEntity.uri at the beginning of entitiesURIs and return the event.
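A minimal sketch of the adjustment; the event shape is an assumption:

import java.util.List;

class MultiMergeSketch {
    // Ensures entitiesURIs[0] is the merge winner, i.e. targetEntity.uri.
    static void adjust(List<String> entitiesURIs, String targetEntityUri) {
        if (targetEntityUri == null) return;                            // step 1
        if (!entitiesURIs.isEmpty()
                && entitiesURIs.get(0).equals(targetEntityUri)) return; // step 2
        entitiesURIs.add(0, targetEntityUri);                           // step 3
    }
}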
" }, { "title": "OtherHCOtoHCOAffiliations Rankings", "pageID": "319291954", "pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+Rankings", "content": "

Description

The process was designed to rank OtherHCOtoHCOAffiliations with rules that are specific to the country. The current configuration contains the Activator and Rankers for the AU and NZ countries and the OtherHCOtoHCOAffiliations type. Unlike the ContactAffiliations process, this one was designed to process RELATIONSHIP_CHANGED events, which are single events carrying one piece of information about a specific relation. The process builds a cache with the hierarchy of objects where the main object is the Reltio EndObject; the direction in which we check and implement the rankings is (child) END_OBJECT -> START_OBJECT (parent). A change in a relation does not generate HCO_CHANGED events, so we need to watch relation events: relation change/create/remove events may change the hierarchy and the ranking order.

Compared to the ContactAffiliations ranking logic, a change on an HCP object carried the information about the whole hierarchy in one event, so we could calculate and generate events based on HCP_CHANGED alone.

The new logic builds this hierarchy from RELATIONSHIP events, compacts the changes within a time window, and generates events after aggregation to limit the number of changes in Reltio and the number of API calls.


DATA VERIFICATION:

Snowflake queries:

SELECT COUNT(*) FROM (
    SELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_RELATIONS
    WHERE COUNTRY = 'AU' AND RELATION_TYPE = 'OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE
    GROUP BY END_ENTITY_URI
);

SELECT COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_ENTITIES
WHERE ENTITY_TYPE = 'HCO' AND COUNTRY = 'AU' AND ACTIVE = TRUE;

SELECT COUNT(*) FROM (
    SELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_RELATIONS
    WHERE COUNTRY = 'NZ' AND RELATION_TYPE = 'OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE
    GROUP BY END_ENTITY_URI
);


A few example cases from APAC QA:

010Xcxi      NZ    2
00zxT2O      NZ    2
008NxIA      NZ    2
1CVfmxOm     NZ    2
VCMuTvz      NZ    2
cvoyNhG      NZ    2
VCMnOvP      NZ    2
00yZOis      NZ    2
00JoRnN      NZ    2


SELECT END_ENTITY_URI, COUNTRY, COUNT(*) AS count FROM CUSTOMER_SL.MDM_RELATIONS
WHERE RELATION_TYPE = 'OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE
AND COUNTRY IN ('AU', 'NZ')
GROUP BY END_ENTITY_URI, COUNTRY
ORDER BY count DESC;


Cq2pWio      AU    5
00KcdEA      AU    3
T5NxyUa      AU    3
ZsTdYcS      AU    3
XhGoqwo      AU    3
00wMWdy      AU    3
Cq1wjj8      AU    3


The direction in which we check and implement the rankings:

(child) END_OBJECT -> START_OBJECT (parent)

We start with the child objects and check whether a child is connected to multiple parents, and rank accordingly. In most cases (around 99%) there will be a single relation, auto-filled with rank=1 during load. Otherwise we rank using the implementation below:

Example:

https://mpe-02.reltio.com/nui/xs4oRCXpCKewNDK/profile?entityUri=entities%2F00KcdEA

\"\"


REQUIREMENTS:

\"\"

Flow diagram


Logical Architecture

\"\"

PreDelayCallback Logic

\"\"


Steps

Overview Reltio attributes

Attributes to update/insert - Rank:

                {
                    "label": "Rank",
                    "name": "Rank",
                    "description": "Rank",
                    "type": "Int",
                    "hidden": false,
                    "important": false,
                    "system": false,
                    "required": false,
                    "faceted": true,
                    "searchable": true,
                    "attributeOrdering": {
                        "orderType": "ASC",
                        "orderingStrategy": "LUD"
                    },
                    "uri": "configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Rank",
                    "skipInDataAccess": false
                },

PreCallback Logic - RANK Activator

DelayRankActivationProcessor:

The purpose of this activator is to pick specific events and push them to the delay-events topic; events from this topic are then ranked using the algorithm described on this page (OtherHCOtoHCOAffiliations Rankings). The flow is also described below.


Example configuration for AU and NZ:

delayRankActivationCallback:
featureActivation: true
activators:
- description: "Delay OtherHCOtoHCOAffiliations RELATION events from AU and NZ country to calculate Rank in delay service"
acceptedEventTypes:
- RELATIONSHIP_CHANGED
- RELATIONSHIP_CREATED
- RELATIONSHIP_REMOVED
- RELATIONSHIP_INACTIVATED
acceptedRelationObjectTypes:
- configuration/relationTypes/OtherHCOtoHCOAffiliations
acceptedCountries:
- AU
- NZ
additionalFunctions:
- RelationEndObjectAsKafkaKey




PreDelayCallback - RANK Logic

The purpose of this pre-delay-callback service is to Rank specific objects (currently available OtherHCOToHCO ranking for AU and NZ - OtherHCOtoHCOAffiliations Rankings)

CallbackWithDelay and CurrentStateCache advantages:


Logic:


Data Model and Configuration

RelationData cache model:

[
   Id: endObjectId
   relations:
        - relationUri: relations/13pTXPR0
          endObjectUri: endObjectId
          country: AU
          crosswalks:
              - type: ONEKEY
                value: WSK123sdcF
                deleteDate: 123324521243
          RankUri: e.g. relations/13pTXPR0/attributes/Rank
          Rank: null
          Attributes:
              Status:
                  - ACTIVE
              RelationType/RelationshipDescription:
                  - REL.MAI
                  - REL.CON
]

Triggers

RankActivation

Trigger action: IN Events incoming
Component: Callback Service: Pre-Callback: DelayRankActivationProcessor ($env-internal-reltio-full-events)
Action: Full events trigger the pre-callback stream and the activation logic that routes the events to the next processing state
Default time: realtime - events stream

Trigger action: OUT Activated events to be sorted
Component: Callback Service: Pre-Callback: DelayRankActivationProcessor ($env-internal-reltio-full-delay-events)
Action: Output topic
Default time: realtime - events stream

Trigger action: IN Events incoming
Component: mdm-callback-delay-service: Pre-Delay-Callback: PreCallbackDelayStream ($env-internal-reltio-full-delay-events, DELAY: ${env}-internal-reltio-full-callback-delay-events)
Action: Full events trigger the pre-delay-callback stream and the ranking logic
Default time: realtime - events stream

Trigger action: OUT Sorted events with the correct state
Component: mdm-callback-delay-service: Pre-Delay-Callback: PreCallbackDelayStream ($env-internal-reltio-proc-events)
Action: Output topic with correct events
Default time: realtime - events stream

Trigger action: OUT Reltio Updates
Component: mdm-callback-delay-service: Pre-Delay-Callback: PostCallbackStream ($env-internal-async-all-bulk-callbacks)
Action: Output topic with Reltio updates
Default time: realtime - events stream

Dependent components

Component - Usage
Callback Service - RELATION ranking activator that pushes events to the delay service
Callback Delay Service - Main service with the OtherHCOtoHCOAffiliations Rankings logic
Entity Enricher - Generates full events from incoming events
Manager - Processes callbacks generated by this service


Attachment docs with more technical implementation details:

\"\"\"\"example-reqeusts.json

" }, { "title": "HCPType Callback", "pageID": "347637202", "pageLink": "/display/GMDM/HCPType+Callback", "content": "

Description

The process was designed to update the HCPType RDM code in the TypeCode attribute on HCP profiles. The process is based on event streaming: the main event is recalculated against the current state, and when the comparison of the existing TypeCode on the profile with the calculated value shows a difference, a callback is generated. This process (like all processes in the PreCallback Service) blocks the main event and sends the update to external clients only when the update is visible in Reltio and TypeCode contains the correct code. The process uses RDM as an internal cache and calculates the output value based on the current mapping. To limit the number of requests to RDM we use an internal Mongo cache, refreshed every 2 hours on PROD. Additionally, we designed an in-memory cache to store the 2 required codes (PRES/NON-PRESC) with the HUB_CALLBACK source code values.

This logic is related to these 2 values in Reltio HCP profiles:

Type-  Prescriber (HCPT.PRES)

Type - Non-Prescriber (HCPT.NPRS)


Why this process was designed:

With the addition of the Eastern Cluster LOVs, we have hit the limit where the HCP Type Prescriber & Non-Prescriber canonical codes no longer fit into RDM.

The issue is a size limit in RDM's underlying GCP tech stack. It is a GCP physical limitation and cannot be increased. We cannot add new RDM codes to the PRES/NON-PRESC codes, and this will cause issues in HCP data.

The previous logic:

In the ingestion service layer (all API calls) there was a DQ rule called "HCP TypeCode". This logic adds the TypeCode as a concatenation of SubTypeCode and the Specialty ranked 1. The logic gets the source code and puts the concatenation in the TypeCode attribute. The number of combinations of source codes is reaching the limit, so we are building the new logic.

For future reference, the old DQ rules that will be removed after the new process is deployed are listed below.

DQ rules (sort rank):

\"\"

- name: Sort specialities by source rank
category: OTHER
createdDate: 20-10-2022
modifiedDate: 20-10-2022
preconditions:
- type: operationType
values:
- create
- update
- type: not
preconditions:
- type: source
values:
- HUB_CALLBACK
- NUCLEUS
- LEGACYMDM
- PFORCERX_ID
- type: not
preconditions:
- type: match
attribute: TypeCode
values:
- "^.+$"
action:
type: sort
key: Specialities
sorter: SourceRankSorter


DQ rules (add sub type code):

\"\"

- name: Autofill sub type code when sub type is null/empty
category: AUTOFILL_BASE
createdDate: 20-10-2022
modifiedDate: 20-10-2022
preconditions:
- type: operationType
values:
- create
- update
- type: not
preconditions:
- type: source
values:
- HUB_CALLBACK
- NUCLEUS
- LEGACYMDM
- PFORCERX_ID
- KOL_OneView
action:
type: modify
attributes:
- TypeCode
value: "{SubTypeCode}-{Specialities.Specialty}"
replaceNulls: true
when:
- ""
- "NULL"


Example of previous input values:

attributes:
"TypeCode": [
{
"value": "TYP.M-SP.WDE.04"
}
]

TYP.M is a SubTypeCode
SP.WDE.04 is a Speciality

calculated value - PRESC:
\"\"
As the screenshot shows, on EMEA PROD there are 2920 combinations for the ONEKEY source alone that generate the PRESC value.



The new logic:

The new logic was designed in the pre-callback service in hybrid mode. The logic uses the same assumptions as the previous version, but relies on Reltio canonical codes, which limits the number of combinations. We provide this value using only one source, HUB_CALLBACK, so there is no need to configure ONEKEY, GRV and all the other sources that provide multiple combinations.

Advantages:

Service populates HCP Type with SubType & Specialty canonical codes

HCP Type LOVs reduced to single source (HUB_CALLBACK) and canonical codes


The change in HCP Type RDM will be processed using standard reindex process.

This change is impacting the Historical Inactive flow – change described Snowflake: HI HCPType enrichment


Key features in the new logic and what you should know:

  1. The change in HCP Type RDM will be processed using the standard reindex process.
  2. The HCP TypeCode calculation is based on the OV profile and Reltio canonical codes
    1. Previously each source delivered data and the ingestion service calculated TypeCode based on the RAW JSON data delivered by the source.
    2. Now we calculate on the OV profile, not on the source level.
      1. We deliver only one value using the HUB_CALLBACK crosswalk.
    3. Once we receive the event we have access to ov:true – the golden profile
      1. Specialties is a list; each entry has a SourceName and SourceRank, so we pick the one with Rank 1 for the selected profile.
      2. SubTypeCode is a single attribute, so we can pick only the ov:true value.
    4. The 2 canonical codes are mapped to the TypeCode attribute as in the example below
  3. Activation/deactivation of profiles in Reltio and the Historical Inactive flow
    1. Snowflake: HI HCPType enrichment
    2. Snowflake: History Inactive 
    3. When the whole profile is deactivated, the HUB_CALLBACK technical crosswalks are hard-deleted, so HCPTypeCode will be hard-deleted
    4. This impacts HI views because the HUB_CALLBACK value will be dropped
    5. We implemented logic in the HI view that rebuilds the TypeCode attribute and puts the PRES/NON-PRESC value in the JSON file visible in the HI view.
  4. Reltio contains checksum logic and does not generate an event when the sourceCode changes but maps to the same canonical code
    1. We implemented delta detection logic and send an update only when a change is detected
      1. The lookup to RDM requires logic to resolve the HUB_CALLBACK code to a canonical code.
      2. A change is sent only when:
        1. Type does not exist
        2. Type changes from PRESC to NON-PRESC
        3. Type changes from NON-PRESC to PRESC


Example of new input values:

attributes:
"TypeCode": [
{
"value": "HCPST.M-SP.AN"
}
]

TYP.M is a SubTypeCode source code mapped to HCPST.M
SP.WDE.04 is a Speciality source code mapped to SP.AN

rdm/lookupTypes/HCPSubTypeCode:HCPST.M
rdm/lookupTypes/HCPSpecialty:SP.AN

Flow diagram

Logical Architecture

\"\"

HCPType PreCallback Logic

\"\"


Steps

Overview Reltio attributes and RDM

                {
                    "label": "Type",
                    "name": "TypeCode",
                    "description": "HCP Type Code",
                    "type": "String",
                    "hidden": false,
                    "important": false,
                    "system": false,
                    "required": false,
                    "faceted": true,
                    "searchable": true,
                    "attributeOrdering": {
                        "orderType": "ASC",
                        "orderingStrategy": "LUD"
                    },
                    "uri": "configuration/entityTypes/HCP/attributes/TypeCode",
                    "lookupCode": "rdm/lookupTypes/HCPType",
                    "skipInDataAccess": false
                },

Based on:

SubTypeCode:

                {
                    "label": "Sub Type",
                    "name": "SubTypeCode",
                    "description": "HCP SubType Code",
                    "type": "String",
                    "hidden": false,
                    "important": false,
                    "system": false,
                    "required": false,
                    "faceted": true,
                    "searchable": true,
                    "attributeOrdering": {
                        "orderType": "ASC",
                        "orderingStrategy": "LUD"
                    },
                    "uri": "configuration/entityTypes/HCP/attributes/SubTypeCode",
                    "lookupCode": "rdm/lookupTypes/HCPSubTypeCode",
                    "skipInDataAccess": false
                },

Speciality:

                        {
                            "label": "Specialty",
                            "name": "Specialty",
                            "description": "Specialty of the entity, e.g., Adult Congenital Heart Disease",
                            "type": "String",
                            "hidden": false,
                            "important": false,
                            "system": false,
                            "required": false,
                            "faceted": true,
                            "searchable": true,
                            "attributeOrdering": {
                                "orderingStrategy": "LUD"
                            },
                            "cardinality": {
                                "minValue": 0,
                                "maxValue": 1
                            },
                            "uri": "configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty",
                            "lookupCode": "rdm/lookupTypes/HCPSpecialty",
                            "skipInDataAccess": false
                        },

RDM

Codes:

rdm/lookupTypes/HCPType:HCPT.NPRS

rdm/lookupTypes/HCPType:HCPT.PRES

\"\"


HCPType PreCallback Logic

Flow:

  1. Component Startup
    1. during the Pre-Callback component startup we initialize an in-memory cache storing the 2 PRESC and NPRES values for the HUB_CALLBACK source
      1. This implementation limits the number of requests to Reltio RDM through the manager
      2. It also limits the number of API calls from the pre-callback service to the manager service
    2. The cache has a TTL configuration and is invalidated after the TTL expires
  2. Activation
    1. Check if the feature activation flag is true
    2. Take into account only CHANGED and CREATED events; this pre-callback implementation is limited to HCP objects
    3. Take into account only profiles whose crosswalks are not on the following list. When the profile contains only crosswalks related to this configuration list, skip the TypeCode generation. When the profile contains one of the following crosswalks and additionally a valid crosswalk like ONEKEY, generate a TypeCode.
      1. - type: not
        preconditions:
        - type: source
        values:
        - HUB_CALLBACK
        - NUCLEUS
        - LEGACYMDM
        - PFORCERX_ID
  3. Steps
    1. Each CHANGED or CREATED event triggers the following logic:
      1. Get the canonical code from HCP/attributes/SubTypeCode
        1. pick the lookupCode
        2. <fallback 1> if the lookupCode is missing and a lookupError exists, pick the value
        3. <fallback 2> if the SubTypeCode does not exist, put an empty value = ""
      2. Get the canonical code from the HCP/attributes/Specialities/attributes/Specialty array
        1. pick the specialty with Rank equal to 1
        2. pick the lookupCode 
        3. <fallback 1> if the lookupCode is missing and a lookupError exists, pick the value
        4. <fallback 2> if the Specialty does not exist, put an empty value = ""
      3. Combine the two canonical codes, using the "-" hyphen character as a separator (a sketch follows this list).
      4. possible values:
        1. <subtypecode_canonicalCode>-<speciality_canonicalCode>
        2. <subtypecode_canonicalCode>-""
        3. ""-<speciality_canonicalCode>
        4. ""-""
      5. Execute the delta detection logic:
        1. <transformation function>: using the RDM cache, translate the generated value to the PRESC or NPRES code
        2. Compare the generated value with HCP/attributes/TypeCode
          1. pick the lookupCode and compare it to the generated and translated value
          2. <fallback 1> if the lookupCode is missing and a lookupError exists, pick the value and compare it to the generated, untranslated value
        3. Generate:
          1. INSERT_ATTRIBUTE: when TypeCode does not exist
          2. UPDATE_ATTRIBUTE: when the value is different
        4. Forward the main event to the next processing topic when there are 0 changes.
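A minimal sketch of steps c and e (code composition plus delta detection); RDM access is reduced to a plain map lookup and all names are illustrative:

import java.util.Map;
import java.util.Optional;

class HcpTypeCallbackSketch {
    // Step c: combine the two canonical codes with a hyphen, e.g. "HCPST.M-SP.AN";
    // missing codes degrade to "" per the fallbacks above.
    static String composedCode(String subTypeCanonical, String specialtyCanonical) {
        return (subTypeCanonical == null ? "" : subTypeCanonical)
                + "-"
                + (specialtyCanonical == null ? "" : specialtyCanonical);
    }

    // Step e: translate the composed code to PRESC/NPRES via the cached RDM
    // mapping and compare it with the current TypeCode canonical code.
    static Optional<String> detectChange(String composed,
                                         Map<String, String> rdmCache,
                                         String currentTypeCodeCanonical) {
        String translated = rdmCache.get(composed);   // -> HCPT.PRES / HCPT.NPRS
        if (translated == null) return Optional.empty();
        return translated.equals(currentTypeCodeCanonical)
                ? Optional.empty()                    // in sync: forward the main event
                : Optional.of(translated);            // INSERT_ATTRIBUTE / UPDATE_ATTRIBUTE
    }
}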

Triggers

Trigger action: IN Events incoming
Component: Callback Service: Pre-Callback: HCP Type Callback logic
Action: Full events trigger the pre-callback stream and, during processing, partial events are processed with generated changes. If data is in sync, no partial event is generated and the main event is forwarded to external clients
Default time: realtime - events stream

Dependent components

Component - Usage
Callback Service - Main component of the flow implementation
Entity Enricher - Generates full events from incoming events
Manager - Processes callbacks generated by this service
Hub Store - HUB Mongo cache
LOV read - Lookup RDM values flow


" }, { "title": "China IQVIA<->COMPANY", "pageID": "263501508", "pageLink": "/display/GMDM/China+IQVIA%3C-%3ECOMPANY", "content": "

Description

The section and all subpages describe the HUB adjustments for China clients, including the transformation to the COMPANY model. HUB created logic that allows China clients a transparent transition between the IQVIA and COMPANY models. Additionally, the DCR process will be adjusted to the new COMPANY model; the new DCR process will eliminate many of the DCRs that are currently created in the IQVIA tenant. The changes and all flows are described in this section and the subpages linked below. 

HUB processed all the changes in MR-4191 – the MAIN task; to verify and track progress, please check Jira.

Flows

Triggers

Described in the separated sub-pages for each process.

Dependent components

Described in the separated sub-pages for each process.


Documents with HUB details

mapping China_attributes.xlsx

API: China_HUB_Changes.docx

dcr: China_HUB_DCR_Changes.docx


" }, { "title": "China IQVIA - current flow and user properties + COMPANY changes", "pageID": "284805827", "pageLink": "/pages/viewpage.action?pageId=284805827", "content": "

Description

On this page, the current IQVIA flow is described. It contains the full API description and the complex API on the IQVIA end, with all details about the HUB configuration and properties used for the China IQVIA model.

In the next section of this page, the COMPANY changes are described in a generic way. More details of the new COMPANY complex model and API adjustments were described in other subpages. 

IQVIA

COMPANY

The key concepts and general description of COMPANY adjustments:




" }, { "title": "China Selective Router - model transformation flow", "pageID": "284800572", "pageLink": "/display/GMDM/China+Selective+Router+-+model+transformation+flow", "content": "

Description

The China selective router was created to enrich and transform events from the COMPANY model to the IQVIA model. The component is also able to connect a related mainHco with an hco, based on the Reltio connections API; in the IQVIA model this is reflected as MainHco in the Workplace attribute.

Flow diagram

\"\"


\"\"

Steps

Triggers

Trigger action: kafka message
Component: eventTransformerTopology
Action: transform event to IQVIA model
Default time: realtime


Dependent components

Component - Usage
Mdm manager - getEntitisByUri, getEntityConnectionsByUri
HCPModelConverter - toIqviaModel
" }, { "title": "Create HCP/HCO complex methods - IQVIA model (legacy)", "pageID": "284800564", "pageLink": "/pages/viewpage.action?pageId=284800564", "content": "

Description

The IQVIA China user uses the following methods to create HCP and HCO objects - Create/Update HCP/HCO/MCO. The API call flow is described on the linked page. The most complex and important sections for China users are the following:

IQVIA China user also activates the DCR logic using this Create HCP method. The complex description of this flow is here DCR IQVIA flow

Currently, the DCR activation process from the IQVIA flow is described here - DCR generation process (China DCR)

New DCR COMPANY flow is described here: DCR COMPANY flow


The flow diagram and steps below describe in detail all cases used in the HCP, HCO and DCR methods in the legacy code.

Flow diagram

\"\"

Steps

HCP Service = China logic / STEPS:




 

HCO Service = China logic / STEPS:

Triggers

Trigger action: REST call
Component: Manager: POST/PATCH /hco /hcp /mco
Action: create specific objects in the MDM system
Default time: API synchronous requests - realtime
Operation link: Create/Update HCP/HCO/MCO

Trigger action: REST call
Component: Manager: GET /lookup
Action: get a lookup code from Reltio
Default time: API synchronous requests - realtime
Operation link: LOV read

Trigger action: REST call
Component: Manager: GET /entity?filter=(criteria)
Action: search for specific objects in the MDM system
Default time: API synchronous requests - realtime
Operation link: Search Entity

Trigger action: REST call
Component: Manager: GET /entity
Action: get an object from Reltio
Default time: API synchronous requests - realtime
Operation link: Get Entity

Trigger action: Kafka Request DCR
Component: Manager: Push Kafka DCR event
Action: push a Kafka DCR event
Default time: Kafka asynchronous event - realtime
Operation link: DCR IQVIA flow


Dependent components

Component - Usage
Manager - search entities in MDM systems
API Gateway - proxy REST and secure access
Reltio - Reltio MDM system
DCR Service - Old legacy DCR processor
" }, { "title": "Create HCP/HCO complex V2 methods - COMPANY model", "pageID": "284800566", "pageLink": "/pages/viewpage.action?pageId=284800566", "content": "

Description


This API is used to process complex HCP/HCO requests. It supports the management of MDM entities with the relationships between them. The user can provide data in the IQVIA or COMPANY model.


Flow diagram

\"\"

Flow diagram HCP (overview)

(details on main diagram)


\"\"

Steps HCP 

  1. Map HCP to COMPANY model
  2. Extract parent HCO - MainHCO attribute of affiliated HCO entity
  3. Execute search service for affiliated HCO and parent HCO
    1. If affiliated HCO or parent HCO not found in MDM system: execute trigger service
    2. Otherwise set entity URI for found objects
  4. Execute HCO complex service for HCO request - affiliated  HCO and parent HCO entities
  5. Map HCO response to contact affiliations HCP attribute
    1. create relation between HCP and affiliated HCO
    2. create relation between HCP and parent HCO
  6. Execute HCP simple service


HCP API search entity service

Search entity service is used to search for existing entities in the MDM system. This feature is configured per user via the searchConfigHcpApi attribute. The configuration is divided into entries for the affiliated HCO and the parent HCO and contains a list of searcher implementations - searcher type.

attribute - description
HCO - search configuration for the affiliated HCO entity
MAIN_HCO - search configuration for the parent HCO entity
searcherType - type of searcher implementation
attributes - attributes used for the attribute search implementation



HCP trigger service

Trigger service is used to execute an action when entities are missing in the MDM system. This feature is configured per user via the triggerType attribute; a dispatch sketch follows the table below.

trigger type - description
CREATE - create the missing HCO or parent HCO via the HCO complex API
DCR - create a DCR request for the missing objects
IGNORE - ignore missing objects; the flow continues, and missing objects and relations are not created
REJECT - reject the request, stop processing and return a response to the client
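A small dispatch sketch for the table above; the enum and method names are illustrative, not the HUB's actual API:

enum TriggerType { CREATE, DCR, IGNORE, REJECT }

class TriggerServiceSketch {
    // Called when an affiliated HCO or parent HCO is missing in the MDM system.
    static void onMissingEntity(TriggerType type) {
        switch (type) {
            case CREATE -> createViaHcoComplexApi(); // create the missing (parent) HCO
            case DCR    -> createDcrRequest();       // raise a DCR for the missing object
            case IGNORE -> { }                       // continue; skip objects and relations
            case REJECT -> throw new IllegalStateException("request rejected");
        }
    }
    static void createViaHcoComplexApi() { /* POST /v2/hco/complex */ }
    static void createDcrRequest()       { /* push a DCR event */ }
}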


Flow diagram HCO (overview)

(details on main diagram)

\"\"

Steps HCO

  1. Map the HCO request to the COMPANY model
  2. If the hco.uri attribute is null then create the HCO entity
  3. Create the relation
    1. if parentHCO.uri is not null then use it to create other affiliations
    2. if parentHCO.uri is null then use the search service to find the entity
      1. if the entity is found then use it to create other affiliations
      2. if the entity is not found then create the parentHCO and use it to create other affiliations
    3. if the relation exists then do nothing
    4. if the relation doesn't exist then create the relation


Triggers

Trigger action: REST call
Component: manager POST/PATCH v2/hcp/complex
Action: create HCP, HCO objects and relations
Default time: API synchronous requests - realtime

Trigger action: REST call
Component: manager POST/PATCH v2/hco/complex
Action: create HCO objects and relations
Default time: API synchronous requests - realtime


Dependent components

Component - Usage
Entity search service - search entity operation for the HCP API
Trigger service - get trigger result operation
Entity management service - get entity connections
" }, { "title": "Create HCP/HCO simple V2 methods - COMPANY model", "pageID": "284806830", "pageLink": "/pages/viewpage.action?pageId=284806830", "content": "

Description

V2 API simple methods are used to manage the Reltio entities - HCP/HCO/MCO.

They support basic HCP/HCO/MCO requests in the COMPANY model.

Flow diagram

\"\"

Steps

  1.  Crosswalk generator - auto-create crosswalk - if not exists
  2.  Entity validation
  3. Execute HTTP request - post entities Reltio operation
  4. Execute GetOrRegister COMPANYGlobalCustomerID operation


 

Crosswalk generator service

Crosswalk generator service is used to create a crosswalk when the entity crosswalk is missing. This feature is configured per user via the crosswalkGeneratorConfig attribute.

attribute - description
crosswalkGeneratorType - crosswalk generator implementation
type - crosswalk type value
sourceTable - crosswalk source table value


Triggers

Trigger action: REST call
Component: Manager: POST/PATCH /v2/hcp
Action: create HCP objects in the MDM system
Default time: API synchronous requests - realtime

Trigger action: REST call
Component: Manager: POST/PATCH /v2/hco
Action: create HCO objects in the MDM system
Default time: API synchronous requests - realtime

Trigger action: REST call
Component: Manager: POST/PATCH /v2/mco
Action: create MCO objects in the MDM system
Default time: API synchronous requests - realtime

Dependent components

Component - Usage
COMPANY Global Customer ID Registry - getOrRegister operation
Crosswalk generator service - generate crosswalk operation
" }, { "title": "DCR IQVIA flow", "pageID": "284800568", "pageLink": "/display/GMDM/DCR+IQVIA+flow", "content": "

Description

The following page contains a detailed description of IQVIA DCR flow for China clients. The logic is complicated and contains multiple relations.

Currently, it contains the following:

Complex business rules for generating DCRs,

Limited flexibility with IQVIA tenants,

Complex end-to-end technical processes (e.g., hand-offs, transfers, etc.)


The flow is related to numerous file transfers & hand-offs.

The idea is to make a simplified flow in the COMPANY model - details described here - DCR COMPANY flow


The below diagrams and description contain the current state that will be deprecated in the future.

Flow diagram - Overview - high level

\"\"

Flow diagram - Overview - simplified view


\"\"

Steps

\"\"



HUB LOGIC

HUB Configuration overview:

DCR CONFIG AND CLASSES:

Logic is in the MDM-MANAGER

 

Config:

dcrConfig:
  dcrProcessing: yes
  routeEnableOnStartup: yes
  deadLetterEndpoint: "file:///opt/app/log/rejected/"
  externalLogActive: yes
  activationCriteria:
    NEW_HCO:
      - country: "CN"
        sources:
          - "CN3RDPARTY"
          - "FACE"
          - "GRV"
    NEW_HCP:
      - country: "CN"
        sources:
          - "GRV"
    NEW_WORKPLACE:
      - country: "CN"
        sources:
          - "GRV"
          - "MDE"
          - "FACE"
          - "CN3RDPARTY"
          - "EVR"
  continueOnHCONotFoundActivationCriteria:
    - country: "CN"
      sources:
        - "GCP"
    - countries:
        - AD
        - BL
        - BR
        - DE
        - ES
        - FR
        - GF
        - GP
        - IT
        - MC
        - MF
        - MQ
        - MU
        - MX
        - NC
        - NL
        - PF
        - PM
        - RE
        - RU
        - TR
        - WF
        - YT
      sources:
        - GRV
        - GCP
  validationStatusesMap:
    VALID: validated
    NOT_VALID: notvalidated
    PENDING: pending

Flow diagram - DCR Activation

\"\"

Steps

IQVIA/China  ACTIVATION LOGIC/ACTIVATION CRITERIA:


Kafka DCR sender - produce event to Kafka Topic



Flow diagram - DCR event Receiver (DCR processor)

\"\"

Steps


NewHCPDCRService - STEPS  - Process DCR Custom Logic (NEW_HCP)


NewHCODCRService - STEPS  - Process DCR Custom Logic (NEW_HCO, NEW_HCO_L1,NEW_HCO_L2)


NewWorkplaceDCRService - STEPS  - Process DCR Custom Logic (NEW_WORKPLACE)


Flow diagram - DCR Response - process DCR Response from API client

\"\"

Steps


IQVIA/China DCRResponseRoute:

DCR response processing:

REST API

Activated by the china_apps user based on the IQVIA EVRs export

Used by the China client to accept/reject (action) a DCR in Reltio
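
For illustration, a client could call the accept endpoint from the Triggers table below like this; only the path comes from this page, while the host and auth scheme are assumptions.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Illustrative sketch of POST /dcrResponse/{id}/accept; the host is a
// placeholder, not the real HUB endpoint.
fun acceptDcr(dcrId: String, token: String): Int {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://hub.example.com/dcrResponse/$dcrId/accept"))
        .header("Authorization", "Bearer $token")
        .POST(HttpRequest.BodyPublishers.noBody())
        .build()
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .statusCode()
}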


Triggers

Trigger action | Component | Action | Default time | Operation link | Details
REST call | Manager: POST/PATCH /hcp | create specific objects in MDM system | API synchronous requests - realtime | Create/Update HCP/HCO/MCO | Initializes the DCR request
Kafka Request DCR | Manager: Push Kafka DCR event | push Kafka DCR event | Kafka asynchronous event - realtime | DCR IQVIA flow | Push DCR event to DCR processor
Kafka Request DCR | DCRServiceRoute: Poll Kafka DCR event | consumes Kafka DCR events | Kafka asynchronous event - realtime | DCR IQVIA flow | Polls/consumes DCR events and processes them
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/accept | updates DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to accept DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/updateHCP | updates DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to update HCP through DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/updateHCO | updates DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to update HCO through DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/updateAffiliations | updates DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to update HCO to HCO affiliations through DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/reject | updates DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to reject DCR
Rest call - DCR response | Manager: DCRResponseRoute POST /dcrResponse/{id}/merge | updates DCR by API (accept/reject etc.) | API synchronous requests - realtime | DCR IQVIA flow | API to merge DCR HCP entities


Dependent components

Component | Usage
Manager | search entities in MDM systems
API Gateway | proxy REST and secure access
Reltio | Reltio MDM system
Manager | old legacy DCR processor
" }, { "title": "DCR COMPANY flow", "pageID": "284800570", "pageLink": "/display/GMDM/DCR+COMPANY+flow", "content": "

Description

TBD 

Flow diagram (drafts)

\"\"




\"\"




Steps

TBD


Triggers

Trigger action | Component | Action | Default time






Dependent components

Component | Usage



" }, { "title": "Model Mapping (IQVIA<->COMPANY)", "pageID": "284800575", "pageLink": "/pages/viewpage.action?pageId=284800575", "content": "

Description

The interface is used to map MDM entities between the IQVIA and COMPANY models.

Flow diagram

-

Mapping

Address ↔ Addresses attribute mapping

IQVIA MODEL ATTRIBUTE [Address] | COMPANY MODEL ATTRIBUTE [Addresses]
Address.Premise | Addresses.Premise
Address.Building | Addresses.Building
Address.VerificationStatus | Addresses.VerificationStatus
Address.StateProvince | Addresses.StateProvince
Address.Country | Addresses.Country
Address.AddressLine1 | Addresses.AddressLine1
Address.AddressLine2 | Addresses.AddressLine2
Address.AVC | Addresses.AVC
Address.City | Addresses.City
Address.Neighborhood | Addresses.Neighborhood
Address.Street | Addresses.Street
Address.Geolocation.Latitude | Addresses.Latitude
Address.Geolocation.Longitude | Addresses.Longitude
Address.Geolocation.GeoAccuracy | Addresses.GeoAccuracy
Address.Zip.Zip4 | Addresses.Zip4
Address.Zip.Zip5 | Addresses.Zip5
Address.Zip.PostalCode | Addresses.POBox

Phone attribute mappings

IQVIA MODEL ATTRIBUTE | COMPANY MODEL ATTRIBUTE
Phone.LineType | Phone.LineType
Phone.LocalNumber | Phone.LocalNumber
Phone.Number | Phone.Number
Phone.FormatMask | Phone.FormatMask
Phone.GeoCountry | Phone.GeoCountry
Phone.DigitCount | Phone.DigitCount
Phone.CountryCode | Phone.CountryCode
Phone.GeoArea | Phone.GeoArea
Phone.FormattedNumber | Phone.FormattedNumber
Phone.AreaCode | Phone.AreaCode
Phone.ValidationStatus | Phone.ValidationStatus
Phone.TypeIMS | Phone.Type
Phone.Active | Phone.Privacy.OptOut

Email attribute mappings

IQVIA MODEL ATTRIBUTE | COMPANY MODEL ATTRIBUTE
Email | Email
Email.Domain | Email.Domain
Email.DomainType | Email.DomainType
Email.ValidationStatus | Email.ValidationStatus
Email.TypeIMS | Email.Type
Email.Active | Email.Privacy.OptOut
Email.Username | Email.Source.SourceName

HCO mappings

IQVIA MODEL ATTRIBUTE | COMPANY MODEL ATTRIBUTE
Country | Country
Name | Name
TypeCode | TypeCode
SubTypeCode | SubTypeCode
CMSCoveredForTeaching | CMSCoveredForTeaching
Commenters | Commenters
CommHosp | CommHosp
Description | Description
Fiscal | Fiscal
GPOMembership | GPOMembership
HealthSystemName | HealthSystemName
NumInPatients | NumInPatients
ResidentProgram | ResidentProgram
TotalLicenseBeds | TotalLicenseBeds
TotalSurgeries | TotalSurgeries
VADOD | VADOD
Academic | Academic
KeyFinancialFiguresOverview.SalesRevenueUnitOfSize | KeyFinancialFiguresOverview.SalesRevenueUnitOfSize
ClassofTradeNSpecialty | ClassofTradeNSpecialty
ClassofTradeNClassification | ClassofTradeNClassification
Identifiers.ID | Identifiers.ID
Identifiers.Type | Identifiers.Type
SourceName | OriginalSourceName
NumOutPatients | OutPatientsNumbers
Status | ValidationStatus
UpdateDate | SourceUpdateDate
WebsiteURL | Website.WebsiteURL
OtherNames | OtherNames.Name
- | OtherNames.Type (constant: OTHER_NAMES)
OfficialName | OtherNames.Name
- | OtherNames.Type (constant: OFFICIAL_NAME)
Address* | Addresses*
Phone* | Phone*


HCP mappings

IQVIA MODEL ATTRIBUTE | COMPANY MODEL ATTRIBUTE | DESCRIPTION
Country | Country
DoB | DoB
FirstName | FirstName | case (IQVIA -> COMPANY): if IQVIA(FirstName) is empty then IQVIA(Name) is used as the COMPANY(FirstName) mapping result
LastName | LastName | case (IQVIA -> COMPANY): if IQVIA(LastName) is empty then IQVIA(Name) is used as the COMPANY(LastName) mapping result
Name | Name
NickName | NickName
Gender | Gender
PrefferedLanguage | PrefferedLanguage
Prefix | Prefix
SubTypeCode | SubTypeCode
Title | Title
TypeCode | TypeCode
PresentEmployment | PresentEmployment
Certificates | Certificates
License | License
Identifiers.ID | Identifiers.ID
Identifiers.Type | Identifiers.Type
UpdateDate | SourceUpdateDate
SourceName | SourceValidation.SourceName
ValidationChangeDate | SourceValidation.ChangeDate
ValidationStatus | SourceValidation.Status
Speaker.SpeakerLevel | SpeakerLevel
Speaker.SpeakerType | SpeakerType
Speaker.SpeakerStatus | SpeakerStatus
Speaker.IsSpeaker | IsSpeaker
DPPresenceChannelCode | DigitalPresenceChannelCode
METHOD PARAM<Workplaces> | ContactAffiliations | case (IQVIA -> COMPANY): the workplaces param is converted to HCOs and added to ContactAffiliations
METHOD PARAM<MainWorkplaces> | ContactAffiliations | case (IQVIA -> COMPANY): the main workplaces param is converted to HCOs and added to ContactAffiliations
Workplace | METHOD PARAM<Workplaces> | case (COMPANY -> IQVIA): the workplaces param is converted to HCOs and assigned to Workplace
MainWorkplace | METHOD PARAM<MainWorkplaces> | case (COMPANY -> IQVIA): the main workplaces param is converted to HCOs and assigned to MainWorkplace
Address* | Addresses*
Phone* | Phone*
Email* | Email*
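
The FirstName/LastName fallback above is the only conditional mapping in the table; a minimal sketch, with simplified types standing in for the real EntityKt model class, could look like this:

data class IqviaHcp(val name: String?, val firstName: String?, val lastName: String?)
data class CompanyHcp(val name: String?, val firstName: String?, val lastName: String?)

// IQVIA -> COMPANY: when FirstName/LastName is empty, IQVIA Name is used
// as the mapping result, as described in the table above.
fun toCompanyModel(src: IqviaHcp): CompanyHcp = CompanyHcp(
    name = src.name,
    firstName = src.firstName?.takeIf { it.isNotBlank() } ?: src.name,
    lastName = src.lastName?.takeIf { it.isNotBlank() } ?: src.name
)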

Triggers

Trigger action | Component | Action | Default time
Method invocation | HCPModelConverter.class | toCOMPANYModel(EntityKt iqiviaModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces, List<AttributeValueKt> addresses) | realtime
Method invocation | HCPModelConverter.class | toCOMPANYModel(EntityKt iqiviaModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces) | realtime
Method invocation | HCPModelConverter.class | toIqiviaModel(EntityKt COMPANYModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces) | realtime
Method invocation | HCOModelConverter.class | toCOMPANYModel(EntityKt iqiviaModel) | realtime
Method invocation | HCOModelConverter.class | toIqiviaModel(EntityKt COMPANYModel) | realtime


Dependent components

Component | Usage
data-model | Mapper uses models to convert between them
" }, { "title": "User Profile (China user)", "pageID": "284800562", "pageLink": "/pages/viewpage.action?pageId=284800562", "content": "

Description

The user profile has new attributes used in the V2 API.


Attribute | Description
searchConfigHcpApi | config of the search entity service for the HCP API - contains HCO/MAIN_HCO search entity type configuration
searchConfigHcoApi | config of the search entity service for the HCO API
searcherType | type of searcher implementation; available values: [UriEntitySearch/CrosswalkEntitySearch/AttributesEntitySearch]
attributes | attribute names used in AttributesEntitySearch
triggerType | V2 HCP/HCO complex API trigger configuration - action executed when there are missing entities in the request; available values: [REJECT/IGNORE/DCR/CREATE]
crosswalkGeneratorConfig | auto-create entity crosswalk - if missing in the request
crosswalkGeneratorType | type of crosswalk generator; available values: [UUID]
type | auto-generated crosswalk type value
sourceTable | auto-generated crosswalk source table value
sourceModel | source model of the entity provided by the user for V2 HCP/HCO complex; available values: [COMPANY, IQVIA]



\"\"


Flow diagram

TBD

Steps

TBD


Triggers

Trigger action | Component | Action | Default time






Dependent components

Component | Usage



" }, { "title": "User", "pageID": "284811104", "pageLink": "/display/GMDM/User", "content": "


The user is configured with a profile that is shared between all MDM services. The configuration is provided via YAML files and loaded at boot time. To use the profile in any application, import the com.COMPANY.mdm.user.UserConfiguration configuration from the mdm-user module. This allows you to use the UserService class, which is used to retrieve users.
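
A minimal usage sketch, assuming a Spring context; the retrieval method and the profile field used here are assumptions based on the attribute table below, not the verified mdm-user API:

import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.Import
import com.COMPANY.mdm.user.UserConfiguration
import com.COMPANY.mdm.user.UserService

// Import the shared user profile support into the application context.
@Configuration
@Import(UserConfiguration::class)
class AppUserConfig

// Hypothetical consumer: getUser() and the countries field are assumed.
class AllowedCountriesCheck(private val userService: UserService) {
    fun isCountryAllowed(userName: String, country: String): Boolean =
        userService.getUser(userName).countries.contains(country)
}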


User profile configuration

attribute | description
name | user name
description | user description
token | token used for authentication
getEntityUsesMongoCache | retrieve the entity from the Mongo cache in the get entity operation
lookupsUseMongoCache | retrieve lookups from the Mongo cache in LookupService
trim | trimming entities/relationships in the response to the client
guardrailsEnabled | check if a contributor provider crosswalk exists alongside a data provider crosswalk
roles | user permissions
countries | user allowed countries
sources | user allowed crosswalks
defaultClient | default MDM client name
validationRulesForValidateEntityService | validation rules configuration
batches | user allowed batches configuration
defaultCountry | user default country, used in api-router when the country is not provided in the request
overrideZones | user country-zone configuration that overrides the default api-router behavior
kafka | user Kafka configuration, used in the Kafka management service
reconciliationTargets | reconciliation targets, used in the event resend service



" }, { "title": "Country Cluster", "pageID": "234715057", "pageLink": "/display/GMDM/Country+Cluster", "content": "

General assumptions

Example of mapping: 

Country | countryCluster
Andorra (AD) | France (FR)
Monaco (MC) | France (FR)

Changes in MDM HUB

1. Enrichment of Kafka events with the extra parameter defaultCountryCluster

2. Addition of a new column COUNTRY_CLUSTER representing the default country cluster in views

3. Handling the cluster country sent by PforceRx in the DCR process in a transparent way

Change in the event model

{
  "eventType": "HCP_CHANGED",
  "eventTime": 1514976138977,
  "countryCode": "MC",
  "defaultCountryCluster": "FR",
  "entitiesURIs": ["entities/ysCkGNx"],
  "targetEntity": {
    "uri": "entities/ytY3wd9",
    "type": "configuration/entityTypes/HCP",
    ...
  }
}

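A minimal sketch of the enrichment step, assuming the country-to-cluster mapping is available as a simple map (values taken from the example table above; the HUB's real configuration and event classes are not shown on this page):

// Illustrative only.
val countryToCluster = mapOf("AD" to "FR", "MC" to "FR")

// Adds defaultCountryCluster to the event when its countryCode is mapped.
fun enrich(event: MutableMap<String, Any>): MutableMap<String, Any> {
    val country = event["countryCode"] as? String
    countryToCluster[country]?.let { event["defaultCountryCluster"] = it }
    return event
}
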
Changes on client-side

  1. MULE
  2. ODS


" }, { "title": "Create/Update HCP/HCO/MCO", "pageID": "164470018", "pageLink": "/pages/viewpage.action?pageId=164470018", "content": "

Description

The REST interfaces exposed through the MDM Manager component are used by clients to update or create HCP/HCO/MCO objects. The update process is supported by all connected MDMs - Reltio and Nucleus360 - with some limitations. At the moment, Reltio MDM is fully supported for the entity types HCP, HCO and MCO, while Nucleus360 supports only the HCP update process. The decision which MDM should process the update request is controlled by configuration: a configuration map defines the assignment of each country to the MDM that stores that country's data. Based on this map, MDM Manager selects the correct MDM system to forward the update request to.
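
A minimal sketch of the routing decision described above; the country assignments shown here are invented for illustration, not the real configuration:

enum class MdmSystem { RELTIO, NUCLEUS }

// Hypothetical country-to-MDM configuration map.
val countryToMdm = mapOf("US" to MdmSystem.RELTIO, "JP" to MdmSystem.NUCLEUS)

// Manager forwards the update to the MDM that stores the country's data;
// defaulting to Reltio for unmapped countries is an assumption.
fun routeUpdate(countryCode: String): MdmSystem =
    countryToMdm[countryCode] ?: MdmSystem.RELTIO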

The difference between Create and Update operations is the additional API request during the update operation. During the update, an entity is retrieved from the MDM by the crosswalk value for validation purposes. 

Diagrams 1 and 2 present the standard flow. In diagrams 3, 4, 5 and 6, the additional logic is optional and is activated once the specific condition or attribute is provided.

The diagrams below present a sequence of steps in processing client calls.

Update 2023-09:

To increase Update HCP/HCO/MCO performance, the logic was slightly altered:

Flow diagram

1Create HCP/HCO/MCO

\"\"

2 Update HCP/HCO/MCO

\"\"

3 (additional optional logic) Create/Update HCO with ParentHCO 

\"\"

4 (additional optional logic) Create/Update HCP with AffiliatedHCO&Relation

\"\"

5 (additional optional logic) Create/Update HCO with ParentHCO 



\"\"


6 (additional optional logic) Create/Update HCP with source crosswalk replace 

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
REST call | Manager: POST/PATCH /hco /hcp /mco | create specific objects in MDM system | API synchronous requests - realtime

Dependent components

Component | Usage
Manager | create/update Entities in MDM systems
API Gateway | proxy REST and secure access
Reltio | Reltio MDM system
Nucleus | Nucleus MDM system





" }, { "title": "Create/Update Relations", "pageID": "164469796", "pageLink": "/pages/viewpage.action?pageId=164469796", "content": "

Description

The operation creates or updates a Relation. MDM Manager manages the relations in the Reltio MDM system. The user can update a specific relation using a crosswalk to match it, or create a new object using unique crosswalks and information about the start and end objects.

The detailed process flow is shown below.

Flow diagram

Create/Update Relation

\"\"

Steps

  1. The client sends HTTP requests to the MDM Manager endpoint.
  2. Kong Gateway receives requests and handles authentication.
  3. If the authentication succeeds, the request is forwarded to the MDM Manager component.
  4. MDM Manager checks user permissions to call createRelation/updateRelation operation and the correctness of the request.
  5. If the user's permissions are correct, MDM Manager proceeds with the create/update operation.
  6. OPTIONALLY: after a successful update (ResponseStatus != failed), relations are cached in MongoDB; the relations are then reused in the ReferenceAttributeEnrichment Service (currently configured for the GBLUS ONEKEY Affiliations). This is required to enrich these relations onto the HCP/HCO objects during the update, which prevents losing reference attributes during the HCP create operation.
  7. OPTIONALLY: the PATCH operation adds the PARTIAL_OVERRIDE header to the Reltio request, switching it to the partial update operation.
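
For illustration, a client PATCH call could look like the sketch below (host, token and body are placeholders); per step 7, the Manager then adds the PARTIAL_OVERRIDE header towards Reltio to switch to a partial update:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Illustrative PATCH /relations request; only the endpoint path comes from
// this page.
fun patchRelation(jsonBody: String, token: String): Int {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://hub.example.com/relations"))
        .header("Authorization", "Bearer $token")
        .header("Content-Type", "application/json")
        .method("PATCH", HttpRequest.BodyPublishers.ofString(jsonBody))
        .build()
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .statusCode()
}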


Triggers

Trigger action | Component | Action | Default time
REST call | Manager: POST/PATCH /relations | create or update the Relations in MDM system | API synchronous requests - realtime

Dependent components

Component | Usage
Manager | create or update the Relations in MDM system
" }, { "title": "Create/Update/Delete tags", "pageID": "172295228", "pageLink": "/pages/viewpage.action?pageId=172295228", "content": "

The REST interfaces exposed through the MDM Manager component are used by clients to create, update or delete tags assigned to entity objects. The difference between create and update is that tags are added, and if the option returnObjects is set to true, all previously added and new tags are returned. The delete action removes one tag.

The diagrams below present a sequence of steps in processing client calls.

Flow diagram

  1. Create tag
  2. Update tag
  3. Delete tag


Steps

Triggers

Trigger action | Component | Action | Default time
REST call | Manager: POST/PATCH/DELETE /entityTags | create specific objects in MDM system | API synchronous requests - realtime

Dependent components

Component | Usage
Manager | create/update/delete Entity Tags in MDM systems
API Gateway | proxy REST and secure access
Reltio | Reltio MDM system



" }, { "title": "DCR flows", "pageID": "415205424", "pageLink": "/display/GMDM/DCR+flows", "content": "

Overview

The DCR (Data Change Request) process helps to improve existing data in source systems. A proposal for change is created by a source system as a DCR object (sometimes also called a VR - Validation Request), which is usually routed by MDM HUB to Data Stewards (DS) either in Reltio or in third-party validators (OneKey, Veeva OpenData). The response is provided twofold:

  • response for specific DCR - metadata
  • profile data update as a direct effect of a DCR processing - payload


General DCR process flow

High level solution architecture for DCR flow


\"\"

Source: Lucid




Solution for OneKey (OK)

\"\"


Solution for Veeva OpenData (VOD)

\"\"


Architecture highlights

  • Actors involved: PforceRX, Reltio, HUB, OneKey
  • Key components: DCR Service 2 (second version) for AMER, EMEA, APAC, US tenants
  • Process details:
    • DCRs are created directly by PforceRx using the HUB's DCR API
    • PforceRx checks for DCR status updates every 24h → finds out which DCRs have been updated (since the last check 24h ago) and then pulls details for each one with /dcr/_status
    • Integration with OneKey is realized by APIs - DCRs are created with /vr/submit and their status is verified every 8h with /vr/trace
    • Data profile updates (payload) are delivered via CSV and S3 and ETLed (VOD batch) to Reltio with COMPANY's help
    • DCRRegistry & DCRRegistryVeeva collections are used in Mongo for tracking purposes





Architecture highlights

  • Actors involved: Data Stewards in Reltio, HUB, Veeva OpenData (VOD)
  • Key components: DCR Service 2 (second version) for AMER, EMEA, APAC, US tenants
  • Process details:
    • DCRs are created by Data Stewards (DSs) in Reltio via Suggest / Send to 3rd Party Validation - input for the DSs is provided by reports from PforceRx
    • Communication with Veeva is via S3<>SFTP and synchronization GMTF jobs. DCRs are sent and received in batches every 24h
    • DCR metadata is exchanged via multiple zipped CSV files
    • Data profile updates (payload) are delivered via CSV and S3 and ETLed (VOD batch) to Reltio with COMPANY's help
    • DCRRegistry & DCRRegistryONEKEY collections are used in Mongo for tracking purposes

Solution for IQVIA Highlander (HL) 

\"\"



Solution for OneKey on GBLUS - sources ICEU, Engage, GRV


Architecture highlights

  • Actors involved: Veeva on behalf of PforceRX, Reltio, HUB, IQVIA wrapper
  • Key components: DCR Service (first version) for GBLUS tenant
  • Process details:
    • DCRs are created by sending CSV requests by Veeva - based on information acquired from PforceRx
    • Integration HUB <> Veeva → via files and S3<>SFTP. HUB confirms DCR creation by returning file reports back to Veeva
    • Integration HUB <> IQVIA wrapper → via files and S3
    • HUB is responsible for translation of Veeva DCR CSV format to IQVIA CSV wrapper which then creates DCR in Reltio
    • Data Stewards approve or reject the DCRs in Reltio which updates data profiles accordingly. 
    • PforceRx receives update about changes in Reltio
    • DCRRequest collection is used in Mongo for tracking purposes

Architecture highlights (draft)

  • Actors involved: HUB, IQVIA wrapper
  • Key components: DCR Service (first version) for GBLUS tenant
  • Process details:
    • POST events from sources are captured - some of them are translated to direct DCRs, and some are gathered and then pushed via flat files to OneKey to be transformed into DCRs

 


" }, { "title": "DCR generation process (China DCR)", "pageID": "164470008", "pageLink": "/pages/viewpage.action?pageId=164470008", "content": "

The gateway supports following DCR types:


DCR generation processes are handled in two steps:

  1. During HCP modification - if the initial activation criteria are met, a DCR request is generated and published to the KAFKA <env>-gw-dcr-requests topic.
  2. In the next step, the internal Camel route DCRServiceRoute reads the generated requests from the topic and processes them as follows:
    1. checks if the time specified by delayPrcInSeconds has elapsed since request generation - this makes sure that the Reltio batch match process has finished and newly inserted profiles have merged with the existing ones (see the sketch after this list);
    2. checks if the entity that caused DCR generation still exists;
    3. checks the full activation criteria (table below) on the latest state of the target entity; if the criteria are not met then the request is closed;
    4. creates the DCR in Reltio;
    5. updates external info;
    6. creates a COMPANYDataChangeRequest entity in Reltio for tracking and exporting purposes.
  3. Created DCRs are exported by the Informatica ETL process managed by IQVIA.
  4. The DCR applying process (reject/approve actions) is executed through the MDM HUB DCR response API by the external app managed by the MDE team.
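
The delay check in step 2.1 is simple enough to sketch; the function below is illustrative, not the actual Camel route code:

// Returns true once delayPrcInSeconds has elapsed since the DCR request
// was generated, so the Reltio batch match process has had time to merge
// newly inserted profiles.
fun isReadyForProcessing(
    requestGeneratedAtMillis: Long,
    delayPrcInSeconds: Long,
    nowMillis: Long = System.currentTimeMillis()
): Boolean = nowMillis - requestGeneratedAtMillis >= delayPrcInSeconds * 1000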


The table below presents DCR activation criteria handled by system.

Table 9. DCR activation criteria





Rule | NewHCP | MultiAffiliation | NewHCOL2 | NewHCOL1
Country in | CN | CN | CN | CN
Source in | GRV | GRV, MDE, FACE, EVR, CN3RDPARTY | GRV, FACE, CN3RDPARTY | GRV, FACE, CN3RDPARTY
ValidationStatus in | pending, partial-validated (or, if merged: OV: notvalidated, GRV nonOV: pending/partial-validated) | validated, pending | validated, pending | validated, pending
SpeakerStatus in | enabled, null | enabled, null | enabled, null | enabled, null
Workplaces count | - | >1 | - | -
Hospital found | true | true | false | true
Department found | true | true | - | false
Similar DCR created in the past | false | false | false | false


Update: December 2021

\"\"

" }, { "title": "HL DCR [Decommissioned April 2025]", "pageID": "164470085", "pageLink": "/pages/viewpage.action?pageId=164470085", "content": "

Contacts

Vendor | Contact
PforceRX | DL-PForceRx-SUPPORT@COMPANY.com
IQVIA (DCR Wrapper) | COMPANY-MDM-Support@iqvia.com


As a part of the Highlander project, a DCR processing flow was created which realizes the following scenarios:

  1. Update HCP account details i.e. specialty, address, name (different sources of elements),
  2. Add new HCP account with primary affiliation to an existing organization,
  3. Add new HCP account with a new business account,
  4. Update HCP and add affiliation to a new HCO,
  5. Update HCP account details and remove existing details i.e. birth date, national id, …,
  6. Update HCP account and add new non primary affiliation to an existing organization,
  7. Update HCP account and add new primary affiliation to an existing organization,
  8. Update HCP account inactivate primary affiliation. Person account has more than 1 affiliation,
  9. Update HCP account inactivate non primary affiliation. Person account has more than 1 affiliation,
  10. Inactivate HCP account,
  11. Update HCP and add a private address,
  12. Update HCP and update existing private address,
  13. Update HCP and inactivate a private address,
  14. Update HCO details i.e. address, name (different sources of elements),
  15. Add new HCO account,
  16. Update HCO and remove details,
  17. Inactivate HCO account,
  18. Update HCO address,
  19. Update HCO and add new address,
  20. Update HCO and inactivate address,
  21. Update HCP's existing affiliation.


The above cases have been aggregated into six generic types in the internal HUB model:

  1. NEW_HCP_GENERIC - represents cases when a new HCP object is created with or without an affiliation to an HCO,
  2. UPDATE_HCP_GENERIC - aggregates cases when an existing HCP object is changed,
  3. DELETE_HCP_GENERIC - represents the case when an HCP is deactivated,
  4. NEW_HCO_GENERIC - aggregates scenarios when a new HCO object is created with or without affiliations to a parent HCO,
  5. UPDATE_HCO_GENERIC - represents cases when an existing HCO object is changed,
  6. DELETE_HCO_GENERIC - represents the case when an HCO is deactivated.


General Process Overview

\"\"


Process steps:

  1. Veeva uploads the DCR request file to the FTP location,
  2. The PforceRx Channel component downloads the DCR request file,
  3. PforceRx Channel validates and maps each DCR request to the internal model,
  4. PforceRx Channel sends the request to DCR Service,
  5. DCR Service processes the request: validating, enriching and mapping to the IQVIA DCR Wrapper,
  6. PforceRx Channel prepares the report file containing the technical status of DCR processing - at this time, the report contains only requests which didn't pass validation,
  7. A scheduled process in DCR Service prepares the Wrapper requests file and uploads it to the S3 location.
  8. DCR Wrapper processes the file: creating DCRs in Reltio or rejecting the request due to errors. After that, the response file is published to the S3 location,
  9. DCR Service downloads the response and updates the DCRs' status,
  10. A scheduled process in PforceRx Channel gets the DCR requests and prepares the next technical report - at this time the report has the technical status which comes from DCR Wrapper,
  11. DCRs that were created by DCR Wrapper are reviewed by Data Stewards. A DCR can be accepted or rejected,
  12. After accepting or rejecting a DCR, Reltio publishes a message about this event,
  13. DCR Service consumes the message and updates the DCR status,
  14. PforceRx Channel gets the DCR data to prepare a response file. The response file contains the final status of the DCRs' processing in Reltio.


Veeva DCR request file specification

The specification is available at the following location:

https://COMPANY-my.sharepoint.com/:x:/r/personal/chinj2_COMPANY_com/Documents/Mig%20In-Prog/Highlander/PMO/09%20Integration/LATAM%20Reltio%20DCR/DCR_Reltio_T144_Field_Mapping_Reltio.xlsx


DCR Wrapper request file specification

The specification is available at the following link:

https://COMPANY.sharepoint.com/:x:/r/sites/HLDCR/Shared%20Documents/ReltioCloudMDM_LATAM_Highlander_DCR_DID_COMPANY__DEVMapping_v2.1.xlsx





" }, { "title": "OK DCR flows (GBLUS)", "pageID": "164469877", "pageLink": "/pages/viewpage.action?pageId=164469877", "content": "

Description

The process is responsible for creating DCRs in Reltio and starting the Change Request Workflow for singleton entities created in Reltio. During this process, communication with the IQVIA OneKey VR API is established. The SubmitVR operation is executed to create a new Validation Request, and the TraceVR operation is executed to check the status of the VR in OneKey. All DCRs are saved in a dedicated collection in the HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. Some changes can be suggested by the DS using the "Suggest" operation in Reltio and the "Send to Third Party Validation" button; the "Data Steward OK Validation Request" process processes these changes and sends them to the OneKey service.

The process is divided into 4 sections:

  1. Submit Validation Request
  2. Trace Validation Request
  3. Data Steward Response
  4. Data Steward OK Validation Request

The diagram below presents an overview of the entire process. Detailed descriptions are available in the separate subpages.

Flow diagram

\"\"

Model diagram

\"\"

Steps

Triggers

Described in the separate sub-pages for each process.

Dependent components

Described in the separate sub-pages for each process.

" }, { "title": "Data Steward OK Validation Request", "pageID": "172306908", "pageLink": "/display/GMDM/Data+Steward+OK+Validation+Request", "content": "

Description

The process handles the DS-suggested changes based on the Change Request events received from Reltio (publishing) that are marked with the ThirdPartyValidation flag. The "suggested" changes are retrieved using the "preview" method and sent to IQVIA OneKey or Veeva OpenData for validation. After a successful submitVR response, HUB closes/rejects the existing DCR in Reltio and additionally creates a new DCR object with a relation to the entity in Reltio for tracking and status purposes.

Because of an ONEKEY interface limitation, the removal of attributes is sent to IQVIA as a comment.

Flow diagram

\"\"


Steps


 ONEKEY Comparator (suggested changes)

HCP

Reltio Attribute | ONEKEY attribute | mandatory type | attribute type
FirstName | individual.firstName | optional | simple value
LastName | individual.lastName | mandatory | simple value
Country | isoCod2 | mandatory | simple value
Gender | individual.genderCode | optional | simple lookup
Prefix | individual.prefixNameCode | optional | simple lookup
Title | individual.titleCode | optional | simple lookup
MiddleName | individual.middleName | optional | simple value
YoB | individual.birthYear | optional | simple value
Dob | individual.birthDay | optional | simple value
TypeCode | individual.typeCode | optional | simple lookup
PreferredLanguage | individual.languageEid | optional | simple value
WebsiteURL | individual.website | optional | simple value
Identifier value 1 | individual.externalId1 | optional | simple value
Identifier value 2 | individual.externalId2 | optional | simple value
Addresses[] | address.country, address.city, address.addressLine1, address.addressLine2, address.Zip5 | mandatory | complex (nested)
Specialities[] | individual.speciality1 / 2 / 3 | optional | complex (nested)
Phone[] | individual.phone | optional | complex (nested)
Email[] | individual.email | optional | complex (nested)
Contact Affiliations[] | workplace.usualName, workplace.officialName, workplace.workplaceEid | optional | Contact Affiliation
ONEKEY crosswalk | individual.individualEid | mandatory | ID

HCO

Reltio Attribute | ONEKEY attribute | mandatory type | attribute type
Name | workplace.usualName, workplace.officialName | optional | simple value
Country | isoCod2 | mandatory | simple value
OtherNames.Name | workplace.usualName2 | optional | complex (nested)
TypeCode | workplace.typeCode | optional | simple lookup
Website.WebsiteURL | workplace.website | optional | complex (nested)
Addresses[] | address.country, address.city, address.addressLine1, address.addressLine2, address.Zip5 | mandatory | complex (nested)
Specialities[] | workplace.speciality1 / 2 / 3 | optional | complex (nested)
Phone[] (!FAX) | workplace.telephone | optional | complex (nested)
Phone[] (FAX) | workplace.fax | optional | complex (nested)
Email[] | workplace.email | optional | complex (nested)
ONEKEY crosswalk | workplace.workplaceEid | mandatory | ID



Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-onekey-dcr-service:ChangeRequestStream | process publisher full change request events in the stream that contain the ThirdPartyValidation flag | realtime: events stream processing

Dependent components

Component | Usage
OK DCR Service | Main component with flow implementation
Veeva DCR Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Hub Store | DCR and Entities Cache
" }, { "title": "Data Steward Response", "pageID": "164469841", "pageLink": "/display/GMDM/Data+Steward+Response", "content": "

Description

The process updates the DCRs based on the Change Request events received from Reltio (publishing). Based on the Data Steward decision, the state attribute contains the relevant information to update the DCR status.

Flow diagram


\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-onekey-dcr-service:OneKeyResponseStream, mdm-veeva-dcr-service:veevaResponseStream | process publisher full change request events in the stream | realtime: events stream processing

Dependent components

Component | Usage
OK DCR Service | Main component with flow implementation
Veeva DCR Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Hub Store | DCR and Entities Cache
" }, { "title": "Submit Validation Request", "pageID": "164469875", "pageLink": "/display/GMDM/Submit+Validation+Request", "content": "

Description

The process submits new validation requests to the OneKey service based on the Reltio change events aggregated in time windows. During this process, new DCRs are created in Reltio.

Flow diagram


\"\"

Steps


Triggers

Trigger action | Component | Action | Default time
IN Events incoming | mdm-onekey-dcr-service:OneKeyStream | process publisher simple events in the stream | events stream processing with 4h time window events aggregation
OUT API request | one-key-client:OneKeyIntegrationService.submitValidation | submit VR request to OneKey | invokes API request for each accepted event

Dependent components

Component | Usage
OK DCR Service | Main component with flow implementation
Publisher | Events publisher generates incoming events
Manager | Reltio Adapter for getMatches and create operations
OneKey Adapter | Submits Validation Request
Hub Store | DCR and Entities Cache

Mappings

Reltio → OK mapping file: onkey_mappings.xlsx

OK mandatory / required fields: VR - Business Fields Requirements(COMPANY).xlsx

OneKey Documentation

\"\"




" }, { "title": "Trace Validation Request", "pageID": "164469983", "pageLink": "/display/GMDM/Trace+Validation+Request", "content": "

Description

The process traces the VR changes based on the OneKey VR changes. During this process, the HUB DCR Cache is queried every <T> hours for SENT DCRs and the VR status is checked using the OneKey web service. After verification, the DCR is updated in Reltio or a new Workflow is started in Reltio for manual Data Steward validation.

Flow diagram


\"\"


Steps



Triggers

Trigger action | Component | Action | Default time
IN Timer (cron) | mdm-onekey-dcr-service:TraceVRService | query Mongo to get all SENT DCRs related to the OK_VR process | every <T> hours
OUT API request | one-key-client:OneKeyIntegrationService.traceValidation | trace VR request to OneKey | invokes API request for each DCR

Dependent components

Component | Usage
OK DCR Service | Main component with flow implementation
Manager | Reltio Adapter for GET /changeRequests and POST /workflow/_initiate operations
OneKey Adapter | TraceValidation Request
Hub Store | DCR and Entities Cache



" }, { "title": "PforceRx DCR flows", "pageID": "209949183", "pageLink": "/display/GMDM/PforceRx+DCR+flows", "content": "

Description

MDM HUB exposes a REST API to create DCRs and check their status. The process is responsible for creating DCRs in Reltio and starting the Change Request Workflow for DCRs created in Reltio, or for creating the DCRs (submitVR operation) in ONEKEY. DCR requests can be routed to an external MDM HUB instance handling the requested country; the action is transparent to the caller. During this process, the communication to the IQVIA OneKey VR API / Reltio API is established. The routing decision depends on the market, operation type, or changed profile attributes.

Reltio API: the createEntity (with ChangeRequest) operation is executed to create a completely new entity in a new Change Request in Reltio. The attributesUpdate (with ChangeRequest) operation is executed after calculation of the specific changes on complex or simple attributes of an existing entity - this also creates a new Change Request. The Start Workflow operation is requested at the end; this starts the Workflow for the DCR in Reltio so the change requests appear in the Reltio Inbox for Data Steward review.

IQVIA API: the SubmitVR operation is executed to create a new Validation Request. The TraceVR operation is executed to check the status of the VR in OneKey.

All DCRs are saved in a dedicated collection in the HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. The DCR statuses are updated by consuming events generated by Reltio or by periodically querying open DCRs in OneKey.

The Data Steward can decide to route a DCR to IQVIA as well - some changes can be suggested by the DS using the "Suggest" operation in Reltio and the "Send to Third Party Validation" button; the "Data Steward OK Validation Request" process processes these changes and sends them to the OneKey service.

The diagram below presents an overview of the entire process. Detailed descriptions are available in the separate subpages.

API doc URL: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-dev/swagger-ui/index.html

Flow diagram

DCR Service High-Level Architecture

\"\"

DCR HUB Logical Architecture

\"\"


Model diagram


\"\"

Flows:

Triggers

Described in the separate sub-pages for each process.

Dependent components

Described in the separate sub-pages for each process.


" }, { "title": "Create DCR", "pageID": "209949185", "pageLink": "/display/GMDM/Create+DCR", "content": "

Description

The process creates change requests received from the PforceRx Client and sends the DCR to the specified target service - Reltio, OneKey or Veeva OpenData (VOD). The DCR is created in the system and then processed by the data stewards. The status is asynchronously updated by the HUB processes. The Client identifies the DCR using a unique extDCRRequestId value; using this value the Client can check the status of the DCR (Get DCR status).

Flow diagram

\"\"

Source: Lucid

\"\"

Source: Lucid


DCR Service component perspective


Steps


  1. Clients execute the API POST /dcr request
  2. Kong receives requests and handles authentication
  3. If the authentication succeeds the request is forwarded to the dcr-service-2 component,
  4. DCR Service checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executed:
    1. Parse and validate the dcr request. The validation logic checks the following: 
      1. Check if the list of DCRRequests contains unique extDCRRequestId.
        1. Requests that are duplicate will be rejected with the error message - "Found duplicated request(s)"
      2. For each DCRRequest in the input list execute the following checks:
        1. Users can define the following number of entities in the Request:
          1. at least one entity has to be defined, otherwise, the request will be rejected with an error message - "No entities found in the request"
          2. single HCP
          3. single HCO
          4. single HCP with single HCO
          5. two HCOs
        2. Check if the main reference objects exist in Reltio for update and delete action
          1. HCP.refId or HCO.refId; the user has to specify one of:
            1. CrosswalkTargetObjectId - then the entity is retrieved from Reltio using get entity by crosswalk operation
            2. EntityURITargetObjectId - then the entity is retrieved from Reltio using get entity by uri operation
            3. COMPANYCustomerIdTargetObjectId - then the entity is retrieved from Reltio using search operation by the COMPANYGlobalCustomerID
        3. Attributes validation:
          1. Simple attributes - like firstName/lastName etc.
            1. for update action on the main object:
              1. if the input parameter is defined with an empty value - "" - this will result in the removal of the target attribute
              2. if the input parameter is defined with a non-empty value - this will result in the update of the target attribute
          2. Nested attributes - like Specialties/Addresses etc.
            1. for each attribute, the user has to define the refId to uniquely identify the attribute
              1. For action "update" - if the refId is not found in the target object request will be rejected with a detailed error message 
              2. For action "insert" - the refId is not required - new reference attribute will be added to the target object
        4. Changes validation:
          1. If the validation detected 0 changes (during comparison of applying changes and the target entity) -  the request is rejected with an error message - "No changes detected"
    2. Evaluate dcr service (based on the decision table config)
      1. The following decision table is defined to choose the target service
        1. LIST OF the following combination of attributes:

          attribute | description
          userName | the user name that executes the request
          sourceName | the source name of the Main object
          country | the country defined in the request
          operationType | the operation type for the Main object { insert, update, delete }
          affectedAttributes | the list of attributes that the user is changing
          affectedObjects | { HCP, HCO, HCP_HCO }

          RESULT → TargetType { Reltio, OneKey, Veeva }

        2. Each attribute in the configuration is optional.
        3. The decision table performs the evaluation based on the input request and the main object - the main object is the HCP; if the HCP is empty then the decision table checks the HCO.
        4. The result of the decision table is the TargetType: routing to the Reltio MDM system, the OneKey service or the Veeva service.
    3. Execute target service (reltio/onekey/veeva)
      1. Reltio: create DCR method - direct
      2. OneKey: create DCR method (submitVR) - direct
      3. Veeva: create DCR method (storeVR)
    4. Create DCR in Reltio and save DCR in DCR Registry 
      • If the submission is successful then: 
        • DCR entity is created in Reltio and the relation between the processed entity and the DCR entity
          • Reltio source name (crosswalk.type): DCR
          • Reltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)
            • for the "create" and "delete" operations the Relation has to be created between the objects
            • if this is just the "insert" operation, the Relation will be created after the acceptance of the Change Request in Reltio - Reltio: process DCR Change Events
          • DCR entity attributes once sent to OneKey

            DCR entity attribute | Mapping
            DCRID | extDCRRequestId
            EntityURI | the processed entity URI
            VRStatus | "OPEN"
            VRStatusDetail | "SENT_TO_OK"
            CreatedBy | MDM HUB
            SentDate | current time
            CreateDate | current time
            CloseDate | if REJECTED/ACCEPTED -> current time
            dcrType | evaluated based on config:

            dcrTypeRules:
            - type: CR0
              size: 1
              action: insert
              entity: com.COMPANY.mdm.api.dcr2.HCP

            \"\"

          • DCR entity attributes once sent to Veeva

            DCR entity attribute | Mapping
            DCRID | extDCRRequestId
            EntityURI | the processed entity URI
            VRStatus | "OPEN"
            VRStatusDetail | "SENT_TO_VEEVA"
            CreatedBy | MDM HUB
            SentDate | current time
            CreateDate | current time
            CloseDate | if REJECTED/ACCEPTED -> current time
            dcrType | evaluated based on config:

            dcrTypeRules:
            - type: CR0
              size: 1
              action: insert
              entity: com.COMPANY.mdm.api.dcr2.HCP

            \"\"

          • DCR entity attributes once sent to Reltio → action is passed to DS and workflow is started. 

            DCR entity attribute | Mapping
            DCRID | extDCRRequestId
            EntityURI | the processed entity URI
            VRStatus | "OPEN"
            VRStatusDetail | "DS_ACTION_REQUIRED"
            CreatedBy | MDM HUB
            SentDate | current time
            CreateDate | current time
            CloseDate | if REJECTED/ACCEPTED -> current time
            dcrType | evaluated based on config:

            dcrTypeRules:
            - type: CR0
              size: 1
              action: insert
              entity: com.COMPANY.mdm.api.dcr2.HCP

            \"\"

        • Mongo Update: DCRRequest.status is updated to SENT with the OneKey or Veeva request and response details, or to DS_ACTION_REQUIRED with all Reltio details
      • Otherwise, a FAILED status is recorded in DCRRequest with a detailed error message.
        • Mongo Update: DCRRequest.status is updated to FAILED with all required attributes, request, and exception response details
    5. Initialize Workflow in Reltio (only requests that TargetType is Reltio)
      1. POST /workflow/_initiate operation is invoked to init new Workflow in Reltio

        Workflow attribute | Mapping
        changeRequest.uri | ChangeRequest Reltio URI
        changeRequest.changes | Entity URI
    6. Then the auto-close logic is invoked to evaluate whether the DCR request meets the conditions to be auto-accepted or auto-rejected. The logic is based on the PreCloseConfig decision table: if DCRRequest.country is contained in PreCloseConfig.acceptCountries or PreCloseConfig.rejectCountries then the DCR is accepted or rejected respectively (see the sketch after this list).
    7. Return the DCRResponse to the Client - during the flow, a DCRResponse may be returned to the Client with a specific errorCode or requestStatus. The description of all response codes is presented on this page: Get DCR status
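
A minimal sketch of the auto-close evaluation in step 6 (the PreCloseConfig shape is assumed from the description above):

data class PreCloseConfig(
    val acceptCountries: Set<String> = emptySet(),
    val rejectCountries: Set<String> = emptySet()
)

enum class PreCloseDecision { PRE_ACCEPT, PRE_REJECT, NONE }

// DCRRequest.country contained in acceptCountries/rejectCountries leads to
// automatic acceptance/rejection respectively; otherwise nothing happens.
fun preClose(config: PreCloseConfig, country: String): PreCloseDecision =
    when (country) {
        in config.acceptCountries -> PreCloseDecision.PRE_ACCEPT
        in config.rejectCountries -> PreCloseDecision.PRE_REJECT
        else -> PreCloseDecision.NONE
    }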

Triggers

Trigger action | Component | Action | Default time
REST call | DCR Service: POST /dcr | create DCRs in the Reltio, OneKey or Veeva system | API synchronous requests - realtime


Dependent components

Component | Usage
DCR Service | Main component with flow implementation
OK DCR Service | OneKey Adapter - API operations
Veeva DCR Service | Veeva Adapter - API operations and S3/SFTP communication
Manager | Reltio Adapter - API operations
Hub Store | DCR and Entities Cache
" }, { "title": "DCR state change", "pageID": "218438617", "pageLink": "/display/GMDM/DCR+state+change", "content": "

Description

The following diagram represents the DCR state changes. The DCR object state is saved in HUB and in the Reltio DCR entity object. The state of the DCR is changed based on the Reltio/IQVIA/Veeva Data Steward action.

Flow diagram

\"\"

Steps

  1. DCR is created (OPEN)  - Create DCR
    1. DCR is sent to Reltio, OneKey or Veeva
      1. When sent to Reltio:
        1. Pre Close logic is invoked to auto-accept (PRE_ACCEPT) or auto-reject (PRE_REJECT) the DCR
        2. The Reltio Data Steward processes the DCR - Reltio: process DCR Change Events
      2. The OneKey Data Steward processes the DCR - OneKey: process DCR Change Events
      3. The Veeva Data Steward processes the DCR - Veeva: process DCR Change Events


Data Steward DCR status change perspective

\"\"

Transaction Log

There are the following main assumptions regarding the transaction log in DCR service: 


Log appenders:


Triggers

Trigger action | Component | Action | Default time
REST call | DCR Service: POST /dcr | create DCRs in the Reltio system or in OneKey | API synchronous requests - realtime
IN Events incoming | dcr-service-2:DCRReltioResponseStream | process publisher full change request events in the stream | realtime: events stream processing
IN Events incoming | dcr-service-2:DCROneKeyResponseStream | process publisher full change request events in the stream | realtime: events stream processing
IN Events incoming | dcr-service-2:DCRVeevaResponseStream | process publisher full change request events in the stream | realtime: events stream processing


Dependent components

Component | Usage
DCR Service | Main component with flow implementation
OK DCR Service | OneKey Adapter - API operations
Veeva DCR Service | Veeva Adapter - API operations
Manager | Reltio Adapter - API operations
Hub Store | DCR and Entities Cache
" }, { "title": "Get DCR status", "pageID": "209949187", "pageLink": "/display/GMDM/Get+DCR+status", "content": "

Description

The client creates DCRs in Reltio, OneKey or Veeva OpenData using the Create DCR operation. The status is then asynchronously updated in the DCR Registry. The operation retrieves the current status of the DCRs whose update date is between the 'updateFrom' and 'updateTo' input parameters. PforceRx first asks which DCRs have been changed since the last time they checked (usually 24h) and then iterates over each DCR to get its detailed info.

Flow diagram

\"\",

\"\"

Source: Lucid



Dependent flows:
  1. The DCRRegistry is enriched by the DCR events that are generated by Reltio - the flow description is here - Reltio: process DCR Change Events
  2. The DCRRegistry is enriched by the DCR events generated in OneKey DCR service component - after submitVR operation is invoked to ONEKEY, each DCR is traced asynchronously in this process - OneKey: process DCR Change Events
  3. The DCRRegistry is enriched by the DCR events generated in Veeva OpenData DCR service component - after submitVR operation is invoked to VEEVA, each DCR is traced asynchronously in this process - Veeva: process DCR Change Events

Steps

Status

There are the following request statuses that users may receive during the Create DCR operation or when checking the updated status using the GET /dcr/_status operation described below:

RequestStatus | DCRStatus | Internal Cache status | Description
REQUEST_ACCEPTED | CREATED | SENT_TO_OK | DCR was sent to the ONEKEY system for validation and is pending processing by a Data Steward in that system
REQUEST_ACCEPTED | CREATED | SENT_TO_VEEVA | DCR was sent to the VEEVA system for validation and is pending processing by a Data Steward in that system
REQUEST_ACCEPTED | CREATED | DS_ACTION_REQUIRED | DCR is pending Data Steward validation in Reltio, waiting for approval or rejection
REQUEST_ACCEPTED | CREATED | OK_NOT_FOUND | Used when the ONEKEY profile was not found after X retries
REQUEST_ACCEPTED | CREATED | VEEVA_NOT_FOUND | Used when the VEEVA profile was not found after X retries
REQUEST_ACCEPTED | CREATED | WAITING_FOR_ETL_DATA_LOAD | Used when waiting for the actual data profile load from the 3rd party to appear in Reltio
REQUEST_ACCEPTED | ACCEPTED | ACCEPTED | Data Steward accepted the DCR, the changes were applied
REQUEST_ACCEPTED | ACCEPTED | PRE_ACCEPTED | PreClose logic was invoked and automatically accepted the DCR according to the decision table in PreCloseConfig
REQUEST_REJECTED | REJECTED | REJECTED | Data Steward rejected the changes presented in the Change Request
REQUEST_REJECTED | REJECTED | PRE_REJECTED | PreClose logic was invoked and automatically rejected the DCR according to the decision table in PreCloseConfig
REQUEST_FAILED | - | FAILED | DCR request failed due to a validation error / unexpected error etc. - details in errorCode and errorMessage
Error codes:

There are the following classes of exception that users may receive during the Create DCR operation:

Class | errorCode | Description | HTTP code
1 | DUPLICATE_REQUEST | request rejected - extDCRRequestId is already registered - this is a duplicate request | 403
2 | NO_CHANGES_DETECTED | entities are the same (request is the same) - no changes | 400
3 | VALIDATION_ERROR | ref object does not exist (not able to find the HCP/HCO target object) | 404
3 | VALIDATION_ERROR | ref attribute does not exist - not able to find the nested attribute in the target object | 400
3 | VALIDATION_ERROR | wrong number of HCP/HCO entities in the input request | 400


  1. Clients execute the API GET /dcr/_status request
  2. Kong receives the request and handles authentication
  3. If the authentication succeeds, the request is forwarded to the dcr-service-2 component,
  4. DCR Service checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executed:
    1. A query on Mongo is executed to get all DCRs matching the input parameters:
      1. updateFrom (date-time) - DCR last update from - DCRRequestDetails.status.changeDate
      2. updateTo (date-time) - DCR last update to - DCRRequestDetails.status.changeDate
      3. limit (int) - the maximum number of results returned through the API - the recommended value is 25. The max value for a single request is 50.
      4. offset (int) - result offset - the parameter used to page through results that exceed the limit.
    2. The resulting values are aggregated and returned to the Client.
    3. The client receives the List<DCRResponse> body (see the example call after this list).
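
An illustrative client call (the host and token are placeholders; date-time values must be URL-encoded):

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sketch of GET /dcr/_status with the query parameters described above;
// the recommended limit is 25 (max 50 per request).
fun getDcrStatus(updateFrom: String, updateTo: String, token: String): String {
    val uri = URI.create(
        "https://hub.example.com/dcr/_status?updateFrom=$updateFrom&updateTo=$updateTo&limit=25&offset=0"
    )
    val request = HttpRequest.newBuilder(uri)
        .header("Authorization", "Bearer $token")
        .GET()
        .build()
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body() // JSON list of DCRResponse objects
}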

Triggers

Trigger action | Component | Action | Default time
REST call | DCR Service: GET /dcr/_status | get status of created DCRs; limit the results using query parameters like dates and offset | API synchronous requests - realtime


Dependent components

Component | Usage
DCR Service | Main component with flow implementation
Hub Store | DCR and Entities Cache
" }, { "title": "OneKey: create DCR method (submitVR) - direct", "pageID": "209949294", "pageLink": "/display/GMDM/OneKey%3A+create+DCR+method+%28submitVR%29+-+direct", "content": "

Description

REST API method exposed in the OK DCR Service component, responsible for submitting the VR to OneKey.

Flow diagram

\"\"

Steps


  1. Receive the API request
  2. Validate - check that the OneKey crosswalk exists when there is an update on the profile, otherwise reject the request
  3. The DCR is mapped to an OK VR Request and submitted using the REST API method POST /vr/submit (mapping described below)
    1. If the submission is successful:
      • DCRRequest is updated to SENT_TO_OK with the OK request and response details. The DCRRegistryONEKEY collection is saved for tracing purposes. The process that reads and checks ONEKEY VRs is described here: OneKey: generate DCR Change Events (traceVR)
    2. Otherwise, a FAILED status is recorded and the response is returned with an OK error response

Mapping


VR - Business Fields Requirements_UK.xlsx - file that contains VR UK requirements and mapping to IQVIA model


HUB attribute | codes | mandatory | ONEKEY attribute (value)

HCO

- | | Y | entityType (WORKPLACE)
- | | Y | validation.clientRequestId (HUB_GENERATED_ID)
- | | Y | validation.processQ
- | | Y | validation.requestDate (1970-01-01T00:00Z)
- | | Y | validation.callDate (1970-01-01T00:00Z)
attributes | | Y | validation.requestProcessI
extDCRComment | | | validation.requestComment
country | | Y | isoCod2
reference Entity crosswalk ONEKEY | | | workplace.workplaceEid
name | | | workplace.usualName, workplace.officialName
otherHCOAffiliations.parentUsualName | | | workplace.parentUsualName
subTypeCode | COTFacilityType (TET.W.*) | | workplace.typeCode
typeCode (no value in PFORCERX) | HCOSubType (LEX.W.*) | | workplace.activityLocationCode
addresses.sourceAddressId | | | N/A
addresses.addressType | | | N/A
addresses.addressLine1 | | | address.longLabel
addresses.addressLine2 | | | address.longLabel2
addresses.addressLine3 | | | N/A
addresses.stateProvince | AddressState (DPT.W.*) | | address.countyCode
addresses.city | | Y | address.city
addresses.zip | | | address.longPostalCode
addresses.country | | Y | address.country
addresses.rank | | | get address with rank=1
emails.type | | | N/A
emails.email | | | workplace.email
emails.rank | | | get email with rank=1
otherHCOAffiliations.type | | | N/A
otherHCOAffiliations.rank | | | get affiliation with rank=1
otherHCOAffiliations reference entity onekeyID ONEKEY | | | workplace.parentWorkplaceEid
phones.number (type does not contain FAX) | | | workplace.telephone (phone with rank=1)
phones.number (type contains FAX) | | | workplace.fax (phone with rank=1)

HCP

- | | Y | entityType (ACTIVITY)
- | | Y | validation.clientRequestId (HUB_GENERATED_ID)
- | | Y | validation.processQ
- | | Y | validation.requestDate (1970-01-01T00:00Z)
- | | Y | validation.callDate (1970-01-01T00:00Z)
attributes | | Y | validation.requestProcessI
extDCRComment | | | validation.requestComment
country | | Y | isoCod2
reference Entity crosswalk ONEKEY | | | individual.individualEid
firstName | | | individual.firstName
lastName | | Y | individual.lastName
middleName | | | individual.middleName
typeCode | | | N/A
subTypeCode | HCPSubTypeCode (TYP..*) | | individual.typeCode
title | HCPTitle (TIT.*) | | individual.titleCode
prefix | HCPPrefix (APP.*) | | individual.prefixNameCode
suffix | | | N/A
gender | Gender (.*) | | individual.genderCode
specialties.typeCode | HCPSpecialty (SP.W.*) | | individual.speciality1 (speciality with rank=1)
specialties.typeCode | HCPSpecialty (SP.W.*) | | individual.speciality2 (speciality with rank=2)
specialties.typeCode | HCPSpecialty (SP.W.*) | | individual.speciality3 (speciality with rank=3)
specialties.type | | | N/A
addresses.sourceAddressId | | | N/A
addresses.addressType | | | N/A
addresses.addressLine1 | | | address.longLabel
addresses.addressLine2 | | | address.longLabel2
addresses.addressLine3 | | | N/A
addresses.stateProvince | AddressState (DPT.W.*) | | address.countyCode
addresses.city | | Y | address.city
addresses.zip | | | address.longPostalCode
addresses.country | | Y | address.country
addresses.rank | | | get address with rank=1
identifiers.type | | | N/A
identifiers.id | | | N/A
phones.type | | | N/A
phones.number | | | individual.mobilePhone (phone with rank=1)
emails.type | | | N/A
emails.email | | | individual.email (email with rank=1)
contactAffiliations.type (no value in PFORCERX) | RoleType (TIH.W.*) | | activity.role
contactAffiliations.primary | | | N/A
contactAffiliations.rank | | | get affiliation with rank=1
contactAffiliations reference Entity crosswalks ONEKEY | | | workplace.workplaceEid

HCP & HCO

For the HCP attributes, the full HCP mapping above applies; for the HCO attributes, the full HCO mapping above applies.

- | | Y | entityType (ACTIVITY)
- | | Y | validation.clientRequestId (HUB_GENERATED_ID)
- | | Y | validation.processQ
- | | Y | validation.requestDate (1970-01-01T00:00Z)
- | | Y | validation.callDate (1970-01-01T00:00Z)
attributes | | Y | validation.requestProcessI
extDCRComment | | | validation.requestComment
country | | Y | isoCod2
addresses | | | if the HCO address exists, it is mapped to the ONEKEY address (HCO mapping); otherwise, if the HCP address exists, it is mapped (HCP mapping)
contactAffiliations.type (no value in PFORCERX) | RoleType (TIH.W.*) | | activity.role
contactAffiliations.primary | | | N/A
contactAffiliations.rank | | | get affiliation with rank=1


Triggers

Trigger action | Component | Action | Default time
REST call | DCR Service: POST /dcr | create DCRs in ONEKEY | API synchronous requests - realtime

Dependent components

Component | Usage
DCR Service 2 | Main component with flow implementation
Hub Store | DCR and Entities Cache
" }, { "title": "OneKey: generate DCR Change Events (traceVR)", "pageID": "209950500", "pageLink": "/pages/viewpage.action?pageId=209950500", "content": "

Description

This process is triggered after the DCR has been routed to OneKey based on the decision table configuration. Tracing the VR changes is based on the OneKey VR changes: the HUB DCR Cache is polled every <T> hours for SENT DCRs, and the VR status is checked using the OneKey web service. After verification, a DCR Change event is generated. The event is processed in OneKey: process DCR Change Events, and the DCR is updated in Reltio with the Accepted or Rejected status.

Flow diagram

\"\"

Steps


Event Model

data class OneKeyDCREvent(
    val eventType: String? = null,
    val eventTime: Long? = null,
    val eventPublishingTime: Long? = null,
    val countryCode: String? = null,
    val dcrId: String? = null,
    val targetChangeRequest: OneKeyChangeRequest,
)

data class OneKeyChangeRequest(
    val vrStatus: String? = null,
    val vrStatusDetail: String? = null,
    val oneKeyComment: String? = null,
    val individualEidValidated: String? = null,
    val workplaceEidValidated: String? = null,
    val vrTraceRequest: String? = null,
    val vrTraceResponse: String? = null,
)
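
For illustration only, a hypothetical instance of such an event as the traceVR flow might publish it (all field values below are made up; DCR_CHANGED is the event type named in the processing flow):

// Hypothetical example of a DCR_CHANGED event; values are illustrative only.
val event = OneKeyDCREvent(
    eventType = "DCR_CHANGED",                  // event type used on $env-internal-onekey-dcr-change-events-in
    eventTime = System.currentTimeMillis(),
    eventPublishingTime = System.currentTimeMillis(),
    countryCode = "GB",
    dcrId = "61f2a9c0e4b0a1b2c3d4e5f6",         // hypothetical Mongo _id of the DCR in the registry
    targetChangeRequest = OneKeyChangeRequest(
        vrStatus = "CLOSED",
        vrStatusDetail = "ACCEPTED",
        individualEidValidated = "WGB000000001" // hypothetical ONEKEY ID of the validated HCP
    )
)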

Triggers

Trigger action | Component | Action | Default time
IN Timer (cron) | dcr-service:TraceVRService | query mongo to get all SENT DCRs related to the PFORCERX process | every <T> hour
OUT Events | dcr-service:TraceVRService | generate the OneKeyDCREvent | every <T> hour


Dependent components

Component | Usage
DCR Service | Main component with flow implementation
Hub Store | DCR and Entities Cache
" }, { "title": "OneKey: process DCR Change Events", "pageID": "209949303", "pageLink": "/display/GMDM/OneKey%3A+process+DCR+Change+Events", "content": "

Description

The process updates the DCRs based on the Change Request events received from [ONEKEY|VOD] (after the trace VR method result). Based on the [IQVIA|VEEVA] Data Steward decision, the state attribute contains the information needed to update the DCR status. During this process, the comments created by the IQVIA DS are also retrieved, and (as an optional step) the relationship between the DCR object and the newly created entity is created. The DCR is accepted only after the [ONEKEY|VOD] profile is created in Reltio; only then will the Client receive the ACCEPTED status. The process checks Reltio with a <T> delay and retries while the ETL load is still in progress, waiting for the [ONEKEY|VOD] profile.

Flow diagram


OneKey variant

\"\"


Veeva variant: \"\"



Steps

  • OneKey: generate DCR Change Events (traceVR) publishes simple events to $env-internal-onekey-dcr-change-events-in: DCR_CHANGED
  • Events are aggregated in a time window (recommended window length: 24 hours), and the last event is returned to the process after the window is closed.
  • Events are processed in the Stream, and a decision is made based on the OneKeyDCREvent.OneKeyChangeRequest.vrStatus | VeevaDCREvent.VeevaChangeRequestDetails.vrStatus attribute (a simplified sketch follows the steps)
  • The DCR is retrieved from the cache based on the _id of the DCR
  • If the event state is ACCEPTED
    • Get the Reltio entity COMPANYCustomerID by the [ONEKEY|VOD] crosswalk
    • If such a crosswalk entity exists in Reltio:
      • COMPANYGlobalCustomerId is saved in the Registry and will be returned to the Client
      • During the process, the optional check is triggered - create the relation between the DCR object and the newly created entities
        • if the DCRRegistry contains an empty list of entityUris, or some of the newly created entities are not present in the list, the Relation between this object and the DCR has to be created
          • The DCR entity is updated in Reltio, together with the relation between the processed entity and the DCR entity
            • Reltio source name (crosswalk.type): DCR
            • Reltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)
          • Newly created entity URIs should be retrieved by the individualEidValidated or workplaceEidValidated (it may be both) attributes from the events that represent the HCP or HCO crosswalks.
      • The status in Reltio and in Mongo is updated

        DCR entity attribute | Mapping for OneKey | Mapping for Veeva
        VRStatus | CLOSED | CLOSED
        VRStatusDetail | state: ACCEPTED | state: ACCEPTED
        Comments | ONEKEY comments ({VR.rsp.responseComments}) | VEEVA comments = VR.rsp.responseComments
        ID | ONEKEY ID = individualEidValidated or workplaceEidValidated | VEEVA ID = entityUris
        COMPANYGlobalCustomerId | required in the ACCEPTED status | required in the ACCEPTED status

    • If the [ONEKEY|VOD] profile does not exist in Reltio
      • Regenerate the Event with a new timestamp to the input topic, so it will be processed in the next <T> hours
      • Update the Reltio DCR status
        • DCR entity attribute | Mapping
          VRStatus | OPEN
          VRStatusDetail | ACCEPTED
      • update the Mongo status to OK_NOT_FOUND | VEEVA_NOT_FOUND and increase the "retryCounter" attribute
  • If the event state is REJECTED
    • If a Reltio DS has already seen this request, REJECT the DCR and end the flow (if the initial target type is Reltio)

      The status in Reltio and in Mongo is updated:

      DCR entity attribute | Mapping
      VRStatus | CLOSED
      VRStatusDetail | state: REJECTED
      Comments | [ONEKEY|VOD] comments ({VR.rsp.responseComments})
    • If this is based on the routing table and it was never sent to the Reltio DS, create the DCR workflow and send it to the Reltio DS. Add an information comment that this was rejected by OneKey, so the Reltio DS now has to decide whether this should be REJECTED or APPLIED in Reltio. Add the comment that it is not possible to execute the sendTo3PartyValidation button in this case. Steps:
      • Check if the initial target type is [ONEKEY|VOD]
      • Use the DCR Request that was initially received from PforceRx and is a Domain Model request (after validation)
      • Send the DCR to Reltio; the service returns one of the following responses:
        • ACCEPTED (change request accepted by Reltio)
          • update the status to DS_ACTION_REQUIERED and add the following comment: "This DCR was REJECTED by the [ONEKEY|VOD] Data Steward with the following comment: <[ONEKEY|VOD] reject comment>. Please review this DCR in Reltio and APPLY or REJECT. It is not possible to execute the sendTo3PartyValidation button in this case"
          • initialize a new Workflow in Reltio with the comment.
          • save the DCR entity status in Reltio and update the Mongo DCR Registry with the workflow ID and the other attributes used in this Flow.
        • REJECTED (failure or error response from Reltio)
          • CLOSE the DCR with the information that the DCR was REJECTED by the [ONEKEY|VOD] and Reltio also REJECTED the DCR. Add the error messages from both systems in the comment.
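
A minimal Kotlin sketch of the branching described above (illustrative only: the stream wiring, the registry lookup, and the two handlers are hypothetical placeholders, not the actual dcr-service-2 code):

// Simplified decision logic for one aggregated DCR change event.
fun process(event: OneKeyDCREvent) {
    val dcr = dcrRegistry.findById(event.dcrId)      // DCR retrieved from the cache by its _id
    when (event.targetChangeRequest.vrStatus) {
        "ACCEPTED" -> handleAccepted(dcr, event)     // check the [ONEKEY|VOD] crosswalk in Reltio, update statuses
        "REJECTED" -> handleRejected(dcr, event)     // close the DCR or escalate it to the Reltio DS
        else       -> log.warn("Unexpected VR status for DCR ${event.dcrId}")
    }
}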

Triggers

Trigger action | Component | Action | Default time
IN Events incoming | dcr-service-2:DCROneKeyResponseStream, dcr-service-2:DCRVeevaResponseStream ($env-internal-veeva-dcr-change-events-in) | process publisher full change request events in the stream | realtime: events stream processing

Dependent components

Component | Usage
DCR Service 2 | Main component with flow implementation
Manager | Reltio Adapter - API operations
Publisher | Events publisher generates incoming events
Hub Store | DCR and Entities Cache
" }, { "title": "Reltio: create DCR method - direct", "pageID": "209949292", "pageLink": "/display/GMDM/Reltio%3A+create+DCR+method+-+direct", "content": "

Description

REST API method exposed in the Manager component, responsible for submitting the Change Request to Reltio.

Flow diagram

\"\"

Steps

  1. Receive the DCR request generated by the DCR Service 2 component
  2. Depending on the Action, execute the method in the Manager component (see the sketch after this list):
    1. insert - Execute the standard Create/Update HCP/HCO/MCO operation with the additional changeRequest.id parameter
    2. update - Execute the Update Attributes operation with the additional changeRequest.id parameter
      1. the combination of IGNORE_ATTRIBUTE & INSERT_ATTRIBUTE when updating an existing attribute in Reltio
      2. the INSERT_ATTRIBUTE when adding a new attribute to Reltio
    3. delete - Execute the Update Attributes operation with the additional changeRequest.id parameter
      1. the UPDATE_END_DATE on the entity to inactivate this profile
  3. Based on the Reltio response, the DCR Response is returned:
    1. REQUEST_ACCEPTED - Reltio processed the request successfully
    2. REQUEST_FAILED - Reltio returned an exception; the Client will receive a detailed description in the errorMessage
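
As a rough Kotlin illustration of step 2 (the function and parameter names are shorthand for the Manager operations listed above, not actual method signatures):

// Hypothetical dispatch of a DCR action to the corresponding Manager operation.
fun submitToReltio(action: String, request: DcrRequest): DcrResponse =
    when (action) {
        "insert" -> createOrUpdateEntity(request, request.changeRequestId)             // standard Create/Update with changeRequest.id
        "update" -> updateAttributes(request, request.changeRequestId)                 // IGNORE_ATTRIBUTE + INSERT_ATTRIBUTE, or INSERT_ATTRIBUTE alone
        "delete" -> updateAttributes(request, request.changeRequestId, endDate = true) // UPDATE_END_DATE inactivates the profile
        else -> throw IllegalArgumentException("Unknown DCR action: $action")
    }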

Triggers

Trigger action | Component | Action | Default time
REST call | DCR Service: POST /dcr2 | Create Change Requests in Reltio | API synchronous requests - realtime


Dependent components

Component | Usage
DCR Service | Main component with flow implementation
Hub Store | DCR and Entities Cache
" }, { "title": "Reltio: process DCR Change Events", "pageID": "209949300", "pageLink": "/display/GMDM/Reltio%3A+process+DCR+Change+Events", "content": "

Description

The process updates the DCRs based on the Change Request events received from Reltio (publishing). Based on the Data Steward decision, the state attribute contains the information needed to update the DCR status. During this process, the comments created by the DS are also retrieved, and (as an optional step) the relationship between the DCR object and the newly created entity is created.


Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
IN Events incoming | dcr-service-2:DCRReltioResponseStream | process publisher full change request events in the stream | realtime: events stream processing

Dependent components

Component | Usage
DCR Service / DCR Service 2 | Main component with flow implementation
Manager | Reltio Adapter - API operations
Publisher | Events publisher generates incoming events
Hub Store | DCR and Entities Cache
" }, { "title": "Reltio: Profiles created by DCR", "pageID": "510266969", "pageLink": "/display/GMDM/Reltio%3A+Profiles+created+by+DCR", "content": "
DCR type | Approval/Reject | Record visibility in MDM | Crosswalk Type | Crosswalk Value | Source
DCR create for HCP/HCO | Approved by OneKey/VOD | HCP/HCO created in MDM | ONEKEY|VOD | onekey id | ONEKEY|VOD
DCR create for HCP/HCO | Approved by DSR | HCP/HCO created in MDM | System source name from DCR (KOL_OneView, PforceRx, etc) | DCR ID | System source name from DCR (KOL_OneView, PforceRx, etc)
DCR edit for HCP/HCO | Approved by OneKey/VOD | HCP/HCO requested attribute updated in MDM | ONEKEY|VOD | - | ONEKEY|VOD
DCR edit for HCP/HCO | Approved by DSR | HCP/HCO requested attribute updated in MDM | Reltio | entity uri | Reltio
DCR edit for HCP address/HCO address | Approved by OneKey/VOD | New address created in MDM, existing address marked as inactive | ONEKEY|VOD | - | ONEKEY|VOD
DCR edit for HCP address/HCO address | Approved by DSR | New address created in MDM, existing address marked as inactive | Reltio | entity uri | Reltio
" }, { "title": "Veeva DCR flows", "pageID": "379332475", "pageLink": "/display/GMDM/Veeva+DCR+flows", "content": "

Description

The process is responsible for creating DCRs, which are stored (Store VR) to be further transferred to and processed by Veeva. Changes can be suggested by the DS using the "Suggest" operation in Reltio and the "Send to Third Party Validation" button. All DCRs are saved in a dedicated collection in the HUB Mongo DB, which is required to gather metadata and trace the changes for each DCR request. During this process, communication with Veeva OpenData is established via S3/SFTP. The SubmitVR operation is executed to create new ZIP files with DCR requests spread across multiple CSV files. The TraceVR operation is executed to check whether Veeva responded to the initial DCR requests via a ZIP file placed in the inbound S3 directory.

The process is divided into 3 sections:

  1. Create DCR request - Veeva
  2. Submit DCR Request - Veeva
  3. Trace Validation Request - Veeva

The diagram below presents an overview of the entire process. Detailed descriptions are available on the separate subpages.

Business process diagram for R1 phase

\"\"


Flow diagram

\"\"

Steps

Triggers

DCR Service 2 is triggered via /dcr API calls, which are in turn triggered by Data Steward actions (R1 phase) → "Suggest 3rd party validation", which pushes the DCR from Reltio to the HUB.

Dependent components

Described on the separate sub-pages for each process.

Design document for HUB development 

  1. Design → VeevaOpenData-implementation.docx

  2. Reltio HUB-VOD mapping → VeevaOpenDataAPACDataDictionary.xlsx
  3. VOD model description (v4) → Veeva_OpenData_APAC_Data_Dictionary v4.xlsx
" }, { "title": "Create DCR request - Veeva", "pageID": "386814533", "pageLink": "/display/GMDM/Create+DCR+request+-+Veeva", "content": "

Description

The process of creating new DCR requests to Veeva OpenData. During this process, new DCRs are created in the DCRRegistryVeeva mongo collection.

Flow diagram

\"\"

Steps

Mappings

DCR domain model→ VOD mapping file: VeevaOpenDataAPACDataDictionary-mmor-mapping.xlsx

Veeva integration guide

\"\"

" }, { "title": "Submit DCR Request - Veeva", "pageID": "379333348", "pageLink": "/display/GMDM/Submit+DCR+Request+-+Veeva", "content": "

Description

The process of submitting new validation requests to the Veeva OpenData service via the VeevaAdapter (communication over S3/SFTP), based on the DCRRegistryVeeva mongo collection. During this process, new DCRs are created in the VOD system.

Flow diagram


\"\"

Steps

Veeva DCR service flow:

SFTP integration service flow:

Triggers

Trigger action | Component | Action | Default time
Spring scheduler | mdm-veeva-dcr-service:VeevaDCRRequestSender | prepare ZIP files for the VOD system | Called every specified interval

Dependent components

Component | Usage
Veeva adapter | Upload DCR request to the s3 location
" }, { "title": "Trace Validation Request - Veeva", "pageID": "379333358", "pageLink": "/display/GMDM/Trace+Validation+Request+-+Veeva", "content": "

Description

The process of tracing the VR changes based on the Veeva VR changes. During this process, the HUB DCRRegistryVeeva Cache is polled every <T> hours for SENT DCRs, and the VR status is checked using the Veeva Adapter (S3/SFTP integration). After verification, a DCR event is sent to the DCR Service 2 Veeva response stream.

Flow diagram


\"\"


Steps


Triggers

Trigger action | Component | Action | Default time
IN Spring scheduler | mdm-veeva-dcr-service:VeevaDCRRequestTrace | start the trace validation request process | every <T> hour
OUT Kafka topic | mdm-dcr-service-2:VeevaResponseStream | update DCR status in Reltio, create relations | invokes the Kafka producer for each Veeva DCR response

Dependent components

Component | Usage
DCR Service 2 | Process response event
" }, { "title": "Veeva: create DCR method (storeVR)", "pageID": "379332642", "pageLink": "/pages/viewpage.action?pageId=379332642", "content": "

Description

REST API method exposed in the Veeva DCR Service component, responsible for creating new DCR requests specific to Veeva OpenData (VOD) and storing them in a dedicated collection for a later submit. Since VOD enables communication only via S3/SFTP, a dedicated mechanism is required to actually trigger the CSV/ZIP file creation and the file placement in the outbound directory. A periodic call to the Submit VR method is scheduled once a day (with cron), which in the end calls the VeevaAdapter createChangeRequest method.

Flow diagram

\"\"

Steps


  1. Receive the API request
  2. Validate the initial request
    1. check whether the Veeva crosswalk exists when there is an update on the profile
    2. otherwise, a DCR to create a new Veeva profile has to be prepared
    3. If any formal attribute is missing or incorrect: skip the request
  3. Then the DCR is mapped to a Veeva Request by invoking the mapper between the HUB DCR → VEEVA model
    1. For mapping purposes, the mapping table below should be used
    2. If there is no proper LOV mapping between HUB and Veeva, the default fallback should be set to a question mark → ?
  4. Once a proper request has been created, it should be stored as a VeevaVRDetails entry in the dedicated DCRRegistryVeeva collection, ready to be sent via the Submit VR job and available for future tracing purposes
  5. Prepare the response for the initial API request with the following logic (a condensed sketch follows this list):
    1. Generate the response after a successful mongo insert → generateResponse(dcrRequest, RequestStatus.REQUEST_ACCEPTED, null, null)
    2. Generate an error on validation failure or exception → generateResponse(dcrRequest, RequestStatus.REQUEST_FAILED, getErrorDetails(), null);
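
A condensed Kotlin sketch of the storeVR flow above (the validator, mapper, and repository are hypothetical placeholders; generateResponse and the two statuses come from step 5):

// Sketch of storeVR: validate, map HUB DCR -> Veeva model, store, respond.
fun createChangeRequest(dcrRequest: DCRRequest): DCRResponse =
    try {
        validate(dcrRequest)                          // crosswalk and formal attribute checks (step 2)
        val details = mapToVeevaVRDetails(dcrRequest) // HUB DCR -> VEEVA model, using the mapping table below (step 3)
        dcrRegistryVeeva.insert(details)              // stored for the Submit VR job and future tracing (step 4)
        generateResponse(dcrRequest, RequestStatus.REQUEST_ACCEPTED, null, null)
    } catch (e: Exception) {
        generateResponse(dcrRequest, RequestStatus.REQUEST_FAILED, getErrorDetails(), null)
    }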

Mapping HUB DCR → Veeva model 

Columns: Reltio attribute path (details) | HUB DCR request path (details) | VEEVA file name | VEEVA field name | Required for Add/Change | Description | Reference (RDM/LOV) / Note

change_request

N/A (Mongo-generated ID for this DCR / Kafka KEY) | taken from DCRRequestD.dcrRequestId: String when mapping from the HUB Domain DCRRequest (HUB DCR request id - Mongo ID - required in the ONEKEY service) | change_request | dcr_key | Y/Y | Customer's internal identifier for this request | -
Change Request comments | extDCRComment | change_request | description | Y/Y | Requester free-text comments explaining the DCR | -
targetChangeRequest.createdBy | createdBy | change_request | created_by | Y/Y | For requestor identification | -
N/A | if new objects - ADD, if Veeva ID - CHANGE | change_request | change_request_type | Y/Y | ADD_REQUEST or CHANGE_REQUEST | -
N/A (depends on suggested changes, check use-cases) | main entity object type HCP or HCO | change_request | entity_type | Y/N | HCP or HCO | EntityType

HCO (change_request_hco)

N/A (Mongo-generated ID for this DCR / Kafka KEY) | - | change_request_hco | dcr_key | Y/Y | Customer's internal identifier for this request | -
Reltio URI and Reltio Type (when inserting a new profile) | entities.HCO.updateCrosswalk.type (Reltio), entities.HCO.updateCrosswalk.value (Reltio id) and refId.entityURI; concatenated, e.g. Reltio:rvu44dm | change_request_hco | entity_key | Y/Y | Customer's internal HCO identifier | -
Crosswalks - VEEVA crosswalk (when updating on VEEVA) | entities.HCO.updateCrosswalk.type (VEEVA), entities.HCO.updateCrosswalk.value (VEEVA ID) | change_request_hco | vid__v | Y/N | Veeva ID of the existing HCO to update; if blank, the request will be interpreted as an add request | -
configuration/entityTypes/HCO/attributes/OtherNames/attributes/Name (first element) | TODO - add new attribute | change_request_hco | alternate_name_1__v | Y/N | - | -
?? | ?? | change_request_hco | business_type__v | Y/N | - | HCOBusinessType; TO BE CONFIRMED
configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityType | HCO.subTypeCode | change_request_hcp | major_class_of_trade__v | N/N | - | COTFacilityType; in PforceRx - Account Type, more info: MR-9512
configuration/entityTypes/HCO/attributes/Name | name | change_request_hco | corporate_name__v | N/Y | - | -
configuration/entityTypes/HCO/attributes/TotalLicenseBeds | TODO - add new attribute | change_request_hco | count_beds__v | N/Y | - | -
configuration/entityTypes/HCO/attributes/Email/attributes/Email (email with rank 1) | emails | change_request_hco | email_1__v | N/N | - | -
configuration/entityTypes/HCO/attributes/Email/attributes/Email (email with rank 2) | - | change_request_hco | email_2__v | N/N | - | -
configuration/entityTypes/HCO/attributes/Phone/attributes/Number (phone type TEL.FAX with best rank) | phones | change_request_hco | fax_1__v | N/N | - | -
configuration/entityTypes/HCO/attributes/Phone/attributes/Number (phone type TEL.FAX with worst rank) | - | change_request_hco | fax_2__v | N/N | - | -
configuration/entityTypes/HCO/attributes/StatusDetail | TODO - add new attribute | change_request_hco | hco_status__v | N/N | - | HCOStatus
configuration/entityTypes/HCO/attributes/TypeCode | typecode | change_request_hco | hco_type__v | N/N | - | HCOType
configuration/entityTypes/HCO/attributes/Phone/attributes/Number (phone type TEL.OFFICE with best rank) | phones | change_request_hco | phone_1__v | N/N | - | -
configuration/entityTypes/HCO/attributes/Phone/attributes/Number (phone type TEL.OFFICE with worst rank) | - | change_request_hco | phone_2__v | N/N | - | -
configuration/entityTypes/HCO/attributes/Phone/attributes/Number (phone type TEL.OFFICE with worst rank) | - | change_request_hco | phone_3__v | N/N | - | -
configuration/entityTypes/HCO/attributes/Country | DCRRequest.country | change_request_hco | primary_country__v | N/N | - | -
configuration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialty (elements from COT) | specialties | change_request_hco | specialty_1__v … specialty_10__v | N/N | - | Speciality
configuration/entityTypes/HCO/attributes/Website/attributes/WebsiteURL (first element) | websiteURL | change_request_hco | URL_1__v | N/N | - | -
configuration/entityTypes/HCO/attributes/Website/attributes/WebsiteURL | N/A | change_request_hco | URL_2__v | N/N | - | -

HCP (change_request_hcp)

N/A (Mongo-generated ID for this DCR / Kafka KEY) | - | change_request_hcp | dcr_key | Y/Y | Customer's internal identifier for this request | -
Reltio URI and Reltio Type (when inserting a new profile) | entities.HCO.updateCrosswalk.type (Reltio), entities.HCO.updateCrosswalk.value (Reltio id) and refId.entityURI; concatenated, e.g. Reltio:rvu44dm | change_request_hcp | entity_key | Y/Y | Customer's internal HCP identifier | -
configuration/entityTypes/HCP/attributes/Country | DCRRequest.country | change_request_hcp | primary_country__v | Y/Y | - | -
Crosswalks - VEEVA crosswalk (when updating on VEEVA) | entities.HCO.updateCrosswalk.type (VEEVA), entities.HCO.updateCrosswalk.value (VEEVA ID) | change_request_hcp | vid__v | N/Y | - | -
configuration/entityTypes/HCP/attributes/FirstName | firstName | change_request_hcp | first_name__v | Y/N | - | -
configuration/entityTypes/HCP/attributes/Middle | middleName | change_request_hcp | middle_name__v | N/N | - | -
configuration/entityTypes/HCP/attributes/LastName | lastName | change_request_hcp | last_name__v | Y/N | - | -
configuration/entityTypes/HCP/attributes/Nickname | TODO - add new attribute | change_request_hcp | nickname__v | N/N | - | -
configuration/entityTypes/HCP/attributes/Prefix | prefix | change_request_hcp | prefix__v | N/N | - | HCPPrefix
configuration/entityTypes/HCP/attributes/SuffixName | suffix | change_request_hcp | suffix__v | N/N | - | -
configuration/entityTypes/HCP/attributes/Title | title | change_request_hcp | professional_title__v | N/N | - | HCPProfessionalTitle
configuration/entityTypes/HCP/attributes/SubTypeCode | subTypeCode | change_request_hcp | hcp_type__v | Y/N | - | HCPType
configuration/entityTypes/HCP/attributes/StatusDetail | TODO - add new attribute | change_request_hcp | hcp_status__v | N/N | - | HCPStatus
configuration/entityTypes/HCP/attributes/AlternateName/attributes/FirstName | TODO - add new attribute | change_request_hcp | alternate_first_name__v | N/N | - | -
configuration/entityTypes/HCP/attributes/AlternateName/attributes/LastName | TODO - add new attribute | change_request_hcp | alternate_last_name__v | N/N | - | -
configuration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleName | TODO - add new attribute | change_request_hcp | alternate_middle_name__v | N/N | - | -
?? | TODO - add new attribute | change_request_hcp | family_full_name__v | N/N | - | TO BE CONFIRMED
configuration/entityTypes/HCP/attributes/DoB | birthYear | change_request_hcp | birth_year__v | N/N | - | -
configuration/entityTypes/HCP/attributes/Credential/attributes/Credential (by rank, 1 to 5) | TODO - add new attribute | change_request_hcp | credentials_1__v … credentials_5__v | N/N | - | HCPCredentials; note: the attribute exists in Reltio but is not used (uri: configuration/entityTypes/HCP/attributes/Credential/attributes/Credential, lookupCode: rdm/lookupTypes/Credential, skipInDataAccess: false); TO BE CONFIRMED
?? | TODO - add new attribute | change_request_hcp | fellow__v | N/N | - | BooleanReference; TO BE CONFIRMED
configuration/entityTypes/HCP/attributes/Gender | gender | change_request_hcp | gender__v | N/N | - | HCPGender
?? Education ?? | TODO - add new attribute | change_request_hcp | education_level__v | N/N | - | HCPEducationLevel; TO BE CONFIRMED
configuration/entityTypes/HCP/attributes/Education/attributes/SchoolName | TODO - add new attribute | change_request_hcp | grad_school__v | N/N | - | -
configuration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduation | TODO - add new attribute | change_request_hcp | grad_year__v | N/N | - | -
?? | - | change_request_hcp | hcp_focus_area_1__v … hcp_focus_area_10__v | N/N | - | HCPFocusArea; TO BE CONFIRMED
?? | - | change_request_hcp | medical_degree_1__v, medical_degree_2__v | N/N | - | HCPMedicalDegree; TO BE CONFIRMED
configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty (by rank, from 1 to 100) | specialties | change_request_hcp | specialty_1__v | Y/N | - | -
configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty | specialties | change_request_hcp | specialty_2__v … specialty_10__v | N/N | - | Specialty
configuration/entityTypes/HCP/attributes/WebsiteURL | TODO - add new attribute | change_request_hcp | URL_1__v | N/N | - | -

ADDRESS (change_request_address)

- | Mongo-generated ID for this DCR / Kafka KEY | change_request_address | dcr_key | Y/Y | Customer's internal identifier for this request | -
Reltio URI and Reltio Type (when inserting a new profile) | entities.HCP or HCO.updateCrosswalk.type (Reltio), entities.HCP or HCO.updateCrosswalk.value (Reltio id) and refId.entityURI; concatenated, e.g. Reltio:rvu44dm | change_request_address | entity_key | Y/Y | Customer's internal HCO/HCP identifier | -
attributes/Addresses/attributes/COMPANYAddressID | address.refId | change_request_address | address_key | Y/Y | Customer's internal address identifier | -
attributes/Addresses/attributes/AddressLine1 | addressLine1 | change_request_address | address_line_1__v | Y/N | - | -
attributes/Addresses/attributes/AddressLine2 | addressLine2 | change_request_address | address_line_2__v | N/N | - | -
attributes/Addresses/attributes/AddressLine3 | addressLine3 | change_request_address | address_line_3__v | N/N | - | -
N/A | N/A ("A") | change_request_address | address_status__v | N/N | - | AddressStatus
attributes/Addresses/attributes/AddressType | addressType | change_request_address | address_type__v | Y/N | - | AddressType
attributes/Addresses/attributes/StateProvince | stateProvince | change_request_address | administrative_area__v | Y/N | - | AddressAdminArea
attributes/Addresses/attributes/Country | country | change_request_address | country__v | Y/N | - | -
attributes/Addresses/attributes/City | city | change_request_address | locality__v | Y/Y | - | -
attributes/Addresses/attributes/Zip5 | zip | change_request_address | postal_code__v | Y/N | - | -
attributes/Addresses/attributes/Source/attributes/SourceName, attributes/Addresses/attributes/Source/attributes/SourceAddressID | when VEEVA, map the VEEVA ID to sourceAddressId | change_request_address | vid__v | N/Y | - | -

PARENTHCO (change_request_parenthco) - mapped from relationTypes/OtherHCOtoHCOAffiliations or relationTypes/ContactAffiliations; this will be HCP.ContactAffiliation or HCO.OtherHcoToHCO affiliation

- | Mongo-generated ID for this DCR / Kafka KEY | change_request_parenthco | dcr_key | Y/Y | Customer's internal identifier for this request | -
- | HCO.otherHCOAffiliations.relationUri or HCP.contactAffiliations.relationUri (from the Domain model); information about the Reltio Relation ID | change_request_parenthco | parenthco_key | Y/Y | Customer's internal identifier for this relationship | RELATION ID
- | KEY entity_key from HCP or HCO (start object) | change_request_parenthco | child_entity_key | Y/Y | Child identifier in the HCO/HCP file | START OBJECT ID
endObject entity uri mapped to refId.EntityURI / TargetObjectId | KEY entity_key from HCP or HCO (end object, by affiliation) | change_request_parenthco | parent_entity_key | Y/Y | Parent identifier in the HCO file | END OBJECT ID
changes in the Domain model mapping: map Relation.Source.SourceName - VEEVA, map Relation.Source.SourceValue - VEEVA ID (add to the Domain model) | map if the relation comes from a VEEVA ID | change_request_parenthco | vid__v | N/Y | - | -
- | start object entity type | change_request_parenthco | entity_type__v | Y/N | - | -
attributes/RelationType/attributes/PrimaryAffiliation (if primary) | TODO - add new attribute to otherHcoToHCO | change_request_parenthco | is_primary_relationship__v | N/N | - | BooleanReference
- | HCO_HCO or HCP_HCO | change_request_parenthco | hierarchy_type__v | - | - | RelationHierarchyType
attributes/RelationType/attributes/RelationshipDescription (type from the affiliation; based on ContactAffiliation or OtherHCOToHCO affiliation) | probably 14-Employed for HCP_HCO and 4-Manages for HCO_HCO, but it may be mapped from affiliation.type | change_request_parenthco | relationship_type__v | Y/N | - | RelationType

Mongo collection

All DCRs initiated by the dcr-service-2 API that are to be sent to Veeva are stored in Mongo in the new collection DCRRegistryVeeva. The idea is to gather all DCRs requested by the client throughout the day and schedule the 'SubmitVR' process that communicates with the Veeva adapter.

Typical use case:

In this store we are going to keep both types of DCRs:

  • initiated by PforceRX - PFORCERX_DCR("PforceRxDCR")
  • initiated by Reltio SubmitVR - SENDTO3PART_DCR("ReltioSuggestedAndSendTo3PartyDCR")


Store class idea (a sketch; the raw CSV line holders, kept inline in the original note, are modeled here as nested data classes):

@Document("DCRRegistryVEEVA")
@JsonIgnoreProperties(ignoreUnknown = true)
@JsonInclude(JsonInclude.Include.NON_NULL)
data class VeevaVRDetails(
    @JsonProperty("_id")
    @Id
    val id: String? = null,
    val type: DCRType,
    val status: DCRRequestStatusDetails,
    val createdBy: String? = null,
    val createTime: ZonedDateTime? = null,
    val endTime: ZonedDateTime? = null,
    val veevaRequestTime: ZonedDateTime? = null,
    val veevaResponseTime: ZonedDateTime? = null,
    val veevaRequestFileName: String? = null,
    val veevaResponseFileName: String? = null,
    val veevaResponseFileTime: ZonedDateTime? = null,
    val country: String? = null,
    val source: String? = null,
    val extDCRComment: String? = null, // external DCR comment (client comment)
    val trackingDetails: List<DCRTrackingDetails> = mutableListOf(),
    // RAW FILE LINES mapped from DCRRequestD to the Veeva model
    val veevaRequest: VeevaRequestCsv? = null,
    // RAW FILE LINES mapped from the Veeva response model
    val veevaResponse: VeevaResponseCsv? = null,
)

// Raw CSV lines for the outbound request files
data class VeevaRequestCsv(
    val change_request_csv: String,
    val change_request_hcp_csv: String,
    val change_request_hco_csv: List<String>,
    val change_request_address_csv: List<String>,
    val change_request_parenthco_csv: List<String>,
)

// Raw CSV lines for the inbound response files
data class VeevaResponseCsv(
    val change_request_response_csv: String,
    val change_request_response_hcp_csv: String,
    val change_request_response_hco_csv: List<String>,
    val change_request_response_address_csv: List<String>,
    val change_request_response_parenthco_csv: List<String>,
)

Mapping Reltio canonical codes → Veeva source codes

There are a couple of steps performed to find a mapping from a Reltio canonical code to a source code understood by VOD. The steps below are performed (in this order) until a code is found.

Veeva Defaults 

Configuration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/defaults

The purpose of this logic is to select one of possibly multiple source codes on the VOD end for a single code on the COMPANY side (1:N). The other scenario is when there is no actual source code for a canonical code on the VOD end (1:0); however, this is usually covered by the fallback code logic.

There are a couple of files, each containing source codes for a specific attribute. The ones related to HCO.Specialty and HCP.Specialty have logic which selects the proper code.

RDM lookups with RegExp

The main logic used to find the proper source code for a canonical code. The codes configured in RDM are used; however, they are read from the mongo collection LookupValues. For a specific canonical code (code) we look for sourceMappings with source = VOD. The country is often embedded within the source code, so regexpConfig is applied (more in the Veeva Fallback section) to extract the specific source code for a particular country.

Veeva Fallback

Configuration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/fallback
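
A schematic Kotlin sketch of the resolution order described above (defaults → RDM lookup with RegExp → fallback); the helper names and configuration shapes are assumptions for illustration, and "?" is the documented last-resort default:

// Resolve a Reltio canonical code to a VOD source code, trying each strategy in order.
fun resolveVodCode(attribute: String, canonicalCode: String, country: String): String {
    defaultsFor(attribute)[canonicalCode]?.let { return it }               // 1. Veeva Defaults (config-registry files)
    lookupValues.findByCode(canonicalCode)                                 // 2. RDM lookup via the LookupValues collection
        ?.sourceMappings?.filter { it.source == "VOD" }
        ?.mapNotNull { regexpConfig.extractForCountry(it.value, country) } //    country is often embedded in the source code
        ?.firstOrNull()?.let { return it }
    return fallbackFor(attribute, country) ?: "?"                          // 3. Veeva Fallback, else the question-mark default
}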


Triggers

Trigger action | Component | Action | Default time
REST call | mdm-veeva-dcr-service: POST /dcr → veevaDCRService.createChangeRequest(request) | Creates the DCR and stores it in the collection without actually sending it to Veeva | API synchronous requests - realtime


Dependent components

Component | Usage
DCR Service 2 | Main component with flow implementation
Hub Store | DCR and Entities Cache
" }, { "title": "Veeva: create DCR method (submitVR)", "pageID": "386796763", "pageLink": "/pages/viewpage.action?pageId=386796763", "content": "

Description

Gathers all DCR entities stored in the DCRRegistryVeeva collection (status = NEW) and sends them via S3/SFTP to Veeva OpenData (VOD). This method triggers the CSV/ZIP file creation and the file placement in the outbound directory. It is triggered from cron, which invokes VeevaDCRRequestSender.sendDCRs() from the Veeva DCR Service.

Flow diagram

\"\"

Steps


  1. The method is invoked via a scheduled trigger, usually every 24h (senderConfiguration.schedulerConfig.fixedDelay) at a specific time of day (senderConfiguration.schedulerConfig.initDelay)
  2. All DCR entities (VeevaVRDetails) with status NEW are retrieved from the DCRRegistryVeeva collection
  3. Then a VeevaCreateChangeRequest object is created, which aggregates all CSV content that should be placed in the actual CSV files (a compact sketch follows this list)
    1. Each object contains only the DCRs specific to one country
    2. Each country has its own S3/SFTP directory structure as well as a dedicated SFTP server instance
  4. Once the CSV files are created with header and content, they are packed into a single ZIP file
  5. Finally, the ZIP file is placed in the outbound S3 directory
  6. If the file was placed
    1. successfully - then VeevaChangeRequestACK status = SUCCESS
    2. otherwise - then VeevaChangeRequestACK status = FAILURE and the process ends
  7. Finally, the status of the VeevaVRDetails entity in the DCRRegistryVeeva collection is updated and set to SENT_TO_VEEVA
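
A compact Kotlin sketch of the submit loop above (the grouping, packing, and upload helpers are illustrative placeholders; the statuses and veevaAdapter.createDCRs come from this page):

// Sketch of the Submit VR job: group NEW DCRs by country, build one ZIP per country,
// upload it, and mark the entities as SENT_TO_VEEVA on success.
fun sendDCRs() {
    dcrRegistryVeeva.findByStatus("NEW")
        .groupBy { it.country }
        .forEach { (country, dcrs) ->
            val csvFiles = buildCsvFiles(dcrs)              // change_request*.csv content for this country
            val zip = packZip(country, csvFiles)            // a single ZIP per country
            val ack = veevaAdapter.createDCRs(country, zip) // place the ZIP in the outbound S3/SFTP directory
            if (ack.status == "SUCCESS") {
                dcrs.forEach { dcrRegistryVeeva.updateStatus(it.id, "SENT_TO_VEEVA") }
            }                                               // on FAILURE the entities stay NEW for the next run
        }
}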

Triggers

Trigger action | Component | Action | Default time
Timer (cron) | mdm-veeva-dcr-service: VeevaDCRRequestSender.sendDCRs() | Takes all unsent entities (status = NEW) from the Veeva collection and puts the file on the S3/SFTP directory via veevaAdapter.createDCRs | Usually every 24h (senderConfiguration.schedulerConfig.fixedDelay) at a specific time of day (senderConfiguration.schedulerConfig.initDelay)


Dependent components

Component | Usage
DCR Service 2 | Main component with flow implementation
Hub Store | DCR and Entities Cache
" }, { "title": "Veeva: generate DCR Change Events (traceVR)", "pageID": "379329922", "pageLink": "/pages/viewpage.action?pageId=379329922", "content": "

Description

The process is responsible for gathering DCR responses from Veeva OpenData (VOD). Responses are provided via CSV/ZIP files placed on the S3/SFTP server in the inbound directory, which is specific to each country. During this process, the files are retrieved, mapped from the VOD model to the HUB DCR model, and published to a Kafka topic to be processed by DCR Service 2, Veeva: process DCR Change Events.

Flow diagram

\"\"

Source: Lucid

Steps

  1. The method is triggered via cron, usually every 24h (traceConfiguration.schedulerConfig.fixedDelay) at a specific time of day (traceConfiguration.schedulerConfig.initDelay)
  2. For each country, the inbound directory is scanned for ZIP files
  3. Each ZIP file (<country>_DCR_Response_<Date>.zip) should be unpacked and processed; a set of CSV files is extracted. Specifically:
    1. change_request_response.csv → the manifest file with general information in specific columns
      1. dcr_key → ID of the DCR, established during DCR request creation
      2. entity_key → ID of the entity in Reltio, the same one provided during DCR request creation
      3. entity_type → type of the entity (HCO, HCP) being modified via this DCR
      4. resolution → indicates whether the DCR was accepted or rejected. The full list of values:

         resolution value | Description
         CHANGE_PENDING | This change is still processing and hasn't been resolved
         CHANGE_ACCEPTED | This change has been accepted without modification
         CHANGE_PARTIAL | This change has been accepted with additional changes made by the steward, or some parts of the change request have been rejected
         CHANGE_REJECTED | This change has been rejected in its entirety
         CHANGE_CANCELLED | This change has been cancelled

      5. change_request_type

         change_request_type value | Description
         ADD_REQUEST | the DCR created a new profile in VOD with a new vid__v (Veeva id)
         CHANGE_REQUEST | just an update of an existing profile in VOD with an existing, already known vid__v (Veeva id)

    2. change_request_hcp_response.csv - contains information about DCRs related to HCPs
    3. change_request_hco_response.csv - contains information about DCRs related to HCOs
    4. change_request_address_response.csv - contains information about DCRs related to addresses belonging to a specific HCP or HCO
    5. change_request_parenthco_response.csv - contains information about DCRs that correspond to relations between HCP and HCO, and between HCO and HCO
    6. The log file <country>_DCR_Request_Job_Log.csv can be skipped; it does not contain any information useful for automatic processing
  4. For each DCR response from VOD, the corresponding DCR entity (VeevaVRDetails) should be selected from the DCRRegistryVeeva collection.
  5. In general, the specific response files are not that important (VOD profile updates will be ingested into the HUB via the ETL channel); however, when new profiles are created (change_request_response.csv.change_request_type = ADD_REQUEST) their Veeva IDs need to be extracted.
    1. Look into change_request_hcp_response.csv or change_request_hco_response.csv to find the vid__v (Veeva ID) for the specific dcr_key
    2. This new Veeva ID should be stored in VeevaDCREvent.vrDetails.veevaHCPIds
    3. It should be further used as a crosswalk value in Reltio:
      1. entities.HCO.updateCrosswalk.type (VEEVA)
      2. entities.HCO.updateCrosswalk.value (VEEVA ID)
  6. Once the data has been mapped from the Veeva model to the HUB DCR model, a new VeevaDCREvent entity should be created and published to the dedicated Kafka topic $env-internal-veeva-dcr-change-events-in
    1. Please be advised that when the resolution status is not final (i.e., not one of CHANGE_ACCEPTED, CHANGE_REJECTED, CHANGE_CANCELLED, CHANGE_PARTIAL), the event should not be sent to DCR-service-2
  7. Then, for each successfully processed DCR, the entity (VeevaVRDetails) in the Mongo DCRRegistryVeeva collection should be updated according to the mapping below (a Kotlin sketch follows this list):

     Veeva CSV resolution | Mongo DCRRegistryVeeva VeevaVRDetails.status | Event VeevaDCREvent.vrDetails.vrStatus | Event VeevaDCREvent.vrDetails.vrStatusDetail
     CHANGE_PENDING | not updated at all (stays SENT) | do not send events to DCR-service-2 | do not send events to DCR-service-2
     CHANGE_ACCEPTED | ACCEPTED | CLOSED | ACCEPTED
     CHANGE_PARTIAL | ACCEPTED | CLOSED | ACCEPTED (resolutionNotes / veevaComment should contain more information about what was rejected by the VEEVA DS)
     CHANGE_REJECTED | REJECTED | CLOSED | REJECTED
     CHANGE_CANCELLED | REJECTED | CLOSED | REJECTED
  8. Once the files are processed, the ZIP file should be moved from the inbound to the archive directory
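
A minimal, self-contained Kotlin sketch of the resolution mapping in step 7 (names mirror the table above; returning null expresses "stay SENT, publish nothing"):

// Map a Veeva CSV resolution to the registry status and the event status fields.
data class TraceOutcome(val registryStatus: String, val vrStatus: String, val vrStatusDetail: String)

fun mapResolution(resolution: String): TraceOutcome? = when (resolution) {
    "CHANGE_ACCEPTED", "CHANGE_PARTIAL"   -> TraceOutcome("ACCEPTED", "CLOSED", "ACCEPTED")
    "CHANGE_REJECTED", "CHANGE_CANCELLED" -> TraceOutcome("REJECTED", "CLOSED", "REJECTED")
    else -> null // CHANGE_PENDING (or unknown): do not update the status, do not send an event
}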


Event VeevaDCREvent Model

data class VeevaDCREvent(
    val eventType: String? = null,
    val eventTime: Long? = null,
    val eventPublishingTime: Long? = null,
    val countryCode: String? = null,
    val dcrId: String? = null,
    val vrDetails: VeevaChangeRequestDetails,
)

data class VeevaChangeRequestDetails(
    val vrStatus: String? = null,       // HUB codes
    val vrStatusDetail: String? = null, // HUB codes
    val veevaComment: String? = null,
    val veevaHCPIds: List<String>? = null,
    val veevaHCOIds: List<String>? = null,
)


Triggers

Trigger action | Component | Action | Default time
IN Timer (cron) | mdm-veeva-dcr-service: VeevaDCRRequestTrace.traceDCRs() | get DCR responses from the S3/SFTP directory, extract the CSV files from the ZIP file, and publish events to the Kafka topic | every <T> hour; usually every 6h (traceConfiguration.schedulerConfig.fixedDelay) at a specific time of day (traceConfiguration.schedulerConfig.initDelay)
OUT Events on Kafka Topic | mdm-veeva-dcr-service: VeevaDCRRequestTrace.traceDCRs() → $env-internal-veeva-dcr-change-events-in | VeevaDCREvent published to the topic to be consumed by DCR Service 2 | every <T> hour; usually every 6h (traceConfiguration.schedulerConfig.fixedDelay) at a specific time of day (traceConfiguration.schedulerConfig.initDelay)


Dependent components

Component | Usage
DCR Service 2 | Main component with flow implementation
Hub Store | DCR and Entities Cache
" }, { "title": "ETL Batches", "pageID": "164470046", "pageLink": "/display/GMDM/ETL+Batches", "content": "

Description

The process is responsible for managing the batch instances/stages and loading data received from the ETL channel to the MDM system. The Batch service is a complex component that contains predefined JOBS and a Batch Workflow configuration that uses the JOBS implementations; using asynchronous communication over Kafka topics, it updates data in the MDM system and gathers the acknowledgment events. The Mongo cache stores the BatchInstances with the corresponding stages, and EntityProcessStatus objects that contain metadata about the loaded objects.


The diagram below presents an overview of the entire process. Detailed descriptions are available on the separate subpages.

Flow diagram

\"\"

Model diagram

\"\"


\"\"

Steps

Triggers

Described on the separate sub-pages for each process.

Dependent components

Component | Usage
Batch Service | Main component with flow implementation
Manager | Asynchronous events processing
Hub Store | Datastore and cache



" }, { "title": "ACK Collector", "pageID": "164469774", "pageLink": "/display/GMDM/ACK+Collector", "content": "

Description

The flow processes the ACK response messages and updates the cache. Based on these responses, the Processing flow checks the Cache status and blocks the workflow until all responses are received. This process updates the "status" attribute with the MDM system response and the "updateDateMDM" attribute with the corresponding update timestamp.

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
IN Events incoming | batch-service:AckProcessor | update the cache based on the ACK response | realtime

Dependent components

Component | Usage
Batch Service | The main component
Manager | Async route with ACK responses
Hub Store | Cache
" }, { "title": "Batch Controller: creating and updating batch instance", "pageID": "164469788", "pageLink": "/display/GMDM/Batch+Controller%3A+creating+and+updating+batch+instance", "content": "

Description

The batch controller is responsible for managing the Batch Instances. The service allows creating a new batch instance for a specific Batch, creating a new Stage in the batch, and updating a stage with statistics. The Batch controller component manages the batch instances and validates the requests. Only authorized users are allowed to manage specific batches or stages. Additionally, it is not possible to START multiple instances of the same batch at the same time. Once a batch is started, the Client should load the data and, at the end, complete the current batch instance. Once the user creates a new batch instance, a new unique ID is assigned; in subsequent requests the user has to use this ID to update the workflow. By default, once the batch instance is created, all stages are initialized with status PENDING. The Batch controller also manages the dependent stages and marks the whole batch as COMPLETED at the end. A sketch of a typical call sequence is shown below.
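
A hypothetical Kotlin sketch of that lifecycle from the client side (BatchClient and its method names are illustrative assumptions, not the actual batch-service endpoints):

// Illustrative batch-instance lifecycle against the Batch Controller REST API.
val instanceId = batchClient.startBatchInstance("ONEKEY_DE")     // a new unique ID is assigned; stages start as PENDING
batchClient.startStage(instanceId, "HCP Loading")
// ... load the data for the stage (see Bulk Service: loading bulk data) ...
batchClient.completeStage(instanceId, "HCP Loading", statistics) // save the stage statistics
batchClient.completeBatchInstance(instanceId)                    // the batch is marked COMPLETED once dependent stages finish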

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
API request | batch-service.RestBatchControllerRoute | The user initializes the new batch instance, updates the STAGE, saves the statistics, and completes the corresponding STAGE. The user is also able to get the batch instance details and wait for the load completion. | user API request dependent, triggered by an external client

Dependent components

Component | Usage
Batch Service | The main component that exposes the REST API
Hub Store | Batch Instances Cache
" }, { "title": "Batches registry", "pageID": "234695693", "pageLink": "/display/GMDM/Batches+registry", "content": "

The following batches have been configured since 01.02.2022.

ONEKEY

All batches below are incremental file loads; the soft-delete process does not need to be enabled for entities (HCP, HCO) or relations (HCP-HCO, HCO-HCO).

Tenant | Country | Source Name | Batch Name | Stages
EMEA | Algeria | ONEKEY | ONEKEY_DZ | HCP / HCO / Relation Loading
EMEA | Tunisia | ONEKEY | ONEKEY_TN | HCP / HCO / Relation Loading
EMEA | Morocco | ONEKEY | ONEKEY_MA | HCP / HCO / Relation Loading
EMEA | Germany | ONEKEY | ONEKEY_DE | HCP / HCO / Relation Loading
EMEA | France, AD, MC | ONEKEY | ONEKEY_FR | HCP / HCO / Relation Loading
EMEA | France (DOMTOM) = RE, MQ, GP, PF, YT, GF, PM, WF, MU, NC | ONEKEY | ONEKEY_PF | HCP / HCO / Relation Loading
EMEA | Italy | ONEKEY | ONEKEY_IT | HCP / HCO / Relation Loading
EMEA | Spain | ONEKEY | ONEKEY_ES | HCP / HCO / Relation Loading
EMEA | Turkey | ONEKEY | ONEKEY_TR | HCP / HCO / Relation Loading
EMEA | Denmark (plus Faroe Islands and Greenland) | ONEKEY | ONEKEY_DK | HCP / HCO / Relation Loading
EMEA | Portugal | ONEKEY | ONEKEY_PT | HCP / HCO / Relation Loading
EMEA | Russia | ONEKEY | ONEKEY_RU | HCP / HCO / Relation Loading
APAC | Australia | ONEKEY | ONEKEY_AU | HCP / HCO / Relation Loading
APAC | New Zealand | ONEKEY | ONEKEY_NZ | HCP / HCO / Relation Loading
APAC | South Korea | ONEKEY | ONEKEY_KR | HCP / HCO / Relation Loading
AMER | Canada | ONEKEY | ONEKEY_CA | HCP / HCO / Relation Loading
AMER | Brazil | ONEKEY | ONEKEY_BR | HCP / HCO / Relation Loading
AMER | Mexico | ONEKEY | ONEKEY_MX | HCP / HCO / Relation Loading
AMER | Argentina/Uruguay | ONEKEY | ONEKEY_AR | HCP / HCO / Relation Loading

PFORCE_RX

All PFORCERX_ODS batches are incremental file loads; the soft-delete process does not need to be enabled for entities (HCP, HCO) or relations (HCP-HCO, HCO-HCO).

Tenant | Countries | Source Name | Batch Name | Stages
AMER | Brazil, Mexico, Argentina/Uruguay, Canada | PFORCERX_ODS | PFORCERX_ODS | HCP / HCO / Relation Loading
APAC | Japan, Australia/New Zealand, India, South Korea | PFORCERX_ODS | PFORCERX_ODS | HCP / HCO / Relation Loading
EMEA | Saudi Arabia, Germany, France, Italy, Spain, Russia, Turkey, Denmark, Portugal | PFORCERX_ODS | PFORCERX_ODS | HCP / HCO / Relation Loading

GRV

Tenant | Countries | Source Name | Batch Name | Stage
EMEA | GR, IT, FR, ES, RU, TR, SA, DK, GL, FO, PT | GRV | GRV | HCP Loading
AMER | CA, BR, MX, AR | GRV | GRV | HCP Loading
APAC | AU, NZ, IN, JP, KR | GRV | GRV | HCP Loading

GCP

Tenant | Countries | Source Name | Batch Name | Stage
EMEA | GR, IT, FR, ES, RU, TR, SA, DK, GL, FO, PT | GCP | GCP | HCP Loading
AMER | CA, BR, MX, AR | GCP | GCP | HCP Loading
APAC | AU, NZ, IN, JP, KR | GCP | GCP | HCP Loading

ENGAGE

Tenant | Country | Source Name | Batch Name | Stages
AMER | CA | ENGAGE | ENGAGE | HCP / HCO / Relation Loading
" }, { "title": "Bulk Service: loading bulk data", "pageID": "164469786", "pageLink": "/display/GMDM/Bulk+Service%3A+loading+bulk+data", "content": "

Description

The bulk service is responsible for loading bundled data, using the REST API as the input and Kafka stage topics as the output. This process is strictly connected to the Batch Controller: creating and updating batch instance flow, which means the Client should first initialize a new batch instance and stage. Data is then loaded to the next processing stages via API requests.

Flow diagram

\"\"

Steps


Triggers

Trigger action | Component | Action | Default time
API request | batch-service.RestBulkControllerRoute | Clients send the data to the bulk service | user API request dependent, triggered by an external client

Dependent components

Component | Usage
Batch Service | The main component that exposes the REST API
Hub Store | Batch Instances Cache



" }, { "title": "Clear Cache", "pageID": "164469784", "pageLink": "/display/GMDM/Clear+Cache", "content": "

Description

This flow is used to clear the mongo cache (removes records from batchEntityProcessStatus) for a specified batch name, object type, and entity type. An optional comma-separated list of countries allows filtering by country.

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
API Request | batch-service.RestBatchControllerRoute | External client calls the request to clear the cache | user API request dependent, triggered by an external client

Dependent components

Component | Usage
Batch Service | The main component that exposes the REST API
Hub Store | Batch entities/relations cache
" }, { "title": "Clear Cache by croswalks", "pageID": "282663410", "pageLink": "/display/GMDM/Clear+Cache+by+croswalks", "content": "

Description

This flow is used to clear the mongo cache (removes records from batchEntityProcessStatus) for a specified batch name and sourceId type and/or value.

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
API Request | batch-service.RestBatchControllerRoute | External client calls the request to clear the cache | user API request dependent, triggered by an external client

Dependent components

Component | Usage
Batch Service | The main component that exposes the REST API
Hub Store | Batch entities/relations cache
" }, { "title": "PATCH Operation", "pageID": "355371021", "pageLink": "/display/GMDM/PATCH+Operation", "content": "

Description

Entity PATCH (UpdateHCP/UpdateHCO/UpdateMCO) operation differs slightly from the standard POST (CreateHCP/CreateHCO/CreateMCO) operation:

Algorithm

PATCH operation logic consists of the following steps:

" }, { "title": "Processing JOB", "pageID": "164469780", "pageLink": "/display/GMDM/Processing+JOB", "content": "

Description

The flow checks the Cache using a poller that executes the query every <T> minutes. During this processing, the count decreases until it reaches 0.

The following query is used to check the count of objects that have not yet been delivered. The process ends when the query returns 0 objects - it means that we received an ACK for each object and it is possible to go to the next dependent stage.

"{'batchName': ?0 ,'sendDateMDM':{ $gt: ?1 }, '$or':[ {'updateDateMDM':{ $lt: ?1 } }, { 'updateDateMDM':{ $exists : false } } ] }"

Using a Mongo query, it is possible to find which objects are still not processed. In that case, the user should provide batchName = "currently loading batch" and use the batch start date as the date parameter. An example with the parameters substituted is shown below.
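
For illustration, the same query with the placeholders substituted (the batch name and date are hypothetical; batchEntityProcessStatus is the collection named in the Clear Cache flow):

{'batchName': 'ONEKEY_DE', 'sendDateMDM': { $gt: ISODate('2022-02-01T00:00:00Z') }, '$or': [ { 'updateDateMDM': { $lt: ISODate('2022-02-01T00:00:00Z') } }, { 'updateDateMDM': { $exists: false } } ] }

A count of 0 means that every object loaded in this batch instance has been acknowledged by the MDM system.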

Flow diagram

\"\"

Steps

Triggers

Trigger action | Component | Action | Default time
The previous dependent JOB is completed; triggered by the Scheduler mechanism | batch-service:ProcessingJob | Queries mongo and checks the number of objects that are not yet processed | every 60 seconds

Dependent components

Component | Usage
Batch Service | The main component with the Processing JOB implementation
Hub Store | The cache that stores all information about the loaded objects
" }, { "title": "Sending JOB", "pageID": "164469778", "pageLink": "/display/GMDM/Sending+JOB", "content": "

Description

The JOB is responsible for sending the data from the Stage Kafka topics to the manager component. During this process data is checked, the checksum is calculated and compared to the previous state, os only the changes are applied to MDM. The Cache - Batch data store, contains multiple metadata attributes like sourceIngetstionDate - the time once this entity was recently shared by the Client, and the ACK response status (create/update/failed) 

The Checksum is calculation is skipped for the "failed" objects. It means there is no need to clear the cache for the failed objects, the user just needs to reload the data. 

The JOB is triggered once the previous dependent job is completed or is started. There are two mode of dependences between Loading STAGE and Sending STAGE

The purpose of hard dependency is the case when the user has to Load HCP/HCO and Relations objects. The sending of relation has to start after HCP and HCO load is COMPLETED. 

The process finishes once the Batch stage queue is empty for 1 minute (no new events are in the queue).

The following query is used to retrieve processing object from cache. Where the batchName is the corersponding Batch Instance, and sourceId is the information about loaded source crosswalk.

{'batchName': ?0, {'sourceId.type': ?1, 'sourceId.value': ?2,'sourceId.sourceTable': ?3 } }

Flow diagram

\"\"

Steps

Triggers

Trigger actionComponentActionDefault time
The previous dependent JOB is completed. Triggered by the Scheduler mechanismbatch-service:SendingJobGet entries from stage topic, saved data in mongo and create/updates profiles using Kafka producer (asynchronous channel)once the dependence JOB is completed

Dependent components

ComponentUsage
Batch ServiceThe main component with the Sending JOB implementation
Hub StoreThe cache that stores all information about the loaded objects
" }, { "title": "SoftDeleting JOB", "pageID": "164469776", "pageLink": "/display/GMDM/SoftDeleting+JOB", "content": "

Description

This JOB is responsible for the soft-delete process for the full file loads. Batches that are configured with this JOB have to always deliver the full set of data. The process is triggered at the end of the workflow and soft-delete objects in the MDM system. 

The following query is used to check how many objects are going to be removed and also to get all these objects and send the soft-delete requests. 

{'batchName': ?0, 'deleted': false, 'objectType': 'ENTITY OR RELATION', 'sourceIngestionDate':{ $lt: ?1 } }

Once the object is soft deleted "deleted" flag is changed to "true"
Using the mongo query there is a possibility to check what objects were soft-deleted by this process. In that case, the Administrator should provide the batchName=" currently loading batch" and the deleted parameter =" true".
The process removes all objects that were not delivered in the current load, which means that the "SourceIngestionDate" is lower than the "BatchStartDate".
It may occur that the number of objects to soft-delete exceeds the limit, in that case, the process is aborted and the Administrator should verify what objects are blocked and notify the client.
The production limit is a maximum of 10000 objects in one load.

Flow diagram

\"\"

Steps 

2023-07 Update: Set Soft-Delete Limit by Country

DeletingJob now allows additional configuration:

\n
deletingJob:\n  "TestDeletesPerCountryBatch":\n    "EntitiesUnseenDeletion":\n      maxDeletesLimit: 20\n      queryBatchSize: 5\n      reltioRequestTopic: "local-internal-async-all-testbatch"\n      reltioResponseTopic: "local-internal-async-all-testbatch-ack"\n>     maxDeletesLimitPerCountry:\n>       enabled: true\n>       overrides:\n>         CA: 10\n>         BR: 30
\n

If maxDeletesLimitPerCountry.enabled == true (default false):


Triggers

Trigger actionComponentActionDefault time
The previous dependent JOB is completed. Triggered by the Scheduler mechanismbatch-service:AbstractDeletingJob (DeletingJob/DeletingRelationJob)Triggers mongo and soft-delete profiles using Kafka producer (asynchronous channel)once the dependence JOB is completed

Dependent components

ComponentUsage
Batch ServiceThe main component with the SoftDeleting JOB implementation
ManagerAsynchronous channel 
Hub StoreThe cache that stores all information about the loaded objects
" }, { "title": "Event filtering and routing rules", "pageID": "164470034", "pageLink": "/display/GMDM/Event+filtering+and+routing+rules", "content": "

At various stages of processing events can be filtered based on some configurable criteria. This helps to lessen the load on the Hub and client systems, as well as simplifies processing on client side by avoiding the types of events that are of no interest to the target application. There are three places where event filtering is applied:

Event type filtering

Each event received from SQS queue has a "type" attribute. Reltio Subscriber has a "allowedEventTypes" configuration parameter (in application.yml config file) that lists event types which are processed by application. Currently, complete list of supported types is:

An event that does not match this list is ignored, and "Message skipped" entry is added to a log file.
Please keep in mind that while it is easy to remove an event type from this list in order to ignore it, adding new event type is a whole different story – it might not be possible without changes to the application source code.

Duplicate detection (Nucleus)

There's an in-memory cache maintained that stores entityUri and type of an event previously sent for that uri. This allows duplicate detection. The cache is cleared after successful processing of the whole zip file.

Entity data-based filtering

Event Publisher component receives events from internal Kafka topic. After fetching current Entity state from Reltio (via MDM Integration Gateway) it imposes few additional filtering rules based on fetched data. Those rules are:

  1. Filtering based on Country that entity belongs to. This is based on value of ISO country code, extracted from Country attribute of an entity. List of allowed codes is maintained as "activeCountries" parameter in application.yml config file.
  2. Filtering based on Entity type. This is controlled by "allowedEntityTypes" configuration parameter, which currently lists two values: "HCP" and "HCO". Those values are matched against "entityType" attribute of Entity (prefix "configuration/entityTypes/" is added automatically, so it does not need to be included in configuration file)
  3. Filtering out events that have empty "targetEntity" attribute – such events are considered outdated, plus they lack some mandatory information that would normally be extracted from targetEntity, such as originating country and source system. They are filtered out because Hub would not be able to process them correctly anyway.
  4. Filtering out events that have value mismatch between "entitiesURIs" attribute of an event and "uri" attribute of targetEntity – for all event types except HCP_LOST_MERGE and HCO_LOST_MERGE. Uri mismatch may arise when EventPublisher is processing events with significant delay (e.g. due to downtime, or when reprocessing events) – Event Publisher might be processing HCP_CHANGED (HCO_CHANGED) event for an Entity that was merged with another Entity since then, so HCP_CHANGED event is considered outdated, and we are expecting HCP_LOST_MERGE event for the same Entity.

This filter is controlled by eventRouter.filterMismatchedURIs configuration parameter, which takes Boolean values (yes/no, true/false)

  1. Filtering out events based on timestamps. When HCP_CHANGED or HCO_CHANGED event arrives that has "eventTime" timestamp older than "updatedTime" of the targetEntity, it is assumed that another change for the same entity has already happened and that another event is waiting in the queue to be processed. By ignoring current event Event Publisher is ensuring that only the most recent change is forwarded to client systems.

This filter is controlled by eventRouter.filterOutdatedChanges configuration parameter, which can take Boolean values (yes/no, true/false)

Event routing

Publishing Hub supports multiple client systems subscribing for Entity change events. Since those clients might be interested in different subset of Events, the event routing mechanism was created to allow configurable, content-based routing of the events to specific client systems. Routing mechanics consists of three main parts:

  1. Kafka topics – each client system can has one or more dedicated topics where events of interest for that system are published
  2. Metadata extraction – as one of the processing steps, there are some pieces of information extracted from the Event and related Entity and put in processing context (as headers), so they can be easily accessed.
  3. Configurable routing rulesEvent Publisher's configuration file contains the whole section for defining rules that facilitates Groovy scripting language and the metadata.

Available metadata is described in the table below.

Table 10. Routing headers





Header

Type

Values

Source Field

Description

eventType

String

full
simple

none

Type of an event. "full" means Event Sourcing mode, with full targetEntity data.
"simple" is just an event with basic data, without targetEntity

eventSubtype

String

HCP_CREATED,
HCP_CHANGED,
….

event.eventType

For the full list of available event subtypes is specified in MDM Publishing Hub Streaming Interface document.

country

String

CN
FR

event.targetEntity.attributes .Country.lookupCode

Country of origin for the Entity

eventSource

Array of String

["OK", "GRV"]

event. targetEntity.crosswalks.type

Array containing names of all the source systems as defined by Reltio crosswalks

mdmSource

String

["RELTIO", NUCLEUS"]

None

System of origin for the Entity.

selfMerge

Boolean

true, false

None

Is the event "self-merge"? Enables filtering out merges on the fly.


Routing rules configuration is found in eventRouter.routingRules section of application.yml configuration file. Here's an example of such rule:
\"\"
Elements of this configuration are described below.

Selector syntax can include, among the others, the elements listed in the table below.

Table 11. Selector syntax



Element

Example

Description

comparison operators

==, !=, <, >

Standard Groovy syntax

boolean operators

&&,


set operators

in, intersect


Message headers

exchange.in.headers.country

See Table 10 for list of available headers. "exchange.in.headers" is the standard prefix that must be used do access them


Full syntax reference can be found in Apache Camel documentation: http://camel.apache.org/groovy.html .
The limitation here is that the whole snippet should return a single boolean value.
Destination name can be literal, but can also reference any of the message headers from Table 10, with the following syntax:
\"\"

" }, { "title": "FLEX COV Flows", "pageID": "172301002", "pageLink": "/display/GMDM/FLEX+COV+Flows", "content": "" }, { "title": "Address rank callback", "pageID": "164470175", "pageLink": "/display/GMDM/Address+rank+callback", "content": "

The Address Rank Callback is used only in the FLEX COV environment to update the Rank attribute on Addresses. This process sends the callback to Reltio only when the specific source exists on the profile. The Rank is used then by the Bussiness Team or Data Stewards in Reltio or by the downstream FLEX system. 

Address Rank Callback is triggered always when getEntity operation is invoked. The purpose of this process is to synchronize Reltio with correct address rank sort order.

Currently the functionality is configured only for US Trade Instance. Below is the diagram outlining the whole process.

\"\" 
Process steps description:

  1. Event Publisher receives events from internal Kafka topic and calls MDM Gateway API to retrieve latest state of Entity from Reltio.
  2. Event Publisher internal user is authorized in MDM Manager to check source, country and appropriate access roles. MDM Manager invokes get entity operation in Reltio. Returned JSON is then added to the Address Rank sort process, so the client will always get entity with sorted address rank order, but only when this feature is activated in configuration.
  3. When Address Rank Sort process is activated, each address in entity is sorted. In this case "AddressRank" and "BestRecord" attributes are set. When AddressRank is equal to "1" BestRecord attribute will always have "1" value.
  4. When Address Rank Callback process is activated, relation operation is invoked in Reltio. The Relation Request object contains Relation object for each sorted address. Each Relation will be created with "AddrCalc" source, where the start object is current entity id and the end object is id of the Location entity. In that case relation between entity and Location is created with additional rank attributes. There is no need to send multiple callback requests every time when get entity operation is invoked, so the Callback operation is invoked only when address rank sort order have changed.
  5. Entity data is stored in MongoDB NOSQL database, for later use in Simple mode (publication of events that entityURI and require client to retrieve full Entity via REST API).
  6. For every Reltio event there are two Publishing Hub events created: one in Simple mode and one in Event Sourcing (full) mode. Based on metadata, and Routing Rules provided as a part of application configuration, the list of the target destinations for those events is created. Event is sent to all matched destinations.


" }, { "title": "DEA Flow", "pageID": "164470009", "pageLink": "/display/GMDM/DEA+Flow", "content": "

This flow processes DEA files published by GIS Team to S3 Bucket. Flow steps are presented on the sequence diagram below.

\"\" 
Process steps description:

  1. DEA files are uploaded to AWS S3 storage bucket to the appropriate directory intended only for DEA files.
  2. Batch Channel component is monitoring S3 location and processes the files uploaded to it.
  3. Folder structure for DEA is divided on "inbound" and "archive" directories. Batch Channel component is polling data from inbound directory, after successful processing the file is copied to "archive directory"
  4. Files downloaded from S3 is processed in streaming mode. The processing of the file can be stared before full download of the file. Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully downloaded.
  5. DEA file load Start Time is saved for the specific load – as loadStartDate.
  6. Each line in file is parsed in Batch Channel component and mapped to the dedicated DEA object. DEA file is saved in Fixed Width Data Format, in that case one DEA record is saved in one line in the file so there is no need to use record aggregator. Each line has specified length, each column has specified star and end point number in the row.
  7. BatchContext is downloaded from MongoDB for each DEA record. This context contains DEA crosswalk ID, line from file, MD5 checksum, last modification date, delete flag. When BatchContext is empty it means that this DEA record is initially created – such object is send to Kafka Topic. When BatchContext is not empty the MD5 form the source DEA file is compared to the MD5 from the BatchContext (mongo). If MD5 checksums are equals – such object is skipped, otherwise – such object is send to Kafka Topic. For each modified object, lastModificationDate is updated in Mongo – it is required to detected delete records as the final step.
  8. Only when record MD5 checksum is not changed, DEA record will be published to Kafka topic dedicated for events for DEA records. They will be processed by MDM Manager component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for HCO creation is stared. The full description of this process is in HCO Post section.
  9. TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between Batch Channel and MDM Manager components.
  10. After DEA file is successfully processed, DEA delete record processor is started. From Mongo Database each record with lastModificationDate less than loadStartDate and delete flag equal to false is downloaded. When the result count is grater that 1000, delete record processor is stoped – it is a protector feature in case of wrong file uploade which can generate multiple unexpected DEA profiles deletion. Otherwise, when result count is less than 1000, each record from MongoDB is parsed and send to Kafka Topic with deleteDate attribute on crosswalk. Then they will be processed by MDM Manager component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for HCO creation is stared. The full description of this process is in HCO Post section. Profiles created with deleteDate attribute on crosswalk are soft deleted in Reltio.
  11. Finally DEA file is moved to archive subtree in S3 bucket.


" }, { "title": "FLEX Flow", "pageID": "164470035", "pageLink": "/display/GMDM/FLEX+Flow", "content": "

This flow processes FLEX files published by Flex Team to S3 Bucket. Flow steps are presented on the sequence diagram below.

\"\"
Process steps description:

  1. FLEX files are uploaded to AWS S3 storage bucket to the appropriate directory intended only for FLEX files.
  2. Batch Channel component is monitoring S3 location and processes the files uploaded to it.
  3. Folder structure for FLEX is divided on "inbound" and "archive" directories. Batch Channel component is polling data from inbound directory, after successful processing the file is copied to "archive directory"
  4. Files downloaded from S3 is processed in streaming mode. The processing of the file can be stared before full download of the file. Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully downloaded.
  5. Each line in file is parsed in Batch Channel component and mapped to the dedicated FLEX object. FLEX file is saved in CSV Data Format, in that case one FLEX record is saved in one line in the file so there is no need to use record aggregator. The first line in the file is always the header line with column names, each next line is the FLEX records with "," (comma character) delimiter. The most complex thing in FLEX mapping is Identifiers mapping. When Flex records contain "GROUP_KEY" ("Address Key") attribute it means that Identifiers saved in "Other Active IDs" will be added to FlexID.Identifiers nested attributes. "Other Active IDs" is one line string with key value pairs separated by "," (comma character), and key-value delimiter ":" (colon character). Additionally for each type of customer Flex identifier is always saved in FlexID section.
  6. FLEX record will be published to Kafka topic dedicated for events for FLEX records. They will be processed by MDM Manager component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for HCO creation is stared. The full description of this process is in HCO Post section.
  7. TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between Batch Channel and MDM Manager components.
  8. After FLEX file is successfully processed, it is moved to archive subtree in S3 bucket.



" }, { "title": "HIN Flow", "pageID": "164469995", "pageLink": "/display/GMDM/HIN+Flow", "content": "

This flow processes HIN files published by HIN Team to S3 Bucket. Flow steps are presented on the sequence diagram below.

\"\"
Process steps description:

  1. HIN files are uploaded to AWS S3 storage bucket to the appropriate directory intended only for HIN files.
  2. Batch Channel component is monitoring S3 location and processes the files uploaded to it.
  3. Folder structure for HIN is divided on "inbound" and "archive" directories. Batch Channel component is polling data from inbound directory, after successful processing the file is copied to "archive directory"
  4. Files downloaded from S3 is processed in streaming mode. The processing of the file can be stared before full download of the file. Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully downloaded.
  5. Each line in file is parsed in Batch Channel component and mapped to the dedicated HIN object. HIN file is saved in Fixed Width Data Format, in that case one HIN record is saved in one line in the file so there is no need to use record aggregator. Each line has specified length, each column has specified star and end point number in the row.
  6. HIN record will be published to Kafka topic dedicated for events for FLEX records. They will be processed by MDM Manager component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for HCO creation is stared. The full description of this process is in HCO Post section.
  7. TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between Batch Channel and MDM Manager components.
  8. After HIN file is successfully processed, it is moved to archive subtree in S3 bucket.


" }, { "title": "SAP Flow", "pageID": "164469997", "pageLink": "/display/GMDM/SAP+Flow", "content": "

This flow processes SAP files published by GIS system to S3 Bucket. Flow steps are presented on the sequence diagram below.

\"\"
Process steps description:

  1. SAP files are uploaded to AWS S3 storage bucket to the appropriate directory intended only for SAP files.
  2. Batch Channel component is monitoring S3 location and processes the files uploaded to it.
    Important note: To facilitate fault tolerance the Batch Channel component will be deployed on multiple instances on different machines. However, to avoid conflicts, such as processing the same file twice, only one instance is allowed to do the processing at any given time. This is implemented via standard Apache Camel mechanism of Route Policy, which is backed by Zookeeper distributed key-value store. When a new file is picked up by Batch Channel instance, the first processing step would be to create a key in Zookeeper, acting as a lock. Only one instance will succeed in creating the key, therefore only one instance will be allowed to proceed.
  3. Folder structure for SAP is divided on "inbound" and "archive" directories. Batch Channel component is polling data from inbound directory, after successful processing the file is copied to "archive directory"
  4. Files downloaded from S3 is processed in streaming mode. The processing of the file can be stared before full download of the file. Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully downloaded.
  5. Each line in file is parsed in Batch Channel component and mapped to the dedicated SAP object. In case of SAP files where one SAP record is saved in multiple lines in the file there is need to use SAPRecordAggregator. This class will read each line of the SAP file and aggregate each line to create full SAP record. Each line starts with Record Type character, the separator for SAP is "~" (tilde character). Only lines that start with the following character are parsed and create full SAP record:

    When header line is parsed Account Type attribute is checked. Only SAP records with "Z031" type are filtered and post to Reltio.

  6. BatchContext is downloaded from MongoDB for each SAP record. This context contains Start Date for SAP and 340B Identifiers. When BatchContext is empty current timestamp is saved for each of the Identifiers, otherwise the start date for the identifiers is changed for the one saved in the Mongo cache. This Start Date always must be overwritten with the initial dates from mongo cache.
  7. Aggregated SAP record will be published to Kafka topic dedicated for events for SAP records. They will be processed by MDM Manager component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for HCO creation is stared. The full description of this process is in HCO POST section.
  8. TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between Batch Channel and MDM Manager components.
  9. After SAP file is successfully processed, it is moved to archive subtree in S3 bucket.




" }, { "title": "US overview", "pageID": "164470019", "pageLink": "/display/GMDM/US+overview", "content": "

\"\"

" }, { "title": "Generic Batch", "pageID": "164469994", "pageLink": "/display/GMDM/Generic+Batch", "content": "

The generic batch offers the functionality of configuring processes of HCP/HCO data loading from text files (CSV) into MDM.
The loading processes are defined in the configuration, without the need for changes in the implementation.

Description of the process


\"\"


Definition of single data flow 

Configuration (definition) od each data flow contains:


Currently defined data flows:



Flow nameCountrySource systemInput files (with names required after preprocessing stage)Detailed columns to entity attribute mapping file

TH HCP

THCICR
  • hcpEntities

fileNamePattern: '(TH_Contact_In)+(\\.(?i)(txt))$'

  • hcpAddresses

fileNamePattern: '(TH_Contact_Address_In_JOINED)+(\\.(?i)(txt))$'

  • hcpSpecialties

fileNamePattern: '(TH_Contact_Speciality_In)+(\\.(?i)(txt))$'

mdm-gateway\\batch-channel\\src\\main\\resources\\flows.yml

SA HCP

SALocalMDM
  • hcpEntities

fileNamePattern: '(KSA_HCPs)+(\\.(?i)(csv))$'

mdm-gateway\\batch-channel\\src\\main\\resources\\flows.yml





" }, { "title": "Get Entity", "pageID": "164470021", "pageLink": "/display/GMDM/Get+Entity", "content": "

Description

Operation getEntity of MDM Manager fetches current state of OV from MongoDB store.

The detailed process flow is shown below.

Flow diagram

Get Entity


\"\"


Steps

  1. Client sends HTTP request to MDM Manager endpoint.
  2. Kong Gateway receives requests and handles authentication.
  3. If the authentication succeeds, the request is forwarded to MDM Manager component.
  4. MDM Manager checks user permissions to call getEntity operation and the correctness of the request.
  5. If user's permissions are correct, MDM Manager proceeds with searching for the specified entity by id.
  6. MDM Manager checks user profile configuration for getEntity operation to determine whether to return results based on MongoDB state or call Reltio directly.
  7. For clients configured to use MongoDB – if the entity is found, then its status is checked. For entities with LOST_MERGE status parentEntityId attribute is used to fetch and return the parent Entity instead. This is in line with default Reltio behavior since MDM Manager is supposed to mirror Reltio.


Triggers

Trigger actionComponentActionDefault time
REST callManager: GET /entity/{entityId}get specific objects from MDM systemAPI synchronous requests - realtime

Dependent components

ComponentUsage
Managerget Entities in MDM systems









" }, { "title": "GRV & GCP events processing", "pageID": "164470032", "pageLink": "/pages/viewpage.action?pageId=164470032", "content": "


Contacts

VendorContact
MAP/DEG API supportMatej.Dolanc@COMPANY.com


This flow processes events from GRV and GCP systems distributed through Event Hub. Processing is split into three stages. Since each stage is implemented as separate Apache Camel route and separated from other stages by persistent message store (Kafka), it is possible to turn each stage on/off separately using Admin Console.

SQS subscription

First processing stage is receiving data published by Event Hub from Amazon SQS queues, which is done as shown on diagram below.


\"\"

Figure 5. First processing stage


Process steps description:

  1. Data changes in GRV and GCP are captured by Event Hub and distributed via queues to MAP Channel components using SQS queues with names:
    1. eh-out-reltio-gcp-update-<env_code>
    2. eh-out-reltio-gcp-batch-update-<env_code>
    3. eh-out-reltio-grv-update-<env_code>
  2. Events pulled from SQS queue are published to Kafka topic as a way of persisting them (allowing reprocessing) and to do event prioritizing and control throughput to Reltio. The following topics are used:
    1. <env_code>-gw-internal-gcp-events-raw
    2. <env_code>-gw-internal-grv-events-raw
  3. To ensure correct ordering of messages in Kafka, there is a custom message key generated. It is a concatenation of market code and unique Contact/User id.
  4. Once the message is published to Kafka, it is confirmed in SQS and deleted from the queue.

Enrichment with DEG data


\"\"

Figure 6. Second processing stageSecond processing stage is focused on getting data from DEG system. The control flow is presented below.


Process steps description:

  1. MAPChannel receives events from Kafka topic on which they were published in previous stage.
  2. MAPChannel filters events based on country activation criteria – events coming from not activated countries are skipped. A list of active countries is controlled by configuration parameter, separately for each source (GRV, GCP);
  3. Next, MapChannel calls DEG REST services (INT2.1 or INT 2.2 depending on whether it is a GRV or GCP event) to get detailed information about changed record. DEG always returns current state of GRV and GCP records.
  4. Data from DEG is published to Kafka topic (again, as a way of persisting them and separating processing stages). The topics used are:
    1. <env_code>-gw-internal-gcp-events-deg
    2. <env_code>-gw-internal-grv-events-deg
  5. Again, custom message key (which is a concatenation of market code and unique Contact/User id

Creating HCP entities

Last processing stage involves mapping data to Reltio format and calling MDM Gateway API to create HCP entities in Reltio. Process overview is shown below.


\"\"

Figure 7. Third processing stage



Process steps description:

  1. MAPChannel receives events from Kafka topic on which they were published in previous stage.
  2. MAPChannel filters events based on country activation criteria, events coming from not activated countries are skipped. A list of active countries is controlled by configuration parameter, separately for each source (GRV, GCP) – this is exactly the same parameter as in previous stage.
  3. MapChannel maps data from GCP/GRV to HCP:
    1. EMEA mapping
    2. GLOBAL mapping
  4. Validation status of mapped HCP is checked – if it matches a configurable list of inactive statuses, then deleteCrosswalk operation is called on MDM Manager. As a result entity data originating from GCP/GRV is deleted from Reltio.
  5. Otherwise, Map Channel calls REST operation POST /hcp on MDM Manager (INT4.1) to create or replace HCP profile in Reltio. MDM Manager handles complexity of the update process in Reltio.

Processing events from multiple sources and prioritization

As mentioned in previous sections, there are three different SQS queues that are populated with events by Event Hub. Each of them is processed by a separate Camel Route, allowing for some flexibility and prioritizing one queue above others. This can be accomplished by altering consumer configuration found in application.yml file. Relevant section of mentioned file is shown below.


\"\"


Queue eh-out-reltio-gcp-batch-update-dev has 15 consumers (and therefore 15 processing threads), while two remaining queues have only 5 consumers each. This allows faster processing of GCP Batch events.
The same principle applies to further stages of the processing, which use Kafka endpoints. Again, there is a configuration section dedicated to each of the internal Kafka topic that allows tuning the pace of processing.


\"\"


" }, { "title": "HUB UI User Guide", "pageID": "302701919", "pageLink": "/display/GMDM/HUB+UI+User+Guide", "content": "

This page contains the complete user guide related to the HUB UI.

Please check the sub-pages to get details about the HUB UI and usage.

Start with Main Page - HUB Status - main page


A handful of information that may be helpful when you are using HUB UI:


If you want to add any new features to the HUB UI please send your suggestions to the HUB Team: DL-ATP_MDMHUB_SUPPORT@COMPANY.com


" }, { "title": "HUB Admin", "pageID": "302701923", "pageLink": "/display/GMDM/HUB+Admin", "content": "

All the subpages contain the user guide - how to use the hub admin tools.

To gain access to the selected operation please read - UI Connect Guide

" }, { "title": "1. Kafka Offset", "pageID": "302703128", "pageLink": "/display/GMDM/1.+Kafka+Offset", "content": "

Description

This tab is available to a user with the MODIFY_KAFKA_OFFSET management role.

Allows you to reset the offset for the selected topic and group.

Kafka Consumer

Please turn off your Kafka Consumer before executing this operation, it is not possible to manage the ACTIVE consumer group

Required parameters

Details

The offset parameter can take one of three values:

View

\"\"



" }, { "title": "10. Jobs Manager", "pageID": "337846274", "pageLink": "/display/GMDM/10.+Jobs+Manager", "content": "

Description

This page is available to users that scheduled the JOB

Allows you to check the current status of an asynchronous operation 

Required parameters

Job Type  choose a JOB to check the status

Details

The page shows the statuses of jobs for each operation.

Click the Job Type and select the business operation.

In the table below all the jobs for all users in your AD group are displayed. You can track the jobs and download the reports here.

Click the\"\" Refresh view button to refresh the page

Click the \"\"icon to download the report.

View

\"\"

" }, { "title": "2. Partials", "pageID": "302703134", "pageLink": "/display/GMDM/2.+Partials", "content": "

Description

This tab is available to the user with the LIST_PARTIALS role to manage the precallback service.

It allows you to download a list of partials - these are events for which the need to change the Reltio has been detected and their sending to output topics has been suspended.

The operation allows you to specify the limit of returned records and to sort them by the time of their occurrence.

HUB ADMIN

Used only internally by MDM HUB ADMINS

Required parameters

N/A - by default, you will get all partial entities.

Details


View

\"\"

" }, { "title": "3. HUB Reconciliation", "pageID": "302703130", "pageLink": "/display/GMDM/3.+HUB+Reconciliation", "content": "

Description

This tab is available to the user with the reconciliation service management role - RECONCILE and RECONCILE_COMPLEX

The operation accepts a list of identifiers for which it is to be performed. It allows you to trigger a reconciliation task for a selected type of object:

Divided into 2 sections:

Simple JOBS:

Required parameters

N/A - by default generate CHANGE events and skip entity when it is in REMOE/INACTIVE/LOST_MERGE state. In that case, we only push CHANGE events. 

Details

ParameterDefault valueDescription
forcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.
push lost mergefalseReconcile event with LOST_MERGE status
push inactivatedfalseReconcile event with INACTIVE status
push removedfalseReconcile event with REMOVE status

View

\"\"


Complex JOBS:

Required parameters

Details

Simple

ParameterDefault valueDescription
forcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.
Countries N/Alist of countries.e.g: CA, MX
SourcesN/Acrosswalks names for which you want to generate the events.
Object TypeENTITYgenerates events from ENTITY or RELATION objects
Entity Typedepend on object Type

Can be for ENTITY: HCP/HCO/MCO/DCR

Can be for RELATION: input test in which you specify the relation e.g.: OtherHCOToHCO

Batch limitN/A

limit the number of events - useful for testing purposes

Complex

ParameterDefault valueDescription
forcefalseSend an event to output topics even when a partial update is detected
Entity QueryN/A

PUT the MATCH query to get Mongo results and generate events. e.g.:

{

"status": "ACTIVE",

"sources": "ONEKEY",

"country": "gb"

}

Entities limitN/Alimit the number of events - useful for testing purposes
Relation QueryN/A

PUT the MATCH query to get Mongo results and generate events. e.g.:

{

"status": "ACTIVE",

"sources": "ONEKEY",

"country": "gb"

}

Relation limitN/Alimit the number of events - useful for testing purposes

View

\"\"


" }, { "title": "4. Kafka Republish Events", "pageID": "302703132", "pageLink": "/display/GMDM/4.+Kafka+Republish+Events", "content": "

Description

This page is available to users with the publisher manager role -RESEND_KAFKA_EVENT and RESEND_KAFKA_EVENT_COMPLEX

Allows you to resend events to output topics. It can be used in two modes: simple and complex.

The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Simple mode

Required parameters

Details

In this mode, the user specifies values ​​for defined parameters:

ParameterDefault valueDescription
Select moderepublish CHANGE events

note:

  • when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.
  • when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.
  • The difference between these 2 modes is, in one we generate CHANGEs in the second CREATE events (depending if whether this is IDL generation or not)
CountriestrueList of countries for which the task will be performed
SourcesfalseList of sources for which the task will be performed
Object typetrueObject type for which operation will be performed, available values: Entity, Relation
Reconciliation targettrueOutput kafka topick name
limittrueLimit of generated events
modification time fromfalseEvents with a modification date greater than this will be generated
modification time tofalseEvents with a modification date less than this will be generated

View

\"\"

Complex mode

Required parameters

Entities query or  Relation query

Details

      In this mode, the user himself defines the Mongo query that will be used to generate events


ParameterRequiredDescription
Select moderepublish CHANGE events

note:

  • when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.
  • when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.
  • The difference between these 2 modes is, in one we generate CHANGEs in the second CREATE events (depending if whether this is IDL generation or not)
Entities querytrueResend entities Mongo query
Entities limitfalseResend entities limit
Relation querytrueResend relations Mongo query
Relations limittrueResend relations limit
Reconciliation targettrueOutput kafka topick name

View

\"\"






" }, { "title": "5. Reltio Reindex", "pageID": "337846264", "pageLink": "/display/GMDM/5.+Reltio+Reindex", "content": "

Description

This page is available to users with the reltio reindex role - REINDEX_ENTITIES

Allows you to schedule Reltio Reindex JOB. It can be used in two modes: query and file.

The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Required parameters

Specify Countries in query mode or file with entity uris in file mode. 

Details

query

ParameterDescription
CountriesList of countries for which the task will be performed
SourcesList of sources for which the task will be performed
Entity typeObject type for which operation will be performed, available values: HCP/HCO/MCO/DCR
Batch limitAdd if you want to limit the reindex to the specific number - helpful with testing purposes

file

Input file

File format: CSV 

Encoding: UTF-8

Column headers: - N/A

Input file example

1
2
3

entities/E0pV5Xm
entities/1CsgdXN4
entities/2O5RmRi

View

\"\"


Reltio Reindex details:

HUB executes Reltio Reindex API with the following default parameters:

\"\"

ParameterAPI Parameter nameDefault ValueReltio detailed descriptionUI details
Entity type
entityType
N/AIf provided, the task restricts the reindexing scope to Entities of specified type.User can specify  the EntityType is search API and the URIS list will be generated. There is no need to pass this to Reltio API becouse we are using the generated URI list
Skip entities count
skipEntitiesCount
0If provided, sets the number of Entities which are skipped during reindexing.-
Entities limit
entitiesLimit
infinityIf provided, sets the maximum number of Entities are reindexed-
Updated since
updatedSince
N/ATimestamp in Unix format. If this parameter is provided, then only entities with greater or equal timestamp are reindexed. This is a good way to limit the reindexing to newer records.-
Update entities
updateEntities
true 

If set to true, initiates update for Search, Match tables, History. If set to false, then no rematching, no history changes, only ES structures are updated.

If set to true (default), in addition to refreshing the ElasticSearch index, the task also updates history, match tables, and the analytics layer (RI). This ensures that all indexes and supporting structures are as up-to-date as possible. As explained above, however, triggering all these activities may decrease the overall performance level of the database system for business work, and overwhelm the event streaming channels. If set to false, the task updates ElasticSearch data only. It does not perform rematching, or update history or analytics. These other activities can be performed at different times to spread out the performance impact.

-
Check crosswalk consistency
checkCrosswalksConsistency
false

If true, this will start a task to check if all crosswalks are unique before reindexing data. Please note, if entitiesLimit or distributed parameters have any value other than default, this parameter will be unavailable

Specify true to reindex each Entity, whether it has changed or not. This operation ensures that each Entity in the database is processed. Reltio does not recommend this optionit decreases the performance of the reindex task dramatically, and may overload the server, which will interfere with all database operations.

-
URI list
entityUris
generated list of URIS from UI

One or more entity URIs (separated by a comma) that you would like to process. For example: entities/<id1>, entities/<id2>.


Reltio suggests to use 50-100K uris in one API request, this is Reltio limitation. 
Our process splits to 100K files if required. 


Based on the input files size one JOB from HUB end may produce multiple Reltio tasks.

UI generates list of URIS from mongo querry or we are running the reindex with the input files
Ignore streaming events
forceIgnoreInStreaming
false

If set to true, no streaming events will be generated until after the reindex job has completed.


-
Distributed
distributed
falseIf set to true, the task runs in distributed mode, which is a good way to take advantage of a networked or clustered computing environment to spread the performance demands of reindexing over several nodes. -
Job parts count
taskPartsCount

N/A due to distributed=false

Default value: 2

The number of tasks which are created for distributed reindexing. Each task reindexes its own subset of Entities. Each task may be executed on a different API node, so that all tasks can run in parallel. Recommended value: the number of API nodes which can execute the tasks. 


Note: This parameter is used only in distributed mode ( distributed=true); otherwise, its ignored.

-


More detials in Reltio docs:

https://docs.reltio.com/en/explore/get-going-with-apis-and-rocs-utilities/reltio-rest-apis/engage-apis/tasks-api/reindex-data-task

https://docs.reltio.com/en/explore/get-your-bearings-in-reltio/console/tenant-management-applications/tenant-management/jobs/creating-a-reindex-data-job



" }, { "title": "6. Merge/Unmerge entities", "pageID": "337846268", "pageLink": "/pages/viewpage.action?pageId=337846268", "content": "

Description

This page is available to users with the merge/unmerge role - MERGE_UNMERGE_ENTITIES

Allows you to schedule Merge/Unmerge JOB. It can be used in two modes: merge or unmerge.

The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Required parameters

file with profiles to be merged or unmerged in the selected format

Details

file

Input file

File format: CSV 

Encoding: UTF-8

more details here - Batch merge & unmerge


View

\"\"

" }, { "title": "7. Update Identifiers", "pageID": "337846270", "pageLink": "/display/GMDM/7.+Update+Identifiers", "content": "

Description

This page is available to users with the update identifiers role - UPDATE_IDENTIFIERS

Allows you to schedule update identifiers JOB.

The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Required parameters

file with profiles to be updated in the selected format

Details

file

Input file

File format: CSV 

Encoding: UTF-8

more details here - Batch update identifiers

View

\"\"

" }, { "title": "8. Clear Cache", "pageID": "337846272", "pageLink": "/display/GMDM/8.+Clear+Cache", "content": "

Description

This page is available to users with the ETL clear cache role - CLEAR_CACHE_BATCH

The cache is related to the Direct Channel ETL jobs:

Docs: ETL Batch Channel and ETL Batches

Allows you to clear the ETL checksum cache. It can be used in three modes: query or by_source or file.

The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.

Query mode

Required parameters

Batch name  - specify a batch name for which you want to clear the cache

Object type - ENTITY or RELATION

Entity type - e.g. configuration/relationTypes/Employment or configuration/entityTypes/HCP

Details

ParameterDescription
Batch nameSpecify a batch on which the clear cache will be triggered
Object type ENTITY or RELATION
Entity type

If object type is ENTITY then e.g:

configuration/entityTypes/HCO

configuration/entityTypes/HCP

If object type is RELATION then e.g.:

configuration/relationTypes/ContactAffiliations

configuration/relationTypes/Employment

CountryAdd a country if required to limit the clear cache query 

View

\"\"


by_source mode

Required parameters

Batch name  - specify a batch name for which you want to clear the cache

Source - crosswalk type and value

Details

Specify a batch name and click add a source to specify new crosswalks that you want to remove from the cache.

View

\"\"


file mode

Required parameters

Batch name  - specify a batch name for which you want to clear cache

file with crosswalks to be cleared in ETL cache in the selected format for specified batch

Details

file

Input file

File format: CSV 

Encoding: UTF-8

more details here - Batch clear ETL data load cacheView

View

\"\"


" }, { "title": "9. Restore Raw Data", "pageID": "356650113", "pageLink": "/display/GMDM/9.+Restore+Raw+Data", "content": "

Description

This page is available to users with the restore data role - RESTORE

The raw data contains data send to MDM HUB:

Docs: Restore raw data

Allows you to restore raw (source) data on selected environment

The operation will trigger asynchronous job with selected parameters.

Restore entities

Required parameters

Source environment - restore data from another environment eg from QA to DEV environment, the default is the currently logged in environment

Entity type  - restore data only for specified entity type: HCP, HCO, MCO

Optional parameters

Countries - restore data only for specified entity country, eq: GB, IE, BR

Sources - restore data only for specified entity source, eq: GRV, ONEKEY

Date Time - restore data created after specified date time


View

\"\"

Restore relations


Required parameters

Source environment - restore data from another environment eg from QA to DEV environment, the default is the currently logged in environment

Optional parameters

Countries - restore data only for specified entity country, eq: GB, IE, BR

Sources - restore data only for specified entity source, eq: GRV, ONEKEY

Relation types- restore data only for specified relation type, eg: configuration/relationTypes/OtherHCOtoHCOAffiliations

Date Time - restore data created after specified date time


View

\"\"

" }, { "title": "HUB Status - main page", "pageID": "333155175", "pageLink": "/display/GMDM/HUB+Status+-+main+page", "content": "

Description

The UI is divided into the following sections:

\"\"

  1. MENU
    1. Contains links to 
      1. Ingestion Services Configuration
      2. Ingestion Services Tester
      3. HUB Admin
  2. HEADER
    1. Shows the current tenant name, click to quickly change the tenant to a different one.
    2. Shows the logged-in user name. Click to log out. 
  3. FOOTER
    1. Link to User Guide
    2. Link to Connect Gide
    3. Link to the whole HUB documentation
    4. Link to the Get Help page
    5. Currently deployed version
      1. Click to get the details about the CHANGELOG
        1. on PROD - released version
        2. on NON-PROD- snapshot version - Changelog contains unreleased changes that will be deployed in the upcoming release to PROD.
  4. HUB Status dashboard is divided into the following sections:
    1. On this page you can check HUB processing status / kafka topics LAGs / API availability / Snowflake DataMart refresh. 
    2. API (related to the Direct Channel)
      1. \"\"
      2. API Availability  - status related to HUB API (all API exposed by HUB e.g. based on EMEA PROD - EMEA PROD Services )
      3. Reltio READ operations performance and latency - for example, GET Entity operations (every operation that gets data from Reltio)
      4. Reltio WRITE operations performance and latency - for example, POST/PATCH Entity operations (every operation that changes data in Reltio)
    3. Batches (related to the ETL Batch Channel)
      1. \"\"
      2. Currently running batches and duration of completed batches.
      3. Currently running batches may cause data load and impact event processing visible in the dashboard below (inbound and outbound)
    4. Event Processing 
      1. \"\"
      2. Shows information about events that we are processing to:
        1. Inbound - all updates made by HUB on profiles in Reltio
          1. shows the ETA based on the:
            1. ETL Batch Channel (loading and processing events into HUB from ETL)
            2. Direct Channel processing:
              1. loading ETL data to Reltio
              2. loading Rankings/Callbacks/HcoNames (all updates on profiles on Reltio)
                    
        2. Outbound - streaming channel processing (related to the Streaming channel)
          1. shows the ETA based on the:
            1. Streaming channel - all events processing starting from Reltio SQS queue, events currently processing by HUB Streaming channel microservices.
    5. DataMart (related to the Snowflake MDM Data Mart)
      1. \"\"
      2. The time when the last REGIONAL and GLOBAL Snowflake data marts
      3. Shows the number of events that are still processing by HUB microservices and are not yet consumed by Snowflake Connector. 



" }, { "title": "Ingestion Services Configuration", "pageID": "302701936", "pageLink": "/display/GMDM/Ingestion+Services+Configuration", "content": "

Description

This page shows configuration related to the

Choose a filter to switch between different entity types and use input boxes to filter results.

\"\"


Available filters:

FilterDescription
Entity TypeHCP/HCO/MCO - choose an entity type that you want to review and click Search
CategoryPick to limit the result and review only selected rules
CountryType a country code to limit the number of rules related to the specific country
Source Type a source to limit the number of rules related to the specific source
QueryOpen Text filed -helps to limit the number of results when searching for specific attributes. Example case - put the "firstname" and click Search to get all rules that modify/use FirstName attribute.

Audit filed

Comparison type

Date

Use a combination of these 3 attributes to find rules created before or after a specific date. Or to get rules modified after a specific date. 


Click on the:

\"\"


                                                                                 

" }, { "title": "Ingestion Services Tester", "pageID": "302701950", "pageLink": "/display/GMDM/Ingestion+Services+Tester", "content": "

Description

This site allows you to test quality service. The user can select the input entity using the 'upload' button, paste the content of the entity into the editor or drag it. After clicking the 'test' button, the entity will be sent to the quality service. After processing, the result will appear in the right window. The user can choose two modes of presenting the result - the whole entity or the difference. In the second mode, only changes made by quality service will be displayed. After clicking the 'validation result' button, a dialog box will be displayed with information on which rules were applied during the operation of the service for the selected entity.


Quality service tester editor

\"\"


Validation summary                                      

Here you can check which rules were "triggered" and check the rule in the Ingestion Services Configuration using the Rule name.

Search by text using attribute or "triggered" keyword to get all triggered rules. 

\"\"

                                           

" }, { "title": "Incremantal batch", "pageID": "164470033", "pageLink": "/display/GMDM/Incremantal+batch", "content": "

On the diagram below presented the generic structure of the batch flow. Data sources will have own instances of the flow configured:

\"\"

The flow consists of the following stages: 


Generic Mapper

Generic Mapper is a component that converts source data into documents in the unified format required by Reltio API. The component is flexible enough to support incremental batches as well as full snapshots of data. Handling a new type of data source is a matter of (in most cases) creating a new configuration that consists of stage and metadata parts. 

The first one defines details of so called "stages", i.e.: HCO, HCP, etc. The latter contains all mapping rules defining how to transform source data into attribute path/value form. Once data are transformed into the mentioned form it is easy to store it, merge it or do any other operation (including Reltio document creation) in the same way for all types of sources. This simple idea makes Generic Mapper a very powerful tool that can be extended in many ways. 

\"Mapping

 A stage is a logical group of steps that as a whole process single type of Reltio document, i.e.: HCO entity.    

\"Stage

At the beginning of each stage the component reads source data and generates attribute changes (events) and then stores this in an output file. It is worth to notice that there can be many source data configured. Once the output file is produced it is sorted. The above logic can be called phase 1 of a stage. Until now no database has been used. 

In the phase 2 the sorted file is read, events are aggregated into groups in such a way that each element of a group refers to the same Reltio document. Next all lookups are resolved against a database, merged with previous version of a document attributes and persisted. Then, Reltio document (Json) is created and sent to Kafka. The stage is finished when all acks from the gateway are collected. 

Under the hood each stage is a sequence of jobs: a job (i.e.: the one for sorting a file) can be started only in case its direct predecessor is finished with a success. Stages can be configured to run in parallel and depends on each other. 


Load reports 

At runtime Generic Mapper collects various types of data that give insight into DAG state and load statistics. The HTML report is written to disk each time a status of any job is changed. The report consists of three panels: Summary, Metrics and DAG. 

The summary panel contains details of all jobs within a DAG that was created for the current execution (load). The DAG panel shows relationships between jobs in the form of a graph. 

\"\"

The metrics panel presents details of a load. Each metric key is prefixed by a stage name.  

\"\"

" }, { "title": "Kafka offset modification", "pageID": "273695178", "pageLink": "/display/GMDM/Kafka+offset+modification", "content": "

Description

The REST interfaces exposed through the MDM Manager component used by clients to modify kafka offset.

During the update, we will check access to groupId and specyfic topic.

Diagram 1 presents flow, and kafka communication during offset modification.


The diagrams below present a sequence of steps in processing client calls.

Flow diagram


\"\"


Steps

Triggers

Trigger action

Component

Action

Default time

REST callManager: POST /kafka/offsetmodify kafka offsetAPI synchronous requests - realtime
RequestResponse

{
    "groupId""mdm_test_user_group",
    "topic""amer-dev-in-guest-tests",
    "offset""latest"
}

{
    "values": [
        {
            "topic""amer-dev-in-guest-tests",
            "partition"0,
            "offset"2
        }
    ]
}

{
    "groupId""mdm_test_user_group",
    "topic""amer-dev-in-guest-tests",
    "offset""earliest"
}
{
    "values": [
        {
            "topic""amer-dev-in-guest-tests",
            "partition"0,
            "offset"0
        }
    ]
}
{
    "groupId""mdm_test_user_group",
    "topic""amer-dev-in-guest-tests",
    "offset""2022-12-15T08:15:02Z"
}
{
    "values": [
        {
            "topic""amer-dev-in-guest-tests",
            "partition"0,
            "offset"1
        }
    ]
}

{
    "groupId""mdm_test_user_group",
    "topic""amer-dev-in-guest-tests",
    "offset""latest"
    "partition"4
}

{
    "values": [
        {
            "topic""amer-dev-in-guest-tests",
            "partition"4,
            "offset"2
        }
    ]
}

{
    "groupId""mdm_test_user_group",
    "topic""amer-dev-in-guest-tests",
    "offset""2022-12-15T08:15:02Z",
    "shift"5
}

{
    "values": [
        {
            "topic""amer-dev-in-guest-tests",
            "partition"0,
            "offset"6
        }
    ]
}

Dependent components

Component

Usage

Managercreate update Entities in MDM systems
API Gatewayproxy REST and secure access
" }, { "title": "LOV read", "pageID": "164469998", "pageLink": "/display/GMDM/LOV+read", "content": "


The flow is triggered by API GET /lookup  call.  It retrives LOV data from HUB store.


\"\"


Process steps description:

  1. Client sends HTTP request to MDM Manager endpoint.
  2. Kong Gateway receives request and handles authentication
  3. If the authentication succeeds, the request is forwarded to MDM Manager component
  4. MDM Manager checks user permissions to call getEntity operation and the correctness of the request
  5. MDM Manager checks user profile configuration for lookup operation to determine whether to return results based on MongoDB state, or call Reltio directly.
  6. Request parameters are used to dynamically generate a query. This query is executed in findByCriteria method.
  7. Query results are returned to the client



" }, { "title": "LOV update process (Nucleus)", "pageID": "164469999", "pageLink": "/pages/viewpage.action?pageId=164469999", "content": "\n

Process steps description:

\n
    \n\t
  1. Nucleus Subscriber monitors AWS S3 location where CCV files are uploaded.
  2. \n\t
  3. When a new file is found, it is downloaded and processed. Single CCV zip file contains multiple *.exp files, which contain different parts of LOV – header, description, references to values from external systems.
  4. \n\t
  5. Each *.exp file is processed line by line, with Dictionary change events generated for each line. These events are published to a Kafka topic from where the Event Publisher component receives them.
  6. \n\t
  7. After CCV file is processed completely, it is moved to archive subtree in S3 bucket folder structure.
  8. \n\t
  9. When Dictionary change event is received in Event Publisher the current state of LOV is first fetched from Mongo database. New data from the event is then merged with that state and the result is saved back in Mongo.
  10. \n
\n\n\n

Additional remarks:

\n\n" }, { "title": "LOV update processes (Reltio)", "pageID": "164469992", "pageLink": "/pages/viewpage.action?pageId=164469992", "content": "\n

\"\" Figure 18. Updating LOVs from ReltioLOV update processes are triggered by timer on regular, configurable intervals. Their purpose is to synchronize dictionary values from Reltio. Below is the diagram outlining the whole process.\n
\nProcess steps description:

\n
    \n\t
  1. Synchronization processes are triggered at regular intervals.
  2. \n\t
  3. Reltio Subscriber calls MDM Gateway lookups API to retrieve first batch of LOV data
  4. \n\t
  5. Fetched data is inserted into the Mongo database. Existing records are updated
  6. \n
\n\n\n

Second and third steps are repeated in a loop until there is no more LOV data remaining.

" }, { "title": "MDM Admin Flows", "pageID": "302683297", "pageLink": "/display/GMDM/MDM+Admin+Flows", "content": "" }, { "title": "Kafka Offset", "pageID": "302684674", "pageLink": "/display/GMDM/Kafka+Offset", "content": "

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Kafka/kafkaOffsetModification

API allows offset manipulation for consumergroup-topic pair. Offsets can be set to earliest/latest/timestamp, or adjusted (shifted) by a numeric value.

An important point to mention is that in many cases offset does not equal to messages - shifting offset on a topic back by 100 may result in receiving 90 extra messages. This is due to compactation and retention - Kafka may mark offset as removed, but it still remains for the sake of continuity.

Example 1

Environment is EMEA DEV. User wants to consume the last 100 messages from his topic again. He is using topic "emea-dev-out-full-test-topic-1" and consumer-group "emea-dev-consumergroup-1".

User has disabled the consumer - Kafka will not allow offset manipulation, if the topic/consumergroup is being used.

He sent below request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\nBody:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "shiftBy": -100\n}
\n

Upon re-enabling the consumer, 100 of the last events were re-consumed.

Example 2

User wants to consume all available messages from the topic again.

User has disabled the consumer and sent below request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\nBody:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "offset": earliest\n}
\n

Upon re-enabling the consumer, all events from the topic were available for consumption again.

" }, { "title": "Partial List", "pageID": "302683607", "pageLink": "/display/GMDM/Partial+List", "content": "

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Precallback%20Service/reconcilePartials_1

API calls Precallback Service's internal API and returns a list of events stuck in partial state (more information here). List can be limited and sorted. Partial age can be displayed in one of below formats:

Example

User has noticed an alert being triggered for GBLUS DEV, informing about events in partial state. To investigate the situation, he sends the following request:

\n
GET https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/precallback/partials?absolute=true
\n

Response:

\n
{\n    "entities/1sgqoyCR": "2023-02-09T11:42:06.523Z",\n    "entities/1eUqpXVe": "2023-02-01T12:39:57.345Z",\n    "entities/2ZlDTE2U": "2023-02-09T11:40:30.950Z",\n    "entities/2J1YiLW9": "2023-02-09T11:41:45.092Z",\n    "entities/1KgPnkhY": "2023-02-01T12:39:58.594Z",\n    "entities/1YpLnUIR": "2023-02-01T12:40:06.661Z"\n}
\n

He realized, that it is difficult to quickly tell the age of each partial based on timestamp. He removed the absolute flag from request:

\n
GET https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/precallback/partials
\n

Response:

\n
{\n    "entities/1sgqoyCR": "27:26:56.228",\n    "entities/1eUqpXVe": "218:29:05.406",\n    "entities/2ZlDTE2U": "27:28:31.801",\n    "entities/2J1YiLW9": "27:27:17.659",\n    "entities/1KgPnkhY": "218:29:04.157",\n    "entities/1YpLnUIR": "218:28:56.090"\n}
\n

Three partials have been stuck for more than 200 hours. Other three partials - for over 27 hours.

" }, { "title": "Reconciliation", "pageID": "302683312", "pageLink": "/display/GMDM/Reconciliation", "content": "

Entities

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileEntities

API accepts a JSON list of entity URIs. URIs not beginning with "entities/" are filtered out. For each URI it:

  1. Checks entityType (HCP/HCO/MCO) in Mongo
  2. Checks status (ACTIVE/LOST_MERGE/INACTIVE/REMOVED) in Mongo
  3. If entity is ACTIVE, it generates a *_CHANGED event and sends it to the ${env}-internal-reltio-events to be enriched by the Entity Enricher
  4. If entity has status other than ACTIVE:
    1. If entity has status LOST_MERGE and pushLostMerge parameter is true, generate a *_LOST_MERGE event.
    2. If entity has status INACTIVE and pushInactived parameter is true, generate a *_INACTIVATED event.
    3. If entity has status DELETED and pushRemoved parameter is true, generate a *_REMOVED event.
  5. *Additional parameter, force, may be used. When set to true, event will proceed to the EventPublisher even if rejected by Precallbacks.

Example

User wants to reconcile 4 entities, which have different data in Snowflake/Mongo than in Reltio:

Below request is sent (GBL DEV):

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/entities\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]
\n

Response:

\n
{\n    "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n    "entities/1065AHEA": "false - Record with DELETED status in cache",\n    "entities/10VLBsCl": "false - Record with LOST_MERGE status in cache",\n    "entities/108dNvgB": "true",\n    "relations/101LIzcm": "false"\n}
\n

Only one event was generated: HCP_CHANGED for entities/108dNvgB.

User decided that he also need an HCP_LOST_MERGE event for entities/10VLBsCl. He sent the same request with pushLostMerge flag:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/entities?pushLostMerge=true\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]
\n

Response:

\n
{\n    "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n    "entities/1065AHEA": "false - Record with DELETED status in cache",\n    "entities/10VLBsCl": "true",\n    "entities/108dNvgB": "true",\n    "relations/101LIzcm": "false"\n}
\n

This time, two events have been generated:

Relations

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileRelations

API works the same way as for Entities, but this time URIs not beginning with "relations/" are filtered out.

Example

User sent the same request as in previous example (GBL DEV):

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/relations\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]
\n

Response:

\n
{\n    "entities/10bH3nze": "false",\n    "entities/1065AHEA": "false",\n    "entities/10VLBsCl": "false",\n    "entities/108dNvgB": "false",\n    "relations/101LIzcm": "false - Record with DELETED status in cache"\n}
\n

First 4 URIs have been filtered out due to unexpected prefix. Event for relations/101LIzcm has not been generated, because this relation has DELETED status in cache.

Same request has been sent with pushRemoved flag:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/relations?pushRemoved=true\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]
\n

Response:

\n
{\n    "entities/10bH3nze": "false",\n    "entities/1065AHEA": "false",\n    "entities/10VLBsCl": "false",\n    "entities/108dNvgB": "false",\n    "relations/101LIzcm": "true"\n}
\n

A single event has been generated: RELATIONSHIP_REMOVED for relations/101LIzcm.

Partials

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcilePartials

Partials Reconciliation API works the same way that Entities Reconciliation does, but it automatically fetches the current list of entities stuck in partial state using Partial List API.

Partials Reconciliation API also handles push and force flags. Additionally, partials can be filtered by age, using partialAge parameter with one of following values: NONE (default), MINUTE, HOUR, DAY.

Example

User wants to reload entities stuck in partial state in GBL DEV. Prometheus alert informs him that there are plenty, but he remembers that there is currently an ongoing data load, which may cause many temporary partials.

User decides that he should use the partialAge parameter with value DAY, to only reload the entities which have been stuck for a longer while, and not generate unnecessary additional traffic.

He sends the following request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/partials?partialAge=DAY\nBody: -
\n

Flow fetches a full list of partials from Precallback Service API and filters out the ones stuck for less than a day. It then executes the Entities Reconciliation with this list. Response:

\n
{\n    "entities/1yHHKEZ7": "true",\n    "entities/2EHamZr3": "true",\n    "entities/2EyP0kYM": "true",\n    "entities/21QU96KG": "true",\n    "entities/2BmHQMCn": "true"\n}
\n

5 HCP/HCO_CHANGED events have been generated as a result.

" }, { "title": "Resend Events", "pageID": "302684685", "pageLink": "/display/GMDM/Resend+Events", "content": "

API triggers an Airflow DAG. The DAG:

  1. Runs a query on MongoDB and generates a list of entity/relation URIs.
  2. Using Event Publisher's /resendLastEvent API, it produces outbound events for received reconciliationTarget (user-sent).

Resend - Simple

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/resendEvent

When using Simple API, user does not actually write the Mongo query - they instead fill in the blanks.

Required parameters are:

Optionally, objects can be filtered by:

Example

Environment is EMEA DEV. User wants to generate 300 entity events (HCP_CHANGED or HCO_CHANGED) for Poland, source CRMMI. His outbound topic is emea-dev-out-full-user-all.

He sends the request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend\nBody:\n{\n  "countries": [\n    "pl"\n  ],\n  "sources": [\n    "CRMMI"\n  ],\n  "objectType": "ENTITY",\n  "limit": 300,\n  "reconciliationTarget": "emea-dev-out-full-user-all"\n}
\n

Response:

\n
{\n  "dag_id": "reconciliation_system_emea_dev",\n  "dag_run_id": "manual__2023-02-13T14:26:22.283902+00:00",\n  "execution_date": "2023-02-13T14:26:22.283902+00:00",\n  "state": "queued"\n}
\n

A new Airflow DAG run was started. dag_run_id field contains this run's unique ID. Below request can be sent to fetch current status of this DAG run:

\n
GET https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/status/manual__2023-02-13T14:26:22.283902+00:00
\n

Response:

\n
{\n  "dag_id": "reconciliation_system_emea_dev",\n  "dag_run_id": "manual__2023-02-13T14:26:22.283902+00:00",\n  "execution_date": "2023-02-13T14:26:22.283902+00:00",\n  "state": "running"\n}
\n

After the DAG has finished, 300 HCP_CHANGED/HCO_CHANGED events will have been generated to the emea-dev-out-full-user-all topic.

Resend - Complex

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/resendEventComplex

For Complex API, user writes their own Mongo query.

Required parameters are:

Optionally, resulting objects can be limited (separate fields for each query).

Example

As in previous example, user wants to generate 300 events for Poland, source CRMMI. Output topic is emea-dev-out-full-user-all.

This time, he sends the following request:

\n
POST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/complex\nBody:\n{\n  "entitiesQuery": "{ 'country': 'pl', 'sources': 'CRMMI' }",\n  "relationsQuery": null,\n  "reconciliationTarget": "emea-dev-out-full-user-all",\n  "limitEntities": 300,\n  "limitRelations": null\n}
\n

Response:

\n
{\n  "dag_id": "reconciliation_system_emea_dev",\n  "dag_run_id": "manual__2023-02-13T14:57:11.543256+00:00",\n  "execution_date": "2023-02-13T14:57:11.543256+00:00",\n  "state": "queued"\n}
\n

Resend - Status

Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/getStatus

As described in previous examples, this API returns current status of DAG run. Request url parameter must be equal to dag_run_id. Possible statuses are:



" }, { "title": "Internals", "pageID": "164470109", "pageLink": "/display/GMDM/Internals", "content": "


" }, { "title": "Archive", "pageID": "333152415", "pageLink": "/display/GMDM/Archive", "content": "" }, { "title": "APM performance tests", "pageID": "333152417", "pageLink": "/display/GMDM/APM+performance+tests", "content": "

Performance tests were executed using Jmeter tool placed on CI/CD server.

Test scenario:

Tests werer performed by 4 parallel users  in a loop for 60 min.

\"\"

Test results:

\"\"



" }, { "title": "Client integration specifics", "pageID": "492493127", "pageLink": "/display/GMDM/Client+integration+specifics", "content": "" }, { "title": "Saudi Arabia integration with IQVIA", "pageID": "492493129", "pageLink": "/display/GMDM/Saudi+Arabia+integration+with+IQVIA", "content": "

Below design was confirmed with Alain and Eleni during 14.01.2025 meeting. Concept of such solution was earlier approved by AJ.

\"\"

Source: Lucid

" }, { "title": "Components providers - AWS S3, networking, etc...", "pageID": "273702388", "pageLink": "/pages/viewpage.action?pageId=273702388", "content": "
TenantProviderReltioAWS accounts IDsIAM usersIAM rolesS3 bucketsNetwork (subnets, VPCe)Application ID
EMEA NPROD

PDCS - Kubernetes in IoD

COMPANY
  1. Airflow (S3) - 211782433747
  2. Snowflake (S3) - 211782433747
  3. Reltio (S3) -  211782433747
  4. AWS (PDCS) - 330470878083
  1. Airflow (S3)- arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3
  2. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3
  3. Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3

Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-emea-eks-worker-NodeInstanceRole-1OG6IFX6DO8B9

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - pfe-atp-eu-w1-nprod-mdmhub 
  2. Snowflake - pfe-atp-eu-w1-nprod-mdmhub
  3. Reltio - pfe-atp-eu-w1-nprod-mdmhub

VPC

  • vpc-0c55bf38e97950aa5

Subnets

SC3028977
EMEA PROD
  1. Airflow (S3) - 211782433747
  2. Snowflake (S3) - 211782433747
  3. Reltio (S3) -  211782433747
  4. AWS (PDCS) - 330470878083
  5. S3 backup bucket - 604526422050

  1. Airflow (S3) - arn:aws:iam::211782433747:user/SRVC-MDMCDI-PROD
  2. Snowflake (S3) - arn:aws:iam::211782433747:user/SRVC-MDMCDI-PROD
  3. Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3
Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-emea-eks-worker-n-NodeInstanceRole-11OT3ADBULAGC

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - pfe-atp-eu-w1-prod-mdmhub
  2. Snowflake - pfe-atp-eu-w1-prod-mdmhub
  3. Reltio - pfe-atp-eu-w1-prod-mdmhub
  4. Backups - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811

VPC

  • vpc-0c55bf38e97950aa5

Subnets

SC3211836
AMER NPRODPDCS - Kubernetes in IoDCOMPANY
  1. Airflow (S3) - 555316523483
  2. Snowflake (S3)-  555316523483
  3. Reltio (S3) -  555316523483
  4. AWS (PDCS) - 330470878083
  1. Airflow (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFT
  2. Snowflake (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFT
  3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-amer-eks-worker-NodeInstanceRole-1X8MZ6QZQD5V7

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO


  1. Airflow - gblmdmhubnprodamrasp100762
  2. Snowflake - gblmdmhubnprodamrasp100762
  3. Reltio - gblmdmhubnprodamrasp100762

VPC

  • vpc-0aedf14e7c9f0c024

Subnets

  • subnet-0dec853f7c9e507dd (10.9.0.0/18)
  • subnet-07743203751be58b9 (10.9.64.0/18)
SC3028977
AMER PROD
  1. Airflow (S3) - 604526422050
  2. Snowflake (S3)- 604526422050
  3. Reltio (S3) -  555316523483
  4. AWS (PDCS) - 330470878083
  5. Backup bucket (S3) - 604526422050

  1. Airflow (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFT
  2. Snowflake (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFT
  3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-amer-eks-worker-n-NodeInstanceRole-1KA6LWUDBA3OI

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - gblmdmhubprodamrasp101478
  2. Snowflake - gblmdmhubprodamrasp101478
  3. Reltio - gblmdmhubprodamrasp101478
  4. Backups - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808

VPC

  • vpc-0aedf14e7c9f0c024

Subnets

  • subnet-0dec853f7c9e507dd (10.9.0.0/18)
  • subnet-07743203751be58b9 (10.9.64.0/18)
SC3211836
APAC NPRODPDCS - Kubernetes in IoDCOMPANY
  1. Airflow (S3) - 555316523483
  2. Snowflake (S3) - 555316523483
  3. Reltio (S3) -  555316523483
  4. AWS (PDCS) - 330470878083

1.Airflow - (S3) - arn:aws:iam::555316523483:user/svc_atp_aps1_mdmetl_nprod_rw_s3

2. Snowflake (S3) - arn:aws:iam::555316523483:user/svc_atp_aps1_mdmetl_nprod_rw_s3

3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-apac-eks-worker-NodeInstanceRole-1053BVM6D7I2L

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - globalmdmnprodaspasp202202171347
  2. Snowflake - globalmdmnprodaspasp202202171347
  3. Reltio - globalmdmnprodaspasp202202171347

VPC

  • vpc-0d4b6d3f77ac3a877

Subnets

SC3028977
APAC PROD
  1. Airflow (S3) -
  2. Snowflake (S3) - 
  3. Reltio -  555316523483
  4. AWS (PDCS) - 330470878083
  5. S3 backup bucket 604526422050

1.Airflow - (S3) -  arn:aws:iam::604526422050:user/svc_atp_aps1_mdmetl_prod_rw_s3

2. Snowflake (S3) - arn:aws:iam::604526422050:user/svc_atp_aps1_mdmetl_prod_rw_s3

3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD


Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-apac-eks-worker-n-NodeInstanceRole-1NMGPUSYG7H8Q

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - globalmdmprodaspasp202202171415
  2. Snowflake - globalmdmprodaspasp202202171415
  3. Reltio - globalmdmprodaspasp202202171415
  4. Backups - pfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502

VPC

  • vpc-0d4b6d3f77ac3a877

Subnets

SC3211836
GBLUS NPRODPDCS - Kubernetes in IoDCOMPANY
  1. Airflow (S3) - 555316523483
  2. Snowflake (S3) - 555316523483
  3. Reltio (S3) -  555316523483
  4. AWS (PDCS) - 330470878083
  1. Airflow (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFT
  2. Snowflake (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFT
  3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - gblmdmhubnprodamrasp100762
  2. Snowflake - gblmdmhubnprodamrasp100762
  3. Reltio - gblmdmhubnprodamrasp100762
Same as AMER NPRODSC3028977
GBLUS PROD
  1. Airflow (S3) - 604526422050
  2. Snowflake - 604526422050
  3. Reltio (S3) -  
  4. AWS (PDCS) - 330470878083
  5. S3 backup bucket - 604526422050

  1. Airflow (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFT
  2. Snowflake (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFT
  3. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPROD

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - gblmdmhubprodamrasp101478
  2. Snowflake - gblmdmhubprodamrasp101478
  3. Reltio - gblmdmhubprodamrasp101478
  4. Backups - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808
Same as AMER  PRODSC3211836
GBL NPROD

PDCS - Kubernetes in IoD

IQVIA
  1. Airflow (S3) -
  2. Snowflake (S3) - 211782433747
  3. Reltio (S3) -  
  4. AWS (PDCS) - 330470878083

1.Airflow (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3

2. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3

3. Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3


Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - pfe-atp-eu-w1-nprod-mdmhub
  2. Snowflake - pfe-atp-eu-w1-nprod-mdmhub
  3. Reltio - pfe-atp-eu-w1-nprod-mdmhub
Same as EMEA NPRODSC3028977
GBL PROD
  1. Airflow (S3) -
  2. Snowflake (S3) - 211782433747
  3. Reltio (S3) -  
  4. AWS (PDCS) - 330470878083
  5. S3 backup bucket - 604526422050

1.Airflow (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s3

2. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s3

3. Reltio (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s3 ???

Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSO

  1. Airflow - pfe-baiaes-eu-w1-project
  2. Snowflake - pfe-baiaes-eu-w1-project
  3. Reltio - pfe-baiaes-eu-w1-project
  4. Backups - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811
Same as EMEA PRODSC3211836
FLEX NPRODCloudBroker - EC2IQVIA
  1. Airflow (S3) -
  2. Reltio (S3) - 


  1. Airflow - mdmnprodamrasp22124
  2. Reltio - mdmnprodamrasp22124


FLEX PROD
  1. Airflow (S3) - 
  2. Reltio (S3) - 


  1. Airflow - mdmprodamrasp42095
  2. Reltio - mdmprodamrasp42095


Proxy

Rapid - EC2N/A
  1. AWS EC2 - 432817204314





Monitoring

CloudBroker - EC2N/A
  1. AWS EC2 - 604526422050
  2. AWS S3 - 604526422050
  1. Thanos (S3) - arn:aws:iam::604526422050:user/SRVC-gblmdmhub
Node Instance Role: arn:aws:iam::604526422050:role/PFE-ATP-MDMHUB-MONITORING-BACKUP-ROLE-01
  1. Grafana Backup - pfe-atp-us-e1-prod-mdmhub-grafanaamrasp20240315101601
  2. Thanos - pfe-atp-us-e1-prod-mdmhub-monitoringamrasp20240208135314


Jenkins build

FLEX Airflow

CloudBroker - EC2N/A



VPC:

  • Jenkins vpc-12aa056a

" }, { "title": "Configuration", "pageID": "164470110", "pageLink": "/display/GMDM/Configuration", "content": "\n

All runtime configuration is stored in GitHub repository and changes are monitored using GIT history. Sensitive data is encrypted by Ansible Vault using AES256 algorithm and decrypted only during automatic deployment managed by Continuous Delivery process in Jenkins.

" }, { "title": "●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1587199]", "pageID": "164470111", "pageLink": "/pages/viewpage.action?pageId=164470111", "content": "\n

Configuration for all environments is placed in mdm-reltio-handler-env/inventory branch.
\nAvailable environments:

\n\n\n\n

In order to separate variables for each service, we created the following groups:

\n\n" }, { "title": "Kafka", "pageID": "164470104", "pageLink": "/display/GMDM/Kafka", "content": "\n

Kafka deployment procedures

\n\n\n\n

Kafka variables

\n

Production Kafka cluster requires the following variables:

\n\n" }, { "title": "Kong", "pageID": "164470105", "pageLink": "/display/GMDM/Kong", "content": "\n

Kong deployment procedures

\n\n\n\n

Kong variables

\n

Cassandra memory parameters are controlled by:

\n\n\n\n

Kong required variables:

\n\n\n\n

To manage kong api through deployment procedure these maps are needed:

\n\n" }, { "title": "Mongo", "pageID": "164470004", "pageLink": "/display/GMDM/Mongo", "content": "\n

Mongo deployment procedures

\n\n\n\n

Mongo variables

\n

Production mongo cluster requires the following variables declared in /inventory/prod/group_vars/ all/all.yml file:

\n\n\n\n

Development mongo instance requires the following variables declared in /inventory/dev/group_vars/all/all.yml file:

\n\n" }, { "title": "Services - hub_gateway", "pageID": "164470005", "pageLink": "/display/GMDM/Services+-+hub_gateway", "content": "\n

Services deployment procedures

\n

Hub deployment procedure:

\n\n\n\n


\nGateway deployment procedure:

\n\n\n\n

Services variables

\n

[gw-services] - this group contains variables for map channel and mdm manager in the following two maps:

\n

\n\n\n

[hub-services] - this group contains variables for hub api, reltio subscriber and event publisher in the following maps:

\n

\n\n\n

It is possible to redefine JVM_OPTS or any other environment using these maps:

\n\n" }, { "title": "Data storage", "pageID": "164470006", "pageLink": "/display/GMDM/Data+storage", "content": "\n

Publishing Hub among other functions serves as data store, caching the latest state of each Entity fetched from Reltio MDM. This allows clients to take advantage of increased performance and high availability provided by MongoDB NoSQL database.

" }, { "title": "Data structures", "pageID": "164470007", "pageLink": "/display/GMDM/Data+structures", "content": "\n

\"\" Figure 21. Structure of Publishing HUB's databasesThe following diagram shows the structure of DB collections used by Publishing Hub.\n
\nDetailed description:

\n\n\n\n

INSERT vs UPSERT

\n

To speed up database operations Publishing Hub takes advantage of MongoDB "upsert" flag of db.collection.update() method. This allows the application to skip the potentially costly query checking if the entity already exists in database. Instead the update operation is call right away, ceding the responsibility of checking for entity existence on Mongo internal mechanisms.

" }, { "title": "Indexes", "pageID": "164470001", "pageLink": "/display/GMDM/Indexes", "content": "\n

All of the fields in database collections are indexed, except complex documents (i.e. "entity" in entityHistory, "value" in LookupValues). Queries that do not use indexes (for example querying arbitrarily nested attributes of "entity") might suffer from bad performance.

" }, { "title": "DoR, AC, DoD", "pageID": "294674667", "pageLink": "/display/GMDM/DoR%2C+AC%2C+DoD", "content": "" }, { "title": "DoD - template", "pageID": "294674670", "pageLink": "/display/GMDM/DoD+-+template", "content": "

Requirements of task needed to be met before closing:

" }, { "title": "DoR - template", "pageID": "294674659", "pageLink": "/display/GMDM/DoR+-+template", "content": "

Requirements of task needed to be met before pushing to the Sprint:

" }, { "title": "Exponential Back Off", "pageID": "164469928", "pageLink": "/display/GMDM/Exponential+Back+Off", "content": "

BackOff mechanizm that increases the back off period for each retry attempt. When the interval has reached the max interval, it is no longer increased. Stops retrying once the max elapsed time has been reached.
Example: The default interval is 2000L ms, the default multiplier is 1.5, and the default max interval is 30000L. For 10 attempts the sequence will be as follows:

requestback off ms
12000
23000
34500
46750
510125
615187
722780
830000
930000
1030000


Note that the default max elapsed time is Long.MAX_VALUE. Use setMaxElapsedTime(long) to limit the maximum length of time that an instance should accumulate before returning BackOffExecution.STOP.

Implementation based on spring-retry library.


" }, { "title": "HUB UI", "pageID": "294675912", "pageLink": "/display/GMDM/HUB+UI", "content": "


DRAFT:

\"\"


TODO: 

Grafana dashboards through iframe - https://www.itpanther.com/embedding-grafana-in-iframe/

" }, { "title": "Integration Tests", "pageID": "302681782", "pageLink": "/display/GMDM/Integration+Tests", "content": "

Integration tests are devided into different categories. These categories are used for different environments.

Jenkins IT configuration: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/jenkins/k8s_int_test.groovy

" }, { "title": "Common Integration Test", "pageID": "302681798", "pageLink": "/display/GMDM/Common+Integration+Test", "content": "
Test classTest caseFlow
CommonGetEntityTests
testGetEntityByUri
  1. Create HCP
  2. Get HCP by URI and validate

testSearchEntity
  1. Create HCP
  2. Get entities using filter (get by country code, first name and last name)
  3. Validate if entity exists

testGetEntityByCrosswalk
  1. Create HCP
  2. Get entity by corsswalk and validate if exists

testGetEntitiesByUris
  1. Create HCP
  2. Get entity by uris andvalidate if exists

testGetEntityCountry
  1. Create HCP
  2. Get entity by country and validate if exists

testGetEntityCountryOv
  1. Create HCP
  2. Add new country
  3. Send update request
  4. Get HCP's Country and validate
  5. Make ignored = true and ov = false on all countries
  6. Send update request
  7. Get HCP's Country and validate
CreateHCPTestcreateHCPTest
  1. Create HCP
  2. Get entity and validate
CreateRelationTestcreateRelationTest
  1. Create HCP
  2. Create HCO
  3. Create Relation between HCP and HCO
  4. Get Relation and validate
DeleteCrosswalkTestdeleteCrosswalkTest
  1. Create HCO
  2. Delete crosswalk and validate status response
UpdateHCOTestupdateHCPTest
  1. Create HCO
  2. Get created HCO
  3. Update HCO's name
  4. Validate response status
  5. Get HCO and validate if it is updated
UpdateHCPUsingReltioContributorProviderupdateHCPUsingReltioContributorProviderTrueAndDataProviderFalse
  1. Create HCP
  2. Get created HCP and validate
  3. Update existing corosswalk and set contributorProvider to false
  4. Add new contributor provider crosswalk
  5. Update first name
  6. Send update HCP request
  7. Validate if it is updated
PublishingEventTesttest1_hcp
  1. Create HCP
  2. Wait for HCP_CREATED event
  3. Update HCP first name
  4. Wait for HCP_CHANGED event
  5. Get entity and validate

test2_hcp
  1. Create HCP
  2. Wait for HCP_CREATED event
  3. Update HCP's last name
  4. Wait for HCP_CHANGED event
  5. Delete crosswalk
  6. Wait for HCP_REMOVED event

test3_hco
  1. Create HCO
  2. Wait for HCO_CREATED event
  3. Update HCO's name
  4. Wait for HCO_CHANGED event
  5. Delete crosswalk
  6. Wait for HCO_REMOVED event
" }, { "title": "Integration Test For Iqvia Model", "pageID": "302681788", "pageLink": "/display/GMDM/Integration+Test+For+Iqvia+Model", "content": "
Test classTest caseFlow
CRUDHCOAsynctest
  1. Send HCORequest to Kafka topic
  2. Wait for created event and validate
  3. Update HCO's name and send HCORequest to Kafka topic
  4. Wait for updated event and validate
  5. Remove entities
CRUDHCOAsyncComplextest
  1. Create Source HCO
  2. Send HCORequest with Source HCO to Kafka Topic
  3. Wait for created event and validate
  4. Create Source Department HCO - set Source HCO as Main HCO
  5. Send HCORequest with Source Department HCO
  6. Wait for event and validate
  7. Remove entities
CRUDHCPAsynctest
  1. Send HCPRequest to Kafka topic
  2. Wait for created event and validate
  3. Update HCP's Last Name and send HCORequest to Kafka topic
  4. Wait for updated event and validate
  5. Remove entities
CRUDPostBulkAsynctestHCO
  1. Send EntitiesUpdateRequest with multiple HCO entities to Kafka topic
  2. Wait for entities-create event with specific correlactionId header
  3. Validate message payload and check if all entities are created
  4. Remove entities

testHCP
  1. Send EntitiesUpdateRequest with multiple HCP entities to Kafka topic
  2. Wait for entities-create event with specific correlactionId header
  3. Validate message payload and check if all entities are created
  4. Remove entities

testHCPRejected
  1. Send EntitiesUpdateRequest with multiple incorrect HCP entities to Kafka topic
  2. Wait for event with specific correlactionId header
  3. Check if all entities have ValidatioError and status is failed
CreateRelationAsynctestCreate
  1. Create HCO
  2. Create HCP
  3. Send RelationRequest with Relation Activity between HCP and HCO to Kafka topic
  4. Wait for event with specific correlactionId header and validate status

testCreateRelations
  1. Create HCO
  2. Create HCP_1
  3. Create HCP_2 and validate response
  4. Create HCP_3 and validate response
  5. Create HCP_4 and validate response
  6. Create Activity Relations between HCP_1 → HCO, HCP_2 → HCO, HCP_3 → HCO, HCP_4 → HCO
  7. Send RelationRequest event with all relations to Kafka topic
  8. Wait for event with specific correlactionId header and validate status
  9. Remove entities

testCraeteWithAddressCopy
  1. Create HCO
  2. Create HCP
  3. Create Activity Relation between HCP and HCO
  4. Send RelationRequest event to Kafka topic with param copyAddressFromTarget = true
  5. Wait for event with specific correlactionId header and validate status is created
  6. Get HCP and HCO
  7. Validate updated HCP - check if address exists and contains HcoName attribute
  8. Remove entities

testDeactivateRelation
  1. Create HCO
  2. Create HCP
  3. Create Activity Relation between HCP and HCO with PrimaryAffiliationIndicator = true
  4. Send RelationRequest event to Kafka topic
  5. Wait for event with specific correlactionId header and validate status is created
  6. Update Relation - set delete date on now
  7. Send RelationRequest event to Kafka topic
  8. Wait for event with specific correlactionId header and validate status is deleted
  9. Remove entities
HCOAsyncErrorsTestCasetest
  1. Send HCORequest to Kafka topic - create HCO with incorrect values
  2. Wait for event with specific correlactionId header and validate status is failed
HCPAsyncErrorsTestCasetest
  1. Send HCPRequest to Kafka topic - create HCP without permissions
  2. Wait for event with specific correlactionId header and validate status is failed
UpdateRelationAsynctest
  1. Create HCO and validate status created
  2. Create HCP with affiliatedHCO and validate status created
  3. Get HCP and check if Workplace relation exists
  4. Get existing Relation
  5. Patch Relation - update ActEmail.Email attribute and validate if status is updated
  6. Get Relation and validate if ActEmail list size is 1
  7. Add Country attribute to Relation
  8. Send RelationRequest event to Kafka topic with updated Relation
  9. Wait for event with specific correlactionId header and validate status is updated
  10. Get Relation and check if ActEmail and Country exist
  11. Add AffiliationStatus attribute to Relation
  12. Send RelationRequest event to Kafka topic with updated Relation
  13. Wait for event with specific correlactionId header and validate status is updated
  14. Get Relation and check if ActEmail, Country and  AffiliationStatus  exist
  15. Remove entities
BundlingTesttest
  1. Send multiple HCORequests to Kafka topic - create HCOs
  2. For each request wait for event with status created and collect HCO's uri
  3. Check if number of requests equals number of recived events
  4. Send multiple HCPRequests to Kafka topic - create HCPs
  5. For each request wait for event with status created and collect HCP's uri
  6. Check if number of requests equals number of recived events
  7. Send multiple RelationRequests to Kafka topic - create Relation
  8. For each request wait for event with status created and collect Relation's uri
  9. Check if number of requests equals number of recived events
  10. Set delete date on now for every HCO
  11. Send multiple HCORequests to Kafka topic
  12. For each request wait for event with status deleted
  13. Set delete date on now for every HCP
  14. Send multiple HCPRequests to Kafka topic
  15. For each request wait for event with status deleted
DCRResponseTestcreateAndAcceptDCRThenTryToAcceptAgainTest
  1. Create Hopsital HCO
  2. Create Department HCO
  3. Set Hospital HCO as Department's Main HCO
  4. Create HCP with Affiliated HCO as Department
  5. Check if DCR is created
  6. Accept DCR and check if response is OK
  7. Accept DCR again and check if response is BAD_REQUEST
  8. Remove entities

createAndPartialAcceptThenConfirmNoLoop
  1. Create Hopsital HCO
  2. Create Department HCO
  3. Set Hospital HCO as Department's Main HCO
  4. Create HCP with Affiliated HCO as Department
  5. Check if DCR is created
  6. Partial accept DCR and check if response is OK
  7. Get HCP entity and check if ValidationStatus attribute is "partialValidated"
  8. Check if DCR is not created - confirms that DCR creation does not loop
  9. Remove entities

createAndRejectDCRThenTryToRejectAgainTest
  1. Create Hopsital HCO
  2. Create Department HCO
  3. Set Hospital HCO as Department's Main HCO
  4. Create HCP with Affiliated HCO as Department
  5. Check if DCR is created
  6. Reject DCR and check if response is OK
  7. Reject again DCR and check if response is BAD_REQUEST
  8. Remove entities
DeriveHCPAddressesTestCasederivedHCPAddressesTest
  1. Create HCP and validate response
  2. Create HCO Department with 1 Address and validate response
  3. Create HCO Hospital with 2 Addresses and validate response
  4. Create "Activity" Relation HCP → HCO Department and validate response
  5. Create "Has Health Care Role" Relation HCP → HCO Hospital and validate response
  6. Get HCP and check if contains Hospital's Addresses
  7. Update HCO Hospital Address and validate response
  8. Get HCP and check if contains updated Hospital's Addresses
  9. Remove HCO Hospital Address and validate response
  10. Get HCP and check if contains Hospital's Addresses (without removed)
  11. Remove "Has Health Care Role" Relation HCP → HCO Hospital and validate response
  12. Get HCP and check if Addresses are removed
  13. Remove entities
EVRDCRUpdateHCPLUDTestCasetest
  1. Create Hopsital HCO
  2. Create Department HCO
  3. Set Hospital HCO as Department's Main HCO
  4. Create HCP with Affiliated HCO as Department
  5. Get Change requests and check that DCR was created
  6. Update HCP
    1. ValidationStatus = notvalidated
    2. change existing GRV crosswalk - set DataProvider = true
    3. add DCR crosswalk - EVR set ContributorProvider = true
    4. add another EVR crosswalk set DataProvider = true
  7. Send update request and vadiate response
  8. Update HCP (partial update)
    1. ValidationStatus = validated
    2. Remove First and Last Name
    3. Remove crosswalks
  9. Send update request and validate response
  10. Get HCP and validate
  11. Check if the ValidationStatus & LUD (updateDate/singleAttributeUpdateDate) were refreshed
  12. Remove crosswalks
ExistingDepartmentAndHCPTestCasecreateHCP_HCPNotInPendingStatus_NoDCR
  1. Create Hospital HCO
  2. Create Department HCO with Hospital HCO as MainHCO
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = validated
  4. Get HCP and validate attributes
  5. Get Change requests and check if the list is empty
  6. Remove crosswalks

createHCP_HCPIsInPendingStatus_HCPDCRCreated
  1. Create Hospital HCO
  2. Create Department HCO with Hospital HCO as MainHCO
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = pending
  4. Get HCP and validate attributes
  5. Get Change requests and check if there is one NEW_HCP change request
  6. Remove crosswalks

createHCP_HCPHasTwoWorkplaces_HCPAndWorkplaceDCRCreated
  1. Create Hospital HCO
  2. Create Department1 HCO with Hospital HCO as MainHCO
  3. Create Department2 HCO with Hospital HCO as MainHCO
  4. Create HCP with affiliated HCO (Department1 HCO) and ValidationStatus = pending
  5. Get HCP and validate attributes
    1. has only one Workplace (Department1 HCO)
  6. Update HCP with affiliated HCO (Department2 HCO) and ValidationStatus = pending
  7. Get HCP and validate attributes
    1. has only one Workplace (Department2 HCO)
  8. Get Change requests and check if there is one NEW_HCP change request
  9. Remove crosswalks
NewHCODCRTestCasescreateHCP_DepartmentDoesNotExist_HCOL1DCR
  1. Create Hospital HCO
  2. Create Department HCO with Hospital HCO as MainHCO
  3. Create HCP with affiliated HCO (Department HCO)
  4. Get HCP and validate attributes
    1. Validate Workplace and MainWorkplace
  5. Get Change requests and check if the list is empty
  6. Remove crosswalks

createHCP_HospitalAndDepartmentDoesNotExist_HCOL1DCR
  1. Create Department HCO with Hospital HCO (not created yet) as MainHCO
  2. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = pending
  3. Get HCP and validate attributes
  4. Get HCO Department and validate attributes
  5. Get Change requests and check if there is one NEW_HCO_L2 change request
  6. Remove crosswalks
NewHCPDCRTestCasecreateHCPTest
  1. Create HCO Hospital
  2. Create HCO Department
  3. Create HCP with affiliated HCO (Department HCO)
  4. Get HCP and validate Workplace and MainWorkplace
  5. Remove crosswalks

createHCPPendingTest
  1. Create HCO Hospital
  2. Create HCO Department
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = pending
  4. Validate HCP response
  5. Validate if DCR is created
  6. Remove crosswalks

createHCPNotValidatedTest
  1. Create HCO Hospital
  2. Create HCO Department
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = notvalidated
  4. Validate HCP response
  5. Validate if DCR is created
  6. Remove crosswalks

createHCPNotValidatedMergedIntoNotValidatedTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCO Department
  4. Create HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = notvalidated
  5. Validate HCP response
  6. Validate if DCR is not created
  7. Remove crosswalks

createHCPPendingMergedIntoNotValidatedTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCO Department
  4. Create HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = pending
  5. Validate HCP response
  6. Validate if DCR is created
  7. Remove crosswalks

createHCPPendingMergedIntoNotValidatedWithAnotherGRVNotValidatedTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)
  4. Create HCO Department
  5. Create HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = pending
  6. Validate if DCR is created
  7. Remove crosswalks

createHCPNotValidatedMergedIntoNotValidatedWithAnotherGRVNotValidatedTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)
  4. Create HCO Department
  5. Create HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = notvalidated
  6. Validate if DCR is not created
  7. Remove crosswalks

createHCPPendingMergedIntoNotValidatedWithGRVAsUpdateTest
  1. Create HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)
  2. Create HCO Hospital
  3. Create HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)
  4. Create HCO Department
  5. Create HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = notvalidated
  6. Get HCP and validate corsswalk GRV count == 3
  7. Validate if DCR is not created
  8. Update HCP_3 set code = pending
  9. Validate if DCR is created
  10. Remove crosswalks
PfDataChangeRequestLiveCycleTesttest
  1. Create HCO Hospital
  2. Create HCO Department with parent HCO Hospital
  3. Create HCP with affiliated HCO (Department HCO) and ValidationStatus = pending
  4. Check if DCR exist
  5. Check if PfDataChangeRequest exist
  6. Accpet DCR
  7. Check that HCP ValidationStatus == validated
  8. Check that PfDataChangeRequest is closed
  9. Remove crosswalks
ResponseInfoTestTest
  1. Create HCO Hospital
  2. Create HCO Department with parent HCO Hospital
  3. Create HCP_1 with affiliated HCO (Department HCO) and ValidationStatus = pending
  4. Create HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = pending
  5. Check that DCR_1 exist
  6. Check that DCR_2 exist
  7. Check that PfDataChangeRequest exist
  8. Respond for DCR_1 - update HCP with merged uris
    1. change First Name
    2. set ValidationStatus = validated
  9. Get HCP and check if ValidationStatus is validated
  10. Check if PfDataChangeRequest is closed and validate ResponseInfo
  11. Respond for DCR_2 - accept and validate message
  12. Check if PfDataChangeRequest is closed and validate ResponseInfo
  13. Check that DCR_2 does not exist
  14. Remove crosswalks
RevalidateNewHCPDCRTestCasetest
  1. Create Parent HCO and validate response
  2. Create Department HCO with Parent HCO and validate response
  3. Create HCP with affiliated HCO (Department HCO), ValidationStatus = pending and validate response
  4. Check that DCR exist
  5. Check that PfDataChangeRequest exist
  6. Respond to DCR - accept
  7. Check that HCP has ValidationStatus = validated
  8. Send revalidate event to Kafka topic
  9. Check that new DCR was created
  10. Checking that previous PfDataChangeRequest has ResponseStatus=accept
  11. Check that new PfDataChangeRequest exist
  12. Check that HCP has ValidationStatus = pending
  13. Remove crosswalks
StandarNonExistingDepartmentTestCasecreateNewHCPTest
  1. Create Hospital HCO
  2. Create HCP with a new affiliated HCO (Department HCO with Hospital HCO as MainHCO)
  3. Get HCP and validate attributes (Workplace and MainWorkplace)
UpdateHCPPhonestest
  1. Create HCP and validate response
  2. Update Phone and send patchHCP request
  3. Validate response status is OK
  4. Remove crosswalks
GetEntityTeststestGetEntityByUri
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get HCP by uri and validate attributes
  3. Remove crosswalks

testSearchEntity
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get entites using filter - HCP by country, first name and last name
  3. Validate if entity exists
  4. Remove crosswalks

testSearchEntityWithoutCountryFilter
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get by corsswalk HCO_1 and check if exists
  3. Get by corsswalk HCO_2 and check if exists
  4. Get entites using filter - HCO by country and (HCO_1 name or HCO_2 name)
  5. Validate if both HCO exists
  6. Remove crosswalks

testGetEntityByCrosswalk
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get HCP by crosswalk
  3. Validate if HCP exists
  4. Remove crosswalks

testGetEntitiesByUris
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get HCP by uri
  3. Validate if HCP exists
  4. Remove crosswalks

testGetEntityCountry
  1. Create HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)
  2. Get HCP's country
  3. Validate reponse
  4. Remove crosswalks

testGetEntityCountryOv
  1. Create HCP with ValidationStatus = validated, affiliatedHcos (HCO_1, HCO_2) and Country = Brazil
  2. Update HCP
    1. update existing crosswalk - set ContributorProvider = true
    2. add new crosswalk as DataProvider
    3. set Country ignored = true
    4. update Country - set to China
  3. Get HCP's Country and validate
    1. check value == BR-Brazil
    2. check ov == true
  4. Update HCP - make ignored=true, ov=false on all countries
  5. Get HCP's Country and validate
    1. lookupCode == BR
  6. Remove crosswalks
MergeUnmergeHCPTestcreateHCP1andHCP2_checkMerge_checkUnmerge_API
  1. Create HCP_1 and validate response
  2. Create HCP_2 and validate response
  3. Merge HCP_1 with HCP_2
  4. Get HCP_1 after merge and validate attributes
  5. Get HCP_2 after merge and validate attributes
  6. Unmerge HCP_1 and HCP_2
  7. Get HCP_1 after unmerge and validate attributes
  8. Get HCP_2 after unmerge and validate attributes
  9. Unmerge HCP_1 and HCP_2 - validate if response code is BAD_REQUEST
  10. Merge HCP_1 and NOT_EXISTING_URI - validate if response code is NOT_FOUND
  11. Remove crosswalks
HCPMatcherTestCasetestPositiveMatch
  1. Create 2 the same HCP objects
  2. Check that objects match

testNegativeMatch
  1. Create 2 different HCP objects
  2. Check that objects do not match
GetEntitiesTesttestGetHCPs
  1. Get entities with filter: country = BR and entityType = HCP
  2. Validate response
    1. All entites are HCP
    2. At least one entity has Workplace

testGetHCOs
  1. Get entities with filter: country = BR and entityType = HCO
  2. Validate response
    1. All entites are HCO
GetEntityUSTestcreateHCPTest
  1. Create HCP and validate response
  2. Get HCP and check if exists
  3. Remove crosswalks
" }, { "title": "Integration Test For COMPANY Model", "pageID": "302681792", "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model", "content": "
Test classTest caseFlow
AttributeSetterTestTestAttributeSetter
  1. Create HCP with TypeCode attribute
  2. Get entity and validate if has autofilled attributes
  3. Update TypeCode field: send "None" as attribute value
  4. Update HCP request
  5. Get entity and validate autofileld attributes by DQ rules
  6. Update TypeCode field
  7. Update HCP request
  8. Get entity and validate autofileld attributes by DQ rules
  9. Update TypeCode field
  10. Update HCP request
  11. Get entity and validate autofilled NON-HCP value
  12. Set HCP's crosswalk delete date
  13. Update and validate if delete date has been set
BatchControllerTestmanageBatchInstance_checkPermissionsWithLimitation
  1. Create batch instance
  2. Create batch stage
  3. Validate response code: 403 and message: Cannot access the processor which has been protected
  4. Get batch instance with incorrect name
  5. Validate response code: 403 and message: Batch 'testBatchNotAdded' is not allowed. 
  6. Update batch stage with existing stage name
  7. Update batch stage with limited user
  8. Validate response code: 403 and message: Stage '' is not allowed.
  9. Update batch stage with not authorized stage name
  10. Validate response code: 403 and message: Stage '' passed in Body is not allowed.

createBatchInstance
  1. Create batch instance and validate
  2. Complete stage 1 and start stage 2
  3. Validate stages
  4. Complete stage 2
  5. Start stage 3
  6. Validate all 3 stages
  7. Complete stage 3 and finish batch
  8. Get batch instance and validate
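
Every batch flow below drives the same lifecycle: create an instance, open one or more stages, feed entities into a stage, finish it, and poll the sender and processing jobs until the instance reports completion. A minimal sketch of that lifecycle from the client side; the endpoint paths are assumptions modeled on the stage names used in these flows, not the documented contract:

// Assumed batch-controller endpoints; concrete paths and payloads are illustrative only.
String batches = BASE + "/batches";

post(batches, "{\"name\":\"testBatch\"}");                                   // 1. create batch instance
post(batches + "/testBatch/stages", "{\"name\":\"HCO_LOADING\"}");           // 2. create/open a stage
post(batches + "/testBatch/stages/HCO_LOADING/entities", hcoPayload);        // 3. send entities
put(batches + "/testBatch/stages/HCO_LOADING", "{\"status\":\"FINISHED\"}"); // 4. finish the stage

// 5.-7. poll the sender and processing jobs, then the instance completion status
long deadline = System.currentTimeMillis() + 120_000;
while (System.currentTimeMillis() < deadline
        && !get(batches + "/testBatch").contains("\"status\":\"COMPLETED\"")) {
    Thread.sleep(1_000);
}

Here post, put, and get stand for thin HttpClient wrappers like the ones in the first sketch; they are not part of the documented API.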
TestBatchBundlingErrorQueueTest.testBatchWorkflowTest
  1. Create batch instance
  2. Get errors and check that there are no errors
  3. Create batch stage: HCO_LOADING
  4. Create batch stage: HCP_LOADING
  5. Create batch stage: RELATION_LOADING
  6. Send entities to HCO_LOADING stage
  7. Finish HCO_LOADING stage
  8. Check sender job status - validate if all entities were sent to Reltio
  9. Check processing job status - validate if all entities were processed
  10. Send entities to HCP_LOADING stage
  11. Finish HCP_LOADING stage
  12. Check sender job status - validate if all entities were sent to Reltio
  13. Check processing job status - validate if all entities were processed
  14. Send relations to RELATION_LOADING stage
  15. Finish RELATION_LOADING stage
  16. Check sender job status - validate if all relations were sent to Reltio
  17. Check processing job status - validate if all relations were processed
  18. Get batch instance and validate completion status
  19. Validate expected errors
  20. Resubmit errors
  21. Validate expected errors
  22. Validate that all errors were resubmitted
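
The error-queue variant adds one more loop: after the batch completes, the expected load errors are fetched, resubmitted, and the queue is checked again. A sketch of that tail end of the flow, reusing BASE and http from the first sketch (the /errors endpoints are assumptions):

// Steps 19-22: fetch errors, resubmit them, verify the queue drained.
void resubmitAndVerifyErrors() throws Exception {
    HttpResponse<String> errors = http.send(
            HttpRequest.newBuilder(URI.create(BASE + "/batches/testBatch/errors")).GET().build(),
            HttpResponse.BodyHandlers.ofString());
    assertTrue(errors.body().contains("HCP_LOADING")); // the expected, deliberately failed records

    http.send(HttpRequest.newBuilder(URI.create(BASE + "/batches/testBatch/errors/resubmit"))
            .POST(HttpRequest.BodyPublishers.noBody()).build(),
            HttpResponse.BodyHandlers.ofString());

    errors = http.send(
            HttpRequest.newBuilder(URI.create(BASE + "/batches/testBatch/errors")).GET().build(),
            HttpResponse.BodyHandlers.ofString());
    assertEquals("[]", errors.body().trim()); // everything was resubmitted
}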
TestBatchBundlingTest.testBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Create batch stage: HCP_LOADING
  4. Create batch stage: RELATION_LOADING
  5. Send entities to HCO_LOADING stage
  6. Finish HCO_LOADING stage
  7. Check sender job status - validate if all entities were sent to Reltio
  8. Check processing job status - validate if all entities were processed
  9. Send entities to HCP_LOADING stage
  10. Finish HCP_LOADING stage
  11. Check sender job status - validate if all entities were sent to Reltio
  12. Check processing job status - validate if all entities were processed
  13. Send relations to RELATION_LOADING stage
  14. Finish RELATION_LOADING stage
  15. Check sender job status - validate if all relations were sent to Reltio
  16. Check processing job status - validate if all relations were processed
  17. Get batch instance and validate completion status
  18. Get Relations by crosswalk and validate
TestBatchHCOBulkTest.testBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Finish HCO_LOADING stage
  5. Check sender job status - validate if all entities were sent to Reltio
  6. Check processing job status - validate if all entities were processed
  7. Get batch instance and validate completion status
  8. Get entities by crosswalk and validate
TestBatchHCOTest.testBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Finish HCO_LOADING stage
  5. Check sender job status - validate if all entities were sent to Reltio
  6. Check processing job status - validate if all entities were processed
  7. Get batch instance and validate completion status
  8. Get entities by crosswalk and validate created status

testBatchWorkflowTest_CheckFAILonLoadJob
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Update batch stage status: FAILED
  5. Get batch instance and validate

testBatchWorkflowTest_SendEntities_Update_and_MD5Skip
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Finish HCO_LOADING stage
  5. Get batch instance and validate completion status
  6. Get entities by crosswalk and validate create status
  7. Create batch instance
  8. Create batch stage: HCO_LOADING
  9. Send entities to HCO_LOADING stage (skip 2 entities - MD5 checksum changed)
  10. Finish HCO_LOADING stage
  11. Get batch instance and validate completion status
  12. Get entities by crosswalk and validate update status
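
The MD5-skip run works because the loader stores a checksum per crosswalk and skips records whose payload hashes to the same value as in the previous run, so only changed entities are resent. A minimal illustration of that dedup decision; here the checksum store is an in-memory map, while the real service persists it:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.HexFormat;
import java.util.Map;

class Md5SkipSketch {
    private final Map<String, String> lastSeen = new HashMap<>(); // crosswalk -> md5 of last load

    /** Returns true when the record changed since the last load and must be sent again. */
    boolean shouldSend(String crosswalk, String flatRecord) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        String checksum = HexFormat.of().formatHex(
                md5.digest(flatRecord.getBytes(StandardCharsets.UTF_8)));
        String previous = lastSeen.put(crosswalk, checksum);
        return !checksum.equals(previous); // unchanged payload -> skip
    }
}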

testBatchWorkflowTest_SendEntities_Update_and_DeletesProcessing
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Send entities to HCO_LOADING stage
  4. Finish HCO_LOADING stage
  5. Check sender job status - validate if all entities were sent to Reltio
  6. Check processing job status - validate if all entities were processed
  7. Check deleting job status - validate if all entities were sent
  8. Check deleting processing job - validate if all entities were processed
  9. Get batch instance and validate completion status
  10. Get entities by crosswalk and validate delete status
  11. -- second run
  12. Create batch instance
  13. Create batch stage: HCO_LOADING
  14. Send entities to HCO_LOADING stage (skip 2 entities - delete in post processing)
  15. Finish HCO_LOADING stage
  16. Check sender job status - validate if all entities were sent to Reltio
  17. Check processing job status - validate if all entities were processed
  18. Check deleting job status - validate if all entities were sent
  19. Check deleting processing job - validate if all entities were processed
  20. Get batch instance and validate completion status
  21. Get entities by crosswalk and validate delete status
  22. -- third run
  23. Create batch instance for checking activation
  24. Create batch stage: HCO_LOADING
  25. Send entities to HCO_LOADING stage
  26. Finish HCO_LOADING stage
  27. Check sender job status - validate if all entities were sent to Reltio
  28. Check processing job status - validate if all entities were processed
  29. Check deleting job status - validate if all entities were sent
  30. Check deleting processing job - validate if all entities were processed
  31. Get batch instance and validate completion status
  32. Get entities by crosswalk and validate delete status
TestBatchHCPErrorQueueTest.testBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCP_LOADING
  3. Get errors and check that there are no errors
  4. Send entities to HCP_LOADING stage
  5. Finish HCP_LOADING stage
  6. Check sender job status - validate if all entities were sent to Reltio
  7. Check processing job status - validate if all entities were processed
  8. Get errors and validate that the expected errors exist
  9. Resubmit errors
  10. Get errors and validate that all were resubmitted
TestBatchHCPPartialOverwriteTest.testBatchWorkflowTest
  1. Create HCP
  2. Create batch instance
  3. Create batch stage: HCP_LOADING
  4. Send entities to HCP_LOADING stage with updated last name
  5. Finish HCP_LOADING stage
  6. Check sender job status - validate if all entities are created in mongo
  7. Check processing job status - validate if all entities were processed
  8. Get batch instance and validate completion status
  9. Get entities by crosswalk and validate
TestBatchHCPSoftDependentTest.testBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCP_LOADING
  3. Check Sender job status - SOFT DEPENDENT 
  4. Send entities to HCP_LOADING stage
  5. Finish HCP_LOADING stage
  6. Check sender job status - validate if all entities are sent to Reltio
  7. Check processing job status - validate if all entities were processed
  8. Get batch instance and validate completion status
  9. Get entities by crosswalk and validate created status
TestBatchHCPTest.testBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCP_LOADING
  3. Send entities to HCP_LOADING stage
  4. Finish HCP_LOADING stage
  5. Check sender job status - validate if all entities are sent to Reltio
  6. Check processing job status - validate if all entities were processed
  7. Get batch instance and validate completion status
  8. Get entities by crosswalk and validate created status
TestBatchMergeTest.testBatchWorkflowTest
  1. Create 4 x HCP and validate response status
  2. Get entities and validate that they are created
  3. Create batch instance
  4. Create batch stage: MERGE_ENTITIES_LOADING
  5. Send merge entities objects (Reltio, Onekey)
  6. Finish MERGE_ENTITIES_LOADING stage
  7. Check sender job status - validate if all tags are sent to Reltio
  8. Check processing job status - validate if all entities were processed
  9. Get batch instance and validate completion status
  10. Get entities and validate update status (check if tags are visible in Reltio)
  11. Create batch instance
  12. Create batch stage: MERGE_ENTITIES_LOADING
  13. Send unmerge entities objects (Reltio, Onekey)
  14. Finish MERGE_ENTITIES_LOADING stage
  15. Check sender job status - validate if all tags are sent to Reltio
  16. Check processing job status - validate if all entities were processed
  17. Get batch instance and validate completion status
TestBatchPatchHCPPartialOverwriteTest
  1. Create batch instance
  2. Create batch stage: HCP_LOADING
  3. Create HCP entity with crosswalk's delete date set on now
  4. Send entities to HCP_LOADING stage
  5. Finish HCP_LOADING stage
  6. Check sender job status - validate if all entities are sent to Reltio
  7. Check processing job status - validate if all entities were processed
  8. Get batch instance and validate completion status
  9. Get entities by crosswalk and validate created status
  10. Create batch instance
  11. Create batch stage: HCP_LOADING
  12. Send entities PATCH to HCP_LOADING stage with empty crosswalk's delete date and missing first and last name
  13. Finish HCP_LOADING stage
  14. Check sender job status - validate if all entities are sent to Reltio
  15. Check processing job status - validate if all entities were processed
  16. Get batch instance and validate completion status
  17. Get entities by crosswalk and validate that they are updated
TestBatchRelationTest.testBatchWorkflowTest
  1. Create batch instance
  2. Create batch stage: HCO_LOADING
  3. Create batch stage: HCP_LOADING
  4. Create batch stage: RELATION_LOADING
  5. Send entities to HCO_LOADING stage
  6. Finish HCO_LOADING stage
  7. Check sender job status - validate if all entities were sent to Reltio
  8. Check processing job status - validate if all entities were processed
  9. Send entities to HCP_LOADING stage
  10. Finish HCP_LOADING stage
  11. Check sender job status - validate if all entities were sent to Reltio
  12. Check processing job status - validate if all entities were processed
  13. Send relations to RELATION_LOADING stage
  14. Finish RELATION_LOADING stage
  15. Check sender job status - validate if all relations were sent to Reltio
  16. Check processing job status - validate if all relations were processed
  17. Get batch instance and validate completion status
TestBatchTAGSTest.testBatchWorkflowTest
  1. Create HCP
  2. Get HCP and check that there are no tags
  3. Create batch instance
  4. Create batch stage: TAGS_LOADING
  5. Send request: Append entity tags objects
  6. Finish TAGS_LOADING stage
  7. Check sender job status - validate if all entities were sent to Reltio
  8. Check processing job status - validate if all entities were processed
  9. Get batch instance and validate completion status
  10. Create batch instance
  11. Create batch stage: TAGS_LOADING - DELETE
  12. Send request: Delete entity tags objects
  13. Check sender job status - validate if all entities were sent to Reltio
  14. Check processing job status - validate if all entities were processed
  15. Get batch instance and validate update status
  16. Get entity and check if tags are removed from Reltio
COMPANYGlobalCustomerIdSearchOnLostMergeEntitiesTest.test
  1. Create first HCP and validate response status
  2. Create second HCP and validate response status
  3. Create third HCP and validate response status
  4. Merge HCP2 with HCP3 and validate response status
  5. Merge HCP2 with HCP1 and validate response status
  6. Get entities: filter by COMPANYGlobalCustomerID and HCP1Uri
  7. Validate if exists
  8. Get entities: filter by COMPANYGlobalCustomerID and HCP2Uri
  9. Validate if exists
  10. Get entities: filter by COMPANYGlobalCustomerID and HCP3Uri
  11. Validate if exists
COMPANYGlobalCustomerIdTest.test
  1. Create HCP_1 with RX_AUDIT crosswalk
  2. Wait for HCP_CREATED event
  3. Create HCP_2 with GRV crosswalk
  4. Wait for HCP_CREATED event
  5. Merge both HCP's with RX_AUDIT being winner
  6. Wait for HCP_MERGE, HCP_LOST_MERGE and HCP_CHANGED events
  7. Get entities by uri and validate. Check if merge succeeded and resulting profile has winner COMPANYId.
  8. Update HCP_1: set delete date on RX_AUDIT crosswalk
  9. Check if entity's COMPANYID has not changed after softDeleting the crosswalk
  10. Get HCP_1 and validate COMPANYGlobalCustomerID after soft deleting crosswalk
  11. Remove HCP_1 by crosswalk
  12. Remove HCP_2 by crosswalk

testWithDeleteDate
  1. Create HCP_1 with crosswalk delete date
  2. Wait for HCP_CREATED event
  3. Create HCP_2
  4. Wait for HCP_CREATED event
  5. Merge both HCP's
  6. Wait for HCP_MERGE, HCP_LOST_MERGE and HCP_CHANGED events
  7. Check if merge succeeded and resulting profile has winner COMPANYId.
  8. Remove HCP_1 by crosswalk
  9. Remove HCP_2 by crosswalk
RelationEventChecksumTest.test
  1. Create HCP and validate status
  2. Get HCP and validate if exists
  3. Create HCO and validate status
  4. Create Employment Relation between HCP and HCO - validate response status
  5. Wait for RELATIONSHIP_CREATED event and validate
  6. Find Relation by id and keep checksum
  7. Update Relation title attribute and validate response
  8. Wait for RELATIONSHIP_CHANGED event
  9. Validate if checksum has changed
  10. Delete HCO crosswalk and validate
  11. Delete HCP crosswalk and validate
  12. Delete Relation crosswalk and validate
CreateChangeRequestTest.createChangeRequestTest
  1. Create Change Request
  2. Create HCP
  3. Get HCP and validate
  4. Update HCP's First Name with dcrId from Change Request
  5. Init Change Request and validate response is not null
  6. Delete Change Request
  7. Delete HCP's crosswalk
AttributesEnricherNoCachedTest.testCreateFailedRelationNoCache
  1. Create HCO
  2. Create HCP
  3. Create Relation with missing attributes - validate response status is failed
  4. Search Relation in mongo and check that it does not exist
AttributesEnricherTest.testCreate
  1. Create HCP and validate
  2. Create HCP and validate
  3. Create Relation and validate
  4. Get HCP and validate if ProviderAffiliations attribute exists
  5. Update HCP's Last Name
  6. Get HCP and validate if ProviderAffiliations attribute exists
  7. Check that Last Name is updated
  8. Remove HCP, HCO and Relation by crosswalk
AttributesEnricherWithDeleteDateOnRelationTest.testCreateAndUpdateRelationWithDeleteDate
  1. Create HCP and validate
  2. Create HCP and validate
  3. Create Relation and validate
  4. Get HCP and validate if ProviderAffiliations attribute exists
  5. Update HCP's Last Name
  6. Get HCP and validate if ProviderAffiliations attribute exists
  7. Check if Last Name is updated
  8. Set Relation's crosswalk delete date on now and update
  9. Update HCP's Last Name
  10. Get HCP and validate that ProviderAffiliations attribute does not exist
  11. Check that Last Name is updated
  12. Send update Relation request and check status is deleted
AttributesEnricherWithMultipleEndObjects.testCreateWithMultipleEndObjects
  1. Create HCO_1
  2. Create HCO_2
  3. Create HCP
  4. Create Relation between HCP and HCO_1
  5. Create Relation between HCP and HCO_2
  6. Get HCP and validate if ProviderAffiliations attribute exists
  7. Update HCP's Last Name
  8. Get HCP and validate that ProviderAffiliations attribute exists
  9. Remove all entities
UpdateEntityAttributeTest.shouldUpdateIdentifier
  1. Create HCP and validate
  2. Update HCP's attribute: insert identifier and validate
  3. Update HCP's attribute: update identifier and validate
  4. Update HCP's attribute: merge identifier and validate
  5. Update HCP's attribute: replace identifier and validate
  6. Update HCP's attribute: delete identifier and validate
  7. Remove all entities by crosswalk
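
shouldUpdateIdentifier walks one identifier through all five update actions. A sketch of the request loop, reusing BASE and http from the first sketch and assuming the update-attribute endpoint takes an action discriminator (the exact payload schema and path are assumptions):

// Exercise the five attribute update actions against one HCP (hcpUri from the create step).
void updateIdentifierAllActions(String hcpUri) throws Exception {
    for (String action : new String[] {"insert", "update", "merge", "replace", "delete"}) {
        String body = """
            {"attribute":"Identifiers",
             "action":"%s",
             "value":{"ID":"12345","Type":"NPI"}}""".formatted(action);
        HttpRequest update = HttpRequest.newBuilder(
                URI.create(BASE + "/entities/" + hcpUri + "/attributes"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body)).build();
        assertEquals(200, http.send(update, HttpResponse.BodyHandlers.ofString()).statusCode());
        // after each action: get the entity and validate the identifier state
    }
}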
CreateEntityTest.createAndUpdateEntityTest
  1. Create DCR entity
  2. Get entity and validate
  3. Update DCR ID attribute
  4. Validate updated entity
  5. Get matches entities and validate that response is not null
  6. Remove entity
CreateHCPWithoutCOMPANYAddressId.createHCPTest
  1. Create HCP
  2. Get HCP and validate fields
  3. Get generatedId from Mongo cache collection keyIdRegistry
  4. Validate if created HCP's address has COMPANYAddressID
  5. Check if COMPANYAddressID equals generatedId
  6. Remove entity
GetMatchesTest.createHCPTest
  1. Create HCP_1
  2. Create HCP_2 with similar attributes and values
  3. Get matches for HCP_1
  4. Check if matches size >= 0
TranslateLookupsTest.translateLookupTest
  1. Send get translate lookups request: Type=AddressStatus, canonicalCode=A, sourceName=ONEKEY
  2. Assert response is not null
DelayRankActivationTest.test
  1. Create HCO_A
  2. CREATE HCO_B1
  3. CREATE HCO_B2
  4. CREATE HCO_B3
  5. CREATE RELATION B1 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)
  6. CREATE RELATION B2 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)
  7. CREATE RELATION B3 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)
  8. Check UPDATE ATTRIBUTE events:
    1. UPDATE RANK event exists with Rank = 3 for B1.A
    2. UPDATE RANK event exists with Rank = 2 for B2.A
  9. Check PUBLISHED events:
    1. B3 - RELATIONSHIP_CREATED event exists with Rank = 1
    2. B1 - RELATIONSHIP_CHANGED event exists with Rank = 3
    3. B2 - RELATIONSHIP_CHANGED event exists with Rank = 2
  10. Check order of events:
    1. B1 - RELATIONSHIP_CHANGED and B2 - RELATIONSHIP_CHANGED are after UPDATE events
  11. CREATE HCO_B4
  12. CREATE RELATION B4 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: GRV)
  13. Check UPDATE ATTRIBUTE events:
    1. UPDATE RANK event exists with Rank = 4 for B4.A
  14. Check PUBLISHED events:
    1. B4 - RELATIONSHIP_CHANGED event exists with Rank = 4
  15. Check order of events:
    1. B4 - RELATIONSHIP_CHANGED is after UPDATE events
  16. CREATE HCO_B5
  17. CREATE RELATION B5 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.FPA, source: ONEKEY)
  18. Check UPDATE ATTRIBUTE events:
    1. UPDATE RANK event exists with Rank = 4 for B1.A
    2. UPDATE RANK event exists with Rank = 3 for B2.A
    3. UPDATE RANK event exists with Rank = 2 for B3.A
    4. UPDATE RANK event exists with Rank = 5 for B4.A
  19. Check PUBLISHED events:
    1. B1 - RELATIONSHIP_CHANGED event exists with Rank = 4
    2. B2 - RELATIONSHIP_CHANGED event exists with Rank = 3
    3. B3 - RELATIONSHIP_CHANGED event exists with Rank = 2
    4. B4 - RELATIONSHIP_CHANGED event exists with Rank = 5
    5. B5 - RELATIONSHIP_CREATED event exists with Rank = 1
  20. Check order of events:
    1. All published RELATIONSHIP_CHANGED are after UPDATE_RANK events
  21. Set deleteDate on B1.A
  22. Check UPDATE ATTRIBUTE events:
    1. UPDATE RANK event exists with Rank = 4 for B4.A
  23. Check PUBLISHED events:
    1. B4 - RELATIONSHIP_CHANGED event exists with Rank = 4
  24. Check order of events:
    1. Published RELATIONSHIP_CHANGED is after UPDATE_RANK event
  25. Get B2.A relation and check Rank = 3
  26. Get B3.A relation and check Rank = 2
  27. Get B4.A relation and check Rank = 4
  28. Get B5.A relation and check Rank = 1
  29. Clear data
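
All the event-order checks above boil down to reading the published topic and asserting both the payloads and their relative order, e.g. that every RELATIONSHIP_CHANGED arrives after the UPDATE_RANK events that caused it. A sketch of a wait-and-collect helper using the plain Kafka consumer; the broker address, topic name, and the event-type-in-payload convention are assumptions:

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

class EventOrderSketch {

    // Polls the output topic until all expected event types were seen (or the
    // timeout hits) and returns the matching payloads in arrival order.
    static List<String> awaitEvents(String topic, String... expectedTypes) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: test broker address
        props.put("group.id", "it-event-order-check");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");

        List<String> seen = new ArrayList<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of(topic));
            long deadline = System.currentTimeMillis() + 60_000;
            while (System.currentTimeMillis() < deadline && seen.size() < expectedTypes.length) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    for (String type : expectedTypes) {
                        if (rec.value().contains(type)) {
                            seen.add(rec.value());
                            break;
                        }
                    }
                }
            }
        }
        return seen;
    }
}

The caller then asserts on the order of the returned list, which is exactly the arrival order on the topic.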
RawDataTest.shouldRestoreHCP
  1. Create HCP entity
  2. Delete HCP by crosswalk
  3. Search entity by name - expected not found
  4. Restore HCP entity
  5. Search entity by name
  6. Clear data
shouldRestoreHCO
  1. Create HCO entity
  2. Delete HCO by crosswalk
  3. Search entity by name - expected not found
  4. Restore HCO entity
  5. Search entity by name
  6. Clear data
shouldRestoreRelation
  1. Create HCP entity
  2. Create HCO entity
  3. Create relation from HCP to HCO
  4. Delete relation by crosswalk
  5. Get relation by crosswalk - expected not found
  6. Restore relation
  7. Get relation by crosswalk
  8. Clear data
TestBatchUpdateAttributesTest.testBatchWorkFlowTest
  1. Create 2 x HCP and validate response status
  2. Get entities and validate if they are created
  3. Test Insert Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if inserted identifiers are visible in Reltio)
  4. Test Update Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if updated identifiers are visible in Reltio)
  5. Test Merge Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if merged identifiers are visible in Reltio)
  6. Test Replace Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if replaced identifiers are visible in Reltio)
  7. Test Delete Identifiers
    1. Create batch instance
    2. Create batch stage: UPDATE_ATTRIBUTES_LOADING
    3. Initialize UPDATE_ATTRIBUTES_LOADING stage
    4. Send updateEntityAttributeRequest objects with different identifiers
    5. Finish UPDATE_ATTRIBUTES_LOADING stage
    6. Check sender job status - validate if all updates are sent to Reltio
    7. Check processing job status - validate if all entities were processed
    8. Get batch instance and validate completion status
    9. Get entities and validate update status (check if deleted identifiers are visible in Reltio)
  8. Remove all entities by crosswalk and all batch instances by id
" }, { "title": "Integration Test For COMPANY Model China", "pageID": "302681804", "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+China", "content": "
Test class | Test case | Flow
ChinaComplexEventCase.shouldCreateHCPAndConnectWithAffiliatedHCOByName
  1. Create HCO (AffiliatedHCO) and validate response
  2. Get entities with filter by HCO's Name and entityType
  3. Validate if exists
  4. Create HCP (V2Complex method)
    1. with not existing MainHCO
    2. with affiliatedHCO and existing HCO's Name
  5. Get HCP and validate
    1. Check if affiliatedHCO Uri equals created HCO uri (Workplace)
  6. Remove entities

shouldCreateHCPAndMainHCO
  1. Create HCO (AffiliatedHCO) and validate response
  2. Create HCP (V2Complex method)
    1. with AffiliatedHCO - set uri from previously created HCO
    2. with MainHCO without uri
  3. Get HCP and validate
    1. Check if affiliatedHCO Uri equals created HCO uri (Workplace)
    2. Validate Workplace attributes
  4. Remove entities

shouldCreateHCPAndAffiliatedHCO
  1. Create HCO (MainHCO) and validate response
  2. Create HCP (V2Complex method)
    1. with AffiliatedHCO without uri (not existing HCO)
    2. with MainHCO - set objectURI from previously created Main HCO
  3. Get HCP and validate
    1. Check if MainHCO Uri equals created HCO uri (MainWorkplace)
    2. Validate MainWorkplace attributes
  4. Remove entities

shouldCreateHCPAndConnectWithAffiliations
  1. Create HCO (MainHCO) and validate response
  2. Create HCO (AffiliatedHCO) and validate response
  3. Create HCP (V2Complex method)
    1. with AffiliatedHCO - set uri from previously created Affiliated HCO
    2. with MainHCO - set objectURI from previously created Main HCO
  4. Get HCP and validate
    1. Check if affiliatedHCO Uri equals created HCO uri (Workplace)
    2. Check if MainHCO Uri equals created HCO uri (MainWorkplace)
    3. Validate Workplace and MainWorkplace attributes
  5. Remove entities

shouldCreateHCPAndAffiliations
  1. Create HCP (V2Complex method)
    1. without AffiliatedHCO uri
    2. without MainHCO objectURI
  2. Get HCP and validate
    1. Check if Workplace is created and has correct attributes
    2. Check if MainWorkplace is created and has correct attributes
    3. Validate Workplace and MainWorkplace attributes
  3. Remove entities
ChinaSimpleEventCase.shouldPublishCreateHCPInIqiviaModel
  1. Create HCP in COMPANYModel (V2Simple method)
  2. Validate response
  3. Get HCP entity and validate attributes
  4. Wait for Kafka output event
  5. Validate event
    1. Validate attributes and check if event is in IqiviaModel
  6. Remove entities
ChinaMergeEntityTest
  1. Create HCP_1 (V2Complex method) and validate response
  2. Create HCP_2 (V2Complex method) and validate response
  3. Merge entities HCP_1 and HCP_2
  4. Get HCP by HCP_1 uri and check if exists
  5. Wait for Kafka event on merge response topic
  6. Validate Kafka event
  7. Remove entities
ChinaWorkplaceValidationEntityTest.shouldValidateMainHCO
  1. Create HCP (V2Complex method)
    1. with 2 affiliatedHCO which do not exist
    2. with 1 MainHCO which does not exist
  2. Get HCP entity and check if it exists
  3. Wait for Kafka event on response topic
  4. Validate Kafka event
    1. Validate MainWorkplace (1 exists)
    2. Validate Workplaces (2 exists)
    3. Validate MainHCO (1 exists)
    4. Assert MainWorkplace equals MainHCO
  5. Remove entities
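
The V2Complex create resolves each workplace in one of three ways: by an explicit HCO URI, by HCO name lookup against existing entities, or by creating the HCO on the fly when neither is given. A sketch of the payload variants behind the flows above; the field names mirror the steps, but the exact schema is an assumption:

// Variant 1: affiliated HCO resolved by existing URI, main HCO created on the fly.
String byUri = """
    {"hcp":{"firstName":"Li","lastName":"Wang"},
     "affiliatedHCO":{"uri":"entities/abc123"},
     "mainHCO":{"name":"Shanghai General Hospital"}}""";

// Variant 2: affiliated HCO resolved by name lookup against existing entities.
String byName = """
    {"hcp":{"firstName":"Li","lastName":"Wang"},
     "affiliatedHCO":{"name":"Shanghai General Hospital"}}""";

// Variant 3: nothing resolvable - both workplaces are created together with the HCP.
String createBoth = """
    {"hcp":{"firstName":"Li","lastName":"Wang"},
     "affiliatedHCO":{"name":"New Clinic"},
     "mainHCO":{"name":"New Hospital"}}""";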
" }, { "title": "Integration Test For COMPANY Model DCR2Service", "pageID": "302681794", "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+DCR2Service", "content": "
Test class | Test case | Flow
DCR2ServiceTest.shouldCreateHCPTest
  1. Create HCO and validate response
  2. Create DCR request (hcp-create)
  3. Send Apply Change request
  4. Get DCR status and validate
  5. Validate created entity
  6. Remove entities

shouldUpdateHCPChangePrimarySpecialtyTest
  1. Create HCP
  2. Create DCR request: update HCP Primary Speciality
  3. Validate DCR response
  4. Apply Change request
  5. Get DCR status and validate
  6. Get HCP and validate
  7. Get DCR and validate
  8. Remove all entities

shouldCreateHCOTest
  1. Create DCR Request (hco-create) and validate response
  2. Apply Change request
  3. Get DCR status and validate
  4. Get HCO and validate
  5. Get DCR and validate
  6. Remove all entities

shouldUpdateHCPChangePrimaryAffiliationTest
  1. Create HCO_1 and validate response
  2. Create HCO_2 and validate response
  3. Create HCP with affiliations and validate response
  4. Get HCO_1 and save COMPANYGlobalCustomerId
  5. Get HCP and save COMPANYGlobalCustomerId
  6. Get entities - search by HCO_1's COMPANYGlobalCustomerId and check if exists
  7. Get entities - search by HCP's COMPANYGlobalCustomerId and check if exists
  8. Create DCR Request and validate response: update HCP primary affiliation
  9. Apply Change request
  10. Get DCR status and validate
  11. Get HCP and validate
  12. Get DCR and validate
  13. Remove all entities

shouldUpdateHCPIgnoreRelation
  1. Create HCO_1 and validate response
  2. Create HCO_2 and validate response
  3. Create HCP with affiliations and validate response
  4. Get HCO_1 and save COMPANYGlobalCustomerId
  5. Get HCP and save COMPANYGlobalCustomerId
  6. Get entities - search by HCO_1's COMPANYGlobalCustomerId and check if exists
  7. Get entities - search by HCP's COMPANYGlobalCustomerId and check if exists
  8. Create DCR Request and validate response: ignore affiliation
  9. Apply Change request
  10. Get DCR status and validate
  11. Wait for RELATIONSHIP_CHANGED event
  12. Wait for RELATIONSHIP_INACTIVATED event
  13. Get HCP and validate
  14. Get DCR and validate
  15. Remove all entities

shouldUpdateHCPAddPrimaryAffiliationTest
  1. Create HCO and validate response
  2. Create HCP and validate response
  3. Create DCR Request: HCP update added new primary affiliation
  4. Validate DCR response
  5. Apply Change request
  6. Get DCR status and validate
  7. Get HCP and validate
  8. Get DCR and validate
  9. Remove all entities

shouldUpdateHCOAddAffiliationTest
  1. Create HCO_1 and validate
  2. Create HCO_2 and validate
  3. Create DCR Request: update HCO add other affiliation (OtherHCOtoHCOAffiliations)
  4. Validate DCR response
  5. Apply Change request
  6. Get DCR status and validate
  7. Get HCO's connections (OtherHCOtoHCOAffiliations) and validate
  8. Get DCR and validate
  9. Remove all entities

shouldInactivateHCP
  1. Create HCP and validate response
  2. Create DCR Request: Inactivate HCP
  3. Validate DCR response
  4. Apply Change request
  5. Get DCR status and validate
  6. Get HCP and validate
  7. Get DCR and validate
  8. Remove all entities

shouldUpdateHCPAddPrivateAddress
  1. Create HCP and validate response
  2. Create DCR Request: update HCP - add private address
  3. Validate DCR response
  4. Apply Change request
  5. Get DCR status and validate
  6. Get HCP and validate
  7. Get DCR and validate
  8. Remove all entities

shouldUpdateHCPAddAffiliationToNewHCO
  1. Create HCO and validate response
  2. Create HCP and validate response
  3. Create DCR Request: update HCP - add affiliation to new HCO
  4. Validate DCR response
  5. Apply Change request
  6. Get DCR status and validate
  7. Get HCP and validate
  8. Get HCO entity by crosswalk and save uri
  9. Get DCR and validate
  10. Remove all entities

shouldReturnValidationError
  1. Create DCR request with unknown entityUri
  2. Validate DCR response and check if REQUEST_FAILED

shouldCreateHCPOneKey
  1. Create HCP and validate response
  2. Create DCR Request: create OneKey HCP
  3. Validate DCR response
  4. Get DCR status and validate
  5. Get HCP and validate
  6. Get DCR and validate
  7. Remove all entities

shouldCreateHCPOneKeySpecialityMapping
  1. Create HCP and validate response
  2. Create DCR Request: create OneKey HCP with speciality value
  3. Validate DCR response
  4. Get DCR status and validate
  5. Get HCP and validate
  6. Get DCR and validate
  7. Remove all entities

shouldCreateHCPOneKeyRedirectToReltio
  1. Create HCP and validate response
  2. Create DCR Request: create OneKey HCP with speciality value "not found key"
  3. Validate DCR response
  4. Apply Change Request
  5. Get DCR status and validate
  6. Get HCP and validate
  7. Get DCR and validate
  8. Remove all entities

shouldCreateHCOOneKey
  1. Create HCO and validate response
  2. Create DCR Request: create OneKey HCO
  3. Validate DCR response
  4. Get DCR status and validate
  5. Get HCO and validate
  6. Get DCR and validate
  7. Remove all entities

shouldReturnMissingDataException
  1. Create DCR Request with missing data
  2. Validate DCR response: status = REQUEST_REJECTED and response has correct message

shouldReturnForbiddenAccessException
  1. Create DCR Request with forbidden access data
  2. Validate DCR response: status = REQUEST_FAILED and response has correct message

shouldReturnInternalServerError
  1. Create DCR Request with internal server error data
  2. Validate DCR response: status = REQUEST_FAILED and response has correct message
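
Every DCR2Service case above follows the same shape: create the DCR, apply it, then poll the status, with the negative cases asserting REQUEST_REJECTED or REQUEST_FAILED instead of applying. A sketch of the happy path, assuming /dcr endpoints on the Direct Channel base URL (BASE and http as in the earlier sketches; dcrPayload is the request body under test):

// Create -> apply -> poll status; endpoint paths and status strings are illustrative.
void applyDcrAndAwaitStatus(String dcrPayload) throws Exception {
    HttpResponse<String> created = http.send(HttpRequest.newBuilder(URI.create(BASE + "/dcr"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(dcrPayload)).build(),
            HttpResponse.BodyHandlers.ofString());
    assertEquals(200, created.statusCode());

    // Naive id extraction, good enough for the sketch.
    String dcrId = created.body().replaceAll(".*\"id\":\"([^\"]+)\".*", "$1");

    http.send(HttpRequest.newBuilder(URI.create(BASE + "/dcr/" + dcrId + "/apply"))
            .POST(HttpRequest.BodyPublishers.noBody()).build(),
            HttpResponse.BodyHandlers.ofString());

    String status;
    do {
        Thread.sleep(1_000);
        status = http.send(HttpRequest.newBuilder(URI.create(BASE + "/dcr/" + dcrId)).GET().build(),
                HttpResponse.BodyHandlers.ofString()).body();
    } while (status.contains("PENDING"));
    assertTrue(status.contains("APPLIED"));
}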
" }, { "title": "Integration Test For COMPANY Model Region AMER", "pageID": "302681796", "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+AMER", "content": "
Test class | Test case | Flow
MicroBrickTest.shouldCalculateMicroBricks
  1. Create HCP and validate response
  2. Wait for event on ChangeLog topic with specified country
  3. Get HCP entity and validate MicroBrick
  4. Update HCP with new zip codes and validate response
  5. Wait for event on ChangeLog topic with specified country
  6. Get HCP entity and validate MicroBrick
  7. Delete entities
ValidateHCPTest.validateHCPTest
  1. Create HCP and validate response status
  2. Create validation request with valid params
  3. Assert if response is ok and validation status is "Valid"

validateHCPTestNotValid
  1. Create HCP and validate response status
  2. Create validation request with not valid params
  3. Assert if response is ok and validation status is "NotValid"

validateHCPLookupTest
  1. Create HCP with "Speciality" attribute and validate response status
  2. Create lookup validation request with "Speciality" attribute
  3. Assert if response is ok and validation status is "Valid"
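
The validation endpoint is a plain request/response check: send the HCP fields to validate and assert the returned status. A sketch, assuming a /hcp/validate path and a validationStatus field in the response (BASE and http as before):

void validateHcp() throws Exception {
    String request = """
        {"firstName":"John","lastName":"Doe","country":"US","Speciality":"Cardiology"}""";
    HttpResponse<String> resp = http.send(HttpRequest.newBuilder(URI.create(BASE + "/hcp/validate"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(request)).build(),
            HttpResponse.BodyHandlers.ofString());
    assertEquals(200, resp.statusCode());
    // "Valid" for correct params, "NotValid" for the negative case above
    assertTrue(resp.body().contains("\"validationStatus\":\"Valid\""));
}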
" }, { "title": "Integration Test For COMPANY Model Region EMEA", "pageID": "347655258", "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+EMEA", "content": "
Test class | Test case | Flow
AutofillTypeCodeTest.shouldProcessNonPrescriber
  1. Create HCP entity
  2. Validate type code value is Non-Prescriber on output topic
  3. Inactivate HCP entity
  4. Validate type code value is Non-Prescriber on history inactive topic
  5. Delete entity
shouldProcessPrescriber
  1. Create HCP entity
  2. Validate type code value is Prescriber on output topic
  3. Inactivate HCP entity
  4. Validate type code value is Prescriber on history inactive topic
  5. Delete entity
shouldProcessMerge
  1. Create first HCP entity
  2. Validate type code is Prescriber on output topic
  3. Create second HCP entity
  4. Validate type code is Non-Prescriber on output topic
  5. Merge entities
  6. Validate type code is Prescriber on output topic
  7. Inactivate first entity
  8. Validate type code is Non-Prescriber
  9. Delete second entity crosswalk
  10. Validate entity has end date on output topic
  11. Validate type code value is Prescriber on output topic
  12. Delete entity
shouldNotUpdateTypeCode
  1. Create HCP entity with correct type code value
  2. Validate there is no type code value provided by HUB technical source on output topic
  3. Delete entity
shouldProcessLookupErrors
  1. Create HCP entity with invalid sub type code and speciality values
  2. Validate type code value is concatenation of sub type code and speciality values on output topic
  3. Inactivate HCP entity
  4. Validate type code value is concatenation of sub type code and speciality values on history inactive topic
  5. Delete entity
" }, { "title": "Integration Test For COMPANY Model Region US", "pageID": "302681784", "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+US", "content": "
Test class | Test case | Flow
CRUDMCOAsync.test
  1. Send MCORequest to Kafka topic
  2. Wait for created event
  3. Validate created MCO
  4. Update MCO's name
  5. Send MCORequest to Kafka topic
  6. Wait for updated event
  7. Validate updated entity
  8. Delete all entities
TestBatchMCOTest.testBatchWorkflowTest
  1. Create batch instance: testBatch
  2. Create MCO_LOADING stage
  3. Send MCO entities to MCO_LOADING stage
  4. Finish MCO_LOADING stage
  5. Check sender job status - get batch instance and validate if all entities are created
  6. Check processing job status - get batch instance and validate if all entities are processed
  7. Get batch instance and check batch completion status
  8. Get entities by crosswalk and check if all are created
  9. Remove all entities

testBatchWorkflowTest_SendEntities_Update_and_MD5Skip
  1. Create batch instance: testBatch
  2. Create MCO_LOADING stage
  3. Send MCO entities to MCO_LOADING stage
  4. Finish MCO_LOADING stage
  5. Check sender job status - get batch instance and validate if all entities are created
  6. Check processing job status - get batch instance and validate if all entities are processed
  7. Get batch instance and check batch completion status
  8. Get entities by crosswalk and check if all are created
  9. Create batch instance: testBatch
  10. Create MCO_LOADING stage
  11. Send MCO entities to MCO_LOADING stage (skip 2 entities - MD5 checksum changed)
  12. Finish MCO_LOADING stage
  13. Check sender job status - get batch instance and validate if all entities are created
  14. Check processing job status - get batch instance and validate if all entities are processed
  15. Get batch instance and check batch completion status
  16. Get entities by crosswalk and check if all are created
  17. Remove all entities
MCOBundlingTest.test
  1. Send multiple MCORequest to kafka topic
  2. Wait for created event for every MCORequest
  3. Check if the number of received events equals the number of sent requests
  4. Set crosswalk's delete date on now for every request
  5. Send all updated MCORequests to Kafka topic
  6. Wait for deleted event for every MCORequest
EntityEventChecksumTest.test
  1. Create HCP
  2. Wait for HCP_CREATED event
  3. Get created HCP by uri and check if exists
  4. Find the created HCP by id in mongo and save its "checksum"
  5. Update HCP's attribute and send request
  6. Wait for HCP_CHANGED event
  7. Find the created HCP by id in mongo and save its "checksum"
  8. Check if old checksum is different than current checksum
  9. Remove HCP
  10. Wait for HCP_REMOVED event
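
The checksum assertion reads the cached profile document from the HUB Store before and after the update and compares the stored checksums. A sketch with the MongoDB sync driver; the connection string, database, collection, and field names are assumptions:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import static com.mongodb.client.model.Filters.eq;
import static org.junit.jupiter.api.Assertions.assertNotEquals;

class ChecksumSketch {

    void assertChecksumChanged(String hcpId) throws Exception {
        try (MongoClient mongo = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> history =
                    mongo.getDatabase("mdmhub").getCollection("entityHistory");

            String before = history.find(eq("_id", hcpId)).first().getString("checksum");
            // ... update the HCP through the API and wait for the HCP_CHANGED event ...
            String after = history.find(eq("_id", hcpId)).first().getString("checksum");

            assertNotEquals(before, after); // any attribute change must produce a new checksum
        }
    }
}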
EntityEventsTest.test
  1. Create MCO
  2. Wait for ENTITY_CREATED event
  3. Update MCO
  4. Wait for ENTITY_CHANGED event
  5. Remove MCO
  6. Wait for ENTITY_REMOVED event
HCPEventsMergeTest.test
  1. Create HCP_1 and validate response
  2. Wait for HCP_CREATED event
  3. Get HCP_1 and validate attributes
  4. Create HCP_2 and validate response
  5. Get HCP_2 and validate attributes
  6. Merge HCP_1 and HCP_2
  7. Wait for HCP_MERGED event
  8. Get HCP_2 and validate attributes
  9. Delete HCP_1 crosswalk
  10. Wait for HCP_CHANGED event and validate HCP_URI
  11. Delete HCP_1 and HCP_2 crosswalks
  12. Wait for HCP_REMOVED event
  13. Delete HCP_2 crosswalk
HCPEventsNotTrimmedMergeTest.test
  1. Create HCP_1 and validate response
  2. Wait for HCP_CREATED event
  3. Get HCP_1 and validate attributes
  4. Create HCP_2 and validate response
  5. Get HCP_2 and validate attributes
  6. Merge HCP_1 and HCP_2
  7. Wait for HCP_MERGED event and validate attributes
  8. Get HCP_2 and validate attributes
  9. Delete HCP_1 crosswalk
  10. Wait for HCP_CHANGED event and validate HCP_URI
  11. Delete HCP_1 and HCP_2 crosswalks
  12. Wait for HCP_REMOVED event
  13. Delete HCP_2 crosswalk
MCOEventsTest.test
  1. Create MCO and validate response
  2. Wait for MCO_CREATED event and validate uris
  3. Update MCO's name and validate response
  4. Wait for MCO_CHANGED event and validate uris
  5. Delete MCO's crosswalk and validate response status
  6. Wait for MCO_REMOVED event and validate uris
  7. Remove entities
PotentialMatchLinkCleanerTest
  1. Create HCO: Start FLEX
  2. Get HCO and validate
  3. Create HCO: End ONEKEY
  4. Get HCO and validate
  5. Get matches by Start FLEX HCO entityId
  6. Validate matches
  7. Get not matches by Start FLEX HCO entityId
  8. Validate - not match does not exist
  9. Get Start FLEX HCO from mongo entityMatchesHistory collection
  10. Validate matches from mongo
  11. Create DerivedAffiliation - relation between FLEX and HCO
  12. Get matches by Start FLEX HCO entityId
  13. Check that there are no matches
  14. Get not matches by Start FLEX HCO entityId
  15. Validate not matches response
  16. Remove all entities
UpdateMCOTest.test1_createMCOTest
  1. Create MCO and validate response
  2. Get MCO by uri and validate
  3. Remove entities

test2_updateMCOTest
  1. Create MCO and validate response
  2. Update MCO's name
  3. Get MCO by uri and validate
  4. Remove entities

test3_createMCOBatchTest
  1. Create multiple MCOs using postBatchMCO
  2. Validate response
  3. Remove entities
UpdateUsageFlagsTest.test1_updateUsageFlags
  1. Create HCP and validate response
  2. Get entities using filter (Country & Uri) and validate if HCP exists
  3. Get entities using filter (Uri) and validate if HCP exists
  4. Update usage flags and validate response
  5. Get entity and validate updated usage flags

test2_updateUsageFlags
  1. Create HCO and validate response
  2. Get entities using filter (Country & Uri) and validate if HCO exists
  3. Get entities using filter (Uri) and validate if HCO exists
  4. Update usage flags and validate response
  5. Get entity and validate updated usage flags

test3_updateUsageFlags
  1. Create HCO with 2 addresses (COMPANYAddressId=3001 and 3002) and validate response
  2. Get entities using filter (Country & Uri) and validate if HCO exists
  3. Get entities using filter (Uri) and validate if HCO exists
  4. Update usage flags (COMPANYAddressId = 3002, action=set) and validate response
  5. Update usage flags (COMPANYAddressId = 3001, action=set) and validate response
  6. Get entity and validate updated usage flags
  7. Remove usage flag and validate response
  8. Get entity and validate updated usage flags
  9. Clear usage flag and validate response
  10. Get entity and validate updated usage flags
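
The three usage-flag actions map to three small requests against the same endpoint. A sketch of the payloads, with endpoint and field names assumed from the steps above (BASE and http as in the earlier sketches; uri is the entity URI):

// set a flag on one address, remove it, and finally clear all flags
void exerciseUsageFlagActions(String uri) throws Exception {
    String set    = "{\"COMPANYAddressId\":\"3002\",\"action\":\"set\",\"flag\":\"PRIMARY\"}";
    String remove = "{\"COMPANYAddressId\":\"3002\",\"action\":\"remove\",\"flag\":\"PRIMARY\"}";
    String clear  = "{\"action\":\"clear\"}";

    for (String body : new String[] {set, remove, clear}) {
        HttpRequest req = HttpRequest.newBuilder(
                URI.create(BASE + "/entities/" + uri + "/usageFlags"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body)).build();
        assertEquals(200, http.send(req, HttpResponse.BodyHandlers.ofString()).statusCode());
        // after each call: get the entity and validate the updated usage flags
    }
}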
" }, { "title": "MDM Factory", "pageID": "164470002", "pageLink": "/display/GMDM/MDM+Factory", "content": "\n

MDM Client Factory was implemented in MDM manager to select a specific MDM client (Reltio/Nucleus) based on a client selector configuration. The factory allows registering multiple MDM clients at runtime and choosing one based on country. To register the factory, the following example configuration needs to be defined:

1. clientDecisionTable

Based on this configuration, a specific request will be processed by Reltio or Nucleus. Each selector has to define a default view for a specific client. For example, 'ReltioAllSelector' defines a default and a PforceRx view, which correspond to two factory clients with different Reltio user names.
""

2. mdmFactoryConfig

This map contains the MDM Factory clients. Each client has a unique name and a configuration with URL, username, ●●●●●●●●●●●● other specific values defined for the client. The unique name is used in the decision table to choose a factory client based on the country in the request.
\n \"\"

" }, { "title": "Mulesoft integration", "pageID": "447577227", "pageLink": "/display/GMDM/Mulesoft+integration", "content": "

Description

The Mulesoft platform is an integration portal used to integrate clients from inside and outside of the COMPANY network with MDM Hub. 

Mule integration

API Endpoints

MuleSoft API Catalog:

\"\"

Requests routing on Mule side

The values below can change. Please check the source: MDM Tenant URL Configuration - AIS Application Integration Solutions Mule - Confluence

API Country Mapping

Tenant | Dev | Test (QA) | Stage | Prod

US
  Dev: US
  Test (QA): US
  Stage: US
  Prod: US

EMEA
  Dev: UK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,ME
  Test (QA): same list as Dev
  Stage: same list as Dev
  Prod: UK,GB,IE,AE,AO,BF,BH,BI,BJ,BW,CD,CF,CG,CI,CM,CV,DJ,DZ,EG,ET,GA,GH,GM,GN,GQ,GW,IQ,IR,JO,KE,KW,LB,LR,LS,LY,MA,MG,ML,MR,MU,MW,NA,NG,OM,QA,RW,SA,SD,SL,SN,SY,SZ,TD,TG,TN,TZ,UG,YE,ZA,ZM,ZW,FR,DE,IT,ES,AD,BL,GF,GP,MC,MF,MQ,NC,PF,PM,RE,TF,WF,YT,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,IL

AMER
  Dev: CA,BR,AR,UY,MX,CL,CO,PE,BO,EC
  Test (QA): CA,BR,AR,UY,MX,CL,CO,PE,BO,EC
  Stage: CA,BR,AR,UY,MX,CL,CO,PE,BO,EC
  Prod: CA,BR,AR,UY,MX

APAC
  Dev: AU,NZ,IN,KR,JP,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN,BD,NP,LK,MN
  Test (QA): AU,NZ,IN,KR,JP,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN,NP,LK,MN
  Stage: KR,JP,AU,NZ,IN,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN,NP,LK,MN
  Prod: KR,JP,AU,NZ,IN,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN

EXUS (IQVIA)
  Dev: Everything else
  Test (QA): Everything else
  Stage: Everything else
  Prod: Everything else

API URLs

MuleSoft MDM HCP Reltio API URLs

Environment | Cloud API | Ground API
Dev | https://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-dev | http://mule4api-comm-amer-dev.COMPANY.com/mdm-hcp-reltio-v1/
Test | https://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-tst/ | http://mule4api-comm-amer-tst.COMPANY.com/mdm-hcp-reltio-v1
Stage | https://muleapic-amer-stg.COMPANY.com/mdm-hcp-reltio-dlb-v1-stg | http://mule4api-comm-amer-stg.COMPANY.com/mdm-hcp-reltio-v1
Prod | https://muleapic-amer.COMPANY.com/mdm-hcp-reltio-dlb-v1 | http://mule4api-comm-amer.COMPANY.com/mdm-hcp-reltio-v1
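
A minimal sketch of calling the Dev cloud API through Mule with an OAuth2 bearer token; the token acquisition and the resource path after the base URL are environment-specific assumptions (see the OAuth2 reference below):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class MuleCallSketch {
    public static void main(String[] args) throws Exception {
        String accessToken = "<token obtained via Ping Federate OAuth2>"; // placeholder
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(
                // base URL from the table above; the /entities/... path is illustrative
                "https://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-dev/entities/abc123"))
                .header("Authorization", "Bearer " + accessToken)
                .GET().build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}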

Integrations

Integrations can be found under the URL below:

MDM - AIS Application Integration Solutions Mule - Confluence

Mule documentation reference

Solution Profiles/MDM 

https://confluence.COMPANY.com/display/AAISM/MDM

MDM HCP Reltio API

https://confluence.COMPANY.com/display/AAISM/MDM+HCP+Reltio+API

MDM Tenant URL Configuration


https://confluence.COMPANY.com/display/AAISM/MDM+Tenant+URL+Configuration

Using OAuth2 for API Authentication

Describes how to use OAuth2

How to use an API

Describes how to request access to an API and how to use it

Consumer On-boarding

Describes the consumer onboarding process


" }, { "title": "Multi view", "pageID": "164470089", "pageLink": "/display/GMDM/Multi+view", "content": "\n

During a getEntity or getRelation operation, "ViewAdapterService" is activated. This feature consists of two steps:

1. Adapt

Based on the following map, each entity will be checked before being returned:
""
This means that for the PforceRx view, only entities with source CRMMI will be returned. Otherwise the getEntity or getRelation operation will return a "404" EntityNotFound exception.
When the entity can be returned successfully, the next step starts:

2. Filter

Each entity is filtered based on the attribute URIs provided in the crosswalks.attribute list.
The process takes each attribute from the entity and checks whether that attribute exists in the crosswalk attribute list allowed for the specific source. When the attribute is not on the list, it is removed from the entity. This way the entity returned for a specific view carries only the attributes allowed for that source.
MDM publishing HUB has an additional configuration for the multi-view process. When an entity with a specific country matches the configuration, the getEntity operation is invoked with country and view name parameters. The MDM gateway Factory is then activated, and the entity is returned from a specific Reltio instance and saved in a mongo collection suffixed with the view name.
""
For this configuration, entities from country BR will be saved in the entityHistory and entityHistory_PforceRx mongo collections. In the view collection, entities will be adapted and filtered by the View Adapter Service.
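
A condensed sketch of the two-step adapt/filter logic described above. The entity model is reduced to a map of attribute URI -> value and the allowed-source map is illustrative; the real service works on full entity documents:

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

class ViewAdapterSketch {

    // Adapt step: view name -> sources that may be returned at all (illustrative entry).
    static final Map<String, Set<String>> VIEW_SOURCES = Map.of("PforceRx", Set.of("CRMMI"));

    static Map<String, Object> adaptAndFilter(String view, String source,
                                              Map<String, Object> attributes,
                                              Set<String> allowedAttributeUris) {
        // Adapt: an entity whose source is not allowed for the view yields 404 EntityNotFound.
        if (!VIEW_SOURCES.getOrDefault(view, Set.of()).contains(source)) {
            throw new IllegalStateException("404 EntityNotFound");
        }
        // Filter: keep only attributes present in the crosswalk's allowed attribute list.
        Map<String, Object> filtered = new HashMap<>(attributes);
        filtered.keySet().retainAll(allowedAttributeUris);
        return filtered;
    }
}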

" }, { "title": "Playbook", "pageID": "218437749", "pageLink": "/display/GMDM/Playbook", "content": "

This document describes how to request access to different sources. 

" }, { "title": "Issues list", "pageID": "218441145", "pageLink": "/display/GMDM/Issues+list", "content": "" }, { "title": "Add a user to a new group.", "pageID": "218438493", "pageLink": "/pages/viewpage.action?pageId=218438493", "content": "
  1. To create a request you need to use the link: https://requestmanager1.COMPANY.com/Group/
  2. Then choose as follows:
  3. ""
  4. Then search for a group and click Request Access:
  5. ""
  6. As the last step, choose the 'View Cart' button and submit your request. 
" }, { "title": "Snowflake new schema/group/role creation", "pageID": "218437752", "pageLink": "/pages/viewpage.action?pageId=218437752", "content": "
  1. Connect with: https://digitalondemand.COMPANY.com/
  2. Click 'Get Support' button.

\"\"

3. Then click that one:

\"\"

4. And as a next step:

\"\"

5. Now you are on the ticket creation page. The most important thing is to put the proper queue name in the detailed description field. For example, the queue name for Snowflake issues looks like this: gbl-atp-commercial snowflake domain admin. I recommend placing it as the first line, followed by the request text.

\"\"

6. Here is a typical request for a new schema:


gbl-atp-commercial snowflake domain admin
Hello,\nI'd like to ask to create a new schema and new roles on Snowflake side.\nNew schema name: PTE_SL\nEnvironments: DEV, QA, STG, PROD, details below:\nDEV\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name:COMM_GBL_MDM_DMART_DEV_DB\nQA\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_QA_DB\nSTG\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name:COMM_GBL_MDM_DMART_STG_DB\nPROD\t\nSnowflake instance: https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_PROD_DB\n\nAdd new roles with names (one for each environment): COMM_GBL_MDM_DMART_[Dev/QA/STG/Prod]_PTE_ROLE\nwith read-only acces on Customer_SL & PTE_SL\nand\nadd a roles with full acces to new schema with names (one for each environment) COMM_GBL_MDM_DMART_[Dev/QA/STG/Prod]_DEVOPS_ROLE - like in customer_sl schema


7. If you are also requesting a new role - like in the example above - you need to request that this role be added to AD. In this case you need to provide primary and secondary owner details for all groups to be created.
You can send the primary and secondary owner data, or write that the ownership should be set like in another existing role.

8. Ticket example: https://digitalondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=RF3490743

" }, { "title": "AWS ELB NLB configuration request", "pageID": "218440089", "pageLink": "/display/GMDM/AWS+ELB+NLB+configuration+request", "content": "
  1. To create a ticket use this link: http://btondemand.COMPANY.com/
  2. Please follow this link if you want to know all the specific steps and click: Snowflake new schema/group/role creation
  3. Remember to add a proper queue name!
  4. In the request, please attach the full list of general information:
    1. VPC
    2. ELB Type
    3. Health Checks
    4. Allowed incoming traffic from
  5. Then please add the specific ELB/NLB information FOR EACH NLB/ELB you request - even if the information is the same and obvious:
    1. Listener
    2. Target Group 
    3. No of ELB
    4. Type
    5. Environment
    6. ELB Health Check
    7. Target Group additional information, e.g.: 1 Target group with 3 servers:port
    8. Where to add a Listener, e.g.: Listener to be added in ELB #Listener Name
    9. Security Group information
    10. Additional information, e.g.: IP ●●●●●●●●●●●● mdm-event-handler (Prod) should be able to access this ELB
  6. Ticket example: http://btod.COMPANY.com/My-Tickets/Ticket-Details?ticket=IM40983303
  7. E.g. request text:


VPC: Public
ELB Type: Network Load Balancer
Health Checks: Passive
Allowed incoming traffic from:
●●●●●●●●●●●● mdm-event-handler (Prod)

1. API
Listener:
api-emea-prod-gbl-mdm-hub-ext.COMPANY.com:8443

Target Group:
euw1z2pl116.COMPANY.com:8443
euw1z1pl117.COMPANY.com:8443
euw1z2pl118.COMPANY.com:8443

2. KAFKA

2.1
Listener:
kafka-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095
TG:
euw1z2pl116.COMPANY.com:9095
euw1z1pl117.COMPANY.com:9095
euw1z2pl118.COMPANY.com:9095

2.2
Listener:
kafka-b1-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095
TG:
euw1z2pl116.COMPANY.com:9095

2.3
Listener:
kafka-b2-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095
TG:
euw1z1pl117.COMPANY.com:9095

2.4
Listener:
kafka-b3-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095
TG:
euw1z2pl118.COMPANY.com:9095

GBL-BTI-EXT HOSTING AWS CLOUD
" }, { "title": "To open a traffic between hosts", "pageID": "218441143", "pageLink": "/display/GMDM/To+open+a+traffic+between+hosts", "content": "
  1. To create a ticket, use this link: http://btondemand.COMPANY.com/
  2. Please follow this link if you want to know all the specific steps and click: Snowflake new schema/group/role creation
  3. Remember to add a proper queue name!
  4. In the request, please attach the full list of general information:
    1. Source
      1. IP range
      2. IP range
      3. ..
      4. ..
    2. Targets - remember to add each target's instances
      1. Target1
        1. Name
        2. Cname
        3. Address
        4. Port
      2. Target2
        1. ..
        2. ..
        3. ..
      3. ..
  5. Example ticket: http://btod.COMPANY.com/My-Tickets/Ticket-Details?ticket=IM41240161
  6. Example request text:


Source:
1. IP range: ●●●●●●●●●●●●●
2. IP range: ●●●●●●●●●●●●●

Target1:
LoadBalancer:
gbl-mdm-hub-us-prod.COMPANY.com  canonical name = internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com.
Name:   internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com
Address: ●●●●●●●●●●●●●●
Name:   internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com
Address: ●●●●●●●●●●●●●●
Target port: 443

Target2:
hosts:
amraelp00007848.COMPANY.com (●●●●●●●●●●●●●●)
amraelp00007849.COMPANY.com (●●●●●●●●●●●●●)
amraelp00007871.COMPANY.com (●●●●●●●●●●●●●●)
target port: 8443
" }, { "title": "Support information with queue and DL names", "pageID": "218438484", "pageLink": "/display/GMDM/Support+information+with+queue+and+DL+names", "content": "

There are a few places where you can send your request:

  1. https://digitalondemand.COMPANY.com/getsupport
  2. https://requestmanager.COMPANY.com/

Caution! 

When we are adding a new client to our architecture, it is a MUST to get a support queue from them.

Support queues

System/component/area name | Dedicated queue | Support DL | Additional notes
Rapid, Digital Labs, GCP etc | GBL-EPS-CLOUD OPS FULL SUPPORT | EPS-CloudOps@COMPANY.com | AWS Global, EMEA environments
IOD AWS Team | GBL-BTI-IOD AWS FULL SUPPORT | EPS-CloudOps@COMPANY.com (same as EPS, not a mistake) | Rotating AWS keys, AWS GBL US, AWS FLEX US
IOD | GBL-BTI-IOD FULL OS SUPPORT (VMC) | | VMware Cloud
FLEX Team | GBL-F&BO-MAST AMM SUPPORT | DL-CBK-MAST@COMPANY.com | Data, file transfer issues in US FLEX environments
SAP Interface Team (FLEX) | GBL-SS SAP SALES ORDER MGMT | | Queries regarding SAP FLEX input files
SAP Master Data Team (FLEX) | | Dianna.OConnell@COMPANY.com | Queries regarding data in SAP FLEX
Network Team | GBL-NETWORK DDI | | All domain and DNS changes
Firewall Team | GBL-NETWORK ECS | GBL-NETWORK-SCS@COMPANY.com | "Big" firewall changes
Snowflake | GBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMIN | |
MDM Hub - non-prod | GBL-ADL-ATP GLOBAL MDM - HUB DEVOPS | DL-ATP_MDMHUB_SUPPORT@COMPANY.com |
MDM Hub - prod | GBL-ADL-ATP GLOBAL MDM - HUB DEVOPS | DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com |
PDKS | GBL-BAP-Kubernetes Service L2 | PDCSOps@COMPANY.com | PDKS Kubernetes cluster, i.e. new MDM Hub Amer NPROD. Go to http://containers.COMPANY.com/ "PDKS Get Help" for details.
PDKS Engineering Team | GBL-BTI-SYSTEMS ENGINEERING BTCS | DL-PDCS-ADMIN@COMPANY.com | PDKS Kubernetes - for environment provisioning/modification issues with CloudBrokerage/IOD
AMER/APAC/EMEA/GBLUS Reltio - COMPANY | GBL-ADL-ATP GLOBAL MDM - RELTIO | DL-ADL-ATP-GLOBAL_MDM_RELTIO@COMPANY.com | Team responsible for Reltio and ETL batch loads.
GBL/USFLEX Reltio - IQVIA | GBL-MDM APP SUPPORT | COMPANY-MDM-Support@iqvia.com; DL-Global-MDM-Support@COMPANY.com |
Reltio consulting | N/A | sumit.singh@reltio.com; Sumit.Singh@COMPANY.com | Sumit Singh - Reltio consulting (NO support). It is not a support contact; we can use it for technical issues (API implementation etc.)
Reltio UI with data access | use request manager: https://requestmanager.COMPANY.com/ | | Reltio Commercial MDM - GBLUS; Reltio Customer MDM - GBL
Ping Federate | | DL-CIT-PXEDOperations@COMPANY.com | Ping Federate/OAuth2 support
MAPP Navigator | GBL-FBO-MAPP NAVIGATOR HYPERCARE | DL-BTAMS-MAPP-Navigator@COMPANY.com (rarely responds) | MAPP Nav issues
Harmony Bitbucket | GBL-CBT-GBI HARMONY SERVICES | DL-GBI-Harmony-Support@COMPANY.com | Confluence page: ATP Harmony Service SD
Confluence, Jira | GBL-DA-DEVSECOPS TOOLS SUPPORT | DL-SESRM-ATLASSIAN-SUPPORT@COMPANY.com |
Artifactory | GBL-SESRM-ARTIFACTORY SUPPORT | DL-SESRM-ARTIFACTORY-SUPPORT@COMPANY.com |
Mule integration team support | DL-AIS Mule Integration Support | DL-AIS-Mule-Integration-Support@COMPANY.com | Used to integrate with the Mule proxy
VOD DCR | | Laurie.Koudstaal@COMPANY.com | POC if Veeva did not send an input file for the VOD DCR process for 24 hours

Example: a description of how to raise a request via https://digitalondemand.COMPANY.com/ for a ticket assigned to one of the groups above: Snowflake new schema/group/role creation

" }, { "title": "Global Clients", "pageID": "310963401", "pageLink": "/display/GMDM/Global+Clients", "content": "


Client | Contact
CICR | Probably Amish
ADTS | DL-BTAMS-ENGAGE-PLUS@COMPANY.com
EASI |
ENGAGE |
ESAMPLES | Somya.Jain@COMPANY.com; Vijay.Bablani@COMPANY.com; Lori.Reynolds@COMPANY.com
GANT | Gangadhar.Nadpolla@COMPANY.com
GRACE | Cory.Arthus@COMPANY.com
GRV | vikas.verma@COMPANY.com; Luther Chris <chris.luther@COMPANY.com>; Matej.Dolanc@COMPANY.com
JO | Shweta.Kulkarni@COMPANY.com
MAP | DL-BT-Production-Engineering@COMPANY.com; Matej.Dolanc@COMPANY.com
MAPP | DL-BTAMS-MAPP-Navigator@COMPANY.com; Rajesh.K.Chengalpathy@COMPANY.com
MEDIC | DL-F&BO-MEDIC@COMPANY.com
MULE | DL-AIS-Mule-Integration-Support@COMPANY.com; Amish.Adhvaryu@COMPANY.com
ODS | DL-GBI-PFORCERX_ODS_Support@COMPANY.com
ONEMED | Marsha.Wirtel@COMPANY.com; AnveshVedula.Chalapati@COMPANY.com
PFORCEOL | Christopher.Fani@COMPANY.com
VEEVA_FIELD |
PFORCERX | NagaJayakiran.Nagumothu@COMPANY.com; dl-pforcerx-support@COMPANY.com
PTRS | Sagar.Bodala@COMPANY.com; bhushan.shanbhag@COMPANY.com
JAPAN DWH | DL-GDM-ServiceOps-Commercial_APAC@COMPANY.com; DL-ATP-SERVICEOPS-JPN-DATALAKE@COMPANY.com
CHINA | Chen, Yong <Yong.Chen@COMPANY.com>; QianRu.Zhou@COMPANY.com
KOL_ONEVIEW | DL-SFA-INF_Support_PforceOL@COMPANY.com; Solanki, Hardik (US - Mumbai) <hsolanki@COMPANY.com>; Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>
NEXUS | SriVeerendra.Chode@COMPANY.com; DL-Acc-GBICC-Team@COMPANY.com
IMPROMPTU | Probably Amish
CDW | Narayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>; Balan, Sakthi <Sakthi.Balan@COMPANY.com>; Raman, Krishnan <Krishnan.Raman@COMPANY.com>
ICUE | Brahma, Bagmita <Bagmita.Brahma2@COMPANY.com>; Solanki, Hardik <Hardik.Solanki@COMPANY.com>; Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>
EVENTHUB |


SNOWFLAKE
Client | Contact
C360 | DL-C360_Support@COMPANY.com
PT&E | DL-PTE-Batch-Team@COMPANY.com; Drabold, Erich <Erich.Drabold@COMPANY.com>
DQ_OPS | markus.henriksson@COMPANY.com; dl-atp-dq-ops@COMPANY.com


accenture | DL-Acc-GBICC-Team@COMPANY.com


Big bosses | Pratap.Deshmukh@COMPANY.com; Mikhail.Komarov@COMPANY.com; Rafael.Aviles@COMPANY.com
" }, { "title": "How to login to Service Manager", "pageID": "218448126", "pageLink": "/display/GMDM/How+to+login+to+Service+Manager", "content": "

How to add a user to Service Manager tool

  1. Choose link: https://smweb.COMPANY.com/SCAccountRequest.aspx#/search
  2. Find yourself
    \"\"
  3. Click "Next >>"
  4. Choose the proper role: Service desk analyst – and click "Needs training"
    \"\"
  5. Once your training is completed, choose the groups you want to be added to:
    1. GBL-ADL-ATP GLOBAL MDM - HUB DEVOPS
  6. You do it here:
    \"\"
  7. Please remember that after you click "Add selected group to cart" there is a second approval step – click "SUBMIT".
  8. Once permissions are granted, you can explore Service Manager here: https://sma.COMPANY.com/sm/index.do
" }, { "title": "How to Escalate btondemand Ticket Priority", "pageID": "218448925", "pageLink": "/display/GMDM/How+to+Escalate+btondemand+Ticket+Priority", "content": "

Below is a copy of: AWS Rapid Support → How to Escalate Ticket Priority

How to Escalate Ticket Priority

Tickets will be opened as low priority by default and response time will align to the restoration and resolution times listed in the SLA below. If your request's priority needs to be changed, follow these instructions:

  1. Use the Chat function at BT On Demand (or call the Service Desk at 1-877-733-4357)
    1. Select Get Support
    2. Select "Click here to continue without selecting a ticket option."
    3. Select Chat
  2. Provide the existing ticket number you already opened
  3. Ask that ticket Priority be raised to Medium, High or Critical based on the issue and utilize one of the following key phrases to help set priority:
    1. Issue is Affecting Production Application
    2. Product Quality is being impacted
    3. Batch is unable to proceed
    4. Life safety or physical security is impacted
    5. Development work stopped awaiting resolution
" }, { "title": "How to get AWS Account ID", "pageID": "218453784", "pageLink": "/display/GMDM/How+to+get+AWS+Account+ID", "content": "

MDM Hub components are deployed in different AWS Accounts. In a ticket support process, you might be asked about the AWS Account ID of the host, load balancer, or other resources. You can get it quickly in at least two ways described below.

Using AWS Console

In AWS Console: http://awsprodv2.COMPANY.com/ (How to access AWS Console) you can find the Account ID in any resource's Amazon Resource Name (ARN).

\"\"

Using curl

SSH to a host and run this curl command, same for all AWS accounts:

[ec2-user@euw1z2pl116 ~]$ curl http://169.254.169.254/latest/dynamic/instance-identity/document
{
"accountId" : "432817204314",
"architecture" : "x86_64",
"availabilityZone" : "eu-west-1b",
"billingProducts" : null,
"devpayProductCodes" : null,
"marketplaceProductCodes" : null,
"imageId" : "ami-05c4f918537788bab",
"instanceId" : "i-030e29a6e5aa27e38",
"instanceType" : "r5.2xlarge",
"kernelId" : null,
"pendingTime" : "2021-12-21T06:07:12Z",
"privateIp" : "10.90.98.178",
"ramdiskId" : null,
"region" : "eu-west-1",
"version" : "2017-09-30"
}
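If you only need the account ID, it can be extracted directly (a sketch assuming python3 is available on the host, as it typically is on Amazon Linux):

curl -s http://169.254.169.254/latest/dynamic/instance-identity/document \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['accountId'])"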

" }, { "title": "How to push Docker image to artifactory.COMPANY.com", "pageID": "218458682", "pageLink": "/display/GMDM/How+to+push+Docker+image+to+artifactory.COMPANY.com", "content": "

I am using the AKHQ image as an example.

Login to artifactory.COMPANY.com

  1. Log in with COMPANY credentials: https://artifactory.COMPANY.com/artifactory/
  2. Generate Identity Token: https://artifactory.COMPANY.com/ui/admin/artifactory/user_profile
  3. Use COMPANY username and generated Identity Token in "docker login artifactory.COMPANY.com"
marek@CF-19CHU8:~$ docker login artifactory.COMPANY.com
Authenticating with existing credentials...
Login Succeeded

Pull, tag, and push

marek@CF-19CHU8:~$ docker pull tchiotludo/akhq:0.14.1
0.14.1: Pulling from tchiotludo/akhq
...
Digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9c
Status: Downloaded newer image for tchiotludo/akhq:0.14.1
docker.io/tchiotludo/akhq:0.14.1
marek@CF-19CHU8:~$ docker tag tchiotludo/akhq:0.14.1 artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1
marek@CF-19CHU8:~$ docker push artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1
The push refers to repository [artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq]
0.14.1: digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9c size: 1577


And that's all, you can now use this image from artifactory.COMPANY.com!
" }, { "title": "Emergency contact list", "pageID": "218459579", "pageLink": "/display/GMDM/Emergency+contact+list", "content": "

In case of emergency, please inform the people from the list attached to each environment.

EMEA:

Varganin, A.J. <Andrew.J.Varganin@COMPANY.com>; Trivedi, Nishith <Nishith.Trivedi@COMPANY.com>; Austin, John <John.Austin@COMPANY.com>; Simon, Veronica <Veronica.Simon@COMPANY.com>; Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>; Kothandaraman, Sathyanarayanan <Sathyanarayanan.Kothandaraman@COMPANY.com>; Dolanc, Matej <Matej.Dolanc@COMPANY.com>; Kunchithapatham, Bhavanya <Bhavanya.Kunchithapatham@COMPANY.com>; Bhowmick, Aditya <Aditya.Bhowmick@COMPANY.com>

GBL:

TO-DO


GBL US:

TO-DO


EMEA:

TO-DO


AMER:

TO-DO

" }, { "title": "How to handle issues reported to DL", "pageID": "294665000", "pageLink": "/display/GMDM/How+to+handle+issues+reported+to+DL", "content": "
  1. Create a ticket in Jira
    1. Name: "DL: {{ email title }}"
    2. Epic: BAU
    3. Fix Version(s): BAU
  2. Use the template below:
    \"\"MDM Hub Issue Response Template.oft
  3. Replace all the red placeholders. Fill in the table where you can, based on the original email.
  4. Respond to the email, requesting additional details if any of the table rows could not be filled in.
  5. Update the ticket:
    1. Copy/Paste the filled table
    2. Adjust the priority based on the "Business impact details" row

\"\"


" }, { "title": "Sample estimation for jira tickets", "pageID": "415215566", "pageLink": "/display/GMDM/Sample+estimation+for+jira+tickets", "content": "

1

https://jira.COMPANY.com/browse/MR-8591(Disable keycloak by default)
https://jira.COMPANY.com/browse/MR-8544(Investigate server git hooks in BitBucket)
https://jira.COMPANY.com/browse/MR-8508(Lack of changelog when build from master)
https://jira.COMPANY.com/browse/MR-8506(pvc-autoresizer deployment on PRODs)
https://jira.COMPANY.com/browse/MR-8502(Dashboards adjustments)

2

https://jira.COMPANY.com/browse/MR-8649 (Move kong-mdm-external-oauth-plugin to mdm-utils repo)
https://jira.COMPANY.com/browse/MR-8585 (Alert about not ready ScaledObject)
https://jira.COMPANY.com/browse/MR-8539 (Reduce number of stored Cadvisor metrics and labels)
https://jira.COMPANY.com/browse/MR-8531 (Old monitoring host decomissioning)
https://jira.COMPANY.com/browse/MR-8375 (Quality Gateway: deploy publisher changes to PRODs)
https://jira.COMPANY.com/browse/MR-8359 (Write article to describe Airflow upgrade procedure)
https://jira.COMPANY.com/browse/MR-8166 (Fluentd - improve deployment time and downtime)
https://jira.COMPANY.com/browse/MR-8128 (Turn on compression in reconciliation service)

3

https://jira.COMPANY.com/browse/MR-8543 (POC: Create local git hook with secrets verification)
https://jira.COMPANY.com/browse/MR-8503 (Replace hardcoded rate intervals)
https://jira.COMPANY.com/browse/MR-8370 (Investigate and plan fix for different version of monitoring CRDs)
https://jira.COMPANY.com/browse/MR-8245 (Fluentbit: deploy NPRODs)
https://jira.COMPANY.com/browse/MR-7926 (Move jenkins agents containers definition to inbound-services repo)

5

https://jira.COMPANY.com/browse/MR-8334 (Implement integration with Grafana)
https://jira.COMPANY.com/browse/MR-7720 (Logstash - configuration creation and deployment)
https://jira.COMPANY.com/browse/MR-7417 (Grafana dashboards backup process)
https://jira.COMPANY.com/browse/MR-7075 (POC: Store transaction logs for 6 months)

8

https://jira.COMPANY.com/browse/MR-8258 (Implement integration with Kibana)
https://jira.COMPANY.com/browse/MR-6285 (Prepare Kafka upgrade plan to version 3.3.2)
https://jira.COMPANY.com/browse/MR-5981 (Process analysis)
https://jira.COMPANY.com/browse/MR-5694 (Implement Reltio mock)
https://jira.COMPANY.com/browse/MR-5835 (Mongo backup process: implement backup process)


" }, { "title": "FAQ - Frequently Asked Questions", "pageID": "415217275", "pageLink": "/display/GMDM/FAQ+-+Frequently+Asked+Questions", "content": "" }, { "title": "API", "pageID": "415217277", "pageLink": "/display/GMDM/API", "content": "

Is there an MDM Hub API Documentation?

Of course - it is available for each component:

What is the difference between /api-emea-prod and /api-gw-emea-prod API endpoints?

Both of these endpoints lead to different API Components:

Both of these Components' APIs can be used in a similar way. The main difference is:

What is the difference between /api-emea-prod and /ext-api-emea-prod API endpoints?

These endpoints use different Authentication methods:

It is recommended that all the API Users use OAuth2 and /ext-api-emea-prod endpoint, leaving Key Auth for support and debugging purposes.

When should I use a GET Entity operation, when should I use a SEARCH Entity operation?

There are two main ways of fetching an HCP/HCO JSON using HUB API:

Below two requests correspond to each other:

Although both are quick, Hub recommends only using the first one to find an entity by URI:
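As an illustration only (everything below is a hypothetical sketch - the exact paths and filter syntax must be taken from the API documentation above), the two styles differ roughly like this:

# Hypothetical GET - direct read when the entity URI is already known
GET /entities/00TnuTu

# Hypothetical SEARCH - filter query that happens to match the same entity
POST /entities/_search
{ "filter": "equals(uri,'entities/00TnuTu')" }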

What is the difference between POST and PATCH /hcp, /hco, /entities operations?

The key difference is:

POST should be used if we are sending the full JSON - crosswalk + all attributes.

PATCH should be used if we are only sending incremental changes to a pre-existing profile.
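A hedged illustration of the PATCH case, reusing the payload shape from the merge examples below (the attribute value is invented and the exact PATCH contract should be verified against the API spec). Only the changed attribute and the identifying crosswalk are sent:

PATCH /hcp
{
    "hcp": {
        "type": "configuration/entityTypes/HCP",
        "attributes": {
            "LastName": [ { "value": "Doe-Smith" } ]
        },
        "crosswalks": [
            {
                "type": "configuration/sources/MAPP",
                "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147"
            }
        ]
    }
}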



" }, { "title": "Merging Into Existing Entities", "pageID": "462075948", "pageLink": "/display/GMDM/Merging+Into+Existing+Entities", "content": "

Can I post a profile and merge it to one already existing in MDM?

Yes, there are 3 ways you can do that:

Merge-On-The-Fly - Details

Merge-on-the-fly is a Reltio mechanism using matchGroups configuration. MatchGroups contain lists of requirements that two entities must pass in order to be merged. There are two types of matchGroups: "suspect" and "automatic". Suspects merely display as potential matches in Reltio UI, but Automatic groups trigger automatic merges of the objects.

Example of an HCP automatic matchGroup from Reltio's configuration (EMEA PROD):

{
    "uri": "configuration/entityTypes/HCP/matchGroups/ExctONEKEYID",
    "label": "(iii) Auto Rule - Exact Source Unique Identifier(ReferBack ID)",
    "type": "automatic",
    "useOvOnly": "true",
    "rule": {
        "and": {
            "exact": [
                "configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID",
                "configuration/entityTypes/HCP/attributes/Country"
            ],
            "in": [
                {
                    "values": ["OneKey ID"],
                    "uri": "configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type"
                },
                {
                    "values": ["ONEKEY"],
                    "uri": "configuration/entityTypes/HCP/attributes/OriginalSourceName"
                },
                {
                    "values": ["Yes"],
                    "uri": "configuration/entityTypes/HCP/attributes/Identifiers/attributes/Trust"
                }
            ]
        }
    },
    "scoreStandalone": 100,
    "scoreIncremental": 0
}

The above example merges two entities that have the same Country attribute and the same Identifier of type "OneKey ID". The Identifier must have the Trust flag set to "Yes" and the OriginalSourceName must be "ONEKEY".


When posting a record to MDM, matchGroups are evaluated. If an automatic matchGroup is matched, Reltio will perform a Merge-On-The-Fly, adding the posted crosswalk to an existing profile.

Contributor Merge - Details

When posting an object to Reltio, we can use its Crosswalk contributorProvider/dataProvider mechanism to bind the posted crosswalk to an existing one.

If we know that a crosswalk already exists in MDM, we can add it to the crosswalks array with the contributorProvider=true and dataProvider=false flags. A crosswalk marked like that serves as an indicator of the object to bind to.

The other crosswalk must have the flags set the other way around: contributorProvider=false and dataProvider=true. This is the crosswalk that will de facto provide the attributes and be considered for the Hub's ingestion rules.


Example - we are sending data with a MAPP crosswalk and binding that crosswalk to the existing ONEKEY crosswalk:

{
    "hcp": {
        "type": "configuration/entityTypes/HCP",
        "attributes": {
            "FirstName": [ { "value": "John" } ],
            "LastName": [ { "value": "Doe" } ],
            "Country": [ { "value": "ES" } ]
        },
        "crosswalks": [
            {
                "type": "configuration/sources/MAPP",
                "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",
                "contributorProvider": false,
                "dataProvider": true
            },
            {
                "type": "configuration/sources/ONEKEY",
                "value": "WESR04566503",
                "contributorProvider": true,
                "dataProvider": false
            }
        ]
    }
}


Every MDM record also has a crosswalk of type "Reltio" and value equal to Reltio ID. We can use that to bind our record to the entity:

{
    "hcp": {
        "type": "configuration/entityTypes/HCP",
        "attributes": {
            "FirstName": [ { "value": "John" } ],
            "LastName": [ { "value": "Doe" } ],
            "Country": [ { "value": "ES" } ]
        },
        "crosswalks": [
            {
                "type": "configuration/sources/MAPP",
                "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",
                "contributorProvider": false,
                "dataProvider": true
            },
            {
                "type": "configuration/sources/Reltio",
                "value": "00TnuTu",
                "contributorProvider": true,
                "dataProvider": false
            }
        ]
    }
}


This approach has a downside: crosswalks are bound, so they cannot be unmerged later on.

Manual Merge - Details

The last approach is simply to create a record in Reltio and immediately merge it with another.


Let's use the previous example. First, we are simply posting the MAPP data:

{
    "hcp": {
        "type": "configuration/entityTypes/HCP",
        "attributes": {
            "FirstName": [ { "value": "John" } ],
            "LastName": [ { "value": "Doe" } ],
            "Country": [ { "value": "ES" } ]
        },
        "crosswalks": [
            {
                "type": "configuration/sources/MAPP",
                "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147"
            }
        ]
    }
}


Response:

{
    "uri": "entities/0zu5sHM",
    "status": "created",
    "errorCode": null,
    "errorMessage": null,
    "COMPANYGlobalCustomerID": "04-131155084",
    "crosswalk": {
        "type": "configuration/sources/MAPP",
        "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",
        "updateDate": 1728043082037,
        "deleteDate": ""
    }
}


We can now use the URI from the response to merge the new record into the existing one:

POST /entities/0zu5sHM/_merge?uri=00TnuTu


" }, { "title": "Quality rules", "pageID": "164470090", "pageLink": "/display/GMDM/Quality+rules", "content": "

The quality engine is responsible for preprocessing the Entity when a specific precondition is met. The engine is started in the following cases:

When the validationOn parameter is set to true, the first step in HCP/HCO request processing is quality engine validation. The MDM Manager Configuration should contain the following quality rules:

These properties accept a list of yaml files. Each file has to be added to the environment repository in /config_files/<env_name>/mdm_mananger/config/.*quality-rules.yaml. Then each of these files has to be added to the corresponding variables in the inventory /<env_name>/group_vars/gw-services/mdm_manager.yml.
For HCP request processing, files are loaded in the following order:

  1. hcpQualityRulesConfigs
  2. hcpAffiliatedHCOsQualityRulesConfigs


For HCO request processing, files are loaded only from the following configuration:

  1. hcoQualityRulesConfigs


It is good practice to split the files into common logic and country-specific logic. For example, HCP Quality Rules file names should have the following structure:


A quality rules yaml file is a set of rules that will be applied to the Entity. Each rule should have the following yaml structure:
\"\"

preconditions

\"\"

\"\"


check

\"\"

\"\"

\"\"
action
When the precondition and check are properly evaluated, a specific action can be invoked on the entity attributes.

\"\"

\"\"


\"\"

\"\"

\"\"

\"\"


\"\"

action:
  type: autofillSourceName
  attribute: Addresses
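Putting the pieces together, a complete rule could look roughly like the sketch below. This is illustrative only - the precondition/check field names are hypothetical, and the authoritative syntax is in the quality rules DOC referenced below:

# Hypothetical sketch - verify field names against the quality rules DOC
- name: autofill-address-source-name
  preconditions:
    - attribute: Country
      values: ["ES"]                  # apply only to profiles from this country
  check:
    - attribute: Addresses
      missingSubAttribute: SourceName # fire only when SourceName is absent
  action:
    type: autofillSourceName          # action type taken from the example above
    attribute: Addresses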


The logic of the quality engine rule check is as follows:


Quality rules DOC: 

\"\"



" }, { "title": "Relation replacer", "pageID": "164470095", "pageLink": "/display/GMDM/Relation+replacer", "content": "


After the getRelation operation is invoked, the "Relation Replacer" feature can be activated on the returned relation entity object. When an entity is merged, Reltio sometimes does not replace the objectUri id with the new, updated value. This process detects such situations and replaces the objectUri with the correct URI from the crosswalk.
The relation replacer process operates under the following conditions:

  1. The relation replacer checks the EndObject and StartObject sections.
  2. When the objectUri differs from the entity id in the crosswalks section, the objectURI is replaced with the entity id from the crosswalks.
  3. When the crosswalks list contains multiple entries pointing to different entity uris, the relation replacer process ends with the following warning: "Object has more than one possible uri to replace" – it is not possible to decide which entity should be pointed to as StartObject or EndObject after the merge.
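For illustration, a minimal before/after sketch (values are hypothetical; the field shape follows Reltio relation objects):

// Before: startObject still carries the stale, pre-merge entity URI
"startObject": {
    "objectURI": "entities/0000AAA",
    "crosswalks": [ { "type": "configuration/sources/Reltio", "value": "entities/0zu5sHM" } ]
}

// After the replacer runs: objectURI is replaced with the entity id found in the crosswalks
"startObject": {
    "objectURI": "entities/0zu5sHM",
    ...
}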
" }, { "title": "SMTP server", "pageID": "387170360", "pageLink": "/display/GMDM/SMTP+server", "content": "

Access to SMTP server is granted for each region separately:


AMER

Destination Host: amersmtp.COMPANY.com

Destination SMTP Port: 25

Authentication: NONE


EMEA

Destination Host: emeasmtp.COMPANY.com

Destination SMTP Port: 25

Authentication: NONE


APAC

Destination Host: apacsmtp.COMPANY.com

Destination SMTP Port: 25

Authentication: NONE
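A minimal connectivity test, sketched in Python with the standard smtplib module (the sender and recipient addresses are placeholders; the relay requires no authentication, per the settings above):

import smtplib
from email.message import EmailMessage

# Build a simple test message (addresses are placeholders)
msg = EmailMessage()
msg["Subject"] = "SMTP relay test"
msg["From"] = "mdm-hub-test@COMPANY.com"
msg["To"] = "your.name@COMPANY.com"
msg.set_content("Test message sent through the regional SMTP relay.")

# Port 25, no authentication - per the regional settings above
with smtplib.SMTP("emeasmtp.COMPANY.com", 25) as smtp:
    smtp.send_message(msg)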


To request access to the SMTP server, fill in the SMTP relay registration form on the http://ecmi.COMPANY.com portal.


" }, { "title": "Airflow", "pageID": "218432163", "pageLink": "/display/GMDM/Airflow", "content": "" }, { "title": "Overview", "pageID": "218432165", "pageLink": "/display/GMDM/Overview", "content": "

Configuration

Airflow is deployed on the kubernetes cluster using the official airflow helm chart:

The main airflow chart adjustments (creating PVCs, k8s jobs, etc.) are located in the components repository.

Environment-specific configuration is located in the cluster configuration repository.

Deployment

Local deployment

Airflow can be easily deployed on a local kubernetes cluster for testing purposes. All you have to do is:

If deployment is performed on a Windows machine, please make sure that the install.sh, encrypt.sh, decrypt.sh and .config files have Unix line endings; otherwise deployment errors will occur.

  1. Edit the .config file to enable airflow deployment (and any other components you want). To enable a component, assign it a value greater than 0:

    enable_airflow=1
  2. Run the ./install.sh file located in the main helm directory

    ./install.sh

Environment deployment

Environment deployment should be performed with great care.

If deployment is performed on a Windows machine, please make sure that the install.sh, encrypt.sh, decrypt.sh and .config files have Unix line endings; otherwise deployment errors will occur.


Environment deployment can be performed after connecting your local machine to the remote kubernetes cluster.

  1. Prepare the airflow configuration in the cluster env repository.
  2. Adjust the .config file to update airflow (and any other service you want)

    enable_airflow=1

  3. Run the ./install.sh script to update the kubernetes cluster
  4. Check if all airflow pods are working correctly

Helm chart configuration

The available configuration is described in the values.yaml file in the airflow github repository.

Helm chart adjustments

In addition to the base airflow kubernetes resources, the following are created:

Definitions: helm templates

Dags deployment

Dags are deployed using ansible playbook: install_mdmgw_airflow_services_k8s.yml

Playbook uses kubectl command to work with airflow pods.

You can run this playbook locally:

  1. To modify the list of dags that should be deployed during the playbook run, adjust the airflow_components list:
    e.g.

    airflow_components:
      - lookup_values_export_to_s3
  2. Run the playbook (adjust the environment)
    e.g.

    ansible-playbook install_mdmgw_airflow_services.yml -i inventory/emea_dev/inventory

Or with jenkins job:

https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/

" }, { "title": "Airflow DAGs", "pageID": "164470169", "pageLink": "/display/GMDM/Airflow+DAGs", "content": "" }, { "title": "●●●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1589274]", "pageID": "310943460", "pageLink": "/pages/viewpage.action?pageId=310943460", "content": "

\"\"

Description

Dag used to prepare data from the FLEX (US) tenant to be loaded into the GBLUS tenant.

The S3 kafka connector on the FLEX environment uploads files every day to an s3 bucket as multiple small files. This dag takes those multiple files and concatenates them into one. The ETL team downloads this concatenated file from the s3 bucket and uploads it into the GBLUS tenant via the batch service.

Example

https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=concat_s3_files_gblus_prod

" }, { "title": "active_hcp_ids_report", "pageID": "310939877", "pageLink": "/display/GMDM/active_hcp_ids_report", "content": "

\"\"

Description

Generates a report of active HCPs from the defined countries.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=active_hcp_ids_report_emea_prod

Steps

" }, { "title": "China reports", "pageID": "310939879", "pageLink": "/display/GMDM/China+reports", "content": "

Description

Set of dags that produce China reports on the gbl environment; the reports are later sent via email:

Single reports are generated by executing the defined queries on mongo, then the extracts are published on s3. The main dags then download the exports from s3 and send an email with all reports.


Main dag example:

\"\"

Report generating dag example:

\"\"

Dags list

Dags executed every day:

china_generate_reports_gbl_prod - main dag that triggers the rest

china_affiliation_status_report_gbl_prod

china_dcr_statistics_report_gbl_prod

china_hcp_by_source_report_gbl_prod

china_import_and_gen_dcr_statistics_report_gbl_prod

china_import_and_gen_merge_report_gbl_prod

china_merge_report_gbl_prod


Dags executed weekly:

china_monthly_generate_reports_gbl_prod - main dag that triggers the rest

china_monthly_hcp_by_channel_report_gbl_prod

china_monthly_hcp_by_city_type_report_gbl_prod

china_monthly_hcp_by_department_report_gbl_prod

china_monthly_hcp_by_gender_report_gbl_prod

china_monthly_hcp_by_hospital_class_report_gbl_prod

china_monthly_hcp_by_province_report_gbl_prod

china_monthly_hcp_by_source_report_gbl_prod

china_monthly_hcp_by_SubTypeCode_report_gbl_prod

china_total_entities_report_gbl_prod



" }, { "title": "clear_batch_service_cache", "pageID": "333156979", "pageLink": "/display/GMDM/clear_batch_service_cache", "content": "

\"\"

Description

This dag is used to clear the batch-service cache (the mongo batchEntityProcessStatus collection). It deletes all records specified in a csv file for the specified batchName.

To clear the cache, the batch-service batchController/{batch_name}/_clearCache endpoint is used.

This dag is used by the mdmhub hub-ui.

Input parameters:

{
  "fileName": "inputFile.csv",
  "batchName": "testBatchTAGS"
}
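For reference, a hedged sketch of the underlying call (the batch-service host is environment-specific, and the exact request body is not confirmed here - the endpoint path and batch name come from the parameters above):

curl -X POST "https://<batch-service-host>/batchController/testBatchTAGS/_clearCache" \
     -H "Content-Type: text/csv" \
     --data-binary @inputFile.csv   # assumption: the csv listing the records to remove is posted in the body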

Main steps

{'removedRecords': 1}


Example

https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com/graph?dag_id=clear_batch_service_cache_amer_dev&root=

" }, { "title": "distribute_nucleus_extract", "pageID": "310939886", "pageLink": "/display/GMDM/distribute_nucleus_extract", "content": "

DEPRECATED

Description

Distributes the extracts that are sent by Nucleus to an s3 directory between multiple directories for the respective countries; these are later used by the inc_batch_* dags

Input and output directories are configured in the dag's configuration file:

\"\"

Dag:

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=distribute_nucleus_extract_gbl_prod&root=

" }, { "title": "export_merges_from_reltio_to_s3", "pageID": "310939888", "pageLink": "/display/GMDM/export_merges_from_reltio_to_s3", "content": "

\"\"

Description

Dag used to schedule the Reltio merges export, adjust the file format and then upload the file to the s3 snowflake directory.

Steps:

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=export_merges_from_reltio_to_s3_full_emea_prod

" }, { "title": "get_rx_audit_files", "pageID": "310943418", "pageLink": "/display/GMDM/get_rx_audit_files", "content": "

\"\"

Description

Download rx_audit files from:

The files are then uploaded to the defined s3 directory that is later used by the inc_batch_rx_audit dag.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=inc_batch_rx_audit_gbl_prod

Useful links

RX_AUDIT

" }, { "title": "historical_inactive", "pageID": "310943421", "pageLink": "/display/GMDM/historical_inactive", "content": "

\"\"

Description

Dag used to implement the history inactive process

Steps:

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=historical_inactive_emea_prod

Reference

Snowflake: History Inactive

" }, { "title": "hldcr_reconciliation", "pageID": "310943423", "pageLink": "/display/GMDM/hldcr_reconciliation", "content": "

\"\"

Description

The HL DCR flow occasionally blocked some VRs' statuses from being sent to PforceRx in an outbound file, because Hub had not received an event from Reltio informing about the Change Request resolution. The exact event expected is CHANGE_REQUEST_CHANGED.

To prevent the above, the HLDCR Reconciliation process runs regularly, performing the following steps:

  1. Query MongoDB store (Collection DCRRequests) for VRs in CREATED status. Export result as list.
  2. For each VR from the list, generate a CHANGE_REQUEST_CHANGED event and post it to Kafka.
  3. Further processing is as usual - DCR Service enriches the event with current changeRequest state. If the changeRequest has been resolved, it updates the status in MongoDB.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hldcr_reconciliation_gbl_prod

" }, { "title": "HUB Reconciliation process", "pageID": "164470182", "pageLink": "/display/GMDM/HUB+Reconciliation+process", "content": "

The reconciliation process was created to synchronize Reltio with HUB. Reltio sometimes does not generate events; these events are therefore never consumed by HUB from the SQS queue, and the HUB platform falls out of sync with Reltio data. External Clients then do not receive the required changes, which leaves multiple systems inconsistent. The process was designed to solve this problem.

The fully automated reconciliation process generates these missing events. The events are sent to the inbound Kafka topic; the HUB platform processes them, updates the mongo collection and routes the events to the external Clients' topics.

Airflow

The following diagram presents the reconciliation process steps:

\"\"

This directed acyclic diagram presents the steps that are taken to compare Reltio and HUB and produce the missing events. This diagram is divided into the following sections:

  1. Initialization and Reltio Data preparation - in this section the process invokes the Reltio export and uploads the full export to mongo.
    1. clean_dirs_before_init, init_dirs, timestamp – these 3 tasks are responsible for preparing the directory structure required in the further steps and for capturing the timestamp required by the reconciliation process. Reltio and HUB data change over time, while the export is made at a specific point in time. We need to ensure that only entities changed before the Reltio Export are compared; this guarantees that only correct events are generated and consistent data is compared.
    2. entities_export – the task invokes the Reltio Export API and triggers the export job in Reltio
    3. sensor_s3_reltio_file – this task is an S3 bucket sensor. Because the Reltio export job is an asynchronous task running in the background, the file sensor checks the S3 location ‘hub_reconciliation/<ENV>/RELTIO/inbound/’ and waits for the export. When the success criteria are met, the process exits with success. The timeout for this job is set to 24 hours, the poke interval to 10 minutes.
    4. download_reltio_s3_file, unzip_reltio_export, mongo_import_json_array, generate_mongo_indexes – these 4 tasks are invoked after successful export generation. The zip is downloaded and extracted to a JSON file, then this file is uploaded to a mongo collection. The generate_mongo_indexes task generates mongo indexes in the newly uploaded collection; the indexes are created to optimize performance.
    5. archive_flex_s3_file_name – after a successful mongo import, the Reltio export is archived for future reference. 
  2. HUB validation - Reltio ↔ HUB comparison - the main comparison and events generation logic is invoked in this SUB DAG. The details are described in the section below
  3. Events generation - after the data comparison, the generated events are sent to the selected Kafka topic.
    1. Then standard events processing begins. The details are described in HUB documentation.
      1. Please check the following documents to find more details: 
        1. Entity change events processing (Reltio)
        2. Event filtering and routing rules
        3. Processing events on client side


HUB validation - Reltio ↔ HUB comparison

\"\"

This directed acyclic diagram (SUB DAG) presents the steps taken to compare HUB and Reltio data in both directions. Because the Reltio data is already uploaded and the HUB (“entityHistory”) collection is always available, we can immediately start the comparison process. 

  1. mongo_find_reltio_hub_differnces - this process compares Reltio data to HUB data (a minimal sketch of such a comparison pipeline follows this list).  
    1. A Mongo aggregation pipeline matches the entities from the Reltio export to the HUB profiles located in the mongo collection by entity URI (ID). All Reltio profiles that are not present in the HUB data are marked as missing. All attributes in Reltio are compared to the HUB profile attributes - when a difference is found, it means that the profile is out of sync and a new event should be generated. 
      1. Based on these changes the HCP_CHANGED or HCO_CHANGED events are generated.
      2. When the profile is missing, the HCP_CREATED or HCO_CREATED events are generated. 
  2. mongo_find_hub_reltio_differnces - this process compares HUB entities to Reltio data. The process is designed to find only the entities missing in Reltio; based on these changes the HCP_REMOVED or HCO_REMOVED events are generated.
    1. A Mongo aggregation pipeline matches the entities from the HUB mongo collection to the Reltio profiles by entity URI (ID). All HUB profiles that are not present in the Reltio export data are marked as missing for future reference. 
  3. mongo_generate_hub_events_differences - this task is related to the automated reconciliation process. The full process is described in this paragraph.
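A minimal sketch of such a comparison pipeline in the mongo shell (the export collection and field names are assumptions for illustration; entityHistory is the HUB collection named above):

// Entities present in the Reltio export but missing from HUB -> HCP_CREATED / HCO_CREATED
db.reltioExport.aggregate([
    { $lookup: {
        from: "entityHistory",     // HUB profiles
        localField: "uri",         // entity URI (ID)
        foreignField: "uri",
        as: "hub"
    } },
    { $match: { hub: { $size: 0 } } },
    { $project: { uri: 1, type: 1 } }
])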


Configuration and scheduling

The process can be started in Airflow on demand. 

The configuration for this process is stored in the MDM Environment configuration repository. 

The following section is responsible for the HUB Reconciliation process activation on the selected environment:

active_dags:
  gbl_dev:
    - hub_reconciliation.py


The file is available in "inventory/scheduler/group_vars/all/all.yml"
To activate the Reconciliation process on a new environment, the environment should be added to the "active_dags" map.
Then the "ansible-playbook install_airflow_dags.yml" playbook needs to be invoked. After this, the new process is ready for use in Airflow. 

Reconciliation process 


To synchronize Reltio with HUB, and therefore synchronize profiles in Reltio with external Clients, a fully automated process is started after the full HUB<->Reltio comparison: this is the "mongo_generate_hub_events_differences" task. 

The automated reconciliation process generates events. These events are sent to the inbound Kafka topic; the HUB platform processes them, updates the mongo collection and routes the events to the flex topic.

The following diagram presents the reconciliation steps:

\"\"

  1. Automated reconciliation process generates events:

The following events are generated during this process:

2. Next, Event Publisher receives events from the internal Kafka topic and calls MDM Gateway API to retrieve the latest state of Entity from Reltio. Entity data in JSON is added to the event to form a full event. For REMOVED events, where Entity data is by definition not available in Reltio at the time of the event, Event Publisher fetches the cached Entity data from Mongo database instead.

3. Event Publisher extracts the metadata from Entity (type, country of origin, source system).

4. Entity data is stored in the MongoDB database, for later use

5. For every Reltio event, there are two Publishing Hub events created: one in Simple mode and one in Event Sourcing (full) mode. Based on the metadata, and Routing Rules provided as a part of application configuration, the list of the target destinations for those events is created. The event is sent to all matched destinations to the target topic (<env>-out-full-<client>) when the event type is full or (<env>-out-simple-<client>) when the event type is simple. 




" }, { "title": "HUB Reconciliation Process V2", "pageID": "164470184", "pageLink": "/display/GMDM/HUB+Reconciliation+Process+V2", "content": "

\"\"


  1. The Hub reconciliation process starts by downloading the reconciliation.properties file with the following information:
    1. reconciliationType - reconciliation type - possible values: FULL_RECONCILIATION or PARTIAL_RECONCILIATION (since the last run)
    2. eventType - event type - used when generating events for kafka - possible values: FULL or CROSSWALK_ONLY
    3. reconcileEntities - if set to true, entities will be reconciled
    4. reconcileRelations - if set to true, relations will be reconciled
    5. reconcileMergeTree - if set to true, the mergeTree will be reconciled
  2. Sets the hub reconciliation properties in the process
  3. If reconcileEntities is set to true, the process to reconcile entities is started
    1. <entities_get_last_timestamp> The process gets the last timestamp when entities were last exported
    2. <entities_export> An entities export is triggered from Reltio - this step is done by a groovy script
    3. <entities_export_sensor> The process checks whether the export is finished by verifying that the SUCCESS file with manifest.json exists in the S3 folder /us/<env>/inboud/hub/hub_reconciliation/entities/inbound/entities_export_<timestamp>
    4. <entities_set_last_timestamp> In this step the process sets the timestamp for the future reconciliation of entities - it is set in airflow variables
    5. <entities_generate_hub_reconciliation_events> this step is responsible for checking which entities have been changed and generating events for the changed entities
      1. first we get the export file from the S3 folder /us/<env>/inboud/hub/hub_reconciliation/entities/inbound/entities_export_<timestamp>
      2. we unzip the file in a bash script
      3. for the unzipped file there are two options
        1. if we useChecksum, the calculateChecksum groovy script is executed, which calculates the checksum for the exported entities and generates a ReconciliationEvent with the checksum only
        2. if we don't useChecksum, a ReconciliationEvent is generated with the whole entity
      4. in the last step we send the generated events to the specified kafka topics 
      5. Events from the topic will be processed by the reconciliation service
      6. The reconciliation service checks, based on the checksum/object changes, whether a PublisherEvent should be generated
        1. it compares the checksum from the ReconciliationEvent, if it exists, with the one we have in the entityHistory table
        2. if the checksum is absent, it compares the entity objects from the ReconciliationEvent with the ones we have in mongo in the entityHistory table - objects on both sides are normalized before the compare process
        3. it compares SimpleCrosswalkOnlyEntity objects if the CROSSWALK_ONLY reconciliation event type is chosen
    6. <entities_export_archive> - move the export folder on S3 from the inbound to the archive folder

\"\"

4. If reconcileRelations is set to true, the process to reconcile relations is started

  1. <relations_get_last_timestamp> The process gets the last timestamp when relations were last exported
  2. <relations_export> A relations export is triggered from Reltio - this step is done by a groovy script
  3. <relations_export_sensor> The process checks whether the export is finished by verifying that the SUCCESS file with manifest.json exists in the S3 folder /us/<env>/inboud/hub/hub_reconciliation/relations/inbound/relations_export_<timestamp>
  4. <relations_set_last_timestamp> In this step the process sets the timestamp for the future reconciliation of relations - it is set in airflow variables
  5. <relations_generate_hub_reconciliation_events> this step is responsible for checking which relations have been changed and generating events for the changed relations
    1. first we get the export file from the S3 folder /us/<env>/inboud/hub/hub_reconciliation/relations/inbound/relations_export_<timestamp>
    2. we unzip the file in a bash script
    3. for the unzipped file there are two options
      1. if we useChecksum, the calculateChecksum groovy script is executed, which calculates the checksum for the exported relations and generates a ReconciliationEvent with the checksum only
      2. if we don't useChecksum, a ReconciliationEvent is generated with the whole relation
    4. in the last step we send the generated events to the specified kafka topic 
    5. Events from the topic will be processed by the reconciliation service
    6. The reconciliation service checks, based on the checksum/object changes, whether a PublisherEvent should be generated
      1. it compares the checksum from the ReconciliationEvent, if it exists, with the one we have in mongo in the entityRelation table
      2. if the checksum is absent, it compares the relation objects from the ReconciliationEvent with the ones we have in mongo in the entityRelation table - objects on both sides are normalized before the compare process
      3. it compares SimpleCrosswalkOnlyRelation objects if the CROSSWALK_ONLY reconciliation event type is chosen
  6. <relations_export_archive> - move the export folder on S3 from the inbound to the archive folder

\"\"

5. If reconcileMergeTree is set to true, the process to reconcile the merge tree is started

  1. <merge_tree_get_last_timestamp> The process gets the last timestamp when the merge tree was last exported
  2. <merge_tree_export> A merge tree export is triggered from Reltio - this step is done by a groovy script
  3. <merge_tree_export_sensor> The process checks whether the export is finished by verifying that the SUCCESS file with manifest.json exists in the S3 folder /us/<env>/inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_<timestamp>
  4. <merge_tree_set_last_timestamp> In this step the process sets the timestamp for the future reconciliation of the merge tree - it is set in airflow variables
  5. <merge_tree_generate_hub_reconciliation_events> this step is responsible for checking which merge trees have been changed and generating events for the changed merge tree objects
    1. first we get the export file from the S3 folder /us/<env>/inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_<timestamp>
    2. we unzip the file in a bash script
    3. for the unzipped file there are two options
      1. if we useChecksum, the calculateChecksum groovy script is executed, which creates a ReconciliationMergeEvent with the uri of the main object and the list of losers' uris
      2. if we don't useChecksum, a ReconciliationEvent is generated with the whole merge tree object
    4. in the last step we send the generated events to the specified kafka topic 
    5. Events from the topic will be processed by the reconciliation service
    6. The reconciliation service sends merge and lost_merger PublisherEvents for the winner and every loser
  6. <merge_tree_export_archive> - move the export folder on S3 from the inbound to the archive folder









" }, { "title": "import_merges_from_reltio", "pageID": "310943426", "pageLink": "/display/GMDM/import_merges_from_reltio", "content": "

\"\"

Description

Schedules the Reltio merges export and imports it into mongo.

This dag is scheduled by china_import_and_gen_merge_report, and the data imported into mongo is used by china_merge_report to generate the China report files

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=import_merges_from_reltio_gbl_prod&root=&num_runs=25&base_date=2023-04-06T00%3A05%3A20Z

" }, { "title": "import_pfdcr_from_reltio", "pageID": "310943428", "pageLink": "/display/GMDM/import_pfdcr_from_reltio", "content": "

\"\"

Description

Schedules a Reltio entities export, downloads it from s3, makes small changes to the export and imports it into mongo.

This dag is scheduled by china_import_and_gen_dcr_statistics_report, and the data imported into mongo is used by china_dcr_statistics_report to generate the China report files

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=import_pfdcr_from_reltio_gbl_prod

" }, { "title": "inc_batch", "pageID": "310943432", "pageLink": "/display/GMDM/inc_batch", "content": "

\"\"

Description

Process used to load IDL files stored on s3 into Reltio. The dag is based on the mdmhub inc_batch_channel component.

Steps

  1. Create a batch instance in mongo using the batch-service /batchController endpoint
  2. Download the idl files from the s3 directory
  3. Extract compressed archives
  4. Preprocess files (e.g. dos2unix)
  5. Run the inc_batch_channel component
  6. Archive the input files and reports

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=inc_batch_sap_gbl_prod

" }, { "title": "Initial events generation process", "pageID": "164470083", "pageLink": "/display/GMDM/Initial+events+generation+process", "content": "

Newly connected clients have no knowledge of entities that were created in MDM before they connected. For this reason the initial event loading process was designed. The process loads events about already existing entities to the client's kafka topic; thanks to this, the new client is synced with MDM.

Airflow

The process was implemented as an Airflow DAG:

\"\"

Process steps:

  1. prepareWorkingDir - prepares the directory structure required for the process,

  2. getLastTimestamp - gets the time marker of the last process execution. This marker is used to determine which events have already been sent by a previous run. If the process runs for the first time, the marker is always 0,

  3. getTimestamp - gets the current time marker,

  4. generatesEvents - generates the events file based on the current Mongo state. Data used to prepare the event messages is selected with the condition entity.lastModificationDate > lastTimestamp (see the sketch after this list),

  5. divEventsByEventKind - divides the events file based on event kind: simple or full,

  6. loadFullEvents* - a group of steps that populate full events to a specific topic. The number of these steps is based on the number of topics specified in the configuration,

  7. loadSimpleEvents* - similar to the above, these steps populate simple events to a specific topic. The number of these steps is based on the number of topics specified in the configuration,

  8. setLastTimestamp - saves the current time marker. It will be used as the last time marker in the next process execution.
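For step 4, the selection boils down to a mongo query like the sketch below (entityHistory is the collection used elsewhere in this document; the exact field path is an assumption, and lastTimestamp is the marker read in step 2):

// Entities modified since the previous run - input for event generation
db.entityHistory.find({ "entity.lastModificationDate": { $gt: lastTimestamp } })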


Configuration and scheduling

The process can be started on demand.

The Process's configuration is stored in the MDM Environment configuration repository.

To enable the process on a specific environment:

  1. The process name should follow the template "generate_events_for_[client name]" and be added to the "airflow_components" list defined in the "inventory/[env name]/group_vars/gw-airflow-services/all.yml" file,
  2. Create a configuration file in "inventory/[env name]/group_vars/gw-airflow-services/generate_events_for_[client name].yml" with the content below:
  3. The process configuration
---

generate_events_for_test_name: "generate_events_for_test" #Process name. It has to be the same as in the "airflow_components" list available in all.yml
generate_events_for_test_base_dir: "{{ install_base_dir }}/{{ generate_events_for_test_name }}"
generate_events_for_test:
  dag: #Airflow's DAG configuration section
    template: "generate_events.py" #do not change
    variables:
      DOCKER_URL: "tcp://euw1z1dl039.COMPANY.com:2376" #do not change
      dataDir: "{{ generate_events_for_test_base_dir }}/data" #do not change
      configDir: "{{ generate_events_for_test_base_dir }}/config" #do not change
      logDir: "{{ generate_events_for_test_base_dir }}/log" #do not change
      tmpDir: "{{ generate_events_for_test_base_dir }}/tmp" #do not change
      user:
        id: "7000" #do not change
        name: "mdm" #do not change
        groupId: "1002" #do not change
        groupName: "docker" #do not change
      mongo: #mongo configuration properties
        host: "localhost"
        port: "27017"
        user: "mdm_gw"
        password: "{{ secret_generate_events_for_test.dag.variables.mongo.password }}" #password is taken from the secret.yml file
        authDB: "reltio"
      kafka: #kafka configuration properties
        username: "hub"
        password: "{{ secret_generate_events_for_test.dag.variables.kafka.password }}" #password is taken from the secret.yml file
        servers: "10.192.71.136:9094"
        properties:
          "security.protocol": SASL_SSL
          "sasl.mechanism": PLAIN
          "ssl.truststore.location": /opt/kafka_utils/config/kafka_truststore.jks
          "ssl.truststore.password": "{{ secret_generate_events_for_test.dag.variables.kafka.properties.sslTruststorePassword }}" #password is taken from the secret.yml file
          "ssl.endpoint.identification.algorithm": ""
      countries: #Events will be generated only for the countries below
        - CR
        - BR
      targetTopics: #Target topics list. It is an array of pairs of topic name and event kind. Only simple and full event kinds are allowed.
        - topic: dev-out-simple-int_test
          eventKind: simple
        - topic: dev-out-full-int_test
          eventKind: full

...
  4. Then the playbook install_mdmgw_services.yml needs to be invoked to update the runtime configuration.


" }, { "title": "lookup_values_export_to_s3", "pageID": "310943435", "pageLink": "/display/GMDM/lookup_values_export_to_s3", "content": "

\"\"

Description

Process used to extract lookup values from mongo and upload them to s3. The file from s3 is then pulled into snowflake.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=lookup_values_export_to_s3_gbl_prod


" }, { "title": "MAPP IDL Export process", "pageID": "164470173", "pageLink": "/display/GMDM/MAPP+IDL+Export+process", "content": "

\"\"

Description

Process used to generate excel files with an entities export. The export is based on two mongo collections: lookupValues and entityHistory. The excel files are then uploaded into an s3 directory.

The excels are used in the MAPP Review process on the gbl_prod environment.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=mapp_idl_excel_template_gbl_prod

" }, { "title": "mapp_update_idl_export_config", "pageID": "310943437", "pageLink": "/display/GMDM/mapp_update_idl_export_config", "content": "

Description

The process is used to update the configuration of the mapp_idl_excel_template dags stored in mongo.

The configuration is stored in the mappExportConfig collection and consists of configuration information and the crosswalks order for each country.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=mapp_update_idl_export_config_gbl_prod

" }, { "title": "merge_unmerge_entities", "pageID": "310943439", "pageLink": "/display/GMDM/merge_unmerge_entities", "content": "

\"\"

\"\"


Description

This dag implements the batch merge & unmerge process. It downloads a file from s3 with the list of records to merge or unmerge and then processes the documents using batch-service. After the documents are processed, a report is generated and transferred to the s3 directory.

Flow

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=merge_unmerge_entities_emea_prod

" }, { "title": "micro_bricks_reload", "pageID": "310943463", "pageLink": "/display/GMDM/micro_bricks_reload", "content": "

\"\"

Description

The dag extracts data from a snowflake table that contains microbricks exceptions. The data is then committed to a git repository, from where it is pulled by consul and loaded into the mdmhub components.

If the microbricks mapping file has changed since the last dag run, we wait for the mapping reload and copy events from the {{ env_name }}-internal-microbricks-changelog-events topic into {{ env_name }}-internal-microbricks-changelog-reload-events

Example

https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=micro_bricks_reload_amer_prod

" }, { "title": "move_ods_", "pageID": "310943441", "pageLink": "/pages/viewpage.action?pageId=310943441", "content": "

\"\"

Description

The dag copies files from external source s3 buckets and uploads them to the desired location in our internal s3 bucket. This data is later used in the inc_batch_* dags

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=move_ods_eu_export_gbl_prod

" }, { "title": "rdm_errors_report", "pageID": "310943445", "pageLink": "/display/GMDM/rdm_errors_report", "content": "

DEPRECATED

\"\"

Description

This dag generates a report with all RDM errors from the ErrorLogs collection and publishes it to the s3 bucket.

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=rdm_errors_report_gbl_prod

" }, { "title": "reconcile_entities", "pageID": "337846202", "pageLink": "/display/GMDM/reconcile_entities", "content": "

\"\"


Details:

Process allowing the export of data from mongo based on a query, then either generating a https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileEntities request for each package, or generating a flat file from the exported entities and pushing it to the Kafka reltio-events topic.

Steps:


Example

https://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/tree?dag_id=reconcile_entities_emea_dev&root=

" }, { "title": "reconciliation_ptrs", "pageID": "310943447", "pageLink": "/display/GMDM/reconciliation_ptrs", "content": "

\"\"

DEPRECATED

Details

Process allowing reconciliation of events for the ptrs source.

Logic: Reconciliation process

Steps:

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=reconciliation_ptrs_emea_prod

" }, { "title": "reconciliation_snowflake", "pageID": "310943449", "pageLink": "/display/GMDM/reconciliation_snowflake", "content": "

\"\"

Details

Process allowing to reconcile events for snowflake topic.

Logic: Reconciliation process

Steps:

Example

https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=reconciliation_ptrs_emea_prod

" }, { "title": "Kubernetes", "pageID": "218693740", "pageLink": "/display/GMDM/Kubernetes", "content": "" }, { "title": "Platform Overview", "pageID": "218452673", "pageLink": "/display/GMDM/Platform+Overview", "content": "

In the latest physical architecture, MDM HUB services are deployed in Kubernetes clusters managed by COMPANY Digitial Kubernates Service (PDKS)

There are non-prod and prod cluster for each region: AMER, EMEA, APAC 

Architecture

The picture below presents the layout of HUB services in Kubernetes cluster managed by PDKS  


\"PDCS

\"Global

Nodes

There are two groups of nodes:

Storage

Portworx storage appliance is used to manage persistence volumes required by stateful components.

Configuration:

Operators

 MDM HUB uses K8 operators to manage applications like:

Application NameOperator (with link)Version
MongoDBMongo Comunity operator0.6.2
KafkaStrimzi0.27.x
ElasticSearchElasticsearch operator1.9.0
Prometheus

Prometheus operator

8.7.3

Monitoring

Cluster are monitored by local Prometheus service integrated with central Prometheus and Grafana services 

For details got to monitoring section.

Logging 

All logs from HUB components are sent to Elastic service and can be discovered by Kibana UI.

For details got to Kibana dashboard section. 

Backend components

NameVersion
MongoDB4.2.6
Kafka2.8.1
ElasticSearch7.13.1
Prometheus2.15.2

Scaling 

TO BE 

Implementation

Kubernetes objects are implemented using helm - package manager for Kubernetes. There are several modules that connected together makes the MDMHUB application:

  1. operators - delivers a set of operators used to manage backend components of MDMHUB: Mongo operator, Kafka operator, Elasticsearch operator, Kong operator and Prometheus operator,
  2. consul - delivers consul server instance, user management tools and git2consul - the tool used to synchronize consul key-value registry with a git repository,
  3. airflow - deploys an instance of Airflow server,
  4. eck - using Elasticsearch operator creates EFK stack - Kibana, Elasticsearch and Fluentd,
  5. kafka - installs Kafka server,
  6. kafka-resources - installs Kafka topics, Kafka connector instances, managed users and ACLs,
  7. kong - using Kong operators installs a Kong server,
  8. kong-resources - delivers basic Kong configuration: users, plugins etc,
  9. mongo - installs mongo server instance, configures users and their permissions,
  10. monitoring - install Prometheus server and exporters used to monitors resources, components and endpoints,
  11. migration - a set of tools supported migration from old (ec2 based environments) to new Kubernetes infrastructure,
  12. mdmhub - delivers the MDMHUB components, their configuration and dependencies.

All above modules are stored in application source code as a part of module helm.

Configuration

The runtime configuration is stored in mdm-hub-cluster-env repository. Configuration has following structure:

[region]/ - MDMHUB rerion eg: emea, amer, apac

    nprod|prod/ -  cluster class. nprod or prod values are possible,

        namespaces/ - logical spaces where MDMHUB coponents are deployed

            monitoring/ - configuration of prometheus stack

                service-monitors/

                values.yaml - namespace level variables

            [region]-dev/ - specific configuration for dev env eg.: kafka topics, hub components configuration

                config_files/ - MDMHUB components configuration files

                    all|mdm-manager|batch-service|.../

                values.yaml - variables specific for dev env.

                kafka-topics.yaml - kafka topic configuration

            [region]-qa/ - specific configuration for qa env

                config_files/

                    all|mdm-manager|batch-service|.../

            [region]-stage/ - specific configuration for stage env

                config_files/

                    all|mdm-manager|batch-service|.../

                values.yaml

                kafka-topics.yaml

            [region]-prod/ - specific configuration for prod env

                config_files/

                    all|mdm-manager|batch-service|.../

                values.yaml

                kafka-topics.yaml

            [region]-backend/ - backend services configuration: EFK stack, Kafka, Mongo etc.

                eck-config/ #eck specific files

                values.yaml

            kong/ - configuration of Kong proxy

                values.yaml

            airflow/ - configuration of Airflow scheduler

                values.yaml

        users/ #users configuration

            mdm_test_user.yaml

            callback_service_user.yaml

            ...

        values.yaml #cluster level variables

        secrets.yaml #cluster level sensitive data

    values.yaml #region level variables

values.yaml #values common for all environments and clusters

install.sh #implementation of deployment procedure


Application is deployed by install.sh script. The script does this in the following steps:

  1. Decrypt sensitive data: passwords, certificates, token, etc,
  2. Prepare the order of values and secrets precedence (the last listed variables override all other variables):
    1. common values for all environments,
    2. region values,
    3. cluster variables,
    4. users values,
    5. namespace values.
  3. Download helm package,
  4. Do some package customization if required,
  5. Install helm package to the selected cluster.


Deployment

Build

Job: mdm-hub-inbound-services/feature/kubernates

Deploy

All Kubernetes deployment jobs

AMER:

Deploy backend: Kong, Kafka, mongoDB, EFK, Consul, Airflow, Prometheus

Deploy MDM HUB


Administration

Administration tasks and standard operating procedures were described here.

" }, { "title": "Migration guide", "pageID": "218452659", "pageLink": "/display/GMDM/Migration+guide", "content": "

Phase 0

  1. Validate configuration:
    1. validate if all configuration was moved correctly - compare application.yml files, check topic name prefix (on k8s env the prefix has 2 parts), check Reltio confguration etc,
    2. Check if reading event from sqs is disabled on k8s - reltio-subscriber,
    3. Check if reading evets from MAP sqs is disabled on k8s - map-channel,
    4. Check if event-publisher is configured to publish events to old kafka server - all client topics (*-out-*) without snowflake.
  2. Check if network traffic is opened:
    1. from old servers to new REST api endpoint,
    2. from k8s cluster to old kafka,
    3. from k8s cluster to old REST API endpoint,
  3. Make a mongo dump of data collections from mongo - remember start date and time:
    1. find mongo-migration-* pod and run shell on it.
    2. cd /opt/mongo_utils/data
      mkdir data
      cd data
      nohup dumpData.sh <source database schema> &
    3. start date is shown in the first line of log file:
      head -1 nohup.out #example output → [Mon Jul  4 12:09:32 UTC 2022] Dumping all collections without: entityHistory, entityMatchesHistory, entityRelations and LookupValues from source database mongo
    4. validate the output of dump tool by:
      cd /opt/mongo_utils/data/data && tail -f nohup.out
  4. Restore dumped collections in the new mongo instance:
    cd /opt/mongo_utils/data/data
    mv nohup.out nohup.out.dump
    nohup mongorestore.sh dump/ <target database schema> <source database schema> &
    tail -f nohup.out #validate the output
  5. Validate the target database and check if only entityHistory, entityMatchesHistory, entityRelations and LookupValues coolections were copied from source. If there are more collections than mentioned, you can delete them.
  6. Create a new consumer group ${new_env}-event-publisher for sync-event-publisher component on topic ${old_env}-internal-reltio-proc-events located on old Kafka instance. Set offset to start date and time of mongo dump - do this by command line client because Akhq has a problem with this action,
  7. Configure and run sync-event-publisher - it is responsible for the synchronization of mongo DB with the old environment. The component has to be connected with the old Kafka and Manager and the routing rules list has to be empty,

Phase 1(External clients are still connected to old endpoints of rest services and kafka):

  1. Check if something is waiting for processing on kafka topics and there are active batches in batch service,
  2. If there is a data on kafka topics stop subscriber and wait until all data in enricher, callback and publisher will be processed. Check it out by monitoring input topics of these components,
  3. Wait unit all data will be processed by the snowflake connector,
  4. Disable Jenkins jobs,
  5. Stop outbound (mdmhub) components,
  6. Stop inbound (mdmgw) components,
  7. Disable all Airflow's DAGs assigned to the migrated environment,
  8. Turn off the snowflake connector at the old environment,
  9. Turn off sync-event-publisher on k8s environment,
  10. Run Mongo Migration Tool to copy mongo databases - copy only collections with caches, data collections were synced before (mongodump + sync-event-publisher). Before start check collections in old mongo instance. You can delete all temporary collections lookup_values_export_to_s3_*, reconciliation_* etc.
    #dumping
    cd /opt/mongo_utils/data
    mkdir non_data
    cd non_data
    nohup dumpNonData.sh <source database schema> &
    tail -f nohup.out #validate the output

    #restoring
    nohup mongorestore.sh dump/ <target database schema> <source database schema> &
    tail -f nohup.out #validate the output
  11. Enable reltio subscriber on K8s - check SQS credentials and turn on SQS route,
  12. Enable processing events on MAP sqs queues - if map-channel exists on migrated environment,
  13. Reconfigure Kong:
    1. forward all incoming traffic to the new instance of MDMHUB
    2. include rules for API paths from: \n MR-3140\n -\n Getting issue details...\n STATUS\n
    3. Delete all plugins oauth and key-auth plugins https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-plugin
    4. it might be required to remove routes, when ansible playbook will throw a duplication error https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-route
  14. Start Snowflake connector located at k8s cluster, 
  15. Turn on components (without sync-event-publisher) on k8s environment,
  16. Change api url and secret (manager apikey) in snowflake deployment configuration (Ansible)
  17. Chnage api key in depenedent api routers.
  18. Install Kibana dashboards,
  19. Add mappings to Monstache,
  20. Add transaction topics to fluentd.


Phase 2 (Environment run in K8s):

  1. Run Kibana Migration Tool to copy indexes, - after migration,
  2. Run Kafka Mirror Maker to copy all data from old output topics to new ones.

Phase 2 (All external clients confirmed that they switched their applications to new endpoints):

  1. Wait until all clients will be switched to new endpoints,

Phase 3 (All environments are migrated to kubernetes):

  1. Stop old mongo instance,
  2. Stop fluentd and kibana,
  3. Stop Kafka Mirror Maker
  4. Stop kafka and kong at old environment,
  5. Decommission old environment hosts.


To remember after migration

  1. Review CPU requests on k8s https://pdcs-som1d.COMPANY.com/c/c-57wsz/monitoringResource management for components - done
  2. MongoDB on k8s has only 1 instance
  3. Kong API delete plugin - https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-plugin
  4. K8s add consul-server service to ingress - consul ui already exposes API https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1/kv/

  5. Consul UI redirect doesn't work due to consul being stubborn about using /ui path. Decision: skip this, send client new consul address 
  6. Fix issue with MDMHUB manage and batch-service oauth user being duplicated in mappings - done
  7. Verify if mdm hub components are using external api address and switch to internal k8s service address - checked, confirmed nothing is using external addresses
  8. Check if Portworx requires setting affinity rules to be running only on 3 nodes
  9. akhq - disable default k8s token automount - done
" }, { "title": "PDKS Cluster tests", "pageID": "228917568", "pageLink": "/display/GMDM/PDKS+Cluster+tests", "content": "

Assumptions

Addresses used in tests

  1. API: https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-amer-dev/actuator/health/
  2. Kafka
  3. Consul

K8s resources


Each MDM Hub app deployed in 1 replica, so no redundancy.

Failover tests

Expected results

No downtimes of API and all services exposed to clients.

Scenario

One EKS node down

Force node drain with timeout and grace period set to low 10 seconds. 

Results

One EKS node down

Conclusions

Test was partially successful

To remove risk of services unavailability

To reduce time of services unavailability

Scale tests

Expected results

EKS node scaling up and down should be automatic based on cluster capacity. 

Scenarios

Scale pods up, to overcome capacity of static ASG, then scale down.

Results

Scale up and down test was carried out while doing failover tests. 

When 1 of 3 static nodes became unavailable, ASG scaled up number of dynamic instances. First to 1 and then to 2. After a static node was once again operational, ASG scaled down dynamic nodes to 0.

Conclusions

" }, { "title": "Portworx - storage administration guide", "pageID": "218458438", "pageLink": "/display/GMDM/Portworx+-+storage+administration+guide", "content": "

Outdated

Portworx is not longer used in MDM Hub Kubernetes clusters

Portworx, what is it?

Commercial product, validated storage solution and a standard for PDKS Kubernetes clusters. It uses AWS EBS volumes, adds a replication and provides a k8s storage class as a result. It then can be used just as any k8s storage by defining PVC. 

What problem does it solve?

\"\"

How to:

use Portworx storage

Configure Persistent Volume Claim to use one of Portworx Storage Classes configured on K8s.

2 classes are available

\"\"

extend volumes

In Helm just change PVC requested size and deploy changes to a cluster with a Jenkins job. No other action should be required. 

Example change: MR-3124 change persistent volumes claims

check status, statistics and alerts

TBD

One of the tools should provide volume status and statistics:

Responsibilities

Who is responsible for what is described in the table below. 

In short: if any change in Portworx setup is required, create a support ticket to a queue found on Support information with queues names page.

\"\"

Additional documentation

  1. PDCS Kubernetes Storage Management Platform Standards (If link doesn't work, go to http://containers.COMPANY.com/ search in "PDKS Docs" section for "WTST-0299 PDCS Kubernetes Storage Management Platform Standards")
  2. Kubernetes Portworx storage class documentation
  3. Portworx on Kubernetes docs




" }, { "title": "Resource management for components", "pageID": "218444330", "pageLink": "/display/GMDM/Resource+management+for+components", "content": "


Outdated

MDM Hub components resources are managed automatically by the Vertical Pod Autoscaler - table below is no longer applicable

K8s resource requests vs limits 

Quotes on how to understand Kubernetes resource limits

requests is a guarantee, limits is an obligation

Galo Navarro


When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.

How Pods with resource requests are scheduled

MDM Hub resource configuration per component

IMPORTANT: table is outdated. The current CPU and memory configuration are in mdm-hub-cluster-env git repository.


CPU [m]Memory [Mi]
ComponentRequestLimitRequestLimit
mdm-callback-service200400016002560
mdm-hub-reltio-subscriber2001000400640
mdm-hub-event-publisher20020008001280
mdm-hub-entity-enricher20020008001280
mdm-api-router20040008001280
mdm-manager200400010002000
mdm-reconciliation-service200400016002560
mdm-batch-service20020008001280
Kafka500400010000 (Xmx 3GB)20000
Zookeeper2001000256512
akhq100500256512
kafka-connect500200010002000
MongoDB50040002000032000
MongoDB agent200400200500
Elasticsearch5002000800020000
Kibana

100

200010241536
Airflow - scheduler2007005122048
Airflow - webserver2007002561024
Airflow - postgresql250-256-
Airflow - statsd200500256512
Consul100500256512
git2consul100500

256

512
Kong10020005122048
Prometheus200100015363072
Legend
requires tuning
proposal
deployed

Useful links

Links helpful when talking about k8s resource management:

" }, { "title": "Standards and rules", "pageID": "218435163", "pageLink": "/display/GMDM/Standards+and+rules", "content": "

K8s Limit definition

Limit size for CPU has to be defined in "m" (milliCPU), ram in "Mi" (mibibytes) and storage in "Gi" (Gibibytes). More details about resource limits you can find on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

GB vs GiB: What’s the Difference Between Gigabytes and Gibibytes?

At its most basic level, one GB is defined as 1000³ (1,000,000,000) bytes and one GiB as 1024³ (1,073,741,824) bytes. That means one GB equals 0.93 GiB. 

Source: https://massive.io/blog/gb-vs-gib-whats-the-difference/


To check current resource configuration, check: Resource management for components

Docker

To secure our images from changing of remote images which come from remote registries such as https://hub.docker.com/ before using remote these as a base image in the implementation, you have to publish the remote image in our private registry http://artifactory.COMPANY.com/mdmhub-docker-dev.

Kafka objects naming standards

Kafka topics

Name template: <$envName>-$<topicType>-$<name>

Topic Types: 

Consumer Groups

Name template: <$envName>-<$componentName>-[$processName]


Standardized environment names

Standardized component names

" }, { "title": "Technical details", "pageID": "218440550", "pageLink": "/display/GMDM/Technical+details", "content": "

Network

Subnet name

Subnet mask

RegionDetails
subnet-07743203751be58b910.9.64.0/18amer

\"\"

subnet-0dec853f7c9e507dd10.9.0.0/18amer

\"\"

subnet-018f9a3c441b24c2b

●●●●●●●●●●●●●●●

apac

\"\"

subnet-06e1183e436d67f2910.116.176.0/20apac

\"\"

subnet-0e485098a41ac03ca10.90.144.0/20emea

\"\"

subnet-067425933ced0e77f10.90.128.0/20emea

\"\"

" }, { "title": "SOPs", "pageID": "228923665", "pageLink": "/display/GMDM/SOPs", "content": "

Standard operation procedures are available here.

" }, { "title": "Downstream system migration guide", "pageID": "218452663", "pageLink": "/display/GMDM/Downstream+system+migration+guide", "content": "

This chapter describes steps that you have to take if you want to switch your application to new MDM HUB instance.

Direct channel (Rest services)

If you use the direct channel to communicate with MDM HUB the only thing that you should do is changing of API endpoint addresses. The authentication mechanism, based on oAuth serving by Ping Federate stays unchanged. Please remember that probably network traffic between your services and MDMHUB has to be opened before switching your application to new HUB endpoints.

The following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with MDMHUB has to use new endpoints.

EnvironmentOld endpointNew endpointAffected clientsDescription
GBLUS DEV/QA/STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1ETLConsul
GBLUS DEVhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-devCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULE

Manager API

GBLUS DEVhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-devETLBatch API
GBLUS QAhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qaCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager API
GBLUS QAhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qaETL,Batch API
GBLUS STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/stage-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-stageCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager API
GBLUS STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/stage-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-stageETL,Batch API
GBLUS PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/v1https://consul-amer-prod-gbl-mdm-hub.COMPANY.com/v1ETLConsul
GBLUS PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/prod-exthttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-prodCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager API
GBLUS PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/prod-batch-exthttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-prodETLBatch API
EMEA DEV/QA/STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/v1https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/v1ETLConsul
EMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-ext

https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-dev

MULE, GRV, PforceRx, JORouter API
EMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-dev
Manager API
EMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-devETLBatch API
EMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-qaMULE, GRV, PforceRx, JORouter API
EMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-qa
Manager API
EMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-qaETLBatch API
EMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-stageMULE, GRV, PforceRx, JORouter API
EMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-stage
Manager API
EMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-stageETLBatch API
EMEA PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/v1https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/v1ETLConsul
EMEA PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-emea-prodMULE, GRV, PforceRxRouter API
EMEA PRODhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/prod-ext/gwhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prod
Manager API
EMEA PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/prod-batch-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-prod
Batch API
GBL DEVhttps://mdm-reltio-proxy.COMPANY.com:8443/dev-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-devMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD,Manager API
GBL QA (MAPP)https://mdm-reltio-proxy.COMPANY.com:8443/mapp-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-qaMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD,Manager API
GBL STAGEhttps://mdm-reltio-proxy.COMPANY.com:8443/stage-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-stageMULE, GRV, JO, KOL_ONEVIEW, MEDIC, ONEMED, PTRS, VEEVA_FIELDManager API
GBL PRODhttps://mdm-gateway.COMPANY.com/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-prodMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELDManager API
GBL PRODhttps://mdm-gateway-int.COMPANY.com/gw-apihttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-prodCHINAManager API
EXTERNAL GBL DEVhttps://mdm-reltio-proxy.COMPANY.com:8443/dev-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-devMAP, GANT, MAPPManager API
EXTERNAL GBL QA (MAPP)https://mdm-reltio-proxy.COMPANY.com:8443/mapp-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-qaMAP, GANT, MAPPManager API
EXTERNAL GBL STAGEhttps://mdm-reltio-proxy.COMPANY.com:8443/stage-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-stageMAP, GANT, MAPPManager API
EXTERNAL GBL PRODhttps://mdm-gateway.COMPANY.com/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-prodMAP, GANT, MAPPManager API
EXTERNAL EMEA DEVhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/dev-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-devMAP, GANT, MAPPRouter API
EXTERNAL EMEA QAhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/qa-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-qaMAP, GANT, MAPPRouter API
EXTERNAL EMEA STAGEhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/stage-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-stageMAP, GANT, MAPPRouter API
EXTERNAL EMEA PRODhttps://api-emea-prod-gbl-mdm-hub-ext.COMPANY.com:8443/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-prodMAP, GANT, MAPPRouter API

Streaming channel (Kafka)

Switching to a new environment requires configuration change on your side:

  1. Change the Kafka's broker address,
  2. Change JAAS configuration - in the new architecture, we decided to change JAAS authentication mechanisms to SCRAM. To be sure that you are using the right authentication you have to change a few parameters in Kafka's connection:
    1. JAAS login config file which path is specified in "java.security.auth.login.config" java property. It should look like below:
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
username="<user>"
●●●●●●●●●●●●●●●●●●●>";
};

                   b.  change the value of "sasl.mechanism" property to "SCRAM-SHA-512"

                   c. if you configure JAAS login using "sasl.jaas.config" property you have to change its value to "org.apache.kafka.common.security.scram.ScramLoginModule required username="<user>" ●●●●●●●●●●●●●●●●●●●>";"

You should receive new credentials (username and password) in the email about changing Kafka endpoints. In another case to get the proper username and ●●●●●●●●●●●●●●● contact our support team.


The following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with MDMHUB has to use new endpoints.

EnvironmentOld endpointNew endpointAffected clientsDescription
GBLUS DEV/QA/STAGEamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094ENGAGE, KOL_ONEVIEW, GRV, ICUE, MULE

Kafka

GBLUS PRODamraelp00007848.COMPANY.com:9094,amraelp00007849.COMPANY.com:9094,amraelp00007871.COMPANY.com:9094kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094ENGAGE, KOL_ONEVIEW, GRV, ICUE, MULEKafka
EMEA DEV/QA/STAGE

euw1z2dl112.COMPANY.com:9094

mdm-reltio-proxy.COMPANY.com:9094 (external)

kafka-b1-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MAP (external), PforceRx, MULEKafka
EMEA PROD

euw1z2pl116.COMPANY.com:9094,euw1z1pl117.COMPANY.com:9094,euw1z2pl118.COMPANY.com:9094

kafka-b1-emea-prod-gbl-mdm-hub.COMPANY.com:9094,kafka-b2-emea-prod-gbl-mdm-hub.COMPANY.com:9094,kafka-b3-emea-prod-gbl-mdm-hub.COMPANY.com:9094

kafka-b1-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095,kafka-b2-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095,kafka-b3-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095 (external)

kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MAP (external), PforceRx, MULEKafka
GBL DEV/QA/STAGE

euw1z1dl037.COMPANY.com:9094

mdm-reltio-proxy.COMPANY.com:9094 (external)

kafka-b1-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MAP (external), China, KOL_ONEVIEW, PTRS, PTE, ENGAGE, MAPP,Kafka
GBL PROD

euw1z1pl017.COMPANY.com:9094,euw1z1pl021.COMPANY.com:9094,euw1z1pl022.COMPANY.com:9094

mdm-broker-p1.COMPANY.com:9094,mdm-broker-p2.COMPANY.com:9094,mdm-broker-p3.COMPANY.com:9094 (external)

kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MAP (external), China, KOL_ONEVIEW, PTRS, ENGAGE, MAPP,Kafka
EXTERNAL GBL DEV/QA/STAGE



Data Mart (Snowflake)

There are no changes required if you use Snowflake to get MDMHUB data.

" }, { "title": "MDM HUB Log Management", "pageID": "164470115", "pageLink": "/display/GMDM/MDM+HUB+Log+Management", "content": "

MDM HUB has built in a log management solution that allows to trace data going through the system (incoming and outgoing events).

It improves:

The solution is based on EFK stack:

The solutions is presented on the picture below: 



\"\"

" }, { "title": "EFK Environments", "pageID": "164470092", "pageLink": "/display/GMDM/EFK+Environments", "content": "


" }, { "title": "Elastic Cloud on Kubernetes in MDM HUB", "pageID": "284787486", "pageLink": "/display/GMDM/Elastic+Cloud+on+Kubernetes+in+MDM+HUB", "content": "

Overview

<graphic0>

After migration on Kubernetes platform from on premise solutions we started to use Elastic Cloud on Kubernetes (ECK).

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html

With ECK we can streamline critical operations, such as:

  1. Setting up hot-warm-cold architectures.
  2. Providing lifecycle policies for logs and transactions, snapshots of obsolete/older/less utility data.
  3. Creating dashboards visualising data of MDM HUB core processes.

Logs, transactions and mongo collections

We splitted all the data entering the Elastic Stack cluster into different categories listed as follows:

1. MDM HUB services logs

For forwarding MDM HUB services logs we use FluentBit where its used as a sidecar/agent container inside the mdmhub service pod.

The sidecar/agents send data directly to a backend service on Kubernetes cluster.

\"\"

2. Backend logs and transactions

For backend logs and transactions forwarding we use Fluentd as a forwarder and aggregator, lightweight pod instance deployed on edge.

In case of Elasticsearch unavailability, secondary output is defined on S3 storage to not miss any data coming from services.

\"\"

3. MongoDB collections

In this scenario we decided to use Monstache, sync daemon written in Go that continously indexes MongoDB collections into Elasticsearch.

We use it to mirror Reltio data gathered in MongoDB collections in Elasticsearch as a backup and a source for Kibana's dashboards visualisations.

\"\"


Data streams

MDM HUB services and backend logs and transactions are managed by Data streams mechanism.
A data stream lets us store append-only time series data (logs/transactions) across multiple indices while giving a single named resource for requests.

https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html

Index lifecycle policies and snapshots management

Index templates, index lifecycle policies and snapshots for index management are enirely covered by the Elasticsearch built-in mechanisms.

Description of the index lifecycle divided into phases:

  1. Index rollover - logs and transactions are stored in hot-tiers
  2. Index rollover - logs and transactions are moved to delete phase
  3. Snapshot - deleted logs and transactions from elasticsearch are snapshotted on S3 bucket
  4. Snapshot -  logs and transactions are deleted from S3 bucket - index is no longer available

All snapshotted indices may be restored and recreated on Elasticsearch anytime.

Maximum sizes and ages for the indexes rollovers and snapshots are included in the following tables:

Non PROD environments

typeindex rollover hot phase

index rollover delete phase

snapshot phase
 MDM HUB logs

age: 7d

size: 100gb

age: 30dage: 180d
Backend logs

age: 7d

size: 100gb

age: 30dage: 180d
Kafka transactions

age: 7d

size: 25gb

age: 30dage: 180d

PROD environments

typeindex rollover hot phase

index rollover delete phase

snapshot phase
 MDM HUB logs

age: 7d

size: 100gb

age: 90dage: 365d
Backend logs

age: 7d

size: 100gb

age: 90dage: 365d
Kafka transactions

age: 7d

size: 25gb

age: 180dage:  365d

Aditionally, we execute full snapshot policy on daily basis. It is responsible for incremental storing all the elasticsearch indexes on S3 buckets as a backup. 

Snapshots locations

environmentS3 bucketpath
EMEA NPRODpfe-atp-eu-w1-nprod-mdmhubemea/archive/elastic/full
EMEA PRODpfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811emea/archive/elastic/full

AMER NPROD

gblmdmhubnprodamrasp100762amer/archive/elastic/full
AMER PRODpfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808amer/archive/elastic/full
APAC NPRODglobalmdmnprodaspasp202202171347apac/archive/elastic/full
APAC PRODpfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502apac/archive/elastic/full


MongoDB collections data are stored on Elasticsearch permanently, they are not covered by the index lifecycle processes.

Kibana dashboards

Kibana Dashboard Overview


" }, { "title": "Kibana Dashboards", "pageID": "164470093", "pageLink": "/display/GMDM/Kibana+Dashboards", "content": "


" }, { "title": "Tracing areas", "pageID": "164470094", "pageLink": "/display/GMDM/Tracing+areas", "content": "

Log data are generated in the following actions:



\"\"

" }, { "title": "MDM HUB Monitoring", "pageID": "164470106", "pageLink": "/display/GMDM/MDM+HUB+Monitoring", "content": "" }, { "title": "AKHQ", "pageID": "164470020", "pageLink": "/display/GMDM/AKHQ", "content": "

AKHQ (https://github.com/tchiotludo/akhq) is a tool for browsing, changing and monitoring Kafka's instances.


https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com/

https://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/

https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/

https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/

https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/

https://akhq-apac-prod-gbl-mdm-hub.COMPANY.com/

" }, { "title": "Grafana & Kibana", "pageID": "228933027", "pageLink": "/pages/viewpage.action?pageId=228933027", "content": "

KIBANA

US PROD https://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibana

User: kibana_dashboard_view


US NONPROD https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana

User: kibana_dashboard_view

=====

GBL PROD https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com

GBL NONPROD https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com

=====

EMEA PROD https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com

EMEA NONPROD https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com

=====

GBLUS PROD https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com

GBLUS NONPROD https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com

=====

AMER PROD https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com

AMER NONPROD https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com

=====

APAC PROD https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com

APAC NONPROD https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com


GRAFANA

https://grafana-mdm-monitoring.COMPANY.com


KeePass

 - download this

\"\"Kibana-k8s.kdbx

The password to the KeePass is sent in a separate email to improve the security level of credentials sending.

To get access, you only need to download the KeePass application 2.50 version (https://keepass.info/download.html) and use a password that is sent to log in to it.

After you do it you will see a screen like:

\"\"

Then just click a title that you are interested in. And you get a window like:

\"\"

Here you have a user name, and a proper link and when you click 3 dots = red square you will get the password.

" }, { "title": "Grafana Dashboard Overview", "pageID": "164470208", "pageLink": "/display/GMDM/Grafana+Dashboard+Overview", "content": "

MDM HUB's Grafana is deployed on the MONITORING host and is available under the following URL:

https://grafana-mdm-monitoring.COMPANY.com


All the dashboards are built using Prometheus's metrics.

" }, { "title": "Alerts Monitoring PROD&NON_PROD", "pageID": "163917772", "pageLink": "/pages/viewpage.action?pageId=163917772", "content": "

PROD: https://mdm-monitoring.COMPANY.com/grafana/d/5h4gLmemz/alerts-monitoring-prod

NON PROD: https://mdm-monitoring.COMPANY.com/grafana/d/COVgYieiz/alerts-monitoring-non_prod


\"\"


The Dashboard contains firing alerts and last Airflow DAG runs statuses for GBL (left side) and US FLEX (right side):

a., e. number of alerts firing

b., f. turns red when one or more DAG JOBS have failed

c., g. alerts currently firing

d., h. table containing all the DAGs and their run count for each of the statuses

" }, { "title": "AWS SQS", "pageID": "163917788", "pageLink": "/display/GMDM/AWS+SQS", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/CI4RLieik/aws-sqs


The dashboard is describing the SQS queue used in Reltio→MDM HUB communication.


\"\"


The dashboard is divided into following sections:

a. Approximate number of messages - how many messages are currently waiting in the queue

b. Approximate number of messages delayed - how many messages are waiting to be added in the queue

c. Approximate number of messages invisible - how many messages are not timed out nor deleted

" }, { "title": "Docker Monitoring", "pageID": "163917797", "pageLink": "/display/GMDM/Docker+Monitoring", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring


This dashboard is describing the Docker containers running on hosts in each environment. Switch currently viewed environment/host using the variables at the top of the dashboard ("env", "host").


\"\"


The dashboard is divided into following sections:

a. Running containers - how many containers are currently running on this host

b. Total Memory Usage

c. Total CPU Usage

d. CPU Usage - over time CPU use per container

e. Memory Usage - over time Memory use per container

f. Network Rx - received bytes per container over time

g. Network Tx - transmited bytes per container over time

" }, { "title": "Host Statistics", "pageID": "163917801", "pageLink": "/display/GMDM/Host+Statistics", "content": "
\n
\n
\n
\n

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics

Dashboard template source: https://grafana.com/grafana/dashboards/1860


This dashboard is describing various statistics related to hosts' resource usage. It uses metrics from the node_exporter. You can change the currently viewed environment and host using variables at the top of the dashboard.


\n
\n
\n
\n
\n
\n

Basic CPU / Mem / Disk Gauge

\"\"


a. CPU Busy

b. Used RAM Memory

c. Used SWAP - hard disk memory used for swapping

d. Used Root FS

e. CPU System Load (1m avg)

f. CPU System Load (5m avg)


\n
\n
\n
\n
\n
\n

Basic CPU / Mem / Disk Info

\"\"


a. CPU Cores

b. Total RAM

c. Total SWAP

d. Total RootFS

e. System Load (1m avg)

f. Uptime - time since last restart


\n
\n
\n
\n
\n
\n

Basic CPU / Mem Graph

\"\"

a. CPU Basic - CPU state %

b. Memory Basic - memory (SWAP + RAM) use


\n
\n
\n
\n
\n
\n

Basic Net / Disk Info

\"\"

a. Network Traffic Basic - network traffic in bytes per interface

b, Disk Space Used Basic - disk usage per mount


\n
\n
\n
\n
\n
\n

CPU Memory Net Disk

\"\"
a. CPU - percentage use per status/operation

b. Memory Stack - use per status/operation

c. Network Traffic - detailed network traffic in bytes per interface. Negative values correspond to transmited bytes, positive to received.

d. Disk Space Used - disk usage per mount

\"\"

e. Disk IOps - disk operations per partition. Negative values correspond to write operations, positive - read operations.

f. I/O Usage Read / Write - bytes read(positive)/written(negative) per partition

g. I/O Usage Times - time of I/O operations in seconds per partition


\n
\n
\n
\n
\n
\n

Etc.

As the dashboard template is a publicaly-available project, the panels/graphs are sufficiently described and do not require further explanation.

\n
\n
\n
" }, { "title": "HUB Batch Performance", "pageID": "163917855", "pageLink": "/display/GMDM/HUB+Batch+Performance", "content": "
\n\n
\n
\n
\n

\"\"

a. Batch loading rate

b. Batch loading latency

c. Batch sending rate

d. Batch sending latency

e. Batch processing rate - batch processing in ops/s

f. Batch processing latency - batch processing time in seconds

\"\"

g. Batch loading max gauge - max loading time in seconds

h. Batch sending max gauge - max sending time in seconds

i. Batch processing max gauge - max processing in seconds

\n
\n
\n
" }, { "title": "HUB Overview Dashboard", "pageID": "163917867", "pageLink": "/display/GMDM/HUB+Overview+Dashboard", "content": "
\n
\n
\n
\n

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/OfVgLm6ik/hub-overview

This dashboard contains information about Kafka topics/consumer groups in HUB - downstream from Reltio.


\n
\n
\n
\n
\n
\n

\"\"

a. Lag by Consumer Group - lag on each INBOUND consumer group

b. Message consume per minute - messages consumed by each INBOUND consumer group

c. Message in per minute - inbound messages count by each INBOUND topic

d. Lag by Consumer Group - lag on each OUTBOUND consumer group

e. Message consume per minute - messages consumed by each OUTBOUND consumer group

f. Message in per minute - inbound messages count by each OUTBOUND topic

g. Lag by Consumer Group - lag on each INTERNAL BATCH consumer group

h. Message consume per minute - messages consumed by each INTERNAL BATCH consumer group

i. Message in per minute - inbound messages count by each INTERNAL BATCH topic

\n
\n
\n
" }, { "title": "HUB Performance", "pageID": "163917830", "pageLink": "/display/GMDM/HUB+Performance", "content": "
\n\n
\n
\n
\n

API Performance

\"\"

a. Read Rate - API Read operations in 5/10/15min rate

b. Read Latency - API Read operations latency in seconds for 50/75/99th percentile of requests. Consists of Reltio response time, processing time and total time

c. Write Rate - API Write operations in 5/10/15min rate

d. Write Latency - API Write operations latency in seconds for 50/75/99th percentile of requests per each API operation


\n
\n
\n
\n
\n
\n

Publishing Performance

\"\"

a. Event Preprocessing Total Rate - Publisher's preprocessed events 5/10/15min rate divided for entity/relation events

b. Event Preprocessing Total Latency - preprocessing time in seconds for 50/75/99th percentile of events


\n
\n
\n
\n
\n
\n

Subscribing Performance

\"\"

a. MDM Events Subscribing Rate - Subscriber's events rate

b. MDM Events Subscribing Latency - Subscriber's event processing (passing downstream) rate

\n
\n
\n
" }, { "title": "JMX Overview", "pageID": "163917876", "pageLink": "/display/GMDM/JMX+Overview", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview

This dashboard organizes and displays data extracted from each component by a JMX exporter - related to this component's resource usage. You can switch currently viewed environment/component/node using variables on the top of the dashboard.


\"\"

a. Memory

b. Total RAM

c. Used SWAP

d. Total SWAP

e. CPU System Load(1m avg)

f. CPU System Load(5m avg)

g. CPU Cores

h. CPU Usage

i. Memory Heap/NonHeap

j. Memory Pool Used

k. Threads used

l. Class loading

m. Open File Descriptors

n. GC time / 1 min. rate - Garbage Collector time rate/min

o. GC count - Garbage Collector operations count

" }, { "title": "Kafka Overview", "pageID": "163917904", "pageLink": "/display/GMDM/Kafka+Overview", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/YNIRYmeik/kafka-overview

This dashboard describes Kafka's per node resource usage.


\"\"

a. CPU Usage

b. JVM Memory Used

c. Time spent in GC

d. Messages in Per Topic

e. Bytes in Per Topic

f. Bytes Out Per Topic

" }, { "title": "Kafka Overview - Total", "pageID": "163917913", "pageLink": "/display/GMDM/Kafka+Overview+-+Total", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/W6OysZ5Zz/kafka-overview-total

This dashboard describes Kafka's total (all node summary) resource usage per environment.


\"\"

a. CPU Usage

b. JVM Memory Used

c. Time spent in GC

d. Messages rate

e. Bytes in Rate

f. Bytes Out Rate

" }, { "title": "Kafka Topics Overview", "pageID": "163917920", "pageLink": "/display/GMDM/Kafka+Topics+Overview", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview

This dashboard describes Kafka topics and consumer groups in each environment.


\"\"

a. Topics purge ETA in hours - approximate time it should take for each consumer group to process all the events on their topic

b. Lag by Consumer Group

c. Message in per minute - per topic

d. Message consume per minute - per consumer group

e. Message in per second - per topic

" }, { "title": "Kong Dashboard", "pageID": "163917927", "pageLink": "/display/GMDM/Kong+Dashboard", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong

This dashboard describes the Kong component statistics.


\"\"

a. Total requests per second

b. DB reachability

c. Requests per service

d. Requests by HTTP status code

e. Total Bandwidth

\"\"

f. Egress per service (All) - traffic exiting the MDM network in bytes

g. Ingress per service (All) - traffic entering the MDM network in bytes

h. Kong Proxy Latency across all services - divided on 90/95/99 percentile

i. Kong Proxy Latency per service (All) - divided on 90/95/99 percentile

j. Request Time across all services - divided on 90/95/99 percentile

k. Request Time per service (All) - divided on 90/95/99 percentile

l. Upstream Time across all services - divided on 90/95/99 percentile

m. Upstream Time per service (All) - divided on 90/95/99 percentile

\"\"

o. Nginx connection state

p. Total Connections

q. Handled Connections

r. Accepted Connections

" }, { "title": "MongoDB", "pageID": "163917945", "pageLink": "/display/GMDM/MongoDB", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb


\"\"

a. Query Operations

b. Document Operations

c. Document Query Executor

d. Member Health

e. Member State

f. Replica Query Operations

g. Uptime

h. Available Connections

i. Open Connections

j. Oplog Size

k. Memory

l. Network I/O

\"\"

m. Oplog Lag

n. Disk I/O Utilization

o. Disk Reads Completed

p. Disk Writes Completed

" }, { "title": "Snowflake Tasks", "pageID": "163917954", "pageLink": "/display/GMDM/Snowflake+Tasks", "content": "

Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/358IxM_Mz/snowflake-tasks

This dashboard describes tasks running on each Snowflake instance.

Please keep in mind that metrics supporting this dashboard are scraped rarely (every 8h on nprod, every 2h on prod), so keep the Time since last scrape gauge in mind when reviewing the results.


\"\"

a. Time since last scrape - time since the metrics were last scraped - it marks dashboard freshness

b. Last Task Runs - table contains:

c. Processing time - visualizes how the processing time of each task was changing over time

" }, { "title": "Kibana Dashboard Overview", "pageID": "164469839", "pageLink": "/display/GMDM/Kibana+Dashboard+Overview", "content": "" }, { "title": "API Calls Dashboard", "pageID": "164469837", "pageLink": "/display/GMDM/API+Calls+Dashboard", "content": "

The dashboard contains summary of MDM Gateway API calls in the chosen time range.

Use it to:


\"\"


The dashboard is divided into the following sections:

a. Total requests count - how many requests have been logged in this time range (or passed the filter if that's the case)

b. Controls - allows user to filter requests based on username and operation

c. Requests by operation - how many requests have been sent per each operation

d. Average response time - how long the response time was on average per each action

e. Request per client - how many requests have been sent per each client

f. Response status - how many requests have resulted with each status

g. Top 10 processing times - summary of 10 requests that have been processed the longest in this time range. Contains transaction ID, related entity URI, operation type and duration in ms.

\"\"

681pxh. Logs - summary of all the logged requests

" }, { "title": "Batch Loads Dashboard", "pageID": "164469855", "pageLink": "/display/GMDM/Batch+Loads+Dashboard", "content": "

The dashboard contains information about files processed by the Batch Channel component.

Use this dashboard to:


\"\"


The dashboard is divided into following sections:

a. File by type - summary of how many files of each type were delivered in this time range.

b. File load status count - visualisation of how many entities were extracted from each file type and what was the result of their processing.

c. File load count - visualisation of loaded files in this time range. Use it to verify that the files have been delivered on schedule.

d. File load summary - summary of the processing of each loaded file. 

e. Response status load summary - summary of processing result for each file type.

" }, { "title": "HL DCR Dashboard", "pageID": "164469753", "pageLink": "/display/GMDM/HL+DCR+Dashboard", "content": "

This dashboard contains information related to the HL DCR flow (DCR Service).

Use it to:


\"\"


The dashboard is divided into following sections:

a. DCR Status - summary of how many DCRs have each of the statuses

b. Reltio DCR Stats - summary of how many DCRs that have been processed and sent to Reltio have each of the statuses

c. DCRRequestProcessing report - list of DCR reports generated in this time range


\"\"


d. DCR Current state - list of DCRs and their current statuses

" }, { "title": "HUB Events Dashboard", "pageID": "164469849", "pageLink": "/display/GMDM/HUB+Events+Dashboard", "content": "

Dashboard contains information about the Publisher component - events sent to clients or internal components (ex. Callback Service).

Use it to:


\"\"


The dashboard is divided into following sections:

a. Count - how many events have been processed by the Publisher in this time range

b. Event count - visualisation of how many events have been processed over time

c. Simple events in time - visualisation of how many simple events have been processed (published) over time per each outbound topic

d. Skipped events in time - visualisation of how many events have been skipped (filtered) for each reason over time


\"\"


e. Full events in time - visualisation of how many full events have been published over time per each topic

f. Processing time - visualisation of how long the processing of entities/relations events took

g. Events by country - summary of how many events were related to each country

h. Event types - summary of how many events were of each type


\"\"


i. Full events by Topics - visualisation of how many full events of each type were published on each of the topics

j. Simple events by Topics - visualisation of how many simple events of each type were published on each of the topics

k. Publisher Logs - list containing all the useful information extracted from the Publisher logs for each event. Use it to track issues related to Publisher's event processing.

" }, { "title": "HUB Store Dashboard", "pageID": "164469853", "pageLink": "/display/GMDM/HUB+Store+Dashboard", "content": "

Summary of all entities in the MDM in this environment. Contains summary information about entities count, countries and sources. 


\"\"


The dashboard is divided into following sections:

a. Entities count - how many entities are there currently in MDM

b. Entities modification count - how many entity modifications (create/update/delete) were there over time

c. Status - summary of how many entities have each of the statuses

d. Type - summary of how many entities are HCO (Health Care Organization) or HCP (Health Care Professional)

e. MDM - summary of how many MDM entities are in Reltio/Nucleus

f. Entities country - visualisation of country to entity count

g. Entities source - visualisation of source to entity count


\"\"


h. Entities by country source type - visualisation of how many entities are there from each country with each source

i. World Map - visualisation of how many entities are there from each country


\"\"


j. Source/Country Heat Map - another visualisation of Country-Source distribution

" }, { "title": "MDM Events Dashboard", "pageID": "164469851", "pageLink": "/display/GMDM/MDM+Events+Dashboard", "content": "

This dashboard contains information extracted from the Subscriber component.

Use it to:


\"\"


The dashboard is divided into following sections:

a. Total events count - how many events have been received and published to an internal topic in this time range

b. Event types - visualisation of how many events processed were of each type

c. Event count - visualisation of how many events were processed over time

d. Event destinations - visualisation of how many events have been passed to each of internal topics over time

e. Average consume time - visualisation of how long it took to process/pass received events over time

f. Subscriber Logs - list containing all the useful information extracted from the Subscriber logs. Use it to track potential issues

" }, { "title": "Profile Updates Dashboard", "pageID": "164469751", "pageLink": "/display/GMDM/Profile+Updates+Dashboard", "content": "

This dashboard contains information about HCO/HCP profile updates via MDM Gateway.

Use it to:

Note, that the Gateway is not only used by the external vendors, but also by HUB's components (Callback Service).


\"\"


The dashboard is divided into following sections:

a. Count - how many profile updates have been logged in this time period

b. Updates by status - how many updates have each of the statuses

c. Updates count - visualisation of how many updates were received by the Gateway over time

d. Updates by country source status - visualisation of how many updates were there for each country, from each source and with each status


\"\"


e. Updates by source - summary of how many profile updates were there from each source

f. Updates by country source status - another visualisation of how many updates were there for each country, source, status

g. World Map - visualisation of how many updates were there on profiles from each of the countries


\"\"


h. Gateway Logs - list containing all the useful information extracted from the Gateway components' logs. Use it to track issues related to the MDM Gateway

" }, { "title": "Reconciliation metrics Dashboard", "pageID": "310964632", "pageLink": "/display/GMDM/Reconciliation+metrics+Dashboard", "content": "

The Reconciliation Metrics Dashboard shows reasons why the MDM object (entity or relation) was reconciled.

Use it to:

Currently, the dashboard can show the following reasons:


\"\"

 The dashboard consists of a few diagrams:

  1. {ENV NAME} Reconciliation reasons - shows the most often existing reasons for reconciliation,
  2. Number by country - general number of reconciliation reasons divided by countries,
  3. Number by types - shows the general number of reconciliation reasons grouped by MDM object type,
  4. Reason list - reconciliation reasons with the number of their occurrences,
  5. {ENV NAME} Reconciliation metrics - detail view that shows data generated by Reconciliation Metrics flow. Data has detailed information about what exactly changed on specific MDM object.
" }, { "title": "Prometheus Alerts", "pageID": "164470107", "pageLink": "/display/GMDM/Prometheus+Alerts", "content": "

Dashboards

There are 2 dashboards available for problems overview: 

Karma

Grafana - Alerts Monitoring Dashboard

Alerts

ENV | Name | Alert | Cause (Expression) | Time | Severity | Action to be taken
ALL | MDM | high_load | > 30 load1 | 30m | warning | Detect why load is increasing. Decrease the number of threads on components or turn off some of them.
ALL | MDM | high_load | > 30 load1 | 2h | critical | Detect why load is increasing. Decrease the number of threads on components or turn off some of them.
ALL | MDM | memory_usage | > 90% used | 1h | critical | Detect the component which is causing high memory usage and restart it.
ALL | MDM | disk_usage | < 10% free | 2m | high | Remove or archive old component logs.
ALL | MDM | disk_usage | < 5% free | 2m | critical | Remove or archive old component logs.
ALL | MDM | kong_processor_usage | > 120% CPU used by container | 10m | high | Check the Kong container.
ALL | MDM | cpu_usage | > 90% CPU used | 1h | critical | Detect the cause of high CPU use and take appropriate measures.
ALL | MDM | snowflake_task_not_successful_nprod | Last Snowflake task run has state other than "SUCCEEDED" | 1m | high | Investigate whether the task failed or was skipped, and what caused it. The metric value returned by the alert corresponds to the task state: 0 = FAILED, 1 = SUCCEEDED, 2 = SCHEDULED, 3 = SKIPPED.
ALL | MDM | snowflake_task_not_successful_prod | Last Snowflake task run has state other than "SUCCEEDED" | 1m | high | Investigate whether the task failed or was skipped, and what caused it. The metric value returned by the alert corresponds to the task state: 0 = FAILED, 1 = SUCCEEDED, 2 = SCHEDULED, 3 = SKIPPED.
ALL | MDM | snowflake_task_not_started_24h | Snowflake task has not started in the last 24h (+ 8h scrape time) | 1m | high | Investigate why the task was not scheduled/did not start.
ALL | MDM | reltio_response_time | Reltio response time to entities/get requests is >= 3 sec for the 99th percentile | 20m | high | Notify the Reltio Team.
NON PROD | MDM | service_down | up{env!~".*_prod"} == 0 | 20m | warning | Detect the not working component and start it.
NON PROD | MDM | kafka_streams_client_state | Kafka Streams client state != 2 | 1m | high | Check and restart the Callback Service.
NON PROD | Kong | kong_database_down | Kong DB unreachable | 20m | warning | Check the Kong DB component.
NON PROD | Kong | kong_http_500_status_rate | HTTP 500 > 10% | 5m | warning | Check Gateway components' logs.
NON PROD | Kong | kong_http_502_status_rate | HTTP 502 > 10% | 5m | warning | Check Kong's port availability.
NON PROD | Kong | kong_http_503_status_rate | HTTP 503 > 10% | 5m | warning | Check the Kong component.
NON PROD | Kong | kong_http_504_status_rate | HTTP 504 > 10% | 5m | warning | Check Reltio response rates. Check Gateway components for issues.
NON PROD | Kong | kong_http_401_status_rate | HTTP 401 > 30% | 20m | warning | Check Kong logs. Notify the authorities in case of suspected break-in attempts.
GBL NON PROD | Kafka | internal_reltio_events_lag_dev | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_relations_events_lag_dev | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_events_lag_stage | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_relations_events_lag_stage | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_events_lag_qa | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | internal_reltio_relations_events_lag_qa | > 500 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL NON PROD | Kafka | kafka_jvm_heap_memory_increasing | > 1000MB memory use predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
GBL NON PROD | Kafka | fluentd_dev_kafka_consumer_group_members | 0 EFK consumer group members | 30m | high | Check Fluentd logs. Restart Fluentd.
GBLUS NON PROD | Kafka | internal_reltio_events_lag_gblus_dev | > 500 000 | 40m | info | Check why lag is increasing. Restart the Event Publisher.
GBLUS NON PROD | Kafka | internal_reltio_events_lag_gblus_qa | > 500 000 | 40m | info | Check why lag is increasing. Restart the Event Publisher.
GBLUS NON PROD | Kafka | internal_reltio_events_lag_gblus_stage | > 500 000 | 40m | info | Check why lag is increasing. Restart the Event Publisher.
GBLUS NON PROD | Kafka | kafka_jvm_heap_memory_increasing | > 3100MB memory use predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
GBLUS NON PROD | Kafka | fluentd_gblus_dev_kafka_consumer_group_members | 0 EFK consumer group members | 30m | high | Check Fluentd logs. Restart Fluentd.
GBL PROD | MDM | service_down | count(up{env=~"gbl_prod"} == 0) by (env,component) == 1 | 5m | high | Detect the not working component and start it.
GBL PROD | MDM | service_down | count(up{env=~"gbl_prod"} == 0) by (env,component) > 1 | 5m | critical | Detect the not working component and start it.
GBL PROD | MDM | service_down_kafka_connect | 0 Kafka Connect Exporters up in the environment | 5m | critical | Check and start the Kafka Connect Exporter.
GBL PROD | MDM | service_down | One or more Kafka Connect instances down | 5m | critical | Check and start the Kafka Connect.
GBL PROD | MDM | dcr_stuck_on_prepared_status | DCR has been PREPARED for 1h | 1h | high | DCR has not been processed downstream. Notify IQVIA.
GBL PROD | MDM | dcr_processing_failure | DCR processing failed in the last 24 hours |  |  | Check DCR Service, Wrapper logs.
GBL PROD | Cron Jobs | mongo_automated_script_not_started | Mongo Cron Job has not started | 1h | high | Check the MongoDB.
GBL PROD | Kong | kong_database_down | Kong DB unreachable | 20m | warning | Check the Kong DB component.
GBL PROD | Kong | kong_http_500_status_rate | HTTP 500 > 10% | 5m | warning | Check Gateway components' logs.
GBL PROD | Kong | kong_http_502_status_rate | HTTP 502 > 10% | 5m | warning | Check Kong's port availability.
GBL PROD | Kong | kong_http_503_status_rate | HTTP 503 > 10% | 5m | warning | Check the Kong component.
GBL PROD | Kong | kong_http_504_status_rate | HTTP 504 > 10% | 5m | warning | Check Reltio response rates. Check Gateway components for issues.
GBL PROD | Kong | kong_http_401_status_rate | HTTP 401 > 30% | 10m | warning | Check Kong logs. Notify the authorities in case of suspected break-in attempts.
GBL PROD | Kafka | internal_reltio_events_lag_prod | > 1 000 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL PROD | Kafka | internal_reltio_relations_events_lag_prod | > 1 000 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBL PROD | Kafka | prod-out-full-snowflake-all_no_consumers | prod-out-full-snowflake-all has lag and has not been consumed for 2 hours | 1m | high | Check and restart the Kafka Connect Snowflake component.
GBL PROD | Kafka | internal_gw_gcp_events_deg_lag_prod | > 50 000 | 30m | info | Check the Map Channel component.
GBL PROD | Kafka | internal_gw_gcp_events_raw_lag_prod | > 50 000 | 30m | info | Check the Map Channel component.
GBL PROD | Kafka | internal_gw_grv_events_deg_lag_prod | > 50 000 | 30m | info | Check the Map Channel component.
GBL PROD | Kafka | forwarder_mapp_prod_kafka_consumer_group_members | forwarder_mapp_prod consumer group has 0 members | 30m | critical | Check the MAPP Events Forwarder.
GBL PROD | Kafka | igate_prod_kafka_consumer_group_members | igate_prod consumer group members have decreased (still > 20) | 15m | info | Check the Gateway components.
GBL PROD | Kafka | igate_prod_kafka_consumer_group_members | igate_prod consumer group members have decreased (still > 10) | 15m | high | Check the Gateway components.
GBL PROD | Kafka | igate_prod_kafka_consumer_group_members | igate_prod consumer group has 0 members | 15m | critical | Check the Gateway components.
GBL PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group members have decreased (still > 100) | 15m | info | Check the Hub components.
GBL PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group members have decreased (still > 50) | 15m | info | Check the Hub components.
GBL PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group has 0 members | 15m | info | Check the Hub components.
GBL PROD | Kafka | kafka_jvm_heap_memory_increasing | > 2100MB memory use on node 1 predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
GBL PROD | Kafka | kafka_jvm_heap_memory_increasing | > 2000MB memory use on nodes 2&3 predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
GBL PROD | Kafka | fluentd_prod_kafka_consumer_group_members | Fluentd consumer group has 0 members | 30m | high | Check and restart Fluentd.
US PROD | MDM | service_down | Batch Channel is not running | 5m | critical | Start the Batch Channel.
US PROD | MDM | service_down | 1 component is not running | 5m | high | Detect the not working component and start it.
US PROD | MDM | service_down | > 1 component is not running | 5m | critical | Detect the not working components and start them.
US PROD | Cron Jobs | archiver_not_started | Archiver has not started in 24 hours | 1h | high | Check the Archiver.
US PROD | Kafka | internal_reltio_events_lag_us_prod | > 500 000 | 5m | high | Check why lag is increasing. Restart the Event Publisher.
US PROD | Kafka | internal_reltio_events_lag_us_prod | > 1 000 000 | 5m | critical | Check why lag is increasing. Restart the Event Publisher.
US PROD | Kafka | hin_kafka_consumer_lag_us_prod | > 1000 | 15m | critical | Check why lag is increasing. Restart the Batch Channel.
US PROD | Kafka | flex_kafka_consumer_lag_us_prod | > 1000 | 15m | critical | Check why lag is increasing. Restart the Batch Channel.
US PROD | Kafka | sap_kafka_consumer_lag_us_prod | > 1000 | 15m | critical | Check why lag is increasing. Restart the Batch Channel.
US PROD | Kafka | dea_kafka_consumer_lag_us_prod | > 1000 | 15m | critical | Check why lag is increasing. Restart the Batch Channel.
US PROD | Kafka | igate_prod_hco_create_kafka_consumer_group_members | >= 30 and < 40 and lag > 1000 | 15m | info | Check why the number of consumers is decreasing. Restart the Batch Channel.
US PROD | Kafka | igate_prod_hco_create_kafka_consumer_group_members | >= 10 and < 30 and lag > 1000 | 15m | high | Check why the number of consumers is decreasing. Restart the Batch Channel.
US PROD | Kafka | igate_prod_hco_create_kafka_consumer_group_members | == 0 and lag > 1000 | 15m | critical | Check why the number of consumers is decreasing. Restart the Batch Channel.
US PROD | Kafka | hub_prod_kafka_consumer_group_members | >= 30 and < 45 and lag > 1000 | 15m | info | Check why the number of consumers is decreasing. Restart the Event Publisher.
US PROD | Kafka | hub_prod_kafka_consumer_group_members | >= 10 and < 30 and lag > 1000 | 15m | high | Check why the number of consumers is decreasing. Restart the Event Publisher.
US PROD | Kafka | hub_prod_kafka_consumer_group_members | == 0 and lag > 1000 | 15m | critical | Check why the number of consumers is decreasing. Restart the Event Publisher.
US PROD | Kafka | fluentd_prod_kafka_consumer_group_members | EFK consumer group has 0 members | 30m | high | Check and restart Fluentd.
US PROD | Kafka | flex_prod_kafka_consumer_group_members | FLEX Kafka Connector has 0 consumers | 10m | critical | Notify the FLEX Team.
GBLUS PROD | MDM | service_down | count(up{env=~"gblus_prod"} == 0) by (env,component) == 1 | 5m | high | Detect the not working component and start it.
GBLUS PROD | MDM | service_down | count(up{env=~"gblus_prod"} == 0) by (env,component) > 1 | 5m | critical | Detect the not working component and start it.
GBLUS PROD | Kong | kong_database_down | Kong DB unreachable | 20m | warning | Check the Kong DB component.
GBLUS PROD | Kong | kong_http_500_status_rate | HTTP 500 > 10% | 5m | warning | Check Gateway components' logs.
GBLUS PROD | Kong | kong_http_502_status_rate | HTTP 502 > 10% | 5m | warning | Check Kong's port availability.
GBLUS PROD | Kong | kong_http_503_status_rate | HTTP 503 > 10% | 5m | warning | Check the Kong component.
GBLUS PROD | Kong | kong_http_504_status_rate | HTTP 504 > 10% | 5m | warning | Check Reltio response rates. Check Gateway components for issues.
GBLUS PROD | Kong | kong_http_401_status_rate | HTTP 401 > 30% | 10m | warning | Check Kong logs. Notify the authorities in case of suspected break-in attempts.
GBLUS PROD | Kafka | internal_reltio_events_lag_prod | > 1 000 000 | 30m | info | Check why lag is increasing. Restart the Event Publisher.
GBLUS PROD | Kafka | igate_async_prod_kafka_consumer_group_members | igate_async_prod consumer group members have decreased (still > 20) | 15m | info | Check the Gateway components.
GBLUS PROD | Kafka | igate_async_prod_kafka_consumer_group_members | igate_async_prod consumer group members have decreased (still > 10) | 15m | high | Check the Gateway components.
GBLUS PROD | Kafka | igate_async_prod_kafka_consumer_group_members | igate_async_prod consumer group has 0 members | 15m | critical | Check the Gateway components.
GBLUS PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group members have decreased (still > 20) | 15m | info | Check the Hub components.
GBLUS PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group members have decreased (still > 10) | 15m | high | Check the Hub components.
GBLUS PROD | Kafka | hub_prod_kafka_consumer_group_members | hub_prod consumer group has 0 members | 15m | critical | Check the Hub components.
GBLUS PROD | Kafka | batch_service_prod_kafka_consumer_group_members | batch_service_prod consumer group has 0 members | 15m | critical | Check the Batch Service component.
GBLUS PROD | Kafka | batch_service_prod_ack_kafka_consumer_group_members | batch_service_prod_ack consumer group has 0 members | 15m | critical | Check the Batch Service component.
GBLUS PROD | Kafka | fluentd_gblus_prod_kafka_consumer_group_members | EFK consumer group has 0 members | 30m | high | Check Fluentd. Restart if necessary.
GBLUS PROD | Kafka | kafka_jvm_heap_memory_increasing | > 3100MB memory use predicted in 5 hours | 20m | high | Check if Kafka is rebalancing. Check the Event Publisher.
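
For reference, a minimal sketch of how one of the rules above could be expressed as a Prometheus alerting rule; the rule-file layout is an assumption, while the expression, duration, and severity come straight from the table:

    # Hypothetical rule file; expression/for/severity taken from the service_down row above.
    cat > mdm-service-down.rules.yaml <<'EOF'
    groups:
      - name: mdm
        rules:
          - alert: service_down
            expr: up{env!~".*_prod"} == 0
            for: 20m
            labels:
              severity: warning
            annotations:
              summary: "Component {{ $labels.component }} is down - detect it and start it."
    EOF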
" }, { "title": "Security", "pageID": "164470097", "pageLink": "/display/GMDM/Security", "content": "\n

The following aspects supporting security are implemented in the solution:

\n\n" }, { "title": "Authentication", "pageID": "164470075", "pageLink": "/display/GMDM/Authentication", "content": "\n

API Authentication

\n

API authentication is provided by KONG. There are two methods supported:

\n\n\n\n

The OAuth2 method is recommended, especially for cloud services. The gateway uses the Client Credentials grant type variant of OAuth2. The method is supported by the KONG OAuth2 plugin. Client secrets are managed by Kong and stored in the Cassandra configuration database.
API key authentication is a deprecated method and should be avoided for new services. Keys are unique, randomly generated 32-character strings managed by the Kong Gateway – please see the Kong Gateway documentation for details.
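
As an illustration, a minimal sketch of obtaining a token with the Client Credentials grant through the Kong OAuth2 plugin; the host, service path, and credentials below are placeholders, not the actual gateway values:

    # Request an access token (client_id/client_secret issued by the MDM HUB team):
    curl -s -X POST 'https://<gateway-host>/<service-path>/oauth2/token' \
      --data 'grant_type=client_credentials' \
      --data 'client_id=<client-id>' \
      --data 'client_secret=<client-secret>'
    # The returned access_token is then sent as: Authorization: Bearer <access_token>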

" }, { "title": "Authorization", "pageID": "164470078", "pageLink": "/display/GMDM/Authorization", "content": "\n

Rest APIs

\n

Access to exposed services is controlled with the following algorithm:

\n" }, { "title": "KONG external OAuth2 plugin", "pageID": "164470072", "pageLink": "/display/GMDM/KONG+external+OAuth2+plugin", "content": "\n

To integrate with the Ping Federate token validation process, an external KONG plugin was implemented. Source code and instructions for the installation and configuration of a local environment were published on GitHub.
Check the https://github.com/COMPANY/mdm-gateway/tree/kong/mdm-external-oauth-plugin readme file for more information.
The role of the plugin:
Validate access tokens sent by developers using a third-party OAuth 2.0 Authorization Server (RFC 7662). The plugin flow, and the request and response from PingFederate, have to be compatible with the RFC 7662 specification. To get more information about this specification, check https://tools.ietf.org/html/rfc7662. The plugin assumes that the Consumer already has an access token that will be validated against a third-party OAuth 2.0 server – Ping Federate.
Flow of the plugin:

  1. Client invokes the Gateway API, providing a token generated from the PING API
  2. The KONG plugin introspects this token:
    1. if the token is active, the plugin will fill the X-Consumer-Username header
    2. if the token is not active, access to the specific URI will be forbidden
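
For illustration, a sketch of the RFC 7662 introspection exchange the plugin performs behind the scenes; the endpoint URL and credentials are placeholders, not the actual Ping Federate configuration:

    # Introspect a token against a third-party OAuth 2.0 server (RFC 7662):
    curl -s -X POST 'https://<ping-federate-host>/as/introspect.oauth2' \
      -u '<client-id>:<client-secret>' \
      --data 'token=<access-token>'
    # Active token   -> {"active": true, "client_id": "...", "exp": ...} and the
    #                   X-Consumer-Username header is filled.
    # Inactive token -> {"active": false} and access to the URI is forbidden.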


\nExample External Plugin configuration:
\n \"\"\n
To define an mdm-external-oauth plugin, the following parameters have to be defined:

\n\n\n\n

KAFKA authentication

\n

Kafka access is protected using the SASL framework. Clients are required to specify a user and ●●●●●●●●●●● the configuration. Credentials are sent over TLS transport.
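
A minimal client-side sketch, assuming the PLAIN mechanism over SASL_SSL (the bootstrap address, topic, and credentials below are placeholders):

    # Write a Kafka client configuration for SASL/PLAIN over TLS:
    cat > client.properties <<'EOF'
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="<kafka-user>" \
      password="<kafka-password>";
    EOF
    # Use it with the standard console client to verify connectivity:
    kafka-console-consumer.sh --bootstrap-server '<kafka-host>:<port>' \
      --topic '<topic-name>' --consumer.config client.properties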

" }, { "title": "Transport", "pageID": "164470076", "pageLink": "/display/GMDM/Transport", "content": "\n

Communication between the KONG API Gateway and external systems is secured by setting up an encrypted connection with the following specifications:

\n\n\n\n


" }, { "title": "User management", "pageID": "164470079", "pageLink": "/display/GMDM/User+management", "content": "\n

User accounts are managed by the respective components of the Gateway and Hub.

\n

API Users

\n

These are managed by the Kong Gateway and stored in the Cassandra database. There are two ways of adding a new user to the Kong configuration:

  1. Using the configuration repository and Ansible

Ansible tool, which is used to deploy MDM Integration Services, has a plugin that supports Kong user management. User configuration is kept in YAML configuration files (passwords being encrypted using built-in AES-256 encryption). Adding a new user requires adding the following section to the appropriate configuration file:
\n \"\"

  2. Directly, using the Kong REST API

This method requires access to the COMPANY VPN and to the machine that hosts the MDM Integration Services, since the REST endpoints are only bound to "localhost" and not exposed to the outside world. The URL of the endpoint is:
 \"\" It can be accessed via the cURL command-line tool. To list all the users that are currently defined, use the following command:
\n \"\"
\nTo create a new user:
\n \"\" To set an API Key for the user:
\n \"\" A new API key will be automatically generated by Kong and returned in response.
\nTo create OAuth2 credentials use the following call instead:
\n \"\" client_id and client_secret are login credentials, redirect_uri should point to HUB API endpoint. Please see Kong Gateway documentation for details.\n

\n

KAFKA users

\n

Kafka users are managed by brokers. The authentication method used is the Java Authentication and Authorization Service (JAAS) with the PlainLogin module. User configuration is stored in the kafka_server_jaas.conf file, which is present on each broker. The file has the following structure:
 \"\"
Properties "username" and "password" define the credentials used to secure inter-broker communication. Properties in the format "user_<username>" are the actual definitions of users. So, adding a new user named "bob" would require adding the following property to the kafka_server_jaas.conf file:
 \"\"
CAUTION! Since the JAAS configuration file is only read on Kafka broker startup, adding a new user requires a restart of all brokers. In a multi-broker environment this can be achieved by restarting one broker at a time, which should be transparent to end users, given Kafka's fault-tolerance capabilities. This limitation might be overcome in future versions by using an external user store or a custom login module instead of PlainLoginModule. The process of adding this entry and distributing the kafka_server_jaas.conf file is automated with Ansible: usernames and ●●●●●●●●●●●● kept in a YAML configuration file, encrypted using Ansible Vault (with AES encryption).
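
A sketch of the structure described above, with placeholder credentials (in practice the file is rendered and distributed by Ansible):

    # Write the JAAS server configuration; each user_<name> entry defines a user:
    cat > kafka_server_jaas.conf <<'EOF'
    KafkaServer {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="admin"
      password="<inter-broker-password>"
      user_admin="<inter-broker-password>"
      user_bob="<bobs-password>";
    };
    EOF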

\n

MongoDB users

\n

MongoDB is used only internally by the Publishing Hub modules and is not exposed to external users, therefore there is no need to create accounts for them. For operational purposes, some administration/technical accounts may be created using the standard Mongo command-line tools, as described in the MongoDB documentation.

" }, { "title": "SOP HUB", "pageID": "164470101", "pageLink": "/display/GMDM/SOP+HUB", "content": "


" }, { "title": "Hub Configuration", "pageID": "302705379", "pageLink": "/display/GMDM/Hub+Configuration", "content": "" }, { "title": "APM:", "pageID": "302703254", "pageLink": "/pages/viewpage.action?pageId=302703254", "content": "" }, { "title": "Setup APM integration in Kibana", "pageID": "302703256", "pageLink": "/display/GMDM/Setup+APM+integration+in+Kibana", "content": "
  1. To set up APM integration in Kibana, you need to deploy the fleet server first. To do so, enable it in the mdm-hub-cluster-env repository (e.g. in emea/nprod/namespaces/emea-backend/values.yaml)
    \"\"
  2. After deploying it, open the Kibana UI and go to Fleet.
    \"\"
    Verify that fleet-server is properly configured:
    \"\"
  3. Go to Observability - APM
    \"\"
  4. Click Add the APM Integration
    \"\"
  5. Click Add Elastic APM
    \"\"
  6. Change host to 0.0.0.0:8200
    \"\"
    In section 2, choose Existing hosts and select the desired agent policy (Fleet server on ECK policy)
    \"\"
    \"\"
    Save changes
    \"\"
  7. After configuring your service to connect to the apm-server, it should be visible in Observability - APM
    \"\"


" }, { "title": "Consul:", "pageID": "302705585", "pageLink": "/pages/viewpage.action?pageId=302705585", "content": "" }, { "title": "Updating Dictionary", "pageID": "164470212", "pageLink": "/display/GMDM/Updating+Dictionary", "content": "

To update a dictionary from Excel:

  1. Convert the Excel file to CSV format
  2. Change the EOL to Unix
  3. Put the file in the appropriate path under config-ext in the mdm-config-registry repository
  4. Check the Updating ETL Dictionaries in Consul page for the appropriate Consul UI URL (you need to have a security token set in the ACL section)
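
A hedged sketch of steps 1-3 (in2csv from csvkit is only one option for the conversion; file names and paths are placeholders):

    in2csv dictionary.xlsx > dictionary.csv     # 1. convert Excel to CSV
    dos2unix dictionary.csv                     # 2. change EOL to Unix
    cp dictionary.csv mdm-config-registry/config-ext/<env>/<dictionary-path>/   # 3. place in repo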
" }, { "title": "Updating ETL Dictionaries in Consul", "pageID": "164470102", "pageLink": "/display/GMDM/Updating+ETL+Dictionaries+in+Consul", "content": "

The configuration repository has dedicated directories that store dictionaries used by the ETL engine while loading data with the batch service. The content of the directories is published to Consul. The table shows the directory name and the Consul key under which the data is posted:

Dir name | Consul key
config-ext/dev_gblus | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_gblus/
config-ext/qa_gblus | https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_gblus/
config-ext/prod_gblus | https://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_gblus/
config-ext/dev_emea | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_emea/
config-ext/qa_emea | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_emea/
config-ext/stage_emea | https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/stage_emea/
config-ext/prod_emea | https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_emea/
config-ext/dev_apac | https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_apac/
config-ext/qa_apac | https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_apac/
config-ext/stage_apac | https://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/stage_apac/
config-ext/prod_apac | https://consul-apac-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_apac/

To update Consul values you have to:

  1. Make changes in the desired directory and push them to the master git branch,
  2. git2consul will synchronize the git repo to Consul.

Please be advised that a proper SecretId token is required to access the key/value path you desire. This is especially important for the AMER/GBLUS directories.
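
To verify that a dictionary was published, one option is the Consul HTTP API, sketched below with a placeholder key and token:

    # Read a published key through the Consul KV API (the SecretId is passed as the ACL token):
    curl -s --header 'X-Consul-Token: <SecretId>' \
      'https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/v1/kv/dev_emea/<dictionary-key>?raw'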

" }, { "title": "Environment Setup:", "pageID": "164470244", "pageLink": "/pages/viewpage.action?pageId=164470244", "content": "" }, { "title": "Configuration (amer k8s)", "pageID": "228917406", "pageLink": "/pages/viewpage.action?pageId=228917406", "content": "

Configuration steps:

  1. Configure mongo permissions for users mdm_batch_service, mdmhub, and mdmgw. Add permissions to the database schema related to the new environment:

    ---
    users:
      mdm_batch_service:
        mongo:
          databases:
            reltio_amer-dev:
              roles:
                - "readWrite"
            reltio_[tenant-env]:
              roles:
                - "readWrite"

2. Add a directory with environment configuration files in amer/nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.

3. Change file [tenant-env]/values.yaml:

4. Change file [tenant-env]/kafka-topics.yaml by changing the prefix of the topic names.

5. Add a kafka connect instance for the newly added environment - add the configuration section to the kafkaConnect property located in amer/nprod/namespaces/amer-backend/values.yaml
5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key.passphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key

6. Configure Consul (amer/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):

7. Modify components configuration:

\"\"

8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change fluentd.kafka.topics list.

9. Monitoring

a) Add additional service monitor to amer/nprod/namespaces/monitoring/service-monitors.yaml configuration file:

- namespace: [tenant-env]

  name: sm-[tenant-env]-services

  selector:

    matchLabels:

      prometheus: [tenant-env]-services

  endpoints:

    - port: prometheus

      interval: 30s

      scrapeTimeout: 30s

    - port: prometheus-fluent-bit

      path: "/api/v1/metrics/prometheus"

      interval: 30s

      scrapeTimeout: 30s

b) Add Snowflake database details to amer/nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:

jdbcExporters:
  amer-dev:
    db:
      url: "jdbc:snowflake://amerdev01.us-east-1.privatelink.snowflakecomputing.com/?db=COMM_AMER_MDM_DMART_DEV_DB&role=COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE&warehouse=COMM_MDM_DMART_WH"
      username: "[ USERNAME ]"

Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yaml

jdbcExporters:
  amer-dev:
    db:
      password: "[ ●●●●●●●●●●●


10. Run Jenkins job responsible for deploying backend services - to apply mongo and fluentd changes.

11. Connect to the mongodb server and create the schema reltio_[tenant-env].

11.1 Create collections and indexes in the newly added schemas:
 Intellishell

db.createCollection("entityHistory") 
db.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});
db.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});
db.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});
db.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});
db.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});
db.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});
db.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});
db.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});
db.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});
db.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});
db.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});
db.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});

db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});


db.createCollection("entityRelations")
db.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});
db.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});
db.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});
db.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});
db.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});
db.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});
db.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});
db.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});
db.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   
db.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   
db.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   
db.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});
 
db.createCollection("LookupValues")
db.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});
db.LookupValues.createIndex({countries: 1},  {background: true, name:  "idx_countries"});
db.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});
db.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});
db.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});
db.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});

db.createCollection("ErrorLogs")
db.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});
db.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});
db.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});
db.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});

db.createCollection("batchEntityProcessStatus")
db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});
db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});
db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});
db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});

db.createCollection("batchInstance")

db.createCollection("relationCache")
db.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});

db.createCollection("DCRRequests")
db.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});
db.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});
db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});

db.createCollection("entityMatchesHistory")
db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});


db.createCollection("DCRRegistry")

db.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});

db.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});
db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});

db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});


db.createCollection("sequenceCounters")

db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) // NOTE: replace [sequence start number] with the value from the table below

Region | Seq start number
emea | 5000000000
amer | 6000000000
apac | 7000000000

12. Run Jenkins job to deploy kafka resources and mdmhub components for the new environment.

13. Create paths on the S3 bucket required by Snowflake and Airflow's DAGs (a sketch follows below).
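
A hedged sketch for step 13; the bucket name and prefixes are placeholders, and the real paths depend on the Snowflake stage and DAG configuration:

    # Create empty prefix markers on the S3 bucket:
    aws s3api put-object --bucket '<mdm-hub-bucket>' --key '[tenant-env]/snowflake/'
    aws s3api put-object --bucket '<mdm-hub-bucket>' --key '[tenant-env]/airflow/'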

14. Configure Kibana:

15. Configure basic Airflow DAGs (ansible directory):

16. Deploy DAGs (NOTE: check that your kubectl is configured to communicate with the cluster you want to change):

ansible-playbook install_mdmgw_airflow_services_k8s.yml -i inventory/[tenant-env]/inventory

17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. 


Verification points

Check Reltio's configuration - get the reltio tenant configuration (one option is sketched after this list):

  1. Check if you are able to execute Reltio's operations using the credentials of the service user,

  2. Check if streaming processing is enabled - streamingConfig.messaging.destinations.enabled = true, streamingConfig.streamingEnabled = true, streamingConfig.streamingAPIEnabled = true,

  3. Check if cassandra export is configured - exportConfig.smartExport.secondaryDsEnabled = false.
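
A hedged way to pull the tenant configuration for these checks; the endpoint path is an assumption about the Reltio tenant management API, and the host, tenant, and token are placeholders:

    # Fetch the tenant configuration and inspect the streaming/export settings:
    curl -s -H 'Authorization: Bearer <access-token>' \
      'https://<reltio-env>.reltio.com/reltio/tenants/<tenant-id>' \
      | jq '.streamingConfig, .exportConfig.smartExport.secondaryDsEnabled'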


Check Kafka:

  1. Check if you are able to connect to the kafka server using a command-line client run from your local machine.


Check Mongo:

  1. Users mdmgw, mdmhub and mdm_batch_service - permissions for the newly added database (readWrite),
  2. Indexes,

  3. Verify that the correct start value is set for the sequence COMPANYAddressIDSeq - collection sequenceCounters, _id = COMPANYAddressIDSeq.


Check MDMHUB API:

  1. Check the mdm-manager API with apikey authentication by executing one of the read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible in the case when there is no HCP data in Reltio,

  2. Run the same operation using oAuth2 authentication - remember that the manager url is different,
  3. Check the mdm-manager API with apikey authentication by executing a write operation:

    curl --location --request POST '{{ manager_url }}/hcp' \\
    --header 'apikey: {{ api_key }}' \\
    --header 'Content-Type: application/json' \\
    --data-raw '{
      "hcp" : {
        "type" : "configuration/entityTypes/HCP",
        "attributes" : {
          "Country" : [ {
            "value" : "{{ country }}"
          } ],
          "FirstName" : [ {
            "value" : "Verification Test MDMHUB"
          } ],
          "LastName" : [ {
            "value" : "Verification Test MDMHUB"
          } ]
        },
        "crosswalks" : [ {
          "type" : "configuration/sources/{{ source }}",
          "value" : "verification_test_mdmhub"
        } ]
      }
    }'

    Replace all placeholders in the above request with the correct values for the configured environment. The response should return HTTP code 200 and a URI of the created object. After verification, delete the created object by running: curl --location --request DELETE '{{ manager_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'
  4. Run the same operations using oAuth2 authentication - remember that the mdm manager url is different,
  5. Verify the api-router API with apikey authentication using a search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible in the case when there is no HCP data in Reltio,

  6. Check the api-router API with apikey authentication by executing a write operation:

    curl --location --request POST '{{ api_router_url }}/hcp' \\
    --header 'apikey: {{ api_key }}' \\
    --header 'Content-Type: application/json' \\
    --data-raw '{
      "hcp" : {
        "type" : "configuration/entityTypes/HCP",
        "attributes" : {
          "Country" : [ {
            "value" : "{{ country }}"
          } ],
          "FirstName" : [ {
            "value" : "Verification Test MDMHUB"
          } ],
          "LastName" : [ {
            "value" : "Verification Test MDMHUB"
          } ]
        },
        "crosswalks" : [ {
          "type" : "configuration/sources/{{ source }}",
          "value" : "verification_test_mdmhub"
        } ]
      }
    }'

    Replace all placeholders in the above request with the correct values for the configured environment. The response should return HTTP code 200 and a URI of the created object. After verification, delete the created object by running: curl --location --request DELETE '{{ api_router_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'
  7. Run the same operations using oAuth2 authentication - remember that the api router url is different,
  8. Check the batch service API with apikey authentication by executing the following operation: GET {{ batch_service_url }}/batchController/NA/instances/NA. The request should return a 403 HTTP code and the body:

    {

        "code": "403",

        "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."

    }

    The request doesn't create any batch.

  9. Run the same operation using oAuth2 authentication - remember that the batch service url is different,
  10. Verify the component logs: mdm-manager, api-router and batch-service. Focus on errors and kafka records - rebalancing, authorization problems, topic existence warnings etc.


MDMHUB streaming services:

  1. Check the logs of the reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. Verify that there are no errors or kafka warnings related to rebalancing, authorization problems, topic existence etc,

  2. Verify that the lookup refresh process is working properly - check the existence of the mongo collection LookupValues. It should have data,


Airflow:

  1. Check if DAGs are enabled and have a defined schedule,
  2. Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.

  3. Wait for them to finish and validate the results.


Snowflake:

  1. Check snowflake connector logs,

  2. Check if the tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist in the LANDING schema and have data,

  3. Verify if mdm-hub-snowflake-dm package is deployed,
  4. What else?


Monitoring:

  1. Check grafana dashboards:
    1. HUB Performance,
    2. Kafka Topics Overview,
    3. Host Statistics,
    4. JMX Overview,
    5. Kong,
    6. MongoDB.
  2. Check Kibana index patterns:
    1. {{env}}-internal-batch-efk-transactions*,
    2. {{env}}-internal-gw-efk-transactions*,
    3. {{env}}-internal-publisher-efk-transactions*,
    4. {{env}}-internal-subscriber-efk-transactions*,
    5. {{env}}-mdmhub,
  3. Check Kibana dashboards:
    1. {{env}} API calls,
    2. {{env}} Batch Instances,
    3. {{env}} Batch loads,
    4. {{env}} Error Logs Overview,
    5. {{env}} Error Logs RDM,
    6. {{env}} HUB Store
    7. {{env}} HUB events,

    8. {{env}} MDM Events,
    9. {{env}} Profile Updates,
  4. Check alerts - How?



" }, { "title": "Configuration (amer prod k8s)", "pageID": "234691394", "pageLink": "/pages/viewpage.action?pageId=234691394", "content": "

Configuration steps:

  1. Copy the mdm-hub-cluster-env/amer/nprod directory into the mdm-hub-cluster-env/amer/prod directory.
  2. Replace ...
  3. Certificates
    1. Generate private-keys, CSRs and request Kong certificate (kong/config_files/certs).

      marek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-amer-prod-gbl-mdm-hub.COMPANY.com.key -out api-amer-prod-gbl-mdm-hub.COMPANY.com.csr
      Generating a RSA private key
      .....+++++
      .....................................................+++++
      writing new private key to 'api-amer-prod-gbl-mdm-hub.COMPANY.com.key'
      -----
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [AU]:
      State or Province Name (full name) [Some-State]:
      Locality Name (eg, city) []:
      Organization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated
      Organizational Unit Name (eg, section) []:
      Common Name (e.g. server FQDN or YOUR name) []: api-amer-prod-gbl-mdm-hub.COMPANY.com
      Email Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com

      Please enter the following 'extra' attributes
      to be sent with your certificate request
      A challenge ●●●●●●●●●●●●
      An optional company name []:
    2. Generate private-keys, CSRs and request the Kafka certificate (amer-backend/secrets.yaml)

      marek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-amer-prod-gbl-mdm-hub.COMPANY.com.key -out kafka-amer-prod-gbl-mdm-hub.COMPANY.com.csr
      Generating a RSA private key
      ..........................+++++
      .....+++++
      writing new private key to 'kafka-amer-prod-gbl-mdm-hub.COMPANY.com.key'
      -----
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [AU]:
      State or Province Name (full name) [Some-State]:
      Locality Name (eg, city) []:
      Organization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated
      Organizational Unit Name (eg, section) []:
      Common Name (e.g. server FQDN or YOUR name) []:kafka-amer-prod-gbl-mdm-hub.COMPANY.com
      Email Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com

      Please enter the following 'extra' attributes
      to be sent with your certificate request
      A challenge ●●●●●●●●●●●●
      An optional company name []:




BELOW IS AMER NPROD COPY WE USE AS A REFERENCE


Configuration steps:

  1. Configure mongo permissions for users mdm_batch_service, mdmhub, and mdmgw. Add permissions to the database schema related to the new environment:

    ---
    users:
      mdm_batch_service:
        mongo:
          databases:
            reltio_amer-dev:
              roles:
                - "readWrite"
            reltio_[tenant-env]:
              roles:
                - "readWrite"

2. Add a directory with environment configuration files in amer/nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.

3. Change file [tenant-env]/values.yaml:

4. Change file [tenant-env]/kafka-topics.yaml by changing the prefix of the topic names.

5. Add a kafka connect instance for the newly added environment - add the configuration section to the kafkaConnect property located in amer/nprod/namespaces/amer-backend/values.yaml
5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key.passphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key

6. Configure Consul (amer/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):

7. Modify components configuration:

8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change fluentd.kafka.topics list.

9. Monitoring

a) Add additional service monitor to amer/nprod/namespaces/monitoring/service-monitors.yaml configuration file:

- namespace: [tenant-env]
  name: sm-[tenant-env]-services
  selector:
    matchLabels:
      prometheus: [tenant-env]-services
  endpoints:
    - port: prometheus
      interval: 30s
      scrapeTimeout: 30s
    - port: prometheus-fluent-bit
      path: "/api/v1/metrics/prometheus"
      interval: 30s
      scrapeTimeout: 30s

b) Add Snowflake database details to amer/nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:

jdbcExporters:
  amer-dev:
    db:
      url: "jdbc:snowflake://amerdev01.us-east-1.privatelink.snowflakecomputing.com/?db=COMM_AMER_MDM_DMART_DEV_DB&role=COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE&warehouse=COMM_MDM_DMART_WH"
      username: "[ USERNAME ]"

Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yaml

jdbcExporters:
  amer-dev:
    db:
      password: "[ ●●●●●●●●●●●


10. Run Jenkins job responsible for deploying backend services - to apply mongo and fluentd changes.

11. Connect to the mongodb server and create the schema reltio_[tenant-env].

11.1 Create collections and indexes in the newly added schemas:
 Intellishell

db.createCollection("entityHistory") 
db.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});
db.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});
db.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});
db.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});
db.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});
db.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});
db.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});
db.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});
db.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});
db.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});
db.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});
db.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});

db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});


db.createCollection("entityRelations")
db.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});
db.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});
db.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});
db.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});
db.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});
db.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});
db.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});
db.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});
db.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   
db.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   
db.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   
db.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});
 
db.createCollection("LookupValues")
db.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});
db.LookupValues.createIndex({countries: 1},  {background: true, name:  "idx_countries"});
db.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});
db.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});
db.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});
db.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});

db.createCollection("ErrorLogs")
db.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});
db.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});
db.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});
db.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});

db.createCollection("batchEntityProcessStatus")
db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});
db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});
db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});
db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});

db.createCollection("batchInstance")

db.createCollection("relationCache")
db.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});

db.createCollection("DCRRequests")
db.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});
db.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});
db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});

db.createCollection("entityMatchesHistory")
db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});


db.createCollection("DCRRegistry")

db.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});

db.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});
db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});

db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});


db.createCollection("sequenceCounters")

db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) // NOTE: replace [sequence start number] with the value from the table below

Region | Seq start number
emea | 5000000000
amer | 6000000000
apac | 7000000000

12. Run Jenkins job to deploy kafka resources and mdmhub components for the new environment.

13. Create paths on S3 bucket required by Snowflake and Airflow's DAGs.

14. Configure Kibana:

15. Configure basic Airflow DAGs (ansible directory):

16. Deploy DAGs (NOTE: check that your kubectl is configured to communicate with the cluster you want to change):

ansible-playbook install_mdmgw_airflow_services_k8s.yml -i inventory/[tenant-env]/inventory

17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. 


Verification points

Check Reltio's configuration - get the reltio tenant configuration:

  1. Check if you are able to execute Reltio's operations using the credentials of the service user,

  2. Check if streaming processing is enabled - streamingConfig.messaging.destinations.enabled = true, streamingConfig.streamingEnabled = true, streamingConfig.streamingAPIEnabled = true,

  3. Check if cassandra export is configured - exportConfig.smartExport.secondaryDsEnabled = false.


Check Mongo:

  1. Users mdmgw, mdmhub and mdm_batch_service - permissions for the newly added database (readWrite),
  2. Indexes,

  3. Verify that the correct start value is set for the sequence COMPANYAddressIDSeq - collection sequenceCounters, _id = COMPANYAddressIDSeq.


Check MDMHUB API:

  1. Check the mdm-manager API with apikey authentication by executing one of the read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible in the case when there is no HCP data in Reltio,

  2. Run the same operation using oAuth2 authentication - remember that the manager url is different,
  3. Verify the api-router API with apikey authentication using a search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible in the case when there is no HCP data in Reltio,

  4. Run the same operation using oAuth2 authentication - remember that the api router url is different,
  5. Check the batch service API with apikey authentication by executing the following operation: GET {{ batch_service_url }}/batchController/NA/instances/NA. The request should return a 403 HTTP code and the body:

    {

        "code": "403",

        "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."

    }

    The request doesn't create any batch.

  6. Run the same operation using oAuth2 authentication - remember that the batch service url is different,
  7. Verify the component logs: mdm-manager, api-router and batch-service. Focus on errors and kafka records - rebalancing, authorization problems, topic existence warnings etc.


MDMHUB streaming services:

  1. Check the logs of the reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. Verify that there are no errors or kafka warnings related to rebalancing, authorization problems, topic existence etc,

  2. Verify that the lookup refresh process is working properly - check the existence of the mongo collection LookupValues. It should have data,


Airflow:

  1. Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.

  2. Wait for them to finish and validate the results.


Snowflake:

  1. Check snowflake connector logs,

  2. Check if the tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist in the LANDING schema and have data,

  3. Verify if mdm-hub-snowflake-dm package is deployed,
  4. What else?


Monitoring:

  1. Check grafana dashboards:
    1. HUB Performance,
    2. Kafka Topics Overview,
    3. Host Statistics,
    4. JMX Overview,
    5. Kong,
    6. MongoDB.
  2. Check Kibana index patterns:
    1. {{env}}-internal-batch-efk-transactions*,
    2. {{env}}-internal-gw-efk-transactions*,
    3. {{env}}-internal-publisher-efk-transactions*,
    4. {{env}}-internal-subscriber-efk-transactions*,
    5. {{env}}-mdmhub,
  3. Check Kibana dashboards:
    1. {{env}} API calls,
    2. {{env}} Batch Instances,
    3. {{env}} Batch loads,
    4. {{env}} Error Logs Overview,
    5. {{env}} Error Logs RDM,
    6. {{env}} HUB Store
    7. {{env}} HUB events,

    8. {{env}} MDM Events,
    9. {{env}} Profile Updates,
  4. Check alerts - How?



" }, { "title": "Configuration (apac k8s)", "pageID": "228933487", "pageLink": "/pages/viewpage.action?pageId=228933487", "content": "

Installation of a new APAC non-prod cluster based on the AMER non-prod configuration.


  1. Copy the mdm-hub-cluster-env/amer directory into the mdm-hub-cluster-env/apac directory.

  2. Change dir names from "amer" to "apac".

  3. Replace everything in the files in the apac directory: "amer" → "apac" (a shell sketch follows the screenshot below).
    \"\"

  4. Certificates

    1. Generate private-keys, CSRs and request Kong certificate (kong/config_files/certs).

      anuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-apac-nprod-gbl-mdm-hub.COMPANY.com.key -out api-apac-nprod-gbl-mdm-hub.COMPANY.com.csr
      Generating a RSA private key
      ..................+++++
      .........................+++++
      writing new private key to 'api-apac-nprod-gbl-mdm-hub.COMPANY.com.key'
      -----
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [AU]:
      State or Province Name (full name) [Some-State]:
      Locality Name (eg, city) []:
      Organization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated
      Organizational Unit Name (eg, section) []:
      Common Name (e.g. server FQDN or YOUR name) []:api-apac-nprod-gbl-mdm-hub.COMPANY.com
      Email Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com

      Please enter the following 'extra' attributes
      to be sent with your certificate request
      A challenge ●●●●●●●●●●●●
      An optional company name []:

      SAN:
      DNS Name=api-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=www.api-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kibana-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=prometheus-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=grafana-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=elastic-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=consul-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=akhq-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=airflow-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=mongo-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=mdm-log-management-apac-nonprod.COMPANY.com
      DNS Name=gbl-mdm-hub-apac-nprod.COMPANY.com

      Place private-key and signed certificate in kong/config_files/certs. Git-ignore them and encrypt them into .encrypt files.

    2. Generate private-keys, CSRs and request Kafka certificate (apac-backend/secrets.yaml)

      anuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key -out kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.csr
      Generating a RSA private key
      ................................................................+++++
      .......................................+++++
      writing new private key to 'kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key'
      -----
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [AU]:
      State or Province Name (full name) [Some-State]:
      Locality Name (eg, city) []:
      Organization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated
      Organizational Unit Name (eg, section) []:
      Common Name (e.g. server FQDN or YOUR name) []:kafka-apac-nprod-gbl-mdm-hub.COMPANY.com
      Email Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com

      Please enter the following 'extra' attributes
      to be sent with your certificate request
      A challenge password []: ●●●●●●●●●●●●
      An optional company name []:

      SAN:
      DNS Name=kafka-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b2-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b3-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b4-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b5-apac-nprod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b6-apac-nprod-gbl-mdm-hub.COMPANY.com

      After receiving the certificate, encode it with base64 (as sketched below) and paste it into apac-backend/secrets.yaml:
        -> secrets.mdm-kafka-external-listener-cert.listener.key
        -> secrets.mdm-kafka-external-listener-cert.listener.crt 
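
      For example (a sketch, assuming GNU base64; -w0 disables line wrapping so the value can be pasted as a single line):

        $ base64 -w0 kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key > listener.key.b64
        $ base64 -w0 kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.crt > listener.crt.b64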

  5.  (*) Since this is a new environment, remove everything under "migration" key in apac-backend/values.yaml.

  6. Replace all user_passwords in apac/nprod/secrets.yaml: for each ●●●●●●●●●●●●●●●●● generate a new, 32-char one and globally replace it in all apac configs (one way to generate such a password is sketched below).
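
    One way to generate such a password (a sketch; 24 random bytes encode to exactly 32 base64 characters):

      $ openssl rand -base64 24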

  7. Go through apac-dev/config_files one by one and adjust settings such as: Reltio, SQS etc.

  8. (*) Change Kafka topic and consumer-group names to fit the naming standards. This is a one-time activity and does not need to be repeated if subsequent environments are built based on the APAC config.
  9. Export the amer-nprod CRDs into a YAML file and import them into apac-nprod (a quick count comparison is sketched after the commands):

    $ kubectx atp-mdmhub-nprod-amer
    $ kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml
    $ kubectx atp-mdmhub-nprod-apac
    $ kubectl apply -f ~/crd-definitions-amer.yaml
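
    A quick sanity check is to compare CRD counts between the two clusters (sketch):

      $ kubectl --context atp-mdmhub-nprod-amer get crd --no-headers | wc -l
      $ kubectl --context atp-mdmhub-nprod-apac get crd --no-headers | wc -l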
  10. Create config dirs for git2consul (mdm-hub-env-config):

    $ git checkout config/dev_amer
    $ git pull
    $ git branch config/dev_apac
    $ git checkout config/dev_apac
    $ git push origin config/dev_apac

    Repeat for qa and stage.

  11. Install operators:

    $ ./install.sh -l operators -r apac -c nprod -e apac-dev -v 3.9.4
  12. Install backend:

    $ ./install.sh -l backend -r apac -c nprod -e apac-dev -v 3.9.4
  13. Log into MongoDB (use a port forward if there is no connection through Kong: run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to Mongo on localhost:27017). Run the script below:

    db.createCollection("entityHistory")
    db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
    db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
    db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});
    db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});
    db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
    db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
    db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
    db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
    db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
    db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});
    db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});
    db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});
    db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});

    db.createCollection("entityRelations")
    db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});
    db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});
    db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});
    db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});
    db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
    db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
    db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});
    db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});
    db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
    db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
    db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
    db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});

    db.createCollection("LookupValues")
    db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});
    db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});
    db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});
    db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});
    db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});
    db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});

    db.createCollection("ErrorLogs")
    db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});
    db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});
    db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});
    db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});

    db.createCollection("batchEntityProcessStatus")
    db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});

    db.createCollection("batchInstance")

    db.createCollection("relationCache")
    db.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});

    db.createCollection("DCRRequests")
    db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});
    db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});
    db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});

    db.createCollection("entityMatchesHistory")
    db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});

    db.createCollection("DCRRegistry")
    db.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});
    db.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});
    db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});

    db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});

    db.createCollection("sequenceCounters")
    db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong(7000000000)}) // NOTE: 7000000000 is APAC-specific
  14. Log into Kibana. Export dashboards/indices from AMER and import them in APAC (a possible API-based approach is sketched below).
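
    A possible API-based approach - a sketch using the Kibana saved objects API (Kibana 7.x); the AMER hostname and the object types are assumptions:

      $ curl -s -X POST "https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com/api/saved_objects/_export" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"type": ["dashboard", "index-pattern"], "includeReferencesDeep": true}' > kibana-objects.ndjson
      $ curl -s -X POST "https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com/api/saved_objects/_import" -H 'kbn-xsrf: true' --form file=@kibana-objects.ndjson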
  15. Install mdmhub:

    $ ./install.sh -l mdmhub -r apac -c nprod -e apac-dev -v 3.9.4
  16. Tickets:
    1. DNS names ticket:

      Ticket queue: GBL-NETWORK DDI

      Title: Add domains to DNS


      Description:

      Hi Team,

      Please add below domains:

      api-apac-nprod-gbl-mdm-hub.COMPANY.com
      kibana-apac-nprod-gbl-mdm-hub.COMPANY.com
      prometheus-apac-nprod-gbl-mdm-hub.COMPANY.com
      grafana-apac-nprod-gbl-mdm-hub.COMPANY.com
      elastic-apac-nprod-gbl-mdm-hub.COMPANY.com
      consul-apac-nprod-gbl-mdm-hub.COMPANY.com
      akhq-apac-nprod-gbl-mdm-hub.COMPANY.com
      airflow-apac-nprod-gbl-mdm-hub.COMPANY.com
      mongo-apac-nprod-gbl-mdm-hub.COMPANY.com
      mdm-log-management-apac-nonprod.COMPANY.com
      gbl-mdm-hub-apac-nprod.COMPANY.com

      as CNAMEs of our ELB:
      a81322116787943bf80a29940dbc2891-00e7418d9be731b0.elb.ap-southeast-1.amazonaws.com

      Also, please add one CNAME for each one of below ELBs:

      CNAME: kafka-apac-nprod-gbl-mdm-hub.COMPANY.com
      ELB: a7ba438d7068b4a799d29d3d408b0932-1e39235cdff6d511.elb.ap-southeast-1.amazonaws.com

      CNAME: kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.com
      ELB: a72bbc64327cb4ee4b35ae5abeefbb26-4c392c106b29b6e5.elb.us-east-1.amazonaws.com

      CNAME: kafka-b2-apac-nprod-gbl-mdm-hub.COMPANY.com
      ELB: a7fdb6117b2184096915aed31732110b-91c5ac7fb0968710.elb.us-east-1.amazonaws.com

      CNAME: kafka-b3-apac-nprod-gbl-mdm-hub.COMPANY.com
      ELB: a99220323cc684bcaa5e29c198777e13-ddf5ddbf36fe3025.elb.us-east-1.amazonaws.com

      Best Regards,
      Piotr
      MDM Hub
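
      After the ticket is resolved, the new records can be spot-checked with dig, e.g.:

        $ dig +short api-apac-nprod-gbl-mdm-hub.COMPANY.com CNAME
        $ dig +short kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.com CNAME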
    2. Firewall whitelisting

      Ticket queue: GBL-NETWORK ECS

      Title: Firewall exceptions for new BoldMoves PDKS cluster


      Description:

      Hi Team,

      Please open all traffic listed in the attached Excel sheet.
      In case this is not the queue where I should request Firewall changes, kindly point me in the right direction.

      Best Regards,
      Piotr
      MDM Hub

      Attached excel:

      Source | Source IP | Destination | Destination IP | Port

      MDM Hub monitoring (euw1z1pl046.COMPANY.com), CI/CD server (sonar-gbicomcloud.COMPANY.com) | 10.90.98.0/24 | pdcs-apa1p.COMPANY.com | - | 443

      MDM Hub monitoring (euw1z1pl046.COMPANY.com), CI/CD server (sonar-gbicomcloud.COMPANY.com), EMEA NPROD MDM Hub | 10.90.98.0/24 | APAC NPROD - PDKS cluster | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | 443, 9094

      Global NPROD MDM Hub | 10.90.96.0/24 | APAC NPROD - PDKS cluster | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | 443

      APAC NPROD - PDKS cluster | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | Global NPROD MDM Hub | 10.90.96.0/24 | 8443

      APAC NPROD - PDKS cluster | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | EMEA NPROD MDM Hub | 10.90.98.0/24 | 8443
  17. Integration tests:
    In mdm-hub-env-config prepare inventory/kube_dev_apac (copy kube_dev_amer and adjust variables), then run the "prepare_int_tests" playbook:

    $ ansible-playbook prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"


    In mdm-hub-inbound-services confirm that the test resources (citrus properties) for mdm-integration-tests have been replaced, then run two Gradle tasks (a possible command-line invocation is sketched below):
    - mdm-gateway/mdm-integration-tests/Tasks/verification/commonIntegrationTests
    - mdm-gateway/mdm-integration-tests/Tasks/verification/integrationTestsForCOMPANYModel
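
    A possible command-line invocation (the Gradle project paths are assumptions derived from the task locations listed above):

      $ ./gradlew :mdm-gateway:mdm-integration-tests:commonIntegrationTests
      $ ./gradlew :mdm-gateway:mdm-integration-tests:integrationTestsForCOMPANYModel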


" }, { "title": "Configuration (apac prod k8s)", "pageID": "234699630", "pageLink": "/pages/viewpage.action?pageId=234699630", "content": "

Installation of a new APAC prod cluster based on the AMER prod configuration.


  1. Copy mdm-hub-cluster-env/amer/prod directory into mdm-hub-cluster-env/apac directory.

  2. Change dir names from "amer" to "apac" - apac-backend, apac-prod

  3. In all files in the apac directory, replace every occurrence of "amer" with "apac".

  4. Certificates

    1. Generate private-keys, CSRs and request Kong certificate (kong/config_files/certs).

      anuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-apac-prod-gbl-mdm-hub.COMPANY.com.key -out api-apac-prod-gbl-mdm-hub.COMPANY.com.csr
      Generating a RSA private key
      ..................+++++
      .........................+++++
      writing new private key to 'api-apac-prod-gbl-mdm-hub.COMPANY.com.key'
      -----
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [AU]:
      State or Province Name (full name) [Some-State]:
      Locality Name (eg, city) []:
      Organization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated
      Organizational Unit Name (eg, section) []:
      Common Name (e.g. server FQDN or YOUR name) []:api-apac-prod-gbl-mdm-hub.COMPANY.com
      Email Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com

      Please enter the following 'extra' attributes
      to be sent with your certificate request
      A challenge password []: ●●●●●●●●●●●●
      An optional company name []:

      SAN:
      DNS Name=api-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=www.api-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kibana-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=prometheus-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=grafana-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=elastic-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=consul-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=akhq-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=airflow-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=mongo-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=mdm-log-management-apac-prod.COMPANY.com
      DNS Name=gbl-mdm-hub-apac-prod.COMPANY.com

      Place private-key and signed certificate in kong/config_files/certs. Git-ignore them and encrypt them into .encrypt files.

    2. Generate private-keys, CSRs and request Kafka certificate (apac-backend/secrets.yaml)

      anuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-apac-prod-gbl-mdm-hub.COMPANY.com.key -out kafka-apac-prod-gbl-mdm-hub.COMPANY.com.csr
      Generating a RSA private key
      ................................................................+++++
      .......................................+++++
      writing new private key to 'kafka-apac-prod-gbl-mdm-hub.COMPANY.com.key'
      -----
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [AU]:
      State or Province Name (full name) [Some-State]:
      Locality Name (eg, city) []:
      Organization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated
      Organizational Unit Name (eg, section) []:
      Common Name (e.g. server FQDN or YOUR name) []:kafka-apac-prod-gbl-mdm-hub.COMPANY.com
      Email Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com

      Please enter the following 'extra' attributes
      to be sent with your certificate request
      A challenge password []: ●●●●●●●●●●●●
      An optional company name []:

      SAN:
      DNS Name=kafka-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b1-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b2-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b3-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b4-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b5-apac-prod-gbl-mdm-hub.COMPANY.com
      DNS Name=kafka-b6-apac-prod-gbl-mdm-hub.COMPANY.com

      After receiving the certificate, encode it with base64 and paste into apac-backend/secrets.yaml:
        -> secrets.mdm-kafka-external-listener-cert.listener.key
        -> secrets.mdm-kafka-external-listener-cert.listener.crt 

      Raise a ticket via Request Manager
  5.  (*) Since this is a new environment, remove everything under "migration" key in apac-backend/values.yaml.

  6. Replace all user_passwords in apac/prod/secrets.yaml: for each ●●●●●●●●●●●●●●●●● generate a new, 40-char one and globally replace it in all apac configs.

  7. Go through apac-dev/config_files one by one and adjust settings such as: Reltio, SQS etc.

  8. (*) Change Kafka topic and consumer-group names to fit the naming standards. This is a one-time activity and does not need to be repeated if subsequent environments are built based on the APAC config.
  9. Export the amer-prod CRDs into a YAML file and import them into apac-prod:

    $ kubectx atp-mdmhub-prod-amer
    $ kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml
    $ kubectx atp-mdmhub-prod-apac
    $ kubectl apply -f ~/crd-definitions-amer.yaml
  10. Create config dirs for git2consul (mdm-hub-env-config):

    $ git checkout config/dev_amer
    $ git pull
    $ git branch config/dev_apac
    $ git checkout config/dev_apac
    $ git push origin config/dev_apac

    Repeat for qa and stage.

  11. Install operators:

    $ ./install.sh -l operators -r apac -c prod -e apac-dev -v 3.9.4
  12. Install backend:

    $ ./install.sh -l backend -r apac -c prod -e apac-dev -v 3.9.4
  13. Log into MongoDB. Use a port forward if there is no connection through Kong: run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to Mongo on localhost:27017. Alternatively, retrieve the IP address from the ELB of the kong service, add it to the Windows hosts file under a DNS name (example: ●●●●●●●●●●●● mongo-amer-prod-gbl-mdm-hub.COMPANY.com) and connect to Mongo on mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017.

    Then run the script below:

    db.createCollection("entityHistory")
    db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
    db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
    db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});
    db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});
    db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
    db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
    db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
    db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
    db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
    db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});
    db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});
    db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});
    db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});

    db.createCollection("entityRelations")
    db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});
    db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});
    db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});
    db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});
    db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
    db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
    db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});
    db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});
    db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
    db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
    db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
    db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});

    db.createCollection("LookupValues")
    db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});
    db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});
    db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});
    db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});
    db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});
    db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});

    db.createCollection("ErrorLogs")
    db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});
    db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});
    db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});
    db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});

    db.createCollection("batchEntityProcessStatus")
    db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});
    db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});

    db.createCollection("batchInstance")

    db.createCollection("relationCache")
    db.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});

    db.createCollection("DCRRequests")
    db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});
    db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});
    db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});

    db.createCollection("entityMatchesHistory")
    db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});

    db.createCollection("DCRRegistry")
    db.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});
    db.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});
    db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});

    db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});

    db.createCollection("sequenceCounters")
    db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong(7000000000)}) // NOTE: 7000000000 is APAC-specific

    Region | Seq start number
    amer   | 6000000000
    apac   | 7000000000
    emea   | 5000000000
  14. Log into Kibana. Export dashboards/indices from AMER and import them in APAC.
    Use the following playbook:
    - change values in the ansible repository: inventory/jenkins/group_vars/all/all.yml → #CHNG
    - run the playbook: ansible-playbook install_kibana_objects.yml -i inventory/jenkins/inventory --vault-password-file=../vault -v
  15. Install mdmhub:

    $ ./install.sh -l mdmhub -r apac -c prod -e apac-dev -v 3.9.4
  16. Tickets:
    1. DNS names ticket:

    2. Firewall whitelisting

      Ticket queue: GBL-NETWORK ECS

      Title: Firewall exceptions for new BoldMoves PDKS cluster


      Description:

      Hi Team,

      Please open all traffic listed in the attached Excel sheet.
      In case this is not the queue where I should request Firewall changes, kindly point me in the right direction.

      Best Regards,
      Piotr
      MDM Hub

      Attached excel:

      Source | Source IP | Destination | Destination IP | Port

      MDM Hub monitoring (euw1z1pl046.COMPANY.com), CI/CD server (sonar-gbicomcloud.COMPANY.com) | 10.90.98.0/24 | pdcs-apa1p.COMPANY.com | - | 443

      MDM Hub monitoring (euw1z1pl046.COMPANY.com), CI/CD server (sonar-gbicomcloud.COMPANY.com), EMEA prod MDM Hub | 10.90.98.0/24 | APAC prod - PDKS cluster | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | 443, 9094

      Global prod MDM Hub | 10.90.96.0/24 | APAC prod - PDKS cluster | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | 443

      APAC prod - PDKS cluster | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | Global prod MDM Hub | 10.90.96.0/24 | 8443

      APAC prod - PDKS cluster | ●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●● | EMEA prod MDM Hub | 10.90.98.0/24 | 8443
  17. Integration tests:
    In mdm-hub-env-config prepare inventory/kube_dev_apac (copy kube_dev_amer and adjust variables), then run the "prepare_int_tests" playbook:

    $ ansible-playbook prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"


    In mdm-hub-inbound-services confirm that the test resources (citrus properties) for mdm-integration-tests have been replaced, then run two Gradle tasks:
    - mdm-gateway/mdm-integration-tests/Tasks/verification/commonIntegrationTests
    - mdm-gateway/mdm-integration-tests/Tasks/verification/integrationTestsForCOMPANYModel


" }, { "title": "Configuration (emea)", "pageID": "218444982", "pageLink": "/pages/viewpage.action?pageId=218444982", "content": "


Setup Mongo Indexes and Collections:

EntityHistory


db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});


DCR Service 2 Indexes:

db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});

db.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});
db.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});
db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});



" }, { "title": "Configuration (gblus prod)", "pageID": "164470081", "pageLink": "/pages/viewpage.action?pageId=164470081", "content": "

Config file: gblmdm-hub-us-spec_v05.xlsx

AWS Resources

Each resource below is described by: Resource Name, Resource Type, Specification, AWS Region / Availability Zone, Depends on, Description, Components, and the HUB / GW interface it serves.
GBL MDM US HUB Prod Data Svr1 - amraelp00007844
  Type: EC2 r5.2xlarge, us-east-1b
  Depends on: EBS APP DATA MDM PROD SVR1, EBS DOCKER DATA MDM PROD SVR1
  Description:
  - Mongo - data redundancy and high availability:
    primary, secondary and tertiary need to be hosted on separate servers and zones - high availability if one zone is offline
  - Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 750GB - /app/ - docker applications local storage
  - OS: Red Hat Enterprise Linux Server release 7.4
  Components: mongo, EFK
  HUB / GW: -
  Interface: DATA

GBL MDM US HUB Prod Data Svr2 - amraelp00007870
  Type: EC2 r5.2xlarge, us-east-1e
  Depends on: EBS APP DATA MDM PROD SVR2, EBS DOCKER DATA MDM PROD SVR2
  Description:
  - Mongo - data redundancy and high availability:
    primary, secondary and tertiary need to be hosted on separate servers and zones - high availability if one zone is offline
  - Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 750GB - /app/ - docker applications local storage
  - OS: Red Hat Enterprise Linux Server release 7.4
  Components: mongo, EFK
  HUB / GW: -
  Interface: DATA

GBL MDM US HUB Prod Data Svr3 - amraelp00007847
  Type: EC2 r5.2xlarge, us-east-1b
  Depends on: EBS APP DATA MDM PROD SVR3, EBS DOCKER DATA MDM PROD SVR3
  Description:
  - Mongo - data redundancy and high availability:
    primary, secondary and tertiary need to be hosted on separate servers and zones - high availability if one zone is offline
  - Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 750GB - /app/ - docker applications local storage
  - OS: Red Hat Enterprise Linux Server release 7.4
  Components: mongo, EFK
  HUB / GW: -
  Interface: DATA

GBL MDM US HUB Prod Svc Svr1 - amraelp00007848
  Type: EC2 r5.2xlarge, us-east-1b
  Depends on: EBS APP SVC MDM PROD SVR1, EBS DOCKER SVC MDM PROD SVR1
  Description:
  - Kafka and Zookeeper
  - Kong and Cassandra:
    Cassandra replication factor set to 3 - Kong proxy high availability
    Load balancer for Kong API
  - Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 450GB - /app/ - docker applications local storage
  - OS: Red Hat Enterprise Linux Server release 7.4
  Components: Kafka, Zookeeper, Kong, Cassandra
  HUB / GW: HUB, GW
  Interface: inbound, outbound

GBL MDM US HUB Prod Svc Svr2 - amraelp00007849
  Type: EC2 r5.2xlarge, us-east-1b
  Depends on: EBS APP SVC MDM PROD SVR2, EBS DOCKER SVC MDM PROD SVR2
  Description:
  - Kafka and Zookeeper
  - Kong and Cassandra:
    Cassandra replication factor set to 3 - Kong proxy high availability
    Load balancer for Kong API
  - Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 450GB - /app/ - docker applications local storage
  - OS: Red Hat Enterprise Linux Server release 7.4
  Components: Kafka, Zookeeper, Kong, Cassandra
  HUB / GW: HUB, GW
  Interface: inbound, outbound

GBL MDM US HUB Prod Svc Svr3 - amraelp00007871
  Type: EC2 r5.2xlarge, us-east-1e
  Depends on: EBS APP SVC MDM PROD SVR3, EBS DOCKER SVC MDM PROD SVR3
  Description:
  - Kafka and Zookeeper
  - Kong and Cassandra:
    Cassandra replication factor set to 3 - Kong proxy high availability
    Load balancer for Kong API
  - Disks:
    Mount 50G - /var/lib/docker/ - docker installation directory
    Mount 450GB - /app/ - docker applications local storage
  - OS: Red Hat Enterprise Linux Server release 7.4
  Components: Kafka, Zookeeper, Kong, Cassandra
  HUB / GW: HUB, GW
  Interface: inbound, outbound
EBS APP DATA MDM Prod Svr1 | EBS | 750 GB XFS | us-east-1b | mount to /app on GBL MDM US HUB Prod Data Svr1 - amraelp00007844
EBS APP DATA MDM Prod Svr2 | EBS | 750 GB XFS | us-east-1e | mount to /app on GBL MDM US HUB Prod Data Svr2 - amraelp00007870
EBS APP DATA MDM Prod Svr3 | EBS | 750 GB XFS | us-east-1b | mount to /app on GBL MDM US HUB Prod Data Svr3 - amraelp00007847
EBS DOCKER DATA MDM Prod Svr1 | EBS | 50 GB XFS | us-east-1b | mount to docker devicemapper on GBL MDM US HUB Prod Data Svr1 - amraelp00007844
EBS DOCKER DATA MDM Prod Svr2 | EBS | 50 GB XFS | us-east-1e | mount to docker devicemapper on GBL MDM US HUB Prod Data Svr2 - amraelp00007870
EBS DOCKER DATA MDM Prod Svr3 | EBS | 50 GB XFS | us-east-1b | mount to docker devicemapper on GBL MDM US HUB Prod Data Svr3 - amraelp00007847
EBS APP SVC MDM Prod Svr1 | EBS | 450 GB XFS | us-east-1b | mount to /app on GBL MDM US HUB Prod Svc Svr1 - amraelp00007848
EBS APP SVC MDM Prod Svr2 | EBS | 450 GB XFS | us-east-1b | mount to /app on GBL MDM US HUB Prod Svc Svr2 - amraelp00007849
EBS APP SVC MDM Prod Svr3 | EBS | 450 GB XFS | us-east-1e | mount to /app on GBL MDM US HUB Prod Svc Svr3 - amraelp00007871
EBS DOCKER SVC MDM Prod Svr1 | EBS | 50 GB XFS | us-east-1b | mount to docker devicemapper on GBL MDM US HUB Prod Svc Svr1 - amraelp00007848
EBS DOCKER SVC MDM Prod Svr2 | EBS | 50 GB XFS | us-east-1b | mount to docker devicemapper on GBL MDM US HUB Prod Svc Svr2 - amraelp00007849
EBS DOCKER SVC MDM Prod Svr3 | EBS | 50 GB XFS | us-east-1e | mount to docker devicemapper on GBL MDM US HUB Prod Svc Svr3 - amraelp00007871

GBLMDMHUB US S3 Bucket (gblmdmhubprodamrasp101478) | S3 | us-east-1 | -

Load Balancer | ELB
  Name: PFE-CLB-ATP-MDMHUB-US-PROD-001
  Attached servers: GBL MDM US HUB Prod Svc Svr1, Svr2, Svr3
  MAP 443 -> 8443 (only HTTPS) - SSL offloading on Kong
  Domain: gbl-mdm-hub-us-prod.COMPANY.com
  DNS Name: internal-PFE-CLB-ATP-MDMHUB-US-PROD-001-146249044.us-east-1.elb.amazonaws.com




SSL cert for domain gbl-mdm-hub-us-prod.COMPANY.com | Certificate | Domain: gbl-mdm-hub-us-prod.COMPANY.com

DNS Record | DNS | gbl-mdm-hub-us-prod.COMPANY.com -> Load Balancer







Roles

UNIX-universal-awscbsdev-mdmhub-us-prod-computers-U (Unix Computer ROLE)
  Privileges: access to hosts:
    GBL MDM US HUB Prod Data Svr1
    GBL MDM US HUB Prod Data Svr2
    GBL MDM US HUB Prod Data Svr3
    GBL MDM US HUB Prod Svc Svr1
    GBL MDM US HUB Prod Svc Svr2
    GBL MDM US HUB Prod Svc Svr3
  Description: computer role including all MDM servers

UNIX-GBLMDMHUB-US-PROD-ADMIN (User Role)
  Privileges:
    - dzdo root
    - access to docker
    - access to docker-engine (systemctl) - restart, stop, start docker engine
  Member of: UNIX-GBLMDMHUB-US-PROD-U
  Description: admin role to manage all resources on the servers
  Request IDs:
    KUCR - 20200519090759337
    WARECP - 20200519083956229
    GENDEL - 20200519094636480
    MORAWM03 - 20200519084328245
    PIASEM - 20200519095309490

UNIX-GBLMDMHUB-US-PROD-HUBROLE (User Role)
  Privileges:
    - read only for logs
    - dzdo docker ps * - list docker containers
    - dzdo docker logs * - check docker container logs
    - read access to /app/* - check docker container logs
  Member of: UNIX-GBLMDMHUB-US-PROD-U
  Description: role without root access, read only for logs and checking docker status; used by monitoring

UNIX-GBLMDMHUB-US-PROD-SEROLE (User Role)
  Privileges:
    - dzdo docker *
  Member of: UNIX-GBLMDMHUB-US-PROD-U
  Description: service role - used to run microservices from the Jenkins CD pipeline
  Provided access:
    Service Account - GBL32452299i
    mdmuspr mdmhubuspr - 20200519095543524

UNIX-GBLMDMHUB-US-PROD-U (User Role)
  Privileges:
    - read only for logs
    - read access to /app/* - check docker container logs
  Member of: UNIX-GBLMDMHUB-US-PROD-U


Ports - Security Group 

PFE-SG-GBLMDMHUB-US-APP-PROD-001

 

Port | Application | Whitelisted
8443 | Kong (API proxy) | ALL from COMPANY VPN
7000 | Cassandra (Kong DB) - inter-node communication | ALL from COMPANY VPN
7001 | Cassandra (Kong DB) - inter-node communication | ALL from COMPANY VPN
9042 | Cassandra (Kong DB) - client port | ALL from COMPANY VPN
9094 | Kafka - SASL_SSL protocol | ALL from COMPANY VPN
9093 | Kafka - SSL protocol | ALL from COMPANY VPN
9092 | Kafka - inter-broker communication | ALL from COMPANY VPN
2181 | Zookeeper | ALL from COMPANY VPN
2888 | Zookeeper - intercommunication | ALL from COMPANY VPN
3888 | Zookeeper - intercommunication | ALL from COMPANY VPN
27017 | Mongo | ALL from COMPANY VPN
9999 | HawtIO - administration console | ALL from COMPANY VPN
9200 | Elasticsearch | ALL from COMPANY VPN
9300 | Elasticsearch TCP - cluster communication port | ALL from COMPANY VPN
5601 | Kibana | ALL from COMPANY VPN
9100 - 9125 | Prometheus exporters | ALL from COMPANY VPN
9542 | Kong exporter | ALL from COMPANY VPN
2376 | Docker encrypted communication with the daemon | ALL from COMPANY VPN

Documentation

Service Account ( Jenkins / server access )
http://btondemand.COMPANY.com/solution/160303162657677

NSA - UNIX
- user access to Servers:
http://btondemand.COMPANY.com/solution/131014104610578


Instructions


How to add user access to UNIX-GBLMDMHUB-US-PROD-ADMIN


How to add/create new Service Account with access to UNIX-GBLMDMHUB-US-PROD-SEROLE


Service Account Name | UNIX group name | Details | BTOnDemand | Lessons Learned
mdmuspr | mdmhubuspr | Service Account Name has to contain max 8 characters | GBL32452299i |




How to open ports / create new Security Group - PFE-SG-GBLMDMHUB-US-APP-PROD-001

http://btondemand.COMPANY.com/solution/120906165824277

To create a new security group:

Create server Security Group and Open Ports on  SC queue Name: GBL-BTI-IOD AWS FULL SUPPORT

log in to http://btondemand.COMPANY.com/ go to Get Support 

Search for queue: GBL-BTI-IOD AWS FULL SUPPORT

Submit Request to this queue:

Request

Hi Team,
Could you please create a new security group and assign it with these servers.

GBL MDM US HUB Prod Data Svr1 - amraelp00007844.COMPANY.com
GBL MDM US HUB Prod Data Svr2 - amraelp00007870.COMPANY.com
GBL MDM US HUB Prod Data Svr3 - amraelp00007847.COMPANY.com
GBL MDM US HUB Prod Svc Svr1 - amraelp00007848.COMPANY.com
GBL MDM US HUB Prod Svc Svr2 - amraelp00007849.COMPANY.com
GBL MDM US HUB Prod Svc Svr3 - amraelp00007871.COMPANY.com


Please add the following owners:
Primary: VARGAA08
Secondary: TIRUMS05
(please let me know if approval is required)


New Security group Requested: PFE-SG-GBLMDMHUB-US-APP-PROD-001

Please Open the following ports:


Port Application Whitelisted

8443 Kong (API proxy) ALL from COMPANY VPN
7000 Cassandra (Kong DB) - inter-node communication ALL from COMPANY VPN
7001 Cassandra (Kong DB) - inter-node communication ALL from COMPANY VPN
9042 Cassandra (Kong DB) - client port ALL from COMPANY VPN
9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN
9093 Kafka - SSL protocol ALL from COMPANY VPN
9092 KAFKA - Inter-broker communication ALL from COMPANY VPN
2181 Zookeeper ALL from COMPANY VPN
2888 Zookeeper - intercommunication ALL from COMPANY VPN
3888 Zookeeper - intercommunication ALL from COMPANY VPN
27017 Mongo ALL from COMPANY VPN
9999 HawtIO - administration console ALL from COMPANY VPN
9200 Elasticsearch ALL from COMPANY VPN
9300 Elasticsearch TCP - cluster communication port ALL from COMPANY VPN
5601 Kibana ALL from COMPANY VPN
9100 - 9125 Prometheus exporters ALL from COMPANY VPN
9542 Kong exporter ALL from COMPANY VPN
2376 Docker encrypted communication with the daemon ALL from COMPANY VPN


Apply this group to the following servers:
amraelp00007844
amraelp00007870
amraelp00007847
amraelp00007848
amraelp00007849
amraelp00007871

Regards,
Mikolaj


This will create a new Security Group

http://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32141041i

Then these security groups have to be assigned to the servers through the IOD portal by the server owner.

To open new ports:

log in to http://btondemand.COMPANY.com/ go to Get Support 

Search for queue: GBL-BTI-IOD AWS FULL SUPPORT

Submit Request to this queue:

Request

Hi,
Could you please modify the below security group and open the following port.

PROD security group:
Security group: PFE-SG-GBLMDMHUB-US-APP-PROD-001
Port: 2376
(this port is related to Docker for encrypted communication with the daemon)

The host related to this:
amraelp00007844
amraelp00007870
amraelp00007847
amraelp00007848
amraelp00007849
amraelp00007871

Regards,
Mikolaj


Certificates Configuration

Kafka 

GO TO:How to Generate JKS Keystore and Truststore

keytool -genkeypair -alias kafka.gbl-mdm-hub-us-prod.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=mdm_gbl_us_hub, C=US"
keytool -certreq -alias kafka.gbl-mdm-hub-us-prod.COMPANY.com -file kafka.gbl-mdm-hub-us-prod.COMPANY.com.csr -keystore server.keystore.jks

SAN:

gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007848.COMPANY.com
●●●●●●●●●●●●●●
amraelp00007849.COMPANY.com
●●●●●●●●●●●●●
amraelp00007871.COMPANY.com
●●●●●●●●●●●●●●


Create guest_user for KAFKA - "CN=kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US":

GO TO: How to Generate JKS Keystore and Truststore

keytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_user.keystore.jks -dname "CN=kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US"
keytool -certreq -alias guest_user -file kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com.csr -keystore guest_user.keystore.jks
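
To double-check a keystore's content before sending the CSR (a sketch; keytool prompts for the keystore password):

keytool -list -v -keystore guest_user.keystore.jks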

Kong

openssl req -nodes -newkey rsa:2048 -sha256 -keyout gbl-mdm-hub-us-prod.key -out gbl-mdm-hub-us-prod.csr

Subject Alternative Names

gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007848.COMPANY.com
●●●●●●●●●●●●●●
amraelp00007849.COMPANY.com
●●●●●●●●●●●●●
amraelp00007871.COMPANY.com
●●●●●●●●●●●●●●


EFK

PROD_GBL_US

openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-log-management-gbl-us-prod.key -out mdm-log-management-gbl-us-prod.csr
mdm-log-management-gbl-us-prod.COMPANY.com

Subject Alternative Names
mdm-log-management-gbl-us-prod.COMPANY.com
gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007844.COMPANY.com
●●●●●●●●●●●●●●
amraelp00007870.COMPANY.com
●●●●●●●●●●●●●●
amraelp00007847.COMPANY.com
●●●●●●●●●●●●●


esnode1
openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode1-gbl-us-prod.key -out mdm-esnode1-gbl-us-prod.csr
mdm-esnode1-gbl-us-prod.COMPANY.com - Elasticsearch esnode1

Subject Alternative Names
mdm-esnode1-gbl-us-prod.COMPANY.com
gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007844.COMPANY.com
●●●●●●●●●●●●●●

esnode2
openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode2-gbl-us-prod.key -out mdm-esnode2-gbl-us-prod.csr
mdm-esnode2-gbl-us-prod.COMPANY.com - Elasticsearch esnode2

Subject Alternative Names
mdm-esnode2-gbl-us-prod.COMPANY.com
gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007870.COMPANY.com
●●●●●●●●●●●●●●

esnode3
openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode3-gbl-us-prod.key -out mdm-esnode3-gbl-us-prod.csr
mdm-esnode3-gbl-us-prod.COMPANY.com - Elasticsearch esnode3

Subject Alternative Names
mdm-esnode3-gbl-us-prod.COMPANY.com
gbl-mdm-hub-us-prod.COMPANY.com
amraelp00007847.COMPANY.com
●●●●●●●●●●●●●


Domain Configuration:

Example request: GBL30514754i "Register domains mdm-log-management*"


  1. log in to http://btondemand.COMPANY.com/getsupport
  2. What can we help you with? - Search for "Network Team Ticket"
  3. Select the most relevant topic - "DNS Request"
  4. Submit a ticket to this queue.
  5. Ticket Details: - GBL32508266i

Request

Hi,
Could you please register the following domains:

ADD the below DNS entry:
========================
mdm-log-management-gbl-us-prod.COMPANY.com              Alias Record to                             amraelp00007847.COMPANY.com[●●●●●●●●●●●●●]


Kind regards,
Mikolaj

Request DNS

Hi,
Could you please register the following domains:

ADD the below DNS entry for the ELB: PFE-CLB-ATP-MDMHUB-US-PROD-001:

========================
gbl-mdm-hub-us-prod.COMPANY.com              Alias Record to                             DNS Name : internal-PFE-CLB-ATP-MDMHUB-US-PROD-001-146249044.us-east-1.elb.amazonaws.com


Referenced ELB creation ticket: GBL32561307i


Kind regards,
Mikolaj




Environment Installation


DISC:

server1 amraelp00007844
    APP DISC: nvme1n1
   DOCKER DISC: nvme2n1

server2 amraelp00007870
   APP DISC: nvme2n1
   DOCKER DISC: nvme1n1

server3 amraelp00007847
   APP DISC: nvme2n1
   DOCKER DISC: nvme1n1

server4 amraelp00007848
   APP1 DISC: nvme2n1
   APP2 DISC: nvme3n1
   DOCKER DISC: nvme1n1

server5 amraelp00007849
   APP1 DISC: nvme2n1
   APP2 DISC: nvme3n1
   DOCKER DISC: nvme1n1

server6 amraelp00007871
   APP1 DISC: nvme2n1
   APP2 DISC: nvme3n1
   DOCKER DISC: nvme1n1

Pre:

umount /var/lib/docker
lvremove /dev/datavg/varlibdocker
vgreduce datavg /dev/nvme1n1
vi /etc/fstab and remove the line:
/dev/mapper/datavg-varlibdocker /var/lib/docker ext4 defaults 1 2


rmdir /var/lib/docker
mkdir /app/docker
ln -s /app/docker /var/lib/docker


Start the docker service after the prepare_env_airflow_certs playbook run is completed.
Clear the content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" so that the daemon.json file is used.


Ansible:

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007844.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007870.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007847.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007848.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007849.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●

ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-file
ansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-file

CN_NAME=amraelp00007871.COMPANY.com
SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●



Docker Version:

amraelp00007844:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007870:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007847:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007848:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007849:root:[04:57 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007871:root:[05:00 AM]:/home/morawm03> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1


Configure Registry Login (registry-gbicomcloud.COMPANY.com):

ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-file

Registry (manual config):
  Copy certs: /etc/docker/certs.d/registry-gbicomcloud.COMPANY.com/ from (mdm-reltio-handler-env\ssl_certs\registry)
  docker login registry-gbicomcloud.COMPANY.com (log in on the service account too)
  user/pass: mdm/**** (check mdm-reltio-handler-env\group_vars\all\secret.yml)




Playbooks installation order:

Install node_exporter (run as a user with root access - systemctl node_exporter installation):
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file

Install Kafka
ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file

Install Kafka TOPICS:
ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --limit kafka1 --vault-password-file=~/vault-password-file

Install Mongo
ansible-playbook install_hub_mongo_rs_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file


Install Kong
ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file


Update KONG Config
ansible-playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file
Verification:
openssl s_client -connect amraelp00007848.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer
openssl s_client -connect amraelp00007849.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer
openssl s_client -connect amraelp00007871.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer


Install EFK
ansible-playbook install_efk_stack.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file

Install Prometheus services:
mongo_exporter:
ansible-playbook install_prometheus_mongo_exporter.yml -i inventory/prod_gblus/inventory --limit mongo3_exporter --vault-password-file=~/vault-password-file
cadvisor:
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file
sqs_exporter:
ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file

Install Consul
ansible-playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file
# After the installation, get the SecretID from the consul container. On the container execute the following command
# (one way to run it inside the container is sketched below):

$ consul acl bootstrap

Copy the returned SecretID as mgmt_token into the consul secrets.yml.

After the install consul step, run the update consul playbook with the proper mgmt_token (secret.yml) for each node.
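
One way to run the bootstrap (a sketch; "consul" is an assumed container name - check the actual name with "docker ps"):

$ docker exec -it consul consul acl bootstrap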

Update Consul
ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v
ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul2 --vault-password-file=~/vault-password-file -v
ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul3 --vault-password-file=~/vault-password-file -v

Setup Mongo Indexes and Collections:

Create Collections and Indexes:

    entityHistory
        db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
        db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
        db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});
        db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});
        db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
        db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
        db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
        db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
        db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
        db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});
        db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});
        db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});

    entityRelations
        db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});
        db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});
        db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});
        db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});
        db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
        db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
        db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});
        db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});
        db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
        db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
        db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
        db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});

    LookupValues
        db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});
        db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});
        db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});
        db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});
        db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});
        db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});

    ErrorLogs
        db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});
        db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});
        db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});
        db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});

    batchEntityProcessStatus
        db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});

    batchInstance
        - create collection

    relationCache
        db.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});

    DCRRequests
        db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});
        db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});
        db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});

    entityMatchesHistory
        db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});



Connect ENV with Prometheus:

Prometheus config
node_exporter
       - targets:
          - "amraelp00007844.COMPANY.com:9100"
          - "amraelp00007870.COMPANY.com:9100"
          - "amraelp00007847.COMPANY.com:9100"
          - "amraelp00007848.COMPANY.com:9100"
          - "amraelp00007849.COMPANY.com:9100"
          - "amraelp00007871.COMPANY.com:9100"
         labels:
            env: gblus_prod
            component: node


kafka
       - targets:
          - "amraelp00007848.COMPANY.com:9101"
         labels:
            env: gblus_prod
            node: 1
            component: kafka
       - targets:
          - "amraelp00007849.COMPANY.com:9101"
         labels:
            env: gblus_prod
            node: 2
            component: kafka
       - targets:
          - "amraelp00007871.COMPANY.com:9101"
         labels:
            env: gblus_prod
            node: 3
            component: kafka


kafka_exporter
       - targets:
          - "amraelp00007848.COMPANY.com:9102"
         labels:
            trade: gblus
            node: 1
            component: kafka
            env: gblus_prod
       - targets:
          - "amraelp00007849.COMPANY.com:9102"
         labels:
            trade: gblus
            node: 2
            component: kafka
            env: gblus_prod
       - targets:
          - "amraelp00007871.COMPANY.com:9102"
         labels:
            trade: gblus
            node: 3
            component: kafka
            env: gblus_prod


Components:
    jmx_manager
       - targets:
          - "amraelp00007848.COMPANY.com:9104"
         labels:
            env: gblus_prod
            node: 1
            component: manager
       - targets:
          - "amraelp00007849.COMPANY.com:9104"
         labels:
            env: gblus_prod
            node: 2
            component: manager
       - targets:
          - "amraelp00007871.COMPANY.com:9104"
         labels:
            env: gblus_prod
            node: 3
            component: manager

    jmx_event_publisher
       - targets:
          - "amraelp00007848.COMPANY.com:9106"
         labels:
            env: gblus_prod
            node: 1
            component: publisher
       - targets:
          - "amraelp00007849.COMPANY.com:9106"
         labels:
            env: gblus_prod
            node: 2
            component: publisher
       - targets:
          - "amraelp00007871.COMPANY.com:9106"
         labels:
            env: gblus_prod
            node: 3
            component: publisher

    jmx_reltio_subscriber
       - targets:
          - "amraelp00007848.COMPANY.com:9105"
         labels:
            env: gblus_prod
            node: 1
            component: subscriber
       - targets:
          - "amraelp00007849.COMPANY.com:9105"
         labels:
            env: gblus_prod
            node: 2
            component: subscriber
       - targets:
          - "amraelp00007871.COMPANY.com:9105"
         labels:
            env: gblus_prod
            node: 3
            component: subscriber

    jmx_batch_service
      - targets:
          - "amraelp00007848.COMPANY.com:9107"
        labels:
          env: gblus_prod
          node: 1
          component: batch_service
      - targets:
          - "amraelp00007849.COMPANY.com:9107"
        labels:
          env: gblus_prod
          node: 2
          component: batch_service
      - targets:
          - "amraelp00007871.COMPANY.com:9107"
        labels:
          env: gblus_prod
          node: 3
          component: batch_service

    batch_service_actuator
      - targets:
          - "amraelp00007848.COMPANY.com:9116"
        labels:
          env: gblus_prod
          node: 1
          component: batch_service
      - targets:
          - "amraelp00007849.COMPANY.com:9116"
        labels:
          env: gblus_prod
          node: 2
          component: batch_service
      - targets:
          - "amraelp00007871.COMPANY.com:9116"
        labels:
          env: gblus_prod
          node: 3
          component: batch_service


sqs_exporter
       - targets:
          - "amraelp00007871.COMPANY.com:9122"
         labels:
            env: gblus_prod
            component: sqs_exporter


cadvisor
       - targets:
          - "amraelp00007844.COMPANY.com:9103"
         labels:
            env: gblus_prod
            node: 1
            component: cadvisor_exporter
       - targets:
          - "amraelp00007870.COMPANY.com:9103"
         labels:
            env: gblus_prod
            node: 2
            component: cadvisor_exporter
       - targets:
          - "amraelp00007847.COMPANY.com:9103"
         labels:
            env: gblus_prod
            node: 3
            component: cadvisor_exporter
       - targets:
          - "amraelp00007848.COMPANY.com:9103"
         labels:
            env: gblus_prod
            node: 4
            component: cadvisor_exporter
       - targets:
          - "amraelp00007849.COMPANY.com:9103"
         labels:
            env: gblus_prod
            node: 5
            component: cadvisor_exporter
       - targets:
          - "amraelp00007871.COMPANY.com:9103"
         labels:
            env: gblus_prod
            node: 6
            component: cadvisor_exporter


mongodb_exporter
      - targets:
          - "amraelp00007847.COMPANY.com:9120"
        labels:
          env: gblus_prod
          component: mongodb_exporter


kong_exporter
       - targets:
          - "amraelp00007848.COMPANY.com:9542"
         labels:
            env: gblus_prod
            node: 1
            component: kong_exporter
       - targets:
          - "amraelp00007849.COMPANY.com:9542"
         labels:
            env: gblus_prod
            node: 2
            component: kong_exporter
       - targets:
          - "amraelp00007871.COMPANY.com:9542"
         labels:
            env: gblus_prod
            node: 3
            component: kong_exporter
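A quick way to confirm that the scrape targets above are alive is to hit their /metrics endpoints directly (a sketch using one of the prod hosts listed above; any target/port pair from the config works the same way):

    for port in 9100 9101 9102 9103; do
      echo "== port ${port} =="
      # every Prometheus exporter serves plain-text metrics over HTTP
      curl -s --max-time 5 "http://amraelp00007848.COMPANY.com:${port}/metrics" | head -n 3
    done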







" }, { "title": "Configuration (gblus)", "pageID": "164470073", "pageLink": "/pages/viewpage.action?pageId=164470073", "content": "

Config file: gblmdm-hub-us-spec_v04.xlsx

AWS Resources

Resource Name
Resource Type
Specification
AWS Region
AWS Availability Zone
Depends on
Description
Components
HUB
GW
Interface

GBL MDM US HUB nProd Svr1 amraelp00007334

PFE-AWS-MULTI-AZ-DEV-us-east-1

EC2 | r5.2xlarge | us-east-1b

EBS APP DATA MDM NPROD SVR1


EBS DOCKER DATA MDM NPROD SVR1

- Mongo -  no data redundancy for nProd

- Disks:
    Mount 50 GB - docker installation directory
    Mount 1000 GB - /app/ - docker applications local storage


OS: Red Hat Enterprise Linux Server release 7.3 (Maipo)

mongo
EFK
HUBoutbound

GBL MDM US HUB nProd Svr2 amraelp00007335

PFE-AWS-MULTI-AZ-DEV-us-east-1

EC2 | r5.2xlarge | us-east-1b

EBS APP DATA MDM NPROD SVR2


EBS DOCKER DATA MDM NPROD SVR2

- Kafka and zookeeper
- Kong and Cassandra
- Disks:
    Mount 50 GB - docker installation directory
    Mount 500 GB - /app/ - docker applications local storage


OS: Red Hat Enterprise Linux Server release 7.3 (Maipo)

Kafka
Zookeeper
Kong
Cassandra
GWinbound
EBS APP DATA MDM nProd Svr1 | EBS | 1000 GB XFS | us-east-1b
mount to /app on amraelp00007334


EBS APP DATA MDM nProd Svr2 | EBS | 500 GB XFS | us-east-1b
mount to /app on amraelp00007335


EBS DOCKER DATA MDM nProd Svr1 | EBS | 50 GB XFS | us-east-1b
mount to docker devicemapper on amraelp00007334


EBS DOCKER DATA MDM nProd Svr2 | EBS | 50 GB XFS | us-east-1b
mount to docker devicemapper on amraelp00007335


GBLMDMHUB US S3 Bucket
gblmdmhubnprodamrasp100762
S3
us-east-1





SSL cert for domain gbl-mdm-hub-us-nprod.COMPANY.com | Certificate | Domain: gbl-mdm-hub-us-nprod.COMPANY.com






DNS Record | DNS | Address: gbl-mdm-hub-us-nprod.COMPANY.com







Roles

Name
Type
Privileges
Member of
Description
Requests ID | Provided access
UNIX-IoD-global-mdmhub-us-nprod-computers-U | Unix Computer ROLE | Access to hosts:
GBL MDM US HUB nProd Svr1
GBL MDM US HUB nProd Svr2

Computer role including all MDM servers

UNIX-GBLMDMHUB-US-NPROD-ADMIN-U | User Role | - dzdo root
- access to docker
- access to docker-engine (systemctl) – restart, stop, start docker engine
Member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U | Admin role to manage all resources on servers | NSA-UNIX: 20200303065003900

KUCR - GBL32099554i

WARECP - 

GENDEL - GBL32134727i

MORAWM03 - GBL32097468i

UNIX-GBLMDMHUB-US-NPROD-HUBROLE-U | User Role | - Read only for logs
- dzdo docker ps * - list docker containers
- dzdo docker logs * - check docker container logs
- Read access to /app/* - check docker container logs
Member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U | role without root access, read-only for logs and checking docker status. It will be used by monitoring | NSA-UNIX: 20200303065731900
UNIX-GBLMDMHUB-US-NPROD-SEROLE-U | User Role
- dzdo docker *
Member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U | service role - it will be used to run microservices from the Jenkins CD pipeline | NSA-UNIX: 20200303070216948

Service Account - GBL32099918i

mdmusnpr

UNIX-GBLMDMHUB-US-NPROD-READONLY | User Role | - Read only for logs
- Read access to /app/* - check docker container logs
Member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U
NSA-UNIX: 20200303070544951


Ports - Security Group 

PFE-SG-GBLMDMHUB-US-APP-NPROD-001

 

Port | Application | Whitelisted
8443 | Kong (API proxy) | ALL from COMPANY VPN
9094 | Kafka - SASL_SSL protocol | ALL from COMPANY VPN
9093 | Kafka - SSL protocol | ALL from COMPANY VPN
2181 | Zookeeper | ALL from COMPANY VPN
27017 | Mongo | ALL from COMPANY VPN
9999 | HawtIO - administration console | ALL from COMPANY VPN
9200 | Elasticsearch | ALL from COMPANY VPN
5601 | Kibana | ALL from COMPANY VPN
9100 - 9125 | Prometheus exporters | ALL from COMPANY VPN
9542 | Kong exporter | ALL from COMPANY VPN
2376 | Docker encrypted communication with the daemon | ALL from COMPANY VPN

Open ports between Jenkins and Airflow

Request to Przemek.Puchajda@COMPANY.com and Mateusz.Szewczyk@COMPANY.com - this is required to open ports between WBS<>IOD blocked traffic (the requests take some time to complete, so raise them at the beginning)

  1. A connection is required from euw1z1dl039.COMPANY.com (●●●●●●●●●●●●●)

                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 2376. This connection is between airflow and docker host to run gblus DAGs.

                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 22. This connection is between airflow and docker host to run gblus DAGs.

      2. A connection is required from the Jenkins instance (gbinexuscd01 - ●●●●●●●●●●●●●).

                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 22. This connection is between Jenkins and the target host required for code deployment purposes.
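Once the firewall requests are completed, connectivity can be verified from the source hosts with a simple TCP check (a sketch; run from euw1z1dl039 and the Jenkins instance respectively):

    nc -vz amraelp00008810.COMPANY.com 2376   # airflow -> docker host (docker TLS port)
    nc -vz amraelp00008810.COMPANY.com 22     # airflow / Jenkins -> ssh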


Documentation

Service Account ( Jenkins / server access )
http://btondemand.COMPANY.com/solution/160303162657677

NSA - UNIX
- user access to Servers:
http://btondemand.COMPANY.com/solution/131014104610578


Instructions


How to add user access to UNIX-GBLMDMHUB-US-NPROD-ADMIN-U


How to add/create new Service Account with access to UNIX-GBLMDMHUB-US-NPROD-SEROLE-U


Service Account Name | UNIX group name | details | BTOnDemand | Lessons Learned
mdmusnpr | mdmhubusnpr | Service Account Name has to contain max 8 characters | GBL32099918i




How to open ports / create new Security Group - PFE-SG-GBLMDMHUB-US-APP-NPROD-001

http://btondemand.COMPANY.com/solution/120906165824277

To create a new security group:

Create server Security Group and Open Ports on SC queue name: GBL-BTI-IOD AWS FULL SUPPORT

log in to http://btondemand.COMPANY.com/ and go to Get Support

Search for queue: GBL-BTI-IOD AWS FULL SUPPORT

Submit Request to this queue:

Request

Hi Team,
Could you please create a new security group and assign it to two servers.

GBL MDM US HUB nProd Svr1 (amraelp00007334) - PFE-AWS-MULTI-AZ-DEV-us-east-1
and
GBL MDM US HUB nProd Svr2 (amraelp00007335) - PFE-AWS-MULTI-AZ-DEV-us-east-1


Please add the following owners:
Primary: VARGAA08
Secondary: TIRUMS05
(please let me know if approval is required)


New Security group Requested: PFE-SG-GBLMDMHUB-US-APP-NPROD-001

Please Open the following ports:
Port  Application Whitelisted
8443 Kong (API proxy) ALL from COMPANY VPN
9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN
9093 Kafka - SSL protocol ALL from COMPANY VPN
2181 Zookeeper ALL from COMPANY VPN
27017 Mongo ALL from COMPANY VPN
9999 HawtIO - administration console ALL from COMPANY VPN
9200 Elasticsearch ALL from COMPANY VPN
5601 Kibana ALL from COMPANY VPN
9100 - 9125 Prometheus exporters ALL from COMPANY VPN


Apply this group to the following servers:
amraelp00007334
amraelp00007335

Regards,
Mikolaj


This will create a new Security Group

http://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32141041i

Then these security groups have to be assigned to the servers through the IOD portal by the Server Owner.

To open new ports:

log in to http://btondemand.COMPANY.com/ and go to Get Support

Search for queue: GBL-BTI-IOD AWS FULL SUPPORT

Submit Request to this queue:

Request

Hi,
Could you please modify the below security group and open the following port.

NONPROD security group:
Security group: PFE-SG-GBLMDMHUB-US-APP-NPROD-001
Port: 2376
(this port is related to Docker for encrypted communication with the daemon)

The host related to this:
amraelp00007334
amraelp00007335

Regards,
Mikolaj


Certificates Configuration

Kafka - GBL32139266i  

GO TO:How to Generate JKS Keystore and Truststore

keytool -genkeypair -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=mdm_gbl_us_hub, C=US"
keytool -certreq -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -file kafka.gbl-mdm-hub-us-nprod.COMPANY.com.csr -keystore server.keystore.jks

SAN:

gbl-mdm-hub-us-nprod.COMPANY.com
amraelp00007334.COMPANY.com
●●●●●●●●●●●●
amraelp00007335.COMPANY.com
●●●●●●●●●●●●


Create guest_user for KAFKA - "CN=kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US":

GO TO: How to Generate JKS Keystore and Truststore

keytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_user.keystore.jks -dname "CN=kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US"
keytool -certreq -alias guest_user -file kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com.csr -keystore guest_user.keystore.jks
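After the CSRs are signed by the CA, the chain and the signed certificates are typically imported back into the same keystores (a sketch; the signed-cert file name is a placeholder, and the CA chain has to be imported before the server certificate):

    # import the COMPANY root/intermediate chain first
    keytool -importcert -trustcacerts -alias root-ca -file RootCA-G2.cer -keystore server.keystore.jks
    # then the signed server certificate under the same alias used for -genkeypair
    keytool -importcert -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -file kafka.gbl-mdm-hub-us-nprod.COMPANY.com.cer -keystore server.keystore.jks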

Kong - GBL32144418i

openssl req -nodes -newkey rsa:2048 -sha256 -keyout gbl-mdm-hub-us-nprod.key -out gbl-mdm-hub-us-nprod.csr

Subject Alternative Names

gbl-mdm-hub-us-nprod.COMPANY.com
amraelp00007334.COMPANY.com
●●●●●●●●●●●●
amraelp00007335.COMPANY.com
●●●●●●●●●●●●
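Note that the openssl req command above does not embed the SAN list in the CSR itself; on OpenSSL 1.1.1+ this can be done with -addext (a sketch; otherwise the SANs are supplied to the CA together with the request, as listed above):

    openssl req -nodes -newkey rsa:2048 -sha256 \
      -keyout gbl-mdm-hub-us-nprod.key -out gbl-mdm-hub-us-nprod.csr \
      -addext "subjectAltName=DNS:gbl-mdm-hub-us-nprod.COMPANY.com,DNS:amraelp00007334.COMPANY.com,DNS:amraelp00007335.COMPANY.com"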


EFK - GBL32139762i  , GBL32144243i

openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-log-management-gbl-us-nonprod.key -out mdm-log-management-gbl-us-nonprod.csr
mdm-log-management-gbl-us-nonprod.COMPANY.com

Subject Alternative Names
mdm-log-management-gbl-us-nonprod.COMPANY.com
gbl-mdm-hub-us-nprod.COMPANY.com
amraelp00007334.COMPANY.com
●●●●●●●●●●●●
amraelp00007335.COMPANY.com
●●●●●●●●●●●●


openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode1-gbl-us-nonprod.key -out mdm-esnode1-gbl-us-nonprod.csr
mdm-esnode1-gbl-us-nonprod.COMPANY.com - Elasticsearch

Subject Alternative Names
mdm-esnode1-gbl-us-nonprod.COMPANY.com
gbl-mdm-hub-us-nprod.COMPANY.com
amraelp00007334.COMPANY.com
●●●●●●●●●●●●
amraelp00007335.COMPANY.com
●●●●●●●●●●●●


Domain Configuration:

Example request: GBL30514754i "Register domains "mdm-log-management*"


  1. log in to http://btondemand.COMPANY.com/getsupport
  2. What can we help you with? - Search for "Network Team Ticket"
  3. Select the most relevant topic - "DNS Request"
  4. Submit a ticket to this queue.
  5. Ticket Details:

Request

Hi,
Could you please register the following domains:

ADD the below DNS entry:
========================
mdm-log-management-gbl-us-nonprod.COMPANY.com              Alias Record to                             amraelp00007334.COMPANY.com[●●●●●●●●●●●●]
gbl-mdm-hub-us-nprod.COMPANY.com                                        Alias Record to                             amraelp00007335.COMPANY.com[●●●●●●●●●●●●]


Kind regards,
Mikolaj
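After the DNS ticket is completed, the records can be verified from any host on the COMPANY network:

    dig +short mdm-log-management-gbl-us-nonprod.COMPANY.com   # should resolve to amraelp00007334
    dig +short gbl-mdm-hub-us-nprod.COMPANY.com                # should resolve to amraelp00007335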





Environment Installation


Pre:

rmdir /var/lib/docker
ln -s /app/docker /var/lib/docker

umount /var/lib/docker
lvremove /dev/datavg/varlibdocker
vgreduce datavg /dev/nvme1n1

Clear content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" to use the daemon.json file


Ansible:

ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file

ansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file


ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file

ansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file


ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file

ansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file
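The three per-server runs above can also be expressed as a single loop (same playbooks and inventory, just iterating over the --limit value):

    for srv in server1 server2 server3; do
      ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit "$srv" --vault-password-file=~/vault-password-file
      ansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit "$srv" --vault-password-file=~/vault-password-file
    done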


copy daemon_docker_tls_overlay.json.j2 to /etc/docker/daemon.json

FIX using - https://stackoverflow.com/questions/44052054/unable-to-start-docker-after-configuring-hosts-in-daemon-json

$ sudo cp /lib/systemd/system/docker.service /etc/systemd/system/
$ sudo sed -i 's/\ -H\ fd:\/\///g' /etc/systemd/system/docker.service
$ sudo systemctl daemon-reload
$ sudo service docker restart
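Afterwards, verify that the daemon accepts TLS connections on 2376 (a sketch; the client cert/key paths are placeholders and depend on where daemon_docker_tls_overlay.json points them):

    docker --tlsverify \
      --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
      -H tcp://amraelp00007334.COMPANY.com:2376 info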


Docker Version:

amraelp00007334:root:[10:10 AM]:/app> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

amraelp00007335:root:[10:04 AM]:/app> docker --version
Docker version 1.13.1, build b2f74b2/1.13.1

[root@amraelp00008810 docker]# docker --version
Docker version 19.03.13-ce, build 4484c46


Configure Registry Login (registry-gbicomcloud.COMPANY.com):

ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●●● root access
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●● service account
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file

Registry (manual config):
  Copy certs: /etc/docker/certs.d/registry-gbicomcloud.COMPANY.com/ from (mdm-reltio-handler-env\ssl_certs\registry)
  docker login registry-gbicomcloud.COMPANY.com (log in on the service account too)
  user/pass: mdm/**** (check mdm-reltio-handler-env\group_vars\all\secret.yml)


Playbooks installation order:

Install node_exporter:
    ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
    ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file

Install Kafka
  ansible-playbook install_hub_broker.yml -i inventory/dev_gblus/inventory --limit broker --vault-password-file=~/vault-password-file

Install Mongo
  ansible-playbook install_hub_db.yml -i inventory/dev_gblus/inventory --limit mongo --vault-password-file=~/vault-password-file

Install Kong
  ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file

Update KONG Config (IT NEEDS TO BE UPDATED ON EACH ENV (DEV, QA, STAGE)!!)
  ansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file
  Verification:
    openssl s_client -connect amraelp00007335.COMPANY.com:8443 -servername gbl-mdm-hub-us-nprod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer

Install EFK
  ansible-playbook install_efk_stack.yml -i inventory/dev_gblus/inventory --limit efk --vault-password-file=~/vault-password-file

Install FLUENTD Forwarder (without this, docker logging may not work and docker commands will be blocked)
  ansible-playbook install_fluentd_forwarder.yml -i inventory/dev_gblus/inventory --limit docker-services --vault-password-file=~/vault-password-file

Install Prometheus services:
  mongo_exporter:
    ansible-playbook install_prometheus_mongo_exporter.yml -i inventory/dev_gblus/inventory --limit mongo_exporter1 --vault-password-file=~/vault-password-file
  cadvisor:
    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file
ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file
  sqs_exporter:
    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
    ansible-playbook install_prometheus_stack.yml -i inventory/stage_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file
    ansible-playbook install_prometheus_stack.yml -i inventory/qa_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file

Install Consul 
ansible-playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file
# After this operation, get the SecretID from the consul container. On the container, execute the following command:

$ consul acl bootstrap

and copy it as mgmt_token to consul secrets.yml

After the Consul install step, run the update Consul playbook.
Update Consul
ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v
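A quick sanity check of the bootstrap token (a sketch; CONSUL_HTTP_TOKEN is the standard Consul CLI variable, so any ACL-protected command should now succeed):

    export CONSUL_HTTP_TOKEN=<SecretID from 'consul acl bootstrap'>
    consul members   # should list the cluster members instead of returning a permission error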


Setup Mongo Indexes and Collections:

Create Collections and Indexes
    entityHistory

        db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
        db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
        db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});
        db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});
        db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
        db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
        db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
        db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
        db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
        db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});
        db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});
        db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});

    entityRelations

        db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});
        db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});
        db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});
        db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});
        db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
        db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
        db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});
        db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});
        db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
        db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
        db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
        db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});

    LookupValues

        db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});
        db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});
        db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});
        db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});
        db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});
        db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});

    ErrorLogs

        db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});
        db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});
        db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});
        db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});

    batchEntityProcessStatus

        db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});
        db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});

    batchInstance

        - create collection

    relationCache

        db.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});

    DCRRequests

        db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});
        db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});
        db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});

    entityMatchesHistory

        db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});
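To verify the setup, the index list can be read back from the running container (a sketch; the container name is a placeholder and the HUB database has to be selected first if it is not the default):

    # list the indexes of one of the collections created above
    docker exec -it <mongo-container> mongo --eval 'printjson(db.entityHistory.getIndexes())'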



Connect ENV with Prometheus:

Update config -  ansible-playbook install_prometheus_configuration.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file

Prometheus config
node_exporter
       - targets:
          - "amraelp00007334.COMPANY.com:9100"
          - "amraelp00007335.COMPANY.com:9100"
         labels:
            env: gblus_dev
            component: node


kafka
       - targets:
          - "amraelp00007335.COMPANY.com:9101"
         labels:
            env: gblus_dev
            node: 1
            component: kafka


kafka_exporter
       - targets:
          - "amraelp00007335.COMPANY.com:9102"
         labels:
            trade: gblus
            node: 1
            component: kafka
            env: gblus_dev


Components:
    jmx_manager
       - targets:
          - "amraelp00007335.COMPANY.com:9104"
         labels:
            env: gblus_dev
            node: 1
            component: manager
       - targets:
          - "amraelp00007335.COMPANY.com:9108"
         labels:
            env: gblus_qa
            node: 1
            component: manager
       - targets:
          - "amraelp00007335.COMPANY.com:9112"
         labels:
            env: gblus_stage
            node: 1
            component: manager

    jmx_event_publisher
       - targets:
          - "amraelp00007334.COMPANY.com:9106"
         labels:
            env: gblus_dev
            node: 1
            component: publisher
       - targets:
          - "amraelp00007334.COMPANY.com:9110"
         labels:
            env: gblus_qa
            node: 1
            component: publisher
       - targets:
          - "amraelp00007334.COMPANY.com:9104"
         labels:
            env: gblus_stage
            node: 1
            component: publisher

    jmx_reltio_subscriber
       - targets:
          - "amraelp00007334.COMPANY.com:9105"
         labels:
            env: gblus_dev
            node: 1
            component: subscriber
       - targets:
          - "amraelp00007334.COMPANY.com:9109"
         labels:
            env: gblus_qa
            node: 1
            component: subscriber
       - targets:
          - "amraelp00007334.COMPANY.com:9113"
         labels:
            env: gblus_stage
            node: 1
            component: subscriber

    jmx_batch_service
      - targets:
          - "amraelp00007335.COMPANY.com:9107"
        labels:
          env: gblus_dev
          node: 1
          component: batch_service
      - targets:
          - "amraelp00007335.COMPANY.com:9111"
        labels:
          env: gblus_qa
          node: 1
          component: batch_service
      - targets:
          - "amraelp00007335.COMPANY.com:9115"
        labels:
          env: gblus_stage
          node: 1
          component: batch_service


sqs_exporter
       - targets:
          - "amraelp00007334.COMPANY.com:9122"
         labels:
            env: gblus_dev
            component: sqs_exporter
       - targets:
          - "amraelp00007334.COMPANY.com:9123"
         labels:
            env: gblus_qa
            component: sqs_exporter
       - targets:
          - "amraelp00007334.COMPANY.com:9124"
         labels:
            env: gblus_stage
            component: sqs_exporter


cadvisor
       - targets:
          - "amraelp00007334.COMPANY.com:9103"
         labels:
            env: gblus_dev
            node: 1
            component: cadvisor_exporter
       - targets:
          - "amraelp00007335.COMPANY.com:9103"
         labels:
            env: gblus_dev
            node: 2
            component: cadvisor_exporter


mongodb_exporter
      - targets:
          - "amraelp00007334.COMPANY.com:9120"
        labels:
          env: gblus_dev
          component: mongodb_exporter


kong_exporter
       - targets:
          - "amraelp00007335.COMPANY.com:9542"
         labels:
            env: gblus_dev
            component: kong_exporter
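After updating the scrape targets, the configuration can be validated and Prometheus reloaded without a restart (a sketch; the config path is a placeholder and the /-/reload endpoint works only when Prometheus runs with --web.enable-lifecycle):

    promtool check config /etc/prometheus/prometheus.yml
    curl -X POST http://localhost:9090/-/reload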









" }, { "title": "Getting access to PDKS Rancher and Kubernetes clusters", "pageID": "259433725", "pageLink": "/display/GMDM/Getting+access+to+PDKS+Rancher+and+Kubernetes+clusters", "content": "
  1. Go to https://requestmanager.COMPANY.com/#/
  2. Search nsa-unix and select first link (NSA-UNIX)
  3. You will see the form for requesting access, which should be filled in as in the example below:


Do you need to be added to any Role Groups? YES

Do you need privileged access to specific Servers in a Role Group? NO

Please provide the Server Location: Not applicable

NIS Domain: Other 

Add to Role Group(s): UNIX-GBLMDMHUB-US-PROD-ADMIN-U or UNIX-GBLMDMHUB-US-NPROD-ADMIN-U (depending on the environment)

Please provide information about Account Privileges: Add Privileges  

Please choose the Type of Privilege to Add: 

Please provide the UNIX Group Name:  UNIX-GBLMDMHUB-US-PROD-COMPUTERS-U or UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U


Please provide a brief Business Justification:

For prod:

atp-mdmhub-prod-amer
atp-mdmhub-prod-emea
atp-mdmhub-prod-apac

PDKS EKS clusters regarding project BoldMove.


For nprod:

atp-mdmhub-nprod-amer
atp-mdmhub-nprod-emea
atp-mdmhub-nprod-apac

PDKS EKS clusters regarding project BoldMove.


Comments or Special Instructions:  

I am creating this request to have access to Global MDM HUB prod clusters.


\"\"\"\"



" }, { "title": "UI:", "pageID": "308256633", "pageLink": "/pages/viewpage.action?pageId=308256633", "content": "" }, { "title": "Add new role and add users to the UI", "pageID": "308256635", "pageLink": "/display/GMDM/Add+new+role+and+add+users+to+the+UI", "content": "

MDM HUB UI roles standards:

Here is the role standard used to grant specific users access to the UI:

Environments


            NON-PROD                 PROD
            DEV    QA    STAGE       PROD
GBL          *      *      *          *
EMEA         *      *      *          *
AMER         *      *      *          *
APAC         *      *      *          *
GBLUS        *      *      *          *
ALL          *      *      *          *

Use the 'ALL' keyword in combination with 'NON-PROD' and 'PROD' - this approach produces only 2 roles for the system.

Role Schema:

<prefix>_<tenant>_<system name>_<application>_<environment>_<system>_<suffix>

<prefix> - COMM
<tenant> - ALL or GBL/AMER/EMEA etc. (recommendation is ALL)
<system name> - MDMHUB
<application> - UI
<environment> - PROD / NON-PROD or specific based on the table above
<system> - HUB_ADMIN / PTRS etc. Important: the <system> name has to be in sync with HUB configuration users in e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users
<suffix> - ROLE


example roles:

HUB ADMIN → COMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLE - HUB UI group for hub-admin users - access to all clusters, and non-prod environments.

HUB ADMIN → COMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE - HUB UI group for hub-admin users - access to all clusters, and prod environments.

PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.

PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and prod environments.

The <system> is the user name used in HUB. All users related to a specific system can be granted access through that system's role.


For example, if someone from the PTRS system wants to have access to the UI, here is how to process such a request:


  1. Add user to existing UI role
    1. Go to https://requestmanager1.COMPANY.com/Group/Default.aspx
    2. search a group:
    3. \"\"
    4. If a role is found in search results you can check current members or request a new member
    5. add a new user:
    6. \"\"
    7. save
    8. go to Cart https://requestmanager1.COMPANY.com/group/Review.aspx
    9. and submit the request.
  2. If the role does not exist:
    1. First, create a new role:
      1. click Create a NEW Security Group
      2. https://requestmanager1.COMPANY.com/group/Create.aspx?type=sec
      3. \"\"
      4. region -EMEA
      5. name - the name of a group 
      6. primary owner - AJ
      7. secondary owner  - Mikołaj Morawski
      8. Description - e.g. HUB UI group for hub-admin users - access to all clusters, and prod environments.
      9. now you can add users to this group
    2. Second, configure roles and access to the user in HUB:
      1. Important: <system> name has to be in sync with HUB configuration users in http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users 
      2. Users can have access to the following roles and APIs:
        1. https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html
      1. Add roles and topics to the user:
        1. .e.g: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users/ptrs.yaml
          1. Put "kafka" section with specific kafka topics:
          2. Add mdm admin section with specific roles and access to topics:
            1. e.g. 
            2.     mdm_admin:
                    reconciliationTargets:
                      - emea-dev-out-full-ptrs-eu
                      - emea-dev-out-full-ptrs-global2
                      - emea-qa-out-full-ptrs-eu
                      - emea-qa-out-full-ptrs-global2
                      - emea-stag-out-full-ptrs-eu
                      - emea-stag-out-full-ptrs-global2
                      - gbl-dev-out-full-ptrs
                      - gbl-dev-out-full-ptrs-eu
                      - gbl-dev-out-full-ptrs-porind
                      - gbl-qa-out-full-ptrs-eu
                      - gbl-stage-out-full-ptrs
                      - gbl-stage-out-full-ptrs-eu
                      - gbl-stage-out-full-ptrs-porind
                    sources:
                      - ALL
                    countries:
                      - ALL
                    roles: &roles
                      - MODIFY_KAFKA_OFFSET
                      - RESEND_KAFKA_EVENT
                    kafka: *kafka
          3. REMEMBER TO ADD: add the mdm_auth section, as this enables UI access. (A consolidated sketch of a full user file follows this list.)
            1. Without this section the UI will not show HUB Admin tools! 
            2. mdm_auth:
              roles: *roles
          4. With the mdm_auth section and the roles listed there, the user will only see 2 pages in the UI - in this case, the pages for MODIFY_KAFKA_OFFSET and RESEND_KAFKA_EVENT
      2. When the roles and users are configured on the HUB end go to the first step and add selected users to the selected roles.
      3. From this point on, any new PTRS user can be added to the COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE and will be able to log in to the UI, see the pages, and use the API through the UI.
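Below is a minimal, consolidated sketch of such a user file (the topic name is a placeholder and the exact section layout is assumed from the steps above; the real files in mdm-hub-cluster-env are the source of truth). The &roles / *roles YAML anchor reuses the same role list for both mdm_admin and mdm_auth:

    kafka: &kafka
      - emea-dev-out-full-ptrs-eu          # topics this user may consume (placeholder)
    mdm_admin:
      reconciliationTargets:
        - emea-dev-out-full-ptrs-eu
      sources:
        - ALL
      countries:
        - ALL
      roles: &roles
        - MODIFY_KAFKA_OFFSET
        - RESEND_KAFKA_EVENT
      kafka: *kafka
    mdm_auth:
      roles: *roles                        # without mdm_auth the UI will not show HUB Admin tools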




" }, { "title": "Current users and roles", "pageID": "347636361", "pageLink": "/display/GMDM/Current+users+and+roles", "content": "
Environment | Client | Cluster | Role | COMPANY Users | HUB internal user
NON-PROD | MDMHUB | ALL | COMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLE

ALL HUB Team Members 

+

Andrew.J.Varganin@COMPANY.com

Nishith.Trivedi@COMPANY.com


e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/users/hub_admin.yaml
PROD | MDMHUB | ALL | COMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE

ALL HUB Team Members

+

Andrew.J.Varganin@COMPANY.com

Nishith.Trivedi@COMPANY.com

e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/users/hub_admin.yaml
NON-PROD | MDMETL | ALL | COMM_ALL_MDMHUB_UI_NON-PROD_MDMETL_ADMIN_ROLE

Anurag.Choudhary@COMPANY.com
Shikha@COMPANY.com
Raghav.Gupta@COMPANY.com
Khushboo.Bharti@COMPANY.com
Manisha.Kansal@COMPANY.com
Ajit.Tiwari@COMPANY.com
Sayak.Acharya@COMPANY.com
Jeevitha.R@COMPANY.com
Priya.Suthar@COMPANY.com
Joymalya.Bhattacharya@COMPANY.com
Chinthamani.Kalebu@COMPANY.com
Arindam.Roy2@COMPANY.com
NarendraSingh.Chouhan@COMPANY.com
Adrita.Sarkar@COMPANY.com
Manish.Panda@COMPANY.com
Meghana.Das@COMPANY.com
Hanae.Laroussi@COMPANY.com
Somil.Sethi@COMPANY.com
Shivani.Jha@COMPANY.com
Pradnya.Raikar@COMPANY.com
KOMAL.MANTRI@COMPANY.com
Absar.Ahsan@COMPANY.com
Asmita.Datta@COMPANY.com

e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/users/mdmetl_admin.yaml
PROD | MDMETL | ALL | COMM_ALL_MDMHUB_UI_PROD_MDMETL_ADMIN_ROLE

Anurag.Choudhary@COMPANY.com
Shikha@COMPANY.com
Raghav.Gupta@COMPANY.com
Khushboo.Bharti@COMPANY.com
Manisha.Kansal@COMPANY.com
Ajit.Tiwari@COMPANY.com
Sayak.Acharya@COMPANY.com
Jeevitha.R@COMPANY.com
Priya.Suthar@COMPANY.com
Joymalya.Bhattacharya@COMPANY.com
Chinthamani.Kalebu@COMPANY.com
Arindam.Roy2@COMPANY.com
NarendraSingh.Chouhan@COMPANY.com
Manish.Panda@COMPANY.com
Meghana.Das@COMPANY.com
Hanae.Laroussi@COMPANY.com
Somil.Sethi@COMPANY.com
Shivani.Jha@COMPANY.com
Pradnya.Raikar@COMPANY.com
KOMAL.MANTRI@COMPANY.com
Asmita.Datta@COMPANY.com

e.g. https://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/users/mdmetl_admin.yaml
NON-PROD | PTRS | ALL | COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE | sagar.bodala@COMPANY.com
Aishwarya.Shrivastava@COMPANY.com
Tanika.Das@COMPANY.com
Rishabh.Singh@COMPANY.com
Bhushan.Shanbhag@COMPANY.com
Hasibul.Mallik@COMPANY.com
AbhinavMishra.Mishra@COMPANY.com
Asmita.Mishra@COMPANY.com
Prema.NayagiGS@COMPANY.com
e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users/ptrs.yaml
PROD | PTRS | ALL | COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE | sagar.bodala@COMPANY.com
Aishwarya.Shrivastava@COMPANY.com
Tanika.Das@COMPANY.com
Rishabh.Singh@COMPANY.com
Bhushan.Shanbhag@COMPANY.com
Hasibul.Mallik@COMPANY.com
AbhinavMishra.Mishra@COMPANY.com
Asmita.Mishra@COMPANY.com
Prema.NayagiGS@COMPANY.com

e.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/users/ptrs.yaml


NON-PROD | COMPANY | ALL | COMM_ALL_MDMHUB_UI_NON-PROD_COMPANY_ROLE | navaneel.ghosh@COMPANY.com

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1707/diff#amer/nprod/users/COMPANY.yml

PROD | COMPANY | ALL | COMM_ALL_MDMHUB_UI_PROD_COMPANY_ROLE | navaneel.ghosh@COMPANY.com

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1707/diff#amer/nprod/users/COMPANY.yml

" }, { "title": "SSO and roles", "pageID": "322564881", "pageLink": "/display/GMDM/SSO+and+roles", "content": "

To log in to the UI dashboard you have to be on the COMPANY network. SSO authorization is done via SAML, using COMPANY PingFederate.


Auth flow

\"\"


SSO login


\"\"


SAML login role

After successful authentication with SAML, we receive roles from Active Directory (Group Manager - distribution list)

Then we decode the roles using the following regexp:

COMM_(?<tenant>[A-Z]+)_MDMHUB_UI_(?<environment>NON-PROD|PROD)_(?<system>.+)_ROLE

When a role matches the environment and tenant, we resolve the user's roles by looking up the system in the user configuration.
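The same decoding can be reproduced with a PCRE grep, e.g. to sanity-check a group name before requesting it (a sketch):

    echo "COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE" \
      | grep -qP '^COMM_(?<tenant>[A-Z]+)_MDMHUB_UI_(?<environment>NON-PROD|PROD)_(?<system>.+)_ROLE$' \
      && echo "matches: tenant=ALL, environment=NON-PROD, system=PTRS"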


Backend AD groups

Service | NPROD Group | PROD Group | Description

Kibana | COMM_ALL_MDMHUB_KIBANA_NON-PROD_ADMIN_ROLE | COMM_ALL_MDMHUB_KIBANA_PROD_ADMIN_ROLE
Kibana | COMM_ALL_MDMHUB_KIBANA_NON-PROD_VIEWER_ROLE | COMM_ALL_MDMHUB_KIBANA_PROD_VIEWER_ROLE

Grafana | | COMM_ALL_MDMHUB_GRAFANA_PROD_ADMIN_ROLE
Grafana | | COMM_ALL_MDMHUB_GRAFANA_PROD_VIEWER_ROLE

Akhq | COMM_ALL_MDMHUB_KAFKA_NON-PROD_ADMIN_ROLE | COMM_ALL_MDMHUB_KAFKA_PROD_ADMIN_ROLE
Akhq | COMM_ALL_MDMHUB_KAFKA_NON-PROD_VIEWER_ROLE | COMM_ALL_MDMHUB_KAFKA_PROD_VIEWER_ROLE

Monitoring | COMM_ALL_MDMHUB_ALL_NON-PROD_MON_ROLE | COMM_ALL_MDMHUB_ALL_PROD_MON_ROLE | This group aggregates users that are responsible for monitoring of MDMHUB

Airflow | COMM_ALL_MDMHUB_AIRFLOW_NON-PROD_ADMIN_ROLE | COMM_ALL_MDMHUB_AIRFLOW_PROD_ADMIN_ROLE
Airflow | COMM_ALL_MDMHUB_AIRFLOW_NON-PROD_VIEWER_ROLE | COMM_ALL_MDMHUB_AIRFLOW_PROD_VIEWER_ROLE












" }, { "title": "UI Connect Guide", "pageID": "322540727", "pageLink": "/display/GMDM/UI+Connect+Guide", "content": "

Log in to UI and switch Tenants

  1. To log in to UI please use the following link: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-dev
  2. Log in to UI using your COMPANY credentials:
    1. \"\"
  3. There is no need to know each UI address, you can easily switch between Tenants using the following link (available on the TOP RIGHT corner in UI near the USERNAME):
    1. \"\"


What pages are available with the default VIEW role

By default, you are logged in with the default VIEW role, the following pages are available:

\"\"

  1. HUB Status

    1. You can use the HUB Dashboard main page that contains HUB platform status: Event processing details, Snowflake refresh time, started batches and ETA to load data to Reltio or get Events from Reltio.
  2. Ingestion Services Configuration

    1. This page contains the documentation related to the Data Quality checks, Source Match Categorization, Cleansing & Formatting, Auto-Fills, and Minimum Viable Profile Checks.
    2. You can choose a filter to switch between different entity types and use input boxes to filter results.
    3. You can use the 'Category' filter to include the operations that you are interested in
    4. You can use the 'Query' filter and put any text to find what you are looking for (e.g. 'prefix' to find rules with prefix word)
    5. You can use the 'Date' filter to find rules created or updated after a specific time - using this filter you can easily find the rules added after a data reload, and reload the data one more time to reflect the changes.
    6. This page contains also documentation related to duplicate identifiers and noise lists.
    7. You can choose a  filter to switch between different entity types and use input boxes to filter results
  3. Ingestion Services Tester

    1. This page contains the JSON tester: paste input JSON and click the 'Test' button to check the output JSON with all rules applied
    2. Click 'Difference' to see only the changed sections
    3. Click 'Validation result' to see the rules that were executed.

More details here: HUB UI User Guide

What operations are available in the UI

As a user, you can request access to the technical operations in HUB. The details on how to access more operations are described in the section below.

Here you will get to know the different UI operations and what can be done using these operations:

HUB Admin allows to:

\"\"


  1. Kafka Offset

    1. Technical operation
    2. On this page a user can modify the Kafka offset of a specific consumer group
    3. A system/user that wants to have access to this page will be allowed to maintain the consumer group offset and change it to:
      1. latest
      2. earliest
      3. specific date time
      4. shift by a specific number of events.
  2. HUB Reconciliation

    1. Technical operation
    2. Used internally by HUB Team.
    3. This operation allows us to mimic Reltio event generation - it generates the events to the input HUB topic so that we can reprocess them.
    4. You can use this page to generate events by:
      1. provide an input array with entity/relation URIs
      2. or
      3. provide the query and select the source/market that you want to reprocess.
  3. Kafka Republish Events

    1. Technical operation
    2. This operation can be used to generate events for your Kafka topic
    3. Use case - you are consuming data from HUB and you want to test something on non-prod environments and consume events for a specific market one more time. You want to receive 1000 events for France market for your testing.
    4. You can use this page to generate events for the target topic:
      1. Specify the Countries/Sources/Limits/Dates and Target Reconciliation topic - as a result, you will receive the events.
  4. Reltio Reindex

    1. Technical operation
    2. This operation executes the Reltio Reindexing operation
    3. You can use this page to generate events by:
      1. provide the query and select the source/market that you want to reprocess.
      2. or
      3. provide the input file with entity/relation URIs, that will be sent to Reltio API.
  5. Merge/Unmerge Entities

    1. Business operation
    2. This operation consumes the input file and executes the merge/unmerge operations in Reltio
    3. More details about the file and process are described here: Batch merge & unmerge
  6. Update Identifiers

    1. Business operation
    2. This operation consumes the input file and executes the update identifiers operations in Reltio
    3. More details about the file and process are described here: Batch update identifiers
  7. Clear Cache

    1. Business operation
    2. Clear ETL Batch Cache
    3. More details about the file and process are described here: Batch clear ETL data load cache

How to request additional access to new operations

Please send the following email to the HUB DL: DL-ATP_MDMHUB_SUPPORT@COMPANY.com

Subject:

HUB UI - Access request for <user-name/system-name>

Body:

Please provide the access / update the existing access for <user-name/system-name> to HUB Admin operations.

ID

Details

Comments:

1

Action needed


Add user to the HUB UI

Edit user in the HUB UI (please provide the existing group name)

<any other>

2

Tenant


GBL, EMEA, AMER, GBLUS, APAC/ALL

Tenant - more details in Environments

By default please select ALL Tenants, but if you need access only to a specific one, please select it.

3

Environments


 PROD / NON-PROD  or specific: DEV/QA/STAGE/PROD

By default please select PROD / NON-PROD environments, but if you need access only to a specific one, please select it.

4

Permissions range


Choose the operation:

Kafka Offset

HUB Reconciliation

Kafka Republish Events

Reltio Reindex

Merge/Unmerge Entities

Update Identifiers

Clear Cache

5

COMPANY Team


ETL/COMPANY or DSR or Change Management etc.

6

Business justification


Needs access to execute merge unmerge operation in EMEA/AMER/APAC PROD Reltio

7

Point of contact


If you are from the system please provide the DL email and system details.

8

Sources


<optional  - list of sources to which user should have access>

required in Events/Reindex/Reconciliation operations

9

Countries


<optional  - list of countries to which user should have access>

required in Events/Reindex/Reconciliation operations


The request will be processed after Andrew.J.Varganin@COMPANY.com approval. 


In the response, you will receive the Group Name. Please use this for future reference.

e.g. PTRS system roles used in the PTRS system to manage UI operations.

   PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.

   PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and prod environments.

HUB Team will use the following SOP to add you to a selected role: Add a new role and add users to the UI

Get Help

In case of any questions, the GetHelp page or full HUB documentation is available here (UI page footer):

\"\"

GetHelp

Welcome to the Global MDM Home!




" }, { "title": "Users:", "pageID": "302705550", "pageLink": "/pages/viewpage.action?pageId=302705550", "content": "" }, { "title": "Add Direct API User to HUB", "pageID": "273694347", "pageLink": "/display/GMDM/Add+Direct+API+User+to+HUB", "content": "

To add a new user to the MDM HUB direct API, a few steps must be done. This document describes what activities must be fulfilled and who is responsible for them.

Create PingFederate user - client's responsibility 

If the client's authentication method is OAuth2, then there is a need to create a PingFederate user.

To add a user you must have a Ping Federate user created: How to Request PingFederate (PXED) External OAuth 2.0 Account 

Caution: If the authentication method is key auth, then the HUB Team generates the key and sends it in a secure way to the client.


Send a request to MDM HUB that contains all necessary data - client's responsibility 

Send a request to create a new user with direct API access to HUB Team: dl-atp_mdmhub_support@COMPANY.com

The request must contain as follows:



1

Action needed

2

PingFederate username

3

Countries

4

Tenant

5

Environments

6

Permissions range

7

Sources

8

Business justification

9

Point of contact

10

Gateway

Description

  1. Action needed – this is a place where you decide if you want to create a new user or modify the existing one.
  2. PingFederate username – you need to create a user on the PingFederate side. Its username is crucial to authenticate on the HUB side. If you do not have a PingFederate user please check: https://confluence.COMPANY.com/display/GMDM/How+to+request+PingFederate+%28PXED%29+external+OAuth+2.0+account
  3. Countries - list of countries that access to will be granted
  4. Tenant – a tenant or list of tenants where the user will be created. Please note that if you connect from the open internet, only EMEA is possible. If you have a local application scoped to a Reltio Region, it is recommended to request a local tenant. If you have a global solution, you can call EMEA and your requests will be routed by HUB.
  5. Environments – list of environment instances – DEV/QA/STG/PROD
  6. Permissions range – do you need to write or read/write? To which entities do you need access? HCO/HCP/MCO
  7. Sources – to which sources do you need to have access?
  8. Business justification – please describe
    1. Why do you have a connection with HUB?
    2. Why the user must be created/modified?
    3. What’s the project name?
    4. Who’s the project manager?
  9. Point of contact – please add a DL group name - in case of any issues connected with that user
  10. Which API you want to call: EMEA, AMER, APAC, etc.
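Once the user exists, a client call typically looks like the sketch below (the PingFederate host and the resource path are placeholders, not the actual values - use the endpoints provided by the HUB Team; jq is used here only to extract the token):

    # obtain an OAuth2 token via the client_credentials grant
    TOKEN=$(curl -s "https://<pingfederate-host>.COMPANY.com/as/token.oauth2" \
      -d "grant_type=client_credentials" \
      -d "client_id=${CLIENT_ID}" \
      -d "client_secret=${CLIENT_SECRET}" | jq -r '.access_token')

    # call the HUB direct API with the bearer token
    curl -H "Authorization: Bearer ${TOKEN}" \
      "https://<hub-gateway>.COMPANY.com/<api-path>/entities/<entity-id>"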

Prepare new user on MDM HUB side - HUB Team Responsibility 

  1. Store clients' request in dedicated confluence space: Clients
  2. In the COMPANY tenants, the new user needs to be connected with the API Router directly.
  3. Change API router configuration, and add a new user with:
    1. user PingFederate name or when the user uses key auth add API key to secrets.yaml
    2. sources
    3. countries
    4. roles
  4. Change Manager configuration, add
    1. sources
    2. countries
  5. Change DCR service configuration - if applicable
    1. dcrServiceConfig-  initTrackingDetailsStatus, initTrackingDetail, dcrType
    2. roles - CREATE_DCR, GET_DCR
  6. You need to check how the request will be routed. If there is a need to make a routing configuration, follow these steps:
    1. change API Router configuration by adding new countries to proper tenants
    2. change Manager configuration in destinated tenant by adding
      1. sources
      2. countries
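A minimal sketch of such an API Router user entry (all values are hypothetical; it assumes the entry follows the same shape as the gw_users.yml example shown in Add External User to MDM Hub - verify against the actual configuration):

- name: "EXAMPLEAPP-MDM_client"   # PingFederate username (or an API-key user)
  description: "EXAMPLEAPP direct API user"
  roles:
    - <roles_required_for_this_user>
  countries:
    - DE
    - FR
  sources:
    - EXAMPLEAPP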


" }, { "title": "Add External User to MDM Hub", "pageID": "164470196", "pageLink": "/display/GMDM/Add+External+User+to+MDM+Hub", "content": "

Kong configuration

  1. First, you need to have the user logins from PingFederate for every environment
  2. Go to folder inventory/{{ kong_env }}/group_vars/kong_v1 in repository mdm-hub-env-config
    Find the PLUGINS section in file kong_{{ env }}.yml and then the rule named mdm-external-oauth
    1. in this section find "users_map"
    2. add a new entry there following this pattern:

      - "<user_name_from_ping_federate>:<user_name_in_mdm_hub>"
    3. change False to True in create_or_update setting for this rule

      create_or_update: True

      Repeat these steps (a-c) for every environment {{ env }} you want to apply changes to (e.g., dev, qa, stage)

      {{ kong_env }} - environment on which the Kong instance is deployed

      {{ env }} - environment on which the MDM Hub instance is deployed

      kong_env     | env
      dev          | dev, mapp, stage
      prod         | prod
      dev_gblus    | dev_gblus, qa_gblus, stage_gblus
      prod_gblus   | prod_gblus
      dev_us       | dev_us
      prod_us      | prod_us
  3. Go to folder inventory/{{ env }}/group_vars/gw-services

    In file gw_users.yml, add a section with the new user after the last added user, specifying the roles and sources needed for this user. E.g.,

    User configuration

    - name: "<user_name_in_mdm_hub>"
      description: "<Some description>"
      defaultClient: "ReltioAll"
      getEntityUsesMongoCache: yes
      lookupsUseMongoCache: yes
      roles:
        - <specify_only_roles_that_are_required_for_this_user>
      countries:
        - US
      sources:
        - <specify_only_sources_needed_by_this_user>

    Repeat this step for every environment {{ env }} you want to apply changes to (e.g., dev, qa, stage)

  4. After the configuration changes, you need to update Kong using the following commands
    1. for nonprod gblus envs

      GBLUS NPROD - kong update
      ansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/ansible.secret

    2. for prod gblus env

      GBLUS PROD - kong update
      ansible-playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/ansible.secret

    3. for nonprod gbl envs

      GBL NPROD - kong update
      ansible-playbook update_kong_api_v1.yml -i inventory/dev/inventory --vault-password-file=~/ansible.secret

    4. for prod gbl env

      GBL PROD - kong update
      ansible-playbook update_kong_api_v1.yml -i inventory/prod/inventory --vault-password-file=~/ansible.secret

    5. for nonprod US env

      US NPROD - kong update
      ansible-playbook update_kong_api_v1.yml -i inventory/dev_us/inventory --vault-password-file=~/ansible.secret

    6. for prod US env

      US PROD - kong update
      ansible-playbook update_kong_api_v1.yml -i inventory/prod_us/inventory --vault-password-file=~/ansible.secret

      Troubleshooting

      If there is a problem with deploying, you need to set create_or_update to True also for the route and manager service.

      Ansible secret

      To use this script you need to have the ansible.secret file created in your home directory, or adjust --vault-password-file as needed.
      Another option is to change --vault-password-file to --ask-vault-pass and provide the Ansible vault password at runtime.

  5. Before committing the changes, find all occurrences where you set create_or_update to True and change them back to:

    create_or_update: False

    Then commit the changes

  6. Redeploy gateway services on all modified envs. Before deploying, please verify that no batch run is in progress
    Jenkins job to deploy gateway services:
    https://jenkins-gbicomcloud.COMPANY.com/job/mdm-gateway/



" }, { "title": "Add new Batch to HUB", "pageID": "310944945", "pageLink": "/display/GMDM/Add+new+Batch+to+HUB", "content": "

To add a new batch to MDM HUB, a few steps must be completed. This document describes what activities must be fulfilled and who is responsible for them.

Check source and country configuration

The first step is to check if DQ rules and SMC are configured for the new source. 

Repository: mdm-config-registry; Path: \\config-hub\\<env_tenant>\\mdm-manager\\quality-service\\quality-rules\\

If not, you have to immediately send an email to the person who requested the new batch. This condition is usually handled as a separate task, as a prerequisite to adding the batch configuration.

"This is a new source. You have to send DQ and SMC requirements for a new source to A.J. and Eleni. Based on it a new HUB requirement deck will be prepared. When we receive it, the task can be planned. Until that time the task is blocked."

The same exercise applies when we get requirements for a new country.

Authorization and authentication

Clients use the mdmetl batch service user to populate data to Reltio. No changes are needed.

Send a request to MDM HUB that contains all necessary data - client's responsibility 

Send a request to create a new batch to the HUB Team: dl-atp_mdmhub_support@COMPANY.com

The request must contain the following:



subject area - list of stages: HCP/HCO/Affiliations
data source
countries list
source name
batch name
file type - full/incremental
frequency
business justification
single point of contact on the client side

Prepare new batch on MDM HUB side - HUB Team Responsibility 

Repository: mdm-hub-cluster-env

Changes on manager level

The configuration in mdmetl.yaml must be extended with:

Path: \\<tenant>\\<env>\\users\\mdmetl.yaml

  1. New sources
  2. New countries
  3. Add the new batch with its stages to batch_service, example:

batch_service:
  defaultClient: "ReltioAll"
  description: "MDMETL Informatica IICS User - BATCH loader"
  batches:
    "ONEKEY":             # <- new batch name
      - "HCPLoading"      # <- new stage
      - "HCOLoading"      # <- new stage
      - "RelationLoading" # <- new stage

In the MDM Manager config, if the batch includes the RelationLoading stage, then add the relation types to the refAttributesEnricher configuration:

relationType: ProviderAffiliations
relationType: ContactAffiliations
relationType: ACOAffiliations

and also add:
  1. New sources
  2. New countries

Changes in batch-service level

Based on the stages being added, the batch-service configuration must be changed.

Path: \\<tenant>\\<env>\\namespaces\\<namespace>\\config_files\\batch-service\\config\\application.yml

  1. Add configuration in BatchWorkflows, example:

- batchName: "PFORCERX_ODS"
  batchDescription: "PFORCERX_ODS - HCO, HCP, Relation entities loading"
  stages:
    - stageName: "HCOLoading"
    - stageName: "HCOSending"
      softDependentStages: [ "HCOLoading" ]
      processingJobName: "SendingJob"
    - stageName: "HCOProcessing"
      dependentStages: [ "HCOSending" ]
      processingJobName: "ProcessingJob"
    # --------------------------------
    - stageName: "HCPLoading"
    - stageName: "HCPSending"
      softDependentStages: [ "HCPLoading" ]
      processingJobName: "SendingJob"
    - stageName: "HCPProcessing"
      dependentStages: [ "HCPSending" ]
      processingJobName: "ProcessingJob"
    # ------------------
    - stageName: "RelationLoading"
    - stageName: "RelationSending"
      dependentStages: [ "HCOProcessing", "HCPProcessing" ]
      softDependentStages: [ "RelationLoading" ]
      processingJobName: "SendingJob"
    - stageName: "RelationProcessing"
      dependentStages: [ "RelationSending" ]
      processingJobName: "ProcessingJob"

If the batch is a full load, then two additional stages must be configured; their purpose is to allow deleting profiles:

- stageName: "EntitiesUnseenDeletion"
  dependentStages: [ "HCOProcessing" ]
  processingJobName: "DeletingJob"
- stageName: "HCODeletesProcessing"
  dependentStages: [ "EntitiesUnseenDeletion" ]
  processingJobName: "ProcessingJob"


2. Add configuration to bulkConfiguration, example:

"PFORCERX_ODS":
  HCOLoading:
    bulkLimit: 25
    destination:
      topic: "${env}-internal-batch-pforcerx-ods-hco"
    maxInFlightRequest: 5
  HCPLoading:
    bulkLimit: 25
    destination:
      topic: "${env}-internal-batch-pforcerx-ods-hcp"
    maxInFlightRequest: 5
  RelationLoading:
    bulkLimit: 25
    destination:
      topic: "${env}-internal-batch-pforcerx-ods-rel"
    maxInFlightRequest: 5

All new dedicated topics must be configured. The configuration must be added in kafka-topics.yml, example:

emea-prod-internal-batch-pulse-kam-hco:
  partitions: 6
  replicas: 3

3. Add configuration in sendingJob, example:

PFORCERX_ODS:
  HCOSending:
    source:
      topic: "${env}-internal-batch-pforcerx-ods-hco"
    maxInFlightRequest: 5
    bulkSending: false
    bulkPacketSize: 10
    reltioRequestTopic: "${env}-internal-async-all-mdmetl-user"
    reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"
  HCPSending:
    source:
      topic: "${env}-internal-batch-pforcerx-ods-hcp"
    maxInFlightRequest: 5
    bulkSending: false
    bulkPacketSize: 10
    reltioRequestTopic: "${env}-internal-async-all-mdmetl-user"
    reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"
  RelationSending:
    source:
      topic: "${env}-internal-batch-pforcerx-ods-rel"
    maxInFlightRequest: 5
    bulkSending: false
    bulkPacketSize: 10
    reltioRequestTopic: "${env}-internal-async-all-mdmetl-user"
    reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"

4. If a batch is a full load, then deletingJob must be configured, for example:

PULSE_KAM:
  EntitiesUnseenDeletion:
    maxDeletesLimit: 10000
    queryBatchSize: 10
    reltioRequestTopic: "${env}-internal-async-all-mdmetl-user"
    reltioResponseTopic: "${env}-internal-async-all-mdmetl-user-ack"



" }, { "title": "How to Request PingFederate (PXED) External OAuth 2.0 Account", "pageID": "263491721", "pageLink": "/display/GMDM/How+to+Request+PingFederate+%28PXED%29+External+OAuth+2.0+Account", "content": "

This instruction describes the steps the Client should trigger to create the PingFederate account. Per security requirements, HUB should only know the UserName created by the PXED Team. HUB does not request external accounts; passwords and all other details are shared only with the Client. The Client shares the user name with HUB, and only after the user name is configured will the Client gain access to HUB resources.


Contact Persons:


Details required to fulfill the PXED request are in this doc:

\"\"


User Name standard: <SYSTEM_NAME>-MDM_client


Steps:

  1. Go to https://requestmanager.COMPANY.com/#/
  2. In Search For Application type: PXED
  3. \"\"
  4.  Pick - Application enablement with enterprise authentication services (PXED, LDAP and/or SSO)
  5. Fill in the request and send it.
  6. Wait for the user name and password.
  7. After confirmation, share the Client ID with HUB and wait for access to be granted. Do not share the password.



EXAMPLE:

For reference, an example request sent for the PFORCEOL user:


Field | Example value | Notes
Request Ticket | GBL32702829i | Ticket ID
Name | Varganin, Andrew Joseph | Requested user name
AD Username | VARGAA08 | Requested user Id
User Domain | AMER | Region (AMER/EMEA/APAC/US...)
Request ID | 20200717112252425 | Request ID
Hosting location | External | Hosting location of the Client services (External or Internal COMPANY Network)
VCAS Reference number | V... | VCAS Reference number
Data Feed | No, API/Services | Flow - requests sent to HUB API, hence API/Services
Application access methods | Web Browser | Type of access for the Client application (Intranet/Web Browser etc.)
Application User base | COMPANY colleagues, Contractors | Application User base
Application access devices | Laptop/Desktop, Tablets (iPad/Android/Windows) | Application access devices
Application Access Locations | Internet | Location (External - Internet / Internal - Intranet)
Application Name | <EXAMPLE: PFORCEOL (BIOPHARMA)> | Requested application name that requires the new account
CMDB ID (Production Deployment) | SC.... | CMDB ID (Production Deployment)
IPRM Solution profile number | .... | IPRM Solution profile number
Number of users for the application | ... | Number of users for the application
Concurrent Users | .... | Concurrent Users
Comments | Application-to-Application Integration using NSA (Non-Standard Service Account). PTRS will use REST APIs to authenticate to and access COMPANY Global MDM Services. This application will access MDM API Services (MDM_client) and will need an OAuth2 account (KOL-MDM_client) for access to those APIs/Services. | Full description of the requested account and integration
Application Scope | All Users | Application Scope

Referenced tickets (only for example / reference purposes):

https://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32702829i

https://requestmanager.COMPANY.com/#/request/20201208091510997

" }, { "title": "Hub Operations", "pageID": "302705582", "pageLink": "/display/GMDM/Hub+Operations", "content": "" }, { "title": "Airflow:", "pageID": "164470119", "pageLink": "/pages/viewpage.action?pageId=164470119", "content": "" }, { "title": "Checking that Process Ends Correctly", "pageID": "164470118", "pageLink": "/display/GMDM/Checking+that+Process+Ends+Correctly", "content": "

To check that a process ended without any issues, you need to log in to Prometheus and check the Alerts Monitoring PROD dashboard. You have to check the rows in the GBL PROD Airflow DAG's Status panel. If you can see red rows (like on the screenshot below), it means that some issues occurred:

\"\"

Details of the issues are available in Airflow.

" }, { "title": "Common Problems", "pageID": "164470117", "pageLink": "/display/GMDM/Common+Problems", "content": "

Failed task getEarliestUploadedFile

While reviewing a failed DAG you may notice that the task getEarliestUploadedFile is in a failed state. In the task's logs you can see a line like this:

[2020-03-19 18:44:07,082] {{docker_operator.py:252}} INFO - Unable to find the earliest uploaded file. S3 directory is empty?

The issue occurs because getEarliestUploadedFile was not able to download the export file. In this case you need to check the S3 location and verify that the correct export file was uploaded to the valid location.


" }, { "title": "Deploy Airflow Components", "pageID": "164470010", "pageLink": "/display/GMDM/Deploy+Airflow+Components", "content": "

The deployment procedure is implemented as an Ansible playbook. The source code is stored in the MDM environment configuration repository. The runnable file is available under the path: https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/install_mdmgw_airflow_services.yml and can be run with the command:

ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventory  

Deployment has the following steps:

  1. Creating directory structure on execution host, 
  2. Templating configuration files and transferring those to config location, 
  3. Creating DAG, variable and connections in Apache Airflow, 
  4. Restarting Airflow instance to apply configuration changes. 

After successful deployment, the DAG and configuration changes should be available to trigger in the Airflow UI.

" }, { "title": "Deploying DAGs", "pageID": "164469947", "pageLink": "/display/GMDM/Deploying+DAGs", "content": "

To deploy a newly created DAG or configuration changes, you have to run the deployment procedure implemented as the Ansible playbook install_mdmgw_airflow_services.yml:

ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventory

If you have access to Jenkins, you can also use the Jenkins jobs: https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/. Each environment has its own deploy job. Once you choose the right job, you have to:

1. Click the button "Build Now": \"\"

2. After a few seconds the stage icon "Choose dags to deploy" becomes active and waits for you to choose the DAG to deploy:

\"\"

\"\"

3. Choose the DAG you want to deploy and approve your decision.


After this, the job will deploy all your changes to the Airflow server.




" }, { "title": "Error Grabbing Grapes - hub_reconciliation_v2", "pageID": "218438556", "pageLink": "/display/GMDM/Error+Grabbing+Grapes+-+hub_reconciliation_v2", "content": "

In the hub_reconciliation_v2 Airflow DAG, a Grapes error might occur during the entities_generate_hub_reconciliation_events stage:

org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during conversion: Error grabbing Grapes
(...)

Cause:

That could be caused by connectivity/configuration issues.

Workaround:

For this DAG, dependencies are mounted in the container. The mounted directory is located on the Airflow server under the path:

/app/airflow/{{ env_name }}/hub_reconciliation_v2/tmp/.groovy/grapes/

To solve this problem, copy the libs from a working DAG, e.g. hub_reconciliation_v2_gblus_prod:

amraelp00007847.COMPANY.com/app/airflow/gblus_prod/hub_reconciliation_v2/tmp/.groovy/grapes
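A minimal sketch of that copy on the Airflow server (gblus_prod is the example working environment from above; <env_name> stands for the broken one and must be adjusted to your case):

# copy the cached Grape artifacts from the working DAG's mounted directory
cp -r /app/airflow/gblus_prod/hub_reconciliation_v2/tmp/.groovy/grapes/. \
      /app/airflow/<env_name>/hub_reconciliation_v2/tmp/.groovy/grapes/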
" }, { "title": "Batches (Batch Service):", "pageID": "302705680", "pageLink": "/pages/viewpage.action?pageId=302705680", "content": "" }, { "title": "Adding a New Batch", "pageID": "164469956", "pageLink": "/display/GMDM/Adding+a+New+Batch", "content": "

1. Add the batch to batch_service.yml in the following sections

- add batch info to the batchWorkflows section - base it on one already defined
- add bulk configuration
- add to sendingJob
- add to deletingJob if needed

2. Add the source and user for the batch to batch_service_users.yml

- add the appropriate source and batch for the user mdmetl_nprod

3. Add user to:

- for the appropriate source, country and roles

4. Add the topic to the bundle section in manager/config/application.yml

5. Add Kafka topics

We use the Kafka manager to add new topics; the topics file can be found under /inventory/<env>/group_vars/kafka/manager/topics.yml

First set create_or_update to True; after the topics are created, change it back to False

6. Create topics and redeploy services using Jenkins

https://jenkins-gbicomcloud.COMPANY.com/job/mdm-gateway/

7. Redeploy the gateway on the other envs (qa, stage, prod) only if there is no batch running - check it in Mongo on the batchInstance collection using the following query: {"status" : "STARTED"} (see the sketch after this list)

8. Ask if the new source should be added to the DQ rules
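A minimal sketch of that running-batch check in the mongo shell (the database name reltio_prod is an example - use the one matching your environment):

// any document returned here means a batch is still in progress - do not redeploy yet
db = db.getSiblingDB('reltio_prod')
db.getCollection("batchInstance").find({ "status" : "STARTED" })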

" }, { "title": "Cache Address ID Clear (Remove Duplicates) Process", "pageID": "163917838", "pageLink": "/display/GMDM/Cache+Address+ID+Clear+%28Remove+Duplicates%29+Process", "content": "

This process is similar to the Cache Address ID Update Process, so the user should load the file to Mongo and process it with the following steps:

  1. Download the files indicated by the user and apply them on a specific environment (sometimes only STAGE, sometimes all envs)
    1. For example - 3 files - /us/prod/inbound/cdw/one-time-feeds/other/
    2. \"\"
  2. Merge these files into one file - Duplicate_Address_Ids_<date>.txt
  3. Proceed with the script.sh based on the Cache Address ID Update Process
  4. Load the generated extract to the removeIdsFromkeyIdRegistry collection
    1. mongoimport --host=localhost:27017 --username=admin --password=zuMMQvMl7vlkZ9XhXGRZWoqM8ux9d08f7BIpoHb --authenticationDatabase=admin --db=reltio_stage --collection=removeIdsFromkeyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_Duplicate_Address_Ids_16042021.txt --mode=insert
  5. CLEAR keyIdRegistry
    1. docker exec -it mongo_mongo_1 bash
    2. cd /data/configdb
    3. NPROD - nohup mongo duplicate_address_ids_clear.js &

    4. PROD   - nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <passw> --authenticationDatabase reltio_prod duplicate_address_ids_clear.js &

    5. FOR REFERENCE SCRIPT:

      CLEAR keyIdRegistry

      db = db.getSiblingDB('reltio_dev')
      db.auth("mdm_hub", "<pass>")

      db = db.getSiblingDB('reltio_prod')
      db.auth("mdm_hub", "<pass>")

      print("START")
      var start = new Date().getTime();

      var cursor = db.getCollection("removeIdsFromkeyIdRegistry").aggregate(
          [
          ],
          {
              "allowDiskUse" : false
          }
      )

      cursor.forEach(function (doc){
          db.getCollection("keyIdRegistry").remove({"_id": doc._id});
      });

      var end = new Date().getTime();
      var duration = end - start;
      print("duration: " + duration + " ms")
      print("END")

      nohup mongo duplicate_address_ids_clear.js &

      nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass> --authenticationDatabase reltio_prod duplicate_address_ids_clear.js &
  6. CLEAR batchEntityProcessStatus checksums
    1. docker exec -it mongo_mongo_1 bash
    2. cd /data/configdb
    3. NPROD - nohup mongo unset_checsum_duplicate_address_ids_clear.js &
    4. PROD   - nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass> --authenticationDatabase reltio_prod unset_checsum_duplicate_address_ids_clear.js &
    5. FOR REFERENCE SCRIPT:

      CLEAR batchEntityProcessStatus

      db = db.getSiblingDB('reltio_dev')
      db.auth("mdm_hub", "<pass>")

      db = db.getSiblingDB('reltio_prod')
      db.auth("mdm_hub", "<pass>")

      print("START")
      var start = new Date().getTime();
      var cursor = db.getCollection("removeIdsFromkeyIdRegistry").aggregate(
          [
          ],
          {
              "allowDiskUse" : false
          }
      )

      cursor.forEach(function (doc){
          var key = doc.key
          var arrVars = key.split("/");

          var type = "configuration/sources/" + arrVars[0]
          var value = arrVars[3];

          print(type + " " + value)

          var result = db.getCollection("batchEntityProcessStatus").update(
              { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },
              { $set: { "checksum": "" } },
              { multi: true }
          )

          printjson(result);

      });

      var end = new Date().getTime();
      var duration = end - start;
      print("duration: " + duration + " ms")
      print("END")

      nohup mongo unset_checsum_duplicate_address_ids_clear.js &

      nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass> --authenticationDatabase reltio_prod unset_checsum_duplicate_address_ids_clear.js &
  7. Verify nohup output
  8. Check a few rows and verify that these rows no longer exist in the KeyIdRegistry collection
  9. Check a few profiles and verify that the checksum was cleared in the BatchEntityProcessStatus collection


  1. ISSUE - for the ONEKEY profiles there is a difference between the generated cache and the corresponding profile.
  2. ISSUE - for the GRV profiles there is a difference between the generated cache and the corresponding profile - check the crosswalk values in the COMPANY_ADDRESS_ID_EXTRACT_PAC_ files - they should be e.g. 00002b9b-f327-456c-959c-fd5b04ed04b8
  3. ISSUE - for the ENGAGE 1.0 profiles there is a difference between the generated cache and the corresponding profile - check the crosswalk values in the COMPANY_ADDRESS_ID_EXTRACT_ENG_ files - they should be e.g. 00002b9b-f327-456c-959c-fd5b04ed04b8

Please check the following example:

CUST_SYSTEM,CUST_TYPE,SRC_ADDR_ID,SRC_CUST_ID,SRC_CUST_ID_TYPE,PFZ_ADDR_ID,PFZ_CUST_ID,SRC_SYS,MDM_SRC_SYS,EXTRACT_DT
PROBLEM : HCPM,HCP,0000407429,8091473,HCE,38357661,1374316,HCPS,HCPS,2021-04-15
OK            : HCPM,HCP,a012K000022cqBoQAI,0012K00001lCEyYQAW,HCP,109525669,178336284,VVA,VVA,2021-04-15

For VVA the crosswalk is equal to the 001A000001VgOEVIA3 and it is easy to match with the ICUE profile and clear the cache 

for ONEKEY the generated row is equal to the - 

COMPANYAddressIDSeq|ONEKEY/HCP/HCE/8091473/0000407429,ONEKEY/HCP/HCE/8091473/0000407429,COMPANYAddressIDSeq,38357661,com.COMPANY.mdm.generator.db.KeyIdRegistry

The 8091473 is not a crosswalk, so to remove the checksum from the BatchEntityProcessStatus collection there is a need to find the profile in Reltio - the crosswalk is WUSM01113231 - and clear the cache in the BatchEntityProcessStatus collection.

In my example there was only one crosswalk, so it was easy to find this profile. For multiple profiles, a solution needs to be found. (I think we need to ask CDW to provide the file for ONEKEY with an additional crosswalk column, so we will be able to match the crosswalk with the Key and clear the checksum.)


    Solution: once we receive the ONEKEY KeyIdRegistry update file, ask the COMPANY Team to generate crosswalk IDs - a simple CSV file


  1. The file received from CDW does not contain crosswalk IDs, only COMPANYAddressIds - example input - https://gblmdmhubprodamrasp101478.s3.amazonaws.com/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txt
  2. Ask the DT Team and download the CSV file
  3. Load the file to a TMP collection in Mongo, e.g. - AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511
  4. Execute the following:
    1. CLEAR batchEntityProcessStatus based on crosswalk ID list

      db = db.getSiblingDB('reltio_dev')
      db.auth("mdm_hub", "<pass>")

      db = db.getSiblingDB('reltio_prod')
      db.auth("mdm_hub", "<pass>")

      print("START")
      var start = new Date().getTime();
      var cursor = db.getCollection("AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511").aggregate(
          [
          ],
          {
              "allowDiskUse" : false
          }
      )

      cursor.forEach(function (doc){

          var type = "configuration/sources/ONEKEY";
          var value = doc.COMPANYcustid_individualeid;

          print(type + " " + value)

          var result = db.getCollection("batchEntityProcessStatus").update(
              { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },
              { $set: { "checksum": "" } },
              { multi: true }
          )

          printjson(result);

      });

      var end = new Date().getTime();
      var duration = end - start;
      print("duration: " + duration + " ms")
      print("END")




" }, { "title": "Changelog of removed duplicates", "pageID": "172294537", "pageLink": "/display/GMDM/Changelog+of+removed+duplicates", "content": "

01.02.2021 - DROP keys
         Duplicate_Address_Ids.txt
         nohup ./script.sh inbound/Duplicate_Address_Ids.txt > EXTRACT_Duplicate_Address_Ids.txt &


19.04.2021 - DROP keys STAGE GBLUS
         Duplicate_Address_Ids_16042021.txt - 11 380 - 1 ONEKEY, ICUE, CENTRIS
         nohup ./script.sh inbound/Duplicate_Address_Ids_16042021.txt > EXTRACT_Duplicate_Address_Ids_16042021.txt &


17.05.2021 - DROP STAGE GBLUS
         Duplicate_Address_Ids_17052021.txt - 25121 - 1 ONEKEY
         nohup ./script.sh inbound/Duplicate_Address_Ids_17052021.txt > EXTRACT_Duplicate_Address_Ids_17052021.txt


25.06.2021 - DROP STAGE GBLUS
         Duplicate_Address_Ids_17052021.txt - 71509, 2 ONEKEY
         nohup ./script.sh inbound/Duplicate_Address_Ids_25062021.txt > EXTRACT_Duplicate_Address_Ids_25062021.txt &


12.07.2021 - DROP PROD GBLUS
         Duplicate_Address_Ids_12072021.txt - 4550 Duplicate_Address_Ids_12072021.txt - us/prod/inbound/cdw/one-time-feeds/Address-DeDup/FileSet-3/
         nohup ./script.sh inbound/Duplicate_Address_Ids_12072021.txt > EXTRACT_Duplicate_Address_Ids_12072021.txt &


" }, { "title": "Cache Address ID Update Process", "pageID": "164469955", "pageLink": "/display/GMDM/Cache+Address+ID+Update+Process", "content": "

1. Log in using S3 Browser to the production bucket gblmdmhubprodamrasp101478, go to dir /us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ and check the last update dates

2. Log in to server amraelp00007334.COMPANY.com via SSH using the mdmusnpr service user

3. Sync files from S3 using the command below

docker run -u 27519996:24670575 -e "AWS_ACCESS_KEY_ID=<access_key>" -e "AWS_SECRET_ACCESS_KEY=<secret_access_key>" -e "AWS_DEFAULT_REGION=us-east-1" -v /app/mdmusnpr/AddressID/inbound:/src:z mesosphere/aws-cli s3 sync s3://gblmdmhubprodamrasp101478/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ /src

4. After syncing, check the new files with these two commands, replacing new_file_name with the name of the file which was updated. Check in the script file that SRC_SYS and MDM_SRC_SYS exist; if not, something is wrong and the script probably needs to be updated - ask the person who requested the address ID update

cut -d',' -f8 <new_file_name> | sort | uniq
cut -d',' -f9 <new_file_name> | sort | uniq

5. Remove old extracts from /app/mdmusnpr/AddressID

rm EXTRACT_<new_file_name>

6. Run script which will prepare data for mongo

nohup ./script.sh inbound/<new_file_name> > EXTRACT_<new_file_name> &

Wait until the processing finishes. Check after some time using the command below:
ps ax | grep script
If the process is marked as done, you can continue with the next file; if there are no more files, proceed to the next step.

7. Log in to the server amraelp00007334.COMPANY.com using your own user and switch to root

8. Go to /app/mongo/config and remove old extracts

rm EXTRACT_<new_file_name>

9. Go to /app/mdmusnpr/AddressID and copy new extracts to mongo

cp EXTRACT_<new_file_name> /app/mongo/config/

10. Run mongo shell

docker exec -it mongo_mongo_1 bash
cd /data/configdb

11. Execute the following command for each non-prod env and for every new extract file

<db_name> - reltio_dev, reltio_qa, reltio_stage

mongoimport --host=localhost:27017 --username=admin --password=<db_password> --authenticationDatabase=admin --db=<db_name> --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_<new_file_name> --mode=upsert

Write the number of updated records into the changelog - it should be equal on all envs.

12. If needed and requested, update production using the following command

mongoimport --host=mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 --username=admin --password=<prod_db_password> --authenticationDatabase=admin --db=reltio_prod --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_<new_file_name> --mode=upsert

13. Verify the number of entries in the input file against the number of updated records in Mongo (see the sketch below)
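A quick way to cross-check the counts (a sketch; it assumes one record per line in the generated extract, which matches how the script.sh output is loaded):

# number of records prepared for mongoimport
wc -l EXTRACT_<new_file_name>
# compare with the "document(s) imported successfully" count printed by mongoimport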

14. Update changelog

15. Respond to email that update is done

16. A force merge will be generated - there will be a mail about this.

17. Download force merge delta from S3 using S3 browser and change name to merge_<date>_1.csv

bucket: gblmdmhubprodamrasp101478

path: us/prod/inbound/HcpmForceMerge/ForceMergeDelta

18. Upload file merge_<date>_1.csv to

bucket: gblmdmhubprodamrasp101478

path: us/prod/inbound/hub/merge_unmerge_entities/input/

19. Trigger dag 

https://mdm-monitoring.COMPANY.com/airflow/tree?dag_id=merge_unmerge_entities_gblus_prod_gblus

20. After the DAG has finished, log in using S3 Browser

bucket: gblmdmhubprodamrasp101478

path: us/prod/inbound/hub/merge_unmerge_entities/output/<most_recent_date>_<most_recent_time>
so for date 17/5/2021 and time 12:11:39, the path looks like this:
         us/prod/inbound/hub/merge_unmerge_entities/output/20210517_121139

and download the result file, check for failed merges, and send it in response to the email about the force merge




" }, { "title": "Changelog of updated", "pageID": "164469954", "pageLink": "/display/GMDM/Changelog+of+updated", "content": "

20.11.2020 - Loading NEW files:

GRV & ENGAGE 1.0
nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt &
IQVIA_RX
nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt &
IQVIA_MCO & MILLIMAN & MMIT
nohup ./script.sh inbound/COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt > EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt &

09.12.2020 - Loading new file: -> 460927

14.12.2020 - Loading new file: PAC_ENG -> 820 document, CAPP-> 464583 document

16.12.2020 - Loading MILLIMAN_MCO: 10504 document

22.12.2020 - Loading CPMRTE: 15686 document, CAPP: 1287, PAC_ENG: 1340, VVA: 11927070, IMS: 343, HCO and SAP problem, CENTRIS: 41496, hcps00: 4215

29.12.2020 - Loading PAC_ENG: 1260, CAPP: 1414

04.01.2021 - Loading PAC_ENG: 330, CAPP: 338

08.01.2021 - Loading HCPS00: 3214

11.01.2021 - Loading PAC_ENG: 496, CAPP: 512

18.01.2021 - Loading PAC_ENG: 616, CAPP: 795

25.01.2021 - Loading PAC_ENG: 1009, CAPP: 939

01.02.2021 - Loading PAC_ENG: 884, CAPP: 1106

08.02.2021 - Loading PAC_ENG: 576, CAPP: 394

15.02.2021 - Loading PAC_ENG: 690, CAPP: 696

17.02.2021 - Loading VVA: 12048364

22.02.2021 - Loading PAC_ENG: 724, CAPP: 757

01.03.2021 - Loading PAC_ENG: 906, CAPP: 969

26.04.2021 - Loading PAC_ENG: 738, CAPP: 795

11.05.2021 - Loading PAC_ENG: 589, CAPP: 626

17.05.2021 - Loading PAC_ENG: 489, CAPP: 613

17.05.2021 - Loading - us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txt

                     Updated: 1171703 - customers updated - cleared cache in batchEntityProcessStatus collection for reload

                     Updated: 1513734 - document(s) imported successfully in KeyIdRegistry

18.05.2021 - STAGE only
      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt - 43771 document(s) imported successfully
      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt - 10076 document(s) imported successfully


19.05.2021 -  Load 15 Files to PROD and clear cache. Load these files to DEV QA and STAGE
      2972 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt &
      19124366 May 19 07:11 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt &
      3154666 May 17 11:41 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt &
      221969 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt &
      214430 May 17 11:41 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt &
      163142 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt &
      73236 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt &
      6399709 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt &
      60175 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt &
      318915 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_ENG_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_ENG_20210511.txt &
      13528 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt &
      1360570 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt &
      8135990 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt &
      14583373 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt &
      283564 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt &


24.05.2021 - Loading PAC_ENG: Dev:1283, QA: 1283, Stage: 1509, Prod: 1283

                                         CAPP: Dev: 1873, QA: 1392, Stage: 1873, Prod: 1873


1/6/2021 - Loading PAC_ENG: 379, CAPP: 433


9/6/2021 - Loading PAC_ENG: 38, CAPP: 47


14/6/2021 - Loading PAC_ENG: 83, CAPP: 102

16/6/2021 - Loading COMPANY_ACCT: Prod: 236 

28/06/2021 - Loading PAC_ENG: Dev:182, QA: 182, Stage: 182, Prod: 646, CAPP: Dev: 215, QA: 215, Stage: 215, Prod: 215



02.07.2021
    Load 11 Files to PROD and clear cache. Load these files to DEV QA and STAGE
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt &
    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt &


5/7/2021 - Loading PAC_ENG: 39 , CAPP: 44


16.07.2021
    Load 1 VVA File to PROD and clear cache. Load this file to DEV QA and STAGE
    nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt &

20.07.2021
    Load 1 VVA File to PROD and clear cache. Load this file to DEV QA and STAGE
    nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt &


GBLUS/Fletcher PROD GO-LIVE COMPANYAddressID sequence - PROD (MAX)139510034 + 5000000 = 144510034







" }, { "title": "Manual Cache Clear", "pageID": "164470086", "pageLink": "/display/GMDM/Manual+Cache+Clear", "content": "
  1. Open Studio 3T and connect to appropriate Mongo DB
  2. Open IntelliShell
  3. Run the following query for the appropriate source - replace <source> with the right name


db.getCollection("batchEntityProcessStatus").updateMany({"sourceId.type":"configuration/sources/<source>"}, {$set: {"checksum" : ""}})
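To verify the clear worked, you can count the documents for that source that still have a non-empty checksum (a sketch reusing the collection and fields from the query above; it should return 0 after the update):

// should be 0 once the checksums have been cleared
db.getCollection("batchEntityProcessStatus").find({"sourceId.type":"configuration/sources/<source>", "checksum": {$ne: ""}}).count()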
" }, { "title": "Data Quality", "pageID": "492471763", "pageLink": "/display/GMDM/Data+Quality", "content": "" }, { "title": "Quality Rules Deployment Process", "pageID": "492471766", "pageLink": "/display/GMDM/Quality+Rules+Deployment+Process", "content": "

Resource changing

This process covers modifying the data quality configuration resources that are stored in Consul and loaded at runtime by the mdm-manager, mdm-onekey-dcr-service and precallback-service components. They are present in the mdm-config-registry/config-hub location.

When modifying the data quality rules configuration present at mdm-config-registry/config-hub/<env_name>/mdm-manager/quality-service/quality-rules, the following rules should be applied:

  1. Each YAML file should be formatted in accordance with yamllint rules (See Yamllint validation rules)
  2. The attributes createdDate/modifiedDate were deleted from the rules configuration files. They will be automatically set for each rule during the deployment process. (See Deployment of changes)
  3. Adding more than one rule with the same value of name attribute is not allowed.

PR validation

Every PR to the mdm-config-registry repository is validated for correctness of YAML syntax (see Yamllint validation rules). Upon PR creation, a job is triggered that checks the format of YAML files using yamllint. The job succeeds only when all the YAML files in the repository pass the yamllint test.
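To check the files locally before opening a PR, yamllint can be run directly (a sketch; the yamllint config file name and location are assumptions - use whatever rule file the repository actually ships):

yamllint -c .yamllint config-hub/<env_name>/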

PRs that did not pass validation should not be merged to master.

Deployment of changes

All changes in mdm-config-registry/config-hub should be deployed to Consul using Jenkins jobs. A separate job exists for deploying the changes done on each environment, e.g. the job deploy_config_amer_nprod_amer-dev is used to deploy all changes done on the AMER DEV environment (all changes under path mdm-config-registry/config-hub/dev_amer). The jobs allow deploying configuration from the master branch or from PRs to the mdm-config-registry repo.

The deployment job flow can be described by the following diagram:


\"\"


Steps

  1. Clean workspace - wipes workspace of all the files left from previous job run.
  2. Checkout mdm-config-registry - this repository contains files with data quality configuration and yamllint rules
  3. Checkout mdm-hub-cluster-env - this repository contains script for assigning createdDate / modifiedDate attributes to quality rules and ansible job for running this script and uploading files to consul.
  4. Validate yaml files - runs yamllint validation for every YAML file at mdm-config-registry/config-hub/<env_name> (See Yamllint validation rules)
  5. Get previous quality rules registry file - downloads the quality rules registry file produced after the previous successful run of the job. The file stores information about the modification dates and checksums of the quality rules. The decision whether modification dates should be updated is made based on checksum changes. The registry file is a CSV with the following headers (an example row is shown after this list):
    1. ID - ID of each quality rule in the form <file_name>:<rule_name>
    2. CREATED_DATE - stores the createdDate attribute value for each rule
    3. MODIFIED_DATE - stores the modifiedDate attribute value for each rule
    4. CHECKSUM - stores the checksum computed for each rule
  6. Update Quality Rules files - runs ansible job responsible for:
    1. Running script QualityRuleDatesManager.groovy - responsible for adjusting createdDate / modifiedDate for quality rules based on checksum changes and creating new quality rules registry file.
    2. Updating changed quality rules files in Consul kv store.
  7. Archive quality rules registry file - saves the new registry file in the job artifacts.
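For illustration, a registry file row might look like this (all values are hypothetical):

ID,CREATED_DATE,MODIFIED_DATE,CHECKSUM
hcp-rules.yml:hcpNameRequired,2023-01-10T12:00:00Z,2023-06-01T09:30:00Z,5f3a9c1e7b2d4c68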


Algorithm of updating modification dates

The following algorithm is implemented in the QualityRuleDatesManager.groovy script. Its main goal is to update createdDate/modifiedDate when a new quality rule has been added or its definition has changed.

\"\"

Yamllint validation rules

TODO

" }, { "title": "DCRs:", "pageID": "259432965", "pageLink": "/pages/viewpage.action?pageId=259432965", "content": "" }, { "title": "DCR Service 2:", "pageID": "302705607", "pageLink": "/pages/viewpage.action?pageId=302705607", "content": "" }, { "title": "Reject pending VOD DCR - transfer to Data Stewards", "pageID": "415993922", "pageLink": "/display/GMDM/Reject+pending+VOD+DCR+-+transfer+to+Data+Stewards", "content": "

Description

There's a DCR request which was sent to Veeva OpenData (VOD) by HUB, however it hasn't been processed - we didn't receive information whether it should be ACCEPTED or REJECTED. This causes a couple of things:

Goal

We want to simulate a REJECT response from VOD, which will make the DCR return to Reltio for further processing by Data Stewards. This may be realized in a couple of ways:

Procedure #1

Step 1 - Adjust the event template below

JSON event to populate

{
  "eventType": "CHANGE_REJECTED",
  "eventTime": 1712573721000,
  "countryCode": "SG",
  "dcrId": "a51f229331b14800846503600c787083",
  "vrDetails": {
    "vrStatus": "CLOSED",
    "vrStatusDetail": "REJECTED",
    "veevaComment": "MDM HUB: Simulated reject response to close DCR.",
    "veevaHCPIds": [],
    "veevaHCOIds": []
  }
}

Step 2 - Publish the event to topic $env-internal-veeva-dcr-change-events-in (for APAC-STAGE: apac-stage-internal-veeva-dcr-change-events-in). One way to do this is sketched below.

\"\"
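A minimal sketch of publishing the event from the command line (the broker address is an example, and any Kafka client or UI the team normally uses works equally well):

# the event JSON must be a single line in reject_event.json
kafka-console-producer.sh --bootstrap-server <broker>:9092 \
  --topic apac-stage-internal-veeva-dcr-change-events-in < reject_event.json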

After a couple of minutes two things should be in effect:

Step 3 - update MongoDB DCRRegistryVeeva collection

Document update

{
    $set : {
        "status.name" : "REJECTED",
        "status.changeDate" : "2024-04-07T17:42:37.882195Z"
    }
}


\"\"


Step 4 - check Reltio DCR

Check if the DCR status has changed to "DS Action Required" and the DCR Tracing details have been updated with the simulated Veeva Reject response.

\"\"

" }, { "title": "Close VOD DCR - override any status", "pageID": "492489948", "pageLink": "/display/GMDM/Close+VOD+DCR+-+override+any+status", "content": "

This SoP is almost identical to the one in Override VOD Accept to VOD Reject for VOD DCR with small updates:

In Step 1, please also update target = VOD to target = Reltio

" }, { "title": "Override VOD Accept to VOD Reject for VOD DCR", "pageID": "490649621", "pageLink": "/display/GMDM/Override+VOD+Accept+to+VOD+Reject+for+VOD+DCR", "content": "

Description

There's a DCR request which was sent to Veeva OpenData (VOD) and mistakenly ACCEPTED, however the business requires such a DCR to be REJECTED and redirected to the DSR for processing via the Reltio Inbox.

Goal

We want to:

Procedure

Step 0 - Assume that VOD_NOT_FOUND

  1. Set retryCounter to 9999
  2. Wait for 12h

Step 1 - Adjust DCR document in MongoDB in DCRRegistry collection (Studio3T)

  1. Remove the incorrect DCR Tracking entries for your DCR (trackingDetails section) - usually nested attributes 3 and 4 in this section
  2. Set retryCounter to 0
  3. Set status.name to "SENT_TO_VEEVA"

Step 2 - update MongoDB DCRRegistryVeeva collection

Document update

{
    $set : {
        "status.name" : "REJECTED",
        "status.changeDate" : "2024-04-07T17:42:37.882195Z"
    }
}

Step 3 - Adjust the event template below

JSON event to populate

{
  "eventType": "CHANGE_REJECTED",
  "eventTime": 1712573721000,
  "countryCode": "SG",
  "dcrId": "a51f229331b14800846503600c787083",
  "vrDetails": {
    "vrStatus": "CLOSED",
    "vrStatusDetail": "REJECTED",
    "veevaComment": "MDM HUB: Simulated reject response to close DCR.",
    "veevaHCPIds": [],
    "veevaHCOIds": []
  }
}


Step 4 - Publish the event to topic $env-internal-veeva-dcr-change-events-in (for APAC-STAGE: apac-stage-internal-veeva-dcr-change-events-in).

\"\"

After a couple of minutes (it depends on the traceVR schedule - it may take up to 6h on PROD) two things should be in effect:

\"\"


Step 6 - check Reltio DCR

Check if DCR status has changed to "DS Action Required" and DCR Tracing details has been updated with simulated Veeva Reject response. 

" }, { "title": "DCR escalation to Veeva Open Data (VOD)", "pageID": "430348063", "pageLink": "/pages/viewpage.action?pageId=430348063", "content": "

Integration fail

It occasionally happens that DCR response files from Veeva are not delivered to the S3 bucket used for ingestion by HUB. VOD provides CSV/ZIP files every day, even when there's no actual payload related to DCRs - such files contain only CSV headers. This disruption may be caused by two things:

Either way, we need to pinpoint which of the two is causing the problem.

Troubleshooting 

It's usually good to check when the last synchronization took place.

GMFT issue

If there is more than one file (usually this dir should be empty) in the outbound directory /globalmdmprodaspasp202202171415/apac/prod/outbound/vod/APAC/DCR_request, it means that the GMFT job is not pushing files from S3 to SFTP. Files which are properly processed by the GMFT job are copied to the Veeva SFTP and additionally moved to /globalmdmprodaspasp202202171415/apac/prod/archive/vod/APAC/DCR_request.

Veeva Open Data issue

Once you are sure it's not a GMFT issue, check the archive directory for the latest DCR response file:

If the latest file is older than 24h → there's an issue on the VOD side.


Who to contact?




" }, { "title": "DCR rejects from IQVIA due to missing RDM codes", "pageID": "475927691", "pageLink": "/display/GMDM/DCR+rejects+from+IQVIA+due+to+missing+RDM+codes", "content": "

Description

Sometimes our Clients receive the error message below when trying to send DCRs to OneKey.

This request was not accepted by the IQVIA due to missing RDM code mapping and was redirected to Reltio Inbox. The reason is: 'Target lookup code not found for attribute: HCPSpecialty, country: CA, source value: SP.ONCM.'. This means that there is no equivalent of this code in IQVIA code mapping. Please contact MDM Hub DL-ATP_MDMHUB_SUPPORT@COMPANY.com asking to add this code and click "SendTo3Party" in Reltio after Hub's confirmation.

Why

This happens when PforceRx tries to send a DCR with changes on an attribute with lookup values. On the HUB end we try to remap the canonical codes from Reltio/RDM to source mapping values which are specific to OneKey and understood by them.

Usually, for each canonical code there is a proper source code mapping. Please refer to the screen below (Mongo collection LookupValues).

\"\"


However, when there is no such mapping, like in the case below (no ONEKEY entry in sourceMappings), we run into the problem above. A query to spot such entries is sketched after the screenshot.

\"\"
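A quick way to inspect the mappings for the code from the error message (a sketch - the canonicalCode and country field names are assumptions based on the screenshots; adjust them to the actual document structure in your LookupValues collection):

// check whether sourceMappings contains an ONEKEY entry for this code
db.getCollection("LookupValues").find({ "canonicalCode" : "SP.ONCM", "country" : "CA" })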


For more information about canonical code mapping and the flow to get target code sent to OneKey or VOD, please refer to → Veeva: create DCR method (storeVR), section "Mapping Reltio canonical codes → Veeva source codes"

How

We should contact the people responsible for RDM code mappings (MDM COMPANY team) to find out the correct sourceMapping value for this specific canonical code and country. In the end they will contact AJ to add it to RDM (usually every week).

" }, { "title": "Defaults", "pageID": "284795409", "pageLink": "/display/GMDM/Defaults", "content": "

DCR defaults map the source codes of the Reltio system to the codes in the OneKey or VOD (Veeva Open Data) system. 

They occur for specific types of attributes: HCPSpecialities, HCOSpecialities, HCPTypeCode, HCOTypeCode, HCPTitle, HCOFacilityType.

The values are configured in the Consul system. To configure the values:

  1.  Sort the source (.xlsx) file:


    \"\"
  2. Divide the file into separate sheets for each attribute.
  3. Save the sheets as separate CSV files - columns separated by semicolons.
  4. Paste the contents of the files into the appropriate files in the Consul configuration repository - mdm-config-registry:

    \"\"
      - each environment has its own folder in the configuration repository
      - files must have the header: Country;CanonicalCode;Default (see the example below)
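For illustration, such a defaults file might look like this (the codes are hypothetical):

Country;CanonicalCode;Default
CA;SP.ONCM;SP.GP
FR;SP.CARD;SP.CARD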


For more information about canonical code mapping and the flow to get target code sent to OneKey or VOD, please refer to → Veeva: create DCR method (storeVR), section "Mapping Reltio canonical codes → Veeva source codes"


" }, { "title": "Go-Live Readiness", "pageID": "273696220", "pageLink": "/display/GMDM/Go-Live+Readiness", "content": "

Procedure:\"\"


" }, { "title": "OneKey Crosswalk is Missing and IQVIA Returned Wrong ID in TraceVR Response", "pageID": "259432967", "pageLink": "/display/GMDM/OneKey+Crosswalk+is+Missing+and+IQVIA+Returned+Wrong+ID+in+TraceVR+Response", "content": "


This SOP describes how to FIX the case when there is a DCR in OK_NOT_FOUND status and IQVIA changed the individualID from a wrong one to the correct one (due to human error).


Example Case based on EMEA PROD:


New Case (2023-03-21)

ONEKEY responded with ACCEPTED including an ONEKEY ID, but the OneKey VR Trace response contains: "requestStatus": "VAS_FOUND_BUT_INVALID".

The DCR2 Service checks every 12h whether OneKey has already provided the data to Reltio. We must manually close this DCR.

Steps:

In amer-prod-internal-onekey-dcr-change-events-in topic find the latest event for ID ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●.

Change from:

{
  "eventType": "DCR_CHANGED",
  "eventTime": 1677801600678,
  "eventPublishingTime": 1677801600678,
  "countryCode": "CA",
  "dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",
  "targetChangeRequest": {
    "vrStatus": "CLOSED",
    "vrStatusDetail": "ACCEPTED",
    "oneKeyComment": "ONEKEY response comment: Already Exists-Data Privacy\nONEKEY HCP ID: WCAP00028176\nONEKEY HCO ID: WCAH00052991",
    "individualEidValidated": "WCAP00028176",
    "workplaceEidValidated": "WCAH00052991",
    "vrTraceRequest": "{\"isoCod2\":\"CA\",\"validation.clientRequestId\":\"f19305a6e6af4b5aa03d26c1ec1ae5a6\"}",
    "vrTraceResponse": "{\"response.traceValidationRequestOutputFormatVersion\":\"1.8\",\"response.status\":\"SUCCESS\",\"response.resultSize\":1,\"response.totalNumberOfResults\":1,\"response.success\":true,\"response.results\":[{\"codBase\":\"WCA\",\"cisHostNum\":\"7853\",\"userEid\":\"07853\",\"requestType\":\"Q\",\"responseEntityType\":\"ENT_ACTIVITY\",\"clientRequestId\":\"f19305a6e6af4b5aa03d26c1ec1ae5a6\",\"cegedimRequestEid\":\"9d02f7547dbc4e659a9d230c91f96279\",\"customerRequest\":null,\"trace1ClientRequestDate\":\"2023-02-27T23:53:44Z\",\"trace2CegedimOkcProcessDate\":\"2023-02-27T23:53:40Z\",\"trace3CegedimOkeTransferDate\":\"2023-02-27T23:54:23Z\",\"trace4CegedimOkeIntegrationDate\":\"2023-02-27T23:55:47Z\",\"trace5CegedimDboResponseDate\":\"2023-03-02T21:23:36Z\",\"trace6CegedimOkcExportDate\":null,\"requestComment\":null,\"responseComment\":\"Already Exists-Data Privacy\",\"individualEidSource\":null,\"individualEidValidated\":\"WCAP00028176\",\"workplaceEidSource\":\"WCAH00052991\",\"workplaceEidValidated\":\"WCAH00052991\",\"activityEidSource\":null,\"activityEidValidated\":\"WCAP0002817602\",\"addressEidSource\":null,\"addressEidValidated\":\"WCA00000006206\",\"countryEid\":\"CA\",\"processStatus\":\"REQUEST_RESPONDED\",\"requestStatus\":\"VAS_FOUND_BUT_INVALID\",\"updateDate\":\"2023-03-02T21:37:16Z\"}]}"
  }
}

To:

{
  "eventType": "DCR_CHANGED",
  "eventTime": 1677801600678,
  "eventPublishingTime": 1677801600678,
  "countryCode": "CA",
  "dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",
  "targetChangeRequest": {
    "vrStatus": "CLOSED",
    "vrStatusDetail": "REJECTED",
    "oneKeyComment": "ONEKEY response comment: Already Exists-Data Privacy\nONEKEY HCP ID: WCAP00028176\nONEKEY HCO ID: WCAH00052991",
    "individualEidValidated": "WCAP00028176",
    "workplaceEidValidated": "WCAH00052991",
    "vrTraceRequest": "{\"isoCod2\":\"CA\",\"validation.clientRequestId\":\"f19305a6e6af4b5aa03d26c1ec1ae5a6\"}",
    "vrTraceResponse": "{\"response.traceValidationRequestOutputFormatVersion\":\"1.8\",\"response.status\":\"SUCCESS\",\"response.resultSize\":1,\"response.totalNumberOfResults\":1,\"response.success\":true,\"response.results\":[{\"codBase\":\"WCA\",\"cisHostNum\":\"7853\",\"userEid\":\"07853\",\"requestType\":\"Q\",\"responseEntityType\":\"ENT_ACTIVITY\",\"clientRequestId\":\"f19305a6e6af4b5aa03d26c1ec1ae5a6\",\"cegedimRequestEid\":\"9d02f7547dbc4e659a9d230c91f96279\",\"customerRequest\":null,\"trace1ClientRequestDate\":\"2023-02-27T23:53:44Z\",\"trace2CegedimOkcProcessDate\":\"2023-02-27T23:53:40Z\",\"trace3CegedimOkeTransferDate\":\"2023-02-27T23:54:23Z\",\"trace4CegedimOkeIntegrationDate\":\"2023-02-27T23:55:47Z\",\"trace5CegedimDboResponseDate\":\"2023-03-02T21:23:36Z\",\"trace6CegedimOkcExportDate\":null,\"requestComment\":null,\"responseComment\":\"Already Exists-Data Privacy\",\"individualEidSource\":null,\"individualEidValidated\":\"WCAP00028176\",\"workplaceEidSource\":\"WCAH00052991\",\"workplaceEidValidated\":\"WCAH00052991\",\"activityEidSource\":null,\"activityEidValidated\":\"WCAP0002817602\",\"addressEidSource\":null,\"addressEidValidated\":\"WCA00000006206\",\"countryEid\":\"CA\",\"processStatus\":\"REQUEST_RESPONDED\",\"requestStatus\":\"VAS_FOUND_BUT_INVALID\",\"updateDate\":\"2023-03-02T21:37:16Z\"}]}"
  }
}

and post it back to the topic. The DCR will be closed within 24h.


New Case (2024-03-19)


We need to force close/reject a couple of DCRs which cannot close themselves. They were sent to OneKey, but for some reason OK does not recognize them. IQVIA has not generated the TraceVR response, so we need to simulate it. To break the TRACEVR process for these DCRs we need to manually change the Mongo status to REJECTED. If we keep SENT we are going to keep asking IQVIA forever. TODO: describe this in the SOP.


Change from:
\"\"

To:


\"\"


\n
    "vrStatus": "CLOSED",\n    "vrStatusDetail": "REJECTED", 
\n



\n
 {\n  "eventType": "DCR_CHANGED",\n  "eventTime": <current_time>,\n  "eventPublishingTime": <current_time>,\n  "countryCode": "<country>",\n  "dcrId": "<dcr_id>",\n  "targetChangeRequest": {\n    "vrStatus": "CLOSED",\n    "vrStatusDetail": "REJECTED",\n    "oneKeyComment": "HUB manual update due to MR-<ticket_number>",\n    "individualEidValidated": null,\n    "workplaceEidValidated": null,\n    "vrTraceRequest": "{\\"isoCod2\\":\\"<country>\\",\\"validation.clientRequestId\\":\\"<dcr_id>\\"}",\n    "vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"W<country>\\",\\"cisHostNum\\":\\"4605\\",\\"userEid\\":\\"HUB\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"<dcr_id>\\",\\"cegedimRequestEid\\":\\"\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2024-02-27T09:29:34Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2024-02-27T09:29:34Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2024-02-27T09:32:22Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2024-02-27T09:29:48Z\\",\\"trace5CegedimDboResponseDate\\":\\"2024-03-04T14:51:54Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"\\",\\"responseComment\\":\\"HUB manual update due to MR-<ticket_number>\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":null,\\"workplaceEidSource\\":null,\\"workplaceEidValidated\\":null,\\"activityEidSource\\":null,\\"activityEidValidated\\":null,\\"addressEidSource\\":null,\\"addressEidValidated\\":null,\\"countryEid\\":\\"<country>\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_NOT_FOUND\\",\\"updateDate\\":\\"2024-03-04T16:06:29Z\\"}]}"\n  }\n}
\n
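To post the edited event back, you can use the start_producer.sh wrapper on the kafka-client pod (see: Client Configuration in k8s). A minimal sketch - the namespace, pod name and topic are placeholders and the DCR events topic differs per environment; the file must contain the event as a single line in the <message key>|<message body> format expected by the wrapper:

\n
# Hypothetical example - adjust namespace, pod name and topic to your environment\nkubectl exec -i --namespace <env>-backend kafka-client-<pod-suffix> -- start_producer.sh <env>-dcr-events-topic < dcr_changed_event.txt
\n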
" }, { "title": "CHANGELOG", "pageID": "411338079", "pageLink": "/display/GMDM/CHANGELOG", "content": "

List of DCRs:

\"\"Re COMPANY RE IM44066249 VR missing FR.msg

" }, { "title": "Update DCRs with missing comments", "pageID": "425495306", "pageLink": "/display/GMDM/Update+DCRs+with+missing+comments", "content": "

Description

Due to a temporary problem with our calls to the Reltio workflow API, we had multiple DCRs with missing workflow comments. The symptoms of this error were: no changeRequestComment field in the DCRRegistry mongo collection and an empty Comment field in Reltio when viewing the DCR by entityUrl.
We have created a solution that finds the deficient DCRs and updates their comments in the database and in Reltio.

Goal

We want to find all deficient DCRs in a given environment and update their comments in DCRRegistry and Reltio.
This can be accomplished by following the procedure described below.

Procedure

Step 1 - Configure the solution

Go to the tools/dcr-update-workflow-comments module in the mdm-hub-inbound-services repository.

Prepare the env configuration.
Provide mongo.dbName and manager.url in the application.yaml file.
Create a file named application-secrets.yaml. Copy the content from the application-secretsExample.yaml file and replace the mock values with real ones appropriate to the given environment.

Prepare the solution configuration.
Provide the desired mode (find/repair) and the DCR endTime limits for the deficient-DCR search in application.yaml.
Here is an example of update-comments configuration.

application.yaml
\n
update-comments:\n  mode: find\n  starting: 2024-04-01T10:00:00Z\n  ending: 2024-05-15T10:00:00Z
\n

Step 2 - Find deficient DCRs

Run the application using ApplicationServiceRunner.java in find mode with Spring profile: secrets.

\"\"

As a result, a dcrs.csv file will appear in the resources directory. It contains the list of DCRs to be updated in the next step: DCRs that ended within the configured time limits, have no changeRequestComment field in DCRRegistry and have a non-empty processInstanceId (that value is needed to retrieve the workflow comments from Reltio). The list can be viewed and altered if there is a need to omit a specific DCR from the update (a cross-check query is sketched below).

\"\"
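The same criteria can be cross-checked directly in Mongo before running the repair. A minimal sketch, assuming the field names described above and that endTime is stored as a date - adjust to the actual DCRRegistry schema:

\n
# Hypothetical cross-check of the find-mode criteria - adjust field types to the actual schema\nmongosh "<mongo_url>/<dbName>" --eval 'db.DCRRegistry.countDocuments({ changeRequestComment: { $exists: false }, processInstanceId: { $exists: true, $ne: "" }, endTime: { $gte: ISODate("2024-04-01T10:00:00Z"), $lte: ISODate("2024-05-15T10:00:00Z") } })'
\n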

Step 3 - Repair the DCRs

Change the update-comments.mode configuration to repair. Run the application exactly the same as in Step 2.
As a result, a report.txt file will be created in the resources directory. It will contain a log entry for every DCR with its update status; if an update fails, it will contain the reason.

In case of failed updates, the application can be run again after adjusting dcrs.csv as needed.

" }, { "title": "GBLUS DCRs:", "pageID": "310966586", "pageLink": "/pages/viewpage.action?pageId=310966586", "content": "" }, { "title": "ICUE VRs manual load from file", "pageID": "310966588", "pageLink": "/display/GMDM/ICUE+VRs+manual+load+from+file", "content": "

This SOP describes the manual load of selected ICUE DCRs to the GBLUS environment.

Scope and issue description:

On GBLUS PROD, VRs (DCRs) are sent to IQVIA (ONEKEY) for validation using events. The process responsible for this is described on this page (OK DCR flows (GBLUS)). IQVIA receives the data based on singleton profiles.

The current flow enables only GRV and ENGAGE. ICUE was disabled from the flow and requires manual work to load its data to IQVIA, due to the high number of ICUE standalone profiles created by that system in January/February 2023.

More details related to the ICUE issue are here:

\"\"ODP_ US IQVIA DRC_VR Request for 2023.msg\"\"DCR_Counts_GBLUS_PROD.xlsx

Steps to add ICUE in the IQVIA validation process:


  1. Check that there are no loads running on the GBLUS PROD environment:
    1. Check the reltio-* topics: confirm there is no unusually high number of events per minute and no lag on the topics:
    2. \"\"
  2. Pick up the input file from the client and, after approval from Monica.Mulloy@COMPANY.com, proceed with the changes:
    1. example email and input file:
    2. First batch_ Leftover ICUE VRs (27th Feb-31st March).msg
  3. Generate the events for the VR topic
    1. - id: onekey_vr_dcrs_manual
      destination: "${env}-internal-onekeyvr-in"
    2. Reconciliation target ONEKEY_DCRS_MANUAL
    3. use the resendLastEvent operation in the publisher (generate CHANGES events)
  4. After all events are pushed to the topic, verify in AKHQ that the generated events are available on the desired topic
  5. Wait for the event aggregation window to close (24h).
  6. Check if the VRs are visible in the DCRRequests mongo collection; createTime should be within the last 24h

    \n
    { "entity.uri" : "entities/<entity_uri>" }
    \n
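The createTime check can also be scripted instead of browsing the collection manually. A minimal sketch, assuming createTime is stored as a date (adjust if it is an epoch timestamp):

\n
# Hypothetical check - counts DCRRequests created within the last 24h\nmongosh "<mongo_url>/<dbName>" --eval 'db.DCRRequests.countDocuments({ createTime: { $gte: new Date(Date.now() - 24*60*60*1000) } })'
\n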


" }, { "title": "HL DCR:", "pageID": "302705613", "pageLink": "/pages/viewpage.action?pageId=302705613", "content": "" }, { "title": "How do we answer to requests about DCRs?", "pageID": "416002490", "pageLink": "/pages/viewpage.action?pageId=416002490", "content": "" }, { "title": "EFK:", "pageID": "284806852", "pageLink": "/pages/viewpage.action?pageId=284806852", "content": "" }, { "title": "FLEX Environments - Elasticsearch Shard Limit", "pageID": "513736765", "pageLink": "/display/GMDM/FLEX+Environments+-+Elasticsearch+Shard+Limit", "content": "

Alert

Sometimes, below alert gets triggered:

\"\"


This means that Elasticsearch has allocated >80% of the allowed number of shards (default max: 1000).

Further Debugging

We can also check the shard count directly on the EFK cluster:

  1. Log into Kibana and choose "Dev Tools" from the panel on the left:

    \"\"

  2. Use one of below API calls:

    To fetch current cluster status and number of active/unassigned shards (# of active shards + # of unassigned shards = # of allocated shards):
    GET _cluster/health
    \"\"

    To check the current assigned shards limit:
    GET _cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node
    \"\"
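The same calls can be made from a shell against the Elasticsearch API instead of Kibana Dev Tools. A sketch - the host and credentials are placeholders for your cluster:

\n
# Cluster health plus active/unassigned shard counts\ncurl -sk -u <user>:<password> "https://<elasticsearch-host>:9200/_cluster/health?pretty"\n# Current max_shards_per_node setting (default if not overridden)\ncurl -sk -u <user>:<password> "https://<elasticsearch-host>:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node&pretty"
\n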


Solution: Removing Old Shards/Indices

This is the preferred solution. Old indices can be removed through Kibana.


  1. Log into Kibana and choose "Management" from the panel on the left:

    \"\"

  2. Choose "Index Management":

    \"\"

  3. Find and mark indices that can be removed. In my case, I searched for indices containing "2023" in their names:

    \"\"

  4. Click "Manage Indices" and "Delete Indices". Confirm:

    \"\"

Solution: Increasing the Limit

This is not the preferred solution, as it is not advised to go beyond the default limit of 1000 shards per node - it can lead to worse performance/stability of the Elasticsearch cluster.

TODO: extend this section when we need to increase the limit somewhere, use this article: https://www.elastic.co/guide/en/elasticsearch/reference/7.4/misc-cluster.html



" }, { "title": "Kibana: How to Restore Data from Snapshots", "pageID": "284806856", "pageLink": "/display/GMDM/Kibana%3A+How+to+Restore+Data+from+Snapshots", "content": "

NOTE: The restore time depends on the amount of data you want to restore. Before beginning the restoration, make sure the Elastic cluster has a sufficient amount of storage to hold the restored data.

To restore data from a snapshot you have to use the "Snapshot and Restore" page in Kibana. It is one of the pages available in the "Stack Management" section:

\"\"


\"\"


Select the snapshot which contains data you are interested in and click the Restore button:

\"\"


In the presented wizard please set up the following options:

Disable the option "All data streams and indices" and provide index patterns that match the index or data stream you want to restore:

\"\"


It is important to enable the option "Rename data streams and indices" and set "Capture pattern" to "(.+)" and "Replacement pattern" to "$1-restored-<idx>", where idx is <1, 2, 3, ..., n> - this is required when we restore more than one snapshot from the same data stream. Otherwise, the restore operation would override the current Elasticsearch objects and we would lose the data:

\"\"

The rest of the options on this page have to be disabled:

\"\"

Click the "Next" button to move to "Index settings" page. Leave all options disabled and go to the next page.

On the page "Review restore details" you can see the summary of the restore process settings. Validate them and click the "Restore snapshot" button to start restoring.

You can track the restoration progress in "Restore Status" section:

\"\"


When data is no longer needed, it should be deleted:

\"\"







" }, { "title": "External proxy", "pageID": "379322691", "pageLink": "/display/GMDM/External+proxy", "content": "" }, { "title": "No downtime Kong restart/upgrade", "pageID": "379322693", "pageLink": "/pages/viewpage.action?pageId=379322693", "content": "


This SOP describes how to perform "no downtime" restart. 

Resources

http://awsprodv2.COMPANY.com/ - AWS console

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_kong.yml - ansible playbook 

SOP

Remove one node instance from target groups (AWS console)

  1. Access AWS console http://awsprodv2.COMPANY.com/. Log in using COMPANY SSO
  2. Choose Account: prod-dlp-wbs-rapid (432817204314). Role: WBS-EUW1-GBICC-ALLENV-RO-SSO
    \"\"
  3. Change region to Europe(Ireland - eu-west-1)
  4. Go to EC2 → Load Balancing → Target Groups
    \"\"

    \"\"
  5. Search for target group

    \n
    -prod-gbl-mdm
    \n

    There should be 4 target groups visible: 1 for the mdmhub API and 3 for Kafka

    \"\"
  6. Remove the first instance (EUW1Z2DL113) from all 4 target groups.

    Perform the steps below for all target groups.

    To do so, open each target group, select the desired instance and choose 'Deregister'. This instance should now have 'Health status': 'Draining'.
    Next, do the same operation for the other target groups.

    Do not remove two instances from the target groups at the same time - it will cause API unavailability.
    Also make sure to remove the same instance from all target groups.


    \"\"
    \"\"

Wait for Instance to be removed from target group

  1. Wait for the target groups to be adjusted. The deregistered instance should eventually be removed from the target group
    \"\"

Additionally, you can check the Kong logs directly

First instance: 


\n
ssh ec2-user@euw1z2dl113.COMPANY.com\ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to external api
\n


Second instance: 

\n
ssh ec2-user@euw1z2dl114.COMPANY.com\ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to external api
\n
Some internal requests may still be visible, eg. metrics

Perform restart of Kong on removed instance (Ansible playbook)

Execute ansible playbook inside mdm-hub-cluster-env repository inside 'ansible' directory

For the first instance:

\n
ansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_01
\n

For the second instance:

\n
ansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_02
\n

Make sure that kong_01 is the same instance you've removed from the target group (check the ansible inventory)

\"\"
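Before re-adding the instance to the target groups, it is worth confirming that Kong came back up after the playbook run, for example directly on the restarted node:

\n
ssh ec2-user@euw1z2dl113.COMPANY.com\ncd /app/kong/\n# All Kong containers should be in the 'Up' state after the restart\ndocker-compose ps\n# Tail the logs to confirm there are no startup errors\ndocker-compose logs --tail=100
\n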

Re-add the removed instance

Perform these steps for all target groups


  1. Select target group
    \"\"
    Choose 'Register targets'
  2. Filter instances to find previously removed instance. Select it and choose 'Include as pending below'. Make sure that correct port is chosen
    \"\"
  3. Verify below request and select 'Register pending targets'
    \"\"
    Instance should be in 'Initial' state in target group
    \"\"

Wait for instance to be properly added to target group

Wait for all instances to have the 'Healthy' status instead of 'Initial'. Make sure everything works as expected (check the Kong logs)
\"\"

Perform steps 1-5 for the second Kong instance

Second instance: euw1z2dl114.COMPANY.com

Second Kong host(ansible inventory): kong_02

" }, { "title": "Full Environment Refresh - Reltio Clone", "pageID": "386803861", "pageLink": "/display/GMDM/Full+Environment+Refresh+-+Reltio+Clone", "content": "" }, { "title": "Full Environment Refresh", "pageID": "386803864", "pageLink": "/display/GMDM/Full+Environment+Refresh", "content": "

Introduction

The steps below are a record of the work done in January 2024 for the Reltio Data Clone between GBLUS PROD → STAGE and APAC PROD → STAGE.

Environment refresh consists of:

  1. disabling MDM Hub components
  2. full cleanup of existing STAGE data: Kafka and MongoDB
  3. identifying and copying cache collections from PROD to STAGE MongoDB
  4. re-enabling MDM Hub components
  5. running the Hub Reconciliation DAG


Disabling Services, Kafka Cleanup

  1. Comment out the EFK topics in fluentd configuration:

    \n
    mdm-hub-cluster-env\\apac\\nprod\\namespaces\\apac-backend\\values.yaml
    \n

    \"\"

  2. Deploy apac-backend through Jenkins, to apply the fluentd changes:
    https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_backend_apac_nprod/
    (fluentd pods in the apac-backend namespace should recreate)

  3. Block the apac-stage mdmhub deployment job in Jenkins:
    https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/

  4. Notify the monitoring/support team that the environment is disabled (in case alerts are triggered or users inquire via emails)
  5. Use Kubernetes & Helm command line tools to uninstall the mdmhub components and Kafka topics:
    1. use kubectx/kubectl to switch context to apac-nprod cluster:

      \"\"

    2. use helm to uninstall below two releases from the apac-nprod cluster (you can confirm release names by using the "$ helm list -A" command):

      \n
      $ helm uninstall mdmhub -n apac-stage\n$ helm uninstall kafka-resources-apac-stage -n apac-backend
      \n

      \"\"

    3. confirm there are no pods in the apac-stage namespace:
      \"\"

    4. list remaining Kafka topics (kubernetes kafkatopic resources) with "apac-stage" prefix:
      \"\"
      manually remove all the remaining "apac-stage" prefixed topics (see the sketch below). Note that it is expected that some topics remain - some of them have been created by Kafka Streams, for example.
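A sketch of that cleanup with kubectl - the kafkatopic resources live in the apac-backend namespace, and the resource name may differ slightly depending on the installed CRDs; review the list before deleting anything:

\n
# List the remaining kafkatopic resources with the apac-stage prefix\nkubectl get kafkatopics -n apac-backend | grep apac-stage\n# After reviewing the list, remove them one by one\nkubectl get kafkatopics -n apac-backend -o name | grep apac-stage | xargs -r -n1 kubectl delete -n apac-backend
\n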

MongoDB Cleanup

  1. Log into the APAC NPROD MongoDB through Studio 3T.
  2. Clear all the collections in the apac-stage database.
    \"\"
    Exceptions:
    • "batchInstance" collection
    • "quartz-" prefixed collections
    • "shedLock" collection

  3. Wait until MongoDB cleans all these collections (could take a few hours):
    \"\"

  4. Log into the APAC PROD MongoDB through Studio 3T. You want to have both connections in the same session.
  5. Copy below collections from APAC PROD (Ctrl+C):
    • keyIdRegistry
    • relationCache
    • sequenceCounters

  6. Right click APAC NPROD database "apac-stage" and choose "Paste Collections"
    \"\"

  7. Dialog will appear - use below options for each collection:
    • Collections Copy Mode: Append to existing target collection
    • Documents Copy Mode: Overwrite documents with same _id
    • Copy indices from the source collection: uncheck
      \"\"

  8. Wait until all the collections are copied.
    \"\"

Snowflake Cleanup

  1. Cleanup the base tables:

    \n
    TRUNCATE TABLE CUSTOMER.ENTITIES;\nTRUNCATE TABLE CUSTOMER.RELATIONS;\nTRUNCATE TABLE CUSTOMER.LOV_DATA;\nTRUNCATE TABLE CUSTOMER.MATCHES;\nTRUNCATE TABLE CUSTOMER.MERGES;\nTRUNCATE TABLE CUSTOMER.HIST_INACTIVE_ENTITIES;
    \n
  2. Run the full materialization jobs:

    \n
    CALL CUSTOMER.MATERIALIZE_FULL_ALL('M', 'CUSTOMER');\nCALL CUSTOMER.HI_MATERIALIZE_FULL_ALL('CUSTOMER');
    \n
  3. Check for any tables that haven't been cleaned properly:

    \n
    SELECT *\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;
    \n
  4. Run the materialization for those tables specifically, or run the TRUNCATE statements generated by the query below:

    \n
    SELECT 'TRUNCATE TABLE ' || TABLE_SCHEMA || '.' || TABLE_NAME || ';'\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;
    \n

Re-Enabling Hub

  1. Get a confirmation that the Reltio data cloning process has finished.
  2. Re-enable the mdmhub apac-stage deployment job and perform a deployment of an adequate version.
  3. Uncomment the previously commented EFK transaction topic list (see: Disabling Services, Kafka Cleanup, step 1) and deploy apac-backend. Fluentd pods in the apac-backend namespace should recreate.
  4. Wait for both deployments to finish (should be performed one after another).
  5. Test the MDM Hub API - try sending a couple of GET requests to fetch some entities that exist in Reltio. Confirm that the result is correct and the requests are visible in Kibana (dashboard APAC-STAGE API Calls):
    \"\"

  6. (2025-05-19 Piotr: we no longer need to do this - Matches Enricher now deploys with minimum 1 pod in every environment) Run below command in your local Kafka client environment.

    \n
    kafka-console-consumer.sh --bootstrap-server kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 --group apac-stage-matches-enricher --topic apac-stage-internal-reltio-matches-events --consumer.config client.sasl.properties
    \n

    This needs to be done to create the consumergroup, so that Keda can scale the deployment in the future.

Running The Hub Reconciliation

  1. After confirming that Hub is up and working correctly, navigate to APAC NPROD Airflow:
    https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/home

  2. Trigger the hub_reconciliation_v2_apac_stage DAG:
    \"\"
    \"\"

  3. To minimize the chances of overfilling the Kafka storage, set retention of reconciliation metrics topics to an hour:
    1. Navigate to APAC NPROD AKHQ:
      https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/

    2. Find below topics and navigate to their "Configs" tabs:
    3. For each topic, find the config "retention.ms" (do not mistake it with "delete.retention.ms", which is responsible for compaction) and set it to 3600000. Apply changes.
      \"\"

  4. Monitor the DAG, event processing and Kafka/Elasticsearch storage.
  5. After the DAG finishes, disable reconciliation jobs (if reconciliations start uncontrollably before the data is fully restored, it will unnecessarily increase the workload):
    1. Manually disable the hub_reconciliation_v2_apac_stage DAG: https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/dags/hub_reconciliation_v2_apac_stage/grid
    2. Manually disable the reconciliation_snowflake_apac_stage DAG: https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/dags/reconciliation_snowflake_apac_stage/grid
  6. After all reconciliation events are processed, the environment is ready to use. Compare entity/relation counts between Reltio-MongoDB-Snowflake to confirm that everything went well.
  7. Re-enable reconciliation jobs from 5.





" }, { "title": "Full Environment Refresh - Legacy (Docker Environments)", "pageID": "164470082", "pageLink": "/pages/viewpage.action?pageId=164470082", "content": "

Steps to take when a Hub environment needs to be cleaned up or refreshed.

1. Preparation

$ ./consumer_groups_sasl.sh --describe --group <group_name> | sort

Run this for every consumer group in this environment. It will list the currently connected consumers.

If there are external consumers connected, they will prevent deletion of the topics they're connected to. Contact the people responsible for those consumers to disconnect them.



2. Stop GW/Hub components: subscriber, publisher, manager, batch_channel


$ docker stop <container name>


3. Double-check that consumer groups (internal and external) have been disconnected


4. Delete all topics:

a) Preparation:

b) Deleting the topics:

          (...) continue for all topics
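The exact deletion commands were environment-specific and are not recorded here. A purely hypothetical sketch using the topics.sh wrapper referenced in the next step, assuming it passes flags through to kafka-topics.sh (the topic name is an example):

\n
# Hypothetical example - repeat for every topic in the environment\n$ ./topics.sh --delete --topic dev-out-full-example-all
\n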

5. Check whether the topics are deleted, both on disk and using $ ./topics.sh --list

6. Recreate the topics by launching the Ansible playbook with the parameter create_or_update: True set for the desired topics in topics.yml

\"\"

7. Cleanup MongoDB:


8. After confirming everything is ready (in case of environment refresh there has to be a notification from Reltio that it's ready) restart GW and Hub components

9. Check component logs to confirm they started up and connected correctly.


" }, { "title": "Hub Application:", "pageID": "302706338", "pageLink": "/pages/viewpage.action?pageId=302706338", "content": "" }, { "title": "Batch Channel: Importing MAPP's Extract", "pageID": "164470063", "pageLink": "/display/GMDM/Batch+Channel%3A+Importing+MAPP%27s+Extract", "content": "

To import MAPP's extract you have to:

  1. Have the original extract (eg. original.csv) which was uploaded to the Teams channel,
  2. Open it in Excel and save as "CSV (Comma delimited) (*.csv)",
  3. Run the dos2unix tool on the file.
  4. Do steps 2 and 3 on the extract file (eg. changes.csv) received from MAPP's team,
  5. Compare the original file to the file with changes and select only the lines which were changed in the second file: ( head -1 changes.csv && diff original.csv changes.csv | grep '^>' | sed 's/^> //' ) > result.csv
  6. Divide the result file into smaller ones by running the splitFile.sh script: ./splitFile.sh result.csv. The script will generate a set of files whose names end with _{idx}.{extension}, eg.: result_00.csv, result_01.csv, result_02.csv etc.
  7. Upload the resulting set of files to the s3 location: s3://pfe-baiaes-eu-w1-project/mdm/inbound/mapp/ (see the example below). This action will trigger the batch-channel component, which will start loading the changes to MDM.
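The upload from step 7 can be done with the AWS CLI, assuming your credentials for the bucket are configured, e.g.:

\n
# Upload every generated part to the MAPP inbound location\nfor f in result_*.csv; do aws s3 cp "$f" s3://pfe-baiaes-eu-w1-project/mdm/inbound/mapp/; done
\n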


\"\"splitFile.sh

" }, { "title": "Callback Service: How to Find Events Stuck in Partial State", "pageID": "273681936", "pageLink": "/display/GMDM/Callback+Service%3A+How+to+Find+Events+Stuck+in+Partial+State", "content": "

What is partial state?

When an event gets processed by Callback Service and a change is made at the precallback stage, the event is not sent further to Event Publisher. It is expected that within a few seconds another event will come, signaling the change done by the precallback logic - this one gets passed to Publisher and downstream clients/Snowflake, as long as precallback detects no further need for a change.

Sometimes the second event never comes - this is what we call a partial state. It means that the update event will not reach Snowflake and the downstream clients. The PartialCounter functionality of Callback Service was implemented to monitor such behaviour.

How to identify that an event is stuck in partial state?

PartialCounter counts events which have not been passed down to Event Publisher (identified by Reltio URI) and exports this count as a Prometheus (Actuator) metric. The Prometheus alert "callback_service_partial_stuck_24h" notifies us that an event has been stuck for more than 24 hours.

How to find events stuck in partial state?

Use below command to fetch the list of currently stuck events as JSON array (example for emea-dev). You will have to authorize using mdm_test_user or mdm_admin:

\n
# curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/precallback/partials
\n

\"\"


More details can be found in Swagger Documentation: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/

What to do?

Events identified as stuck in partial state should be reconciled.

" }, { "title": "Integration Test - how to run tests locally from your computer to target environment", "pageID": "337839648", "pageLink": "/display/GMDM/Integration+Test+-+how+to+run+tests+locally+from+your+computer+to+target+environment", "content": "

Steps:

  1. First, choose the environment and go to the Jenkins integration tests directory:
  2. https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/
  3. based on APAC DEV:
  4. go to https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/
  5. choose the latest RUN and click Workspace on the left
  6. \"\"
  7. Click on /home/jenkins workspace link
  8. \"\"
  9. Go to /code/mdm-integretion-tests/src/test/resources/ 
  10. \"\"
  11. Download 3 files
    1. citrus-application.properties
    2. kafka_jaas.conf
    3. kafka_truststore.jks
  12. Edit 
    1. citrus-application.properties
    2. change the local K8s URLs to real URLs and the local PATH. Leave the other variables as is. 
    3. in that case, use the KeePass that contains all URLs: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/credentials.kdbx


Example code that is adjusted to APAC DEV

API URLs + local PATH to certs

This is just the example from APAC DEV that contains the C:\\Users\\mmor\\workspace\\SCM\\mdm-hub-inbound-services\\ path - replace this with your own code location.

\n
citrus.spring.java.config=com.COMPANY.mdm.tests.config.SpringConfiguration\n\njava.security.auth.login.config=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_jaas.conf\n\nreltio.oauth.url=https://auth.reltio.com/\nreltio.oauth.basic=secret\nreltio.url=https://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgS\nreltio.username=svc-pfe-mdmhub\nreltio.password=secret\nreltio.apiKey=secret\nreltio.apiSecret=secret\n\nmongo.dbUrl=mongodb://admin:secret@mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017/reltio_apac-dev?authMechanism=SCRAM-SHA-256&authSource=admin\nmongo.url=mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017\nmongo.dbName=reltio_apac-dev\nmongo.username=mdmgw\nmongo.password=secret\n\ngateway.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-dev\ngateway.username=mdm_test_user\ngateway.apiKey=secret\n\nbatchService.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-apac-dev\nbatchService.username=mdm_test_user\nbatchService.apiKey=secret\nbatchService.limitedUsername=mdm_test_user_limited\nbatchService.limitedApiKey=secret\n\nmapchannel.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/dev-map-api\nmapchannel.username=mdm_test_user\nmapchannel.apiKey=secret\n\napiRouter.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-apac-dev\napiRouter.dcrReltioUserApiKey=secret\napiRouter.dcrOneKeyUserApiKey=secret\napiRouter.intTestUserApiKey=secret\napiRouter.dcrReltioUser=mdm_dcr2_test_reltio_user\napiRouter.dcrOneKeyUser=mdm_dcr2_test_onekey_user\napiRouter.intTestUser=mdm_test_user\n\nadminService.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-dev\nadminService.intTestUserApiKey=secret\nadminService.intTestUser=mdm_test_user\n\ndeg.url=https://hcp-gateway-dev.eu.cloudhub.io/v1\ndeg.oAuth2Service=https://hcp-gateway-dev.eu.cloudhub.io/\ndeg.apiKey=secret\ndeg.apiSecret=secret\n\nkafka.brokers=kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094\nkafka.group=int_test_dev\nkafka.topic=apac-dev-out-simple-all-int-tests-all\nkafka.security.protocol=SASL_SSL\nkafka.sasl.mechanism=SCRAM-SHA-512\nkafka.ssl.truststore.location=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_truststore.jks\nkafka.ssl.truststore.password=secret\nkafka.receive.timeout=60000\nkafka.purgeEndpoints.timeout=100000\n...\n...\n...
\n



  1. Now go to your local code checkout - mdm-hub-inbound-services\\mdm-integretion-tests
  2. Copy 3 files to the mdm-integretion-tests/src/test/resources
  3. \"\"
  4. Select the test and click RUN
  5. \"\"
  6. END - the result: You are running Jenkins integration tests from your local computer on target DEV environment. 
  7. Now you can check logs locally and repeat. 







" }, { "title": "Manager: Reload Entity - Fix COMPANYAddressID Using Reload Action", "pageID": "229180577", "pageLink": "/display/GMDM/Manager%3A+Reload+Entity+-+Fix+COMPANYAddressID+Using+Reload+Action", "content": "
  1. Before starting, check which DQ rules have the -reload action on the list. Currently these are SourceMatchCategory and COMPANYAddressId
    1. check here - - example dq rule
    2. update with -reload operation to reload more DQ rules
  2. Generate events using the script :
    1.  script
    2. or
    3. script - fix SourceMatchCategory without ONEKEY
    4. the script gets all ACTIVE entities with Addresses
      1. that have missing COMPANYAddressId
      2. whose COMPANYAddressID is lower than the correct value for each env: emea 5000000000, amer 6000000000, apac 7000000000
    5. The script generates events, for example:
      1. entities/lwBrc9K|{"targetEntity":{"entityURI":"entities/lwBrc9K","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}
        entities/1350l3D6|{"targetEntity":{"entityURI":"entities/1350l3D6","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}
        entities/1350kZNI|{"targetEntity":{"entityURI":"entities/1350kZNI","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}
        entities/cPSKBB9|{"targetEntity":{"entityURI":"entities/cPSKBB9","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}
  3. Make a fix for COMPANYAddressID that is lower than the correct value for each env
    1. Go to the keyIdRegistry Mongo collection
    2. find all entries that have generatedId lower than emea 5000000000  amer 6000000000  apac 7000000000
    3. increase the generatedId by adding the correct value for the environment, using the script - script
  4. Get the file and push it to the <env>-internal-async-all-reload-entity topic
    1. ./start_sasl_producer.sh <env>-internal-async-all-reload-entity
    2. or using the input file  
    3. ./start_sasl_producer.sh <env>-internal-async-all-reload-entity < reload_dev_emea_pack_entities.txt (file that contains each json generated by the Mongo script, each row in new line)



How to Run a script on docker:

example emea DEV:

go to - svc-mdmnpr@euw1z2dl111
docker exec -it mongo_mongo_1 bash
cd  /data/configdb
create script - touch reload_entities_fix_COMPANYaddressid_hub.js
edit header:
db = db.getSiblingDB("<DB>")
db.auth("mdm_hub", "<PASS>")
RUN: nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_dev reload_entities_fix_COMPANYaddressid_hub.js &

OR
nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_dev reload_entities_fix_sourcematch_hub_DEV.js > smc_DEV_FIX.out 2>&1 &
nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_qa reload_entities_fix_sourcematch_hub_QA.js > smc_QA_FIX.out 2>&1 &
nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_stage reload_entities_fix_sourcematch_hub_STAGE.js > smc_STAGE_FIX.out 2>&1 &

" }, { "title": "Manager: Resubmitting Failed Records", "pageID": "164470200", "pageLink": "/display/GMDM/Manager%3A+Resubmitting+Failed+Records", "content": "

There is a new API in Manager for getting/resubmitting/removing failed records from batches.

1. Get failed records method - it returns a list of errors based on the provided criteria

a. Request

i. List of FieldFilter objects

ii. Example:

[
        {
            "field" : "HubAsyncBatchServiceBatchName",
            "operation" : "Equals",
            "value" : "testBatchBundle"
        }
    
]

b. Response

i. List of Error objects

ii. Example:

[
    {
        "id" : "5fa93377e720a55f0bb68c99",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "0+3j45V7S1K1GT2i6c3Mqw",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:b09b6085-28dc-451d-85b6-fe3ce2079446\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    },
    {
        "id" : "5fa93378e720a55f0bb68ca6",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "0+3j45V7S1K1GT2i6c3Mqw",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:25bfc672-9ba1-44a5-b3c1-d657de701d76\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    },
    {
        "id" : "5fa93377e720a55f0bb68c9a",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "0+3j45V7S1K1GT2i6c3Mqw",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:60067d46-07a6-4902-b9e8-1bf2acbc8a6e\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    },
    {
        "id" : "5fa93377e720a55f0bb68c9b",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "0+3j45V7S1K1GT2i6c3Mqw",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:e8d05d96-7aa3-4059-895e-ce20550d7ead\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    },
    {
        "id" : "5fa96ba300061d51e822854a",
        "batchName" : "testBatchBundle",
        "objectType" : "configuration/entityTypes/HCP",
        "batchInstanceId" : "iN2LB3TiT3+Sd5dYemDGHg",
        "key" : "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:973411ec-33d4-477e-a6ae-aca5a0875abb\\"\\r\\n}",
        "errorClass" : "javax.ws.rs.ClientErrorException",
        "errorMessage" : "HTTP 409 Conflict",
        "resubmitted" : false,
        "deleted" : false
    }
]


2. Resubmit failed records - it takes a list of FieldFilter objects and returns the list of errors that were resubmitted; if a record was correctly resubmitted, its resubmitted flag is set to true

a.  Request

i. List of FieldFilter objects

b. Response

i. List of Error objects

3. Remove failed records - it takes a list of FieldFilter objects with the criteria for removing error objects and returns the list of errors that were deleted; if a record was correctly deleted, its deleted flag is set to true

a.  Request

i. List of FieldFilter objects

b. Response

i. List of Error objects
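The page does not record the exact endpoint paths. A purely hypothetical sketch of calling the resubmit method - the host, path and authentication header are placeholders; check the Manager API/Swagger documentation for the real ones:

\n
# Hypothetical sketch - endpoint path, host and auth are placeholders\ncurl -X POST "https://<manager-host>/<resubmit-failed-records-path>" -H "Content-Type: application/json" -H "apikey: <apikey>" -d '[{ "field" : "HubAsyncBatchServiceBatchName", "operation" : "Equals", "value" : "testBatchBundle" }]'
\n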

" }, { "title": "Issues diagnosis", "pageID": "438905271", "pageLink": "/display/GMDM/Issues+diagnosis", "content": "" }, { "title": "API issues", "pageID": "438905273", "pageLink": "/display/GMDM/API+issues", "content": "

Symptoms


Confirmation

To confirm that a problem with the API is really occurring, you have to invoke an operation exposed by the HTTP interface. To do this you can use Postman or another tool that can run HTTP requests. Below you can find a few examples that describe how to check the API in components that expose it:
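For example, a quick check of the gateway API with curl (the URL path and apikey are environment-specific; the entity URI is a placeholder):

\n
# Example check against EMEA dev - a 200 response with entity JSON means the API path is healthy\ncurl -v https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/<entity_uri> -H "apikey: <apikey>"
\n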


Finding the reasons

The diagram below presents the HTTP request processing flow and the components involved:

\"\"


" }, { "title": "Kafka:", "pageID": "164470059", "pageLink": "/pages/viewpage.action?pageId=164470059", "content": "" }, { "title": "Client Configuration", "pageID": "243862610", "pageLink": "/display/GMDM/Client+Configuration", "content": "


      1. Installation

To install the Kafka client, binary version 2.8.1 should be downloaded and installed from

https://kafka.apache.org/downloads


      2. The email from the MDMHUB Team

In the email received from the MDMHUB support team you can find connection parameters like server address, topic name, group name, and the following files:


      3. Example command to test client and configuration

To connect with Kafka using the command line client save delivered files on your disc and run the following command:

export KAFKA_OPTS=-Djava.security.auth.login.config={ ●●●●●●●●●●●● Kafka_client_jaas.conf }

kafka-console-consumer.sh --bootstrap-server { kafka server } --group { group } --topic { topic_name } --consumer.config { consumer config file eg. client.sasl.properties}


For example for amer dev:

●●●●●●●●●●● in provided file: kafka_client_jaas.conf

Kafka server: kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094

Group: dev-mule

Topic: dev-out-full-pforcerx-grv-all

Consumer config is in provided file: client.sasl.properties

export KAFKA_OPTS=-Djava.security.auth.login.config=kafka_client_jaas.conf

kafka-console-consumer.sh --bootstrap-server kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 --group dev-mule --topic dev-out-full-pforcerx-grv-all --consumer.config client.sasl.properties

" }, { "title": "Client Configuration in k8s", "pageID": "284806978", "pageLink": "/display/GMDM/Client+Configuration+in+k8s", "content": "

Each of the k8s clusters has a kafka-client pod installed. To find this pod, list all pods deployed in the *-backend namespace and select the pod whose name starts with kafka-client:

\n
kubectl get pods --namespace emea-backend  | grep kafka-client
\n


To run commands on this pod you have to remember its name and use it in the "kubectl exec" command:

Using kubectl exec with kafka client
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- <command>
\n


As a <command> you can use any of the standard Kafka client scripts, eg. kafka-consumer-groups.sh, or one of the wrapper scripts which simplify the configuration of the standard scripts (broker and authentication configuration). These are the following scripts:


The kafka-client pod has another Kafka tool named kcat. To use this tool you have to run commands on the kafka-kcat container using the wrapper script kcat.sh:

Running kcat.sh on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -c kafka-kcat -- kcat.sh
\n



NOTE: Remember that all wrapper scripts work with admin permissions.


Examples

Describe the current offsets of a group

Describe group dev_grv_pforcerx on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- consumer_groups.sh --describe --group dev_grv_pforcerx
\n


Reset offset of group to earliest

Reset offset to earliest for group group1 and topic gbl-dev-internal-gw-efk-transactions on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- reset_offsets.sh --group group1 --to-earliest gbl-dev-internal-gw-efk-transactions
\n


Consume events from the beginning of a topic. It will produce output where each line has the following format: <message key>|<message body>

Read topic gbl-dev-internal-gw-efk-transactions from beginning on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- start_consumer.sh gbl-dev-internal-gw-efk-transactions --from-beginning
\n


Send messages defined in a text file to a Kafka topic. Each message in the file has to have the following format: <message key>|<message body>

Send all messages from file file_with_messages.csv to topic gbl-dev-internal-gw-efk-transactions
\n
kubectl exec -i --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- start_producer.sh gbl-dev-internal-gw-efk-transactions < file_with_messages.csv
\n


Delete consumer group on topic

Delete consumer group test on topic gbl-dev-internal-gw-efk-transactions emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- consumer_groups.sh --delete-offsets --group test gbl-dev-internal-gw-efk-transactions
\n


List topics and their partitions using kcat

List topics info on emea-nprod cluster
\n
kubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -c kafka-kcat -- kcat.sh -L
\n



" }, { "title": "How to Add a New Consumer Group", "pageID": "164470080", "pageLink": "/display/GMDM/How+to+Add+a+New+Consumer+Group", "content": "

These instructions demonstrate how to add an additional consumer group to an existing topic.


  1. Open file "topics.yml" located under mdm-reltio-handler-env\\inventory\\<environment_name>\\group_vars\\kafka and find the topic to be updated. In this example new consumer group "flex_dev_prj2" was added to topic "dev-out-full-flex-all".

\"\"

   2. Make sure the parameter "create_or_update" is set to True for the desired topic:

\"\"

   3.  Additionally, double-check that the parameter "install_only_topics" in the "all.yml" file is set to True:

\"\"

    4. Save the files after making the changes. Run ansible to update the configuration using the following command:  ansible-playbook install_hub_broker.yml -i inventory/<environment_name>/inventory --limit broker1 --vault-password-file=~/vault-password-file

\"\"

   5. Double-check ansible output to make sure changes have been implemented correctly.

   6. Change the "create_or_update" parameter in "topics.yml" back to False.

   7. Save the file and upload the new configuration to git. 






" }, { "title": "How to Generate JKS Keystore and Truststore", "pageID": "164470062", "pageLink": "/display/GMDM/How+to+Generate+JKS+Keystore+and+Truststore", "content": "

This instruction is based on the current GBL PROD Kafka keystore.jks and truststore.jks generation. 


  1. Create a certificate pair using keytool genkeypair command 
    1. keytool -genkeypair -alias kafka.mdm-gateway.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.mdm-gateway.COMPANY.com, O=COMPANY, L=mdm_hub, C=US"  
    2. set the security password, set the same ●●●●●●●●●●●● the key passphrase
  2. Now create a certificate signing request ( csr ) which has to be passed on to our external / third party CA ( Certificate Authority ).
    1. keytool -certreq -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.csr -keystore server.keystore.jks 
  3. Send the csr file through the Request Manager:
    1. Log in to the BT On Demand
    2. Go to Request Manager.
    3. Click "Continue"
    4. Search for " Digital Certificates"
    5. Select the " Digital Certificates" Application and click "Continue"
    6. Click "Checkout"
    7. Select "COMPANY SSL Certificate - Internal Only" and fill:
      1. Copy CSR file
      2. fill SAN e.g from the GBL PROD Kafka: 

      3. fill email address

    8. select "No" for additional SSL Cert request, 
    9. Continue
    10. Send the CSR request.
  4. When you receive the signed certificate, verify it (the certificate contents can be inspected with keytool, see the example after this list):
    1. Check the Subject: CN and O should be filled just like in 1.a.
    2. Check the SAN: there should be the list of hosts from 3.g.ii.
  5. If the certificate is correct CONTINUE:
  6. Now we need to import these certificates into server.keystore.jks keystore. Import the intermediate certificate first --> then the root certificate --> and then the signed cert.
    1. keytool -importcert -alias inter -file PBACA-G2.cer -keystore server.keystore.jks
    2. keytool -importcert -alias root -file RootCA-G2.cer -keystore server.keystore.jks
    3. keytool -importcert -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.cer -keystore server.keystore.jks
  7. After importing all three certificates you should see : "Certificate reply was installed in keystore" message.
  8. Now list the keystore and check if all the certificates are imported successfully.
    1. keytool -list -keystore server.keystore.jks
    2. Your keystore contains 3 entries
    3. For debugging start with "-v" parameter
  9. Let's create a truststore now. Set the security ●●●●●●●●●● different than the keystore's
    1. keytool -import -file PBACA-G2.cer -alias inter -keystore server.truststore.jks
    2. keytool -import -file RootCA-G2.cer -alias root -keystore server.truststore.jks
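For the verification in step 4, the received certificate can be inspected from the command line before importing, e.g.:

\n
# Prints the Subject, SAN and validity of the signed certificate\nkeytool -printcert -v -file kafka.mdm-gateway.COMPANY.com.cer
\n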




COMPANY Certificates:

\"\"PBACA-G2.cer \"\"RootCA-G2.cer


" }, { "title": "Reset Consumergroup Offset", "pageID": "243862614", "pageLink": "/display/GMDM/Reset+Consumergroup+Offset", "content": "

To reset an offset on a Kafka topic you need to have the command line client configured. The tool that performs this action is kafka-consumer-groups.sh. You have to specify a few parameters which determine where you want to reset the offset:

and specify the offset value by providing one of the following parameters:

1. --shift-by

Reset offsets shifting current offset by provided number which can be negative or positive:

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --shift-by { number from formula } --topic { topic } --execute


2. --to-datetime

Switch which can be used to reset offsets to a datetime. The date should be in the format 'YYYY-MM-DDTHH:mm:SS.sss'

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --to-datetime 2022-02-02T00:00:00.000Z --topic { topic } --execute


3. --to-earliest

Switch which can be used to reset the offsets to the earliest (oldest) offset which is available in the topic.

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --to-earliest --topic { topic } --execute


4. --to-latest

Switch which can be used to reset the offsets to the latest (the most recent) offset which is available in the topic.

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --to-latest --topic { topic } --execute


Example

Let's assume that you want your consumer to have 10000 messages to read and the topic has 10 partitions. The first step is moving the current offset to the latest, to make sure that there are no messages to read on the topic:

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --to-latest --topic { topic } --execute

Then calculate the offset you need to shift by to achieve the requested lag, using the following formula:

-1 * desired_lag / number_of_partitions

In our example the result will be: -1 * 10000 / 10 = -1000. Use this value in the command below:

kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config { client.sasl.properties } --reset-offsets --shift-by -1000 --topic { topic } --execute



" }, { "title": "Kong gateway", "pageID": "462065054", "pageLink": "/display/GMDM/Kong+gateway", "content": "" }, { "title": "Kong gateway migration", "pageID": "462065057", "pageLink": "/display/GMDM/Kong+gateway+migration", "content": "

Installation procedure

  1. Deploy crds

    \n
# Download package with crds to current directory\ntar -xzf crds_to_deploy.tar.gz\ncd crds_to_deploy/\nbase=$(pwd)
    \n


    1. Backup old crds

      \n
      # Switch to proper k8s context\nkubectx atp-mdmhub-nprod-apac\n\n# Get all crds from cluster and saves them into file ${crd_name}_${env}.yaml\n# Args:\n# $1 = env\ncd $base\nmkdir old_apac_nprod\ncd old_apac_nprod\nget_crds.sh apac_nprod\n\n
      \n


    2. create new crds

      \n
      cd $base/new/splitted/\n# create new crds\nfor i in $(ls); do echo $i; kubectl create -f $i ; done\n# apply new crds\nfor i in $(ls); do echo $i; kubectl apply -f $i ; done\n# replace crds that were not properly installed \nfor i in   kic-crds.yaml01 kic-crds.yaml03 kic-crds.yaml05 kic-crds.yaml07 kic-crds.yaml10 kic-crds.yaml11; do echo $i ; kubectl replace -f $i; done
      \n


    3. Apply new version of gatewayconfigurations 

      \n
      cd $base/new\nkubectl replace -f gatewayconfiguration-new.yaml
      \n


    4. Apply old version of kongingress

      \n
      cd $base/old\nkubectl replace -f kongingresses.configuration.konghq.com.yaml
      \n


      # Performing tests is advised to check if everything is working
  2. Deploy operators with version that have kong-gateway-operator(4.32.0 or newer)
    # Performing tests is advised to check if everything is working
  3. Merge configuration
    http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1967/overview

  4. Deploy backend (4.33.0-project-boldmove-SNAPSHOT or newer)
    # Performing tests is advised to check if everything is working

  5. Deploy mdmhub components (4.33.0-project-boldmove-SNAPSHOT or newer)
    # Performing tests is advised to check if everything is working

Tests

  1. Checking all ingresses
    \n
    # Change /etc/hosts if dns's are not yet changed. To obtain all hosts that should be modified in /etc/hosts: \n# Switch to correct k8s context\n# k get ingresses -o custom-columns=host0:.spec.rules[0].host -A | tail -n +2 | sort | uniq | tr '\\n' ' '\n# To get dataplane svc: \n# k get svc -n kong -l gateway-operator.konghq.com/dataplane-service-type=ingress\nendpoints=$(kubectl get  ingress -A  -o custom-columns="NAME:.metadata.name,HOST:.spec.rules[0].host,PATH:.spec.rules[0].http.paths[0].path" | tail -n +2 | awk '{print "https://"$2":443"$3}')\nwhile IFS= read -r line; do echo -e "\\n\\n---- $line ----"; curl -k $line; done <<< $endpoints
    \n
  2. Checking plugins 
    \n
    export apikey="xxxxxxxxx"\nexport reltio_authorization="yyyyyyyyy"\nexport consul_token="zzzzzzzzzzz"\n\n\nkey-auth:\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev -H "apikey: $apikey"\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/2c9cf5a5 -H 'apikey: $apikey'\n\nmdm-external-oauth:\n  curl --location --request POST 'https://devfederate.COMPANY.com/as/token.oauth2?grant_type=client_credentials' --header 'Content-Type: application/x-www-form-urlencoded' --header 'Origin: http://10.192.71.136:8000' --header "Authorization: Basic $reltio_authorization" | jq .access_token\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-dev/entities/2c9cf5a5 --header 'Authorization: Bearer access_token_from_previous_command'\n\ncorrelation-id:\n  curl -v https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/2c9cf5a5 -H "apikey: $apikey" 2>&1 | grep hub-correlation-id  \n\nbackend-auth:\n  kibana-backend-auth:\n   # Web browser \n    https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/\n\nsession:\n   # Web browser \n   # Open debugger console in web browser and check if kong cookies are set\n\npre-function:\n  k logs -n emea-backend -l app=consul -f --tail=0\n  k exec -n airflow airflow-scheduler-0 -- curl -k http://http-mdmhub-kong-kong-proxy.kong.svc.cluster.local:80/v1/kv/dev?token=$consul_token\n\nopentelemetry:\n  curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/testtest -H "apikey: $apikey"\n  +\n  # Web browser\n  https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/apm/services/kong/overview?comparisonEnabled=true&environment=ENVIRONMENT_ALL&kuery=&latencyAggregationType=avg&offset=1d&rangeFrom=now-15h&rangeTo=now&serviceGroup=&transactionType=request\n\nprometheus:\n  k exec -it dataplane-kong-knkcn-bjrc7-75bb85fc4c-2msfv -- /bin/bash\n  curl localhost:8100/metrics\n\n
    \n
  3. Check logs
    1. Gateway operator
    2. Kong operator
    3. Old kong pod - proxy and ingress controller
    4. New kong dataplane
    5. New kong controlPlane
  4. Status of new kong objects: 
    1. Dataplane
    2. Controlplane
    3. Gateway
      \n
      k get Gateway,dataplane,controlplane -n kong
      \n
  5. Check services in old and new kong 
    1. Old kong
      \n
      services=$(k exec -n kong mdmhub-kong-kong-f548788cd-27ltl -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .
      \n
    2. New kong
      \n
       services=$(k exec -n kong dataplane-kong-knkcn-bjrc7-5c9f596ff9-t94lf -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .
      \n



Reference

Kong operator configuration

https://github.com/Kong/kong-operator/blob/main/deploy/crds/charts_v1alpha1_kong_cr.yaml

Kong gateway operator crd's reference

https://docs.konghq.com/gateway-operator/latest/reference/custom-resources/#dataplanedeploymentoptions

\"\"get_crds.sh\"\"crds_to_deploy.tar.gz

" }, { "title": "MongoDB:", "pageID": "164470061", "pageLink": "/pages/viewpage.action?pageId=164470061", "content": "" }, { "title": "Mongo-SOP-001: Mongo Scripts", "pageID": "164470056", "pageLink": "/display/GMDM/Mongo-SOP-001%3A+Mongo+Scripts", "content": "
\n
hub_errors\n db.hub_errors.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.hub_errors.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.hub_errors.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.hub_errors.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_errors\n db.gateway_errors.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.gateway_errors.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.gateway_errors.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.gateway_errors.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_transactions\n db.gateway_transactions.createIndex({transactionTS: -1}, {background: true, name: "idx_transactionTS_-1"});\n db.gateway_transactions.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n db.gateway_transactions.createIndex({requestId: -1}, {background: true, name: "idx_requestId_-1"});\n db.gateway_transactions.createIndex({username: -1}, {background: true, name: "idx_username_-1"});\n\n\nentityHistory\n db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n\n\nentityRelations\n db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityRelations.createIndex({entityType: -1}, {background: true, name: "idx_relationType"});\n db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});
\n
\n
var start = new Date().getTime();\n\nvar result = db.getCollection("entityRelations").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t        "status" : "ACTIVE"\n\t\t\t}\n\t\t},\n\n//\t\t// Stage 2\n//\t\t{\n//\t\t\t$limit: 1000\n//\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$lookup: // Equality Match\n\t\t\t{\n\t\t\t    from: "entityHistory",\n\t\t\t    localField: "relation.endObject.objectURI",\n\t\t\t    foreignField: "_id",\n\t\t\t    as: "matched_entity"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t        "$or" : [\n\t\t\t            {\n\t\t\t                "matched_entity.status" : "INACTIVE"\n\t\t\t            }, \n\t\t\t            {\n\t\t\t                "matched_entity.status" : "LOST_MERGE"\n\t\t\t            },\n\t\t\t            {\n\t\t\t                "matched_entity.status" : "DELETED"\n\t\t\t            }            \n\t\t\t        ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t\t\t\t  _id:"$matched_entity.status", \n\t\t\t\t\t\t  count:{$sum:1}, \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\nprintjson(result.toArray())\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")
\n
\n
print("START")\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityHistory").aggregate(\n   // Pipeline\n   [\n      // Stage 1\n      {\n         $match: {\n                 "status" : "LOST_MERGE",\n                 "$and" : [\n                     {\n                         "$or" : [\n                             {\n                                 "mdmSource" : "RELTIO"\n                             },\n                             {\n                                 "mdmSource" : {\n                                     "$exists" : false\n                                 }\n                             }\n                         ]\n                     }\n                 ]\n         }\n      },\n\n      // Stage 2\n      {\n         $graphLookup: {\n             "from" : "entityHistory",\n             "startWith" : "$_id",\n             "connectFromField" : "parentEntityId",\n             "connectToField" : "_id",\n             "as" : "master",\n             "maxDepth" : 10.0,\n             "depthField" : "depthField"\n         }\n      },\n\n      // Stage 3\n      {\n         $unwind: {\n             "path" : "$master",\n             "includeArrayIndex" : "arrayIndex",\n             "preserveNullAndEmptyArrays" : false\n         }\n      },\n\n      // Stage 4\n      {\n         $match: {\n             "master.status" : {\n                 "$ne" : "LOST_MERGE"\n             }\n         }\n      },\n\n      // Stage 5\n      {\n         $redact: {\n             "$cond" : {\n                 "if" : {\n                     "$ne" : [\n                         "$master._id",\n                         "$parentEntityId"\n                     ]\n                 },\n                 "then" : "$$KEEP",\n                 "else" : "$$PRUNE"\n             }\n         }\n      },\n\n   ]\n\n   // Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\nresult.forEach(function(obj) {\n    var id = obj._id;\n    var masterId = obj.master._id;\n\n   if( masterId !== undefined){\n\n     print( id + " " + " " + obj.parentEntityId +" replaced to "+ masterId);\n     var currentTime = new Date().getTime();\n\n      var result = db.entityHistory.update( {"_id":id}, {$set: { "parentEntityId":masterId, "forceModificationDate": NumberLong(currentTime) } });\n      printjson(result);\n   }\n\n});\n\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\n\n\n
\n
\n
db = db.getSiblingDB('reltio')\nvar file = cat('crosswalks.txt');  // read the  crosswalks file\nvar crosswalk_ids = file.split('\\n'); // create an array of crosswalks\nfor (var i = 0, l = crosswalk_ids.length; i < l; i++){ // for every crosswalk search it in the entityHistory\n    print("ID crosswalk: " + crosswalk_ids[i])\n    var result =  db.entityHistory.find({\n         status: { $eq: "ACTIVE" },\n         "entity.crosswalks.value": crosswalk_ids[i]\n    }).projection({id:1, country:1})\n    printjson(result.toArray());\n}
\n
\n
db.getCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/HCP" , mdmSource: "RELTIO",         "lastModificationDate" : {\n\t\t\t            "$gte" : NumberLong(1529966574477)\n\t\t\t        } }\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$project: { _id: 0, "entity.crosswalks": 1,"entity.uri":2, "entity.updatedTime":3 }\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: "$entity.crosswalks"\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {_id:"$entity.crosswalks.value", count:{$sum:1}, entities:{$push: {uri:"$entity.uri", modificationTime:"$entity.updatedTime"}}}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$match: { count: { $gte: 2 } }\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$redact: {\n\t\t\t    "$cond" : {\n\t\t\t        "if" : {\n\t\t\t            "$ne" : [\n\t\t\t                "$entity.crosswalks.0.value", \n\t\t\t                "$entity.crosswalks.1.value"\n\t\t\t            ]\n\t\t\t        }, \n\t\t\t        "then" : "$$KEEP", \n\t\t\t        "else" : "$$PRUNE"\n\t\t\t    }\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n
\n
\n
print("START")\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityHistory").aggregate(\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t        "status" : "LOST_MERGE", \n\t\t\t        "entityType" : {\n\t\t\t            "$exists" : false\n\t\t\t        },        \n\t\t\t        "$and" : [\n\t\t\t            {\n\t\t\t                "$or" : [\n\t\t\t                    {\n\t\t\t                        "mdmSource" : "RELTIO"\n\t\t\t                    }, \n\t\t\t                    {\n\t\t\t                        "mdmSource" : {\n\t\t\t                            "$exists" : false\n\t\t\t                        }\n\t\t\t                    }\n\t\t\t                ]\n\t\t\t            }\n\t\t\t        ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$graphLookup: {\n\t\t\t    "from" : "entityHistory", \n\t\t\t    "startWith" : "$_id", \n\t\t\t    "connectFromField" : "parentEntityId", \n\t\t\t    "connectToField" : "_id", \n\t\t\t    "as" : "master", \n\t\t\t    "maxDepth" : 10.0, \n\t\t\t    "depthField" : "depthField"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t    "path" : "$master", \n\t\t\t    "includeArrayIndex" : "arrayIndex", \n\t\t\t    "preserveNullAndEmptyArrays" : false\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t    "master.status" : {\n\t\t\t        "$ne" : "LOST_MERGE"\n\t\t\t    }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$redact: {\n\t\t\t    "$cond" : {\n\t\t\t        "if" : {\n\t\t\t            "$eq" : [\n\t\t\t                "$master._id", \n\t\t\t                "$parentEntityId"\n\t\t\t            ]\n\t\t\t        }, \n\t\t\t        "then" : "$$KEEP", \n\t\t\t        "else" : "$$PRUNE"\n\t\t\t    }\n\t\t\t}\n\t\t}\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n);\n\n\t\nresult.forEach(function(obj) {\n    var id = obj._id;\n\n    var masterEntityType = obj.master.entityType;\n\t\n\tif( masterEntityType !== undefined){\n      if(obj.entityType == undefined){\n\t    print("entityType is " + obj.entityType + " for " + id +", changing to "+ masterEntityType);\n\t    var currentTime = new Date().getTime();\n\t\n        var result = db.entityHistory.update( {"_id":id}, {$set: { "entityType":masterEntityType, "lastModificationDate": NumberLong(currentTime) } });\n        printjson(result);\n      }\n\t}\n\n});\n    \t\n    \t\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")
\n
\n
db.getCollection("gateway_transactions").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t    "$and" : [\n\t\t\t        {\n\t\t\t        "transactionTS" : {\n\t\t\t            "$gte" : NumberLong(1551974500000)\n\t\t\t        }, \n\t\t\t        "username" : "dea_batch"\n\t\t\t        }\n\t\t\t    ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$group: {\n\t\t\t  _id:"$requestId", \n\t\t\t  count:  {  $sum:1  },\n\t\t\t  transactions: { $push : "$$ROOT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t    path : "$transactions",\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$addFields: {\n\t\t\t    \n\t\t\t    "statusNumber": { \n\t\t\t        $cond: { \n\t\t\t            if: { \n\t\t\t                $eq: ["$transactions.status", "failed"] \n\t\t\t            }, \n\t\t\t            then: 0, \n\t\t\t            else: 1 \n\t\t\t        }\n\t\t\t      } \n\t\t\t       \n\t\t\t  \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$sort: {\n\t\t\t "transactions.requestId": 1, \n\t\t\t "statusNumber": -1,\n\t\t\t "transactions.transactionTS": -1 \n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$group: {\n\t\t\t      _id:"$_id", \n\t\t\t      transaction: { "$first": "$$CURRENT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 7\n\t\t{\n\t\t\t$addFields: {\n\t\t\t     "transaction.transactions.count": "$transaction.count" \n\t\t\t}\n\t\t},\n\n\t\t// Stage 8\n\t\t{\n\t\t\t$replaceRoot: {\n\t\t\t    newRoot: "$transaction.transactions"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 9\n\t\t{\n\t\t\t$addFields: {\n\t\t\t    "file_raw_line": "$metadata.file_raw_line",\n\t\t\t    "filename": "$metadata.filename"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 10\n\t\t{\n\t\t\t$project: {\n\t\t\t    requestId : 1,\n\t\t\t    count: 2,\n\t\t\t    "filename": 3,\n\t\t\t    uri: "$mdmUri",\n\t\t\t    country: 5,\n\t\t\t    source: 6,\n\t\t\t    crosswalkId: 7,\n\t\t\t    status: 8,\n\t\t\t    timestamp: "$transactionTS",\n\t\t\t    //"file_raw_line": 10,\n\t\t\t\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n
\n


Export Config for Studio3T - format:

<ExportSettings>
<VERSION>1</VERSION>
<exportSource>CURRENT_QUERY_RESULT</exportSource>
<skipValue>0</skipValue>
<limitValue>0</limitValue>
<exportFormat>CSV</exportFormat>
<exportOptions>
<VERSION>2</VERSION>
<emptyFieldImportStrategy>MAKE_NULL</emptyFieldImportStrategy>
<delimiter> </delimiter>
<encapsulator>&quot;</encapsulator>
<isEscapeControlChars>false</isEscapeControlChars>
<exportNullFieldsAsEmptyStrings>true</exportNullFieldsAsEmptyStrings>
<isAddColHeaders>true</isAddColHeaders>
<selectedFields>
<string>_id</string>
<string>count</string>
<string>country</string>
<string>crosswalkId</string>
<string>filename</string>
<string>requestId</string>
<string>source</string>
<string>status</string>
<string>timestamp</string>
<string>uri</string>
</selectedFields>
<noArrays>false</noArrays>
<noNestedFields>false</noNestedFields>
<noHeader>false</noHeader>
<skipLines>0</skipLines>
<parseError>false</parseError>
<trimLeadingSpaces>false</trimLeadingSpaces>
<trimTrailingSpaces>false</trimTrailingSpaces>
<isUnixLF>false</isUnixLF>
<csvPreset>Excel</csvPreset>
</exportOptions>
<selectedFields>
<string>_id</string>
<string>count</string>
<string>country</string>
<string>crosswalkId</string>
<string>filename</string>
<string>requestId</string>
<string>source</string>
<string>status</string>
<string>timestamp</string>
<string>uri</string>
</selectedFields>
<exportTargetType>FILE</exportTargetType>
<exportPath>D:\\docs\\FLEX\\REPORT_transaction_log\\10_10_2018\\load_report.csv</exportPath>
<noCursorTimeout>true</noCursorTimeout>
</ExportSettings>



\n
 db.entityHistory.aggregate([\n {$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/HCP" } },\n {$project: { _id: 1, "country":1 } },\n {$group : {_id:"$country", count:{$sum:1},}},\n {$match: { count: { $gte: 2 } } },\n],{ allowDiskUse: true } )
\n
\n
//https://stackoverflow.com/questions/43778747/check-if-a-field-exists-in-all-the-elements-of-an-array-in-mongodb-and-return-th?rq=1\n\n// find entities where ALL crosswalk array objects have delete date set (not + exists false)\ndb.entityHistory.find({\n    entityType: "configuration/entityTypes/HCP",\n    country: "br",\n    status: "ACTIVE",\n    "entity.crosswalks": { $not: { $elemMatch: { deleteDate: {$exists:false} } } }\n})\n\n// find entities where ANY OF crosswalk array objects have delete date set\ndb.entityHistory.find({\n    entityType: "configuration/entityTypes/HCP",\n    country: "br",\n    status: "ACTIVE",\n    "entity.crosswalks": {   $elemMatch: { deleteDate: {$exists:true} }  }\n})
\n
\n
db.getCollection("entityHistory").update(\n    { \n        "status" : "LOST_MERGE", \n        "entity" : {\n            "$exists" : true\n        }\n    },\n    { \n        $set: { "lastModificationDate": NumberLong(1551433013000) }, \n        $unset: {entity:""}\n    },\n    { multi: true }\n)\n\n\n
\n
\n
// Stages that have been excluded from the aggregation pipeline query\n__3tsoftwarelabs_disabled_aggregation_stages = [\n\n\t{\n\t\t// Stage 2 - excluded\n\t\tstage: 2,  source: {\n\t\t\t$limit: 1000\n\t\t}\n\t},\n]\n\ndb.getCollection("hub_errors").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t        "exceptionClass" : "com.COMPANY.publishinghub.processing.RDMMissingEventForwardedException",\n\t\t\t         "status" : "NEW"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$project: { \n\t\t\t      "entityId":"$exchangeInHeaders.kafka[dot]KEY",\n\t\t\t      "attributeName": "$exceptionDetails.attributeName",\n\t\t\t      "attributeValue":  "$exceptionDetails.attributeValue", \n\t\t\t      "errorCode":  "$exceptionDetails.errorCode"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {\n\t\t\t   _id: { entityId:"$entityId", attributeValue:  "$attributeValue",attributeName:"$attributeName"}, // can be grouped on multiple properties \n\t\t\t   dups: { "$addToSet": "$_id" }, \n\t\t\t   count: { "$sum": 1 } \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t   //_id: { attributeValue:  "$_id.attributeValue",attributeName:"$_id.attributeName"}, // can be grouped on multiple properties \n\t\t\t   _id: { attributeName:"$_id.attributeName"}, // can be grouped on multiple properties \n\t\t\t    entities: { "$addToSet": "$_id.entityId" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$project: {\n\t\t\t    _id: 1,\n\t\t\t    sample_entities: { $slice: [ "$entities", 10 ] }, \n\t\t\t    affected_entities_count: { $size: "$entities" } \n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n
\n
\n
// GET\ndb.entityHistory.find({})\n// GET random 20 entities\ndb.entityHistory.aggregate( \n    [ \n        { $match : { status : "ACTIVE" } },\n        { \n            $sample: {size: 20} \n        },  \n        {\n          $project: {_id:1}\n        },\n\n] )\n    \n// entity get by ID\ndb.entityHistory.find({\n"_id":"entities/rOATtJD"\n})\n\n\ndb.entityHistory_PforceRx.find({\n        _id: "entities/Tq4c32l"\n})\n\n// Specialities exists\ndb.entityHistory.find({\n    "entity.attributes.Specialities": {\n          $exists: true\n    }\n}).limit(20)\n\n// Specialities size > 6\ndb.entityHistory.find({\n    "entity.attributes.Specialities": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.entity.attributes.Specialities.length > 6"}, \n        {$where: "this.sources.length >= 2"},\n    ]\n\n})\n.limit(10)\n// only project ID\n.projection({id:1})\n\n\n// Address size > 4\ndb.entityHistory.find({\n    "entity.attributes.Address": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.entity.attributes.Address.length > 4"}, \n        {$where: "this.sources.length > 2"},\n    ]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.value.Status.lookupCode": {\n            $exists: true,\n            $eq: "ACTV"\n        },\n    }, {\n        "entity.attributes.Address.value.Status": 1\n    })\n    .limit(10)\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n    "entity.attributes.Address": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.entity.attributes.Address.length >= 4"}, \n        {$where: "this.sources.length >= 4"},\n    ]\n\n})\n.limit(2)\n//.projection({id:1})\n// only project ID\n\n\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.value.BestRecord": {\n            $exists: true\n        }\n})\n.limit(2)\n// only project ID\n//.projection({id:1})\n\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.value.ValidationStatus": {\n            $exists: true\n        },\n        "entityType":"configuration/entityTypes/HCO",\n        $and: [{\n            $where: "this.entity.attributes.Address.length > 4"\n        \n        }]\n    })\n    .limit(1)\n// only project ID\n//.projection({id:1})\n\n\n\n//SOURCE NAME\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        lastModificationDate: {\n            $gt: 1534850405000\n        }\n    })\n    .limit(10)\n// only project\n\n\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.objectURI": {\n            $exists: false\n        },\n    }).limit(10)\n// only project\n\n\n// Phone exists\ndb.entityHistory.find({\n    "entity.attributes.Phone": {\n          $exists: true\n    }\n})   .limit(1)\n\n//Specialities exists\ndb.entityHistory.find({\n    "entity.attributes.Specialities": {\n        $exists: true\n    },\n    country: "mx"\n}).limit(10)\n    \n// Specialty Code\ndb.entityHistory.find({\n   "entity.attributes.Specialities": {\n        $exists: true\n    },\n    "entity.attributes.Specialities.value.Specialty.lookupCode": "WMX.TE",\n    country: 
"mx"\n}).limit(1)\n    \n// entity.attributes. Identifiers License exists\ndb.entityHistory.find({\n    "entity.attributes.Identifiers": {\n        $exists: true\n    },\n    country: "mx"\n}).limit(1)\n    \n    \n// Name of organization is empty\ndb.entityHistory.find({\n    entityType: "configuration/entityTypes/HCO",\n    "entity.attributes.Name": {\n        $exists: false\n    },\n    // "parentEntityId": {\n    //     $exists: false\n    // },\n    country: "mx"\n}).limit(10)\n\n\n\n\n// RELACJE\n// GET\ndb.entityRelations.find({})\n\n// entity get by ID startObjectID\ndb.entityRelations.find({\n        startObjectId: "entities/14tDdkhy"\n})\n\ndb.entityRelations.find({\n        endObjectId: "entities/14tDdkhy"\n})\n\n\ndb.entityRelations.find({\n        _id: "relations/RJx9ZkM"\n})\n\ndb.entityRelations.find({\n   "relation.attributes.ActPhone": {\n       $exists: true\n   }\n}).limit(1)\n\n\n\n// Address size > 4\ndb.entityRelations.find({\n    "relation.attributes.Phone": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/HasAddress",\n     //$and: [\n//        {$where: "this.relation.attributes.Address.length > 3"}, \n        //{$where: "this.sources.length >= 2"},\n    //]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n\n\n// \ndb.entityRelations.find({\n    "relation.crosswalks": {\n        $exists: true\n    },\n    "relation.crosswalks.deleteDate": {\n        $exists: true\n    }\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\ndb.entityRelations.find({\n    "relation.startObject": {\n        $exists: true\n    },\n    "relation.startObject.objectURI": {\n        $exists: false\n    }\n\n})\n.limit(1)\n\n\n\n// merge finder\ndb.entityRelations.find({\n    "relation.startObject": {\n        $exists: true\n    },\n    "relation.endObject": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.relation.startObject.crosswalks.length > 2"}, \n        {$where: "this.sources.length >= 1"},\n    ]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n// merge finder\ndb.entityRelations.find({\n        "relation.startObject": {\n            $exists: true\n        },\n        "relation.endObject": {\n            $exists: true\n        },\n        //"relation.startObject.crosswalks.0.uri": mb.regex.startsWith("relation.startObject.objectURI")\n         "relation.startObject.crosswalks.0.uri": /^relation.startObject.objectURI.*$/i\n})\n.limit(2)\n\n\n\n\n\n// Phone - HasAddress\ndb.entityRelations.find({\n    "relation.attributes.Phone": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/HasAddress",\n})\n.limit(10)\n\n// ActPhone - Activity\ndb.entityRelations.find({\n    "relation.attributes.ActPhone": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/Activity",\n})\n\n\n// Identifiers - HasAddress\ndb.entityRelations.find({\n    "relation.attributes.Identifiers": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/HasAddress",\n})\n.limit(10)\n\n\n// Identifiers - Activity\ndb.entityRelations.find({\n    "relation.attributes.ActIdentifiers": {\n        $exists: true\n    },\n    "relationType":"configuration/relationTypes/Activity",\n})\n\n\n\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        }\n    })\n// only project\n\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        },\n        
"entity.attributes.Address.refRelation.uri": {\n            $exists: false\n        },\n        "entity.attributes.Address.refRelation.objectURI": {\n            $exists: true\n        },\n    })\n// only project\n\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.uri": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.objectURI": {\n            $exists: false\n        }\n    })\n// only project\n\ndb.entityHistory.find({\n            "entity.attributes.Address": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.uri": {\n            $exists: true\n        },\n        "entity.attributes.Address.refRelation.objectURI": {\n            $exists: true\n        },\n    })\n\ndb.entityHistory.find({\n        "entity.attributes.Address": {\n            $exists: true\n        },\n        lastModificationDate: {\n            $gt: 1534850405000\n        }\n    })\n    .limit(10)\n// only project\n\ndb.entityHistory.find({})\n// GET random 20 entities\n\n    \n// entity get by ID\ndb.entityHistory.find({\n        _id: "entities/Nzn07bq"\n})\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n    "entity.attributes.Address": {\n        $exists: true\n    },\n     $and: [\n        {$where: "this.entity.attributes.Address.length >= 4"}, \n        {$where: "this.sources.length >= 4"},\n    ]\n\n})\n.limit(2)\n\n\n\n
\n
\n
db.getCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {   \t\n\t\t\t     mdmSource: "RELTIO"        \n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$limit: 1000\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$addFields: {\n\t\t\t   "crosswalksSize":  { $size: { "$ifNull": [ "$entity.crosswalks", [] ] } }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$project: {\n\t\t\t    _id: 1,\n\t\t\t    crosswalksSize:1 \n\t\t\t    \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n
\n
\n
\n



" }, { "title": "Mongo-SOP-002: Running mongo scripts remotely on k8s cluster", "pageID": "284809016", "pageLink": "/display/GMDM/Mongo-SOP-002%3A+Running+mongo+scripts+remotely+on+k8s+cluster", "content": "

Get the tool:

  1. Go to the file http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/mongo/src/scripts/run_mongo_remote/run_mongo_remote.sh?at=refs%2Fheads%2Fproject%2Fboldmove in the inbound-services repository.
  2. Download the file to your computer.

The tool requires Kubernetes (kubectl) installed and WSL (tested on WSL2) to work correctly.

Usage guide:

Available commands:

Shows the general help message for the tool:

\"\"

Execute to run a script remotely on the pod agent on the k8s cluster. The script will be copied from the given path on the local machine to the pod and then run on the pod. To get details about accepted arguments run ./run_mongo_remote.sh exec --help

\"\"

Execute to download script results from the pod agent and save them at the given path on your local machine. To get details about accepted arguments run ./run_mongo_remote.sh get --help

\"\"

Example flow:

  1. Save the mongo script you want to run in a file, e.g. example_script.js (the script file has to have a .js or .mongo extension for the tool to run correctly)
  2. Run ./run_mongo_remote.sh exec example_script.js emea_dev to run your script on the emea_dev environment
  3. Upon completion, the path where the script results were saved on the pod agent will be returned (e.g. /pod/path/result.txt)
  4. Run ./run_mongo_remote.sh get /pod/path/result.txt local/machine/path/example_script_result.txt emea_dev to save the script results on your local machine. The whole flow is sketched below.
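
A minimal end-to-end sketch of the example flow above (the script body, result path, and local file names are illustrative):

# 1. save the mongo script locally (the file must end with .js or .mongo)
cat > example_script.js <<'EOF'
// illustrative payload: count ACTIVE entities
print(db.entityHistory.find({ status: "ACTIVE" }).count());
EOF

# 2. run it remotely on the emea_dev environment
./run_mongo_remote.sh exec example_script.js emea_dev
# ... prints the pod-side result path, e.g. /pod/path/result.txt

# 3. download the result to your local machine
./run_mongo_remote.sh get /pod/path/result.txt ./example_script_result.txt emea_dev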

Tool edition

The tool was written using bashly - a bash framework for developing CLI applications.

The tool source is available HERE. Edit the files and generate the single output script based on the guides available on the bashly site.

DO NOT EDIT the run_mongo_remote.sh file MANUALLY (it may result in the script not working correctly).

" }, { "title": "Notifications:", "pageID": "430347505", "pageLink": "/pages/viewpage.action?pageId=430347505", "content": "" }, { "title": "Sending notification", "pageID": "430347508", "pageLink": "/display/GMDM/Sending+notification", "content": "

We send notifications to our clients in the case of the following events:

  1. Unplanned outage - MDMHUB is not available for our clients - REST API, Kafka or Snowflake doesn't work properly and clients are not able to connect.
    Currently, you have to send a notification in the case of the following events:
    1. kong_http_500_status_prod

    2. kong_http_502_status_prod
    3. kong_http_503_status_prod
    4. kong3_http_500_status_prod
    5. kong3_http_502_status_prod
    6. kong3_http_503_status_prod
    7. kafka_missing_all_brokers_prod
  2. Planned outage - a maintenance window when we have to do maintenance tasks that will cause temporary problems with access to MDMHUB endpoints,
  3. Configuration update - some MDMHUB endpoints are changed, e.g. REST API URL address, Kafka address, etc.

We always send a notification in the case of an unplanned outage to inform our clients and let them know that somebody on our side is working on the issue. Planned outages and configuration updates are always planned activities that are confirmed with release management and scheduled for a specific time range.

Notification Layout

  1. You send notifications using your COMPANY email account.
  2. As CC, always set our DLs: DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com, DL-ATP_MDMHUB_SUPPORT@COMPANY.com
  3. Add our clients as BCC according to the table below:

\"\"


{"name":"MDM_Hub_notification_recipients.xlsx","type":"xlsx","pageID":"430347508"}


\"\"


On the above screen we can see a few placeholders.

Notification templates

Below you can find notification templates that you can take, fill in, and send to our clients:

  1. Generic template: notification.msg
  2. Kafka issues: kafka.msg
  3. API issues: api.msg




" }, { "title": "COMPANYGlobalCustomerID:", "pageID": "302706348", "pageLink": "/pages/viewpage.action?pageId=302706348", "content": "" }, { "title": "Fix \"\" or null IDs - Fix Duplicates", "pageID": "250675882", "pageLink": "/pages/viewpage.action?pageId=250675882", "content": "

The following SOP describes how to fix "" or null COMPANYGlobalCustomerID values in Mongo and regenerate events in Snowflake.

The SOP also contains the steps to fix duplicated values and regenerate events.


Steps:

  1.  Check empty or null: 
    1. \n
      \t    db = db.getSiblingDB("reltio_amer-prod");\n\t\tdb.getCollection("entityHistory").find(\n\t\t\t{\n\t\t\t\t"$or" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : ""\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : {\n\t\t\t\t\t\t\t"$exists" : false\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t"status" : {\n\t\t\t\t\t"$ne" : "DELETED"\n\t\t\t\t}\n\t\t\t}\n\t\t);
      \n
    2. Mark all ids for further event regeneration. 
  2. Run the script on Studio 3T or K8s mongo
    1. Script - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/docker/mongo_utils/scripts/COMPANYglobalcustomerids_fix_empty_null_script.js
    2. Run on K8s:
      1. log in to correct cluster on backend namespace 
      2. copy script - kubectl cp  ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.js
      3. run - nohup mongo --host mongo/localhost:27017 -u admin -p <pass> --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &
      4. download result - kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.out
      5. Using the output, find all "TODO" lines and regenerate the correct events (see the consolidated sketch after these steps)
  3. Check duplicates:
    1. \n
      \t\t\t\t// Pipeline\n\t\t\t[\n\t\t\t\t// Stage 1\n\t\t\t\t{\n\t\t\t\t\t$group: {\n\t\t\t\t\t_id: {COMPANYID: "$COMPANYID"},\n\t\t\t\t\tuniqueIds: {$addToSet: "$_id"},\n\t\t\t\t\tcount: {$sum: 1}\n\t\t\t\t\t}\n\t\t\t\t},\n\n\t\t\t\t// Stage 2\n\t\t\t\t{\n\t\t\t\t\t$match: { \n\t\t\t\t\tcount: {"$gt": 1}\n\t\t\t\t\t}\n\t\t\t\t},   \n\t\t\t],\n\n\t\t\t// Options\n\t\t\t{\n\t\t\t\tallowDiskUse: true\n\t\t\t}\n\n\t\t\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/
      \n
    2. If there are duplicates, run the script on Studio 3T or K8s mongo
      1. Script - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/docker/mongo_utils/scripts/COMPANYglobalcustomerids_fix_duplicates_script.js
      2. Run on K8s:
        1. log in to correct cluster on backend namespace 
        2. copy script - kubectl cp  ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.js
        3. run - nohup mongo --host mongo/localhost:27017 -u admin -p <pass> --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &
        4. download result - kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.out
        5. Using the output, find all "TODO" lines and regenerate the correct events
  4. Reload events    
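
A consolidated sketch of the K8s run above, using the same file and host names as the steps (the mkdir and the final grep are small additions; <pass> stays a placeholder):

# copy the fix script to the mongo pod (log in to the correct cluster/backend namespace first)
kubectl cp ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.js

# open a shell on the pod and run the script in the background
kubectl exec -it mongo-0 -- bash
cd /tmp && mkdir -p out
nohup mongo --host mongo/localhost:27017 -u admin -p <pass> --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &
exit

# download the result and list the "TODO" lines to regenerate
kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.out
grep TODO ./reload_DEV.out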


Events RUN

You can use the following 2 scripts:

\n
#!/bin/bash\n\nfile=$1\nevent_type=$2\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}"  | jq --arg eventTimeArg `date +%s%3N` --arg eventType ${event_type} -r '.[] | . +"|{\\"eventType\\": \\"\\($eventType)\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", \\"entitiesURIs\\": [\\"" + (.|tostring) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n
\n

This script's input is a file with entity IDs separated by new lines

Example:

entities/xVIK0nh
entities/uP4eLws
entities/iiKryQO
entities/ZYjRCFN
entities/13n4v93A


Example execution:

./script.sh dev_reload_empty_ids.csv HCP_CHANGED >> EMEA_DEV_events.txt
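
Each input line should yield one Kafka-ready record in the output file: the entity URI as the message key, a pipe separator, and the event JSON (the timestamp below is illustrative):

entities/xVIK0nh|{"eventType": "HCP_CHANGED", "eventTime": "1712345678901", "entityModificationTime": "1712345678901", "entitiesURIs": ["entities/xVIK0nh"], "mdmSource": "RELTIO", "viewName": "default"}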


OR


\n
#!/bin/bash\n\nfile=$1\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}"  | jq --arg eventTimeArg `date +%s%3N` -r '.[] | (. | tostring | split(",") | .[0] | tostring ) +"|{\\"eventType\\": \\""+ ( . | tostring | split(",") | if .[1] == "LOST_MERGE" then "HCP_LOST_MERGE" else "HCP_CHANGED" end ) + "\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", \\"entitiesURIs\\": [\\"" + (. | tostring | split(",") | .[0] | tostring ) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n
\n

This script's input is a file with entityId,status pairs separated by new lines

Example:

entities/10BBdiHR,LOST_MERGE
entities/10BBdv4D,LOST_MERGE
entities/10BBe7qz,LOST_MERGE
entities/10BBgKFF,INACTIVE
entities/10BBgOVV,ACTIVE


Example execution:

./script_2_columns.sh dev_reload_lost_merges.csv >> EMEA_DEV_events.txt
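
For the two-column variant, the status column drives the event type (LOST_MERGE maps to HCP_LOST_MERGE, anything else to HCP_CHANGED); illustrative output for the example input above:

entities/10BBdiHR|{"eventType": "HCP_LOST_MERGE", "eventTime": "1712345678901", "entityModificationTime": "1712345678901", "entitiesURIs": ["entities/10BBdiHR"], "mdmSource": "RELTIO", "viewName": "default"}
entities/10BBgOVV|{"eventType": "HCP_CHANGED", "eventTime": "1712345678901", "entityModificationTime": "1712345678901", "entitiesURIs": ["entities/10BBgOVV"], "mdmSource": "RELTIO", "viewName": "default"}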


Push the generated file to the Kafka topic using the Kafka producer:

./start_sasl_producer.sh prod-internal-reltio-events < EMEA_PROD_events.txt


Snowflake Check

\n
-- COMPANY COMPANY_GLOBAL_CUSTOMER_ID checks - null/empty\nSELECT count(*) FROM ENTITIES  WHERE COMPANY_GLOBAL_CUSTOMER_ID  IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID  = '' \nSELECT * FROM ENTITIES  WHERE COMPANY_GLOBAL_CUSTOMER_ID  IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID  = '' \n\n-- duplicates\nSELECT COMPANY_GLOBAL_CUSTOMER_ID \nFROM ENTITIES \nWHERE COMPANY_GLOBAL_CUSTOMER_ID  IS NOT NULL AND COMPANY_GLOBAL_CUSTOMER_ID  != '' \nGROUP BY COMPANY_GLOBAL_CUSTOMER_ID HAVING COUNT(*) >1\n\n
\n









" }, { "title": "Initialization Process", "pageID": "218694652", "pageLink": "/display/GMDM/Initialization+Process", "content": "

The process will sync COMPANYGlobalCustomerID attributes to MongoDB (EntityHistory and COMPANYIDRegistry) and then refresh Snowflake with this data.

The process is divided into the following steps:

  1. Create an index in Mongo
    1. db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});
  2. Configure entity-enricher so it has the ov:false option for COMPANYGlobalCustomerID
    1. bundle.nonOvAttributesToInclude:
      - COMPANYCustID
      - COMPANYGlobalCustomerID
  3. Deploy the hub components with callback enabled -COMPANYGlobalCustomerIDCallback (3.9.1 version)
  4. RUN hub_reconciliation_v2 - first run the HUB Reconciliation -> this will enrich all Mongo data with COMPANYGlobalCustomerID ov:true and ov:false values
    1. based on EMEA this is here - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev&root=
    2. doc - HUB Reconciliation Process V2
    3. check if the configuration contains the following - nonOvAttrToInclude: "COMPANYCustID,COMPANYGlobalCustomerID"
    4. check S3 directory structure and reconciliation.properties file in emea/<env>/inbound/hub/hub_reconciliation/ 
      1. http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev
      2. http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_qa
      3. http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_stage
  5. RUN hub_COMPANYglobacustomerid_initial_sync_<ENV> DAG
    1. It contains 2 steps:
      1. COMPANYglobacustomerid_active_inactive_reconciliation 
        1. the groovy script that checks the HUB entityHistory ACTIVE/INACTIVE/DELETED entities; for all these entities it gets the ov:true COMPANYGlobalCustomerID and enriches Mongo and the cache
      2. COMPANYglobacustomerid_lost_merge_reconciliation  
        1. the groovy script that checks LOST_MERGE entities. It does a full merge_tree export from Reltio. Based on the merge_tree it adds the 
  6. RUN snowflake_reconciliation - full snowflake reconciliation by generating the full file with empty checksums





" }, { "title": "Remove Duplicates and Regenerate Events", "pageID": "272368703", "pageLink": "/display/GMDM/Remove+Duplicates+and+Regenerate+Events", "content": "

This SOP describes the workaround to fix the COMPANYGlobalCustomerID duplicated values.


Case:

There are 2 entities with the same COMPANYGlobalCustomerID.

Example:

    1Qbu0jBQ - Jun 14, 2022 @ 18:10:44.963    ID-mdmhub-reltio-subscriber-dynamic-866b588c7-w9crm-1655205289718-0-157609    ENTITY_CREATED    entities/1Qbu0jBQ    RELTIO    success    entities/1Qbu0jBQ    
    3Ot2Cfw  - Aug 11, 2022 @ 18:53:31.433    ID-mdmhub-reltio-subscriber-dynamic-79cd788b59-gtzm6-1659525443436-0-1693016    ENTITY_CREATED    entities/3Ot2Cfw    RELTIO    success    entities/3Ot2Cfw


3Ot2Cfw  is a WINNER

1Qbu0jBQ  is a LOSER. 


Rule: if there are duplicates, always pick the LOST_MERGE entity and update only the loser with a different value. Do not change an active entity:

Steps:

  1. Go to the winner in Reltio and check the other (OV:FALSE) COMPANYGlobalCustomerIDs
  2. Pick the new value from the list:
  3. Check that there are no duplicates in Mongo, and search for the new value in the COMPANY ID cache. If it already exists, pick a different one (see the sketch after these steps).
  4. Update Mongo Cache:
    1. \"\"
  5. Regenerate event:
    1. if the loser entity is now active in Reltio but not active in Mongo regenerate CREATED event:
      1. entities/1Qbu0jBQ|{  "eventType" : "HCP_CREATED",  "eventTime" : "1666090581000",  "entityModificationTime" : "1666090581000",  "entitiesURIs" : [ "entities/1Qbu0jBQ" ],  "mdmSource" : "RELTIO",  "viewName" : "default" }
    2. if the loser entity is not present in Reltio because it is a loser, regenerate a LOST_MERGE event:
      1. entities/1Q7XLreu|{"eventType":"HCO_LOST_MERGE","eventTime":1666018656000,"entityModificationTime":1666018656000,"entitiesURIs":["entities/1Q7XLreu"],"mdmSource":"RELTIO","viewName":"default"}
  6. Example PUSH to PROD:
    1. \"\"
  7. Check Mongo, an updated entity should change COMPANYGlobalCustomerID
  8. Check Reltio
  9. Check Snowflake
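
A minimal sketch of the Mongo check from step 3, run against entityHistory from the mongo pod shell (the database name follows the earlier SOP for AMER PROD; the candidate ID value is illustrative):

mongo --host mongo/localhost:27017 -u admin -p <pass> --authenticationDatabase admin --eval '
db = db.getSiblingDB("reltio_amer-prod");
// list non-deleted entities that already hold the candidate COMPANYGlobalCustomerID
printjson(db.entityHistory.find(
    { "COMPANYGlobalCustomerID": "1234567890", "status": { "$ne": "DELETED" } },
    { "_id": 1, "status": 1 }
).toArray());
'

If the query returns any documents, the candidate value is already taken and a different one has to be picked.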
" }, { "title": "Project FLEX (US):", "pageID": "302705645", "pageLink": "/pages/viewpage.action?pageId=302705645", "content": "" }, { "title": "Batch Loads - Client-Sourced", "pageID": "164470098", "pageLink": "/display/GMDM/Batch+Loads+-+Client-Sourced", "content": "


  1. Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibana
    1. use the dedicated "kibana_gbiccs_user" 
  2. Go to the Dashboards Tab - "PROD Batch loads"
    1. \"\"
  3. Change the Time range 
    1. \"\"
    2. Choose 24 hours to check if the new file was loaded for the last 24 hours.
  4. The Dashboard is divided into the following sections:
    1. File by type - this visualization presents how many files of a specific type were loaded during a specific time range
    2. File load count - this visualization presents when the specific file was loaded
    3. File load summary - on this table you can verify the detailed information about file load
    4. \"\"
  5. Check if files are loaded with the following agenda:
    1. SAP - incremental loads - max 4 files per day, min 2 files per day
      1.  Agenda: 

        when            hours
        Monday-Friday   1. 01:20 CET time
                        2. 13:20 CET time
                        3. 17:20 CET time
                        4. 21:20 CET time
        Saturday        1. 01:20 CET time
        Sunday          none
    2. HIN - incremental loads - 2 files per day. WKCE.*.txt and WKHH.*.txt
      1. Agenda:

        when                hours
        Tuesday-Saturday    1. estimates: 12PM - 1PM CET time
    3. DEA - full load - 1 file per week FF_DEA_IN_.*.txt
      1. Agenda:

        when        hours
        Tuesday     1. estimates: 10AM - 12PM CET time
    4. 340B - incremental load - 4 files per month. 340B_FLEX_TO_RELTIO_*.txt
      1. Agenda:

        Files uploaded on 3rd, 10th, 24th and the last day of the month at ~12:30 PM CET time. If the upload day is on the weekend, the file will be loaded on the next workday.

  6. Check that the DEA file limit was not exceeded. 
    1. Check the "Suspended Entities" attribute. If this parameter is greater than 0, it means that DEA post-processing was not invoked. The current DEA post-processing limit is 22 000. To increase the limit, send the notification (7.d); after agreement, do (8.)
  7. Take action if the input files are not delivered on schedule:

    1. SAP 
      1. To:  santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com
      2. CC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com;BalaSubramanyam.Thirumurthy@COMPANY.com
    2. HIN
      1. To: santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com
      2. CC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com; BalaSubramanyam.Thirumurthy@COMPANY.com
    3. DEA
      1. To: santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com
      2. CC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com; BalaSubramanyam.Thirumurthy@COMPANY.com
    4. DEA - limit notification
      1. To: santosh.dube@COMPANY.com;tj.struckus@COMPANY.com;Melissa.Manseau@COMPANY.com;BalaSubramanyam.Thirumurthy@COMPANY.com
      2. CC: przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com
  8. Take action if the DEA limit was exceeded. 
    1. Log in to each PROD host
    2. Go to "/app/mdmgw/batch_channel/config/"
    3. Edit "application.yml" on each host:
    4. Change poller.inputFormats.DEA.deleteDateLimit: 22 000 to the new value (see the sketch after these steps).
    5. Restart Components: 
      1. Execute https://jenkins-gbicomcloud.COMPANY.com:8443/job/mdm_manage_playbooks/job/Microservices/job/manage_microservices__prod_us/
        1. component: mdmgw_batch-channel_1
        2. node: all_nodes
        3. command: restart
    6. Load the latest DEA file (MD5 checksum skips all entities, so only post-processing step will be executed) 
    7. Change and commit new limit to GIT: https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod_us/group_vars/gw-services/batch_channel.yml 
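
A sketch of the limit change from step 8 (assumes the value appears as deleteDateLimit: 22000 in application.yml; verify the exact line before editing and keep a backup):

# on each PROD host
cd /app/mdmgw/batch_channel/config/
cp application.yml application.yml.bak
# bump the DEA post-processing limit (30000 is an illustrative new value)
sed -i 's/deleteDateLimit: 22000/deleteDateLimit: 30000/' application.yml
grep -n 'deleteDateLimit' application.yml   # verify the change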


Example Emails:

  1. DEA limit exceeded: 
    1. DEA load check

      Hi Team,

      We just received the DEA file, the current DEA post processing process is set to 22 000 limitation. The DEA load resulted in xxxx profiles to be updated in post-processing. Should I change the limit and re-process profiles ?

      Regards,

  2. HIN File missing
    1. HIN PROD file missing

      Hi,

       Today we expected to receive new HIN files. I checked that HIN files are missing on S3 bucket. Last week we received files at <time> CET time.

      Here is the screenshot that presents files that we received last week:

      <screen from S3 bucket>

      Could you please verify this.

      Regards,







" }, { "title": "Batch Loads - Update Addresses", "pageID": "164469820", "pageLink": "/display/GMDM/Batch+Loads+-+Update+Addresses", "content": "
  1. Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibana
    1. use the dedicated "kibana_gbiccs_user" 
  2. Go to the Dashboards Tab - "PROD Batch loads"
    1. \"\"
  3. Change the Time range 
    1. \"\"
    2. Choose 24 hours to check if the new file was loaded for the last 24 hours.
  4. The Dashboard is divided into the following sections:
    1. File by type - this visualization presents how many files of a specific type were loaded during a specific time range
    2. File load count - this visualization presents when the specific file was loaded
    3. File load summary - on this table you can verify the detailed information about file load
    4. File load status count - the user name ("integration_batch_user") that executes the API and "status" - the number of requests that ended with the given status. To get more details go to PROD Api Calls
    5. Response status load summary - the number of requests that ended with the specific status. To get more details go to PROD Api Calls
    6. \"\"
  5. The result report name and the details saved in Kibana contain the correlation ID. 
    1. example Report name: DEV_update_profiles_integration_testing_ID-5e1b4bdf7525-1574860947734-0-819_REPORT.csv 
    2. example correlation ID: ID-5e1b4bdf7525-1574860947734-0-819
  6. To get more details go to PROD Api Calls
  7. Search by the correlation ID related to the latest Addresses update file load. 
  8. \"\"
  9. The following screenshot presents how many operations were invoked during the Addresses update.
    1. In this example, the input file contains 3 Customers.
    2. During the process, 3 Search API calls and 3 Attribute Updates API calls were invoked with success. 


DOC

Please read the following Technical Design document related to the Addresses updating process. This document contains a detailed description of the process, all inbound and outbound interface types.




S3 report and distribution


The report is uploaded to the S3 location: 

PROD location: mdmprodamrasp42095/PROD/archive/ADDRESSES/

The report is published in the AWS S3 bucket.

The file name format is the following: “<name>_<correlation_id>.csv”

Where <name> is the input file name.

Where <correlation_id> is the number of the batch related to the whole addresses update process. Using the correlation number, the operator can find all updates sent to Reltio and easily verify the status of the batch.



Download the file and publish it to the SharePoint location. 

Send the notification to the designated mailing group. 


SharePoint upload location:


Mailing group:

    To: Melissa.Manseau@COMPANY.com,santosh.dube@COMPANY.com,Deanna.Max@COMPANY.com,Laura.Faddah@COMPANY.com,Xin.Sun@COMPANY.com,crystal.sawyer@COMPANY.com 

    CC:przemyslaw.warecki@COMPANY.com,mikolaj.morawski@COMPANY.com


Email template:



FLEX Addresses updating process - Report - <generation_date>

Hi, 

 Please be informed that the Addresses updating process report is available for verification.

Report:

 → <SharePoint URL>

Regards,

Mikolaj 














" }, { "title": "Batch Loads - Update Identifiers", "pageID": "164470070", "pageLink": "/display/GMDM/Batch+Loads+-+Update+Identifiers", "content": "
  1. Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibana
    1. use the dedicated "kibana_gbiccs_user" 
  2. Go to the Dashboards Tab - "PROD Batch loads"
    1. \"\"
  3. Change the Time range 
    1. \"\"
    2. Choose 24 hours to check if the new file was loaded for the last 24 hours.
  4. The Dashboard is divided into the following sections:
    1. File by type - this visualization presents how many files of a specific type were loaded during a specific time range
    2. File load count - this visualization presents when the specific file was loaded
    3. File load summary - on this table you can verify the detailed information about file load
    4. File load status count - the user name ("identifiers_batch_user") that executes the API and "status" - the number of requests that ended with the given status. To get more details go to PROD Api Calls
    5. Response status load summary - the number of requests that ended with the specific status. To get more details go to PROD Api Calls
    6. \"\"
  5. The result report name and the details saved in Kibana contain the correlation ID. 
    1. example Report name: DEV_update_profiles_integration_testing_ID-5e1b4bdf7525-1574860947734-0-819_REPORT.csv 
    2. example correlation ID: ID-5e1b4bdf7525-1574860947734-0-819
  6. To get more details go to PROD Api Calls
  7. Search by the correlation ID related to the latest Identifiers file load. 
  8. \"\"
  9. The following screenshot presents how many operations were invoked during the Identifiers update.
    1. In this example, the input file contains 3 Customers.
    2. During the process, 3 Search API calls and 3 Attribute Updates API calls were invoked with success. 


DOC

Please read the following Technical Design document related to the Identifiers updating process. This document contains a detailed description of the process, all inbound and outbound interface types.


\"\"



S3 report and distribution


The report is uploaded to the S3 location: 

PROD location: mdmprodamrasp42095/PROD/archive/IDENTIFIERS/

The report is published in the AWS S3 bucket.

The file name format is the following: “<name>_<correlation_id>.csv”

Where <name> is the input file name.

Where <correlation_id> is the number of the batch related to the whole identifiers update process. Using the correlation number, the operator can find all updates sent to Reltio and easily verify the status of the batch.



Download the file and publish it to the SharePoint location. 

Send the notification to the designated mailing group. 


SharePoint upload location:


Mailing group:

    To: Melissa.Manseau@COMPANY.com,santosh.dube@COMPANY.com,Deanna.Max@COMPANY.com,Laura.Faddah@COMPANY.com,Xin.Sun@COMPANY.com,crystal.sawyer@COMPANY.com 

    CC:przemyslaw.warecki@COMPANY.com,mikolaj.morawski@COMPANY.com


Email template:



FLEX Identifiers updating process - Report - <generation_date>

Hi, 

 Please be informed that the Identifiers updating process report is available for verification.

Report:

 → <SharePoint URL>

Regards,

Mikolaj 














" }, { "title": "FLEX QC", "pageID": "164470057", "pageLink": "/display/GMDM/FLEX+QC", "content": "


Agenda

The following table presents the scheduled agenda of the process:

when            hours
Each Saturday   13:00 (UTC time)


The process has to be verified on Monday morning CET time. After successful verification the report has to be sent to the designated mailing group.

Prometheus Dashboard

There is a requirement to monitor the process after each run and send the generated comparison report. 

The overview Monitoring Prometheus dashboard is available here:

https://mdm-monitoring.COMPANY.com/grafana/d/COVgYieiz/alerts-monitoring?orgId=1&refresh=10s&var-region=us

\"\"

When the dashboard contains GREEN color on "US PROD Airflow DAG's Status" panel -  The process ended with success.

When the dashboard contains RED color on "US PROD Airflow DAG's Status" panel -  The process ended with failure. The details are available in Airflow.


Airflow

  1. Log in to Airflow platform: https://cicd-gbl-mdm-hub.COMPANY.com/airflow/tree?dag_id=flex_validate_us_prod 
    1. you can use admin user
    2. Login page
    3. \"\"
  2. Go to the "flex_validate_us_prod" Job
    1. \"\"
    2. To check details of a specific Task, click on the Task and then in the pop-up window click "View Logs" 
    3. *_validation_tasks - these tasks are "Sub DAGs". To verify the internal tasks click on the SUB DAG, then in the pop-up window click "Zoom into SUB DAG". 
  3. After log verification it is possible to re-run the process from the last failure point. To do this, follow these steps:
    1. Click on the Task. In the pop-up window choose "Clear" 
    2. \"\"
    3. Clearing deletes the previous state of the task instance, allowing it to get re-triggered by the scheduler or a backfill command. It means that all future tasks are cleaned and started one more time.



DOC

Please read the following Technical Design document related to the FLEX Quality Check process. This document contains a detailed description of the Airflow process and all inbound and outbound interface types.

\"\"



S3 report and distribution

The comparison report is uploaded to the S3 location: 

PROD location: mdmprodamrasp42095/verify/PROD/report/

The file name format is the following: “comparison_report_full_<date>.csv”
Where <date> is YYYYMMDDTHHMMSS (20191001T072509)


Download the file and publish it to the SharePoint location. 

Send the notification to the designated mailing group. 


Report preprocessing and XLSX creation:

  1. Open comparison_report_full_<date>.csv with Notepad++
  2. Because Excel removes leading 000 characters, the replacement needs to be done using Search mode: Regular expression (see the scripted sketch after this list). 
    1. Replace all

      \n
      ;"0(.*?)";
      \n

      to 

      \n
      ;="0\\1";
      \n

      \"\"

  3. Check the CSV for multi-line comments (NotesText attribute). They might disturb the CSV format.
     
    1. Replace all

      \n
      ([^"])\\n
      \n

      to 

      \n
      "\\1"
      \n

      (remove the quote marks - cannot escape backslash in Confluence)

    2. Fix the header row (add the removed \\n)

  4. Save file
  5. Open CSV file by double click - to open this file in Excel.
    1. \"\"
    2. Click on the left top corner to mark all columns and rows
    3. \"\"
    4. double click on the line between column "A" and "B" to adjust column width.
    5. \"\"
    6. Apply the "Filter" option on the Header.
    7. \"\"
    8. Verify result. Each row needs to start with a source name. Check the source column. Check if the NotesText attribute is in one row, and the format is correct.
    9. When the format is correct the source column should contain only the following values:
    10. \"\"
  6. Save the file in XLSX format
    1. Click "File" → Save as. Choose "Save as type" = "Excel Workbook (*.xlsx)
    2. \"\"
  7. Send both CSV and XLSX format to the SharePoint location:
    1. \"\"


8. As recently requested, I have deleted rows with “attributes.Name.value” error and with the CXkfvVy entity.
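
A scripted equivalent of the Notepad++ replacements from steps 2-3, for reference (assumes GNU sed and perl are available; <date> is the report date placeholder, and the header newline still has to be re-added manually as noted in step 3.2):

file='comparison_report_full_<date>.csv'
cp "$file" "$file.bak"
# step 2: protect leading zeros from Excel:  ;"0...";  ->  ;="0...";
sed -i -E 's/;"0([^"]*)";/;="0\1";/g' "$file"
# step 3: join multi-line NotesText comments - drop every newline not preceded by a quote
perl -0777 -pi -e 's/([^"])\n/$1/g' "$file"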



SharePoint upload location:


Mailing group:

    To: Manseau, Melissa <Melissa.Manseau@COMPANY.com>; Dube, Santosh R <santosh.dube@COMPANY.com>;  Faddah, Laura Jordan <Laura.Faddah@COMPANY.com>; Sun, Ivy <Xin.Sun@COMPANY.com>; Antoine, Melissa <melissa.antoine@COMPANY.com>; DL-CBK-MAST <DL-CBK-MAST@COMPANY.com>

    CC: Warecki, Przemyslaw <Przemyslaw.Warecki@COMPANY.com>; Morawski, Mikolaj <Mikolaj.Morawski@COMPANY.com>; Anuskiewicz, Piotr <Piotr.Anuskiewicz@COMPANY.com>


Email template:

<generation_date> - each report is generated during the weekend. So for example when the report generation was executed between 01/04/2020-01/05/2020 (weekend), then the generation_date should be the same.

The date format should be consistent with US notation. (MM/dd/yyyy)  e.g. 01/04/2020-01/05/2020

<SharePoint URL> - the URL in the email needs to be formatted because of the spaces in the path. 


FLEX QC result - Report - <generation_date>

Hi,


Please be informed that the new QC report is available for verification.


Report:

 → <SharePoint URL>


Best Regards,

Karol



Contact: BalaSubramanyam.Thirumurthy@COMPANY.com,santosh.dube@COMPANY.com when FLEX/HIN/DEA file is missing.

Contact: Venkata.Mandala@COMPANY.com Chakrapani.Kruthiventi@COMPANY.com,santosh.dube@COMPANY.com when SAP file is missing.

Contact: santosh.dube@COMPANY.com,Venkata.Mandala@COMPANY.com,Jayant.Srivastava@COMPANY.com,DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com - With GIS FILE transfer problem (missing files)


14/02/2023

Hi Karol,

You can remove me from this distribution going forward.

Thanks,

Deanna K. Max


27/02/2023

Hi Karol,

I’ve moved to a new role and no longer need to be apart of this distribution. Can you please remove me?

Regards,

Crystal Sawyer 









" }, { "title": "Generate events to prod-out-full-gblus-flex-all*.json file", "pageID": "333156205", "pageLink": "/display/GMDM/Generate+events+to+prod-out-full-gblus-flex-all*.json+file", "content": "
  1. Go to gblmdmhubprodamrasp101478/us/prod/inbound/oneview-cov/prod-out-full-gblus-flex-all (concat_s3_files_gblus_prod input directory)
  2. Copy files for desired period of time to your local workspace
  3. Download attached script and modify events variable
    \"\"
  4. Execute the attached script in the directory containing the downloaded files. It will find the latest event for every element in the events list and store them in agregated_events.json
  5. Arrange with the person requesting event generation that they stop the process for 24h. When the process is stopped, you can add the found events to a file in the gblmdmhubprodamrasp101478/us/prod/inbound/oneview-cov/inbound s3 directory
  6. After the file is modified they can start the ingestion process and verify that events were properly generated

\"\"findEvents.sh

" }, { "title": "Re-Loading SAP/HIN/DEA Files After Batch Channel Stopped", "pageID": "164470077", "pageLink": "/pages/viewpage.action?pageId=164470077", "content": "


These are the steps to be taken to correctly process SAP/HIN/DEA files after the mdmgw_batch_channel docker container is stopped on PROD and has to be restarted:


  1. Create an emergency RFC for this action
  2. Change configuration of the batch_channel component on PROD1 (amraelp00006207) under /app/mdmgw/batch_channel/config/application.yml:


change relativePathPattern: DEA/.* to relativePathPattern: DEA_LOAD/.*
change relativePathPattern: HIN/.* to relativePathPattern: HIN_LOAD/.*
change relativePathPattern: SAP/.* to relativePathPattern: SAP_LOAD/.*


This is required because GIS publishes files to */DEA/HIN/SAP automatically and we don't want to consume them during the fix.
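The edit can be done by hand or scripted; a minimal sed sketch of the same change (back the file up first):

cp /app/mdmgw/batch_channel/config/application.yml /app/mdmgw/batch_channel/config/application.yml.bak
# switch each inbound pattern to its *_LOAD variant
sed -i 's|relativePathPattern: DEA/\.\*|relativePathPattern: DEA_LOAD/.*|' /app/mdmgw/batch_channel/config/application.yml
sed -i 's|relativePathPattern: HIN/\.\*|relativePathPattern: HIN_LOAD/.*|' /app/mdmgw/batch_channel/config/application.yml
sed -i 's|relativePathPattern: SAP/\.\*|relativePathPattern: SAP_LOAD/.*|' /app/mdmgw/batch_channel/config/application.yml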


     3. Empty all /inbound/* directories by moving all files from:

/inbound/SAP to /archive/SAP_tmp

/inbound/DEA to /archive/DEA_tmp

/inbound/HIN to /archive/HIN_tmp


4. After the inbound directories are empty, start the batch_channel component on PROD1 (amraelp00006207). Process files in FIFO order by moving them, in order, from:

/archive/SAP_tmp to /inbound/SAP_LOAD

/archive/DEA_tmp to /inbound/DEA_LOAD

/archive/HIN_tmp to /inbound/HIN_LOAD


5. After these files are processed, stop batch_channel on PROD1 (amraelp00006207).


6. Restore configuration on PROD1 under /app/mdmgw/batch_channel/config/application.yml:


relativePathPattern: DEA_LOAD/.* to relativePathPattern: DEA/.* 
relativePathPattern: HIN_LOAD/.* to relativePathPattern: HIN/.* 
relativePathPattern: SAP_LOAD/.* to relativePathPattern: SAP/.* 


7. Start batch_channel on PROD1, PROD2 and PROD3, waiting 1 minute before starting each subsequent node.

8. Check if the nodes started and clustered correctly.


9. Move previously processed files from /archive/*_load to /archive/*

" }, { "title": "S3 keys replacement", "pageID": "379129646", "pageLink": "/display/GMDM/S3+keys+replacement", "content": "

PROD ( amraelp00006207, amraelp00006208, amraelp00006209):

Remember that the replacement has to be done on all three instances!

  1. Replace keys for the batch channel and recreate the containers.

/app/mdmgw/batch_channel/config/application.yml

  2. Replace keys for the reltio subscriber and recreate the containers.

/app/mdmhub/reltio_subscriber/config/application.yml

  3. Replace keys for the archiver and do not recreate the containers.

/app/archiver/config/archiver.env

  4. Replace keys for the Airflow DAGs.

https://cicd-gbl-mdm-hub.COMPANY.com/airflow/home


NPROD (DEV / TEST - amraelp00005781): 


  1. Replace keys for batch channel and recreate containers. 

/app/mdmgw/dev-mdm-srv/batch_channel/config/application.yml

/app/mdmgw/test-mdm-srv/batch_channel/config/application.yml




After manual replacement in the components:

Replace keys in the repository:

Use replace_aws_keys.sh to find and replace keys in the repository. 
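Before deploying, it is worth confirming that no occurrence of the old key remains in the component configs; a minimal check, where OLD_KEY is a placeholder for the access key ID being retired:

grep -rn 'OLD_KEY' /app/mdmgw/batch_channel/config/ /app/mdmhub/reltio_subscriber/config/ /app/archiver/config/ || echo "no occurrences of the old key found"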

Deploy the changes: MDM Hub Deploy Jobs and MDM Gateway Deploy Jobs.





" }, { "title": "Project Highlander:", "pageID": "302705635", "pageLink": "/pages/viewpage.action?pageId=302705635", "content": "" }, { "title": "Highlander IDL Quality Check", "pageID": "164470068", "pageLink": "/display/GMDM/Highlander+IDL+Quality+Check", "content": "

It is required to check HCO and HCP counts at selected checkpoints of the C8 flow and document them.

Checkpoints

Document

Please create the document using the template.

Procedures

Retrieving counts from Reltio

Call the following API:

To get HCP counts

\n
GET https://{{url}}/reltio/api/{{tenantID}}/entities/_facets?facet=type,attributes.Country&options=searchByOv&max=2000&filter=equals(type,'HCP') and in(attributes.Country,"AI,AN,AG,AR,AW,BS,BB,BZ,BM,BO,BR,CL,CO,CR,CW,DO,EC,GT,GY,HN,JM,KY,LC,MX,NI,PA,PY,PE,PN,SV,SX,TT,UY,VG,VE")
\n


To get HCO counts:

\n
GET https://{{url}}/reltio/api/{{tenantID}}/entities/_facets?facet=type,attributes.Country&options=searchByOv&max=2000&filter=equals(type,'HCO') and in(attributes.Country,"AI,AN,AG,AR,AW,BS,BB,BZ,BM,BO,BR,CL,CO,CR,CW,DO,EC,GT,GY,HN,JM,KY,LC,MX,NI,PA,PY,PE,PN,SV,SX,TT,UY,VG,VE")
\n
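Both calls can be issued with curl; a sketch assuming a bearer token, with the country list abbreviated (substitute url, tenantID, the token and the full country list from above):

curl -G "https://{{url}}/reltio/api/{{tenantID}}/entities/_facets" \
  -H "Authorization: Bearer <token>" \
  --data-urlencode "facet=type,attributes.Country" \
  --data-urlencode "options=searchByOv" \
  --data-urlencode "max=2000" \
  --data-urlencode "filter=equals(type,'HCP') and in(attributes.Country,\"AI,AN,AG\")"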


Retrieving counts from HUB (global)


Query
\n
db.getCollection("entityHistory").aggregate(\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t     "$and" : [\n\t\t\t        {"status" : "ACTIVE"}, \n\t\t\t        {"country" : {\n\t\t\t            "$in" : [\n\t\t\t                "ai", \n\t\t\t                "an", \n\t\t\t                "ag", \n\t\t\t                "ar", \n\t\t\t                "aw", \n\t\t\t                "bs", \n\t\t\t                "bb", \n\t\t\t                "bz", \n\t\t\t                "bm", \n\t\t\t                "bo", \n\t\t\t                "br", \n\t\t\t                "cl", \n\t\t\t                "co", \n\t\t\t                "cr", \n\t\t\t                "cw", \n\t\t\t                "do", \n\t\t\t                "ec", \n\t\t\t                "gt", \n\t\t\t                "gy", \n\t\t\t                "hn", \n\t\t\t                "jm", \n\t\t\t                "ky", \n\t\t\t                "lc", \n\t\t\t                "mx", \n\t\t\t                "ni", \n\t\t\t                "pa", \n\t\t\t                "py", \n\t\t\t                "pe", \n\t\t\t                "pn", \n\t\t\t                "sv", \n\t\t\t                "sx", \n\t\t\t                "tt", \n\t\t\t                "uy", \n\t\t\t                "vg", \n\t\t\t                "ve"\n\t\t\t            ]\n\t\t\t        }}\n\t\t\t        ]      \n\t\t\t}\n\t\t},\n\t\t// Stage 2\n\t\t{\n\t\t\t$group: {\n\t\t\t_id: {entityType: "$entityType", country: "$country" }, count: { $sum: 1 }\n\t\t\t}\n\t\t},\n\n\t]\n);\n\n
\n

Retrieving counts from HUB (C8 filters)


Query
\n
db.getCollection("entityHistory").aggregate(\n    // Pipeline\n    [\n        // Stage 1\n        {\n            $match: {\n                 "$and" : [\n                    {"status" : "ACTIVE"},\n                    {"country" : {\n                        "$in" : [\n                            "ai", \n                            "an", \n                            "ag", \n                            "ar", \n                            "aw",\n                            "bs",\n                            "bb",\n                            "bz",\n                            "bm",\n                            "bo",\n                            "br",\n                            "cl",\n                            "co",\n                            "cr",\n                            "cw",\n                            "do",\n                            "ec",\n                            "gt",\n                            "gy",\n                            "hn",\n                            "jm",\n                            "ky",\n                            "lc",\n                            "mx",\n                            "ni",\n                            "pa",\n                            "py",\n                            "pe",\n                            "pn",\n                            "sv",\n                            "sx",\n                            "tt",\n                            "uy",\n                            "vg",\n                            "ve"\n                        ]\n                    }},\n                    {\n                            "entity.crosswalks" : {\n                                "$elemMatch" : {\n                                    "type" : {\n                                        "$in" : [\n                                            "configuration/sources/OK",\n                                            "configuration/sources/CRMMI",\n                                            "configuration/sources/Reltio"                                            \n                                        ]\n                                    },\n                                    "deleteDate" : {\n                                        "$exists" : false\n                                    }\n                                }\n                            }\n                        }\n                    ]     \n            }\n        },\n \n        // Stage 2\n        {\n            $addFields: {\n                "market":    \n                 {"$switch": {\n                   branches: [\n                  { case:  {"$in" : [ "$country", ["ag","ai","aw","bb","bs","cr","do","gt","hn","jm","lc","ni","pa","sv","tt","vg","cw","sx" ]]}, then: "ac" },\n                  { case:  {"$in" : [ "$country", ["uy" ]]}, then: "ar" }\n                   ],\n               default: "$country"\n            }  \n                 }\n            }\n        },\n \n        // Stage 3\n        {\n            $group: {\n            _id: {entityType: "$entityType", market: "$market" }, count: { $sum: 1 }\n            }\n        },\n \n    ]\n);\n\n\n
\n



" }, { "title": "RawData:", "pageID": "347666020", "pageLink": "/pages/viewpage.action?pageId=347666020", "content": "" }, { "title": "Restore raw entity data", "pageID": "347666025", "pageLink": "/display/GMDM/Restore+raw+entity+data", "content": "

The following SOP describes how to restore raw entity data.



Steps:


  1. Login to UI
  2. Go to HUB Admin →  Restore Raw Data → Restore entities
  3. Fill in the filters
        a) Source environment - restore data from another environment (e.g. restore QA data on DEV); the default value restores data from the currently logged-in environment
        b) Entity type - restore data only for selected entity types - requires at least one selected
        c) Countries - restore data only for selected countries
        d) Sources - restore data only for selected sources
        e) Restore entities created after - only entities created after this date will be restored
      
  4. Click the execute button
  5. Validate the results in the Kibana API Calls dashboard



\"\"

" }, { "title": "Restore raw relation data", "pageID": "347666056", "pageLink": "/display/GMDM/Restore+raw+relation+data", "content": "



Steps:


  1. Login to UI
  2. Go to HUB Admin →  Restore Raw Data → Restore relations
  3. Fill in the filters
        a) Source environment - restore data from another environment (e.g. restore QA data on DEV); the default value restores data from the currently logged-in environment
        b) Countries - restore data only for selected countries
        c) Sources - restore data only for selected sources
        d) Relation types - restore data only for selected relation type
        e) Restore relations created after - only relations created after this date will be restored
      
  4. Click the execute button
  5. Validate the results in the Kibana API Calls dashboard



\"\"

" }, { "title": "Reconciliation:", "pageID": "164470071", "pageLink": "/pages/viewpage.action?pageId=164470071", "content": "" }, { "title": "How to Start the Reconciliation Process", "pageID": "164470058", "pageLink": "/display/GMDM/How+to+Start+the+Reconciliation+Process", "content": "

This procedure describes the reconciliation process between Reltio and Mongo. The result of this process is the Entities and Relations events generated to the HUB internal Kafka topics.

       0. Check if the entityHistory and entityRelations collections contain the following indexes:

entityHistory
 db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});
db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});
db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});
db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});
entityRelations
 db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});
db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});
db.entityRelations.createIndex({entityType: -1}, {background: true, name: "idx_relationType"});
db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});
db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});
db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});
db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});
db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});
db.getCollection("entityRelations").createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_asc"});


  1. Export Reltio Data
    1. TODO
  2. Import the Reltio Data to Mongo:
    1. Check the following required variables in the mdm-reltio-handler-env/inventory/prod/group_vars/mongo/all.yml

      GBL PROD Example:


      mongo_install_dir: /app/mongo

      hub_db_reltio_user: "mdm_hub"

      hub_db_reltio_●●●●●●●●●●●●● secret_hub_db_reltio_●●●●●●●●●●●●


      hub_db_admin_user: admin

      hub_db_admin_●●●●●●●●●●●●● secret_hub_db_admin_●●●●●●●●●●●●


      hub_db_name: reltio


      #COMPENSATION EVENTS VARIABLES:

      MONGO_URL: "10.12.199.141:27017"


      reltio_entities_export_url_name: "https://reltio-data-exports.s3.amazonaws.com/entities/pfe_mdm_api/2019/25-Feb-2019/fw2ztf8k3jpdffl_14-21_entities_bbf5.zip..."
      reltio_entities_export_file_name: "fw2ztf8k3jpdffl_14-21_entities_bbf5" # THE SAME AS FILE NAME FROM URL
      reltio_entities_export_date_timestamp_ms: "1551052800000" # RELTIO EXPORT DATE
      reltio_entities_export_LAST_date_timestamp_ms: "1548288000000" # RELTIO LAST EXPORT DATE. Do not set when you want to do the reconciliation on all entities


      reltio_relations_export_url_name: "https://reltio-data-exports.s3.amazonaws.com/relations/pfe_mdm_api/2019/25-Feb-2019/fw2ztf8k3jpdffl_14-21_relations_afa6.zip..."
      reltio_relations_export_file_name: "fw2ztf8k3jpdffl_14-21_relations_afa6" # THE SAME AS FILE NAME FROM URL
      reltio_relations_export_date_timestamp_ms: "1551052800000" # RELTIO EXPORT DATE
      reltio_relations_export_LAST_date_timestamp_ms: "1548806400000" # RELTIO LAST EXPORT DATE. Do not set when you want to do the reconciliation on all entities


      KAFKA_BOOTSTRAP_SERVERS: "10.192.70.189:9094,10.192.70.156:9094,10.192.70.159:9094"

      kafka_import_events_user: "hub_prod"
      kafka_import_events_●●●●●●●●●●●●● secret_kafka_import_events_●●●●●●●●●●●●
      kafka_import_events_truststore_●●●●●●●●●●●●● secret_kafka_import_events_truststore_●●●●●●●●●●●●

      internal_reltio_events_topic: "prod-internal-reltio-events"
      internal_reltio_relations_topic: "prod-internal-reltio-relations-events"

      reconciliate_entities: True # set To False when you want to do the reconciliation only for relations
      reconciliate_relations: True #set To False when you want to do the reconciliation only for entities


      For US PROD Set additional parameters:

      external_user_id: 25084803
      external_group_id: 20796763


    2. For new files, set only the reltio_entities_export_.* or reltio_relations_export_.* variables, according to the export date, time and file name.

    3. Check PRIMARY

      Check which Mongo instance is PRIMARY. If the first instance is primary, execute the ansible playbooks with the --limit mongo1 parameter. Otherwise change the --limit attribute to the other node.
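      A quick way to check from any replica-set member (a sketch; connection options as configured for the environment):

      mongo --quiet --eval 'printjson(rs.isMaster().primary)'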

    4. Execute: ansible-playbook extract_reltio_data.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secret

    5. Check logs 

    6. Execute: docker logs --tail 1000 mongo_mongoimport_<date> -f
    7. Wait until the container stops, then go to the next step.
  3. Create indexes on imported collections:
    1.  db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({uri: -1}, {background: true, name: "idx_uri"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({type: -1}, {background: true, name: "idx_type"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({createdTime: -1}, {background: true, name: "idx_createdTime"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({updatedTime: -1}, {background: true, name: "idx_updatedTime"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({"attributes.Country.lookupCode": -1}, {background: true, name: "idx_country"});
      db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({"crosswalks.value": -1}, {background: true, name: "idx_crosswalks"});
       db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({uri: -1}, {background: true, name: "idx_uri"});
      db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({updatedTime: -1}, {background: true, name: "idx_updatedTime"});
      db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({"crosswalks.value": -1}, {background: true, name: "idx_crosswalks"});
    2. Wait until the indexes are built

    3. Execute: docker logs --tail 1000 mongo_mongo_1 -f
  4. Based on the imported Reltio data generate missing events:
    1. Execute: ansible-playbook generate_compensation_events.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secret
    2. Wait until the docker containers stop. ETA: 1h - 1h 30min
    3. Check docker logs
    4. Verify the .*_compensation_result collections. 
    5. Check the number of Events for each type for entities: 

      HCP_CREATED | HCO_CREATED
      HCP_CHANGED | HCO_CHANGED
      HCP_MERGED | HCO_MERGED | HCP_LOST_MERGE | HCO_LOST_MERGE
      HCP_REMOVED | HCO_REMOVED


    6. Check the number of Events for each type for relations:

      RELATIONSHIP_CREATED
      RELATIONSHIP_CHANGED
      RELATIONSHIP_MERGED
      RELATIONSHIP_LOST_MERGE
      RELATIONSHIP_REMOVED
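      A per-type breakdown can be pulled directly from the result collections; a sketch, where the collection name and the eventType field are illustrative:

      mongo reltio --quiet --eval 'db.getCollection("entities_compensation_result").aggregate([{ $group: { _id: "$eventType", count: { $sum: 1 } } }]).forEach(printjson)'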

    7. Check that the counts do not contain anomalies. Investigate any problems found.
    8. Check the logs in /app/mongo/compensation_events/scripts_entities/.*.out. If the logs contain "REPORT AN ERROR TO Reltio", analyse the problem and report the issue to Reltio.

    9. Check the logs in /app/mongo/compensation_events/scripts_relations/.*.out. If the logs contain "REPORT AN ERROR TO Reltio", analyse the problem and report the issue to Reltio.
  5. When all the events are correct, generate events to the Kafka internal topic:
    1. Execute: ansible-playbook generate_compensation_events_kafka.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secret
  6. Verify the internal kafka topics and docker logs. 


" }, { "title": "Hub Reconciliation Monitoring", "pageID": "273707408", "pageLink": "/display/GMDM/Hub+Reconciliation+Monitoring", "content": "

Check Reconciliation dashboard

Check the reconciliation dashboard for every environment every Monday. Ensure that the set timespan corresponds with the time of the last reconciliation (Friday-Sunday):

Urls

EMEA PROD Reconciliation dashboard

GBL PROD Reconciliation dashboard

AMER PROD Reconciliation dashboard

GBLUS PROD Reconciliation dashboard

APAC PROD Reconciliation dashboard
\"\"

START - the number of entities/relations/mergeTree that the reconciliation started for

END - the number of entities/relations/mergeTree that were fully processed (calculated checksum and checksum from Reltio export differ)

REJECTED - the number of entities/relations/mergeTree that were rejected (calculated checksum and checksum from Reltio export are the same)

Issues

  1. ENTITIES/RELATION/MERGETREE START/REJECTED/END == 0
    Check reconciliation topics if there were produced and consumed events during last weekend
    Check airflow dags
  2. ENTITIES/RELATION/MERGETREE END > 50k
    Check HUB EVENTS dashboard
    Check snowflake

Check HUB EVENTS dashboard

The HUB events dashboard describes events that were processed by the event publisher and sent to the output topics (clients/snowflake).

Urls

EMEA PROD: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/emea-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))
GBL PROD: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/gbl-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))
AMER PROD: https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/amer-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))
GBLUS PROD: https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/gblus-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))
APAC PROD: https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/apac-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))

Applied filter in the Kibana dashboard

metadata.HUB_RECONCILIATION: true


\"\"

Applying the above filter, we receive all reconciliation events that were processed by our streaming channel. Now we need to analyze two cases:

  1. comment field == 'No change in data detected (Entity MD5 checksum did not change), ignoring.'
    \"\"
    Although these events' checksums differed during reconciliation calculation, after recalculating the checksum in entity-enricher the events were found to be the same. In that case we should check the Reltio export.
  2. comment field != 'No change in data detected (Entity MD5 checksum did not change), ignoring.'
    \"\"
    This situation means that those events are really different and needed to be reconciled. For these entities/relations we send an update event to the Snowflake topic. That is the standard process, but the number of such events shouldn't be too big. If it exceeds 50k, we should analyse what has changed in Snowflake (Check snowflake) and verify that everything is appropriate.

Please check events for 5 HCPs, 5 HCOs and 5 relations from different time periods, e.g. the first hour of reconciliation, the middle of reconciliation and the last hour of reconciliation.

Check reltio export

We should download the Reltio export used during reconciliation from the S3 bucket. We can check the archive path in the hub_reconciliation_v2_* DAGs configuration:
E.g.

For AMER PROD: gblmdmhubprodamrasp101478/amer/prod/inbound/hub/hub_reconciliation/entities/archive/

Check snowflake

We should compare the last event to the previous one and see if there are any problems. We can use a query similar to:

\n
select * from landing.HUB_KAFKA_DATA where record_metadata:key='entities/GOyJxoA' ORDER BY record_metadata:CreateTime desc limit 10;
\n

\"\"

If there is only one record in the Snowflake HUB_KAFKA_DATA table, the retention time has passed and we do not have any data to compare to. In this case we can check the object in Reltio. Unfortunately Reltio doesn't keep all changes (e.g. RDM changes), so checking in Reltio doesn't always provide an explanation.

Check object in reltio

Unfortunately Reltio doesn't keep all changes (e.g. RDM changes), so checking in Reltio doesn't always explain what has changed. This solution should be used as a last resort.


To compare objects in Reltio we need to perform Reltio API requests with the time parameter.

The time parameter allows you to get the object in the state it was in at the selected time.

Steps:

  1. Find object in Reltio UI
    \"\"
  2. Find last update date
    \"\"
  3. Perform a Reltio API request without the time parameter

    \n
    curl --location --request GET 'https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw?options=ovOnly' \\\n--header 'Authorization: Bearer 357b69a4-4709-43b8-95df-06ef9839599f'
    \n
  4. Perform a Reltio API request with the time parameter

    \n
    curl --location --request GET 'https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw?options=ovOnly&time=1663064886000' \\\n--header 'Authorization: Bearer 357b69a4-4709-43b8-95df-06ef9839599f'
    \n
  5. Compare results

Check reconciliations topics

Check if new events showed up on the reconciliation topic during the last DAG run and if those events were consumed:
EMEA PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=emea_prod&var-kube_env=emea_prod&var-topic=emea-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
AMER PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=amer_prod&var-kube_env=amer_prod&var-topic=amer-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
GBL PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=gbl_prod&var-kube_env=gbl_prod&var-topic=gbl-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
APAC PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=apac_prod&var-kube_env=apac_prod&var-topic=apac-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
GBLUS PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=gblus_prod&var-kube_env=gblus_prod&var-topic=gblus-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=
\"\"

If no events were generated during the last weekend, please check the Airflow DAGs.

If events were generated but not processed, please check the mdmhub reconciliation service configuration.

Check airflow dags

If there is any issue, please verify the corresponding Airflow DAGs. None of the subsequent stages should have failed:

https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_amer_prod
https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_gblus_prod
https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_emea_prod
https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_gbl_prod
https://airflow-apac-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_apac_prod

Report:

Every reconciliation check should be finished with a short report posted on the Teams chat.

Env | Entities END | Relation END | Merges END | Summary (OK/NOK) | Comment
EMEA PROD




GBL PROD




AMER PROD




GBLUS PROD




APAC PROD






" }, { "title": "Verifying Reconciliation Results", "pageID": "164470187", "pageLink": "/display/GMDM/Verifying+Reconciliation+Results", "content": "
  1. Run reconciliation dag in airflow for given entities, relations, merge-tree
    1. GBLUS DEV - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_dev
    2. GBLUS QA - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_qa
    3. GBLUS STAGE - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_stage
  2. After the reconciliation is finished, go to Kibana for verification (https://mdm-log-management-gbl-us-nonprod.COMPANY.com:5601/app/kibana#)
  3. Go to the Discover dashboard and choose the appropriate filter from the dropdown list: docker.<env>
    1. \"\"
    2. switch to Lucene
    3. choose the correct time range
    4. choose the correct index docker.<env>
  4. Add following custom filters  
    1. the tag depends on the environment; it can be:
      1. docker.dev.mdm-hub-reconciliation-service
      2. docker.qa.mdm-hub-reconciliation-service
      3. docker.stage.mdm-hub-reconciliation-service
      4. docker.prod.mdm-hub-reconciliation-service
    2. data.logger_name - choose depending on which reconciliation type you want to check:
      1. com.COMPANY.mdm.reconciliation.stream.ReconciliationMergeLogic for mergeTree 
      2. com.COMPANY.mdm.reconciliation.stream.ReconciliationLogic - for entities/relations
        1. To check only entities, write entities in the search box to select only one object type (using the Lucene query type)
        2. To check only relations, write relation in the search box to select only one object type (using the Lucene query type)
    3. data.message is START - to check the number of entities/relations/mergeTree that the reconciliation started for
    4. data.message is END - to check the number of entities/relations/mergeTree that were fully processed
    5. data.message is REJECTED - to check the number of entities/relations/mergeTree that were rejected
    6. choose the appropriate time of reconciliation processing
  5. Differences verification between export and mongo
    1. find URI of the object to verify in kibana
      1. check the Event Publisher dashboard for this URI; if the reconciliation process detected this as a difference (END) and the Publisher dashboard shows the comment "No change in data detected (Entity MD5 checksum did not change), ignoring.", it means something is wrong and you can compare the Reltio export entity with the Mongo entity.
    2. download export from S3 (us/<env>/inbound/hub/hub_reconciliation/<object_type>/archive)
      1. find the JSON in the part_ files - "zgrep "entities/<id>" part-00*"
      2. save the JSON to the file that will be passed to the calculateChecksum.groovy script - file format:
      3. [
        json,
        json
        ]

    3. process exported object using calculateChecksum.groovy from docker and save the object
      1. Modify the script:
        • add EntityKt filteredEntity = EntityFilter.filter to the reconciliation event output so you can check the whole JSON in the output file
        • change the output line to outfile.append(uri + "|" + newLine + "\\n")
        • check the file for reference and use this calculateChecksum.groovy
      2. Script RUN:
        1. Run with the following parameters: D:\\docs\\EMEA\\Reconciliation_PROCESS\\entities\\part_01020222.txt entities FULL COMPANYCustID 1 https://api-emea-prod-gbl-mdm-hub.COMPANY.com:8443/prod/gw bhW
          1. path
          2. entities/relations/merge_tree
          3. FULL - to get full JSON compare MD5
          4. this is from the DAG config - hub_reconciliation_v2.yml.params.nonOvAttrToInclude
          5. manager URL
          6. manager API KEY
        2. \"\"
        3. Output file is in the - D:\\opt\\kafka_utils\\data
    4. export the object with the same URI from MongoDB using simple JSON format
      1. \"\"
    5. compare those two exports using a compare tool, but reformat the JSONs first
      1. Use Intellij compare two JSON files function
" }, { "title": "Snowflake:", "pageID": "337856693", "pageLink": "/pages/viewpage.action?pageId=337856693", "content": "" }, { "title": "How to fix issue in Reltio Parser with lookup typos", "pageID": "337858475", "pageLink": "/display/GMDM/How+to+fix+issue+in+Reltio+Parser+with+lookup+typos", "content": "

This procedure shows how to manage typos in lookup codes that can resolve to the same alias in Snowflake, producing errors in the Reltio Configuration Parser.

  1. Go to the ReltioConfigurations collection in MongoDB
  2. Find the configurations with the typo that you want to fix (one by one or with filters; see the sketch after this list)
  3. Using the Edit Document option, open each affected configuration and find the attribute with the wrong lookupCode
  4. Fix the typos and save the changes
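When many configurations may be affected, a filter can locate lookup codes with trailing whitespace; a sketch, where the database name and the attribute path are assumptions to adapt to the actual document shape:

mongo reltio --quiet --eval 'db.getCollection("ReltioConfigurations").find({ "attributes.lookupCode": { $regex: "\\s$" } }).limit(5).forEach(printjson)'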


Example with screenshots

In this example we fix a whitespace character added at the end of the "DCRType" lookup code on APAC DEV. We go to this environment:

\"\"

Find our configurations:

\"\"

Check them for possible typos:

\"\"

Fix it in each affected configuration and save. This ensures that the next parsing run will be successful.

" }, { "title": "SSL Certificates:", "pageID": "218453496", "pageLink": "/pages/viewpage.action?pageId=218453496", "content": "" }, { "title": "Generating a CSR", "pageID": "218454469", "pageLink": "/display/GMDM/Generating+a+CSR", "content": "

Go to the configuration repository (mdm-hub-env-config).

Find the expiring certificate.

Kong

For KONG / KAFKA FLEX PROD mdm-hub-env-config/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.key 

The certificate should be in ssl_certs/{{ env }}/certs/{{ url }}.pem

For example: ssl_certs/prod/certs/mdm-gateway.COMPANY.com.pem


We will generate our new certificate from the existing private key. The private key is in the same directory as the certificate, ending with the .key extension.

Copy it to a temporary directory and decrypt it:

\n
anuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ ls -l\ntotal 32\n-rwxrwxrwx 1 anuskp anuskp  7353 Nov 12 11:59 mdm-gateway.COMPANY.com.key\n-rwxrwxrwx 1 anuskp anuskp 24459 Jan 28 15:05 mdm-gateway.COMPANY.com.pem\nanuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ cp mdm-gateway.COMPANY.com.key ~/temp\nanuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ cd ~/temp\nanuskp@CF-341562:~/temp$ ansible-vault decrypt ./mdm-gateway.COMPANY.com.key --vault-password-file=~/ap\nDecryption successful
\n


The contents of this file are confidential. Do not share them with anyone outside of your team.


Generate a CSR from the private key:

CSR Value Guidelines

During the last certificate request we received the following CSR guidelines:

Common Name: Needs to have FQDN

Organizational Unit: No specific requirement -  optional attribute.

Organization: COMPANY, Inc (NOT COMPANY, COMPANY Inc, or COMPANY Inc.)

Locality: City or Location must be spelled correctly. No abbreviations allowed

State: Must use full name of State or Province, no abbreviations allowed

Country: US (Always use 2 char. Country code)

Key Size: at least 2048 is recommended.


\n
anuskp@CF-341562:~/temp$ openssl req -new -key mdm-gateway.COMPANY.com.key -out mdm-gateway.COMPANY.com.csr\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:US\nState or Province Name (full name) [Some-State]:Connecticut\nLocality Name (eg, city) []:Groton\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY, Inc\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:mdm-gateway-int.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge password []:\nAn optional company name []:\nanuskp@CF-341562:~/temp$ ls -l\ntotal 16\n-rw-r--r-- 1 anuskp anuskp 1098 Feb 10 15:58 mdm-gateway.COMPANY.com.csr\n-rw------- 1 anuskp anuskp 1734 Feb 10 15:52 mdm-gateway.COMPANY.com.key
\n


All information provided should be exactly the same as the existing certificate's. The email should be set to the support DL:

\"\"


Kafka - existing guide

Keystores/Truststores should be in ssl_certs/{{ env }}/ssl/server.keystore.jks

For example: ssl_certs/prod/ssl/server.keystore.jks


Go to some temporary directory and generate new Keystore:

\n
anuskp@CF-341562:~/temp$ keytool -genkeypair -alias kafka.mdm-gateway.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN = kafka.mdm-gateway.COMPANY.com, O = COMPANY"\nEnter keystore <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=2031523">●●●●●●●●●●●●●●●●●●</a> new password:\nEnter key password for <kafka.mdm-gateway.COMPANY.com>\n        (RETURN if same as keystore password):\n\nWarning:\nThe JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".
\n


The key password should be the same as the keystore password. After the certificate has been switched, remember to save the new keystore password in inventory/{{ env }}/group_vars/kafka/secret.yml.

In the -dname param, insert the same parameters as the existing certificate's.

Generate CSR from the keystore:

\n
anuskp@CF-341562:~/temp$ keytool -certreq -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.csr -keystore server.keystore.jks\nEnter keystore <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=2031525">●●●●●●●●●●●●●●●●●●●</a>\nThe JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".\nanuskp@CF-341562:~/temp$ ls -l\ntotal 8\n-rw-r--r-- 1 anuskp anuskp 1027 Feb 10 16:11 kafka.mdm-gateway.COMPANY.com.csr\n-rw-r--r-- 1 anuskp anuskp 2161 Feb 10 16:07 server.keystore.jks
\n


EFK

Every Elasticsearch node may have its own certificate.

There is only one certificate for Kibana.


Generating CSRs from existing .key files is exactly the same as for Kong. Remember to set the parameters ("O", "L", "CN") exactly the same as the existing certificate's.




" }, { "title": "Requesting a new certificate", "pageID": "218454527", "pageLink": "/display/GMDM/Requesting+a+new+certificate", "content": "

Go to https://requestmanager.COMPANY.com/. Search for Digital Certificates and click the first and only position found:

\"\"


COMPANY-issued certificates

Check the COMPANY SSL Certificate - Internal Only checkbox.

\"\"


Entrust-issued certificates

Check the Entrust External SSL certificate checkbox and click the first link:

\"\"


You will be redirected to the Entrust portal. Check if renewing an existing certificate works. If it doesn't, follow the steps below:



Wait for the email with new certificate from Entrust.




" }, { "title": "Rotating EFK certificates", "pageID": "218454407", "pageLink": "/display/GMDM/Rotating+EFK+certificates", "content": "
  1. Elasticsearch

    1. Single instance (non-prod clusters)

      Go to Elasticsearch config directory on host. For example:

      /app/efk/elasticsearch/config - US DEV (amraelp00005781.COMPANY.com)
      /apps/efk/elasticsearch/config - GBL DEV (euw1z1dl039.COMPANY.com)

      \n
      [mdm@euw1z1dl039 config]$ ls -l\ntotal 48\n-rw-rw-r-- 1 mdm   7000  1445 Feb 22  2019 admin-ca.pem\n-rw------- 1 mdm docker  1708 Jul 27  2020 elasticsearch-admin-key.pem\n-rw------- 1 mdm docker  1765 Jul 27  2020 elasticsearch-admin.pem\n-rw-rw---- 1 mdm docker   199 Mar 30  2020 elasticsearch.keystore\n-rw------- 1 mdm docker  1013 Jul 27  2020 elasticsearch.yml\n-rw------- 1 mdm docker  1704 Jul 27  2020 esnode-key.pem\n-rw------- 1 mdm docker  1801 Feb  9 05:00 esnode.pem\n-rw------- 1 mdm docker  3320 Mar 30  2020 jvm.options\n-rw------- 1 mdm docker 10899 Mar 30  2020 log4j2.properties\n-rw------- 1 mdm docker  1972 Jul 27  2020 root-ca.pem
      \n


      Check the elasticsearch.yml config file. By default, esnode.pem should contain the certificate and esnode-key.pem should contain the private key.
      If you have generated the new CSR based on the existing private key, you only need to update the esnode.pem file:

      \n
      [mdm@euw1z1dl039 config]$ vi esnode.pem
      \n


      Remove all file contents and copy-paste the new certificate. Save the changes.

      Now restart the container and make sure it's working and not throwing errors in the logs:

      \n
      [mdm@euw1z1dl039 config]$ docker restart elasticsearch\nelasticsearch\n[mdm@euw1z1dl039 config]$ docker logs --tail 100 -f elasticsearch
      \n


      Log into Kibana and check that dashboards are correctly displaying data.


    2. Clustered (production clusters)

      On every Elasticsearch node go to the Elasticsearch config directory and replace esnode.pem certificate file, as shown in 1a.

      Once done, restart all Elasticsearch instances. Check logs. All instances should throw the following error in logs:

      \n
      [2022-02-10T10:53:19,770][ERROR][c.f.s.a.BackendRegistry ] [prod-gbl-data-2] Not yet initialized (you may need to run sgadmin)\n[2022-02-10T10:53:19,798][ERROR][c.f.s.a.BackendRegistry ] [prod-gbl-data-2] Not yet initialized (you may need to run sgadmin)
      \n


      Now, run the following command on all hosts in Elasticsearch cluster:

      \n
      docker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h {{ elasticsearch_cluster_network_host }} -cn {{ elasticsearch_cluster_name }} -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem  -key ../../../config/elasticsearch-admin-key.pem"
      \n


      where:

      {{ elasticsearch_cluster_network_host }} - instance's name in cluster, check in host_vars, for example (in configuration repository): mdm-hub-env-config/inventory/prod/host_vars/efk1/all.yml
      {{ elasticsearch_cluster_name }} - cluster name, is the same for all nodes, check in group_vars, for example: mdm-hub-env-config/inventory/prod/group_vars/efk-services/all.yml

      So, on example of GLOBAL PROD (2 clusters):

      Run the following on PROD4 (euw1z1pl025.COMPANY.com):

      \n
      [mdm@euw1z1pl025 config]$ docker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h 'euw1z1pl025.COMPANY.com' -cn 'elasticsearch-prod-gbl-cluster' -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem  -key ../../../config/elasticsearch-admin-key.pem"\nSearch Guard Admin v7\nWill connect to euw1z1pl025.COMPANY.com:9300 ... done\nConnected as CN=elasticsearch-admin.COMPANY.com,O=COMPANY\nElasticsearch Version: 7.6.2\nSearch Guard Version: 7.6.2-41.0.0\nContacting elasticsearch cluster 'elasticsearch-prod-gbl-cluster' and wait for YELLOW clusterstate ...\nClustername: elasticsearch-prod-gbl-cluster\nClusterstate: YELLOW\nNumber of nodes: 2\nNumber of data nodes: 2\nsearchguard index already exists, so we do not need to create one.\nINFO: searchguard index state is YELLOW, it seems you miss some replicas\nPopulate config from /usr/share/elasticsearch/plugins/search-guard-7/sgconfig\n../sgconfig/sg_action_groups.yml OK\n../sgconfig/sg_internal_users.yml OK\n../sgconfig/sg_roles.yml OK\n../sgconfig/sg_roles_mapping.yml OK\n../sgconfig/sg_config.yml OK\n../sgconfig/sg_tenants.yml OK\nWill update '_doc/config' with ../sgconfig/sg_config.yml\n   SUCC: Configuration for 'config' created or updated\nWill update '_doc/roles' with ../sgconfig/sg_roles.yml\n   SUCC: Configuration for 'roles' created or updated\nWill update '_doc/rolesmapping' with ../sgconfig/sg_roles_mapping.yml\n   SUCC: Configuration for 'rolesmapping' created or updated\nWill update '_doc/internalusers' with ../sgconfig/sg_internal_users.yml\n   SUCC: Configuration for 'internalusers' created or updated\nWill update '_doc/actiongroups' with ../sgconfig/sg_action_groups.yml\n   SUCC: Configuration for 'actiongroups' created or updated\nWill update '_doc/tenants' with ../sgconfig/sg_tenants.yml\n   SUCC: Configuration for 'tenants' created or updated\nDone with success
      \n



      Run the following on PROD5 (euw1z2pl024.COMPANY.com):

      \n
      [mdm@euw1z2pl024 config]$ docker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h 'euw1z2pl024.COMPANY.com' -cn 'elasticsearch-prod-gbl-cluster' -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem  -key ../../../config/elasticsearch-admin-key.pem"\nSearch Guard Admin v7\nWill connect to euw1z2pl024.COMPANY.com:9300 ... done\nConnected as CN=elasticsearch-admin.COMPANY.com,O=COMPANY\nElasticsearch Version: 7.6.2\nSearch Guard Version: 7.6.2-41.0.0\nContacting elasticsearch cluster 'elasticsearch-prod-gbl-cluster' and wait for YELLOW clusterstate ...\nClustername: elasticsearch-prod-gbl-cluster\nClusterstate: YELLOW\nNumber of nodes: 2\nNumber of data nodes: 2\nsearchguard index already exists, so we do not need to create one.\nINFO: searchguard index state is YELLOW, it seems you miss some replicas\nPopulate config from /usr/share/elasticsearch/plugins/search-guard-7/sgconfig\n../sgconfig/sg_action_groups.yml OK\n../sgconfig/sg_internal_users.yml OK\n../sgconfig/sg_roles.yml OK\n../sgconfig/sg_roles_mapping.yml OK\n../sgconfig/sg_config.yml OK\n../sgconfig/sg_tenants.yml OK\nWill update '_doc/config' with ../sgconfig/sg_config.yml\n   SUCC: Configuration for 'config' created or updated\nWill update '_doc/roles' with ../sgconfig/sg_roles.yml\n   SUCC: Configuration for 'roles' created or updated\nWill update '_doc/rolesmapping' with ../sgconfig/sg_roles_mapping.yml\n   SUCC: Configuration for 'rolesmapping' created or updated\nWill update '_doc/internalusers' with ../sgconfig/sg_internal_users.yml\n   SUCC: Configuration for 'internalusers' created or updated\nWill update '_doc/actiongroups' with ../sgconfig/sg_action_groups.yml\n   SUCC: Configuration for 'actiongroups' created or updated\nWill update '_doc/tenants' with ../sgconfig/sg_tenants.yml\n   SUCC: Configuration for 'tenants' created or updated\nDone with success
      \n


      Check the logs. There should be no new errors. Check Kibana: make sure you can log in and view data in dashboards.

  2. Kibana

    Go to Kibana config directory on host. For example:

    /app/efk/kibana/config

    \n
    [root@amraelp00005781 config]# ls -l\ntotal 12\n-rw-r--r-- 1 mdmihnpr mdmihub 1964 Jul 10  2020 kibana.crt\n-rw-r--r-- 1 mdmihnpr mdmihub 1704 Jul 10  2020 kibana.key\n-rw-rwxr-- 1 mdmihnpr mdmihub  536 Jul  5  2020 kibana.yml
    \n


    Modify the kibana.crt file. Remove its contents and copy-paste new certificate.

    \n
    [root@amraelp00005781 config]# vi kibana.crt
    \n


    Do the same for kibana.key, unless you have generated the CSR based on the existing private key.

    Restart the Kibana container and check logs:

    \n
    [root@amraelp00005781 config]# docker restart kibana\nkibana\n[root@amraelp00005781 config]# docker logs --tail 100 -f kibana
    \n


    Wait for Kibana to come back up and make sure there are no errors in the logs and that you can log in to the web app and view data in dashboards.

REMEMBER TO PUSH NEW CERTIFICATES TO CONFIGURATION REPO



" }, { "title": "Rotating FLEX Kafka certificates", "pageID": "387161356", "pageLink": "/display/GMDM/Rotating+FLEX+Kafka+certificates", "content": "

The Kafka FLEX certificate is the same as the Kong FLEX certificate.

1. Email to Santosh

If there is a need to rotate the Kafka certificate on the FLEX environment, approval from the business is required.

To: santosh.dube@COMPANY.com

Cc: dl-atp_mdmhub_support@COMPANY.com

Hi Santosh,

We created the RFC ticket in our Jira - <Link to the ticket>

The FLEX PROD Kafka certificate is expiring; we need to go through the deployment procedure and replace the certificate on our Kafka.

We prepared the following deployment procedure – '<doc>’ – added to attachment.


Could you please approve this request, as we need to trigger this deployment to replace the certificates?


Let me know in case of any questions.

Regards,

\"\"



Change the certificate:


2. Check if CA cert has changed

!IMPORTANT! If intermediate certificate changed, it would be required to contact FLEX team to replace it. 


To: DL-CBK-MAST@COMPANY.com anisha.sahu@COMPANY.com santosh.dube@COMPANY.com

Dear FLEX team,

We are providing a new client.truststore.jks file which should be replaced on your side. The change was forced by a change in the policy of providing new certificates and by server retirement. Because the new certificate is signed by a different intermediate CA, the client truststore needs to be changed.

Please treat this as a high priority as the certificate will expire in 2 days.

Kind regards,


Remember to attach new client.truststore.jks file!

It is not required to create an additional email thread with the client if only the certificate needs to be changed.


3. Rotate certificate

3.1 Create keystore


Create a new keystore with the new key pair. The private key should be in the repository under mdm-hub-env-config/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.key and the certificate should be requested.

Tools → Import Key Pair →

\"\"


→ PKCS #8 → 

\"\"

→ then choose the private key and certificates from the directories in the repo.


Passwords can be found under mdm-hub-env-config/inventory/prod_us/host_vars/kafka1/secret.yml

3.2 Rotate certificates on machines

Once done, log into host and go to /app/kafka/ssl.

Back up the existing server.keystore.jks:

\n
$ cp server.keystore.jks server.keystore.jks-backup
\n

And upload the modified server.keystore.jks.


Restart the Kafka container and wait for it to come back up:

\n
$ docker restart kafka_kafka_1
\n


Replace the keystore and restart the Kafka container on each node.

Wait for Kafka to come up and become fully operational before restarting the next node. After the certificate has been successfully rotated, push the modified keystore to the mdm-hub-env-config repository. The CER and CSR files are no longer useful and can be disposed of.



Provide the evidence in the email thread:


After the replacement, an evidence file should be sent:

\"\"


" }, { "title": "Rotating FLEX Kong certificates", "pageID": "387161359", "pageLink": "/display/GMDM/Rotating+FLEX+Kong+certificates", "content": "

The Kong FLEX certificate is the same as the Kafka FLEX certificate.


Rotating FLEX Kong certificate.

If there is a need to rotate the Kong certificate on the FLEX environment, approval from the business is required.

To: santosh.dube@COMPANY.com

Cc: dl-atp_mdmhub_support@COMPANY.com

Hi Santosh,

We created the RFC ticket in our Jira - <Link to the ticket>

The FLEX PROD Kong certificate is expiring; we need to go through the deployment procedure and replace the certificate on our Kong API gateway.

We prepared the following deployment procedure – '<doc>' – attached to this email.


Could you please approve this request so that we can trigger this deployment and replace the certificates?


Let me know in case of any questions.

Regards,
\"\"




Change the certificate:

!IMPORTANT! If the intermediate certificate has changed, the FLEX team must be contacted to replace it. 


To: DL-CBK-MAST@COMPANY.com anisha.sahu@COMPANY.com santosh.dube@COMPANY.com

Dear FLEX team,

We are providing a new client.truststore.jks file which should be changed on your side. The change was forced by a change in the policy of providing new certificates and by server retirement. Because the new certificate is signed by a different intermediate CA, the client truststore needs to be replaced.

Please treat this as a high priority as the certificate will expire in 2 days.

Kind regards,


Remember to attach new client.truststore.jks file!

It is not required to create an additional email thread with the client if only the certificate needs to be changed. 

  1. You should receive three certificates from COMPANY/Entrust: the Server Certificate and the Intermediate (PBACA G2), or the Intermediate and the Root. Open the Server Certificate in a text editor:

    \"\"

    \"\"


    Copy all received certificates into a chain in the following sequence:

    1. Server Certificate
    2. Intermediate
    3. Root:

    \"\"

  2. Go to main directory with command line and ansible installed
  3. Make sure you are on master branch and have newest changes fetched
    git checkout master
    git pull
  4. Comment out all sections in mdm-hub-env-config\inventory\prod_us\group_vars\kong\all.yml except “kong_certificates”

    \"\"


  5. Comment out all sections in mdm-hub-env-config\roles\update_kong_api\tasks\main.yml except the “Add Certificates” part

    \"\"


  6. Execute ansible playbook
    (Limit it to only one Kong host in the cluster)
    $ ansible-playbook update_kong_api.yml -i inventory/prod_us/inventory --vault-password-file=/home/karol/password --limit kong1
  7. Verify if the server is responding with the correct certificate (see the expiry-check sketch below)
    openssl s_client -connect mdm-ihub-us-trade-prod.COMPANY.com:443 </dev/null
    openssl s_client -connect amraelp00006207.COMPANY.com:8443 </dev/null

          openssl s_client -connect amraelp00006208.COMPANY.com:8443 </dev/null

          openssl s_client -connect amraelp00006209.COMPANY.com:8443 </dev/null
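
To confirm the served certificate is the new one, pipe the s_client output through openssl x509 to print the subject and validity dates, for example:

\n
openssl s_client -connect mdm-ihub-us-trade-prod.COMPANY.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
\n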




Provide the evidence in the email thread:


After the replacement, an evidence file should be sent:

\"\"


" }, { "title": "Rotating Kafka certificates", "pageID": "229180645", "pageLink": "/display/GMDM/Rotating+Kafka+certificates", "content": "

After receiving the signed SSL certificate, place it in the same mdm-hub-env-config repo directory as the existing Kafka keystore. For example:
ssl_certs/prod/ssl/[server.keystore.jks] - for Global PROD


Add the certificate to keystore, using the command:

\n
$ keytool -importcert -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.cer -keystore server.keystore.jks
\n

Important: use the same alias as the existing certificate in this keystore, so that it is overwritten
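
Before uploading, it is worth confirming the new certificate landed under the expected alias and has the expected validity dates (you will be prompted for the keystore password):

\n
$ keytool -list -v -keystore server.keystore.jks -alias kafka.mdm-gateway.COMPANY.com | grep -E 'Alias|Valid'
\n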


Once done, log into host and go to /app/kafka/ssl.

Back existing server.keystore.jks up:

\n
$ cp server.keystore.jks server.keystore.jks-backup
\n

And upload the modified server.keystore.jks.


Restart Kafka container and wait for it to come back up:

\n
$ docker restart kafka_kafka_1
\n


If there are multiple Kafka instances (Production), replace the keystore and restart the Kafka container on each node. Wait for Kafka to come up and become fully operational before restarting the next node. You can check node availability using, for example, AKHQ.

After the certificate has been successfully rotated, push the modified keystore to the mdm-hub-env-config repository. The CER and CSR files are no longer useful and can be disposed of.

" }, { "title": "Rotating Kong certificate", "pageID": "218453498", "pageLink": "/display/GMDM/Rotating+Kong+certificate", "content": "

You should receive three certificates from COMPANY/Entrust: the Server Certificate and the Intermediate (PBACA G2), or the Intermediate and the Root. Open the Server Certificate in a text editor:

\"\"

\"\"


Copy all received certificates into a chain in the following sequence:

  1. Server Certificate
  2. Intermediate
  3. Root:

\"\"


Save the file as {hostname}.pem - for example mdm-gateway.COMPANY.com.pem - and switch it in the configuration repository:
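
For example, the chain file can be assembled on the command line (input file names are illustrative; the order must match the sequence above):

\n
$ cat server.crt intermediate.crt root.crt > mdm-gateway.COMPANY.com.pem
\n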


Go to appropriate Kong group_vars:


Make sure all "create_or_update" flags are set to "False":

\"\"


Go down to #CERTIFICATES and switch the "create_or_update" flag. Path to the .pem file should not have changed - if you chose a different filename, adjust it here:

\"\"


Run the update_kong_api_v1.yml playbook. Limit it to only one Kong host in the cluster. After it has finished, switch the "create_or_update" flag back to "False" and push new certificate to the repository.

$ ansible-playbook update_kong_api_v1.yml -i inventory/prod/inventory --vault-password-file=~/ap --limit kong_v1_01


Check all SNIs on all Kong instances using s_client:


$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl017.COMPANY.com:8443
$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl021.COMPANY.com:8443
$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl022.COMPANY.com:8443
$ openssl s_client -servername mdm-gateway.COMPANY.com -connect euw1z1pl017.COMPANY.com:8443
...



" }, { "title": "Hub upgrade procedures and calendar", "pageID": "401611801", "pageLink": "/display/GMDM/Hub+upgrade+procedures+and+calendar", "content": "

Backend components upgrade policy

  1. Major upgrade once a year
  2. Patch upgrades every quarter

Upgrade table

Component | Current version | Latest upgrade date | Newest patch release | Planned patch upgrade date | Newest stable release | Planned major upgrade date | Notes
Prometheus | 2.53.4 (monitoring host) | 2025-04-10 | - | - | 2.53.4 | - | MR-10396
kube-prometheus-stack | 61.7.2 | 2025-05 | - | - | 70.1.0 | - | MR-9578
Airflow | 2.7.2 | 2023-11 | 2.7.3 | - | 2.10.5 | 2025 Q2 | MR-10437
Monstache | 6.7.21 | 2025-05 | - | - | 6.7.21 | - | MR-10437
Kong Gateway | 3.4.2 | 2024-09 | - | - | 3.9.0 | 2025 Q3 |
Kong Ingress Controller | 3.2.0 | 2024-09 | 3.2.4 | - | 3.4.4 | 2025 Q3 |
Kong external proxy | 3.3.1 | 2023-10 | - | - | 3.9.0 | 2025 Q3 |
OpenJDK - AdoptOpenJDK | 11.0.14.1_1 | 2022(?) | 11.0.27_6 | 2025 Q2 | Temurin 17.0.15+6-LTS | 2025 Q3 |
Jenkins | 2.462.3 | 2024-10 | - | - | 2.504.1 | 2025 Q3 | All versions newer than 2.462.3 require Java 17
Consul | 1.16.2 | 2023-11 | 1.16.6 | - | 1.21.0 | 2025 Q2 | MR-10437
Elasticsearch | 8.11.4 | 2024-02 | - | - | 9.0.1 | 2025 Q4 |
Fluentd | 1.16.5 | 2024-05 | 1.16.8 | - | 1.18 | 2025 Q4 | Replace with Fluent Bit instead?
Fluent Bit | 2.2.3 | 2025-02 | - | - | 4.0.1 | 2025 Q4 |
Apache Kafka | 3.7.0 | 2024-07 | 3.7.2 | 2025 Q2 | 4.0.0 | 2026 Q1 |
AKHQ | 0.23.0 | 2024-08 | - | - | 0.25.1 | 2026 Q1 |
MongoDB | 6.0.21 | 2025-04 | - | - | - | 2026 Q2 | MR-10399

" }, { "title": "Airflow upgrade procedure", "pageID": "401611840", "pageLink": "/display/GMDM/Airflow+upgrade+procedure", "content": "


Introduction

Airflow used by MDM HUB is maintained by Apache: https://airflow.apache.org/

To deploy Airflow we are using the official Airflow Helm chart: https://github.com/airflow-helm/charts



Prerequisite

  1. Verify changelog for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.
    https://airflow.apache.org/docs/apache-airflow/stable/release_notes.html
  2. Ensure base images are mirrored to COMPANY artifactory.


Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade Steps

Airflow version upgrade

  1. Apply changes in mdm-hub-inbound-services:
    1. Change the airflow airflowVersion and defaultAirflowTag tag to the updated version in:
      1. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/airflow/src/main/helm/values.yaml
    2. Change airflow docker base image version in:
      1. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/airflow/docker/Dockerfile 
    3. Apply other changes to helm chart if necessary (Prerequisite step 1)
  2. Apply configuration changes in mdm-hub-cluster-env:
    1. Apply needed changes to configuration if necessary (Prerequisite step 1)
      1. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/airflow/values.yaml
  3. Build and deploy changes with new configuration.
  4. Verify if the component is working properly:
    1. Check if component started
    2. Go to the Airflow main page and verify if everything is working as expected (no log in issues, no errors, can see dags etc.)
    3. Check component logs for errors
  5. Check if all dags are working properly
    1. For dags with periodic schedule - wait for them to be triggered 
    2. For dags executed from UI  - execute all of them with test data 

Airflow helm template upgrade

  1. Deploy the current Airflow version on a local environment from mdm-hub-inbound-services
  2. Get current airflow helm manifest and save it to airflow_manifest_1.yaml

    \n
    helm get manifest -n airflow airflow > airflow_manifest_1.yaml
    \n
  3. Pull the new Airflow chart version from the chart repository and replace it in the airflow/charts directory. Copy the old chart version to a temporary directory outside the repository for comparison

    \n
    helm pull apache-airflow/airflow --version "1.13.0"
    mv airflow-1.13.0.tgz ${repo_dir}/mdm-hub-inbound-services/helm/airflow/src/main/helm/charts/airflow-1.13.0.tgz
    \n
  4. Extract the old Helm chart and check the MODIFICATION_LIST file for modifications applied to the chart. Apply the needed changes to the new Airflow chart.

    \n
    tar -xzf airflow-1.10.0_modified.tgz
    cat airflow/MODIFICATION_LIST
    \n
  5. Perform the helm upgrade with the new chart version. Verify that Airflow is working as expected
  6. Get current airflow manifest and save it to airflow_manifest_2.yaml

    \n
    helm get manifest -n airflow airflow > airflow_manifest_2.yaml
    \n
  7. Compare the generated manifests and verify whether there are breaking changes (see the diff sketch below)
  8. Fix all issues
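
A plain diff of the two saved manifests is enough for the comparison in step 7, for example:

\n
diff -u airflow_manifest_1.yaml airflow_manifest_2.yaml | less
\n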




Past upgrades

Upgrade Airflow x → y

Description:


Procedure:


Reference tickets:


Reference PR's:
http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/pull-requests/1283/overview



" }, { "title": "AKHQ upgrade procedure", "pageID": "401611810", "pageLink": "/display/GMDM/AKHQ+upgrade+procedure", "content": "



Introduction

AKHQ used in MDM HUB is maintained by tchiotludo/akhq.




Prerequisite

  1. Verify changelog for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.
  2. Ensure base images are mirrored to COMPANY artifactory.




Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade Steps

  1. Apply changes in mdm-hub-inbound-services:
    1. Change akhq image tag to updated version in:
      1. mdm-hub-inbound-services/helm/kafka/chart/src/main/helm/templates/akhq/akhq.yaml
      2. mdm-hub-inbound-services/helm/kafka/chart/src/main/helm/values.yaml
    2. Apply other changes to helm chart if necessary (Prerequisite step 1)
  2. Apply configuration changes in mdm-hub-cluster-env:
    1. Change akhq image tag to updated version in mdm-hub-cluster-env/amer/sandbox/namespaces/amer-backend/values.yaml (example for SBX)
    2. Apply other changes to configuration if necessary (Prerequisite step 1)
  3. Build and deploy changes with new configuration.
  4. Verify if the component is working properly:
    1. Check if component started
    2. Go to the AKHQ dashboard and verify if everything is working as expected (no log in issues, no errors, can see topics, consumergroups etc.)
    3. Check component logs for errors




Past upgrades

Upgrade AKHQ 0.14.1 → 0.24.0 (0.23.0)

Description:

This update required an upgrade to version 0.24.0. After checking the changes between the previous and target versions, it became obvious that additional changes to the Helm chart were required.

Errors were detected during upgrade verification for which no fix was found in version 0.24.0. That resulted in changing the version to 0.23.0, where the issue didn't occur.

Procedure:

  1. Pushed base image to COMPANY artifactory: artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.24.0
  2. Applied inbound-services changes:
    1. changed image tag to 0.24.0 in:
      1. akhq.yaml
      2. values.yaml
    2. Applied necessary changes to akhq-cm.yaml (based on changelog requirements):
      1. added micronaut configuration
      2. moved topic-data property under ui-options property
      3. adjusted security configuration
  3. Changed image tag to 0.24.0 in cluster-env values.yaml
  4. Built inbound-services changes and deployed them with the new configuration on the SBX environment.
  5. Verified if component is working:
    1. component started
    2. there was an error present after logging in
    3. there was an exception thrown in logs:
      java.lang.NullPointerException: null
        at org.akhq.repositories.AvroWireFormatConverter.convertValueToWireFormat(AvroWireFormatConverter.java:39)
        at org.akhq.repositories.RecordRepository.newRecord(RecordRepository.java:454)
        at org.akhq.repositories.RecordRepository.lambda$getLastRecord$3(RecordRepository.java:109)
        at java.base/java.lang.Iterable.forEach(Unknown Source)
        at org.akhq.repositories.RecordRepository.getLastRecord(RecordRepository.java:107)
        at org.akhq.controllers.TopicController.lastRecord(TopicController.java:224)
        at org.akhq.controllers.$TopicController$Definition$Exec.dispatch(Unknown Source)
        at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:351)
        at io.micronaut.context.DefaultBeanContext$4.invoke(DefaultBeanContext.java:583)
        at io.micronaut.web.router.AbstractRouteMatch.execute(AbstractRouteMatch.java:303)
        at io.micronaut.web.router.RouteMatch.execute(RouteMatch.java:111)
        at io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:103)
        at io.micronaut.http.server.RouteExecutor.lambda$executeRoute$14(RouteExecutor.java:656)
        at reactor.core.publisher.FluxDeferContextual.subscribe(FluxDeferContextual.java:49)
        at reactor.core.publisher.InternalFluxOperator.subscribe(InternalFluxOperator.java:62)
        at reactor.core.publisher.FluxSubscribeOn$SubscribeOnSubscriber.run(FluxSubscribeOn.java:194)
        at io.micronaut.reactive.reactor.instrument.ReactorInstrumentation.lambda$null$0(ReactorInstrumentation.java:62)
        at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)
        at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)
        at io.micrometer.core.instrument.composite.CompositeTimer.recordCallable(CompositeTimer.java:68)
        at io.micrometer.core.instrument.Timer.lambda$wrap$1(Timer.java:171)
        at io.micronaut.scheduling.instrument.InvocationInstrumenterWrappedCallable.call(InvocationInstrumenterWrappedCallable.java:53)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
  6. Found no fix / workaround for this in 0.24.0 version, decided to change version to 0.23.0
  7. Applied inbound-services changes:
    1. changed image tag to 0.23.0 in:
      1. akhq.yaml
      2. values.yaml
  8. Changed image tag to 0.23.0 in cluster-env values.yaml
  9. Built inbound-services changes and deployed them with the new configuration on the SBX environment.
  10. Verified if component is working:
    1. component started
    2. no errors present on dashboard, everything is as expected
    3. no errors in logs

Reference tickets:

[MR-6778] Prepare AKHQ upgrade plan to version 0.24.0

Reference PR's:

[MR-6778] AKHQ upgraded to 0.23.0

[MR-6778] SANDBOX: AKHQ version change to 0.23.0

" }, { "title": "Consul upgrade procedure", "pageID": "401611813", "pageLink": "/display/GMDM/Consul+upgrade+procedure", "content": "

Introduction

Consul used in MDM is installed using the official Consul Helm chart provided by HashiCorp.


Prerequisite

Before upgrade verify checklist:



Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Upgrade Consul Helm chart
  2. Upgrade Consul Docker images
  3. Update this confluence page


Past upgrades

Upgrade 1.10.2 → 1.16.2

Description

This was the only Consul upgrade so far.

Procedure

  1. Upgrade Consul Helm chart
    1. Add Hashicorp Helm repo and find the newest Consul chart and app version

      \n
      helm repo add hashicorp https://helm.releases.hashicorp.com
      helm search repo hashicorp/consul
      \n
    2. In helm/consul/src/main/helm/Chart.yaml uncomment repository and change version number
    3. Update dependencies

      \n
      cd helm/consul/src/main/helm
      helm dependency update
      \n
    4. Comment repository line back in Chart.yaml
    5. Commit only the updated charts/consul-*.tgz and Chart.yaml files
  2. Upgrade Consul Docker image
    1. Pull official images from Docker Hub
      1. https://hub.docker.com/r/hashicorp/consul/tags
      2. https://hub.docker.com/r/hashicorp/consul-k8s-control-plane/tags
    2. Tag images with artifactory.COMPANY.com/mdmhub-docker-dev/ prefix
    3. Push images to Artifactory
  3. Update cluster-env configuration (backend namespace)
    1. Change Docker image tags to uploaded in previous step
  4. Deploy updated backend
  5. Ensure cluster is in a running state

Reference tickets

Reference PRs


" }, { "title": "Elastic stack upgrade", "pageID": "401611843", "pageLink": "/display/GMDM/Elastic+stack+upgrade", "content": "

Introduction:

The ECK stack used in MDM is installed using the official ECK stack installation procedures provided by Elasticsearch B.V.


Prerequisite

Before upgrade verify checklist:



Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade Elastic stack steps:

  1. Upgrade Elasticsearch docker image
  2. Upgrade Elasticsearch plugins and dependencies
  3. Upgrade Kibana docker image
  4. Upgrade Logstash docker image
  5. Upgrade Logstash drivers and dependencies
  6. Upgrade FleetServer docker image
  7. Upgrade APM jar agents
  8. Update this confluence page

Past upgrades

ECK operator installation

Uninstall olm ECK operator 

  1. Scale down the number of olm-operator pods to 0
  2. Delete eck olm Subscription with orphan propagation
    kubectl delete subscription my-elastic-cloud-eck --cascade=orphan
  3. Delete all eck olm InstallPlans with orphan propagation
    kubectl delete installplans install-* --cascade=orphan
  4. Delete all "eck" ClusterServiceVersions with orphan propagation
    for ns in $(kubectl get namespaces -o name | cut -c 11-);
    do
      echo $ns;
      kubectl delete csv elastic-cloud-eck.v2.10.0 -n $ns --cascade=orphan;
    done
  5. Scale down elastic-operator to 0
  6. Delete eck operator objects:
    1. ConfigMaps
      for cm in $(kubectl get cm | awk '{if ($1 ~ "elastic-") print $1}');
      do
        echo $cm;
        kubectl delete cm $cm --cascade=orphan;
      done
    2. ServiceAccount
      kubectl delete sa elastic-operator --cascade=orphan
    3. Elastic operator cert
      kubectl delete ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● --cascade=orphan
    4. ClusterRole - everything with "elastic" in name besides elastic-agent
      for cr in $(kubectl get clusterrole | grep -v elastic-agent | awk '{if ($1 ~ "elastic") print $1}')
      do
        echo $cr;
        kubectl delete clusterrole $cr --cascade=orphan;
      done
    5. Service
      kubectl delete service elastic-operator-service --cascade=orphan
    6. Deployment eck-operator
      kubectl delete deployment eck-operator

Install eck-operator standalone

  1. Adjust labels and annotations of CRDs
    for CRD in $(kubectl get crds --no-headers -o custom-columns=NAME:.metadata.name | grep k8s.elastic.co); do
      echo "changing $CRD";
      kubectl annotate crd "$CRD" meta.helm.sh/release-name="operators";
      kubectl annotate crd "$CRD" meta.helm.sh/release-namespace="operators";
      kubectl label crd "$CRD" app.kubernetes.io/managed-by=Helm;
    done
  2. Install eck-operator without OLM by deploying operators version 4.1.19-project-boldmove-SNAPSHOT or newer

Upgrade ECK stack

Procedure:

  1. Upgrade Elastic stack docker images
    1. Pull from DockerHub and push the newest possible Docker image tags of all Elastic stack components besides the APM agent
    2. Download the newest APM agent jar from the Maven repo and push it to the Artifactory Maven gallery
    3. Change the version tags of all Elastic stack components in the inbound-services repo
  2. Repeat steps 3 - 5 in the following order:
    1. Elasticsearch - wait until all nodes are updated (shard relocation takes a long time)
    2. Kibana
    3. Logstash and FleetServer
  3. Update cluster-env configuration (backend namespaces)
    1. Change Docker image tag
  4. Deploy updated backend with Jenkins job
  5. Ensure backend component is working fine
  6. Deploy mdmhub to update APM agents
  7. Ensure mdmhub components are working fine

Reference tickets: 


" }, { "title": "Fluent Bit (Fluentbit) upgrade procedure", "pageID": "401611834", "pageLink": "/display/GMDM/Fluent+Bit+%28Fluentbit%29+upgrade+procedure", "content": "

Introduction:

Fluent Bit used in MDM is installed using the official Fluent Bit installation procedure provided by the Cloud Native Computing Foundation.


Prerequisite

Before upgrade verify checklist:



Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Upgrade Fluentbit Docker images
  2. Update this confluence page

Past upgrades

Upgrade 1.8.11 → 2.2.2

Description:

This was the only Fluentbit upgrade so far.

Procedure:

  1. Upgrade Fluentbit docker image
    1. Pull the newest possible Docker image tags of fluentbit-debug and fluentbit from DockerHub and push them to Artifactory.
    2. Change the version tags of the mdmhub fluentbit and kubevents fluentbit in the inbound-services repo.
  2. Update cluster-env configuration (envs and backend namespaces)
    1. Change Docker image tags to uploaded in previous step
  3. Deploy updated backend for kubevents and mdmhub for components logs with Jenkins jobs
  4. Ensure kubevents and mdmhub logs are being stored in Elasticsearch; check the Kibanas.

Reference tickets: 

Reference PRs:


" }, { "title": "Fluentd upgrade procedure", "pageID": "401611830", "pageLink": "/display/GMDM/Fluentd+upgrade+procedure", "content": "

Introduction:

Fluentd used in MDM is installed using official Fluentd installation procedures provided by Cloud Native Computing Foundation.


Prerequisite

Before upgrade verify checklist:



Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Upgrade Fluentd Docker images
  2. Upgrade Fluentd plugins and dependencies
  3. Update this confluence page

Past upgrades

Upgrade fluentd-kubernetes-daemonset - v1.12-debian-elasticsearch7-1 → v1.16.2-debian-elasticsearch7-1.1

Procedure:

  1. Change the docker image base to the newest version in the env-config repo (e.g. "fluentd-kubernetes-daemonset:v1.16.2-debian-elasticsearch7-1.1")
  2. Build image with docker build job : https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_manage_playbooks/job/Docker/job/build_Dockerfile/
  3. Update cluster-env repo configuration with the new image tag for fluentd (ex. 981)
  4. Test on SBX
  5. After checking the fluentd output logs, the following actions needed to be taken:
    1. upgrading the following plugins and dependencies:
      1. "ruby-kafka", "~> 1.5"
      2. "fluent-plugin-kafka", "0.19.2"
    2. defining new mappings in "backend" and "others" datastreams:
        "properties": {\n    "kubernetes.labels.app": {\n      "dynamic": true,\n      "type": "object",\n      "enabled": false\n    }\n
    3. execute ansible playbook with index template update 
    4. rollover "backend" and "others" datastreams after mappings change

Reference tickets: 


" }, { "title": "Kafka clients upgrade procedure", "pageID": "401611855", "pageLink": "/display/GMDM/Kafka+clients+upgrade+procedure", "content": "

Introduction

There are two tools that we need to take into consideration when upgrading Kafka clients; both are managed by Confluent Inc.:



Prerequisite

Before proceeding with upgrade verify checklist:



Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade Steps

cp-kcat:

  1. Change image tag in mdm-hub-inbound-services/helm/kafka/kcat/docker/Dockerfile.
  2. Build and deploy changes.
  3. Verify if container is working correctly.
  4. Verify if all wrapper scripts included in mdm-hub-inbound-services/helm/kafka/kcat/docker/bin are running correctly.

cp-kafka:

  1. Change image tag in mdm-hub-inbound-services/helm/kafka/kafka-client/docker/Dockerfile.
  2. Build and deploy changes.
  3. Verify if container is working correctly.
  4. Verify if all wrapper scripts included in mdm-hub-inbound-services/helm/kafka/kafka-client/docker/bin are running correctly (see the smoke-test sketch below).
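
A possible smoke test for both images after deployment (namespace and pod names are placeholders; /opt/app/bin is the wrapper-script location mentioned in the past upgrade below):

\n
# list the wrapper scripts inside the running container, then execute one
kubectl -n <namespace> exec -it <kafka-client-pod> -- ls /opt/app/bin
kubectl -n <namespace> exec -it <kafka-client-pod> -- /opt/app/bin/<wrapper-script>
\n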



Past upgrades

Upgrade cp-kcat 7.30 → 7.5.2 and cp-kafka 6.1.0 → 7.5.2

Description:

This update required updating both cp-kcat and cp-kafka to version 7.5.2 to eliminate the CVE-2023-4911 vulnerability.

Procedure:

  1. Pushed base images for updated components to COMPANY artifactory:
    1. confluentinc/cp-kcat:7.5.2 →  artifactory.COMPANY.com/mdmhub-docker-dev/mdmtools/confluentinc/cp-kcat:7.5.2
    2. confluentinc/cp-kafka:7.5.2 → artifactory.COMPANY.com/mdmhub-docker-dev/confluentinc/cp-kafka:7.5.2
  2. Changed images versions in Dockerfiles:
    1. cp-kcat 7.30 → 7.5.2
    2. cp-kafka 6.1.0 → 7.5.2
  3. Built changes and deployed on SBX environment.
  4. Verified that both containers started successfully.
  5. Exec'ed into each container and tested whether all wrapper scripts present at /opt/app/bin run and return expected results.
  6. Deployed changes to other environments.

Reference tickets:

Reference PR's:

" }, { "title": "Kafka upgrade procedure", "pageID": "401611803", "pageLink": "/display/GMDM/Kafka+upgrade+procedure", "content": "

Introduction

Kafka used in MDM is installed, configured and upgraded using Strimzi Kafka Operator


Prerequisite

Before upgrade verify checklist:

  1. There must be no critical errors for the environment Alerts Monitoring
  2. Kafka Cluster Overview must show 0 for 
    1. Under-Replicated Partitions
    2. Under-Min-ISR Partitions
    3. Offline Partitions
    4. Unclean Leader Election
    5. Preferred Replica Imbalance >0 is not a blocker, but a high number may indicate an issue with Kafka performance.



Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Verify if Strimzi Kafka Operator supports Kafka version you want to install (Supported versions - https://strimzi.io/downloads/)
    1. if not, upgrade Strimzi chart first
  2. Change Kafka version in environment configuration (see the sketch below)
  3. Update this confluence page
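
For illustration, with Strimzi the version change from step 2 boils down to fields on the Kafka custom resource; a minimal sketch (resource name and versions are illustrative, field names as documented by Strimzi):

\n
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: mdm-kafka            # illustrative name
spec:
  kafka:
    version: 3.7.0           # target Kafka version
    config:
      # Strimzi docs recommend pinning the protocol version during the roll,
      # then raising it once all brokers run the new version
      inter.broker.protocol.version: "3.6"
\n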


Past upgrades

Upgrade 3.6.1 → 3.7.0 and ZK to KRaft migration

Description

This upgrade was part of the MR-8004 Epic.

Procedure

  1. Upgrade Strimzi operator to the version supporting Kafka 3.7.0
    1. Add the Strimzi Helm repo and find the newest Strimzi chart and app version

      \n
      helm repo add strimzi https://strimzi.io/charts
      helm search repo strimzi/strimzi-kafka-operator
      \n
    2. In helm/operators/src/main/helm/Chart.yaml uncomment Strimzi repository and change version number
    3. Update dependencies

      \n
      cd helm/operators/src/main/helm
      helm dependency update
      \n
    4. Comment repository line back in Chart.yaml
    5. Commit only the updated charts/strimzi-kafka-operator-helm-*.tgz and Chart.yaml files
  2. Upgrade default Kafka to 3.7.0 in mdm-hub-inbound-services
  3. Upgrade Kafka per environment
    1. Deploy updated operators with the new Strimzi
    2. Update cluster-env configuration (backend namespace)
    3. Deploy updated backend
    4. Ensure cluster is in a running state

Reference tickets

Reference PRs

Upgrade 3.5.1 → 3.6.1

Description

This upgrade was part of the MR-8004 Epic.

Procedure

  1. Upgrade Strimzi operator to the version supporting Kafka 3.6.1
    1. Add the Strimzi Helm repo and find the newest Strimzi chart and app version

      \n
      helm repo add strimzi https://strimzi.io/charts
      helm search repo strimzi/strimzi-kafka-operator
      \n
    2. In helm/operators/src/main/helm/Chart.yaml uncomment Strimzi repository and change version number
    3. Update dependencies

      \n
      cd helm/operators/src/main/helm
      helm dependency update
      \n
    4. Comment repository line back in Chart.yaml
    5. Commit only the updated charts/strimzi-kafka-operator-helm-*.tgz and Chart.yaml files
  2. Upgrade default Kafka to 3.6.1 in mdm-hub-inbound-services
    1. change Kafka config and wait for the operator to apply changes:
      1. remove inter.broker.protocol.version: "3.5"
      2. remove log.message.format.version: "3.5"
      3. set kafka.version: 3.6.1
  3. Upgrade Kafka per environment
    1. Deploy updated operators with the new Strimzi
    2. Update cluster-env configuration (backend namespace)
    3. Deploy updated backend
    4. Ensure cluster is in a running state

Reference tickets

Reference PRs

" }, { "title": "Kong upgrade procedure", "pageID": "401611825", "pageLink": "/display/GMDM/Kong+upgrade+procedure", "content": "

Introduction

Kong used in MDM HUB is maintained by Kong/kong.



Prerequisite

  1. Verify changelog for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.
  2. Ensure base images are mirrored to COMPANY artifactory.


Generic Procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade Steps

  1. Change image tag to updated version in mdm-hub-env-config/docker/kong3/Dockerfile
  2. Build and push docker image based on updated Dockerfile.
  3. Change the tag of the kong image in mdm-inbound-services/helm/kong/src/main/helm/values.yaml to the one that was built in Step 2.
  4. Change the tag of the kong image in mdm-cluster-env/helm/amer/sandbox/namespaces/kong/values.yaml to the one that was built in Step 2.
  5. Build changes from Step 3 and deploy with configuration added in Step 4.
  6. Verify update:
    1. Check if component started.
    2. Check if API requests are accepted and return correct responses (see the sketch below)
    3. Check if kong-mdm-external-oauth-plugin works properly (try OAuth authorization and then some API calls to verify it)
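
A quick request through the gateway is usually enough for step 6.2; a sketch (path and token are placeholders, not a documented endpoint):

\n
curl -ik https://mdm-gateway.COMPANY.com/<api-path> -H "Authorization: Bearer <token>"
\n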




Past upgrades

Upgrade Kong 3.2.2 → 3.4.2

Description:

This update required an upgrade to version 3.4.2 to fix the CVE-2023-4911 vulnerability on NPROD and PROD.

Procedure:

  1. Changed image tag to 3.4.2 in mdm-hub-env-config/docker/kong3/Dockerfile
  2. Built and pushed docker image to artifactory.
  3. Changed the tag of the kong image in mdm-inbound-services/helm/kong/src/main/helm/values.yaml to the one that was built in Step 2 (951).
  4. Changed the tag of the kong image in mdm-cluster-env/helm/{tenant}/{nprod|prod}/namespaces/kong/values.yaml to the one that was built in Step 2 (951).
  5. Built changes from Step 3 and deployed them with the configuration added in Step 4.
  6. Verified update:
    1. Component started.
    2. API requests were accepted and returned correct responses
    3. kong-mdm-external-oauth-plugin worked properly (checked OAuth and some API requests)

Reference Tickets:

[MR-7599] Update kong to 3.4.2

Reference PR's:

[MR-7599] Updated kong to 3.4.2

[MR-7599] Updated kong to 3.4.2



" }, { "title": "Mongo upgrade procedure", "pageID": "401611849", "pageLink": "/display/GMDM/Mongo+upgrade+procedure", "content": "

Introduction:

Mongo used in MDM is managed by the mongodb-kubernetes-operator. When updating Mongo, we must consider all components at the same time.

The Mongo operator brings additional images to orchestrate and manage the Mongo cluster.


Prerequisite

Before migration verify checklist:



Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Verify if the MongoDB Kubernetes operator documentation provides specific instructions for the planned upgrade 
  2. Upgrade Mongo Operator
    1. Update cluster-env configuration (operators namespace)
    2. Deploy the new Operator
    3. Ensure the cluster is in a running state  
  3. Upgrade Mongo 
    1. Update cluster-env configuration (backend namespace) 
    2. Deploy the updated backend 
      NOTE: steps a and b can be executed multiple times (first we upgrade the Mongo images, then we update the featureCompatibilityVersion parameter; see the verification sketch under the 4.2.6 → 6.0.9 upgrade below) 
    3. Ensure the cluster is in a running state   
  4. Update confluence page


Past upgrades

Upgrade 4.2.6 → 6.0.9

Description:

This upgrade required multiple intermediate upgrades without upgrading Mongo Kubernetes Operator 

Procedure:

    1. Upgrade image 4.2.6 → 4.4.24 by updating cluster-env configuration (backend namespace)
    2. Deploy updated backend
    3. Ensure the cluster is in a running state
    4. Upgrade featureCompatibilityVersion to 4.4 by updating cluster-env configuration (backend namespace)
    5. Deploy updated backend
    6. Ensure the cluster is in a running state
    7. Upgrade image 4.4.24 → 5.0.20 by updating cluster-env configuration (backend namespace)
    8. Deploy updated backend
    9. Ensure the cluster is in a running state
    10. Upgrade featureCompatibilityVersion to 5.0 by updating cluster-env configuration (backend namespace)
    11. Deploy updated backend
    12. Ensure the cluster is in a running state
    13. Upgrade image 5.0.20 → 6.0.9 by updating cluster-env configuration (backend namespace)
    14. Deploy updated backend
    15. Ensure the cluster is in a running state
    16. Upgrade featureCompatibilityVersion to 6.0 by updating cluster-env configuration (backend namespace)
    17. Deploy updated backend
    18. Ensure the cluster is in a running state
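
After each featureCompatibilityVersion step, the effective value can be verified directly on the cluster. A minimal sketch, assuming a mongosh-capable image; pod and namespace names are placeholders:

\n
# run against the replica set primary; the value itself is set via cluster-env configuration
kubectl -n <backend-namespace> exec -it <mongo-pod> -- mongosh --quiet --eval \
  'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })'
\n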

Reference tickets: 

Reference PRs:

Upgrade Operator 0.7.3 → 0.8.2 

Description:

This upgrade was required to enable the Mongo horizon feature. The previous version of the operator was unstable and sometimes failed to complete reconciliation of the Mongo cluster. 
Mongo itself was not updated in this upgrade

Procedure:

    1. Update cluster-env configuration (operators namespace)
    2. Deploy new Operator
    3. Ensure the cluster is in a running state  

Reference tickets: 

Reference PRs:

Upgrade 6.0.9 → 6.0.11

Description:

This upgrade required only upgrading mongo image. At this time there was no newer version of mongodb Kubernetes operator. 

Procedure:

    1. Update cluster-env configuration (backend namespace)
    2. Deploy updated backend
    3. Ensure the cluster is in a running state  

Reference tickets: 

Reference PRs:

Upgrade 6.0.11 → 6.0.21

Description

This was a planned periodic upgrade. During this upgrade the Kubernetes Mongo operator was also upgraded from 0.8.2 to 0.12.0. 

To perform this upgrade a change was needed in the MongoDBCommunity Helm template. We were using the users configuration in the wrong way - the uniqueness constraint on the scramCredentialsSecretName field was violated 

Procedure:

Reference tickets

Reference PRs



" }, { "title": "Monstache upgrade procedure", "pageID": "401611821", "pageLink": "/display/GMDM/Monstache+upgrade+procedure", "content": "

Introduction:

Monstache used in MDM is installed using official Monstache installation procedure provided by Ryan Wynn.


Prerequisite

Before upgrade verify checklist:



Generic procedure

Procedure assumes that upgrade will be executed and tested on the SBX first.

Upgrade steps:

  1. Upgrade Monstache Docker images
  2. Update this confluence page

Past upgrades

Upgrade 6.7.0 → 6.7.17

Description:

This was the only Monstache upgrade so far.

Procedure:

  1. Upgrade Monstache docker image
    1. Pull the newest possible Monstache image tag from DockerHub and push it to Artifactory.
    2. Change the Monstache version tag in the inbound-services repo.
  2. Update cluster-env configuration (envs and backend namespaces)
    1. Change Docker image tags to uploaded in previous step
  3. Deploy updated backend with Jenkins job
  4. Ensure Monstache is working fine; check the logs in the Monstache pod's log directory.

Reference tickets: 


Upgrade 6.7.17 → 6.7.21

Description:

Upgrade Monstache docker image to version 6.7.21

Procedure:

  1. Upgrade Monstache docker image
    1. Pull the newest possible Monstache image tag from DockerHub and push it to Artifactory.
    2. Change the Monstache version tag in the inbound-services repo.
  2. Update cluster-env configuration (envs and backend namespaces)
    1. Change Docker image tags to uploaded in previous step
  3. Deploy updated backend with Jenkins job
  4. Ensure Monstache is working fine; check the logs in the Monstache pod's log directory. PASSED

Reference tickets: 



" }, { "title": "Prometheus upgrade procedure", "pageID": "521705242", "pageLink": "/display/GMDM/Prometheus+upgrade+procedure", "content": "

Monitoring host

Introduction

Official Prometheus site: https://prometheus.io/

To deploy Prometheus we use official docker image: https://hub.docker.com/r/prom/prometheus/

Prerequisites

  1. Verify CHANGELOG for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.
  2. Verify if other monitoring components are in versions compatible with version to which prometheus is upgraded. List of components to check:
    1. Thanos
    2. Telegraf
    3. SQS Exporter
    4. S3 Exporter
    5. Node Exporter
    6. Karma
    7. Grafana
    8. DNS Exporter
    9. cAdvisor
    10. Blackbox Exporter
    11. Alertmanager
  3. Ensure base images are mirrored to COMPANY artifactory.

Generic Procedure

Upgrade steps

  1. Apply configuration changes in mdm-hub-cluster-env:
    1. Change prometheus image tag to updated version in mdm-hub-cluster-env/ansible/roles/install_monitoring_prometheus/defaults/main.yml
    2. Apply other changes to configuration if necessary (Prerequisites step 1)
    3. Upgrade dependant monitoring components if necessary (Prerequisites step 2)
  2. Install monitoring stack using ansible-playbook:
    ansible-playbook install_monitoring_stack.yml -i inventory/monitoring/inventory --vault-password-file=$VAULT_PASSWORD_FILE
  3. Verify installation:
    1.  Check if monitoring components are up and running
    2. Check logs
    3. Check metrics and dashboards
  4. Fix all issues

Past Upgrades

Upgrade monitoring host Prometheus v2.30.3 → v2.53.4

Description:

This upgrade was a huge change in the Prometheus version; therefore Thanos also had to be updated from main-2023-11-03-7e879c6 to v0.37.2 to maintain compatibility between those components. Some additional configuration adjustments had to be made on the Thanos side during this upgrade.

Procedure:

  1. Checked prerequisites
    1. Verified that no breaking changes were made in Prometheus that would require configuration adjustments on our side.
    2. Verified that alongside Prometheus, Thanos had to be updated to v0.37.2 to keep compatibility
    3. Pushed Prometheus v2.53.4 and Thanos v0.37.2 to COMPANY artifactory.
  2. Changed Prometheus tag to v2.53.4 and Thanos tag to v0.37.2 in mdm-hub-cluster-env/ansible/roles/install_monitoring_prometheus/defaults/main.yml
  3. Installed monitoring stack using ansible-playbook
  4. Verified installation - noticed issues with Thanos Query, which couldn't connect to Thanos Sidecar and Thanos Store
  5. Made adjustments in Thanos configuration to fix those issues (See reference PR)
  6. Installed monitoring stack using ansible-playbook again
  7. Verified installation - all components, dashboards and metrics were working correctly
  8. Upgrade finished successfully

Reference Tickets:

Reference PR's:


K8s cluster

Introduction

To deploy Prometheus on k8s clusters we use the following chart: kube-prometheus-stack.

It contains definition of Prometheus and related crd's.


Prerequisites

Check which chart version contains the Prometheus version to which you want to upgrade. Verify the Prometheus CHANGELOG and the kube-prometheus-stack chart templates and default values for changes that could alter behaviour/usage in the new version, and plan configuration adjustments to make it work correctly.


Generic Procedure

Upgrade Steps

  1. Download and unpack kube-prometheus-stack-<new_version>
  2. Replace CRD's:
    cd kube-prometheus-stack\charts\crds\crds
    kubectl -n monitoring replace -f "*.yaml"
  3. Create and build PR with helm chart upgrade
    1. update version in mdm-hub-inbound-services/helm/monitoring/src/main/helm/Chart.yaml
    2. update package version replacing charts/kube-prometheus-stack-<old_version>.tgz with charts/kube-prometheus-stack-<new_version>.tgz
  4. Deploy PR to SBX cluster
  5. Verify installation and merge the PR
    1. Get the number of metrics and alerts from Prometheus and compare them with the numbers before the upgrade (see the sketch below)
    2. Verify if Grafana dashboards are working correctly
  6. Proceed to NPROD/PROD deployments (Verify installation after each of them)
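
The metric and alert counts from step 5 can be pulled from the standard Prometheus HTTP API; a sketch (host is a placeholder, jq assumed available):

\n
# count distinct metric names
curl -s http://<prometheus-host>:9090/api/v1/label/__name__/values | jq '.data | length'
# count configured alerting/recording rules
curl -s http://<prometheus-host>:9090/api/v1/rules | jq '[.data.groups[].rules[]] | length'
\n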


Past Upgrades

Upgrade K8s cluster Prometheus v2.39.1 → v2.53.1

Description:

To perform this upgrade it was necessary to upgrade the Helm chart used (kube-prometheus-stack) from v41.7.4 (containing Prometheus v2.39.1) to v61.7.2 (containing Prometheus v2.53.1)

Procedure:

  1. Checked prerequisites
    1. Verified that no breaking changes were made in Prometheus that would require configuration adjustments on our side.
    2. Verified that kube-prometheus-stack v61.7.2 contained Prometheus v2.53.1
  2. Downloaded and unpacked kube-prometheus-stack-61.7.2.tgz
  3. Replaced CRD's
  4. Created PR with the upgraded chart version and replaced the old package with kube-prometheus-stack-61.7.2.tgz (See reference PR)
  5. Deployed changes to SBX from PR
  6. Verified Installation (SBX)
    1. No lost metrics
    2. All alerts correct
    3. Grafana dashboards working correctly
  7. Merged PR

Reference Tickets:

Reference PR's:

" }, { "title": "Infrastructure", "pageID": "302705566", "pageLink": "/display/GMDM/Infrastructure", "content": "" }, { "title": "How to access AWS Console", "pageID": "310939854", "pageLink": "/display/GMDM/How+to+access+AWS+Console", "content": "

Add new user access to AWS Account

Request access to the correct Security Group in the Request Manager

https://requestmanager1.COMPANY.com/Group/Default.aspx

i.e., for accessing the 432817204314 Account using the WBS-EUW1-GBICC-ALLENV-RO-SSO role, use the 

WBS-EUW1-GBICC-ALLENV-RO-SSO_432817204314_PFE-AWS-PROD Security Group

AWS Console

Always use this AWS Console address: http://awsprodv2.COMPANY.com/ and there select the Account you want to use

\"\"

" }, { "title": "How to login to hosts with SSH", "pageID": "310940209", "pageLink": "/display/GMDM/How+to+login+to+hosts+with+SSH", "content": "
  1. Generate an SSH key pair - private and public (see the sketch below)
  2. Copy the public key to the ~/.ssh/authorized_keys file on the host and account you want to use
  3. Use the ssh command to log in, e.g. ssh ec2-user@euw1z2dl115.COMPANY.com
  4. List the content of the ~/.ssh/authorized_keys file to check which keys are used
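
A minimal sketch of steps 1-3, assuming OpenSSH client tools (ssh-copy-id requires password authentication to be enabled; otherwise append the public key manually as in step 2):

\n
# generate a key pair (ed25519 shown; RSA works as well)
ssh-keygen -t ed25519 -C "your.name@COMPANY.com"
# copy the public key to the target account
ssh-copy-id ec2-user@euw1z2dl115.COMPANY.com
# log in
ssh ec2-user@euw1z2dl115.COMPANY.com
\n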
" }, { "title": "How to restart the EC2 instance", "pageID": "310940306", "pageLink": "/display/GMDM/How+to+restart+the+EC2+instance", "content": "
  1. Login to AWS Console (How to access AWS Console)

  2. Select EC2 Service from the search box
  3. In the navigation pane, choose Instances.

  4. Select the instance and choose Instance state, Reboot instance.
    Alternatively, select the instance and choose Actions, Manage instance state. In the screen that opens, choose Reboot, and then Change state.

  5. Choose Reboot when prompted for confirmation
    \"\"

More: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html

" }, { "title": "HUB-UI: Timeout issue after authorization", "pageID": "337840086", "pageLink": "/display/GMDM/HUB-UI%3A+Timeout+issue+after+authorization", "content": "

Issue description:

When accessing the HUB-UI site, after successfully authorizing via SSO, a timeout may occur when trying to access the site.

Solution:

Check if you have valid COMPANY certificates installed in your browser. You can do that by clicking the padlock icon in the browser address bar and checking if the connection is safe:

\"\"

If not, you have to install certificates:

  1. Install RootCA-G2.cer:
    1. Double-click on certificate
    2. Choose Install Certificate
    3. Local Machine
    4. Choose "Place all certificates in the following store" and select store: "Trusted Root Certification Authorities"
    5. Click Finish to complete the installation process
  2. Install PBACA-G2.cer:
    1. Double-click on certificate
    2. Choose Install Certificate
    3. Local Machine
    4. Choose "Automatically select the certificate store based on type of certificate"
    5. Click Finish to complete the installation process
  3. Reboot computer
  4. Verify by accessing HUB-UI
" }, { "title": "Key Auth Not Working on Hosts - Fix", "pageID": "172294447", "pageLink": "/display/GMDM/Key+Auth+Not+Working+on+Hosts+-+Fix", "content": "

In case you are unable to use SSH authentication via RSA key, the cause might be a wrong SELinux context on the /home/{user}/.ssh directory.

Check /var/log/secure:

\"\"

The "maximum authentication attempts exceeded" error might indicate that his is the case.

Check the /home/{user}/.ssh directory with the "-Z" option:

$ ls -laZ /home/{user}/.ssh

\"\"

On the screen above is an example of wrong context. Fix it by:

$ chcon -R system_u:object_r:usr_t:s0 /home/{user}/.ssh


Verify the context has changed:

\"\"


" }, { "title": "Kubernetes Operations", "pageID": "228923667", "pageLink": "/display/GMDM/Kubernetes+Operations", "content": "" }, { "title": "Kubernetes upgrades", "pageID": "337842009", "pageLink": "/display/GMDM/Kubernetes+upgrades", "content": "

Introduction

Kubernetes clusters provided by PDKS are upgraded quarterly. To make sure it doesn't break MDM Hub, we've established the process described in this article.

K8s upgrade process in the PDKS platform

\"\"

Verify MDM Hub's compatibility with the new K8s version

\"\"

kube-no-trouble

Upgrades are done 1 version up, i.e. 1.23 → 1.24, so we need to make sure we're not using any APIs removed in the upgraded version.

To find all objects using deprecated API, run kube-no-trouble 

\"\"

If there are "Deprecated APIs" listed for the next K8s version, MDM Hub's team must provide upgrades.

In the example, an upgrade from 1.23 to 1.24 doesn't require any work.
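
Running the check is typically a single command; a sketch, with the flag name as per the kube-no-trouble README:

\n
# scan the current cluster for objects using APIs removed up to the target version
kubent --target-version 1.24.0
\n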

Upgrade sandbox/non-prod/prod clusters

\"\"

PDKS does a rolling upgrade of all nodes, starting with Control Plane, then dynamic (or "flex") nodes, and then the static nodes.

Assist and verify

\"\"

MDM Hub's team support during prod upgrades

MDM Hub's team presence and assistance are required during prod upgrades. During the agreed upgrade window one designated person must be actively monitoring the upgrade process and react if issues are found.

" }, { "title": "MongoDB backup and restore", "pageID": "322548514", "pageLink": "/display/GMDM/MongoDB+backup+and+restore", "content": "

Introduction

Percona Backup for MongoDB

We are using Percona Backup for MongoDB (PBM) - an open-source and distributed solution for consistent backups and restore of production MongoDB clusters. 

\"\"

PBM functions used in MDM Hub are marked in green.

How are backups done in MDM Hub?

Architecture

The solution was built in 4 parts

Code

Configuration

General rules 

Details

Config is stored per environment in mdm-hub-cluster-env project in {env}/prod/namespaces/{env}-backend/values.yaml path, under mongo.pbm key.

\"\"

Where are backups stored?

All backups are stored in separate s3 buckets.

Backup

How to do a manual full backup?

Run a pbm backup --wait command in a mongodb-pbm-client pod

\"\"

How to do an incremental backup?

You don't have to do anything. If you really need to do an incremental backup, wait for 10 minutes for the next scheduled point-in-time backup.

Restore

How to restore DB when it's empty - Disaster Recovery (DR) scenario

Percona configuration is stored in the database itself. If the database is completely removed (EKS cluster, PVCs, or all data from DB), the Percona agent won't be able to restore the DB from backup.

You need at least an empty MongoDB and PBM configuration restored.

  1. Deploy MDM Hub Backend Using Jenkins Job
    1. An empty database will be created
    2. Percona will be configured
    3. pbm-agent pod will be created
  2. Choose between preferred restore ways
    1. full backup
    2. incremental backup

How to restore DB from a full backup

  1. Shut down all MongoDB clients - MDM Hub components
  2. Disable PITR
    $ pbm config --set pitr.enabled=false
  3. Run pbm list to get a named list of backups
    \"\"
  4. Run pbm restore [<backup_name>]
  5. Run pbm status to check the current restore status
  6. After a successful restore, enable PITR back
    $ pbm config --set pitr.enabled=true

How to restore DB from an incremental (Point-in-time Recovery)

  1. Shut down all MongoDB clients - MDM Hub components
  2. Disable PITR
    $ pbm config --set pitr.enabled=false
  3. Run pbm list to get an available time range for the PITR restore
    \"\"
  4. Run pbm restore --time=2006-01-02T15:04:05
  5. Run pbm status to check the current restore status
  6. After a successful restore, enable PITR back
    $ pbm config --set pitr.enabled=true
" }, { "title": "Restart service", "pageID": "228923671", "pageLink": "/display/GMDM/Restart+service", "content": "

To restart MDMHUB service you have to have access to the Kubernetes console:

  1. Find the pod name that you want to restart: kubectl get pods --namespace {{mdmhub env namespace}}

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-dev
NAME                                                 READY   STATUS    RESTARTS   AGE
mdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22h
mdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22h
mdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22h
mdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22h
mdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9h
mdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9h
mdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv   2/2     Running   0          9h
mdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h

2. Delete the pod that you selected: kubectl delete pod {{selected pod name}} --namespace {{mdmhub env namespace}}

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl delete pod mdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv --namespace amer-dev
pod "mdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv" deleted

3. After the above operation you will be able to see the newly created pod:

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-dev
NAME                                                 READY   STATUS    RESTARTS   AGE
mdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22h
mdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22h
mdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22h
mdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22h
mdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9h
mdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9h
mdmhub-mdm-reconciliation-service-66b65c7bf8-ns88k   2/2     Running   0          2m32s
mdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h

This is the restarted instance.


" }, { "title": "Scaling services", "pageID": "228923952", "pageLink": "/display/GMDM/Scaling+services", "content": "

To perform this action, access to the runtime configuration repository is required. You have to modify the deployment configuration for the selected component - let's assume it is mdm-reconciliation-service:

  1. Modify values.yaml for MDMHUB environment {{region}}/{{cluster class}}/namespaces/{{mdmhub env name}}/values.yaml:

components:
  registry: artifactory.COMPANY.com/mdmhub-docker-dev
  deployments:
    mdm_reconciliation_service:
      enabled: true
      replicas: 2
      hostAliases: *hostAliases
      resources:
        component:
          requests:
            memory: "2560Mi"
            cpu: "200m"
          limits:
            memory: "3840Mi"
            cpu: "4000m"
      logging: *logging

And change the value of the "replicas" parameter. If it doesn't exist, you have to add it to the component deployment configuration.

2. Commit and push changes,

3. Go to Jenkins job responsible for deploying changes to the selected environment and run the job,

4. After deploying check if the configuration has been applied correctly: kubectl get pods --namespace {{mdmhub env name}}:

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-dev
NAME                                                 READY   STATUS    RESTARTS   AGE
mdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22h
mdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22h
mdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22h
mdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22h
mdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9h
mdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9h
mdmhub-mdm-reconciliation-service-66b65c7bf8-ns88k   2/2     Running   0          2m32s
mdmhub-mdm-reconciliation-service-66b68c7bf8-ndksk   2/2     Running   0          2m32s
mdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h

You will be able to see the desired number of pods.
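
For a quick temporary change, scaling directly with kubectl also works, but it bypasses the configuration repository and will likely be reverted by the next Jenkins deployment (replica count illustrative):

\n
kubectl scale deployment mdmhub-mdm-reconciliation-service --replicas=2 --namespace amer-dev
\n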

" }, { "title": "Stop/Start service", "pageID": "228923678", "pageLink": "/pages/viewpage.action?pageId=228923678", "content": "

To perform this action, access to the runtime configuration repository is required. Starting/stopping a service means enabling/disabling the component deployment. You have to modify the deployment configuration for the selected component - let's assume it is mdm-reconciliation-service:

  1. Modify values.yaml for MDMHUB environment {{region}}/{{cluster class}}/namespaces/{{mdmhub env name}}/values.yaml:

components:
  registry: artifactory.COMPANY.com/mdmhub-docker-dev
  deployments:
    mdm_reconciliation_service:
      enabled: true
      hostAliases: *hostAliases
      resources:
        component:
          requests:
            memory: "2560Mi"
            cpu: "200m"
          limits:
            memory: "3840Mi"
            cpu: "4000m"
      logging: *logging

Change the enabled flag to false.

2. Commit and push changes,

3. Go to Jenkins job responsible for deploying changes to the selected environment and run the job,

4. After deploying check if the configuration has been applied correctly: kubectl get pods --namespace {{mdmhub env name}}

raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-dev
NAME                                                 READY   STATUS    RESTARTS   AGE
mdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22h
mdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22h
mdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22h
mdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22h
mdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9h
mdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9h
mdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h

There should not be any active pods of the disabled component.

To enable service you have to do the same steps but remember that "enabled" flag should be set to true.

" }, { "title": "Open Traffic from Outside COMPANY to MDM Hub", "pageID": "250142861", "pageLink": "/display/GMDM/Open+Traffic+from+Outside+COMPANY+to+MDM+Hub", "content": "

EMEA NProd

AWS Account ID: 432817204314

VPC ID: vpc-004cb58768e3c8459

SecurityGroup: sg-04d4116a040a7e1da - MDMHub-kafka-and-api-proxy-external-nprod-sg

Proxy documentation: EMEA External proxy


EMEA Prod

AWS Account ID: 432817204314

VPC ID: vpc-004cb58768e3c8459

SecurityGroup: sg-06305fd9d3b0992a6 - MDMHub-kafka-and-api-proxy-external-prod-sg

Proxy documentation: EMEA External proxy


EXUS (GBL) Prod

AWS Account ID: 432817204314

VPC ID: vpc-004cb58768e3c8459

SecurityGroup: sg-0cd8ba02f6351f383 - Mdm-reltio-internet-traffic-SG


US

no whitelisting

" }, { "title": "Replace S3 Keys", "pageID": "187796851", "pageLink": "/display/GMDM/Replace+S3+Keys", "content": "

CREATE ticket if there is an issue with KEYs (rotation required -  expired)

REQUEST:

http://btondemand.COMPANY.com/getsupport#!/g71h1sgv0/0

QUEUE: GBL-BTI-IOD AWS FULL SUPPORT

Hi Team,
Our S3 access key expired - I am receiving - The AWS Access Key Id you provided does not exists in our records.
KEY details:
BucketName User name Access key ID Secret access key
gblmdmhubnprodamrasp100762 SRVC-MDMGBLFT ●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●

Could you please regenerate this S3 key ?
Regards,
Mikolaj


BITBUCKET REPLACE:

inventory/<env>_gblus/group_vars/all/secret.yml

REPLACE and Post replace tasks:


REPLACE:
1. decrypt - group_vars/all/secret.yml
2. replace on non-prod and prod
3. encrypt and push


Post Replace TASK:
NON PROD

NEW nonprod <KEY> <SECRET>


REDEPLOY
1. Airflow:


All Airflow jobs - https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/ (take list from airflow_components variable)
- dev: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc
- qa: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc
- stage: merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc


2. FLEX connector to S3 DEV AND QA


- replace in kafka-connect-flex
:/app/kafka-connect-flex/<env>/config/s3-connector-config.json
:/app/kafka-connect-flex/<env>/config/s3-connector-config-update.json
Update on Main(check logs with errors and execute)
- curl -X GET http://localhost:8083/connectors/S3SinkConnector/config
- curl -X PUT -H "Content-Type: application/json" localhost:8083/connectors/S3SinkConnector/config -d @/etc/kafka/config/s3-connector-config-update.json
- curl -X POST http://localhost:8083/connectors/S3SinkConnector/tasks/0/restart
- curl -X POST http://localhost:8083/connectors/S3SinkConnector/restart
- curl -X GET http://localhost:8083/connectors/S3SinkConnector/status

3. Snowflake:

--changeset warecp:LOV_DATA_STG runOnChange:true
create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/outbound/SNOWFLAKE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)

create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/outbound/SNOWFLAKE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)

create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/stage/outbound/SNOWFLAKE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)


--changeset morawm03:MERGE_TREE_DATA_STG runOnChange:true
create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/outbound/SNOWFLAKE_MERGE_TREE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')

create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/outbound/SNOWFLAKE_MERGE_TREE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')

create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/s3://gblmdmhubnprodamrasp100762/us/dev/outbound/SNOWFLAKE_MERGE_TREE/outbound/SNOWFLAKE_MERGE_TREE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')

--changeset warecp:reconcilation_URL runOnChange:true
create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/inbound/hub/reconciliation/SNOWFLAKE/'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )

create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/inbound/hub/reconciliation/SNOWFLAKE/'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )

create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/stage/inbound/hub/reconciliation/SNOWFLAKE/'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )




PROD:

NEW prod <KEY> <SECRET>


REDEPLOY
1. Airflow:


All Airflow jobs - https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/job/deploy_mdmgw_airflow_services__prod_gblus/ (take list from airflow_components variable)
- prod: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc

               Manulay replace connections and variables in http://amraelp00007847.COMPANY.com:9110/airflow/home for gblus prod DAGS


2. FLEX connector to S3


- replace in kafka-connect-flex (on Master only)
:/app/kafka-connect-flex/prod/config/s3-connector-config.json
:/app/kafka-connect-flex/prod/config/s3-connector-config-update.json
Update on Main(check logs with errors and execute)
- curl -X GET http://localhost:8083/connectors/S3SinkConnector/config
- curl -X PUT -H "Content-Type: application/json" localhost:8083/connectors/S3SinkConnector/config -d @/etc/kafka/config/s3-connector-config-update.json
- curl -X POST http://localhost:8083/connectors/S3SinkConnector/tasks/0/restart
- curl -X POST http://localhost:8083/connectors/S3SinkConnector/restart
- curl -X GET http://localhost:8083/connectors/S3SinkConnector/status


3. Snowflake:



--changeset warecp:LOV_DATA_STG runOnChange:true
create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/outbound/SNOWFLAKE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)


--changeset morawm03:MERGE_TREE_DATA_STG runOnChange:true
create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/outbound/SNOWFLAKE_MERGE_TREE'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')


--changeset warecp:reconcilation_URL runOnChange:true
create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/inbound/hub/reconciliation/SNOWFLAKE/'
credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')
FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )


4. HOST:


- replace archiver-services
on 3 nodes:
:/app/archiver/.s3cfg
:/app/archiver/config/archiver.env




" }, { "title": "Resize PV, LV, FS", "pageID": "164470164", "pageLink": "/display/GMDM/Resize+PV%2C+LV%2C+FS", "content": "
\n
sudo pvresize /dev/nvme2n1\nsudo lvextend -L +<SIZE_TO_INCREASE>G /dev/mapper/docker-thinpool
\n

Extention lvm using additional disk.

\n
sudo pvcreate /dev/nvme3n1 \nsudo vgextend mdm_vg /dev/nvme3n1\nsudo lvm lvextend -l +100%FREE /dev/mdm_vg/data\nsudo xfs_growfs -d /dev/mapper/mdm_vg-data
\n


" }, { "title": "Resolve Docker Issues After Instance Restart (Flex US)", "pageID": "163927016", "pageLink": "/pages/viewpage.action?pageId=163927016", "content": "

After restarting one of the US FLEX instances, issues with service user mdmihpr/mdmihnpr may come up.

Resolve them using the following:

Change owner of the Docker socket

[root@amraelp00005781 run]# cd /var/run/
[root@amraelp00005781 run]# chown root:mdmihub docker.sock

Increase VM memory

If the ElasticSearch is not starting:

[root@amraelp00005781 run]# sysctl -w vm.max_map_count=262144

Reset offset on EFK topics

If there are no logs on Kibana, use the Kafka Client to reset offsets on efk topics using the "--to-datetime" option, pointing to 6 months prior.

Prune the Docker

If there is a ThinPool Error coming up, use:

[root@amraelp00005781 run]# docker system prune -a
" }, { "title": "Service User ●●●●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588321]", "pageID": "194547472", "pageLink": "/pages/viewpage.action?pageId=194547472", "content": "

Log into the machine via other account with root access.

For service user mdm (GBL NPROD/PROD):

\n
$ chage -I -1 -m 0 -M 99999 -E -1 mdm
\n
" }, { "title": "Jenkins", "pageID": "250676213", "pageLink": "/display/GMDM/Jenkins", "content": "" }, { "title": "Proxy on bitbucket-insightsnow.COMPANY.com (fix Hostname issue and timeouts)", "pageID": "250147973", "pageLink": "/pages/viewpage.action?pageId=250147973", "content": "


On GBLUS DEV host amraelp00007335.COMPANY.com (●●●●●●●●●●●●) setup service and route to proxy bitbucket:


kong_services:
#----------------------DEV---------------------------
- create_or_update: False
vars:
name: "{{ kong_env }}-bitbucket-proxy"
url: "http://bitbucket-insightsnow.COMPANY.com/"
connect_timeout: 120000
write_timeout: 120000
read_timeout: 120000

kong_routes:
#----------------------DEV---------------------------
- create_or_update: False
vars:
name: "{{ kong_env }}-bitbucket-proxy-route"
service: "{{ kong_env }}-bitbucket-proxy"
paths: [ "/" ]
methods: [ "GET", "POST", "PATCH", "DELETE" ]


Then we can access Bitbucket through:

curl https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/repos?visibility=public

Change is in the and currently deplyed: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/dev_gblus/group_vars/kong_v1/kong_dev.yml


-----------------------------------------------------------------------------------------------------------------

Next setup the nginx proxy to route 80 port to 8443 port.

Go to ec2-user@gbinexuscd01:/opt/cd-env/bitbucket-proxy

RUN bitbucket-nginx:

dded05295c16        nginx:1.17.3                                                          "nginx -g 'daemon of…"   About an hour ago   Up 16 minutes           0.0.0.0:80->80/tcp                                            bitbucket-nginx

Config:


\n
http {\n    server {\n        listen              80;\n        server_name         gbinexuscd01;\n\n        location / {\n            rewrite ^\\/(.*) /$1 break;\n            proxy_pass  https://gbl-mdm-hub-us-nprod.COMPANY.com:8443;\n            resolver <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588839">●●●●●●●●●●</a>;\n        }\n    }\n}\n\nevents {}
\n


This config will route port 80 to gbl-mdm-hub-us-nprod.COMPANY.com:8443 host to bitbucket



Next, add to all Jenkins and Jenkins-Slaves the following entry in /etc/hosts:

docker exec -it -u root jenkins bash
docker exec -it -u root nexus_jenkins_slave2 bash
docker exec -it -u root nexus_jenkins_slave bash


vi /etc/hosts

add:
●●●●●●●●●●●●● bitbucket-insightsnow.COMPANY.com

where ●●●●●●●●●●●●● is a IP of bitbucket-nginx

to check run
docker inspect bitbucket-nginx
"Gateway": "192.168.128.1",



Then check on each Slave and Jenkins:
curl http://bitbucket-insightsnow.COMPANY.com/repos?visibility=public

You should receive the HTML page response.





" }, { "title": "Unable to Find Valid Certification Path to Requested Target (GBLUS)", "pageID": "164470045", "pageLink": "/pages/viewpage.action?pageId=164470045", "content": "

The following issue is caused by missing COMPANY - PBACA-G2.cer and RootCA-G2.cer in the java cacerts file.


Issue:

06:41:54 2020-12-24 06:41:52.843  INFO   --- [       Thread-4] c.consol.citrus.report.LoggingReporter   :  
FAILURE: Caused by: ResourceAccessException: I/O error on POST request for "https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/apidev/hcp":
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target; nested exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException:
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

https://jenkins-gbicomcloud.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/project%252Ffletcher/151/console 


Solution:

Log in to:

mapr@gbinexuscd01 - ●●●●●●●●●●●●●

docker exec -it nexus_jenkins_slave bash

cd /etc/ssl/certs/java

touch PBACA-G2.cer  - PBACA-G2.cer
touch RootCA-G2.cer  - RootCA-G2.cer

keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYInter -file PBACA-G2.cer -storepass changeit
keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYRoot -file RootCA-G2.cer -storepass changeit

next - docker exec -it nexus_jenkins_slave2 bash


Permanent Solution. TODO:

add PBACA-G2.cer and RootCA-G2.cer to /etc/ssl/certs/java/cacerts in Dockerfile:


COPY certs/PBACA-G2.cer /etc/ssl/certs/java/PBACA-G2.cer
COPY certs/RootCA-G2.cer /etc/ssl/certs/java/RootCA-G2.cer
RUN cd /etc/ssl/certs/java && keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYInter -file PBACA-G2.cer -storepass changeit -noprompt
RUN cd /etc/ssl/certs/java && keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYRoot -file RootCA-G2.cer -storepass changeit -noprompt

fix - nexus_jenkins_slave2 and nexus_jenkins_slave
" }, { "title": "Monitoring", "pageID": "411343429", "pageLink": "/display/GMDM/Monitoring", "content": "" }, { "title": "FLEX: Monitoring Batch Loads", "pageID": "513737976", "pageLink": "/display/GMDM/FLEX%3A+Monitoring+Batch+Loads", "content": "

Opening The Dashboard

Use one of links below:

Navigating The Dashboard

Use the selector in upper right corner to change the time range (for example Last 24 hours or Last 7 days).

\"\"

The search bar allows searching for a specific file name.


\"\"


The dashboard is divided into 5 main sections:

  1. File by type - how many files of each input type have been loaded. File types are: SAP, DEA, HIN, FLEX_340B, IDENTIFIERS, ADDRESSES, FLEX_BULK.
  2. File load status count - breakdown of each file type and final status of records from that file
  3. File load count - depiction of loads through time
  4. File load summary - most important section, containing detailed information about each loaded file:
    • File - file type
    • Start time/End time - start and end of file processing. Important note: this applies only to parsing, preprocessing and mapping the records - those are later loaded into Reltio asynchronously
    • File name
    • Status - indicates that the file processing has finished correctly, without interruption or failures
    • Load time
    • Bad Records - records that could not be parsed or mapped, usually due to malformed input
    • Input Entities - number of records (lines) that the file contained
    • Processed Entities - number of individual profiles extracted from the file. This number may be lower than Input Entities, for example due to input model requiring aggregation of multiple lines (SAP), skipping unchanged records (DEA) etc.
    • Created - number of profiles that were identified as missing from MDM and have been passed to Reltio
    • Updated - number of profiles that were identified as changed since last loaded and have been passed to Reltio
    • Post Processing - Only for DEA - number of profiles that are present in MDM, but were not present in the DEA file. In this case, the records will be deleted in MDM (but there is a limit of 22,000 deleted profiles per single file - security mechanism)
    • Skipped Entities - number of profiles that were not updated in Reltio, because their data has not changed since the last load. This is detected using records' checksums, calculated for each record while processing the file. Checksums are stored in MDM Hub's cache and compared with the future records
    • Suspended Entities - Only for DEA - number of profiles that could have been deleted from MDM, but were not due to the 22,000 delete limit being exceeded
    • Count
  5. Response status load summary - final statuses of loading the records into Reltio. Records are loaded asynchronously and their statuses are being gradually updated in this section, after the file is present in the File load summary section
" }, { "title": "Quality Gateway Alerts", "pageID": "438317787", "pageLink": "/display/GMDM/Quality+Gateway+Alerts", "content": "

Quality Gateway is MDM Hub's publishing layer framework responsible for detecting Data Quality issues before publishing an event downstream (to Kafka consumers or Snowflake). You can find more details on the Quality Gateway in the documentation: Quality Gateway - Event Publishing Filter

There are 4 statuses that an event (entity/relationship) can receive after being processed by the Quality Gateway:

AUTO_RESOLVED events mean that they were preceded by a BROKEN one, which signifies potential data problems or processing problems.

This is why we have implemented two alerts to track these statuses, which may be otherwise missed.

quality_gateway_auto_resolved_sum/quality_gateway_auto_resolved_event

Both alerts should be approached similarly, as it is expected that they always get triggered together and tell us about the same thing.

Pick an example from one of the quality_gateway_auto_resolved_event alerts and take the entity/relationship URI:

\"\"


Use Kibana's HUB Events dashboard to find all the recent events for this URI:

\"\"\"\"


If you find no events at first, try extending the time range (for example 7 days).

Scroll down to the event list and open each event. Under metadata.quality.* keys you will find Quality Gateway info:

\"\"


Find the first BROKEN event. Under metadata.quality.issues you will find the list of quality rules that this event did not pass. Quality rules from this list match quality Rules configured in the Event Publisher's config.

Example repository config file path (amer-prod): mdm-hub-cluster-env\\amer\\prod\\namespaces\\amer-prod\\config_files\\event-publisher\\config\\application.yml

\"\"


Quality rules are expressions written in Groovy. Every event passing the appliesTo filter must also pass the mustPass filter, otherwise it will be BROKEN.


Records in BROKEN state are saved in MongoDB along with the full event that triggered the rejection. For AUTO_RESOLVED and MANUALLY_RESOLVED it is a bit more tricky - record is no longer in MongoDB.

To find the exact event that triggered the rejection you can use the AKHQ - Publisher's and QualityGateway's input Kafka topic is ${env}-internal-reltio-proc-event. Keep in mind that the retention configured for this topic should be around 7 days - events older than that get automatically removed from the topic.

\"\"


Search by the entity/relationship URI in Key. Match the BROKEN event with Kibana by the timestamp.


There is an infinite number of ways in which an event can be broken, so some investigation will often be needed.

Most common cases until now:

Blank Profile

Description: when fetching the entity JSON through Postman, the JSON has no attributes, but entity is not inactive.

\"\"

This is not expected and should be reported to the COMPANY MDM Team.

RDM Temporary Failure

Description: all lookup attribute values in the entity JSON are having lookupErrors. At least one lookupCode per JSON is expected (unless there are no lookup attributes).

Good:

\"\"

Bad:

\"\"


This is not expected and should be reported to the COMPANY MDM Team.

For extra points, find the exact API request/response to which Reltio responded with lookupErrors and add it to the ticket. You can find the request/response in Kibana's component logs (Discover > amer-prod-mdmhub) in MDM Manager's logs - POST entitites/_byUris.



" }, { "title": "Thanos", "pageID": "411343433", "pageLink": "/display/GMDM/Thanos", "content": "
\n
\n
\n
\n

Components:

Thanos stack is running on monitoring host: amraelp00020595.COMPANY.com under /app/monitoring/prometheus/ orchestrated with docker-compose:

\n
-bash-4.2$ docker-compose ps \nNAME             IMAGE                                 COMMAND                  SERVICE          CREATED        STATUS         PORTS\nbucket_web       artifactory.p:main-7e879c6   "/bin/thanos tools b…"   bucket_web       3 weeks ago    Up 2 seconds   \ncompactor        artifactory.p:main-7e879c6   "/bin/thanos compact…"   compactor        44 hours ago   Up 44 hours    \nprometheus       artifactory.p...:v2.30.3     "/bin/prometheus --c…"   prometheus       3 weeks ago    Up 3 weeks     0.0.0.0:9090->9090/tcp, ...\nquery            artifactory.p:main-7e879c6   "/bin/thanos query -…"   query            3 weeks ago    Up 3 weeks     \nquery_frontend   artifactory.p:main-7e879c6   "/bin/thanos query-f…"   query_frontend   3 weeks ago    Up 3 weeks     \nrule             artifactory.p:main-7e879c6   "/bin/thanos rule --…"   rule             3 weeks ago    Up 3 weeks     \nstore            artifactory.p:main-7e879c6   "/bin/thanos store -…"   store            3 weeks ago    Up 3 weeks     \nthanos           artifactory.p:main-7e879c6   "/bin/thanos sidecar…"   thanos           3 weeks ago    Up 3 weeks     0.0.0.0:10901-10902->10901-10902/tcp,...
\n
\n
\n
\n
\n
\n
\n

Thonos (sidecar):

Thanos rule:
Thanos store:
Thanos bucket_web:
Thanos query_frontend:
Thanos query:
Thanos compactor


Thanos oveview dashbord: Thanos / Overview - Dashboards - Grafana (COMPANY.com) 



\n
\n
\n
\n

\"\"

\n
\n
\n
\n
\n
\n

General troubleshooting: 

Every troubleshooting starts with analyzing logs from component which is mentioned in alert. 
Thanos components logs always give clear information about the problem:

Typical procedure:

  • Check alerts
  • Check status of components with command: docker-compose ps 
  • Check component log if it is crashlooping: with command: docker-compose logs <name_of_component>


Alerts rules:

Below links to prometheus rules that can generate alerts: 

Knows issues: 

Thanos sidecar permission denied

Alart: after 24H ThanosCompactHalted

Description: thanos can't read shared folder with Prometheus

Solution:

  1. Check thanos logs: docker-compose logs thanos
  2. confirm issue "permission denied" accessing files
  3. Restart thanos with: docker-compose restart thanos


Compactor halted

Alart: ThanosCompactHalted.

Logs (docker-compose logs compactor)

\n
compactor         | ts=2024-03-25T13:23:43.380462226Z caller=compact.go:491 level=error msg="critical error detected; halting" err="compaction: group 0@3028247278749986641: compact blocks [/data/compact/0@3028247278749986641/01HSK9YKWVEDZGE9MF4XGARS58 /data/compact/0@3028247278749986641/01HSKBNHNJ9B1PC0NAYR5F67SJ /data/compact/0@3028247278749986641/01HSKDCFFEC9SZM5N5PTHK3TYM /data/compact/0@3028247278749986641/01HSKF3D9E0H1B4ZMAJ1YHKM1A]: populate block: chunk iter: cannot populate chunk 8 from block 01HSKDCFFEC9SZM5N5PTHK3TYM: segment index 0 out of range"
\n

Description: Chunk uploaded to S3 is broken

Solution:

  1. Go to https://mdm-monitoring.COMPANY.com/thanos-bucket-web/blocks
  2. Search for block 01HSKF3D9E0H1B4ZMAJ1YHKM1A
  3. Click on block
  4. Click on "Mark Deletion"
    \"\"
  5. Restart compactor with: docker-compose restart compactor 
  6. Verify if metric thanos_compact_halted returned to 0
    Grafana -> thanos_compact_halted  


Expired S3 keys

Alart: maybe not tested: ThanosSidecarBucketOperationsFailed

Description: thanos can't access S3:

  1. Check Thanos bucket page whether you can see data chunks from S3: https://mdm-monitoring.COMPANY.com/thanos-bucket-web/blocks
  2. Check components logs and confirm that Store, sidecar and bucket use old S3 keys
  3. Rotate S3 Keys 

High memory usage by store

Alart: - 

Description: thanos store consumed over then 20% node memory 

Solution: No clear solution what was the root cause


\n
\n
\n
" }, { "title": "Snowflake", "pageID": "218446612", "pageLink": "/display/GMDM/Snowflake", "content": "" }, { "title": "Dynamic Views Backwards Compatibility Error SOP", "pageID": "322555521", "pageLink": "/display/GMDM/Dynamic+Views+Backwards+Compatibility+Error+SOP", "content": "

For the process documentation please visit the following page:

Snowflake: Backwards compatibility

There are two artifacts that can be created for this process and will be delivered to the HUB-DL:

  1. breaking-changes.info - this file is created when an attribute changes its type from a lov to a non-lov value or vice-versa. Lov attributes have the *_LKP suffix in the column names for dynamic views therefore in this scenario there will be an additional column created and the data will be transferred to it. Bot columns will still be present in Snowflake. There is no action needed from the HUB end.

  2. breaking-changes.error - this file is only created when an existing column is converted into a nested value (is a parent value for multiple other attributes). Each nested value has a separate dynamic view that contains all of its attributes. The changes in this file are omitted in the snowflake refresh. When that kind of change will be discovered HUB will send information to Change Management and Delottie team to manage that case. 
" }, { "title": "How to Gather Detailed Logs from Snowflake Connector", "pageID": "234979546", "pageLink": "/display/GMDM/How+to+Gather+Detailed+Logs+from+Snowflake+Connector", "content": "


How To change the Kafka Consumer parameters in Snowflake Kafka Conenctor:

add do docker-compose.yml:

        environment:
          - "CONNECT_MAX_POLL_RECORDS=50"
          - "CONNECT_MAX_POLL_INTERVAL_MS=900000"
    recreate container.


How To enable JDBC TRACE on Snowflake Kafka Connector:

    JDBC TRACE LOGS are in the TMP directory:
    https://github.com/snowflakedb/snowflake-kafka-connector/pull/201/commits/650b92cfa362217ca4dfdf2c6768026e862a9b45

    add 
        environment:
          - "JDBC_TRACE=true"

     additionally you can enable trace on whole connector:

      - "CONNECT_LOG4J_LOGGERS=org.apache.kafka.connect=TRACE"

      more details here:

            https://docs.confluent.io/platform/current/connect/logging.html#connect-logging-docker

            https://docs.confluent.io/platform/current/connect/logging.html


    mount volume:
       volumes:
          - "/app/kafka-connect/prod/logs:/tmp:Z"

    recreate container.
    

    LOGS are in the:
        amraelp00007848:mdmuspr:[05:59 AM]:/app/kafka-connect/prod/logs> pwd
        /app/kafka-connect/prod/logs/snowflake_jdbc0.log.0
        
    Also gather the logs from the Container stdout:
        docker logs prod_kafka-connect-snowflake >& prod_kafka-connect-snowflake_after_restart_24032022_jdbc_trace.log
   


Additional details about DEBUG with snowflake debug:

https://docs.confluent.io/platform/current/connect/logging.html#check-log-levels

You can enable the DEBUG logs by editing the "connect" logfile. (it is different to the JDBC trace setting we used before)

This is the link to our doc explaining the log enabling: 
ttps://docs.snowflake.com/en/user-guide/kafka-connector-ts.html#reporting-issues
In more details, on the confluent documentation:
https://docs.confluent.io/platform/current/connect/logging.html#using-the-kconnect-api

It is also possible to use an API call:

 curl -s -X PUT -H "Content-Type:application/json" \\                        http://localhost:8083/admin/loggers/com.snowflake.kafka.connector \\-d '{"level": "DEBUG"}' | jq '.'


Share with Snowflake support. 
    

" }, { "title": "How to Refresh LOV_DATA in Lookup Values Processing", "pageID": "218446615", "pageLink": "/display/GMDM/How+to+Refresh+LOV_DATA+in+Lookup+Values+Processing", "content": "
  1. Log in to proper Snowflake instance (credentials are stored in ansible repository):
    1. NPROD:
      1. EMEA (EU) - https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
      2. AMER (US) - https://amerdev01.us-east-1.privatelink.snowflakecomputing.com
    2. PROD: 
      1. EMEA (GBL) - https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com
      2.  AMER (US) - https://amerprod01.us-east-1.privatelink.snowflakecomputing.com
  2. Set proper role, warehouse and database:
    1. example (EU): 

      DB NameCOMM_GBL_MDM_DMART_PROD

      Default warehouse name

      COMM_MDM_DMART_WH

      DevOps role name

      COMM_PROD_MDM_DMART_DEVOPS_ROLE
  3. Run commands in the following order:
    1. COPY INTO landing.lov_data from @landing.LOV_DATA_STG pattern='.*.json';
    2. call customer.refresh_lov();
    3. call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODES');
    4. call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODE_SOURCE_MAPPINGS');
    5. call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODE_TRANSLATIONS');
    6. REMOVE @landing.LOV_DATA_STG pattern='.*.json';

       
" }, { "title": "Issue: Cannot Execute Task, EXECUTE TASK Privilege Must Be Granted to Owner Role", "pageID": "196884458", "pageLink": "/display/GMDM/Issue%3A+Cannot+Execute+Task%2C+EXECUTE+TASK+Privilege+Must+Be+Granted+to+Owner+Role", "content": "

Environment details:

SF: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com

db: COMM_EU_MDM_DMART_DEV

schema: CUSTOMER

role: COMM_GBL_MDM_DMART_DEV_DEVOPS_ROLE

Issue:

The command is working fine:

\n
CREATE OR REPLACE TASK customer.refresh_customer_sl_eu_legacy_views\n   WAREHOUSE = COMM_MDM_DMART_WH\n   AFTER customer.refresh_customer_consolidated_views\nAS\nCALL customer.refresh_sl_views('COMM_EU_MDM_DMART_DEV_DB','CUSTOMER','COMM_GBL_MDM_DMART_DEV_DB','CUSTOMER_SL','%','I','M', false);\nALTER TASK customer.refresh_customer_sl_eu_legacy_views resume;
\n


The command that is causing the issue:

\n
ALTER TASK customer.refresh_customer_consolidated_views resume;\n\nSQL Error [91089] [23001]: Cannot execute task , EXECUTE TASK privilege must be granted to owner role
\n


Solution:

  1. http://btondemand.COMPANY.com/getsupport
  2. Choose Snowflake
  3. \"\"
    1. Issue:
      1. Describe your issue - Cannot execute task, EXECUTE TASK privilege must be granted to owner role
      2. Please provide a detailed description:
        1. Hi Team,
          We are facing the following issue:
          SQL Error [91089] [23001]: Cannot execute task, EXECUTE TASK privilege must be granted to owner role
          during the execution of the following command:
          ALTER TASK customer.refresh_customer_consolidated_views resume;

          Environment details:
          HOST: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
          DB: COMM_EU_MDM_DMART_DEV
          SCHEMA: CUSTOMER
          ROLE: COMM_GBL_MDM_DMART_DEV_DEVOPS_ROLE

          Could you please fix this issue in DEV/QA/STAGE and additionally on PROD:
          HOST: https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com

          Please let me know if you need any other details.

    2. Created ticket for reference: - http://digitalondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=RF3372664 


" }, { "title": "PTE: Add Country", "pageID": "302686106", "pageLink": "/display/GMDM/PTE%3A+Add+Country", "content": "

There are two files in the Snowflake Bitbucket repo that are used in the deployment for PTE:

src/sql/global/pte_sl/tables/driven_tables.sql

src/sql/global/pte_sl/views/report_views.sql


driven_tables.sql

This file contains the definitions of supporting tables used for the calculation of the PTE_REPORT view.

DRIVEN_TABLE2_STATIC contains the list of identifiers per country and the column placement in the pte_report view. There can be a maximum of five identifiers per country and they should be provided by the PTE team. If there are no identifiers added for a country in the table the list of identifiers will be calculated "dynamically" based on the number of HCPs having the identifier.

Column nameDescription
ISO_CODEISO2 code of the country ie. 'TR', 'FR', 'PL' etc.
CANONICAL_CODERDM code that will appear in PTE_REPORT as IDENTIFIER_CODE
LANG_DESCRDM code description that will appear in PTE_REPORT as IDENTIFIER_CODE_DESC
CODE_IDTYPE_LKP value used to connect to the identifiers table to extract the value.
MODEL'p' or 'i' showing whether the codes for the country should be taken from the IQVIA ('i') or COMPANY ('p') data model.
ORDER_IDA number from 1 to 5. Showing the placement of the code among identifiers. Code from 1 will be mapped to IDENTIFIER1_CODE etc.

report_views.sql

DRIVEN_TABLE1 is a view that derives the basic information for the country from the COUNTRY_CONFIG table. The country ISO2 code has to be added into the WHERE clause depending on whether the country should have data from the IQVIA data model (the first part of the query) or from the COMPANY data model (after the UNION)

\n
\n DRIVEN_TABLE1 Expand source\n
\n
\n
CREATE OR REPLACE VIEW PTE_SL."DRIVEN_TABLE1" AS(\nSELECT\n    ISO_CODE,\n    NAME,\n    LABEL,\n    RELTIO_TENANT,\n    HUB_TENANT,\n    SF_INSTANCE,\n    SF_TENANTDATABASE,\n    CUSTOMERSL_PREFIX\nFROM CUSTOMER.COUNTRY_CONFIG \nWHERE ISO_CODE in ('SK', 'PH', 'CL', 'CO', 'AR', 'MX')\nAND CUSTOMERSL_PREFIX = 'i_'\nUNION ALL\nSELECT\n    ISO_CODE,\n    NAME,\n    LABEL,\n    RELTIO_TENANT,\n    HUB_TENANT,\n    SF_INSTANCE,\n    SF_TENANTDATABASE,\n    CUSTOMERSL_PREFIX\nFROM CUSTOMER.COUNTRY_CONFIG\nWHERE ISO_CODE in ('AD', 'BL', 'BR', 'FR', 'GF', 'GP', 'MC', 'MC', 'MF', 'MQ', 'MU', 'NC', 'PF', 'PM', 'RE', 'TF', 'WF', 'YT')\nAND CUSTOMERSL_PREFIX = 'p_'\n);
\n
\n


PTE_REPORT this is the view from which the clients take their data. Unfortunately the data required varies from country to country and also is some cases between nprod and prod due to data availability.

GO_STATUS. By default for the IQVIA data model the values for GO_STATUS are YES/NO and for the COMPANY data model they're Y/N if there's an exception you have to manually add the country to the case in the view.

\n
\n GO_STATUS Expand source\n
\n
\n
CAST(CASE\n    WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:GO' AND HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'Y'\n    WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:NGO' AND HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'N'\n    WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'YES'\n    WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'NO'\n\tWHEN HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'N'\n    ELSE 'NO'\nEND AS VARCHAR(200)) AS "GO_STATUS",
\n
\n
" }, { "title": "QC", "pageID": "234712311", "pageLink": "/display/GMDM/QC", "content": "

Snowflake QC Check data is located in the CUSTOMER.QUALITY_CONTROL table.


Duplicated COMPANY_GLOBAL_CUSTOMER_ID

sql:

SELECT COMPANY_global_customer_id, COUNT(1)
FROM customer.entities
WHERE COMPANY_global_customer_id is not null
AND last_event_type not like '%LOST_MERGE%'
AND last_event_type not like '%REMOVED%'
GROUP BY COMPANY_global_customer_id
HAVING COUNT(1) > 1

Description:

COMPANY Global Customer ID should be unique for every entity in Reltio. In case of any duplicates you have to check if it's a Snowflake data refresh issue (data is OK in Reltio not in Snowflake), or something is wrong with the flow (check if the id's are duplicated in COMPANYIdRegistry in Mongo). 


Merges with object data

sql:

SELECT ENTITY_URI
FROM CUSTOMER.ENTITIES
WHERE LAST_EVENT_TYPE IN ('HCP_LOST_MERGE', 'HCO_LOST_MERGE', 'MCO_LOST_MERGE')
AND OBJECT IS NOT NULL

Description:

All entities in the *Lost_Merge status should have null values in the object column. If that's not the case they have to be cleared manually either by re-sending the specified record to Snowflake or by manually setting the object field for them as null. 


Active crosswalks assigned to more than one different entity

sql:

SELECT CROSSWALK_URI
FROM
CUSTOMER.M_ENTITY_CROSSWALKS
WHERE ACTIVE = TRUE
AND ACTIVE_CROSSWALK = TRUE
GROUP BY CROSSWALK_URI
HAVING COUNT(ENTITY_URI) > 1


Description:

A crosswalk should be active for only one entity_uri. If that's not the case then either the entities should be merged (contact: DLER-COMPANY-MDM-Support <COMPANY-MDM-Support@iqvia.com>) or they were merged but the lost_merge event wasn't delivered to snowflake / mdm_hub.


Duplicated entities in materialized views

sql:

SELECT ENTITY_URI, 'HCO' TYPE, COUNT(1)
FROM CUSTOMER.M_HCO
GROUP BY ENTITY_URI
HAVING COUNT(1) > 1
UNION ALL
SELECT ENTITY_URI, 'HCP' TYPE, COUNT(1)
FROM CUSTOMER.M_HCP
GROUP BY ENTITY_URI
HAVING COUNT(1) > 1

Description:

There are duplicated records in materialized tables. Investigate what caused the duplicates and run the full materialization procedure to fix it.


Entities with the same global id and parent global id

sql:

SELECT ENTITY_URI, COMPANY_GLOBAL_CUSTOMER_ID, PARENT_COMPANY_GLOBAL_CUSTOMER_ID
FROM CUSTOMER.ENTITIES
WHERE COMPANY_GLOBAL_CUSTOMER_ID = PARENT_COMPANY_GLOBAL_CUSTOMER_ID
AND COMPANY_GLOBAL_CUSTOMER_ID IS NOT NULL

Description:

Check if this is the case in the hub. If not re-send the data into snowflake if yes than contact the support team.


Missing ID's for specializations:

sql:

SELECT ENTITY_URI
FROM CUSTOMER.M_SPECIALITIES
WHERE SPECIALITIES_URI IS NULL

Description:

Review the affected entities. If their missing an id review them with the hub. Make sure they're active in Reltio and Hub. You might have to reload it in snowflake if it's not updated.


" }, { "title": "Snowflake - Prometheus Alerts", "pageID": "401026870", "pageLink": "/display/GMDM/Snowflake+-+Prometheus+Alerts", "content": "

SNOWFLAKE TASK FAILED

Description: This alert means that one of the regularly scheduled snowflake tasks have failed. To fix this you have to find the task that was failed in Snowflake, check the reason, and fix it. Snowflake task dag's have an auto suspend function after ten conscutive failed runs, if the issue isn't resolved at the time you'll need to manually restart the root task.

Queries:

  1. Idnetify failed tasks

    \n
    SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000, ERROR_ONLY=>TRUE))\n;
    \n
  2. Use the ERROR_CODE and ERROR_MESSAGE columns to find out the information needed to determine the cause of the error.
  3. After determining and fixing the cause of the issue you can manually run all the queries that are left in the task tree. To get them you can use the following code:

    \n
    SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS('<task_name>'))\n;
    \n

    Remember that if a schema isn't selected for the session you need submit it with the task name.
    You can also use the execute task query with the RETRY LAST option to restart the flow. This will only work if a new run wasn't started yet and you have to run it on the root task not the task that failed.

    \n
    EXECUTE TASK <root_task_name> RETRY LAST;
    \n

    SNOWFLAKE TASK FAILED 603

    Description: This alert means that one of the regularly scheduled snowflake tasks have failed. To fix this you have to find the task that was failed in Snowflake, check the reason, and fix it. Snowflake task dag's have an auto suspend function after ten conscutive failed runs, if the issue isn't resolved at the time you'll need to manually restart the root task.

    Queries:

    1. Idnetify failed tasks

      \n
      SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000, ERROR_ONLY=>TRUE))\n;
      \n
    2. You can manually run all the queries that are left in the task tree. To get them you can use the following code:

      \n
      SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS('<task_name>'))\n;
      \n

      Remember that if a schema isn't selected for the session you need submit it with the task name.
      You can also use the execute task query with the RETRY LAST option to restart the flow. This will only work if a new run wasn't started yet and you have to run it on the root task not the task that failed.

      \n
      EXECUTE TASK <root_task_name> RETRY LAST;
      \n

SNOWFLAKE TASK NOT STARTED 24h

Description: A Snowflake scheduled task hasn't run in the last day. You need to check if the alert is factually correct and solve any issues that are stopping the task from running. Please note that on production the materialization is scheduled every two hours, so if a materialization task isn't run for 24h that means that we missied twelve materialization cycles of data, hence it's important to get it fixed as soon as possible.

Queries:

  1. Check when the task was last run

    \n
    SELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000))\nWHERE 1=1\nAND DATABASE_NAME ='<database_name>'\nAND NAME = '<task_name>'\nORDER BY QUERY_START_TIME DESC\n;
    \n
  2. If the task is running succesfully the issue might be with prometheus data scraping. Check the following dashboard to see when the data was last succesfully scraped:
    Snowflake Tasks - Dashboard

    If the task wasn't run in the last 24h. It might be suspended. Verify it using the command:

    \n
    SHOW TASKS;
    \n

    The column STATE will tell you if the task is suspended or started, and the LAST_SUSPENDED_REASON columns will tell you what was the reason of the last suspension. If it's SUSPENDED_DUE_TO_ERRORS you need to get the list of all of the dependent tasks and find which one of the failed (reminder: the root task gets suspended if any of the child tasks faila ten times in a row). To find out the failed task and the dependants of the suspended task you can use the queries from the alert SNOWFLAKE TASK FAILED.

  3. To restart a suspended task run the query:

    \n
    ALTER TASK <schema_name>.<task_name> resume;
    \n

SNOWFLAKE DUPLICATED COMPANY GLOBAL CUSTOMER ID'S

Description: COMPANY Global Customer Id's are unique identifiers calculated by the Hub. In some cases of wrongly done unmerge events on Reltio side there might be entities with wrongly assigned hub-callback crosswalks, or there might be another reason that caused the duplicates. The ID's need to be unique so ti should be verified, fixed, and the data reloaded in a timely manner.

Queries:

  1. Identify COMPANY global customer id's with duplicates:

    \n
    SELECT COMPANY_global_customer_id, COUNT(1)\nFROM customer.entities\nWHERE COMPANY_global_customer_id is not null\nAND last_event_type not like '%LOST_MERGE%'\nAND last_event_type not like '%REMOVED%'\nGROUP BY COMPANY_global_customer_id\nHAVING COUNT(1) > 1\n;
    \n



    Variant of the query that returns entity uri's for easier querying:

    \n
    SELECT ENTITY_URI\nFROM CUSTOMER.ENTITIES\nWHERE COMPANY_GLOBAL_CUSTOMER_ID IN (\n    SELECT COMPANY_global_customer_id\n    FROM customer.entities\n    WHERE COMPANY_global_customer_id is not null\n    AND last_event_type not like '%LOST_MERGE%'\n    AND last_event_type not like '%REMOVED%'\n    GROUP BY COMPANY_global_customer_id\n    HAVING COUNT(1) > 1\n)\n;
    \n
  2. Check if the duplicates are reflected in MongoDB. If the data in Mongo doesn't have the duplicates use hub ui to resend the events to Snowflake.
  3. Check if Reltio contains the duplicated data if not reconcile the affected entities, if yes review the reason. If it's because of a Hub_Callback you might need to manually delete the crosswalk, and check COMPANYIDRegistry in Mongo, if it also contains duplicates that you need to delete it there also.

SNOWFLAKE LAST ENTITY EVENT TIME

Description: The alert informs of Snowflake production tenants where the last update was more than four hours ago. The refresh on production is every two hours and the traffic is high enough that there should be updates in every cycle.

Queries:

  1. Check how many minutes ago was the last update in Snowflake

    \n
    SELECT DATEDIFF('MINUTE', (SELECT MAX(SF_UPDATE_TIME) FROM CUSTOMER.ENTITIES), (SELECT CURRENT_TIMESTAMP()));
    \n
  2. If it's over four hours check the kafka snowflake topic if it has an active consumer and if the data is flowing correctly to the landing schema. Review any latest changes in Snowflake refresh to make sure that there's nothing impacting the tasks and they're all started.
  3.  If the data in snowflake is OK than the issue might be with the data scrape.
    Snowflake Tasks - Dashboard

SNOWFLAKE MISSING COMPANY GLOBAL ID'S IN MATERIALIZED DATA

Description: This alert informs us that there are entities in Snowflake that don't have a COMPANY Global Customer ID. This is a mandatory identifier and as such should be available for all event types (excluding DCR's). It's also used by down steram clients to identify records and in case the value is deleted from an entity it will be deleted in the down streams.

Queries:

  1. Check the impact in the qc table:

    \n
    SELECT *\nFROM CUSTOMER.QC_COMPANY_ID\nORDER BY DATE DESC\n;
    \n
  2. Get the list of all entities that are missing the id's

    \n
    SELECT *\nFROM CUSTOMER.ENTITIES\nWHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL\nAND ENTITY_TYPE != 'DCR'\nAND COUNTRY != 'US'\nAND (SELECT CURRENT_DATABASE()) not ilike 'COMM_EU%'\n;
    \n
  3. Check the data in Mongo, AKHQ, Reltio.
  4. Consider informing down stream cleints to stop ingestion of the data until the issue is fixed

SNOWFLAKE GENERATED EVENTS WITHOUT COMPANY GLOBAL CUSTOMER ID'S

Description: This alert stops events without COMPANY Global Customer ID's from reaching the materialized data layer. It will add information about this occurences into a special table and delete those events before materialization.

Queries:

  1. Check the list of impacted entity_uri's

    \n
    SELECT *\nFROM CUSTOMER.MISSING_COMPANY_ID\n;
    \n
  2. Check for the reason of missing COMPANY Global Customer Id's similiarly to missing global id's in materialized data alaer.
  3. After finding and fixnig the reason of the issue use Hub UI to resend the profiles into Snowflake to make sure we have the correct data.
  4. Clear the missing COMPANY id table

    \n
    TRUNCATE TABLE CUSTOMER.MISSING_COMPANY_ID;
    \n

SNOWFLAKE TOPIC NO CONSUMER

Description: The Kafka Connector from Mongo to Snowflake has data which isn't consumed.

Queries:

  1. Check if the consumer is online you might have to restart it's pod to get it working again.


SNOWFLAKE VIEW MATERIALIZATION FAILED

Description: This alert informs you that one or more views have failed in their last materialization attempt. The alert checks the data from CUSTOMER.MATERIALZED_VIEW_LOG table for the last seven days and chooses the last materialization attempt based on the largest id.

Queries:

  1. Query that the alert is based upon

    \n
    SELECT COUNT(VIEW_NAME) FAILED_MATERIALIZATION\nFROM (\n    SELECT VIEW_NAME, MAX(ID) ID, SUCCESS, ERROR_MESSAGE, MATERIALIZED_OPTION, ROW_NUMBER() OVER (PARTITION BY VIEW_NAME ORDER BY ID DESC) AS RN\n    FROM CUSTOMER.MATERIALIZED_VIEW_LOG\n    GROUP BY VIEW_NAME, ERROR_MESSAGE, ID, SUCCESS, MATERIALIZED_OPTION\n    HAVING DATEDIFF('days', MAX(START_TIME),  (SELECT CURRENT_DATE())) < 7\n)\nWHERE RN = 1\nAND SUCCESS = 'FALSE';
    \n
  2. Modified version that will show you the error message that Snowflake ended the materialization attempt. Those are standard SQL errors on which you have to find out the root cause and the resolution of the issue.

    \n
    SELECT VIEW_NAME, ERROR_MESSAGE\nFROM (\n    SELECT VIEW_NAME, MAX(ID) ID, SUCCESS, ERROR_MESSAGE, MATERIALIZED_OPTION, ROW_NUMBER() OVER (PARTITION BY VIEW_NAME ORDER BY ID DESC) AS RN\n    FROM CUSTOMER.MATERIALIZED_VIEW_LOG\n    GROUP BY VIEW_NAME, ERROR_MESSAGE, ID, SUCCESS, MATERIALIZED_OPTION\n    HAVING DATEDIFF('days', MAX(START_TIME),  (SELECT CURRENT_DATE())) < 7\n)\nWHERE RN = 1\nAND SUCCESS = 'FALSE';
    \n

SNOWFLAKE MISSING DESC IN CODES VIEW

Description: This alert indicates that there are codes without descriptions in the CUSTOMER.M_CODES data table.

Queries:

  1. Check the missing data:

    \n
    SELECT CODE_ID, DESC\nFROM CUSTOMER.M_CODES\nWHERE DESC IS NULL;
    \n
  2. Check the Dynamic view to make sure it's not a materialization issue:

    \n
    SELECT CODE_ID, DESC\nFROM CUSTOMER.CODES\nWHERE DESC IS NULL;
    \n
  3. If it's a materialization issue then rematerialize the table.

    \n
    CALL CUSTOMER.MATERIALIZE_VIEW_FULL_REFRESH('M', 'CUSTOMER', 'CODES');
    \n
  4. If the data is missing in the dynamic view, check the code in RDM. If it has a source mapping from the source Reltio with the canonical value set to true, then it should have data in Snowflake. Check why it isn't flowing. If there is no such entry notify COMPANY team.


" }, { "title": "Release", "pageID": "386809112", "pageLink": "/display/GMDM/Release", "content": "

Release history:

4.1.24 [TEMPLATE - draft]

\n

4.1.24 [TEMPLATE - example]

\n

4.1.28

\n

4.1.29

\n

4.10.0

\n

4.11.0

\n

4.11.1

\n

4.12.0

\n

4.12.1

\n

4.12.2

\n

4.14.0

\n

4.14.1

\n

4.15.0

\n

4.16.0

\n

4.16.1

\n

4.17.0

\n

4.18.0

\n

4.18.1

\n

4.19.0

\n

4.21.0

\n

4.22.0

\n

4.23.0

\n

4.25.0

\n

4.28.0

\n

4.3.0

\n

4.30.0

\n

4.31.0

\n

4.32.0

\n

4.33.0

\n

4.34.0

\n

4.35.0

\n

4.38.0

\n

4.39.0

\n

4.40.0

\n

4.41.0

\n

4.42.0

\n

4.43.0

\n

4.44.0

\n

4.45.0

\n

4.46.0

\n

4.47.0

\n

4.47.1

\n

4.48.0

\n

4.49.0

\n

4.50.0

\n

4.51.0

\n

4.54.0

\n

4.54.1

\n

4.55.0

\n

4.56.0

\n

4.58.0

\n

4.59.0

\n

4.6.0

\n

4.60.0

\n

4.62.0

\n

4.63.0

\n

4.9.0

\n

Snowflake Release

\n


Release process description (TBD):

Text:

Diagram:

How branches work, differences between release and FIX deployemend(TBD):

Text:

Diagram:


Release rules:

  1. Always do PR review.
  2. Do not deploy unencrypted files.
  3. Release versioning: normal path 4.x, FIX version 4.10.x
  4. TBD
  5. TBD


Release calendar:

TBD




" }, { "title": "Snowflake Release", "pageID": "430080179", "pageLink": "/display/GMDM/Snowflake+Release", "content": "" }, { "title": "Current Release", "pageID": "438309059", "pageLink": "/display/GMDM/Current+Release", "content": "


Release report:

Release:2.2.0Release date:

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Grzegorz SzczęsnyPlanned GO-LIVE:wed Jul 03
Jira linkCategoryDescriptionDeveloped ByDevelopment FinishedTested By Test Scenarios / ResultsTesting FinishedAdditional Notes

\n MR-9001\n -\n Getting issue details...\n STATUS\n
\n MR-8942\n -\n Getting issue details...\n STATUS\n

Feature ChangeUpdate the data mart with code changes needed for Onekey and DLUP data.SZCZEG0102.07.2024SARMID03Done validating below:
✅Onekey Data Mapping.
✅ DLUP Data Mapping.
03.07.2024

\n MR-9056\n -\n Getting issue details...\n STATUS\n

Feature ChangeUpdate the Country Table for Transparency_SL with new data.SZCZEG0102.07.2024SARMID03✅New data passed the checking.03.07.2024

\n MR-8988\n -\n Getting issue details...\n STATUS\n

ChangeImproved the MATERIALIZE_VIEW_INCREMENTAL_REFRESH procedure to cover 5 options, that were previously covered by 5 separate procedures and replaced their use with the new oneHARAKR02.07.2024












PROD deployment report:

PROD deployment date:Wed Jun 26 12:27:48 UTC 2024

Deployed by:Grzegorz Szczęsny
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS


GLOBAL

SUCCESS


" }, { "title": "2.1.0", "pageID": "430080184", "pageLink": "/display/GMDM/2.1.0", "content": "


Release report:

Release:2.1.0Release date:Wed Jun 26 12:27:48 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Grzegorz SzczęsnyPlanned GO-LIVE:wed Jun 19
Jira linkCategoryDescriptionDeveloped ByDevelopment FinishedTested By Test Scenarios / ResultsTesting FinishedAdditional Notes

\n MR-8919\n -\n Getting issue details...\n STATUS\n

New FeaturePOC - The point of this ticket is to check if calculating a delta based on the SF_UPDATE_TIME from the materialized ENTITY_UPDATE_DATES table will be more efficient than using the stream. If this results in better performance than we're going to calculate deltas on our base tables dropping the streams.SZCZEG0128.05.2024SZCZEG01

Verified the change on times and the data quality by running the procedures simultanously on EMEA STAGE for a period of time

old:
\"\"


new:

\"\"



\n MR-8862\n -\n Getting issue details...\n STATUS\n

New FeatureDue to a change done in RDM we lost some descriptions for certain codes. It's important that we have the visibility for such issues in the future, therefore the need for this alert.SZCZEG0129.05.2024-New alert in Prometheus no need for additional testing--

\n MR-8969\n -\n Getting issue details...\n STATUS\n

ChangeAdjusted TRANSPARENCY_SL views to filter based on COUNTRY code (COMPANY model vs iquvia)HARAKR13.06.2024



\n MR-9003\n -\n Getting issue details...\n STATUS\n

ChangeUdate TRANSPARENCY_SL schema to Secure Views instead of views, due to the need to have the data from EMEA PROD available in AMER lower envs.SZCZEG0121.06.2024-Checked the view type on PROD--

\n MR-8986\n -\n Getting issue details...\n STATUS\n

ChangeChenge the way incremental code updates treat hard deleted lov's.SZCZEG0118.06.2024SZCZEG01


\n MR-8740\n -\n Getting issue details...\n STATUS\n

ChangeSuspend the WAREHOUSE_SUSPEND task.SZCZEG0118.04.2024-Pushed diretly to PROD--

\n MR-8701\n -\n Getting issue details...\n STATUS\n

New FeatureAdd new views in the PT&E schema for Saudi Arabia HCO / IDENTIFIERSSZCZEG0118.04.2024-Checked the views availability and record counts.--

\n MR-8712\n -\n Getting issue details...\n STATUS\n

BugfixFix a case where column order changes and it causes global views to not update properly.SZCZEG0118.04.2024SZCZEG01Rerun the case that cause the issue--

\n MR-8827\n -\n Getting issue details...\n STATUS\n

ChangeAdd email column to PT&E EU/APAC reportsSZCZEG0122.05.2024SZCZEG01Checked the column availability--

\n MR-8863\n -\n Getting issue details...\n STATUS\n

ChangeAdd a case for code materialization where there are more than one descriptions from the source Reltio but not all of them are CanonicalValues.SZCZEG0122.05.2024SZCZEG01Checked with the existing misisng descriptions.--

\n MR-7038\n -\n Getting issue details...\n STATUS\n

New FeatureAdd enchanced logging for manually called procedures.SZCZEG0122.05.2024SZCZEG01---

\n MR-8896\n -\n Getting issue details...\n STATUS\n

ChangeRemove DE from PTE_REPORT_EU, change values "Without Title", "Unknown", and "Unspecified" to null.SZCZEG0122.05.2024SZCZEG01---

\n MR-8916\n -\n Getting issue details...\n STATUS\n

ChangeRemove "Unknown" Country Codes from missing COMPANY global customer id's.SZCZEG0128.05.2024SZCZEG01---

\n MR-8994\n -\n Getting issue details...\n STATUS\n

ChangeUpdata column names for PTE_REPORT_SA.SZCZEG0118.06.2024SZCZEG01---

\n MR-8992\n -\n Getting issue details...\n STATUS\n

ChangeAdd missing columns to the Transparency_SL reports (MVP1 review).SZCZEG0118.06.2024SZCZEG01---

\n MR-8980\n -\n Getting issue details...\n STATUS\n

ChangeAdd US data into the Global DataMart TRANSPARENCY_SL.SZCZEG0118.06.2024SZCZEG01---

\n MR-8977\n -\n Getting issue details...\n STATUS\n

ChangeAdd hard coded columns to the TRANSPARENCY_SL data mart.SZCZEG0118.06.2024SZCZEG01---

\n MR-8844\n -\n Getting issue details...\n STATUS\n

New FeatureCreate Initial Data Mart for the TRANSPARENCY_SL project.SZCZEG0118.06.2024SZCZEG01---

\n MR-9016\n -\n Getting issue details...\n STATUS\n

BugfixFix on MR-8986. The procedure was launched in the landing schema but it tried to use a function that is only available in customer. Not finding the function in the current schema it returned an errorSZCZEG0125.06.2024SZCZEG01---

\n MR-8991\n -\n Getting issue details...\n STATUS\n

New FeatureChange refreh entities to use a calculated delta instead of strems. Followup to POC MR-8919.SZCZEG0118.06.2024SZCZEG01---

PROD deployment report:

PROD deployment date:Wed Jun 26 12:27:48 UTC 2024

Deployed by:Grzegorz Szczęsny
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/AMER/job/deploy_mdmhub_snowflake__amer_prod/165/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/APAC/job/deploy_mdmhub_snowflake__apac_prod/135/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/EMEA/job/deploy_mdmhub_snowflake__emea_prod/218/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/GBL/job/deploy_mdmhub_snowflake__gbl_prod/238/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/GBLUS/job/deploy_mdmhub_snowflake__gblus_prod/229/

SUCCESS


GLOBAL

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/GLOBAL/job/deploy_mdmhub_snowflake__global_prod/57/

SUCCESS


\"\"CHANGELOG_2_1_0.md

" }, { "title": "4.1.24 [TEMPLATE - draft]", "pageID": "386815558", "pageLink": "/pages/viewpage.action?pageId=386815558", "content": "

Release report:

Release:4.1.24Release date:Tue Jan 16 21:08:10 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:TODOPlanned GO-LIVE:Tue Jan 30 (in 2 weeks)
StageLinkStatusComments (images 600px)
Build:TODO

SUCCESS 


CHANGELOG:TODO



Unit tests:TODO

SUCCESS

TODO

Integration tests:

Execution date: TODO

Executed by: TODO

AMERTODO

[84] SUCCESS

[0] FAILED

[0] REPEATED


TODO

APACTODO

[89] SUCCESS

[0] FAILED

[0] REPEATED


TODO

EMEATODO

[89] SUCCESS

[0] FAILED

[0] REPEATED


TODO

GBL(EX-US)TODO

[72] SUCCESS

[0] FAILED

[0] REPEATED


TODO
GBLUSTODO

[74] SUCCESS

[0] FAILED

[0] REPEATED


TODO

Tests ready and approved:
  • approved by: TODO
Release ready and approved:
  • approved by: TODO


DEV and QA tests results:

DEV and QA deployment date:TODO Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: TODO
Deployed by:TODO
ENV:LinkStatusDetails
AMERTODO

SUCCESS


APACTODO

SUCCESS


EMEA

TODO

SUCCESS


GBL(EX-US)

TODO

SUCCESS


GBLUS

TODO

SUCCESS 



STAGE deployment details:

STAGE deployment date:TODO Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: TODO
Deployed by:TODO
ENV:LinkStatusDetails
AMERTODO

SUCCESS


APACTODO

SUCCESS


EMEA

TODO

SUCCESS


GBL(EX-US)

TODO

SUCCESS


GBLUS

TODO

SUCCESS 



STAGE test phase details:

Verification date



Verification by


Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong



MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue

MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)

General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is to prone to project specific loads



Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / kubernetes-persistent-volumes 

Storage trend over time 



General / Alerts Statistics 

Increase after release → potential issue 



General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period



PROD deployment report:

PROD deployment date:TODO Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: TODO
Deployed by:TODO
ENV:LinkStatusDetails
AMERTODO

SUCCESS


APACTODO

SUCCESS


EMEA

TODO

SUCCESS


GBL(EX-US)

TODO

SUCCESS


GBLUS

TODO

SUCCESS


PROD deploy hypercare details:

Verification date



Verification by


Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong



MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue

MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)

General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is to prone to project specific loads



Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / kubernetes-persistent-volumes 

Storage trend over time 



General / Alerts Statistics 

Increase after release → potential issue 



General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period




" }, { "title": "4.1.24 [TEMPLATE - example]", "pageID": "386809114", "pageLink": "/pages/viewpage.action?pageId=386809114", "content": "

Release report:

Release:4.1.24Tue Jan 16 21:08:10 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Mikołaj MorawskiTue Jan 30 (in 2 weeks)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/467/ 

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/387d6b51ebf7ade55692d80388d81e3c1e59117d 



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/467/testReport/ 

SUCCESS

\"\"

Integration tests:

Execution date: Wed Jan 24 18:01:08 UTC 2024

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/372/testReport/

[84] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/314/testReport/

[89] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/466/testReport/

[88] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/384/testReport/

[73] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • failed tests - DerivedHcpAddressesTestCase.derivedHCPAddressesTest 
    • during run on Reltio there were multiple events and test got bloced
    • Test was repeated manually and passed with success 
      • <screenshot from local execution>
GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/321/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
Release ready and approved:
  • approved by: Mikołaj Morawski 


STAGE deployment details:

STAGE deployment date:Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski 
Deployed by:Mikołaj Morawski
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/331/

SUCCESS

comments

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/145/

SUCCESS

comments

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/365/

SUCCESS

comments

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/211/

SUCCESS

comments

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/234/

SUCCESS 

comments


STAGE test phase details:

Test Test description ResponsibleStatus
Alerts verificationTo check if any of alerts in STG environments is a prod deployment release stopper. e.g. Latuch, Lukasz 

e.g. SUCCESS

SnowFlake checkTo check if there are any QC checks or tasks failed that can happend on prod environments. 

Data Quality GatewayTo check if there are any broken events. 

Environment check

To check if there are any issues on STG environment that can be a PROD release stopper



TBD


TBD


PROD deployment report:

PROD deployment date:Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski 
Deployed by:Mikołaj Morawski
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/255/ 

SUCCESS

comments

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/226/

SUCCESS

comments

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/270/

SUCCESS

comments

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/195/

SUCCESS

comments

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/229/

SUCCESS

comments



" }, { "title": "4.1.28", "pageID": "386815544", "pageLink": "/display/GMDM/4.1.28", "content": "

Release report:

Release:4.1.28Release date:Thu Feb 08 10:10:38 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Rafał KućPlanned GO-LIVE:Thu Feb 29
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/470/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/966ebe3374d1de8d89764bbf5fd4e39e638a5723#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/39953783022e8b06c49af2e872b7cf66f2a8b26b



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/470/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Feb 13 18:00:57 UTC 2024

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/391/testReport/

[84] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • one failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.test
    • repeated from local PC one more time by Mikołaj Morawski
    • during run on Reltio there were multiple events and test got blocked
    • Test was repeated manually and passed with success
    • \"\"
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/330/testReport/

[89] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • one failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.test
    • repeated from local PC one more time by Mikołaj Morawski
    • during run on Reltio there were multiple events and test got blocked
    • Test was repeated manually and passed with success
    • \"\"
EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/485/testReport/

[88] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • one failed test -  com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldCreateHCPOneKeyRedirectToReltio
    • repeated from local PC one more time by Mikołaj Morawski
    • during run on Reltio there were multiple events and test got blocked
    • Test was repeated manually and passed with success
    • \"\"
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/395/testReport/

[73] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/332/testReport/

[74] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

  • one failed test -  com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdSearchOnLostMergeEntitiesTest.test
    • repeated from local PC one more time by Mikołaj Morawski
    • during run on Reltio there were multiple events and test got blocked
    • Test was repeated manually and passed with success
    • \"\"
Tests ready and approved:
  • approved by: Mikołaj Morawski
Release ready and approved:
  • approved by: Mikołaj Morawski


STAGE deployment details:

STAGE deployment date:Wed Feb 14 08:57:24 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Mikołaj Morawski
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/342/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/161/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/378/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/220/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/243/

SUCCESS 



PROD deployment report:

PROD deployment date:Thu Feb 29 09:29:58 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Filip Sądowicz
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/269/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/238/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/284/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/200/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/239/

SUCCESS




" }, { "title": "4.1.31", "pageID": "401024639", "pageLink": "/display/GMDM/4.1.31", "content": "

Release report:

Release:4.1.31Release date:Fri Mar 01 12:21:23 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Kacper UrbańskiPlanned GO-LIVE:Mon Mar 04
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/

SUCCESS 


CHANGELOG:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/artifact/CHANGELOG.md/*view*/



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/testReport/

SUCCESS

TODO

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: Kacper Urbański


STAGE deployment details:

STAGE deployment date:TODO Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: Kacper Urbański
Deployed by:TODO
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/344/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/163/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/385/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/222/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/245/

SUCCESS 



PROD deployment report:

PROD deployment date:TODO Wed Jan 17 09:35:31 UTC 2024

Deployment approved:
  • approved by: Kacper Urbański
Deployed by:TODO
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/275/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/239/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/288/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/202/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/241/

SUCCESS




" }, { "title": "4.1.29", "pageID": "401613066", "pageLink": "/display/GMDM/4.1.29", "content": "

Release report:

Release:4.1.29Release date:Wed Feb 28 10:32:26 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Kacper UrbańskiPlanned GO-LIVE:Thu Mar 07 (in 1 weeks)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/472/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/4c3f8a5fc460bb0cc20e55f736850f2416b6e9f3#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/472/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Wed Feb 28

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/407/testReport/

[84] SUCCESS

[0] FAILED

[1] REPEATED


\"\"

one failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.test

  • repeated from local PC one more time by Mikołaj Morawski
  • during run on Reltio there were multiple events and test got blocked
  • Test was repeated manually and passed with success
  • \"\"
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/350/testReport/

[66] SUCCESS

[18] FAILED

[3] REPEATED


\"\"

  • All [18] DCR tests failed due to RDM issue on Reltio side:
  • same set of tests is successful on EMEA and AMER so logic is working correctly
  • RCA:

\"\"

Repeated tests:

  • repeated from local PC one more time by Mikołaj Morawski
  • during run on Reltio there were multiple events and test got blocked
  • Test was repeated manually and passed with success
  • \"\"
  • \"\"
EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/501/testReport/

[84] SUCCESS

[0] FAILED

[3] REPEATED


\"\"

Repeated tests:

  • repeated from local PC one more time by Mikołaj Morawski
  • during run on Reltio there were multiple events and test got blocked
  • Test was repeated manually and passed with success
  • \"\"
  • \"\"
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/411/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/349/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
  • approved by: Mikołaj Morawski
Release ready and approved:
  • approved by: Mikołaj Morawski


STAGE deployment details:

STAGE deployment date:Wed Feb 28 11:17:34 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Kacper Urbański
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_amer-stage/343/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_amer-stage/343/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_emea_nprod_emea-stage/382/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_emea_nprod_gbl-stage/221/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_gblus-stage/244/

SUCCESS 



PROD deployment report:

PROD deployment date:TODO

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Rafał Kuć
ENV:LinkStatusDetails
AMERTODO

SUCCESS


APACTODO

SUCCESS


EMEA

TODO

SUCCESS


GBL(EX-US)

TODO

SUCCESS


GBLUS

TODO

SUCCESS




" }, { "title": "4.3.0", "pageID": "408556244", "pageLink": "/display/GMDM/4.3.0", "content": "

Release report:

Release:4.3.0Release date:Thu Mar 14 11:30:13 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Mikołaj MorawskiPlanned GO-LIVE:Tue Mar 21 (in 1 weeks)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/477/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/7d6036dfb79366537f79272b026ab24ec1ea1b62#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d30b468528cb98adc181b4e5d192c776328d70e8#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/73bdcaaa0997b156ce79728af6c90dfd0f3cfa1b#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/477/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Thu Mar 14

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/419/testReport/

[81] SUCCESS

[0] FAILED

[3] REPEATED


\"\"

  • DCR tests failed due to RDM issue on Reltio side:
  • same set of tests is successful on EMEA and AMER so logic is working correctly
  • RCA: 
    expected:<A[UTO_REJECTED]> but was:<A[uto Rejected]>

    Repeated tests:

    • repeated from local PC one more time by Mikołaj Morawski
    • Test was repeated manually and passed with success

\"\"

\"\"

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/359/testReport/

[89] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/511/testReport/

[89] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/420/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/358/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
  • approved by: Mikołaj Morawski
Release ready and approved:
  • approved by: Mikołaj Morawski

STAGE deployment details:

STAGE deployment date:Thu Mar 14 14:48:33 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Mikołaj Morawski
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/351/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/182/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/392/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/224/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/247/

SUCCESS 


PROD deployment report:

PROD deployment date:Thu Mar 21 11:00:42 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Filip Sądowicz
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/282/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/246/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/302/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/207/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/246/

SUCCESS




" }, { "title": "4.6.0", "pageID": "410815299", "pageLink": "/display/GMDM/4.6.0", "content": "

Release report:

Release:4.6.0Release date:Thu Mar 21 14:01:19 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Mikołaj MorawskiPlanned GO-LIVE:Tue Mar 28 (in 1 weeks)
StageLinkStatusComments (images 600px)
Build:

https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/484/

++ https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/485/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/9a3b6fe4bdf5573691cb37d5f994fe0f93b661fa#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/c9c3d307b27704264bf4d0b5fefc51bc02b78e79#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/99cadba8373475c979f12b0c2ae815908b72b582#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/484/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Thu Mar 21

Executed by: Mikołaj Morawski

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/422/testReport/

[83] SUCCESS

[1] FAILED

[0] REPEATED


\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/365/testReport/

[87] SUCCESS

[2] FAILED

[0] REPEATED


\"\"

  • DCR tests failed due to RDM issue on Reltio side:
  • same set of tests is successful  AMER so logic is working correctly
  • RCA:
  • org.junit.ComparisonFailure: expected:<A[uto Rejected]> but was:<A[UTO_REJECTED]>
  • Ignoring and approved by Mikołaj Morawski because we are still waiting for RDM configuration on DEV
EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/517/testReport/

[87] SUCCESS

[2] FAILED

[0] REPEATED


\"\"

  • DCR tests failed due to RDM issue on Reltio side:
  • same set of tests is successful  AMER so logic is working correctly
  • RCA:
  • org.junit.ComparisonFailure: expected:<A[uto Rejected]> but was:<A[UTO_REJECTED]>
  • Ignoring and approved by Mikołaj Morawski because we are still waiting for RDM configuration on DEV
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/426/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/363/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
  • approved by: Mikołaj Morawski
Release ready and approved:
  • approved by: Mikołaj Morawski

STAGE deployment details:

STAGE deployment date:Thu Mar 26 08:01:19 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Mikołaj Morawski
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_amer-stage/355/

SUCCESS


APACN/A (blocked due to VOD project)

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_emea_nprod_emea-stage/398/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_gblus-stage/252/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_emea_nprod_gbl-stage/228/

SUCCESS 


PROD deployment report:

PROD deployment date:Thu Mar 28 09:23:52 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Filip Sądowicz
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/290/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/251/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/307/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/210/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/251/

SUCCESS




" }, { "title": "4.9.0", "pageID": "415995497", "pageLink": "/display/GMDM/4.9.0", "content": "

Release report:

Release:4.9.0Release date:Thu Apr 10 10:01:19 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Rafał KućPlanned GO-LIVE:Tue Apr 11 (in 1 day)
StageLinkStatusComments (images 600px)
Build:

https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/491/

FAILED

The code has been released but job failed because of issue related to docker cleanup
CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/491/testReport/

SUCCESS


Integration tests:

Execution date: Thu Apr 10

Executed by: Rafał Kuć

AMER

[0] SUCCESS

[0] FAILED

[0] REPEATED



APACSkipped due to development of IoD project

[0] SUCCESS

[0] FAILED

[0] REPEATED



EMEA

[0] SUCCESS

[0] FAILED

[0] REPEATED



GBL(EX-US)

[0] SUCCESS

[0] FAILED

[0] REPEATED



GBLUS

[0] SUCCESS

[0] FAILED

[0] REPEATED



Tests ready and approved:
  • approved by: Rafał Kuć
Release ready and approved:
  • approved by: Rafał Kuć

STAGE deployment details:

STAGE deployment date:Thu Apr 10 11:01:19 UTC 2024

Deployment approved:
  • approved by: Rafał Kuć
Deployed by:Rafał Kuć
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/363/

SUCCESS


APACN/A (blocked due to VOD project)

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/408/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/

SUCCESS 


PROD deployment report:

PROD deployment date:Thu Apr 11 09:23:52 UTC 2024

Deployment approved:
  • approved by: Rafał Kuć
Deployed by:Rafał Kuć
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS




" }, { "title": "4.10.0", "pageID": "415212536", "pageLink": "/display/GMDM/4.10.0", "content": "

Release report:

Release:4.10.0Release date:Thu Apr 18 19:03:35 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Wed 24 (in 1 weeks)
StageLinkStatusComments (images 600px)
Build:

https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/492/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/2939c70fcc57caa8040a895889c88af99a396665#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d110ea29c10875123e738d32eb166875db7a6948#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/492/testReport/

SUCCESS


Integration tests:

Execution date: Thu Apr 18

Executed by: Krzysztof Prawdzik

AMER

[85] SUCCESS

[0] FAILED

[0] REPEATED



APAC

[89] SUCCESS

[0] FAILED

[0] REPEATED




EMEA

[89] SUCCESS

[0] FAILED

[0] REPEATED



GBL(EX-US)

[72] SUCCESS

[0] FAILED

[0] REPEATED



GBLUS

[74] SUCCESS

[0] FAILED

[0] REPEATED



Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Mikołaj Morawski

STAGE deployment details:

STAGE deployment date:Thu Apr 18 19:57:21 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/369/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/202/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/413/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/236/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/261/

SUCCESS 


PROD deployment report:

PROD deployment date:Thu Apr 25 ??:??:?? UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS




" }, { "title": "4.11.0", "pageID": "416001899", "pageLink": "/display/GMDM/4.11.0", "content": "

Release report:

Release:4.11.0Release date:Tue Apr 23 10:41:13 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Mon Apr 29 (in 1 week)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/20128ed85fda3830ebbb2874f7cd9cecd3031e18#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/2939c70fcc57caa8040a895889c88af99a396665#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d110ea29c10875123e738d32eb166875db7a6948#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Apr 23

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/447/testReport/

[84] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/382/testReport/

[93] SUCCESS

[0] FAILED

[8] REPEATED


\"\"

  • part of CHina tests failed due to some timeout:
  • RCA: 
    Action timeout after 360000 milliseconds.
    Failed to receive message on endpoint: 'apac-dev-out-full-mde-cn'

  • Repeated tests:

    • repeated from local PC one more time by Krzysztof Prawdzik
    • Test was repeated manually and passed with success

\"\"

\"\"

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/539/testReport/

[89] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/445/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/386/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Mikołaj Morawski

STAGE deployment details:

STAGE deployment date:Tue Apr 23 11:26:52 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/371/Tue Apr 23 

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/204/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/418/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/237/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/262/

SUCCESS 


PROD deployment report:

PROD deployment date:Mon Apr 29 08:37:50 UTC 2024

Deployment approved:
  • approved by: Mikołaj Morawski
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/304/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/256/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/323/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/215/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/258/

SUCCESS




" }, { "title": "4.11.1", "pageID": "415221783", "pageLink": "/display/GMDM/4.11.1", "content": "

Release report:

Release:4.11.1Release date:Wed May 08 08:16:41 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Wed May 08 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/101/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/dbe984a2a9bb73ba141aad9386d741fd3fc8334d#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: 

STAGE deployment details:

STAGE deployment date:Wed May 08 08:54:16 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/374/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/209/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/420/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/239/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/272/

SUCCESS 


PROD deployment report:

PROD deployment date:Wed May 08 10:07:44 UTC 2024

Deployment approved:
  • approved by: 
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/307/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/261/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/332/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/218/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/263/

SUCCESS




" }, { "title": "4.12.0", "pageID": "425492972", "pageLink": "/display/GMDM/4.12.0", "content": "

Release report:

Release:4.12.0Release date:Mon May 13 12:03:50 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu May 16
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/2/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/dc117aa31a81375f4572ca68a22491d02094e91e#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/2/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Mon May 13

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/463/testReport/

[81] SUCCESS

[3] FAILED

[0] REPEATED


\"\"

  • RCA: 
    Tenant [wn60kG248ziQSMW] is not registered.
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/398/testReport/

[99] SUCCESS

[0] FAILED

[2] REPEATED


\"\"

  • one of China tests failed due to timeout:
  • RCA: 
    Action timeout after 360000 milliseconds.
    Failed to receive message on endpoint: 'apac-dev-out-full-mde-cn'

  • Repeated tests:

    • repeated from local PC one more time by Krzysztof Prawdzik
    • Test was repeated manually and passed with success

\"\"

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/554/testReport/

[88] SUCCESS

[1] FAILED

[0] REPEATED


\"\"

  • RCA: 
    Tenant [wn60kG248ziQSMW] is not registered.
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/459/testReport/

[72] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/401/testReport/

[73] SUCCESS

[0] FAILED

[1] REPEATED


\"\"

  • one of the tests failed due to unsufficient time to get proper eventType:
  • RCA: 
    Validation failed: Values not equal for element '$.eventType', expected 'HCP_MERGED' but was 'ENTITY_POTENTIAL_LINK_FOUND'
  • Repeated test:

    • repeated from local PC one more time by Krzysztof Prawdzik
    • Test was repeated manually with increased number of retries and passed with success

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Mon May 13 12:52:59 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/376/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/211/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/422/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/241/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/275/

SUCCESS 


PROD deployment report:

PROD deployment date:Thu May 16 09:35:26 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/309/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/263/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/336/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/220/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/266/

SUCCESS




" }, { "title": "4.12.1", "pageID": "425136247", "pageLink": "/display/GMDM/4.12.1", "content": "

Release report:

Release:4.12.1Release date:Tue May 21 08:44:41 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue May 21 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/102/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0849434b3c67a63f36b13211cb19c23e4c77b25e#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/102/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue May 21 09:26:46 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/377/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/212/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/423/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/242/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/279/

SUCCESS 


PROD deployment report:

PROD deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/314/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/265/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/340/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/221/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/270/

SUCCESS




" }, { "title": "4.14.0", "pageID": "430082856", "pageLink": "/display/GMDM/4.14.0", "content": "

Release report:

Release:4.14.0Release date:Wed May 29 15:14:52 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 6
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0d962b08c9a6caa4520868f8c33a577c85356a8f#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Wed May 29

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/473/

[83] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

Recent changes in com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldInactivateHCP test has caused its instability.

  • repeated from local PC one more time by Krzysztof Prawdziik
  • Test was repeated manually and passed with success
  • fix for this test is being prapered
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/413/

[99] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/565/

[88] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

Recent changes in com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldInactivateHCP test has caused its instability.

  • repeated from local PC one more time by Krzysztof Prawdziik
  • Test was repeated manually and passed with success
  • fix for this test is being prapered
GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/469/

[72] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/411/testReport/

[74] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Wed May 29 16:36:37 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/379/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/214/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/426/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/244/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/287/

SUCCESS 


STAGE test phase details:

Verification date

17:05 - 18:00 + 12:15



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong

SUCCESS

APAC NPROD

\"\"

EMEA NPROD

\"\"

\"\"

MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue

SUCCESS

\"\"

MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)

SUCCESS

Batch service

\"\"

Entity enricher

\"\"

Map channel

\"\"

MDM Auth

\"\"

MDM Reconciliation

\"\"

Raw data

\"\"

General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue

SUCCESS


Kubernetes / Vertical Pod Autoscaler (VPA)

Change in memory requirement before and after deployment → potential issue 

not verified


Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is to prone to project specific loads

SUCCESS


Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)

\"(question)\"

APAC DEV

\"\"

General / kubernetes-persistent-volumes 

Storage trend over time 

SUCCESS


General / Alerts Statistics 

Increase after release → potential issue 

SUCCESS

APAC NPROD

\"\"

GBLUS NPROD

\"\"

GBL

\"\"

General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period

SUCCESS


PROD deployment report:

PROD deployment date:Thu Jun 06 11:37:04 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/322/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/268/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/349/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/224/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/273/

SUCCESS


PROD deploy hypercare details:

Verification date

12:37



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong

SUCCESS


MDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issue

SUCCESS

\"\"

\"\"

MDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)



General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue

SUCCESS


Kubernetes / Vertical Pod Autoscaler (VPA)

Change in memory requirement before and after deployment → potential issue 



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is to prone to project specific loads

SUCCESS


Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)

SUCCESS

\"\"

\"\"


\"\"

General / kubernetes-persistent-volumes 

Storage trend over time 

SUCCESS

\"\"

General / Alerts Statistics 

Increase after release → potential issue 

SUCCESS

\"\"

General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period

SUCCESS


" }, { "title": "4.12.2", "pageID": "430083918", "pageLink": "/display/GMDM/4.12.2", "content": "

Release report:

Release:4.12.2Release date:Tue Jun 04 12:19:52 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jun 4 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/103/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0abf8b37a2ac6b27c093cba3f3288ebd2c9ebfc4#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/103/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue Jun 04 13:27:51 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/288/

SUCCESS 


PROD deployment report:

PROD deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/320/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/267/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/347/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/223/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/272/

SUCCESS




" }, { "title": "4.14.1", "pageID": "430087408", "pageLink": "/display/GMDM/4.14.1", "content": "

Release report:

Release:4.14.1Release date:Tue Jun 11 10:27:15 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jun 11 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/105/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/69c634998c0b05dd2ed74677bcb638c55213b940#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/105/testReport/

SUCCESS


Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A

N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED


N/A
Tests ready and approved:
  • approved by: N/A
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue Jun 11 11:27:31 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/383/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/218/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/429/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/246/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/290/

SUCCESS 


STAGE test phase details:

Verification date

 17:05 - 18:00 +  12:15



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong

e.g. SUCCESS


MDMHUB / MDMHUB KPIs

Spikes, especially wide ones, suggest potential issue



MDMHUB / MDMHUB Components resource

Component specific analysis, especially good for the ones updated within latest release (check news fragments)


General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue



Kubernetes / Vertical Pod Autoscaler (VPA)

Change in memory requirement before and after deployment → potential issue 



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is too prone to project-specific loads



Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / kubernetes-persistent-volumes 

Storage trend over time 



General / Alerts Statistics 

Increase after release → potential issue 



General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period



PROD deployment report:

PROD deployment date:Tue Jun 11 12:40:35 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/326/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/270/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/354/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/225/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/277/

SUCCESS


PROD deploy hypercare details:

Verification date

usually Deployment_date + 24-48h



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors

Increased number of alerts → there's certainly something wrong

e.g. SUCCESS


MDMHUB / MDMHUB KPIs

Spikes, especially wide ones, suggest potential issue



MDMHUB / MDMHUB Components resource

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / Snowflake QC Trends

Quick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issue



Kubernetes / Vertical Pod Autoscaler (VPA)

Change in memory requirement before and after deployment → potential issue 



Kubernetes / K8s Cluster Usage Statistics

Good for PROD environments since NPROD is too prone to project-specific loads



Kubernetes / Pod Monitoring

Component specific analysis, especially good for the ones updated within latest release (check news fragments)



General / kubernetes-persistent-volumes 

Storage trend over time 



General / Alerts Statistics 

Increase after release → potential issue 



General / SSL Certificates and Endpoint Availability

Lower widget, multiple stacked endpoints at the same time for a long period



" }, { "title": "4.15.0", "pageID": "430350581", "pageLink": "/display/GMDM/4.15.0", "content": "

Release report:

Release:4.15.0Release date:Thu Jun 13 15:45:35 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 20 (in 1 week)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/8/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/6aab2f8a14ba7406e1e2de60a81a4af2d34d6094#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: 

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/485/

[84] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAC

[99] SUCCESS

[0] FAILED

[1] REPEATED


EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/575/

[89] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/482/

[72] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/422/

[74] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Thu Jun 13 17:46:23 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/385/

SUCCESS

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/220/

SUCCESS

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/431/

SUCCESS

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/248/

SUCCESS

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/292/

SUCCESS 

STAGE test phase details:

Verification date

15:30 - 16:20



Verification by



Dashboard

Hints

Status

Details

MDMHUB / MDMHUB Component errors


SUCCESS


MDMHUB / MDMHUB KPIs

SUCCESS


MDMHUB / MDMHUB Components resource

SUCCESS

AMER-STAGE - HTTP 401 - known issue with authorization to OneKey (IB)

\"\"

General / Snowflake QC Trends


SUCCESS


Kubernetes / K8s Cluster Usage Statistics


SUCCESS


Kubernetes / Pod Monitoring


SUCCESS

APAC DEV - Damian's tests + Krzysztof published an old version for a moment, which behaved strangely on APAC DEV only (selective router)

\"\"

General / kubernetes-persistent-volumes 


SUCCESS

General / Alerts Statistics 

Why are there duplicates with _ and -?

\"\"

\"(question)\"

EMEA-NPROD - does Marek know about this?

\"\"

APAC-STAGE - something wrong with monitoring? constant "1" independent of the timeframe?

\"\"

GBLUS-STAGE - Greg is working on it - note from karma

\"\"

General / SSL Certificates and Endpoint Availability


\"(question)\"

APAC-NPROD - real issue or monitoring false positives? 

\"\"

EMEA-NPROD

\"\"


PROD deployment report:

PROD deployment date:Thu Jun 20 11:52:28 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/329/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/272/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/363/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/227/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/279/

SUCCESS


PROD deploy hypercare details:

Verification date

15:45 + review 11:00 (Bachanowicz, Mieczysław (Irek))

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(warning)\"

DCR OneKey change was deployed without extensive testing on NPROD. Verified with Paweł - no major risk in leaving it unattended for the weekend.

GBLUS-PROD - mdm-manager, peak processing

\"\"

APAC-PROD - OneKey: has not happened since then.

\"\"

APAC-PROD mdm-manager

\"\"

APAC-PROD DCR2 Service

\"\"

EMEA-PROD map-channel, strange errors

\"\"

\"(warning)\" GBL-PROD pforcerx channel - \n MR-9012\n -\n Getting issue details...\n STATUS\n

\"(tick)\"  Did not happen since then. 

\"\"

\"(warning)\" GBL-PROD - Created \n MR-9011\n -\n Getting issue details...\n STATUS\n

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\" 

\"(tick)\" APAC-PROD to Greg → IB: This is a recurring thing. Happens every week. 

\"\"

\"(tick)\"  EMEA-PROD → IB: This is a recurring thing. Happens every week. 

\"\"

\"(tick)\"  GBL-PROD → IB: This is a recurring thing. Happens every week. 

\"\"

MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"

EMEA-PROD - known issue during deployment

\"\"

General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(tick)\"

AMER-PROD zookeeper reelection

\"\"

GBLUS-PROD high processing, corresponds with manager issue

\"\"

EMEA-PROD deployment issue

\"\"

General / SSL Certificates and Endpoint Availability\"(tick)\"
" }, { "title": "4.16.0", "pageID": "438895667", "pageLink": "/display/GMDM/4.16.0", "content": "

Release report:

Release:4.16.0Release date:Mon Jun 24 15:13:56 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 27 (in 3 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/9/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0789f75320df48915b3eaa82d1669bfe2fdc0668#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/9/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Jun 25 17:00:03 UTC 2024

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/493/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/429/

[102] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/585/

[89] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/489/

[73] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/429/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Mon Jun 24 21:05:13 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/386/

SUCCESS

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/222/

SUCCESS

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/434/

SUCCESS

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/249/

SUCCESS

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/293/

SUCCESS 

STAGE test phase details:

Verification date

  10:45 - 11:45

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\"

\"(tick)\"  AMER-STAGE - small issues with COMPANYGlobalCustomerID (COMPANY Customer Id: 02-100373164 does not exist in Reltio or is deactivated)

\"\"

\"(tick)\"  APAC-STAGE - AWS issue

\"\"

AWS does not show any problems with their S3 services 

\"\"

\"(tick)\"  EMEA-STAGE, manager

\"(tick)\"  GLB-STAGE, manager 

MDMHUB / MDMHUB KPIs

\"(tick)\"

MDMHUB / MDMHUB Components resource

\"(tick)\"


General / Snowflake QC Trends

\"(tick)\"


Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"


Kubernetes / Pod Monitoring

\"(tick)\"

\"(tick)\"  APAC-STAGE - Mon morning - HCONames memory reload, config update by Karol

\"\"

General / kubernetes-persistent-volumes 

\"(tick)\"


General / Alerts Statistics 

\"(warning)\"

\"(warning)\"   APAC-STAGE - Friday, 17:00, a lot of strange errors, corelates with AWS issue 

\"\"

\"(question)\"  AMER-STAGE + APAC-STAGE + GBLUS - stage- Grzesiek - wt/środa - Snowflake na Stageach?4


General / SSL Certificates and Endpoint Availability

\"(tick)\"

PROD deployment report:

PROD deployment date:Thu Jun 27 09:43:12 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/331/

SUCCESS

Deployment log:
4.16.0-amer-prod-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/274/

SUCCESS

Deployment log:
4.16.0-apac-prod-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/365/

SUCCESS

Deployment log:
4.16.0-emea-prod-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/228/

SUCCESS

Deployment log:
4.16.0-gbl-prod-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/281/

SUCCESS

Deployment log:
4.16.0-gblus-prod-deploy.log

PROD deploy hypercare details:

Verification date

16-17:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(warning)\" 

\"(warning)\"  AMER-PROD - mdm service 2 + OneKey - 2 examples of failed lookup codes transformation 

  •  Issue found for two DCR requests. Failed to send req to OK:  IB> Paweł - create ticket
  • 3bd7e9217a004b37a2c0cbd7afabda1f
  • 4d9e09c06b89494c950a759889cf12d0
    • low priority issue - better lookup handling needed; this appears often on various environments (APAC-PROD): "Create dcr exception"
    • crash in the OneKey endpoint

log1.txt

\"\"

\"(tick)\" AMER-PROD - clean NPE → create ticket to clean up such "errors"

\"\"

\"(question)\"  GBLUS-PROD - single error, however huge

\"\"

\"(tick)\"   EMEA-PROD, map-channel, brak trace'a, kubernetes restarted component. 

\"\"

\"(tick)\"  EMEA-PROD, minor issue, for further investigation (Krzysiek) - low prio

\"\"

\"(warning)\"  \"(warning)\"  GBLUS-PROD, Know issue - \n MR-9011\n -\n Getting issue details...\n STATUS\n

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\" 
  • \"(warning)\"   Publishing latency ~1year -known issue, ticket to create (IB)

GBL-PROD

\"\"

MDMHUB / MDMHUB Components resource\"(tick)\" 
  • AMER-PROD, map channel, high CPU usage, to verify on Mondays

\"\"

General / Snowflake QC Trends

\"(tick)\" 

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\" 

Kubernetes / Pod Monitoring

\"(tick)\" 
General / kubernetes-persistent-volumes \"(tick)\" 
General / Alerts Statistics \"(tick)\" 

\"(tick)\" GBL-PROD - confirm with Damian that's not an issue

\"\"

General / SSL Certificates and Endpoint Availability\"(tick)\" 

\"(tick)\"  US-PROD 

  • IB > Ticket to create to check env selectors for us-prod

\"\"

" }, { "title": "4.17.0", "pageID": "438899752", "pageLink": "/display/GMDM/4.17.0", "content": "

Release report:

Release:4.17.0Release date:Fri Jun 28 15:13:34 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 4 (in 3 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/10/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/14f625d0b5d47629245ed7fd0d0112e7ad5675e8#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/10/testReport/

SUCCESS


Integration tests:

Execution date: 

Executed by: Krzysztof Prawdzik

AMER

[85] SUCCESS

[0] FAILED

[0] REPEATED


APAC

[102] SUCCESS

[0] FAILED

[0] REPEATED


EMEA

[89] SUCCESS

[0] FAILED

[1] REPEATED


GBL(EX-US)

[73] SUCCESS

[0] FAILED

[0] REPEATED


GBLUS

[75] SUCCESS

[0] FAILED

[0] REPEATED


Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS 


STAGE test phase details:

Verification date


Verification by


Dashboard

Status

Details

MDMHUB / MDMHUB Component errors



MDMHUB / MDMHUB KPIs

MDMHUB / MDMHUB Components resource

General / Snowflake QC Trends



Kubernetes / K8s Cluster Usage Statistics



Kubernetes / Pod Monitoring



General / kubernetes-persistent-volumes 

General / Alerts Statistics 

General / SSL Certificates and Endpoint Availability

PROD deployment report:

PROD deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS


PROD deploy hypercare details:

Verification date


Verification by


Dashboard

Status

Details

MDMHUB / MDMHUB Component errors



MDMHUB / MDMHUB KPIs

MDMHUB / MDMHUB Components resource

General / Snowflake QC Trends



Kubernetes / K8s Cluster Usage Statistics



Kubernetes / Pod Monitoring



General / kubernetes-persistent-volumes 

General / Alerts Statistics 

General / SSL Certificates and Endpoint Availability

" }, { "title": "4.16.1", "pageID": "438900696", "pageLink": "/display/GMDM/4.16.1", "content": "

Release report:

Release:4.16.1Release date:Tue Jul 02 10:02:19 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jul 02 (same day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/108/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/60a14c07d0421cb25ee9d1e29aa376705d20686d



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/108/testReport/

SUCCESS


Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMER

SUCCESS


APAC

SUCCESS


EMEA


SUCCESS


GBL(EX-US)


SUCCESS


GBLUS


SUCCESS 


STAGE test phase details:

Verification date

13.00 - 14.00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\" 


MDMHUB / MDMHUB KPIs\"(tick)\" 
MDMHUB / MDMHUB Components resource\"(tick)\" 

General / Snowflake QC Trends

\"(tick)\" 

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\" 

Kubernetes / Pod Monitoring

\"(tick)\" 
General / kubernetes-persistent-volumes \"(tick)\" 
General / Alerts Statistics \"(tick)\" 
General / SSL Certificates and Endpoint Availability\"(tick)\" 

PROD deployment report:

PROD deployment date:


Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/332/

SUCCESS


APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/275/

SUCCESS


EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/369/

SUCCESS


GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/230/

SUCCESS


GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/282/

SUCCESS


PROD deploy hypercare details:

Verification date


Verification by


Dashboard

Status

Details

MDMHUB / MDMHUB Component errors



MDMHUB / MDMHUB KPIs

MDMHUB / MDMHUB Components resource

General / Snowflake QC Trends



Kubernetes / K8s Cluster Usage Statistics



Kubernetes / Pod Monitoring



General / kubernetes-persistent-volumes 

General / Alerts Statistics 

General / SSL Certificates and Endpoint Availability

" }, { "title": "4.18.0", "pageID": "438900984", "pageLink": "/display/GMDM/4.18.0", "content": "

Release report:

Release:4.18.0Release date:Tue Jul 02 14:57:49 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 04 (in 2 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/11/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/14f625d0b5d47629245ed7fd0d0112e7ad5675e8#CHANGELOG.md

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/60a14c07d0421cb25ee9d1e29aa376705d20686d

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/f90e4505509822513ae8c27a48a776e3acd67c8e



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/11/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Jul 02 15:59:32 UTC 2024

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/499/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAC

[94] SUCCESS

[1] FAILED

[7] REPEATED

\"\"

  • one of the China tests failed due to a timeout (see the sketch after this entry):
  • RCA:
    Action timeout after 360000 milliseconds.
    Failed to receive message on endpoint: 'apac-dev-out-full-hcp-merge-cn'

  • Repeated tests:

    • several tests failed due to a recent change of DCR tracking statuses on APAC DEV on the Reltio side
    • repeated from a local PC (with updated values) one more time by Krzysztof Prawdzik
    • tests were repeated manually and passed successfully
    • a fix for these tests is being prepared

\"\"

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/591/

[89] SUCCESS

[0] FAILED

[1] REPEATED

\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/495/

[73] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/435/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue Jul 02 15:34:46 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/389/

SUCCESS

Deployment log:
4.18.0-amer-stage-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/224/

SUCCESS

Deployment log:
4.18.0-apac-stage-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/436/

SUCCESS

Deployment log:
4.18.0-emea-stage-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/252/

SUCCESS

Deployment log:
4.18.0-gbl-stage-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/295/

SUCCESS 

Deployment log:
4.18.0-gblus-stage-deploy.log

STAGE test phase details:

Verification date


Verification by


Dashboard

Status

Details

MDMHUB / MDMHUB Component errors



MDMHUB / MDMHUB KPIs

MDMHUB / MDMHUB Components resource

General / Snowflake QC Trends



Kubernetes / K8s Cluster Usage Statistics



Kubernetes / Pod Monitoring



General / kubernetes-persistent-volumes 

General / Alerts Statistics 

General / SSL Certificates and Endpoint Availability

PROD deployment report:

PROD deployment date:Thu Jul 04 08:28:26 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/333/

SUCCESS

Deployment log:
4.18.0-amer-prod-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/276/

SUCCESS

Deployment log:
4.18.0-apac-prod-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/371/

SUCCESS

Deployment log:
4.18.0-emea-prod-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/231/

SUCCESS

Deployment log:
4.18.0-gbl-prod-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/283/

SUCCESS

Deployment log:
4.18.0-gblus-prod-deploy.log

PROD deploy hypercare details:

Verification date

15:30 - 17:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\" 

\"(tick)\"  AMER-PROD - batch-service: data input issue, OneMed job - incorrect data ← Piotr

\"\"

\"(tick)\"   AMER-PROD, mdm-dcr2-service: know issue: "Can't convert data to Json string"

AMER-PROD, manager: Error processing request

\"\"

\"(tick)\"  AMER-PROD, onekey-dcr: know-issue

\"\"

\"(tick)\"  APAC-PROD, mdm-manager

\"\"

\"(question)\"  EMEA-PROD, MAPP channel

non-critical - needs to be verified "later"

\"\"

\"(question)\"  EMEA-PROD, manager,

minor, to verify cause: "javax.ws.rs.ClientErrorException: HTTP 429 Too Many Requests"

\"\"

\"(tick)\"  GBL-PROD, manager - known issue

\"\"


MDMHUB / MDMHUB KPIs\"(tick)\" 

\"(question)\"  GBLUS-PROD - why it wasn't smoothly processed? 

\"\"

GBL-PROD

\"\"

MDMHUB / MDMHUB Components resource\"(tick)\" 

General / Snowflake QC Trends

\"(tick)\" 


Kubernetes / K8s Cluster Usage Statistics

\"(tick)\" 

Kubernetes / Pod Monitoring

\"(tick)\" 

\"(tick)\"  GBLUS-PROD 

\"\"

GBL-PROD, publisher, manager high usage

\"\"

\"(warning)\" \"(question)\"  EMEA-PROD, 7d

\"\"

\"(warning)\" \"(question)\"  EMEA-PROD

\"\"


General / kubernetes-persistent-volumes 

\"(tick)\" 


General / Alerts Statistics \"(tick)\" 

\"(warning)\"  AMER-PROD, empty COMPANYGlobalCustomerId

Ticket raised by COMPANY to the Reltio team - HSM-708 + support.reltio.com/hc/requests/105633

\"\"

GBL-PROD, not an issue

\"\"

GBLUS-PROD, probably COMPANY manual merge/unmerge

\"\"

General / SSL Certificates and Endpoint Availability


\"(tick)\"  Schedule meeting with Marek how to deep dive to diagnose 

MR-9088

MR-9089

Kibana "Kube-events" indice contains logs from kubernets



\"(warning)\"  \"(question)\" EMEA-PROD - DCR, required further verification with Marek/Damian. 

\"\"

" }, { "title": "4.18.1", "pageID": "438317171", "pageLink": "/display/GMDM/4.18.1", "content": "

Release report:

Release:4.18.1Release date:Mon Jul 08 15:01:32 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jul 09 (in 1 day)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/109/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/446610ec20f2837570cb75c518ff0dc03bd7528f#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/109/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: N/A

Executed by: N/A

AMERN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

APACN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A
EMEAN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBL(EX-US)N/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

GBLUSN/A

[0] SUCCESS

[0] FAILED

[0] REPEATED

N/A

Tests ready and approved:
  • approved by: 
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue Jul 09 07:07:46 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/390/

SUCCESS

Deployment log:
4.18.1-amer-stage-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/225/

SUCCESS

Deployment log:
4.18.1-apac-stage-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/437/

SUCCESS

Deployment log:
4.18.1-emea-stage-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/254/

SUCCESS

Deployment log:
4.18.1-gbl-stage-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/296/

SUCCESS 

Deployment log:
4.18.1-gblus-stage-deploy.log

STAGE test phase details:

Verification date

12:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\" 


MDMHUB / MDMHUB KPIs\"(tick)\" 
MDMHUB / MDMHUB Components resource\"(tick)\" 

General / Snowflake QC Trends

\"(tick)\" 

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\" 

Kubernetes / Pod Monitoring

\"(tick)\" 
General / kubernetes-persistent-volumes \"(tick)\" 
General / Alerts Statistics \"(tick)\" 
General / SSL Certificates and Endpoint Availability\"(tick)\" 

PROD deployment report:

PROD deployment date:Thu Jul 04 08:28:26 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/335/

SUCCESS

Deployment log:
4.18.1-amer-prod-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/278/

SUCCESS

Deployment log:
4.18.1-apac-prod-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/374/

SUCCESS

Deployment log:
4.18.1-emea-prod-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/235/

SUCCESS

Deployment log:
4.18.1-gbl-prod-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/285/

SUCCESS

Deployment log:
4.18.1-gblus-prod-deploy.log

PROD deploy hypercare details:

Verification date


Verification by


Dashboard

Status

Details

MDMHUB / MDMHUB Component errors



MDMHUB / MDMHUB KPIs

MDMHUB / MDMHUB Components resource

General / Snowflake QC Trends



Kubernetes / K8s Cluster Usage Statistics



Kubernetes / Pod Monitoring



General / kubernetes-persistent-volumes 

General / Alerts Statistics 

General / SSL Certificates and Endpoint Availability

" }, { "title": "4.19.0", "pageID": "438317571", "pageLink": "/display/GMDM/4.19.0", "content": "

Release report:

Release:4.19.0Release date:Tue Jul 09 14:29:10 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 11 (in 2 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/12/

SUCCESS 


CHANGELOG:

http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/106376c5e3a96725ae10c4eff57dc19157549d1c#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/12/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Jul 09 17:00:03 UTC 2024

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/504/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/444/

[98] SUCCESS

[0] FAILED

[4] REPEATED

\"\"

\"\"

\"\"

\"\"

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/597/

[90] SUCCESS

[0] FAILED

[0] REPEATED


\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/500/

[72] SUCCESS

[1] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/440/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue Jul 09 15:15:26 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/391/

SUCCESS

Deployment log:

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/226/

SUCCESS

Deployment log:

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/438/

SUCCESS

Deployment log:

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/255/

SUCCESS

Deployment log:

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/297/

SUCCESS 

Deployment log:

STAGE test phase details:

Verification date

11:00 - 12:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\" 


MDMHUB / MDMHUB KPIs\"(tick)\" 
MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"
General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(question)\" 

\"(question)\" APAC-STAGE - known issue?

\"\"

\"(question)\"  APAC-STAGE, kong 503, kube job completion? pod crash looping pdk?

\"\"

General / SSL Certificates and Endpoint Availability

\"(tick)\" \"(question)\" 

Need to monitor the production deployment for these irregularities

AMER-NPROD

\"\"

\"\"

\"(tick)\"  \"(warning)\"  APAC-DEV, dcr, Klaudia: bean issue, strange, nothing corelated to recent changes in code. Error: "requestScopedExchange"

\"\"

\"(tick)\"  \"(question)\" EMEA-QA,  dcr, Klaudia checked logs, nothing unusual. Need to increase logs in blackbox exporter

\"\"

PROD deployment report:

PROD deployment date:Thu Jul 11 10:17:20 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/338/

SUCCESS

Deployment log:

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/279/

SUCCESS

Deployment log:

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/376/

SUCCESS

Deployment log:

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/236/

SUCCESS

Deployment log:

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/289/

SUCCESS

Deployment log:

PROD deploy hypercare details:

Verification date

13:30 - 14:30 + warning revalidation on 10:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\" 

\"(tick)\"  APAC-PROD, manager

MR-9097

MR-9098

\"\"

\"(tick)\"  GBL-PROD

  • We need to meet with Grzesiek and verify these issues

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\" 
MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"

\"(warning)\" GBL-PROD

\"\"

Verification on Monday - high memory usage

\"\"

General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(tick)\"

\"(question)\"  AMER-PROD 

  • disk space

\"\"


\"(tick)\"\"(question)\" AMER-PROD

  • Publisher broken events
  • Zookeeper - info from Marek in Karma that it's nothing to be afraid of
  • Quality gateway - confirmed with Piotr

\"\"


\"(tick)\" GBLUS-PROD

  • Publisher broken events
  • Snowflake

\"\"


\"(tick)\" \"(question)\" EMEA-PROD

  • High load - confirmed with Marek and Piotr
\"\"


\"(tick)\" GBL-PROD

  • High ETA - China reload (info in Karma)
\"\"


\"(tick)\" GBLUS-PROD

  • Quality gateway - Dominiq addressed it to Deloitte (info from Piotr)
  • Confirmed with Piotr
\"\"
General / SSL Certificates and Endpoint Availability\"(tick)\"
" }, { "title": "4.21.0", "pageID": "438910809", "pageLink": "/display/GMDM/4.21.0", "content": "

Release report:

Release:4.21.0Release date:Tue Jul 09 14:29:10 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 18 (in 2 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/18/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/ef6b59b63a3800a08e98c2e36e2853d45ed97395#CHANGELOG.md



Unit tests:

SUCCESS

\"\"

Integration tests:

Execution date: Sun Jul 14 17:00:05 UTC 2024

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/510/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/450/

[102] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/600/

[90] SUCCESS

[0] FAILED

[0] REPEATED

\"\"


GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/505/

[72] SUCCESS

[1] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/443/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue Jul 16 22:15:07 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/400/

SUCCESS

Deployment log:
4.21.0-amer-stage-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/238/

SUCCESS

Deployment log:
4.21.0-apac-stage-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/446/

SUCCESS

Deployment log:
4.21.0-emea-stage-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/260/

SUCCESS

Deployment log:
4.21.0-gbl-stage-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/305/

SUCCESS 

Deployment log:
4.21.0-gblus-stage-deploy.log

STAGE test phase details:

Verification date

13:00

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\"

\"(tick)\" AMER-NPROD - know issue during deployment

\"\"

\"(tick)\" APAC-STAGE - dcr servce 2 

  • create ticket to change error 400 to warning

\"\"

\"(tick)\"

  • to verify if these publishing errors may cause some synchronization issues in SF

\"\"

\"(tick)\"

  • Callback - Java Heap Space? Memory issue. Caused by APAC-PROD to APAC-STAGE cloning

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\"

\"(tick)\"  APAC-STAGE - env cloning

\"\"

\"(question)\"  EMEA-STAGE, 1h+ long publishing times

\"\"

MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"
General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(tick)\"

\"(question)\"  EMEA-STAGE - high ETA

\"\"

this graph does not reflect this

\"\"

General / SSL Certificates and Endpoint Availability\"(tick)\"

\"(tick)\"  APAC-STAGE, cloning related

\"\"

\"(tick)\" EMEA/GBL - a lot of strange endpoint failuers

  • Marek/Damian - to verify

\"\"


PROD deployment report:

PROD deployment date:Thu Jul 18 12:57:55 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_backend_amer_prod/226/

SUCCESS

Deployment log:
4.21.0-amer-prod-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/282/

SUCCESS

Deployment log:
4.21.0-apac-prod-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/380/

SUCCESS

Deployment log:
4.21.0-emea-prod-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/239/

SUCCESS

Deployment log:
4.21.0-gbl-prod-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/292/

SUCCESS

Deployment log:
4.21.0-gblus-prod-deploy.log

PROD deploy hypercare details:

Verification date


Verification by

Release on PROD wasn't verified due to the CrowdStrike outage.

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors



MDMHUB / MDMHUB KPIs

MDMHUB / MDMHUB Components resource

General / Snowflake QC Trends



Kubernetes / K8s Cluster Usage Statistics



Kubernetes / Pod Monitoring



General / kubernetes-persistent-volumes 

General / Alerts Statistics 

General / SSL Certificates and Endpoint Availability

" }, { "title": "4.22.0", "pageID": "438327818", "pageLink": "/display/GMDM/4.22.0", "content": "

Release report:

Release:4.22.0Release date:Tue Jul 23 16:32:08 UTC 2024

STATUSES: SUCCESS / FAILED / REPEATED

Released by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 25 (in 2 days)
StageLinkStatusComments (images 600px)
Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/19/

SUCCESS 


CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/e366164c1adff5b1ccfd79dea28f068bc34a0ee2#CHANGELOG.md



Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/19/testReport/

SUCCESS

\"\"

Integration tests:

Execution date: Tue Jul 23 17:24:15 UTC 2024

Executed by: Krzysztof Prawdzik

AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/517/

[85] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/457/

[94] SUCCESS

[8] FAILED

[0] REPEATED

\"\"

EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/608/

[90] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/510/

[72] SUCCESS

[1] FAILED

[0] REPEATED

\"\"

GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/449/

[75] SUCCESS

[0] FAILED

[0] REPEATED

\"\"

Tests ready and approved:
  • approved by: Krzysztof Prawdzik
Release ready and approved:
  • approved by: Krzysztof Prawdzik

STAGE deployment details:

STAGE deployment date:Tue Jul 23 17:23:40 UTC 2024

Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/404/

SUCCESS

Deployment log:
4.22.0-amer-stage-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/243/

SUCCESS

Deployment log:
4.22.0-apac-stage-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/450/

SUCCESS

Deployment log:
4.22.0-emea-stage-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/264/

SUCCESS

Deployment log:
4.22.0-gbl-prod-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/309/

SUCCESS 

Deployment log:
4.22.0-gblus-prod-deploy.log

STAGE test phase details:

Verification date

11:15 - 12:30

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\"

\"(tick)\" AMER-STAGE, EMEA-STAGE, known errors for OneKey DCR

\"(tick)\" EMEA-STAGE, mdmhub-mdm-manager, issues already reported earlier

\"(tick)\"  GBL-STAGE, something with batches (UpdateHCPBatchRestRoute) - probably wrong JSON - ticket to make it more pleasant 

\"\"

MDMHUB / MDMHUB KPIs\"(tick)\"
  • Irek>ask Rafał - what does "Publishing latency" mean - total delay of our processing stack?
MDMHUB / MDMHUB Components resource\"(tick)\"

\"(tick)\" EMEA-STAGE, Batch service, more memory usage? → nothing to worry about

\"\"

\"(tick)\" GBLUS, api-router, more memory? → nothing to worry about

General / Snowflake QC Trends

\"(tick)\" 

Kubernetes / K8s Cluster Usage Statistics


EMEA-NPROD, higher CPU usage, storage usage increase

\"\"

Kubernetes / Pod Monitoring

\"(tick)\"

AMER-NPROD, something is happening → batch processing, Reltio caps events to be processed which we compl

\"\"

General / kubernetes-persistent-volumes \"(tick)\"

EMEA-NPROD, increasing storage usage → entity enricher working (15M events being processed)

  • need to be verified with Marek

\"\"

General / Alerts Statistics \"(tick)\"

\"(tick)\" APAC-NPROD,

  • \"(tick)\"  Target down, what does it mean? We don't have such alerts → glitch in the matrix
  • Publisher broken events - addressed in Karma by Will
  • \"(tick)\"  reconciliation_events_threshold_exceeded?
  • \"(tick)\"  customresource_status_condition → Related to Kafka migration
  • KubeJobFailed
  • pod_crashlooping_pdks - more than usual
  • zookeeper_fsync_time_too_long - waiting for more data

AMER-NPROD

  • dag_failed_nprod
  • pod_crashlooping_hub_nprod
  • pod_crashlooping_pdks

\"\"

EMEA-NPROD

  • dag_failed_nprod
  • \"(tick)\" customresource_status_condition - 
  • \"(tick)\" Piotr DCR testing API - kong3_http_503_status_nprod

\"\"

General / SSL Certificates and Endpoint Availability\"(tick)\"EMEA-DEV, dcr - Piotr testing

PROD deployment report:

PROD deployment date:

Thu Jul 25 11:07:26 UTC 2024

 
Deployment approved:
  • approved by: Krzysztof Prawdzik
Deployed by:Krzysztof Prawdzik
ENV:LinkStatusDetails
AMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/344/

SUCCESS

Deployment log:
4.22.0-amer-prod-deploy.log
APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/284/

SUCCESS

Deployment log:
4.22.0-apac-prod-deploy.log

EMEA

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/382/

SUCCESS

Deployment log:
4.22.0-emea-prod-deploy.log

GBL(EX-US)

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/241/

SUCCESS

Deployment log:
4.22.0-gbl-prod-deploy.log

GBLUS

https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/294/

SUCCESS

Deployment log:
4.22.0-gblus-prod-deploy.log

PROD deploy hypercare details:

Verification date

15:30 - 16:40

Verification by

Dashboard

Status

Details

MDMHUB / MDMHUB Component errors

\"(tick)\"

AMER-PROD, Incorrect payload on Kafka, Piotr manually moved offset to fix this. 

\"\"

GBLUS-PROD, single error with ";" and ")" 

APAC-PROD, map channel:

  • Failure not recovered
  • Processing of message: KR-6687996c10e6767c9e1cab6f failed with error: Invalid format: "6/20/1970" is malformed at "/20/1970"
    • Piotr claims this is the DLQ queue, probably with a single problematic event (see the sketch below).

\"\"

EMEA-PROD, map-channel:

  • 400x Unexpected response: { "status": "ERROR", "status_code": 403, "error_message": "com.COMPANY.gcs.hcp.gateway.exception.RateLimitExceededException - TotalRequests Limit exceeded! (maxRequestsPerMinute=1200)" } (see the throttling sketch below)
  • Unexpected response: { "status": "ERROR", "status_code": 404, "error_message": "Contact not found by contact_id=a0EF000000pI8bAMAS! (market=IE)" }

\"\"
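The 403 above carries the limit itself (maxRequestsPerMinute=1200). A minimal client-side throttle sketch derived from that number; whether map-channel should throttle or the gateway limit should be raised is not settled in this report:

import java.util.concurrent.TimeUnit;

public class GatewayThrottle {
    // 1200 requests/minute (from the error payload) = one request per 50 ms on average.
    private static final long MIN_INTERVAL_MS = 60_000 / 1200;
    private long lastCallMs = 0;

    public synchronized void acquire() throws InterruptedException {
        long waitMs = lastCallMs + MIN_INTERVAL_MS - System.currentTimeMillis();
        if (waitMs > 0) {
            TimeUnit.MILLISECONDS.sleep(waitMs); // spread calls instead of bursting into the limit
        }
        lastCallMs = System.currentTimeMillis();
    }
}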
MDMHUB / MDMHUB KPIs

Without refactoring this dashboard, no insights can be extracted. Skipping.
MDMHUB / MDMHUB Components resource\"(tick)\"

General / Snowflake QC Trends

\"(tick)\"

EMEA-PROD, Empty COMPANYGlobalCustomerID - such entities are deleted at the Snowflake level → nothing gets populated downstream.

\"\"

Kubernetes / K8s Cluster Usage Statistics

\"(tick)\"

Kubernetes / Pod Monitoring

\"(tick)\"

APAC-PROD, suspicious memory usage? 

\"\"

EMEA-PROD, config deploy

\"\"

General / kubernetes-persistent-volumes \"(tick)\"
General / Alerts Statistics \"(tick)\"

AMER-PROD

  • \"(tick)\"  publisher_broken_events_prod
  • quality_gateway_auto_resolved_event
  • hub_callback_loop

GBLUS-PROD

  • \"(tick)\"  snowflake_last_entity_event_time_prod

EMEA-PROD

  • dag_failed_prod - has existed for a long time, addressed in Karma
  • snowflake_generated_events_without_COMPANY_global_customer_ids_prod

APAC-PROD

  • \"(tick)\"  pod_crashlooping_pdks - long time error in karma


General / SSL Certificates and Endpoint Availability\"(tick)\"
" }, { "title": "FAQ", "pageID": "462236735", "pageLink": "/display/GMDM/FAQ", "content": "

Questions and answers about HUB topics.

" }, { "title": "What is survivorship strategy in Reltio and where to find it?", "pageID": "462236738", "pageLink": "/pages/viewpage.action?pageId=462236738", "content": "

Simple attributes on Reltio profiles (not nested ones) have an OV attribute, showing whether the attribute value should be shown to the user.

Example:
\"\"
This HCO has two COMPANY Customer IDs (from different crosswalks) and the visible one won during calculation of survivorship strategy.
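In code terms, picking the displayed value boils down to filtering attribute values by their OV flag. A minimal sketch with a simplified, illustrative attribute shape (not the real HUB model classes) and made-up IDs:

import java.util.List;

public class OvFilter {
    // Simplified stand-in for a Reltio simple-attribute value with its OV flag.
    record AttributeValue(String value, boolean ov) {}

    public static void main(String[] args) {
        // Two COMPANY Customer IDs from different crosswalks; only one survived.
        List<AttributeValue> customerIds = List.of(
                new AttributeValue("02-000000001", false),
                new AttributeValue("02-000000002", true));
        // The value shown to the user is the one with ov=true.
        customerIds.stream()
                .filter(AttributeValue::ov)
                .forEach(v -> System.out.println("Visible value: " + v.value()));
    }
}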


The survivorship rules can be configured separately for each environment and attribute. Those are part of Reltio configuration and can be accessed here (authentication type is Bearer token):
{{RELTIO_URL}}/{{tenantID}}/configuration
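A minimal sketch of fetching that configuration with Java's built-in HTTP client; RELTIO_URL, the tenant id and the token are placeholders to fill in (obtaining the Bearer token is not shown):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchTenantConfig {
    public static void main(String[] args) throws Exception {
        String reltioUrl = "https://RELTIO_URL"; // placeholder
        String tenantId = "tenantID";            // placeholder
        String bearerToken = "TOKEN";            // placeholder; authentication type is Bearer token
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(reltioUrl + "/" + tenantId + "/configuration"))
                .header("Authorization", "Bearer " + bearerToken)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The returned JSON contains, among other things, the survivorship rule definitions.
        System.out.println(response.body());
    }
}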


Description of Reltio survivorship rules:
https://docs.reltio.com/en/model/consolidate-data/design-survivorship-rules/survivorship-rules

" } ]