[ { "title": "HUB Overview", "": "", "pageLink": "/display/GMDM/HUB+Overview", "content": " services provide services for clients using (Reltio or Nucleus 360) in following fields:As abstraction layer providing for data livering common processes that are hiding complexity of interaction with Reltio API.Enhancing Reltio functionality by data quality validating and through cleaning services.Extending data protection by limiting clients' lowing to publish data to multiple clients using event streaming and batch consist of:Integration Gateway providing services for data handling in Reltio (storing and accessing entities directly).Publishing Hub being responsible for publishing profiles to e MDM HUB ecosystem is presented at the picture below.   " }, { "title": "Modules", "": "", "pageLink": "/display/GMDM/Modules", "content": "" }, { "title": "Direct Channel", "": "", "pageLink": "/display/GMDM/Direct+Channel", "content": "DescriptionDirect channel exposes unified REST interface to update/search profiles in systems. The diagram below shows the logical architecture of the Direct Channel module. Logical architectureComponentsComponentSubcomponentDescriptionAPI GatewayKong components playing the role of proxAuthentication engineKong module providing client authentication servicesManager/ microservice orchestrating callsData service validating data sent to client access to routing engineroute calls to calls in EFK service for tracing reasons. Reltio Adapterhandles communication with systemNucleus Adapterhandle communication with systemHUB StoreMongoDB database plays the role of persistence store for requests to regional servicesFlowsFlowDescriptionCreate/Update or Update / entitySearch entityGet EntityRead entityRead LOVRead LOVValidate HCPValidate HCP" }, { "title": "Streaming channel", "": "", "pageLink": "/display//Streaming+channel", "content": "DescriptionStreaming channel distributes profile updates through topics in near real-time to consumers.  Reltio events generate on profile changes are sent via queue to HUB enriches events with profile data and dedupes them. During the process, callback service process data (for example: calculate ranks and names, clean unused topics) and updates profile in with the calculated values.   Publisher distributes events to target client topics based on the configured routing built-in provides access to up to date data in both the object and the relational model. Logical architectureComponentsComponentDescriptionReltio subscriberConsume events from ReltioCallback serviceTrigger callback actions on incoming events for example calculated rankingsDirect updates triggered by callbacksHUB StoreKeeps MDM data historyReconciliation serviceReconcile missing eventsPublisherEvaluates routing rules and publishes data do downstream consumersSnowflake data in the relation modelKafka data to from enricherEnrich events with full data retrieved from ReltioFlowsFlowDescriptionReltio events streamingDistribute data changes to downstream consumers in the streaming modeNucleus events streamingDistribute data changes to downstream consumers in the streaming modeSnowflake: Events publish flowDistribute Reltio MDM data changes to " }, { "title": "Java Batch Channel", "": "", "pageLink": "/display//Java+Batch+Channel", "content": " is the set of services responsible to load file extract delivered by the external source to Reltio. 
The heart of the module is the file loader service, aka inc-batch-channel, which maps the flat model to the Reltio model and orchestrates the load through an asynchronous interface managed by the Manager. Batch flows are managed by the Apache Airflow scheduler.Logical architectureComponentsApache Airflow - batch flows schedulerFile loader aka inc-batch-channel - maps files to the Reltio model and orchestrates profile loadsManager/Orchestrator - Java microservice orchestrating calls  batches - a generic flow for loading source data from flat files into Reltio" }, { "title": "ETL Batch Channel", "": "", "pageLink": "/display//ETL+Batch+Channel", "content": "DescriptionThe ETL Batch channel exposes a REST API for components like and manages the loading process in an asynchronous way. With its own cache based on , it supports full loads by providing delta detection logic.Logical architectureComponentsBatch service - exposes REST for platforms to load batch data into and controls the loading - a registry of batch loads and a cache to handle delta detectionManager/Orchestrator - Java microservice orchestrating calls into and providing validation and data protection services. FlowsETL batch flow - a generic flow for loading source data with tools like into Reltio" }, { "title": "Environments", "": "", "pageLink": "/display/GMDM/Environments", "content": "Reltio Export IPsEnvironmentIPsReltio Team commentEMEA NON-PRODEMEA PROD- ●●●●●●●●●●●●- ●●●●●●●●●●●●- ●●●●●●●●●●●●are available across all EMEA environmentsAPAC NON-PRODAPAC PROD- ●●●●●●●●●●●- ●●●●●●●●●●●●●●- ●●●●●●●●●●●●●are available across all environmentsGBLUS NON-PRODGBLUS PROD- ●●●●●●●●●●●●●- ●●●●●●●●●●●- ●●●●●●●●●●●●● for the dev/test and 361 tenants, the IPs can be used by any of the PRODThe tenants use the same access points as the " }, { "title": "", "": "", "pageLink": "/display/", "content": "ContactsTypeContactCommentSupported MDMHUB environmentsDLDL-ADL-ATP-GLOBAL_MDM_RELTIO@Supports Reltio instancesGBLUS - Reltio only" }, { "title": "AMER Non PROD Cluster", "": "", "pageLink": "/display/GMDM/AMER+Non+PROD+Cluster", "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-amer10.9.64.0/1810.9.0.0/18EKS over per node,6TBx2 replicated , , , , microservicesoutbound and inboundNon PROD - backend  managerkubectl logs {{pod name}} --namespace kongamer-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsamer-backendKafka -kafka-kafka-exporter-*Kafka logs {{pod name}} --namespace amer-backendamer-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsamer-backendMongomongo-0Mongologsamer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backendamer-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendamer-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace 
amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-dev-mdm-connect-cluster-connect-*amer-qa-mdm-connect-cluster-connect-*amer-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-amer-dev-*monitoring-jdbc-snowflake-exporter-amer-stage-*monitoring-jdbc-snowflake-exporter-amer-stage-*Kafka metric exporterkubectl logs {{pod name}} --namespace monitoringamer-backendAkhqakhq-*Kafka UIlogsCertificates Wed Aug 31 21:57:19 CEST until: 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to Issued , , , , Consul, , 14:13:53 GMTTue, GMT 18 11:07:55 2022 GMTJan 18 11:07:55 2024 :9094Setup and check connections:Snowflake - managing service accounts - " }, { "title": "AMER DEV Services", "": "", "pageLink": "/display//AMER+DEV+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://gblmdmhubnprodamrasp100762HUB UI MDM DataMartResource NameEndpointDB warehouse dashboardsResource NameEndpointHUB Performance Topics Overview Statistics Overview dashboardsResource NameEndpointKibana (DEV prefixed dashboards)DocumentationResource documentation NameEndpointAirflow UIConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI means part of name which - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableamer-devBatch Servicemdmhub-batch-service-*Batch service, batch loaderlogsamer-devApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsamer-devSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-devEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-devCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-devPublishermdmhub-event-publisher-*Events publisherlogsamer-devReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue name Gateway Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb://:MigrationThe is the first environment that was migrated from old ifrastructure ( based) to a new one - Kubernetes based. The following table presents old endpoints and their substitutes in the new environment. 
Everyone who wants to connect with has to use new scriptionOld endpointNew endpointManager API:8443/dev-ext:8443/dev-ext Service API:8443/dev-batch-ext:8443/dev-batch-ext API:8443/v1:8443/v1" }, { "title": "", "": "", "pageLink": "/display/GMDM/AMER+QA+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV ://gblmdmhubnprodamrasp100762HUB UI MDM DataMartResource NameEndpointDB Url NameCOMM_AMER_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_QA_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Statistics Overview NameEndpointKibana (QA prefixed dashboards)DocumentationResource API documentation ConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI means part of name which - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableamer-qaBatch Servicemdmhub-batch-service-*Batch service, batch loaderlogsamer-qaApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsamer-qaSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-qaEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-qaCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-qaPublishermdmhub-event-publisher-*Events publisherlogsamer-qaReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue name Gateway Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb:// SASL SSLKibana" }, { "title": "AMER STAGE Services", "": "", "pageLink": "/display//AMER+STAGE+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://gblmdmhubnprodamrasp100762HUB UI MDM DataMartResource NameEndpointDB Url NameCOMM_AMER_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Statistics Overview NameEndpointKibana (STAGE prefixed dashboards)DocumentationResource API documentation NameEndpointAirflow UIConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI means part of name which - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableamer-stageBatch Servicemdmhub-batch-service-*Batch service, batch loaderlogsamer-stageApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsamer-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-stagePublishermdmhub-event-publisher-*Events publisherlogsamer-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue name Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb:// SASL SSLKibana" }, { "title": "GBLUS-DEV Services", "": "", "pageLink": "/display//GBLUS-DEV+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV ://gblmdmhubnprodamrasp100762HUB UI MDM DataMartResource NameEndpointDB UrlDB warehouse 
nameCOMM_MDM_DMART_WHDevOps role nameCOMM_DEV_MDM_DMART_DEVOPS_ROLEGrafana dashboardsResource NameEndpointHUB Performance Topics Overview Statistics Overview NameEndpointKibana (DEV prefixed dashboards)DocumentationResource API documentation ConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI means part of name which changing)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation (GBLUS)MDM SystemsReltioDEV(gblus_dev) - sw8BkTZqjzGr7hnResource NameEndpointSQS queue name Gateway Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb://:MigrationThe following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with gblus dev has to use new scriptionOld endpointNew endpointManager API:8443/dev-ext Service API:8443/dev-batch-ext API:8443/v1" }, { "title": "", "": "", "pageLink": "/display//GBLUS-QA+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://gblmdmhubnprodamrasp100762HUB NameEndpointDB warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_QA_MDM_DMART_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Statistics Overview NameEndpointKibana (QA prefixed dashboards)DocumentationResource documentation NameEndpointAirflow UIConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI means part of name which changing)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation (GBLUS)MDM SystemsReltioQA(gblus_qa) - rEAXRHas2ovllvTSQS queue name Gateway Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb:// SASL SSLKibanaMigrationThe following table presents old endpoints and their substitutes in the new environment. 
Everyone who wants to connect with gblus qa has to use new scriptionOld endpointNew endpointManager API:8443/qa-ext Service API:8443/qa-batch-ext API:8443/v1" }, { "title": "GBLUS-STAGE Services", "": "", "pageLink": "/display//GBLUS-STAGE+Services", "content": " NameEndpointGateway API OAuth2 External - DEV UI MDM DataMartResource NameEndpointDB Url NameCOMM_GBL_MDM_DMART_STGDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_STG_MDM_DMART_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Statistics Overview NameEndpointKibana (STAGE prefixed dashboards)DocumentationResource documentation NameEndpointAirflow UIConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI means part of name which changing)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation (GBLUS)MDM SystemsReltioSTAGE(gblus_stage) - 48ElTIteZz05XwTSQS queue name Gateway Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb:// SASL SSLKibana" }, { "title": "AMER PROD Cluster", "": "", "pageLink": "/display//AMER+PROD+Cluster", "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-prod-amer10.9.64.0/1810.9.0.0/18EKS over per node,6TBx3 replicated , , , , microservicesoutbound and inboundPROD - backend  managerkubectl logs {{pod name}} --namespace kongamer-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsamer-backendKafka -kafka-kafka-exporter-*Kafka logs {{pod name}} --namespace amer-backendamer-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsamer-backendMongomongo-0Mongologsamer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backendamer-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendamer-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-prod-mdm-connect-cluster-connect-*amer-qa-mdm-connect-cluster-connect-*amer-stage-mdm-connect-cluster-connect-*Snowflake Kafka 
Connectorkubectl logs {{pod name}} --namespace amer-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-amer-prod-*monitoring-jdbc-snowflake-exporter-amer-stage-*monitoring-jdbc-snowflake-exporter-amer-stage-*Kafka metric exporterkubectl logs {{pod name}} --namespace monitoringamer-backendAkhqakhq-*Kafka UIlogsCertificates Wed Aug 31 21:57:19 CEST until: 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to Issued , , , , Consul, , 14:13:53 GMTTue, 14:13:53 GMT 18 2022 GMTJan 18 11:07:55 2024 :9094Setup and check connections:Snowflake - managing service accounts - via - Get Support → Submit ticket → GBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMI" }, { "title": "", "": "", "pageLink": "/display//AMER+PROD+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV DataMartResource NameEndpointDB Url NameCOMM_AMER_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_PROD_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Statistics Overview NameEndpointKibana (PROD prefixed dashboards)DocumentationResource API documentation NameEndpointAirflow UIConsulResource NameEndpointConsul UI - KafkaResource NameEndpointAKHQ Kafka means part of name which - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableamer-prodBatch -batch-service-*Batch service, batch loaderlogsamer-prodApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsamer-prodSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-prodEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-prodCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-prodPublishermdmhub-event-publisher-*Events publisherlogsamer-prodReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation (GBLUS)MDM Ys7joaPjhr9DwBJResource NameEndpointSQS queue name Gateway Usersvc-pfe-mdmhub-prodRDM ResourcesResource NameEndpointMongomongodb://::9094 SASL SSLKibana" }, { "title": "", "": "", "pageLink": "/display/GMDM/GBL+US+PROD+Services", "content": " S3Gateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://gblmdmhubprodamrasp101478Snowflake MDM DataMartDB UrlDB warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_PROD_MDM_DMART_DEVOPS_ROLEHUB Performance Topics Overview Overview (PROD prefixed dashboards)DocumentationManager API documentation UIConsulConsul UI - KafkaAKHQ Kafka UI & LogsENV (namespace)ComponentPods (* means part of name which changing)DescriptionLogsPod portsgblus-prodManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availablegblus-prodBatch -batch-service-*Batch service, batch loaderlogsgblus-prodSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-prodEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-prodCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-prodPublishermdmhub-event-publisher-*Events publisherlogsgblus-prodReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation serivcelogsgblus-prodOnekey DCRmdmhub-mdm-onekey-dcr-service-*Onekey ( (GBLUS)ENGAGE (GBLUS)KOL_ONEVIEW ( (GBLUS)GRACE (GBLUS)MDM 
SystemsReltioPROD- 9kL30u7lFoDHp6XSQS queue name Gateway Usersvc-pfe-mdmhub-prodRDM ResourcesMongomongodb://::9094 SASL SSLKibana" }, { "title": "AMER SANDBOX Cluster", "": "", "pageLink": "/display//AMER+SANDBOX+Cluster", "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-sbx-amer●●●●●●●●●●●●●●●●●●●●●●● EKS over per nodeKong, , , , microservicesoutbound and inboundSANDBOX - backend  managerkubectl logs {{pod name}} --namespace kongamer-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsamer-backendKafka -kafka-kafka-exporter-*Kafka logs {{pod name}} --namespace amer-backendamer-backendZookeepermdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsamer-backendMongomongo-0Mongologsamer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backendamer-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1elasticsearch-es-default-2EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-devsbx-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backendamer-backendAkhqakhq-*Kafka UIlogsCertificates Wed Aug 31 21:57:19 CEST until: 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to Issued , , , , Consul, , 15:16:-02-21 15:16:04" }, { "title": "", "": "", "pageLink": "/display//AMER+DEVSBX+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB   UI dashboardsResource NameEndpointHUB Performance Topics Overview Statistics Overview dashboardsResource NameEndpointKibana ( prefixed dashboards)DocumentationResource API documentation NameEndpointAirflow UIConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI means part of name which  - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableamer-devsbxBatch Servicemdmhub-batch-service-*Batch service, batch loaderlogsamer-devsbxApi routermdmhub-mdm-api-router-*API gateway accross multiple tenatslogsamer-devsbxEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-devsbxCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-devsbxPublishermdmhub-event-publisher-*Events publisherlogsamer-devsbxReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation serivcelogsInternal ResourcesResource NameEndpointMongomongodb://::9094 SASL SSLKibana" }, { "title": "", "": "", "pageLink": "/display/GMDM/APAC", "content": "" }, { "title": "", "": "", "pageLink": "/display/GMDM/APAC+Non+PROD+Cluster", "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS 
regionFilesystemComponentsTypeatp-mdmhub-nprod-apac●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EKS over per node,6TBx2 replicated , , , , MDMHUB microservicesinbound/outboundComponents & LogsDEV - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-devManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableapac-batch-service-*Batch -devAPI routermdmhub-mdm-api-router-*API Routerlogsapac-devReltio Subscribermdmhub-reltio-subscriber-*Reltio -entity-enricher-*Entity Enricherlogsapac-devCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation -devCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsQA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableapac-qaBatch Servicemdmhub-batch-service-*Batch -qaAPI routermdmhub-mdm-api-router-*API Routerlogsapac-qaReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation -qaCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsSTAGE - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableapac-stageBatch Servicemdmhub-batch-service-*Batch -stageAPI routermdmhub-mdm-api-router-*API Routerlogsapac-stageReltio -reltio-subscriber-*Reltio Subscriberlogsapac-stageEntity Enrichermdmhub-entity-enricher-*Entity -stageCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-stageEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-stageReconciliation -stageCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsNon PROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongapac-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsapac-backendKafka -kafka-kafka-exporter-*Kafka logs {{pod name}} --namespace apac-backendapac-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsapac-backendMongomongo-0Mongologsapac-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace apac-backendapac-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace apac-backendapac-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace apac-backendapac-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace -backendmonitoringcAdvisormonitoring-cadvisor-*Docker logs {{pod name}} 
--namespace monitoringapac-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace apac-backendapac-backendMongo exportermongo-exporter-*mongo metrics exporter----backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace apac-backendapac-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace apac-backendapac-backendSnowflake connectorapac-dev-mdm-connect-cluster-connect-*apac-qa-mdm-connect-cluster-connect-*apac-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace apac-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-apac-dev-*monitoring-jdbc-snowflake-exporter-apac-stage-*monitoring-jdbc-snowflake-exporter-apac-stage-*Kafka metric exporterkubectl logs {{pod name}} --namespace monitoringapac-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued , , , , Consul, ," }, { "title": "", "": "", "pageLink": "/display//APAC+DEV+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate  ://globalmdmnprodaspasp202202171347HUB UI MDM DataMartResource NameEndpointDB UrlDB NameCOMM_APAC_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_DEV_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Overview State Monitoring Monitoring NameEndpointKibana (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentation NameEndpointAirflow UIConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI (, , - 2NBAwv1z2AvlkgSResource NameEndpointSQS queue name NameEndpointMongomongodb://::9094 SASL SSLKibanaElasticsearch" }, { "title": "", "": "", "pageLink": "/display/GMDM/APAC+QA+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://globalmdmnprodaspasp202202171347HUB UI MDM DataMartResource NameEndpointDB UrlDB warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_QA_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Overview State Monitoring Monitoring NameEndpointKibana (QA prefixed dashboards)DocumentationResource ConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI (, , NameEndpointSQS queue name Gateway Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb://::9094 SASL SSLKibanaElasticsearch" }, { "title": "", "": "", "pageLink": "/display//APAC+STAGE+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://globalmdmnprodaspasp202202171347HUB UI MDM DataMartResource NameEndpointDB UrlDB NameCOMM_APAC_MDM_DMART_STG_DBDefault warehouse role nameCOMM_APAC_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Monitoring Monitoring NameEndpointKibana (STAGE prefixed dashboards)DocumentationResource documentation NameEndpointAirflow UIConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI (, , STAGE - Y4StMNK3b0AGDf6Resource NameEndpointSQS queue ::9094 SASL SSLKibanaElasticsearch" }, { "title": "", "": "", "pageLink": "/display//APAC+PROD+Cluster", "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-prod-apac●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EKS over per node,6TBx2 replicated , , , , MDMHUB microservicesinbound/outboundComponents & LogsPROD - microservicesENV 
(namespace)ComponentPodDescriptionLogsPod portsapac-prodManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableapac-prodBatch -batch-service-*Batch -prodAPI routermdmhub-mdm-api-router-*API Routerlogsapac-prodReltio -reltio-subscriber-*Reltio Subscriberlogsapac-prodEntity Enrichermdmhub-entity-enricher-*Entity -prodCallback Servicemdmhub-callback-service-*Callback -prodEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-prodReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation -prodCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsNon PROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongapac-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsapac-backendKafka -kafka-kafka-exporter-*Kafka logs {{pod name}} --namespace apac-backendapac-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsapac-backendMongomongo-0Mongologsapac-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace apac-backendapac-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace apac-backendapac-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace apac-backendapac-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace -backendmonitoringcAdvisormonitoring-cadvisor-*Docker logs {{pod name}} --namespace monitoringapac-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace apac-backendapac-backendMongo exportermongo-exporter-*mongo metrics exporter----backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace apac-backendapac-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace apac-backendapac-backendSnowflake connectorapac-prod-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace apac-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-apac-prod-*Kafka exporterkubectl logs {{pod name}} --namespace monitoringapac-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued , , , , Consul, ," }, { "title": "", "": "", "pageLink": "/display/GMDM/APAC+PROD+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://globalmdmprodaspasp202202171415HUB UI MDM DataMartResource NameEndpointDB DB warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_PROD_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Overview State Monitoring Monitoring NameEndpointKibana (DEV prefixed dashboards)DocumentationResource API ConsulResource NameEndpointConsul UIAKHQ - KafkaResource NameEndpointAKHQ UI (, , - 2NBAwv1z2AvlkgSResource NameEndpointSQS queue name Gateway Usersvc-pfe-mdmhub-prodRDM ResourcesResource NameEndpointMongomongodb://::9094 SASL SSLKibanaElasticsearch" }, { "title": "", "": "", "pageLink": "/display/GMDM/EMEA", "content": "" }, { "title": " proxy", "": "", "pageLink": "/display//EMEA+External+proxy", "content": "The page describes the external proxy servers. 
deployed in a DLP (Double Lollipop) account, used by clients outside of the COMPANY network, to access . proxy instancesEnvironmentConsole accessresource typeAWS regionAWS Account IDComponentsNon PROD use the role: (EUW1Z2DL115)ssh ec2-user@EC2eu-west-1432817204314KongPRODi-091aa7f1fe1ede714 (EUW1Z2DL113)ssh ec2-user@i-05c4532bf7b8d7511 (EUW1Z2DL114)ssh  External Hub EndpointsEnvironmentServiceEndpointInbound security group configurationNon PRODAPI - due to the limit of 60 rules per , add new ones to::::9095ClientsEnvironmentClientsNon PRODFind all details in the Security GroupMDMHub-kafka-and-api-proxy-external-nprod-sgPRODFind all details in the Security GroupMDMHub-kafka-and-api-proxy-external-prod-sgAnsible configurationResourceAddressInstall proxy cadvisor PROD inventory inventory SOPsHow to access to restart the EC2 instanceHow to login to hosts with downtime restart/upgrade" }, { "title": "EMEA Non PROD Cluster", "": "", "pageLink": "/display//EMEA+Non+PROD+Cluster", "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-emea10.90.96.0/2310.90.98.0/23 over EC2eu-west-1~100GBper node,7.3Ti replicated Portworx volumesKong, , , , MDMHUB microservicesinbound/outboundComponents & LogsDEV - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-devManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableemea-devBatch Servicemdmhub-batch-service-*Batch -devAPI routermdmhub-mdm-api-router-*API Routerlogsemea-devReltio Subscribermdmhub-reltio-subscriber-*Reltio -entity-enricher-*Entity service-*Callback Servicelogsemea-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsQA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableemea-qaBatch Servicemdmhub-batch-service-*Batch -qaAPI routermdmhub-mdm-api-router-*API Routerlogsemea-qaReltio -reltio-subscriber-*Reltio Subscriberlogsemea-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsSTAGE - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableemea-stageBatch Servicemdmhub-batch-service-*Batch -stageAPI routermdmhub-mdm-api-router-*API Routerlogsemea-stageReltio -reltio-subscriber-*Reltio Subscriberlogsemea-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-stageCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-stageEvent Publishermdmhub-event-publisher-*Event -stageReconciliation (namespace)ComponentPodDescriptionLogsPod 
portsgbl-devManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availablegbl-devBatch Servicemdmhub-batch-service-*Batch Servicelogsgbl-devReltio Subscribermdmhub-reltio-subscriber-*Reltio -entity-enricher-*Entity Enricherlogsgbl-devCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation -devDCR Servicemdmhub-mdm-dcr-service-*DCR Channel mdmhub-mdm-map-channel-*MAP -devPforceRX -pforcerx-channel-*PforceRX ChannellogsGBL QA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsgbl-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availablegbl-qaBatch Servicemdmhub-batch-service-*Batch -qaReltio -reltio-subscriber-*Reltio -qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsgbl-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsgbl-qaDCR Servicemdmhub-mdm-dcr-service-*DCR Channel mdmhub-mdm-map-channel-*MAP Channellogsgbl-qaPforceRX -pforcerx-channel-*PforceRX ChannellogsGBL STAGE - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsgbl-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availablegbl-stageBatch Servicemdmhub-batch-service-*Batch -stageReltio -reltio-subscriber-*Reltio -stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsgbl-stageCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-stageEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-stageReconciliation Servicelogsgbl-stageDCR Servicemdmhub-mdm-dcr-service-*DCR Channel mdmhub-mdm-map-channel-*MAP Channellogsgbl-stagePforceRX -pforcerx-channel-*PforceRX ChannellogsNon PROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongemea-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsemea-backendKafka -kafka-kafka-exporter-*Kafka logs {{pod name}} --namespace emea-backendemea-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsemea-backendMongomongo-0Mongologsemea-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace emea-backendemea-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace emea-backendemea-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace emea-backendemea-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace emea-backendmonitoringcAdvisormonitoring-cadvisor-*Docker logs {{pod name}} --namespace monitoringemea-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace emea-backendemea-backendMongo exportermongo-exporter-*mongo metrics 
exporter---emea-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace emea-backendemea-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace emea-backendemea-backendSnowflake connectoremea-dev-mdm-connect-cluster-connect-*emea-qa-mdm-connect-cluster-connect-*emea-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace emea-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-emea-dev-*monitoring-jdbc-snowflake-exporter-emea-stage-*monitoring-jdbc-snowflake-exporter-emea-stage-*Kafka metric exporterkubectl logs {{pod name}} --namespace monitoringemea-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued , , , , Consul, ," }, { "title": "", "": "", "pageLink": "/display//EMEA+DEV+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://pfe-atp-eu--nprod-mdmhub/emea/ MDM DataMartResource NameEndpointDB UrlDB NameCOMM_EMEA_MDM_DMART_DEV_DBDefault warehouse role nameCOMM_EMEA_MDM_DMART_DEVOPS_DEV_ROLEResource NameEndpointHUB Performance Topics Overview Overview State Monitoring Monitoring NameEndpointKibana (DEV prefixed dashboards)DocumentationResource API documentation Service 2 documentation NameEndpointAirflow UI NameEndpointConsul UI - KafkaResource NameEndpointAKHQ Kafka UI - COMPANY (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb://:27017Kafka" }, { "title": "", "": "", "pageLink": "/display/GMDM/EMEA+QA+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://pfe-atp-eu--nprod-mdmhub/emea/qaHUB UI MDM DataMeResource NameEndpointDB UrlDB warehouse nameCOMM_MDM_DMART_WHDevOps role dashboardsResource NameEndpointHUB Performance dashboardsResource NameEndpointKibana (QA prefixed dashboards)DocumentationResource documentation NameEndpointAirflow UI NameEndpointConsul UI - KafkaResource NameEndpointAKHQ Kafka UI - COMPANY (GBLUS)MDM SystemsReltioQA - vke5zyYwTifyeJSResource NameEndpointSQS queue name Gateway Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb://:27017Kafka" }, { "title": "", "": "", "pageLink": "/display//EMEA+STAGE+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://pfe-atp-eu--nprod-mdmhub/emea/stageHUB UI MDM DataMartResource NameEndpointDB UrlDB NameCOMM_EMEA_MDM_DMART_STG_DBDefault warehouse role nameCOMM_EMEA_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performance Topics Overview Statistics monitoring Overview NameEndpointKibana (STAGE prefixed dashboards)DocumentationResource API documentation NameEndpointAirflow UI NameEndpointConsul NameEndpointAKHQ Kafka UI - COMPANY (GBLUS)MDM SystemsReltioSTAGE - Dzueqzlld107BVWResource NameEndpointSQS queue name Gateway Usersvc-pfe-mdmhubRDM ResourcesResource NameEndpointMongomongodb://:27017Kafka" }, { "title": "GBL DEV Services", "": "", "pageLink": "/display/GMDM/GBL+DEV+Services", "content": " NameEndpointGateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://pfe-atp-eu--nprod-mdmhub (eu-west-1)HUB UI MDM DataMartResource NameEndpointDB UrlDB warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_DEV_MDM_DMART_DEVOPS_ROLEMonitoringResource NameEndpointHUB Performance Topics Overview Monitoring State Overview NameEndpointKibana (DEV prefixed 
dashboards)DocumentationResource NameEndpointManager documentation NameEndpointAirflow UI NameEndpointConsul UI - KafkaResource NameEndpointAKHQ Kafka UI SystemsReltio GBL DEV - FLy4mo0XAh0YEbNResource NameEndpointSQS queue name Gateway UserIntegration_Gateway_UserRDM ResourcesResource NameEndpointMongomongodb://:27017Kafka" }, { "title": "", "": "", "pageLink": "/display/GMDM/GBL+QA+Services", "content": " Federate API KEY auth - DEV HUB  ://pfe-atp-eu--nprod-mdmhub (eu-west-1)HUB UI MDM DataMartDB UrlDB NameCOMM_EU_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_QA_MDM_DMART_DEVOPS_ROLEMonitoringHUB Performance Topics Overview Monitoring State Overview(QA prefixed dashboards)DocumentationManager documentation NameEndpointAirflow UI NameEndpointConsul UI - KafkaResource NameEndpointAKHQ Kafka UI SystemsReltio GBL MAPP - AwFwKWinxbarC0ZSQS queue name Gateway UserIntegration_Gateway_UserRDM ResourcesMongomongodb://: SSLKibana" }, { "title": "GBL STAGE Services", "": "", "pageLink": "/display/GMDM/GBL+STAGE+Services", "content": " S3Gateway API OAuth2 External - DEV Federate API KEY auth - DEV HUB  ://pfe-atp-eu--nprod-mdmhub (eu-west-1)HUB DataMartDB Url warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_STG_MDM_DMART_DEVOPS_ROLEMonitoringHUB Overview(STAGE prefixed dashboards)DocumentationManager documentation NameEndpointAirflow UI NameEndpointConsul UI - KafkaResource NameEndpointAKHQ Kafka UI SystemsReltio GBL STAGE - FW4YTaNQTJEcN2gSQS queue name Gateway UserIntegration_Gateway_UserRDM ResourcesMongomongodb://:27017Kafka" }, { "title": "EMEA PROD Cluster", "": "", "pageLink": "/display//EMEA+PROD+Cluster", "content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-emea10.90.96.0/2310.90.98.0/23 over EC2eu-west-1~100GBper node,7.3Ti replicated Portworx volumesKong, , , , MDMHUB microservicesinbound/outboundComponents & LogsPROD - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-prodManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger definition - if availableemea-prodBatch -batch-service-*Batch -prodAPI routermdmhub-mdm-api-router-*API Routerlogsemea-prodReltio -reltio-subscriber-*Reltio Subscriberlogsemea-prodEntity Enrichermdmhub-entity-enricher-*Entity service-*Callback Servicelogsemea-prodEvent Publishermdmhub-event-publisher-*Event -prodReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongemea-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsemea-backendKafka -kafka-kafka-exporter-*Kafka logs {{pod name}} --namespace emea-backendemea-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsemea-backendMongomongo-0mongo-1mongo-2Mongologsemea-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace emea-backendemea-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace emea-backendemea-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1elasticsearch-es-default-2EFK - elasticsearchkubectl logs {{pod name}} --namespace emea-backendemea-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace 
emea-backendmonitoringcAdvisormonitoring-cadvisor-*Docker logs {{pod name}} --namespace monitoringemea-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace emea-backendemea-backendMongo exportermongo-exporter-*mongo metrics exporter---emea-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace emea-backendemea-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace emea-backendemea-backendSnowflake connectoremea-prod-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace emea-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-emea-prod-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringemea-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued , , , , Consul, ," }, { "title": "", "": "", "pageLink": "/display/GMDM/EMEA+PROD+Services", "content": " NameEndpointGateway API OAuth2 External - PROD Federate API KEY auth - PROD HUB  ://pfe-atp-eu--prod-mdmhub/emea/prodHUB UI MDM DataMartResource NameEndpointDB NameCOMM_EMEA_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLEMonitoringResource NameEndpointHUB Overview Statistics monitoring Overview (PROD prefixed dashboards)DocumentationResource documentation NameEndpointAirflow UI NameEndpointConsul UI - KafkaResource (GBLUS)MDM SystemsReltioPROD_EMEA - Xy67R0nDA10RUV6Resource NameEndpointSQS queue API - UIReltio Gateway Usersvc-pfe-mdmhub-prodRDM ResourcesResource NameEndpointMongo:27017Kafka:9094/,:9094/,:9094/Kibana" }, { "title": "", "": "", "pageLink": "/display//GBL+PROD+Services", "content": " Federate API KEY auth - PROD HUB  ://pfe-baiaes-eu--project/ DataMartDB Url NameCOMM_EU_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_GBL_MDM_DMART_PROD_DEVOPS_ROLEMonitoringHUB Performance Topics Overview Statistics monitoring Overview (PROD prefixed dashboards)DocumentationManager API documentation UI UI - KafkaAKHQ Kafka UI - COMPANY (GBLUS)MDM SystemsReltioPROD_EMEA - FW2ZTF8K3JpdfFlSQS queue name - API - UIReltio Gateway Userpfe_mdm_apiRDM ResourcesMongo:27017Kafka:9094/,:9094/,:9094/Kibana" }, { "title": "US Trade (FLEX)", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "US Non PROD Cluster", "": "", "pageLink": "/display/GMDM/US+Non+PROD+Cluster", "content": "Physical ArchitectureHostsIDIPHostnameDocker TypeSpecificationAWS RegionFilesystemDEV●●●●●●●●●●●●●mdmihnprEC2r4.2xlargeus-east750 GB - /app15 GB - /var/lib/dockerComponents & LogsENVHostComponentDocker nameDescriptionLogsOpen PortsDEVDEVManagerdevmdmsrv_mdm-manager_1Gateway /app/mdmgw/dev-mdm-srv/manager/log8849, 9104DEVDEVBatch file processor, poller/app/mdmgw/dev-mdm-srv/batch_channel/log9121DEVDEVPublisherdevmdmhubsrv_event-publisher_1Event publisher/app/mdmhub/dev-mdm-srv/event_publisher/log9106DEVDEVSubscriberdevmdmhubsrv_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/dev-mdm-srv/reltio_subscriber/log9105DEVDEVConsoledevmdmsrv_console_1Hawtio console9999ENVHostComponentDocker nameDescriptionLogsOpen PortsTESTDEVManagertestmdmsrv_mdm-manager_1Gateway /app/mdmgw/test-mdm-srv/manager/log8850, 9108TESTDEVBatch Channeltestmdmsrv_batch-channel_1Batch file processor, poller/app/mdmgw/test-mdm-srv/batch_channel/log9111TESTDEVPublishertestmdmhubsrv_event-publisher_1Event 
publisher/app/mdmhub/test-mdm-srv/event_publisher/log9110TESTDEVSubscribertestmdmhubsrv_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/test-mdm-srv/reltio_subscriber/log9109Back-End HostComponentDocker nameDescriptionLogsOpen PortsDEVFluentDfluentdEFK - FluentD/app/efk/fluentd/log24225DEVKibanakibanaEFK - Kibanadocker logs /app/efk/elasticsearch/logs9200DEVPrometheusprometheusPrometheus Federation slave serverdocker logs prometheus9119DEVMongomongo_mongo_1Mongodocker logs mongo_mongo_127017DEVMongo Exportermongo_exporterMongo → Prometheus exporter/app/mongo_exporter/logs9120DEVMonstache Connectormonstache-connectorMongo → Elasticsearch exporter8095DEVKafkakafka_kafka_1Kafkadocker logs kafka_kafka_19093, , 9101DEVKafka Exporterkafka_kafka_exporter_1Kafka → exporterdocker logs -exporter-devSQS → exporterdocker logs -exporter-dev9122DEVCadvisorcadvisorDocker → exporterdocker logs cadvisor9103DEVKongkong_kong_1API Manager/app/mdmgw//kong_logs8000, , 32774DEVKong - DBkong_kong-database_1Kong Cassandra databasedocker logs kong_kong-database_19042DEVZookeeperkafka_zookeeper_1Zookeeperdocker logs kafka_zookeeper_12181DEVNode Exporter(non-docker) node_exporterPrometheus node exportersystemctl status node_exporter9100CertificatesResourceCertificate fromValid to O = - Server Truststore = Default Company LtdST = Some-StateC = AUKafka - Server KeyStore CN = KafkaFlexOU = UnknownO = UnknownL = UnknownST = UnknownC = UnknownElasticsearch groupsResource NameTypeDescriptionSupportuserComputer : mdmihnprName: SRVGBL-Pf6687993Uid: 27634358Gid: userUnix Role GroupRole: ADMIN_ROLEportsSecurity groupSG Name: Submit ticket to FULL SUPPORTInternal ClientsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicFLEX US userflex_nprodExternal OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "SCAN_ENTITIES"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "SAP"dev-out-full-flex-alltest-out-full-flex-alltest2-out-full-flex-alltest3-out-full-flex-allInternal HUB usermdm_test_userExternal OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"- "SAP"- "HIN"- "DEAIntegration Batch Update userintegration_batch_userKey AuthN/A- "GET_ENTITIES"- "ENTITY_ATTRIBUTES_UPDATE"- "GENERATE_ID"- "CREATE_HCO"- " "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"dev-internal-integration-testsFLEX Batch Channel userflex_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"dev-internal-hco-create-flexflex_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test-internal-hco-create-flexflex_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test2-internal-hco-create-flexflex_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test3-internal-hco-create-flexSAP Batch Channel usersap_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"dev-internal-hco-create-sapsap_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test-internal-hco-create-sapsap_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test2-internal-hco-create-sapsap_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test3-internal-hco-create-sapHIN 
Batch Channel userhin_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"dev-internal-hco-create-hinhin_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"test-internal-hco-create-hinhin_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"test2-internal-hco-create-hinhin_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"test3-internal-hco-create-hinDEA Batch Channel userdea_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"dev-internal-hco-create-deadea_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test-internal-hco-create-deadea_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test2-internal-hco-create-deadea_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test3-internal-hco-create-dea340B Batch Channel user340b_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"dev-internal-hco-create-340b340b_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"test-internal-hco-create-340b" }, { "title": " DEV Services", "": "", "pageLink": "/display/GMDM/US+DEV+Services", "content": " - DEV:8443/dev-extPing Federate API KEY auth - DEV:8443/:9094MDM HUB  ://mdmnprodamrasp22124/MonitoringResource NameEndpointHUB Performance Topics Overview Statistics monitoring Overview (DEV prefixed DEV - keHVup25rN7ij3YResource NameEndpointSQS queue name Gateway UserIntegration_Gateway_US_UserRDM ResourcesResource NameEndpointMongomongodb://:::2181Kibana:5601/app//hawtio/#/login" }, { "title": " TEST (QA) Services", "": "", "pageLink": "/display//US+TEST+%28QA%29+Services", "content": " API OAuth2 External - TEST2:8443/test2-extGateway API OAuth2 External - TEST3:8443/test3-extGateway API KEY auth - TEST:8443/testGateway API KEY auth - TEST2:8443/test2Gateway API KEY auth - TEST3:8443/test3Ping Federate HUB  ://mdmnprodamrasp22124/LogsResource NameEndpointKibana:5601/app/kibana (TEST prefixed US TEST - cnL0Gq086PrguOdResource NameEndpointSQS queue name Reltio Gateway UserIntegration_Gateway_US_UserRDM Reltio US TEST2 - JKabsuFZzNb4K6kResource NameEndpointSQS queue name UserIntegration_Gateway_US_UserRDM Reltio US TEST3 - Yy7KqOqppDVzJpkResource NameEndpointSQS queue UserIntegration_Gateway_US_UserRDM /app//hawtio/#/login" }, { "title": "US PROD Cluster", "": "", "pageLink": "/display//US+PROD+Cluster", "content": "Physical ArchitectureHostsIDIPHostnameDocker TypeSpecificationAWS RegionFilesystemPROD1●●●●●●●●●●●●●●mdmihpr EC2r4.xlarge us-east-1e500 - /app15 GB - /var/lib/●●●●●●●●●●●●●●mdmihprEC2r4.xlarge us-east-1e500 - /app15 GB - /var/lib/dockerPROD3●●●●●●●●●●●●mdmihprEC2r4.xlarge us-east-1e500 - /app15 GB - /var/lib/dockerComponents nameDescriptionLogsOpen PortsPROD1, , PROD3Managermdmgw_mdm-manager_1Gateway /app/mdmgw/manager/log9104, 8851PROD1Batch Channelmdmgw_batch-channel_1Batch file processor, poller/app/mdmgw/batch_channel/log9107PROD1, , PROD3Publishermdmhub_event-publisher_1Event publisher/app/mdmhub/event_publisher/log9106PROD1, , PROD3Subscribermdmhub_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/reltio_subscriber/, , PROD3ElasticsearchelasticsearchEFK - Elasticsearch/app/efk/elasticsearch/, , PROD3FluentDfluentdEFK - FluentD/app/efk/fluentd/logPROD3KibanakibanaEFK - Kibanadocker logs slave serverdocker logs prometheus9109PROD1, , PROD3Mongomongo_mongo_1Mongodocker logs mongo_mongo_127017PROD3Monstache Connectormonstache-connectorMongo → 
exporterPROD1, , logs , 9094PROD1, , PROD3Kafka Exporterkafka_kafka_exporter_1Kafka → exporterdocker logs , , PROD3CadvisorcadvisorDocker → exporterdocker logs cadvisor9103PROD3SQS -exporterSQS → exporterdocker logs , , PROD3Kongkong_kong_1API Manager/app/mdmgw//kong_logs8000, , , , PROD3Kong - DBkong_kong-database_1Kong Cassandra databasedocker logs , , , PROD3Zookeeperkafka_zookeeper_1Zookeeperdocker logs kafka_zookeeper_12181, 2888, 3888PROD1, , PROD3Node Exporter(non-docker) node_exporterPrometheus node exportersystemctl status node_exporter9100CertificatesResourceCertificate fromValid to O = Truststore Root CA G2Kafka - Server TruststorePROD1 - - - O = COMPANYElasticsearchesnode1 - - groupsResource NameTypeDescriptionSupportELBLoad BalancerReference LB Name: PFE-CLB-JIRA-HARMONY-PROD-001CLB name: -PROD-001DNS name: userComputer RoleComputer Role: UNIX-UNIVERSAL-AWSCBSDEV-MDMIHPR-COMPUTERS-U Login: mdmihprName: SRVGBL-mdmihprUID: 25084803GID: userUnix Role GroupUnix-mdmihubProd-URole: : Submit ticket to FULL SUPPORTS3S3 Bucketmdmprodamrasp42095 (us-east-1)Username: login: ClientsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicInternal "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"prod-internal-reltio-eventsInternal usermdm_test_userExternal OAuth2MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"- "SAP"- "HIN"- "DEA"Integration Batch Update userintegration_batch_userKey AuthN/A- "GET_ENTITIES"- "ENTITY_ATTRIBUTES_UPDATE"- "GENERATE_ID"- "CREATE_HCO"- " "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"FLEX userflex_prodExternal OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "SCAN_ENTITIES"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"prod-out-full-flex-allFLEX Batch Channel userflex_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"prod-internal-hco-create-flexSAP Batch Channel usersap_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"prod-internal-hco-create-sapHIN Batch Channel userhin_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"prod-internal-hco-create-hinDEA Batch Channel userdea_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"prod-internal-hco-create-dea340B Batch Channel user340b_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"prod-internal-hco-create-340b" }, { "title": "", "": "", "pageLink": "/display//US+PROD+Services", "content": " - PROD API OAuth2 - PROD API KEY auth - PROD  ://mdmprodamrasp42095/- FLEX: PROD/inbound/FLEX- SAP: PROD/inbound/SAP- HIN: PROD/inbound/HIN- : PROD/inbound/DEA- 340B: PROD/inbound/340BMonitoringResource NameEndpointHUB Performance Topics Overview Statistics monitoring Overview NameEndpointKibana:5601/app/kibanaMDM SystemsReltio US PROD -  NameEndpointSQS queue name Reltio Gateway UserIntegration_Gateway_US_UserRDM ResourcesResource NameEndpointMongomongodb://:27017,:27017,:::::::2181Kibana:5601/app/kibanaElasticsearch:9200:9200:9200Hawtio:9999/hawtio/#/login:9999/hawtio/#/login:9999/hawtio/#/login" }, { "title": "Components", "": "", 
"pageLink": "/display/", "content": "" }, { "title": "Apache Airflow", "": "", "pageLink": "/display/GMDM/Apache+Airflow", "content": " is platform created by Apache and designed to schedule workflows called rflow docs: are using airflow on kubernetes with helm of official airflow helm chart: this architecture airflow consists of 3 main components:Scheduler - scheduling, monitoring and executing tasksWebserver - Airflow UIDatabase(PostgreSQL)InterfacesUI e.g. /api//docs: are configure in mdm-hub-cluster-env repository in /inventory/${environment}/group_vars/gw-airflow-services/${dag_name}.yaml filesUsed flows are described in dags list" }, { "title": "API Gateway", "": "", "pageLink": "/display/GMDM/API+Gateway", "content": "DescriptionKong (API Gateway) is the component used as the gateway for all requests in the MDM HUB. This component exposes only one URL to the external clients, which means that all internal docker containers are secured and it is not possible to access them. This allows to track whole network traffic access in one place. is the router that redirects requests to specific services using configured routes. contains multiple additional plugins, these plugins are connected with the specific services and add additional security (, , Oauth2-External) or user management. authorized users are allowed to execute specific operations in the nology: is a predefined component installed using a container. uses the Lua language and engine. (docker image: kong:1.1.1-centos)Kong stores the whole configuration in the Cassandra Database ( docker image: cassandra:3)Kong uses a customized plugin for the token verification - OAuth 2.0 ExternalCode link:  Admin API DOCOauth2 External plugin: /mdm-external-oauth-pluginFlowsKong is responsible for the security, user management, and access layer to HUB:  patternDescriptionAdmin APIREST APIGET http://localhost:8001/Internal and secured PORT available only in the docker container used by to manage existing services, routes, plugins, consumers, certificatesExternal APIREST APIGET https://localhost:8443/External and secured PORT exposed to the and accessed by clients. Dependent componentsComponentInterfaceFlowDescriptionCassandra - kong_kong-database_1TCP internal docker communicationN/Akong configuration databaseHUB internal docker communicationN/AThe route to all HUB microservices, required to expose to external clients ConfigurationKong configuration is divided into 5 sections:1  valueDescription- snowflake_api_user: create_or_update: False vars: username: snowflake_api_user plugins: - name: key-auth parameters: key: "{{ secret_kong_owflake_api_y_y }}" for the user with key-auth authentication - used only for the technical services l External OAuth2 users are configured in the utes Sections2 CertificatesConfig ParameterDefault valueDescription- gbl_mdm_hub_us_nprod: create_or_update: False vars: cert: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/m') }}" key: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/y') }}" snis: - - - ""N/A Configuration of the SSL Certificate in the Kong.3 ServicesConfig ParameterDefault valueDescriptionkong_services: - create_or_update: False vars: name: "{{ kong_env }}-manager-service" url: "http://{{ kong_env }}mdmsrv_mdm-manager_1:8081" connect_timeout: write_timeout: read_timeout: 120000N/AKong Service - this is a main part of the configuration, this connects internally with container.  
allows configuring multiple services with multiple routes and plugins.4  valueDescription- create_or_update: False vars: name: "{{ kong_env }}-manager-ext-int-api-oauth-route" service: "{{ kong_env }}-manager-service" paths: [ "/{{ kong_env }}-ext" ] methods: [ "GET", "POST", "PATCH", "DELETE" ]N/AExposes the route to the service. Clients using have to add the path to the invocation to access specified services. "-ext" suffix defines the that used the External OAuth 2.0 plugin connected to the PingFederate. Configures the methods that the user is allowed to invoke. 5 PluginsConfig ParameterDefault valueDescription- create_or_update: False vars: name: key-auth route: "{{ kong_env }}-manager-int-api-route" config: hide_credentials: trueN/AThe type of plugin "key-auth" used for the internal or technical users that authenticate using a security key- create_or_update: False vars: name: mdm-external-oauth route: "{{ kong_env }}-manager-ext-int-api-oauth-route" config: introspection_url: authorization_value: "{{ cret_oauth2_authorization_value }}" hide_credentials: true users_map: - "e2a6de9c38be44f4a3c1b53f50218cf7:engage"N/AThe type of plugin "mdm-external-oauth" is a customized plugin used for all External Clients that are using tokens generated in the e configuration contains introspection_url - Ping API for token e most important part of this configuration is the users_map The Key is the PingFedeate User, the is the HUB user configured in the services." }, { "title": "API Router", "": "", "pageLink": "/display//API+Router", "content": "DescriptionThe api router component is responsible for routing requests to regional services. Application exposes REST to call services from different regions simultaneously. The component provides centralized authorization and authentication service and transaction log feature. router uses http4k library which is a lightweight  HTTP toolkit written in that enables the serving and consuming of HTTP services in a functional and consistent ,spring bootCode link: api routerRequest flowComponentDescriptionAuthentication serviceauthenticates user by x-consumer-username headerRequest enricherdetects request sources, countries and roleAuthorization serviceauthorizes user permissions to role, countries and sourcesService callercalls services, tries 3 times in case of an exception,requests are routed to the appropriate based on the countries parameter, if the requests contains countries from multiple regions, different regional services are called, if the request contains no countries, default user or application country is setService response transformer and filtertransforms and/or filters service responses (e.g. data anonymization) depending on the defined request and/or response filtration parameters (e.g. 
header, http method, path)Response composercomposes responses from services, if multiple services responded, the response is concatenatedRequest enrichmentParameterMethodsourcescountriesrolecreate hcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE HCOupdate hcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_HCObatch create hcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_HCObatch update hcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_HCOcreate hcprequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_HCPupdate hcprequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_HCPbatch create hcprequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_HCPbatch update hcprequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_HCPcreate mcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_MCOupdate mcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_MCObatch create mcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_MCObatch update mcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_MCOcreate entityrequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_ENTITYupdate entityrequest body crosswalk attribute, only one allowedrequest body Country attribute, only one entities by urissources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITIESget entity by urisources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITIESdelete entity by crosswalktype query param, required at least onerequest param Country attribute, 0 or more entity matchessources not allowedrequest param Country attribute, 0 or more relationrequest body crosswalk attributes, required at least onerequest param Country attribute, 0 or more create relationrequest body crosswalk attributes, required at least onerequest param Country attribute, 0 or more allowedCREATE_RELATIONget relation by urisources not allowedrequest param Country attribute, 0 or more allowedGET_RELATIONdelete relation by crosswalktype query param, required at least onerequest param Country attribute, 0 or more lookupssources not allowedrequest param Country attribute, 0 or more allowedLOOKUPSConfigurationConfig parameterDescriptiondefaultCountrydefault application instance countryusersusers configuration listed belowzoneszones configuration listed belowresponseTransformresponse transformation definitions explained belowUser configurationConfig parameterDescriptionnameuser namedescriptionuser descriptionrolesallowed user rolescountriesallowed user countriessourcesallowed user sourcesdefaultCountryuser default countryZone configurationConfig parameterDescriptionurlmdm service urluserNamemdm service user namelogMessagesflag indicates that mdm service messages should be loggedtimeoutMsmdm service request timeoutResponse transformation configurationConfig parameterDescriptionfiltersrequest and response filter 
configuationmapresponse body transformation definitionsFilters configurationConfig parameterDescriptionrequestrequest filter configuationresponseresponse filter methodpathAPI REST call pathheaderslist of headers with name and value parametersResponse filter configurationConfig parameterDescriptionbodyresponse body JSTL transformation definitionExample configuration of response transformationAPI router configurationresponseTransform: - filters:      request:        method: GET        path: /entities.*        headers: - name: X-Consumer-Username            value: mdm_test_user      response:        body:          ntent: | contains(true,[for (.crosswalks) .type == "configuration/sources/HUB_CALLBACK"])    map: - ntent: | .crosswalks - ntent: | ." }, { "title": "Batch Service", "": "", "pageLink": "/display/GMDM/Batch+Service", "content": "DescriptionThe batch-service component is responsible for managing the batch loads to . It exposes that clients use to create a new instance of a batch and upload data. The component is responsible for managing the batch instances and stages, processing the data, gathering acknowledge responses from the Manager component. Batch service stores data in two collections batchInstance - stores all instances of batches and statistics gathered during load and batchEntityProcessStatus  - stores metadata information about all objects that were loaded through all batches. These two collections are required to manage and process the data, check the checksum deduplication process, mark entities as processed after , and soft-delete entities in case of full files load. The component uses the Asynchronous operations using topics as the stages for each part of the load. Technology:  java 8, boot, mongodb, kafka-streams, apache camel, , shedlock-, spring-schedulerCode link: batch-serviceFlowsETL BatchesBatch Controller: creating and updating batch instanceBulk Service: loading bulk dataProcessing JOBSoftDeleting interfacesBatch Controller - manage batch instancesInterface patternDescriptionCreate a new instance for the specific batchREST /batchController/{batchName}/instancesCreates a new instance of the specific batch. Returns the object of Batch with a generated ID that has to be used in the all below requests. Based on the ID client is able to check the status or load data using this instance. It is not possible to start new batch instance once the previous one is not completed. Get batch instance detailsREST /batchController/{batchName}/instances/{batchInstanceId}Returns current details about the specific batch instance. Returns object with all stages, statuses, and statistics. Initialize the stage or complete the stage and save statistics in the cache. REST /batchController/{batchName}/instances/{batchInstanceId}/stages/{stageName}Creates or updates the specific stage in the batch. Using this operation clients are able to do two things.1. initialize and start the stage before loading the data. In that case, the Body request should be empty.2. update and complete the stage after loading the data. In that case, the Body should contain the stage name and ients have permission to update only "" stages. The next stages are managed by the internal batch-service itialize multiple stages or complete the stages and save statistics in the cache. REST /batchController/{batchName}/instances/{batchInstanceId}/stagesThis operation is similar to the single-stage management operation. 
This operation allows manage of multiple stages in one move the specific batch instance from the /batchController/{batchName}/instances/{batchInstanceId}Additional service operation used to delete the batch instances from cache. The permission for this operation is not exposed to external clients, this operation is used only by the HUB support team. Clear cache ( clear objects from batchEntityProcessStatus collection that stores metada of objects and is used in deduplication logic)REST /batchController/{batchName}/_clearCacheheaders:  objectType: ENTITY/RELATION  entityType: e.g. configuration/entityTypes/HCPAdditional service operation used to clear cache for the specific batch. The user can provide additional parameters to the to specify what type of objects should be removed from the cache. Operation is used by the clients after executing smoke tests on PROD and during testing on DEV environments. It allows clearing the cache after load to avoid data deduplication during load. Bulk Service - load data using previously created batch instancesInterface patternDescriptionLoad multiple entities using create operationREST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entitiesThe operation should be used once the user created a new batch instance and initialized the "Loading" stage. At that moment client is able to load entities to the system. The operation accepts the bulk of entities and loads the data to topic. Using POST operation the standard creates operation is used.Load multiple entities using the partial override operationREST APIPATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entitiesThis operation is similar to the above. The PATCH operation force to use partialOverride operation. Load multiple relations using create operationREST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/relationsThe operation is similar to the above. Using POST operation the standard creates operation is used. Using /relations suffix in the clients is able to create relations objects in MDM.Load multiple Tags using PATCH operation - append operationREST APIPATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tagsThe operation should be used once the user created a new batch instance and initialized the "Loading" stage. At that moment client is able to load tags to the system. The operation accepts the bulk of entities and loads the data to topic. Using PATCH operation the standard append operation is used so all tags in the input array are added to specified profile in MDM.Load multiple Tags using delete operation - removal operationREST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tagsThis operation is similar to the above. The DELETE operation removes selected from the system.Load multiple merge requests using operation, this will result in a merge between two /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_mergeThe operation should be used once the user created a new batch instance and initialized the "Loading" stage. At that moment client is able to load merge requests to the system - this will result in merging operation between two entities specified in the request. The operation accepts the bulk of merging requests and loads the data to 's topic. 
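Taken together, the Batch Controller and Bulk Service operations in this table are normally driven in sequence: create an instance, initialize the Loading stage, push bulk data, then complete the stage. A hedged sketch of such a client follows; the host, batch name, stage name, header name and body shapes are illustrative assumptions, and the stage-management HTTP method is assumed to be POST.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hedged sketch of one batch load driven end-to-end over the REST interfaces above.
public class BatchLoadSketch {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String BASE = "https://mdm-hub.example.com/dev"; // placeholder gateway route
    private static final String API_KEY = System.getenv().getOrDefault("HUB_API_KEY", "<api-key>");

    private static String post(String path, String body) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE + path))
                .header("Content-Type", "application/json")
                .header("apikey", API_KEY) // key-auth consumer key, as configured in the API Gateway
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // 1. Create a new batch instance (rejected if the previous instance is still running).
        String created = post("/batchController/ONEKEY/instances", "");
        System.out.println(created);
        String instanceId = "<id-from-create-response>"; // extract from 'created' in real code

        // 2. Initialize the Loading stage (empty body starts the stage).
        post("/batchController/ONEKEY/instances/" + instanceId + "/stages/HCOLoading", "");

        // 3. Push a bulk of entities into the Loading stage (payload heavily shortened).
        post("/bulkService/ONEKEY/instances/" + instanceId + "/stages/HCOLoading/entities",
                "[{\"type\":\"configuration/entityTypes/HCO\",\"attributes\":{}}]");

        // 4. Complete the Loading stage so the dependent Sending stage can pick the data up
        //    (completion body shape is illustrative only).
        post("/batchController/ONEKEY/instances/" + instanceId + "/stages/HCOLoading",
                "{\"stageName\":\"HCOLoading\"}");
    }
}
```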
Load multiple unmerge requests using operation, this will result in a unmerge between two /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_unmergeThe operation should be used once the user created a new batch instance and initialized the "Loading" stage. At that moment client is able to load unmerge requests to the system - this will result in unmerging operation between two entities specified in the request. The operation accepts the bulk of unmerging requests and loads the data to 's topic. Dependent componentsComponentInterfaceFlowDescriptionManagerAsyncMDMManagementServiceRouteEntitiesCreateProcess bulk objects with entities and creates the HCP/HCO/ in . Returns asynchronous ACK responseEntitiesUpdateProcess entities and creates using partialOverride property the in . Returns asynchronous responseRelationsCreateProcess bulk objects with entities and creates the HCP/HCO/ in . Returns asynchronous StoreMongo connectionN/AStore cache data in mongo collectionConfigurationBatch Workflows configuration, main config for all and : - batchName: "ONEKEY" batchDescription: " and entities and relations loading" stages: - stageName: "HCOLoading"The main part of the batches configuration. Each batch has to contain:batchName - the name of the specific batch, used in the API tchDescription - additional description for the specificstages - the list of dependent stages arranged in the execution is configuration presents the workflow for the specific batch, Administrator can setup these stages in the order that is required for the batch and Client requirements. The main assumptions:The "Loading" Stage is the first one e "Sending" Stage is dependent on the "Loading" stageThe "Processing" Stage is dependent on the "Sending" ere is the possibility to add 2 additional optional stages:"EntitiesUnseenDeletion" - used only once the full file is loaded and the soft-delete process is required"HCODeletesProcessing" - process soft-deleted objects to check if all ACKs were received. Available jobs:SendingJobProcessingJobDeletingJobDeletingRelationJobIt is possible to set up different stage names but the assumption is to reuse the existing names to keep e JOB is dependent on each other in two ways:softDependentStages - allows starting next stage immediately after the dependent one is started. Used in the Sending stages to immediately send data to the pendentStages - hard dependent stages, this blocks the starting of the stage until the previous one is ended.  - stageName: "HCOSending"softDependentStages: ["HCOLoading"]processingJobName: "SendingJob"Example configuration of Sending stage dependent from the stage. In this stage, data is taken from the stage and published to the Manager component for further processing- stageName: "HCOProcessing"dependentStages: [": "ProcessingJob"Example configuration of the stage. This stage starts once the Sending JOB is completed. It uses the batchEntityProcessStatus collection to check if all responses were received from . 
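The stage dependency rules above (a softDependentStages entry only requires the dependency to have started, while a dependentStages entry blocks until it has ended) amount to a small predicate; the sketch below illustrates just that rule, with stage names and statuses modelled as plain values.

```java
import java.util.List;
import java.util.Map;

// Illustrative only: the soft/hard stage dependency rule described above.
public class StageDependencyRule {

    enum Status { NOT_STARTED, STARTED, ENDED }

    /** A stage may start when every hard dependency has ENDED and every soft dependency has at least STARTED. */
    static boolean canStart(List<String> dependentStages,
                            List<String> softDependentStages,
                            Map<String, Status> statuses) {
        boolean hardOk = dependentStages.stream()
                .allMatch(s -> statuses.getOrDefault(s, Status.NOT_STARTED) == Status.ENDED);
        boolean softOk = softDependentStages.stream()
                .allMatch(s -> statuses.getOrDefault(s, Status.NOT_STARTED) != Status.NOT_STARTED);
        return hardOk && softOk;
    }

    public static void main(String[] args) {
        // HCOSending is soft-dependent on HCOLoading, so it may start while loading is in progress.
        System.out.println(canStart(List.of(), List.of("HCOLoading"),
                Map.of("HCOLoading", Status.STARTED)));   // true
        // HCOProcessing is hard-dependent on HCOSending, so it must wait until sending has ended.
        System.out.println(canStart(List.of("HCOSending"), List.of(),
                Map.of("HCOSending", Status.STARTED)));   // false
    }
}
```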
- stageName: "RelationLoading"- stageName: "RelationSending" dependentStages: [ "HCOProcessing"] softDependentStages: ["RelationLoading"] processingJobName: "SendingJob"- stageName: "RelationProcessing" dependentStages: [ "RelationSending" ] processingJobName: "ProcessingJob"The full example configuration for the loading, sending, and processing stages.- stageName: "EntitiesUnseenDeletion" dependentStages: ["RelationProcessing"] processingJobName: "DeletingJob"- stageName: "HCODeletesProcessing" dependentStages: ["EntitiesUnseenDeletion"] processingJobName: "ProcessingJob"Configuration for entities. The example configuration that is used in the full files. It is triggered at the end of the and checks the data that should be removed. - stageName: "RelationsUnseenDeletion" dependentStages: ["HCODeletesProcessing"] processingJobName: "DeletingRelationJob"- stageName: "RelationDeletesProcessing" dependentStages: ["RelationsUnseenDeletion"] processingJobName: "ProcessingJob"Configuration for relations. The example configuration that is used in the full files. It is triggered at the end of the and checks the data that should be removed. Loading stage configuration for load through requestConfig ParameterDescriptionbulkConfiguration: destinations: "": : bulkLimit: 25 destination: topic: "{{ env_local_name }}-internal-batch-onekey-hcp"The configuration contains the following:destinations - list of batches and kafka topics on which data should be loaded from REST to ." - batch nameHCPLoading - specific configuration for loading stagebulkLimit - limit of entities/relations in one ic - target topic nameSending stage configuration for to (Reltio)Config valueDescriptionsendingJob: numberOfRetriesOnError: 3Number of retries once an exception occurs during events publishing  pauseBetweenRetriesSecs: to wait between the next retry idleTimeWhenProcessingEndsSec: once to wait for new events and complete the Sending JOB threadPoolSize:2Number of threads used to Producer "": HCPSending: source: topic: "{{ env_local_name }}-internal-batch-onekey-hcp" bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "{{ env_local_name }}-internal-async-all-onekey" reltioReponseTopic: "{{ env_local_name }}-internal-async-all-onekey-ack"The specific configuration for Sending Stage"ONEKEY" - batch nameHCPSending - specific configuration for sending source topic name from which data is consumedbulkSending - by default false (bundling is implemented and managed in Manager client, currently there is no need to bundle the events on client-side)bulkPacketSize - optionally once bulkSending is true, batch-service is able to bundle the requests. 
reltioRequestTopic- processing requests in managerreltioReponseTopic - processing ACK in batch-serviceProcessing stage config for checking processing entities status in (Reltio) - check ACK collectorConfig ParameterDefault useBetweenQueriesSecs:60Interval in which Cache is cached if all were received.Entities/Relations UnseenDeletion Job config for Reltio Request Topic and for entities soft nfig ParameterDefault valueDescriptiondeletingJob: "Symphony": "EntitiesUnseenDeletion":The specific configuration for Deleting Stage"Symphony" - batch nameEntitiesUnseenDelettion- specific configuration for soft-delete stagemaxDeletesLimit: 100The limit is a safety switch in case if we get a corrupted file (empty or partial).It prevents from deleting all profiles Reltio in such cases.queryBatchSize: 10The number of entities/relations downloaded from Cache in one callreltioRequestTopic: "{{ env_local_name }}-internal-async-all-symphony"target topic - processing requests in managerreltioResponseTopic: "{{ env_local_name }}-internal-async-all-symphony-ack"ack topics - processing ACK in batch-serviceUsersConfig ParameterDescription- name: "mdmetl_nprod" description: " Informatica IICS User - BATCH loader" defaultClient: "ReltioAll" roles: - "CREATE_HCP" - "CREATE_HCO" - "CREATE_MCO" - "CREATE_BATCH" - "GET_BATCH" - "MANAGE_STAGE" - "CLEAR_CACHE_BATCH" countries: - sources: - "SHS"... batches: "Symphony": - "HCPLoading"The example user configuration. The configuration is divided into the following sections:roles - available roles to create specific objects and manage batch instancescountries - list of countries that user is allowed to loadsources - list of sources that user is allowed to loadbatches - list of batch names with corresponding stages. In general external users are able to create/edit stages : "mongodb://mdm_batch_service:{{ m_batch_ssword }}@{{ mongo.springURL }}/{{ mongo.dbName }}"Full Mongo DB URLmongo.dbName: "{{ mongo.dbName }}"Mongo database rvers: "{{ rvers }}" Id: "batch_service_ }}" component group slMechanism: "{{ slMechanism }}"SASL curityProtocol: "{{ curityProtocol }}"Security Protocolkafka.sslTruststoreLocation: /opt/batch-service/config/kafka_truststore.jksSSL trustore file locationkafka.sslTruststorePassword: "{{ kafka.sslTruststorePassword }}"SSL trustore file ername: batch_serviceKafka ssword: "{{ hub_broker_tch_service }}" dedicated user :SSL algorightAdvanced configuration (do not edit if not required)Config Parameterspring: kafka: properties: sasl: mechanism: ${slMechanism} security: protocol: ${curityProtocol} gorithm: consumer: properties: : bootstrap-servers: - ${rvers} groupId: ${Id} auto-offset-reset: earliest max-poll-records: 50 fetch-max-wait: 1s fetch-min-size: enable-auto-commit: false ssl: trustStoreLocation: file:${kafka.sslTruststoreLocation} trustStorePassword: ${kafka.sslTruststorePassword} producer: bootstrap-servers: - ${rvers} groupId: ${Id} auto-offset-reset: earliest ssl: trustStoreLocation: file:${kafka.sslTruststoreLocation} trustStorePassword: ${kafka.sslTruststorePassword} streams: bootstrap-servers: - ${rvers} applicationId: ${Id}_ack # for GroupID have to different that consumer clientId: batch_service_ID stateDir: /tmp # num-stream-threads: 1 - default 1 ssl: trustStoreLocation: file:${kafka.sslTruststoreLocation} trustStorePassword: ${kafka.sslTruststorePassword}Additional config (do not edit if not required)Config Parameterserver.port: utdown.enabled=false:clude: prometheus, health, low-bean-definition-overriding: in-run-controller: : component: 
metrics: metric-registry=prometheusMeterRegistry:server: use-forward-headers: true forward-headers-strategy: FRAMEWORKspringdoc: swagger-ui: disable-swagger-default-url: TruerestService: #service port - do not change if it run in docker container port: 8082schedulerTreadCount: 5" }, { "title": "Callback Delay Service", "": "", "pageLink": "/display//Callback+Delay+Service", "content": "DescriptionThe application consists of two streams - precallback and postcallback. When the precallback stream detects the need to change the ranking for a given relationship, it generates an event to the post callback stream. The post callback stream collects events in the time window for a given key and processes the last one. This allows you to avoid updating the rankings multiple times when loading relations using sponsible for following transformations: relation rakingApplies transformations to the input stream producing the output nology: , boot, MongoDB, link: callback-delay-service FlowsOtherHCOtoHCOAffiliations RankingsExposed interfacesPreCallbackDelay -(rankings)Interface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-reltio-full-delay-eventsEvents processed by the precallback serviceoutput  - callbacksKAFKA${env}-internal-reltio-proc-eventsResult events processed by the precallback delay serviceoutput - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processingDependent componentsComponentInterfaceFlowDescriptionManagerAsyncMDMManagementServiceRouteRelationshipAttributesUpdateUpdate relationship attributes in asynchronous modeHub StoreMongo connectionN/AGet mongodb stored relation data when cache is nfigurationMain ConfigurationDefault Id${env}-precallback-delay-serviceThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-.0.0"reads10Number of threads used in the ructuredLogAndContinueExceptionHandlerDeserialization exception 3600000Number of milliseconds to wait time before next poll of ze2097152Events message sizeCallbackWithDelay Stream -(rankings)Config ParameterDefault valueDescriptionpreCallbackDelay.eventInputTopic${env}-internal-reltio-full-delay-eventsinput topicpreCallbackDelay.eventDelayTopic${env}-internal-reltio-full-callback-delay-eventsdelay stream input topic, when the precallback stream detects the need to modify ranks for a given relationship group, it produces an event for this topic. 
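The window-and-take-last behaviour of the post callback stream described here maps naturally onto a windowed reduce with suppression; the fragment below is a hedged Kafka Streams sketch with assumed topic names, environment prefix and a 10-minute window, not the component's actual topology.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

// Hedged sketch: keep only the last event per key inside a time window,
// then forward it once the window closes (topic names and window size are assumptions).
public class PostCallbackWindowSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        builder.<String, String>stream("dev-internal-reltio-full-callback-delay-events")
                .groupByKey()
                .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(10), Duration.ZERO))
                .reduce((previous, latest) -> latest)               // last event per key wins
                .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
                .toStream()
                .map((windowedKey, value) -> KeyValue.pair(windowedKey.key(), value))
                .to("dev-internal-async-all-bulk-callbacks");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dev-postcallback-window-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }
}
```

Suppression ensures that only the final event per key is forwarded when the window closes, which is what avoids updating the rankings multiple times while relations are being loaded.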
Events for a given key are aggregated in a time windowpreCallbackDelay.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for ernalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput topic for oreName${env}-relation-data-storeRelation data cache store namepreCallbackDelay.rankCallback.featureActivationtrueParameter used to enable/disable the Rank llbackSourceHUB_CALLBACKCrosswalk used to update with Rank with-delay-raw-relation-checksum-dedupe-storetopic name that store rawRelation MD5 checksum - used in rank callback tentionPeriod1hstore retention periodpreCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.windowSize10mstore window attribute-changes-checksum-dedupe-storetopic name that store attribute changes MD5 checksum - used in rank callback tentionPeriod1hstore retention tributeChangesChecksumDedupeStore.windowSize10mstore window to be activatedpreCallbackDelay.rankTransform.featureActivationtrueParaemter defines in the feature should be tiveRankSorterOtherHCOtoHCOAffiliationsDelayRankSorterRank sorter filiationN/AThe source order defined for the specific Ranking. Details about the algorithm in:  OtherHCOtoHCOAffiliations RankSorterdeduplicationPost callback stream ddeduplication configdeduplication.pingInterval1mPost callback stream ping invervaldeduplication.duration1hPost callback stream window acePeriod0sPost callback stream deduplication grace teLimit122869944Post callback stream deduplication byte ppressNamecallback-rank-delay-suppressPost callback stream deduplication suppress callback-rank-delay-suppressPost callback stream deduplication oreNamecallback-rank-delay-suppress-deduplication-storePost callback stream deduplication store nameRank sort order config:The component allows you to set different sorting (ranking) configurations depending on the country of the relationship. Relations for selected countries are sorted based on the rankExecutionOrder configuration - in the order of the items on the list. The following sorters are available:ATTRIBUTE - sort relationships based on the values (or lookup codes) ​​of defined attributesACTIVE - sort relationships based on their status (ACTIVE, NON-ACTIVE)SOURCE - sort relations based on the order of sourcesLUD - sort relations based on their update time - ascending or descending orderSample rankSortOrder confiugration:rankSortOrder: affiliation: config: - countries: - AU - rankExecutionOrder: - type: ACTIVE - type: ATTRIBUTE attributeName: lookupCode: true order: REL.HIE: 1 I: 2 REL.FPA: 3 G: 4 REL.BUY: 5 N: 6 R: 7 REL.MBR: 8 M: 9 SS: 10 REL.WPC: 11 REL.WPIC: 12 U: 13 - type: SOURCE order: Reltio: 1 ONEKEY: 2 : 3 SAP: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 GRV: 9 GCP: 10 : 11 PCMS: 12 PTRS: 13 - type: LUD" }, { "title": "Callback Service", "": "", "pageLink": "/display/", "content": "DescriptionResponsible for following transformations: names calculationDangling affiliationsCrosswalk cleanerPotential match queue cleanerPrecallback stream - (rankings)Applies transformations to the input stream producing the output nology: java 8, spring boot, MongoDB, link: callback-service FlowsCallbacksHCONames Callback for IQVIA modelDanglingAffiliations CallbackCrosswalkCleaner CallbackNotMatch CallbackPreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType)Exposed -(rankings)Interface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-reltio-full-eventsEvents enriched by the component. 
Full JSON dataoutput  - callbacksKAFKA${env}-internal-reltio-proc-eventsEvents that are already processed by the precallback services (contains updated Ranks and Reltio callback is also processed)output - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processingHCO NamesInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-hconame-inevents being sent by the event publisher component. Event types being considered:  HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGEDcallback outputKAFKA${env}-internal-hconames-rel-createRelation Create requests sent to Manager component for asynchronous processingDanging NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-orphanClean-inevents being sent by the event publisher component. Event types being considered:  'HCP_REMOVED', 'HCO_REMOVED', 'MCO_REMOVED', 'HCP_INACTIVATED', 'HCO_INACTIVATED', 'MCO_INACTIVATED'callback outputKAFKA${env}-internal-async-all-orphanCleanRelation Update (soft-delete) requests sent to Manager component for asynchronous processingCrosswalk CleanerInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-cleaner-inevents being sent by the event publisher component. Event types being considered: 'HCO_CHANGED', 'HCP_CHANGED', 'MCO_CHANGED', 'RELATIONSHIP_CHANGED'callback outputKAFKA${env}-internal-async-all-cleaner-callbacksDelete Crosswalk or Soft-Delete requests sent to Manager component for asynchronous processingNotMatch callback (clean potential match patternDescriptioncallback inputKAFKA${env}-internal-callback-potentialMatchCleaner-inevents being sent by the event publisher component. Event types being considered:  'RELATIONSHIP_CHANGED', 'RELATIONSHIP_CREATED'callback outputKAFKA${env}-internal-async-all-notmatch-callbacksNotMatch requests sent to Manager component for asynchronous processingDependent componentsComponentInterfaceFlowDescriptionManagerMDMIntegrationServiceGetEntitiesByUrisRetrieve multiple entities by providing the list of entities URISAsyncMDMManagementServiceRouteRelationshipUpdateUpdate relationship object in asynchronous modeEntitiesUpdateUpdate entity object in asynchronous mode - set soft-deleteCrosswalkDeleteRemove Crosswalk from entity/relation in asynchronous modeNotMatchSet Not a Match between two  entitiesHub StoreMongo connectionN/AStore cache data in mongo collectionConfigurationMain Id${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). 
Examples: "hello_world", "hello_world-.0.0"reads10Number of threads used in the ructuredLogAndContinueExceptionHandlerDeserialization exception 3600000Number of milliseconds to wait time before next poll of ze2097152Events message sizegateway.apiKey${gateway.apiKey}API key used in the communication to used to turn on/off logging the payloadgateway.url${gateway.url}Manager erName${erName}Manager user nameHCO valueDescriptioncallback.hconames.eventInputTopic${env}-internal-callback-hconame-ininput topiccallback.hconames.HCPCalculateStageTopic${env}-internal-callback-hconame-hcp4calcinternal AsyncHCONames${env}-internal-hconames-rel-createoutput duplicationWindowDuration10The size of the windows in duplicationWindowGracePeriod10sThe grace period to admit out-of-order events to a -name-dedupe-storededuplication topic ceptedEntityEventTypesHCO_CREATED, HCO_CHANGEDaccepted events types for entity ceptedRelationEventTypesRELATIONSHIP_CREATED, RELATIONSHIP_CHANGEDaccepted events types for relationship ceptedCountriesAI,,AR,AW,BS,,,,,BR,,,,,DO,,,,HN,,,,,,,PY,,,,,,, of countries aceppted in further processing pactedHcpTraverseRelationTypesconfiguration/relationTypes/Activity, configuration/relationTypes/Managed, configuration/relationTypes/Iaccepted relationship types to travers for impacted inHCOTraverseRelationTypesconfiguration/relationTypes/Activity, configuration/relationTypes/Managed, configuration/relationTypes/Iaccepted relationship types to travers for impacted main faultHOSPthe Type code name for inHCOStructurTypeCodese.g.: AD:- "WFR.TSR.JUR"- "N"- "A"Cotains the map where the:KEY is the country Values are the for the corresponding country, duplicationeither duplication or callback.hconames.windowSessionDeduplication must be duplication.durationduration size of time acePeriodgrace period related to time teLimitbyte limit of ppressNamename name of the step in orageNamewhen switching from storageName must be differentname of Materialized Session duplication.pingIntervalinterval in which ping messages are being generatedcallback.hconames.windowSessionDeduplicationeither duplication or callback.hconames.windowSessionDeduplication must be setcallback.hconames.windowSessionDeduplication.durationduration size of session teLimitbyte limit of ppressNamename name of the step in orageNamewhen switching from storageName must be differentname of Materialized Session Storecallback.hconames.windowSessionDeduplication.pingIntervalinterval in which ping messages are being generatedPfe HCO NamesConfig ParameterDefault eHconames.eventInputTopic${env}-internal-callback-hconame-ininput eHconames.HCPCalculateStageTopic${env}-internal-callback-hconame-hcp4calcinternal AsyncHCONames${env}-internal-hconames-rel-createoutput eHconames.timeWindoweither eHconames.timeWindow or ssionWindow must be eHconames.timeWindow.durationduration size of time acePeriodgrace period related to time teLimitbyte limit of ppressNamename name of the step in switching from eHconames.timeWindow to ssionWindow storageName must be differentname of Materialized Session eHconames.timeWindow.pingIntervalinterval in which ping messages are being ssionWindoweither eHconames.timeWindow or ssionWindow must be ssionWindow.durationduration size of session limit of ppressNamename name of the step in orageNamewhen switching from duplication to eHconames.windowSessionDeduplication storageName must be differentname of Materialized Session ssionWindow.pingIntervalinterval in which ping messages are being generatedDanging AffiliationsConfig 
ParameterDefault valueDescriptioncallback.danglingAffiliations.eventInputTopic${env}-internal-callback-orphanClean-ininput ceptedEntityEventTypesHCP_REMOVED, HCO_REMOVED, MCO_REMOVED, HCP_INACTIVATED, HCO_INACTIVATED, MCO_INACTIVATEDaccepted entity eventscallback.danglingAffiliations.eventOutputTopic${env}-internal-async-all-orphanCleanoutput bAsyncOperationrel-updatekafka record headercallback.danglingAffiliations.exceptCrosswalkTypesconfiguration/sources/Reltiocrosswalk types to excludeCrosswalk osswalkCleaner.eventInputTopic${env}-internal-callback-cleaner-ininput ceptedEntityEventTypesMCO_CHANGED, HCP_CHANGED, HCO_CHANGEDaccepted entity ceptedRelationEventTypesRELATIONSHIP_CHANGEDaccepted relation waysconfiguration/sources/HUB_CallbackHub callback crosswalk osswalkCleaner.hardDeleteCrosswalkTypes.exceptconfiguration/sources/ReltioCleanserReltio cleanser crosswalk waysconfiguration/sources/HUB_CallbackHub callback crosswalk osswalkCleaner.hardDeleteCrosswalkRelationTypes.exceptconfiguration/sources/ReltioCleanserReltio cleanser crosswalk waysconfiguration/sources/HUB_USAGETAGCrosswalks list to ftDeleteCrosswalkTypes.whenOneKeyNotExistsconfiguration/sources/IQVIA_PRDP, configuration/sources/IQVIA_RAWDEACrosswalk list to soft-delete when crosswalk does not ftDeleteCrosswalkTypes.exceptconfiguration/sources/HUB_CALLBACK, configuration/sources/ to bAsyncOperationcrosswalk-deletekafka record bAsyncOperationcrosswalk-relation-deletekafka record bAsyncOperationhcp-updatekafka record bAsyncOperationhco-updatekafka record Keyconfiguration/sources/ crosswalk osswalkCleaner.eventOutputTopic${env}-internal-async-all-cleaner-callbacksoutput ferbackLookupCodesHCPIT.RBI, HCOIT.RBIOneKey referback crosswalk lookup KeyLookupCodesHCPIT.OK, HCOIT.OKOneKey crosswalk lookup codesNotMatch callback (clean potential match queue)Config ParameterDefault valueDescriptioncallback.potentialMatchLinkCleaner.eventInputTopic${env}-internal-callback-potentialMatchCleaner-ininput ceptedRelationEventTypes- RELATIONSHIP_CREATED- RELATIONSHIP_CHANGEDaccepted relation ceptedRelationObjectTypes- "configuration/relationTypes/FlextoHCOSAffiliations"- "configuration/relationTypes/FlextoDDDAffiliations"- "configuration/relationTypes/SAPtoHCOSAffiliations"accepted relationship tchTypesInCache- "AUTO_LINK"- "POTENTIAL_LINK"PotentialMatch cache object bAsyncOperationentities-not-match-setkafka record headercallback.potentialMatchLinkCleaner.eventOutputTopic${env}-internal-async-all-notmatch-callbacksoutput topicPreCallback Stream -(rankings)Config ParameterDefault valueDescriptionpreCallback.eventInputTopic${env}-internal-reltio-full-eventsinput topicpreCallback.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for ernalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput topic for seURLN/AManager URL defined per mIntegrationService.apiKeyN/AManager secret API KEY defined per mIntegrationService.logMessagesfalseParameter used to turn on/off logging the ipEventTypesENTITY_MATCHES_CHANGED, ENTITY_AUTO_LINK_FOUND, ENTITY_POTENTIAL_LINK_FOUND, DCR_CREATED, DCR_CHANGED, DCR_REMOVEDEvents skipped in the intainDuration10mCache duration time (for callbacks MD5 checksum)erval5mCache deletion intervalpreCallback.rankCallback.featureActivationtrueParameter used to enable/disable the llbackSourceHUB_CallbackCrosswalk used to update with Rank untriesAG, , AN, AR, AW, , , , , BR, BS, , , , , , , DO, , , , , , , , , HN, ID, IN, IT, , , , , , , , MY, , , , , , PF, PH, PK, PM, , PY, RE, RU, , , , , , , TR, , , , , , , 
, , , EMPTYList of countries for wich process activates the (different between and GBLUS)preCallback.rankCallback.rawEntityChecksumDedupeStoreNameraw-entity-checksum-dedupe-storetopic name that store rawEntity MD5 checksum - used in rank callback tributeChangesChecksumDedupeStoreNameattribute-changes-checksum-dedupe-storetopic name that store attribute changes MD5 checksum - used in rank callback rwardMainEventsDuringPartialUpdatefalseThe parameter used to define if we want to forward partial events. By default it is false so only events that are fully calculated are sent furtherpreCallback.rankCallback.ignoreAndRemoveDuplicatesfalseThe parameter used in the may contain duplicities in the group. It is set to False because now is removing duplicated tiveCleanerCallbacksSpecialityCleanerCallback, , EmailCleanerCallback, PhoneCleanerCallbackList of cleaner callbacks to be tiveCallbacksSpecialityCallback, , , , , PhoneCallbackList of to be activatedpreCallback.rankTransform.featureActivationtrueParaemter defines in the feature should be tiveRankSorterSpecialtyRankSorter, AffiliationRankSorter, AddressRankSorter, IdentifierRankSorter, EmailRankSorter, filiationN/AThe source order defined for the specific Ranking. Details about the algorithm in:  Affiliation oneN/AThe source order defined for the specific Ranking. Details about the algorithm in: /AThe source order defined for the specific Ranking. Details about the algorithm in: /AThe source order defined for the specific Ranking. Details about the algorithm in: Specialty entifierN/AThe source order defined for the specific Ranking. Details about the algorithm in: Identifier ltioN/AThe source order defined for the specific Ranking. Details about the algorithm in: Address ltioN/AThe source order defined for the specific Ranking. Details about the algorithm in: Addresses RankSorter" }, { "title": "China Selective Router", "": "", "pageLink": "/display//China+Selective+Router", "content": "DescriptionThe -selective-router component is responsible for enriching events and transformig from COMPANY model to model. Component is using operation using kafka topics. 
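As described next, the component consumes COMPANY-model events from an input topic, enriches them, and produces transformed events to an output topic. A minimal sketch of that loop using plain Kafka clients follows; the topic names are taken from the example configuration further down with an assumed environment prefix, and transform() merely stands in for the configured enricher, hcoConnector and transformer classes (the real component is a Kafka Streams topology).

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

// Simplified consume-transform-produce loop; transform() is a placeholder for the
// configured enrichment, mainHco connection and model transformation steps.
public class EventTransformerSketch {

    static String transform(String companyModelEvent) {
        // Placeholder: enrich with refEntity data, connect mainHco, map to the target model.
        return companyModelEvent;
    }

    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "selective-router-sketch");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("dev-internal-full-hcp-merge-cn"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    producer.send(new ProducerRecord<>("dev-out-full-hcp-merge-cn",
                            record.key(), transform(record.value())));
                }
            }
        }
    }
}
```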
To transform COMPANY object it needs to be consumed from input topic and based on configuration it is enriched, entity is connected with mainHco and as a last step event model is transformed to model, after all operations event is sending to output nology:  java 11, spring boot, kafka-streams, kafkaCode link: -selective-routerFlowsTransformation flowExposed interfacesInterface NameTypeEndpoint patternDescriptionEvent transformer topologyKAFKAtopic: {env}-{topic_postfix}Transform event from COMPANY model to model, and send to ouptut topicDependent componentsComponentInterfaceFlowDescriptionData modelHCPModelConverterN/AConverter to transform Entity to COMPANY model or to modelConfigurationConfig ParameterDescriptioneventTransformer: - country: "CN" eventInputTopic: "${env}-internal-full-hcp-merge-cn" eventOutputTopic: "${env}-out-full-hcp-merge-cn" enricher: inaRefEntityProcessor hcoConnector: processor: inaHcoConnectorProcessor transformer: m.event_PANYToIqviaEventTransformer refEntity: - type: attribute: ContactAffiliations relationLookupAttribute: lationshipDescription relationLookupCode: CON - type: MainHCO attribute: ContactAffiliations relationLookupAttribute: lationshipDescription relationLookupCode: IThe main part of -selective-router configuration, contains list of event transformaton configurationcountry - specify country, value of this parameter have to be in event country section otherwise event will be skippedeventInputTopic - input topiceventOutputTopic - output topicenricher - specify class to enrich event, based on refEntity configuration this class is resposible for collecting related and mainHco cessor - specify class to connect with main , in this class is made a call to reltio for all connections by . Based on received data is created additional attribute 'OtherHcoToHco' contains mainHco entity collected by enricher.hcoConnector.enabled - enable or disable hcoConnectorhcoConnector.hcoAttrName - specify additional attibute name to place connected mainHcohcoConnector.outRelations - specify the list of out relation to filter while calling reltio for connectionsrefEntity - contains list of attributes containing information about or MainHCO entity ( - type of entity: or tribute - base attribute to search for lationLookupAttribute - attribute to search for lookupCode to decide what entity we are looking lationLookupCode - code specify entity type" }, { "title": "Component Template", "": "", "pageLink": "/display//Component+Template", "content": "DescriptionTechnology:Code link:FlowsExposed interfacesInterface NameTypeEndpoint patternDescriptionREST API|KAFKADependent componentsComponentInterfaceFlowDescriptionfor whatConfigurationConfig valueDescription" }, { "title": "DCR Service", "": "", "pageLink": "/display/GMDM/DCR+Service", "content": "" }, { "title": ", "": "", "pageLink": "/display//DCR+Service+2", "content": "DescriptionResponsible for the processing. Client (PforceRx) sends the DCRs through REST , DCRs are routed to the target system (OneKey/Veeva Opendata/Reltio). Client (Pforcerx) retrieves the status of the using status . 
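A hedged sketch of such a status call through the gateway is shown below; the host, path prefix, query parameter and token handling are all illustrative assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hedged sketch: query DCR status through the gateway with a previously acquired
// OAuth2 access token. Host, path prefix and query parameter name are assumptions.
public class DcrStatusSketch {
    public static void main(String[] args) throws Exception {
        String accessToken = System.getenv().getOrDefault("HUB_ACCESS_TOKEN", "<token>");
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://mdm-hub.example.com/dev-ext/dcr/status?extDCRRequestId=CA-VR-00255752"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```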
Service also contains -streams functionality to process the updates asynchronously and update the DCRRegistry are accessible with REST lies transformations to the input stream producing the output nology: java 8, spring boot, MongoDB, link: dcr-service-2 FlowsPforceRx flowsCreate state changeGet statusOneKey: create method (submitVR) - directOneKey: generate DCR Change Events (traceVR)OneKey: process Change EventsVeeva: create method (storeVR)Veeva: generate DCR Change Events (traceVR)Veeva: process DCR Change EventsReltio: create method - directReltio: process DCR Change EventsExposed interfacesREST patternDescriptionCreate DCRsREST /dcrCreate DCRsGET DCRs statusREST APIGET /dcr/statusGET DCRs statusOneKey StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-onekey-dcr-change-events-inEvents generated by the component after OneKey DataSteward Action. Flow responsible for events generation is : generate DCR Change Events (traceVR)output  - callbacksMongomongoDCR Registry updated Veeva OpenData StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-veeva-dcr-change-events-inEvents generated by the component after Veeva DataSteward Action. Flow responsible for events generation is : generate DCR Change Events (traceVR)output  - callbacksMongomongoDCR Registry updated Reltio StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-reltio-dcr-change-events-inEvents generated by after . Published by the event-publisher component selector: "(conciliationTarget==null) .headers.eventType in ['full'] && ubtype in ['DCR_CREATED', 'DCR_CHANGED', 'DCR_REMOVED']" output  - callbacksMongomongoDCR Registry updated  routingCreate DCRroute the requests to componentManagerMDMIntegrationServiceGetEntitiesByUrisRetrieve multiple entities by providing the list of entities URISGetEntityByIdget entity by the idGetEntityByCrosswalkget entity by the change requests in ReltioOK in in the moment only Veeva realized this interface, however in the future will be exposed via this interface as well  Hub StoreMongo connectionN/AStore cache data in mongo collectionTransaction LoggerTransactionServiceTransactionsSaves each status change in transactionsConfigurationConfig Id${env}_dcr2The application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). 
Examples: "hello_world", "hello_world-.0.0"reads10Number of threads used in the ructuredLogAndContinueExceptionHandlerDeserialization exception stomTrustStoreSslEngineFactorySSL mon.ping.PingPartitionerPing partitioner required in application with PING 3600000Number of milliseconds to wait time before next poll of cords10Number of records downloaded in one poll from ze2097152Events message sizedataStewardResponseConfig: reltioResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-reltio-dcr-change-events-in    sendTo3PartyDecisionTable:      - target:         decisionProperties:          sourceName: "VEEVA_CROSSWALK"      - target:         decisionProperties:          countries: ["ID","PK","MY","TH"]      - target: OneKey    sendTo3PartyTopics:      :        - ${env}-internal-sendtothirdparty-ds-requests-in      :        - ${env}-internal-onekeyvr-ds-requests-in VeevaResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-veeva-dcr-change-events-in  onekeyResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-onekey-dcr-change-events-in maxRetryCounter: 20 deduplication: duration: 2m gracePeriod: 0s byteLimit: suppressName: dcr2-onekey-response-stream-suppress name: dcr2-onekey-response-stream-with-delay storeName: dcr2-onekey-response-window-deduplication-store pingInterval: 1m- ${env}-internal-reltio-dcr-change-events-in- ${env}-internal-onekey-dcr-change-events-in- ${env}-internal-veeva-dcr-change-events-in- ${env}-internal-sendtothirdparty-ds-requests-in- ${env}-internal-onekeyvr-ds-requests-inConfiguration related to the event processing from , or is related to and allows to configure the aggregation window for events (processing ) - 24hMaxRetryCounter should be set to a high number - 1000000targetDecisionTable: - target: Reltio decisionProperties: userName: "mdm_dcr2_test_reltio_user" - target: OneKey decisionProperties: userName: "mdm_dcr2_test_onekey_user" - target:     decisionProperties:      sourceName: "VEEVA_CROSSWALK" - target:     decisionProperties:      countries: ["ID","PK","MY","TH"] - target: Reltio decisionProperties: country: GBLIST OF the following combination of attributesEach attribute in the configuration is optional. The decision table is making the validation based on the input request and the main object- the main object is , if the is empty then the decision table is checking . The result of the decision table is the , the routing to the Reltio MDM system, or service. userName the user name that executes the requestsourceNamethe source name of the Main objectcountrythe county defined in the requestoperationTypethe operation type for the object{ insert, update, delete }affectedAttributesthe list of attributes that the user is changingaffectedObjects{ , , HCP_HCO}RESULT →  TargetType {Reltio, , Veeva}PreCloseConfig: acceptCountries: - "IN" - "SA"   rejectCountries: - "PL" - "GB"DCRs with countries which belong to acceptCountries attribute are automatically accepted (PRE_APPROVED) or rejected (PRE_REJECTED) when belong to rejectCountires. 
acceptCountriesList of values, example: [ IN, , , ...]rejectCountriesList of values, example: [ IN, , PL ]transactionLogger: simpleDCRLog: enable: true kafkaEfk: enable: trueTransaction ServiceThe configuration that enables/disables the transaction loggeroneKeyClient: url: http://devmdmsrv_onekey-dcr-service_1:8092 userName: configuration that allows connecting to serviceVeevaClient: url: username: user apiKey: ""Veeva Integration Service The configuration that allows connecting to dcr servicemanager: url: :8443/${env}/gw userName:dcr_service_2_user logMessages: true timeoutMs: 120000MDM configuration that allows connecting to Indexes" }, { "title": " service connect guide", "": "", "pageLink": "/display/GMDM/DCR+service+connect+guide", "content": "IntroductionThis guide provides comprehensive instructions on integrating new client applications with the (Data Change Request) service in the MDM HUB system. It is intended for technical engineers, client architects, solution designers, and MDM/Mulesoft teams.Table of ContentsOverviewThe service processes Data Change Requests (DCRs) sent by clients through a REST API. These DCRs are routed to target systems such as , , or Reltio. The service also includes -streams functionality to process updates asynchronously and update the DCRRegistry cess to should be confirmed in advance with the MDM HUB → StartedPrerequisitesAPI credentials (username and configurations (, , updated whitelists to allow you access InstructionsCreate MDM HUB User: Follow the SOP to add a direct user to the HUB.   the steps outlined in → Add Direct API User to : Use to acquire an access tokenAPI DCR:  /dcrGet Status: GET /dcr/statusGet Multiple DCR Statuses: GET /dcr/_statusGet Entity Details: GET /{objectUri}MethodsGET: Retrieve informationPOST: Create new DCRsAuthentication and step is to acquire access token. If you are connecting first time to you should create MDM HUB user Once you have the  username and password, you can acquire the access token. \\ // Use devfederate for , stgfederate for STAGE, prodfederate for --header 'Content-Type: application/x-www-form-urlencoded' \\\n--header 'Authorization: Basic Base64-encoded(username:password)'\n\nResponse:\n{\n "access_token": "12341SPRtjWQzaq6kgK7hXkMVcTzX", \n "token_type": "Bearer",\n "expires_in": 1799 // The token expires after the time - "expires_in" field. Once the token expires, it must be refreshed.\n}\nBelow you can see, how should be configured to obtain access_tokenUsing Access TokenInclude the access token in the Authorization header for all Ensure DNS resolution for : Configure VPN access if requiredWhitelists: Add necessary IP addresses to the whitelistCreating DCRsThis method is used to create new objects in the MDM HUB system. 
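Every call to the service needs a valid bearer token obtained as described in the authentication section above. As a rough sketch, the token request from the curl example could look like this in Java (it uses java.net.http, available from Java 11; the federate token URL and the exact form parameters, such as grant_type, are assumptions that depend on your environment):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenRequestSketch {

    // tokenUrl is a placeholder for your environment's federate endpoint
    // (devfederate / stgfederate / prodfederate); the form body is an assumption.
    static String fetchAccessTokenJson(String tokenUrl, String username, String password)
            throws Exception {
        String basic = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tokenUrl))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Authorization", "Basic " + basic)
                .POST(HttpRequest.BodyPublishers.ofString("grant_type=client_credentials"))
                .build();

        // The JSON body carries "access_token", "token_type" and "expires_in";
        // refresh the token before "expires_in" seconds have elapsed.
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }
}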
Below is an example request to create a new HCP object in the re examples and the entire data model can be found at: service swaggerExample new HCP\ncurl --location '{api_url}/dcr' \\ // e.g., 'Content-Type: application/json' \\\n--header 'Authorization: Bearer ${access_token_value}' \\ // e.g., --data-raw '[\n {\n "country": "${dcr_country}", // e.g., CA\n        "createdBy": "${created_by}", // e.g., Test user\n        "extDCRComment": "${external_system_comment}", // e.g., This is test to create new HCP\n        "extDCRRequestId": "${external_system_request_id}", // e.g., CA-VR-\n        "dcrType": "${dcr_type}", // e.g., PforceRxDCR\n        "entities": [\n {\n "@type": "hcp",\n "action": "insert",\n "updateCrosswalk": {\n "type": "${source_system_name}", // e.g.,  \n                    "value": "${source_system_value}" // e.g.,  \n                },\n "values": {\n "birthDate": "",\n "birthYear": "2017",\n "firstName": "Maurice",\n "lastName": "Brekke",\n "title": "HCPTIT.1118",\n "middleName": "Karen",\n "subTypeCode": "HCPST.A",\n "addresses": [\n {\n "action": "insert",\n "values": {\n "sourceAddressId": {\n "source": "${source_system_name}", // e.g., PFORCERX\n                                    "id": "${address_source_system_value}"   // e.g., -CA-VR-                                 },\n "addressLine1": " Terrace",\n "addressLine2": "Waynetown",\n "addressLine3": "Designer Books gold parsing",\n "addressType": ""buildingName": "Handmade Cotton Shirt",\n "city": "Singapore",\n "country": "SG",\n "zip": "ZIP 5"\n }\n }\n ] \n }\n }\n ]\n }\n]'\nRequest placeholders:parameter namedescriptionexampleapi_urlAPI router URL token value0001WvxKA16VWwlufC2dslSILdbEdcr_countryMain entity countryCAcreated_byCreated by userTest userexternal_system_commentComment that will be populate to next processing stepsThis is test DCRexternal_system_request_idID for tracking processingCA-VR-00255752dcr_typeProvided by team when user with permission will be createdPforceRxDCRsource_system_nameSource system name. 
User used to invoke request has to have access to this sourcePFORCERXsource_system_valueID of this object in source systemHCO-CA-VR-00255752address_source_system_valueID of address in source systemADR-CA-VR-00255752Handling success response\n[\n {\n "requestStatus": "${request_status}", // e.g., REQUEST_ACCEPTED\n        "extDCRRequestId": "${external_system_request_id},   // e.g., CA-VR-\n        "dcrRequestId": "${mdm_hub_dcr_request_id}",   // e.g., 4a480255a4e942e18c6816fa0c89a0d2\n        "targetSystem": "${target_system_name}",   // e.g., Reltio\n        "country": "${dcr_request_country}",   // e.g., CA\n        "": {\n "status": "CREATED",\n "updateDate": " "dcrid": "${reltio_dcr_status_entity_uri}"   // e.g., entities/0HjtwJO\n        }\n }\n]\nResponse placeholders:parameterdescriptionexampleexternal_system_request_idDCR request id in source systemCA-VR-00255752mdm_hub_dcr_request_idDCR request id in system4a480255a4e942e18c6816fa0c89a0d2target_system_nameDCR target system name, one of values: , , VeevaReltiodcr_request_countryDCR request countryCArequest_statusDCR request status, one of values: REQUEST_ACCEPTED, REQUEST_FAILED, REQUEST_REJECTEDREQUEST_ACCEPTEDreltio_dcr_status_entity_uriURI of status entity in systementities/0HjtwJORejected Response\n[\n {\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "DuplicateRequestException -> Request [97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c] has already been processed",\n "errorCode": "DUPLICATE_REQUEST",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c"\n }\n]\nFailed Response\n[\n {\n "requestStatus": "REQUEST_FAILED",\n "errorMessage": "Target lookup code not found for attribute: HCPTitle, country: , source value: HCPTIT..",\n "errorCode": "VALIDATION_ERROR",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e712121121c"\n }\n]\nIn case of incorrect user configuration in the system, the will return errors as follows. In these cases, please contact the MDM HUB tting statusProcessing of will take some time. status can be track via get status calls. processing ends when it reaches the final status: ACCEPTED or REJECTED. When the gets the ACCEPTED status, the following fields will appear in its status: "objectUri" and "COMPANYCustomerId". These can be used to find created/modified entities in the system. Full documentation can be found at → Get status.Example RequestBelow is an example query for the selected external_system_request_id\ncurl --location '{api_url}/dcr/_status/${external_system_request_id}' \\ // e.g., --header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nHandling ResponsesSuccess Response\n{\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "8600ca9a--45d0-97f6-152f01d70158",\n "dcrRequestId": "a2848f2a573344248f78bff8dc54871a",\n "targetSystem": "Reltio",\n "country": "AU",\n "": {\n "status": "ACCEPTED",\n "objectUri": "entities/0Hhskyx", // \n "COMPANYCustomerId": "03-", // usually . only when creating or updating without references to in request\n        "updateDate": " "changeRequestUri": "changeRequests/0N38Jq0",\n "dcrid": "entities/0EUulla"\n }\n}\nRejected Response\n{\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "Received DCR_CHANGED event, updatedBy: , on . 
Updating status to: REJECTED",\n "extDCRRequestId": "-937e-434d-948c-6a282a736c4f",\n "dcrRequestId": "0b4125648b6c4d9cb785856841f7d65d",\n "targetSystem": "Veeva",\n "country": "HK",\n "": {\n "status": "REJECTED",\n "updateDate": " "comment": "This was REJECTED by the VEEVA Data Steward with the following comment: [A-20022] Veeva Data Steward: Your request has been rejected..",\n "changeRequestUri": "changeRequests/0IojkYP",\n "dcrid": "entities/0qmBUXU"\n }\n}\nGetting multiple statusesMultiple statuses can be selected at once using the status filtering status\ncurl --location '{api_url}/dcr/_status?updateFrom=2021-10-17T20%3A31%3A31.424Z&updateTo=2023-10-17T20%3A31%3A31.424Z&limit=5&offset=3' \\\n--header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nExample ResponseSuccess Response\n[\n {\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "8d3eb4f7-7a08-4813-9a90-73caa7537eba",\n "dcrRequestId": "360d152d58d7457ab6a0610b718b6b8b",\n "targetSystem": "OneKey",\n "country": "AU",\n "": {\n "status": "ACCEPTED",\n "objectUri": "entities/05jHpR1",\n "COMPANYCustomerId": "03-",\n "updateDate": " "comment": " response comment: accepted response - HCP EID assigned\\nONEKEY ID: WUSM03999911",\n "changeRequestUri": "8b32b8544ede4c72b7adfa861b1dc53f",\n "dcrid": "entities/04TxaQB"\n }\n },\n {\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "b66be6bd-655a-47f8-b78b-684e80166096",\n "dcrRequestId": "becafcb2cd004c1d89ecfc670de1de70",\n "targetSystem": "Reltio",\n "country": "AU",\n "": {\n "status": "ACCEPTED",\n "objectUri": "entities/06SVUCq",\n "COMPANYCustomerId": "03-",\n "updateDate": " "comment": " [-mdmhub]] -",\n "changeRequestUri": "changeRequests/06sXnXH",\n "dcrid": "entities/08LAHeQ"\n }\n }\n]\nGet entityThis method is used to prepare a request for modifying entities and to validate the created/modified entities in the process. Use the "objectUri" field available after accepting the to query MDM system.Example entity request\ncurl --location '{api_url}/${objectUri}' \\ // e.g., entities/05jHpR1\n --header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nExample ResponseSuccess ResponseGet entity response\n{\n "type": "configuration/entityTypes/HCP",\n "uri": "entities/06SVUCq",\n "createdBy": "mdmhub",\n "createdTime": ,\n "updatedBy": "Re-cleansing of null in tenant 2NBAwv1z2AvlkgS background task. 
(started by )",\n "updatedTime": ,\n "attributes": {\n "COMPANYGlobalCustomerID": [\n {\n "uri": "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n "type": "configuration/entityTypes//attributes/COMPANYGlobalCustomerID",\n "value": "03-",\n "ov": true\n }\n ],\n "TypeCode": [\n {\n "uri": "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n "type": "configuration/entityTypes//attributes/TypeCode",\n "value": "RS",\n "ov": true\n }\n ],\n "Addresses": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n "value": {\n "AddressType": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/AddressType",\n "value": "TYS.P",\n "ov": true\n }\n ],\n "COMPANYAddressID": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/COMPANYAddressID",\n "value": "",\n "ov": true\n }\n ],\n "AddressLine1": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/AddressLine1",\n "value": "addressLine1",\n "ov": true\n }\n ],\n "AddressLine2": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/AddressLine2",\n "value": "addressLine2",\n "ov": true\n }\n ],\n "AddressLine3": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/AddressLine3",\n "value": ""ov": true\n }\n ],\n "City": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/City",\n "value": "city",\n "ov": true\n }\n ],\n "Country": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/Country",\n "value": "GB",\n "ov": true\n }\n ],\n "Zip5": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/Zip5",\n "value": "zip5",\n "ov": true\n }\n ],\n "Source": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n "value": {\n "SourceName": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceName",\n "value": "PforceRx",\n "ov": true\n }\n ],\n "SourceAddressID": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceAddressID",\n "value": "string",\n "ov": true\n }\n ]\n },\n "ov": true,\n "label": "PforceRx"\n }\n ],\n "": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv//dZrp4Jz",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/VerificationStatus",\n "value": "Unverified",\n "ov": true\n }\n ],\n "VerificationStatusDetails": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/VerificationStatusDetails",\n "value": "Address Verification Status is unverified - unable to verify. 
the output fields will contain the input data.\\nPost-Processed Verification Match Level is 0 - none.\\nPre-Processed Verification Match Level is 0 - none.\\nParsing Status isidentified and parsed - All input data has been able to be identified and placed into is 0 - none.\\nContext Identification Match Level is 5 - delivery point (postbox or is PostalCodePrimary identified by context - postalcodeprimary identified by context.\\nThe accuracy matchscore, which gives the similarity between the input data and closest reference data match is 100%.",\n "ov": true\n }\n ],\n "AVC": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/AVC",\n "value": "---100",\n "ov": true\n }\n ],\n "AddressRank": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n "type": "configuration/entityTypes//attributes/Addresses/attributes/AddressRank",\n "value": "1",\n "ov": true\n }\n ]\n },\n "ov": true,\n "label": "TYS.P - addressLine1, addressLine2, city, , GB"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/ReltioCleanser",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/dZrp03j",\n "reltioLoadDate": ,\n "createDate": ,\n "updateDate": ,\n "attributes": [\n "/attributes/Addresses/dZqkSDv",\n "/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n "/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n "/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W"\n ]\n },{\n "type": "configuration/sources/Reltio",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/dZqkNxf",\n "reltioLoadDate": ,\n "createDate": ,\n "updateDate": ,\n "attributes": [\n "/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n "/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n "/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n "/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n "/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n "/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n "/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n "/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n "/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n "/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n "/attributes/Addresses/dZqkSDv",\n "/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB"\n ],\n "singleAttributeUpdateDates": {\n "/attributes/Addresses/dZqkSDv/Country/dZqkw3j": " "/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV": " "/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz": " "/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h": " "/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l": " "/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR": " "/attributes/Addresses/dZqkSDv/Source/dZql4aF": " "/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx": " "/attributes/Addresses/dZqkSDv/City/dZqkrnT": " "/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD": " "/attributes/Addresses/dZqkSDv": " "/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB": " }\n },{\n "type": "configuration/sources/HUB_CALLBACK",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/LoT0kPG",\n "reltioLoadDate": ,\n "createDate": ,\n "updateDate": ,\n "attributes": [\n "/attributes/TypeCode/LoT0XcU",\n "/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n "/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n "/attributes/Addresses/dZqkSDv"\n ],\n "singleAttributeUpdateDates": {\n "/attributes/TypeCode/LoT0XcU": " 
"/attributes/COMPANYGlobalCustomerID/LoT0xC2": " "/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj": " "/attributes/Addresses/dZqkSDv": " }\n }\n ]\n}\nRejected not found response\n{\n "code": ""message": "Entity not found"\n}\nTroubleshooting GuideAll documentation with a detailed description of flows can be found at → PforceRx DCR flowsCommon Issues and SolutionsDuplicate Request:Error Message: "DuplicateRequestException -> Request [ID] has already been processed."Solution: Ensure that the extDCRRequestId is unique for each request.  This ID is used to track processing and prevent duplicate submissions. Generate a new unique ID for every new lidation Error:Error Message: "Target lookup code not found for attribute: [Attribute], country: [Country], source value: [Value]."Solution: This error indicates that the provided attribute values or lookup codes are incorrect or not recognized by the rify Attribute Values: Double-check the attribute values in your request against the expected values and formats documented in the specification (Swagger documentation).Correct Lookup Codes: Ensure that you are using the correct lookup codes for attributes that require them (e.g., country codes, title codes). Example: If you receive "Target lookup code not found for attribute: HCPTitle, country: , source value: HCPTIT..", verify that 'HCPTIT.' is a valid Title code for ('SG').Network Errors:Issue: Unable to connect to endpoint. Common errors include "Connection refused," "Timeout," " resolution failure."Solutions:Verify Network Connectivity: Use the ping command (e.g., ping ) to check if the endpoint is reachable. Use traceroute to diagnose network path eck VPN Connection: If VPN access is required, ensure that your VPN connection is active and correctly rewall Settings: Confirm that your firewall rules are not blocking outbound traffic on the necessary ports (typically 443 for ) to the endpoint. Contact your network administrator to verify firewall settings if needed.DNS Resolution: Ensure that your server is correctly resolving endpoint hostname to an thentication Errors:Issue:  requests are rejected due to authentication failures. Common errors include "Invalid credentials," " expired," "Unauthorized."Solutions:Verify API Credentials: Double-check that you are using the correct username and password for cess Token Validity: If using Bearer Token authentication, ensure that your access token is valid and not expired. Access tokens typically have a limited lifespan (e.g., 30 minutes).Token Refresh: Implement token refresh logic in your client application to automatically obtain a new access token when the current one thorization Header: Verify that you are including the access token correctly in the Authorization header of your requests, using the "Bearer " scheme (e.g., Authorization: Bearer ).Service Unavailable Errors:Issue: Intermittent connectivity issues or request failures with "503 Service Unavailable" or "500 Internal Server Error" :Check Service Status: Check if there is a known outage or maintenance activity for the MDM HUB service. A service status page may be available (check with Requests: Implement retry logic in your client application to handle transient service interruptions. 
Use exponential backoff to avoid overwhelming the service during Support: If the issue persists, contact the MDM HUB support team to report the service unavailability and get further assistance.Missing Configuration for Message: "RuntimeException -> User [User] dcrServiceConfig is missing."Missing dcr service cofiguration\n[\n {\n "requestStatus": "REQUEST_FAILED",\n "errorMessage": "RuntimeException -> User test_user dcrServiceConfig is missing",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e7b11c"\n }\n]\nSolution: Contact the MDM HUB team to ensure the user configuration is correctly set rmission Denied to create :Error Message: "User is not permitted to perform: [Action]"Missing role\n{\n "code": "403",\n "message": "User is not permitted to perform: CREATE_DCR"\n}\nSolution: Ensure the user has the necessary permissions to perform the rify User Permissions: Contact the MDM HUB team or your administrator to verify that your user account has the necessary roles and permissions to perform the requested action (e.g., CREATE_DCR, GET_DCR_STATUS) and access the specified type (e.g., PforceRxDCR).DCR Type Access: Ensure that your user configuration includes access to the specific type you are trying to lidation Error:Error Message: "ValidationException -> User [User] doesn't have access to PforceRXDCR dcrType."Invalid configuration\n[\n {\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "ValidationException -> User test_user doesn't have access to PforceRXDCR dcrType",\n "errorCode": "VALIDATION_ERROR",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e71212112121c"\n }\n]\nDescription: This error occurs when the user does not have the necessary permissions to access a specific type (PforceRXDCR) in the MDM HUB system.Possible Causes:The user has not been granted the required permissions for the specified typeThe user configuration is incomplete or incorrectSolution:Verify User Permissions: Ensure that the user has been granted the necessary permissions to access the PforceRXDCR  type. This can be done by checking the user roles and permissions in the MDM HUB system" }, { "title": "Entity ", "": "", "pageLink": "/display//Entity+Enricher", "content": "DescriptionAccepts simple events on the input. Performs the following calls to Reltio:getEntitiesByUrisgetRelationgetChangeRequestfindEntityCountryProduces the events enriched with the targetEntity / targetRelation field retrieved from nology: java 8, boot, mongodb, -streamsCode link: entity-enricher Exposed interfacesInterface NameTypeEndpoint patternDescriptionentity enricher inputKAFKA${env}-internal-reltio-eventsevents being sent by the event publisher component. Event types being considered: HCP_*, _*, ENTITY_MATCHES_CHANGEDentity enricher outputKAFKA${env}-internal-reltio-full-eventsDependent componentsComponentInterfaceFlowDescriptionManagerMDMIntegrationServicegetEntitiesByUrisgetRelationgetChangeRequestfindEntityCountryConfigurationConfig ParameterDefault valueDescriptionbundle.enabletrueenable / disable putTopics${env}-internal-reltio-eventsinput readPoolSize10number of thread pool sizebundle.pollDuration10spoll intervalbundle.outputTopic${env}-internal-reltio-full-eventsoutput Id${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). 
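To picture where these settings are used, a rough sketch of the enrichment step is given below; the EnrichmentClient type is a hypothetical stand-in for the Manager gateway calls (getEntitiesByUris, getRelation, getChangeRequest, findEntityCountry), and the topic names are taken from the interface table above.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class EntityEnricherTopologySketch {

    // Hypothetical wrapper around the Manager gateway calls used for enrichment.
    interface EnrichmentClient {
        String enrich(String simpleEventJson); // returns the event with targetEntity / targetRelation added
    }

    static Topology buildTopology(String env, EnrichmentClient client) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> simpleEvents =
                builder.stream(env + "-internal-reltio-events");   // input topic
        simpleEvents
                .mapValues(client::enrich)                          // enrich with full data from Reltio
                .to(env + "-internal-reltio-full-events");          // output topic
        return builder.build();
    }
}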
Examples: "hello_world", "hello_world-.0.0"teway.apiKey${gateway.apiKey}teway.url${gateway.url}erName${erName}" }, { "title": "HUB APP", "": "", "pageLink": "/display//HUB+APP", "content": "DescriptionHUB is a front-end application that presents basic information about the MDM HUB cluster. This component allows you to manage and Airflow Dags or view quality service e app allows users to log in with their COMPANY nology: AngularCode link: mdm-hub-appFlowsUser flowsAdmin flowsAccess:Add new role and add users to the componentsComponentInterfaceDescriptionMDM ManagerREST APIUsed to fetch quality service configuration and for testing entitiesMDM AdminREST APIUsed to manage kafka, airflow dags and reconciliation serviceConfigurationComponent is configured via environment variablesEnvironment variableDefault valueDescriptionBACKEND_URIN/ Manager URIADMIN_URIN/ URIINGRESS_PREFIXN/AApplication context path" }, { "title": "Hub Store", "": "", "pageLink": "/display/", "content": "Hub store is a mongo cache where are stored: EntityHistory, EntityMatchesHistory, valueDescriptionmongo:host: ***:27017,***:27017,***:27017dbName: reltio_${env}user: ***url: mongodb://${er}:${ssword}@${}/${mongo.dbName}Mong DB connection configuration" }, { "title": "Inc batch channel", "": "", "pageLink": "/display/GMDM/Inc+batch+channel", "content": " for data loads of data to Reltio. It takes plain data files(eg. txt, csv) and, based on defined mappings, converts it into json objects, which are then sent to de link: inc-batch-channelFlowsIncremantal batch Dependent componentsComponentInterface nameDescriptionManagerKafkaEvents constructed by inc-batch-channel are transferred to the kafka topic, from where they are read by mdm-manager and sent to Reltio. When the event is processed by the manager send message on the appropriate topic:Example input topic: gbl-prod-internal-async-all-sapExample ACK topic: gbl-prod-internal-async-all-sap-ackBatch ServiceBatch ControllerUsed to store loads state and statistics. All information are placed in mongodbMongoDb collectionsGenBatchDags - stores stages stateGenBatchAttributeHisotry - stores state of objects loaded by inc-batch-channelgenBatchLastBatchIds - last batch id for every batchgenBatchProcessorStartTime - start time of all batch stagesgenBatchTagMappings -ConfigurationConnectionsmongoConnectionProps.dbUrlFull Mongo DB ngo.dbNameMongo database rversKafka Hostname  component group slMechanismSASL curityProtocolSecurity Protocolkafka.sslTruststoreLocationSSL trustore file locationkafka.sslTruststorePasswordSSL trustore file ernameKafka sswordKafka dedicated user :SSL algorightBatches configuration:batches.${batch_name}Batch configurationbatches.${batch_name}.inputFolderDirectory with input filesbatches.${batch_name}.outputFolderDirectory with output filesbatches.${batch_name}.columnsDefinitionFileFile defining mappingbatches.${batch_name}.requestTopicManager topic with events that are going to be sent to Reltiobatches.${batch_name}.ackTopicAck topicbatches.${batch_name}.parserTypeParser type. 
Defines separator and encoding formatbatches.${batch_name}.preProcessingDefine preprocessin of input filesbatches.${batch_name}.stages.${stage_name}.stageOrderStage prioritybatches.${batch_name}.stages.${stage_name}.processorTypeProcessor type:SIMPLE - change is applied only in mongoENTITY_SENDER - change is sent to Reltiobatches.${batch_name}.stages.${stage_name}.outputFileNameOutput file namebatches.${batch_name}.stages.${stage_name}.disabledIf stage is disabledbatches.${batch_name}.stages.${stage_name}.definitionsDefine which definition is used to map input filebatches.${batch_name}.stages.${stage_name}.deltaDetectionEnabledIf previous and current state of objects are comparedbatches.${batch_name}.stages.${stage_name}.initDeletedLoadEnabledbatches.${batch_name}.stages.${stage_name}.fullAttributesMergebatches.${batch_name}.stages.${stage_name}.postDeleteProcessorEnabledbatches.${batch_name}.stages.${stage_name}.senderHeadersDefines http headers" }, { "title": "", "": "", "pageLink": "/display//Kafka+Connect", "content": "DescriptionKafka Connect is a tool for scalably and reliably streaming data between Apache Kafka® and other data systems.  It makes it simple to quickly define connectors that move large data sets in and out of . can ingest entire databases or collect metrics from all your application servers into topics, making the data available for stream processing with low latency.FlowsSnowflake: Base tables refreshSnowflake: Events publish flowSnowflake: History InactiveSnowflake: LOV data publish : data publish flowConfigurationKafka Connect - properties ic-internal-kafka-connect-snowflake-offset ic-internal-kafka-connect-snowflake-config ic tocolSASL_PLAINTEXT tocolSASL_chanismSCRAM-SHA-512connectors - SnowflakeSinkConnector - properties p-out-full-snowflake-all:HUB_KAFKA_DATAtopicsssphraseyThere is an one exception connected with FLEX environment. The S3SinkConnector is used here - properties giontries10 pression.typenone topics.dirtopicsze1000000timezoneUTClocale patibilityNONE yrtitioner.rmatYYYY-MM-ddtimestamp.extractorWallclock" }, { "title": "Manager", "": "", "pageLink": "/display//Manager", "content": "DescriptionManager is the main component taking part in client interactions with MDM orchestrates calls with  the following services: adapters translating client input into callsProcess logic  - mapping  simple calls into multiple MDM callsQuality engine - validating data flowing into MDMsTransaction engine - logging requests for tracing purposesAutorisation engine - controlling user privileges  Cache engine - reduce calls by reading data directly from Hub storeManager services are accessible with REST .  
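As a simple illustration of that synchronous REST access, the sketch below reads an entity through the Manager; the base URL, context path and token are placeholders, and the endpoint shape follows the Get entity interface listed further down.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ManagerGetEntitySketch {

    // managerBaseUrl and accessToken are placeholders for your environment's values.
    static String getEntity(String managerBaseUrl, String entityId, String accessToken)
            throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(managerBaseUrl + "/entities/" + entityId))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body(); // entity JSON
    }
}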
Some services are exposed as asynchronous operations through for performance nology: , , Apache CamelCode link: mdm-managerFlowsGet entitySearch entitiesValidate /Update readCreate relationsMerge interfacesInterface NameTypeEndpoint patternDescriptionGet entityREST /entities/{entityId}Get detailed entity informationGet multiple entitiesREST /entities/_byUrisReturn multiple entities with provided urisGet entity countryREST APIGET /entities/{entityId}/_countryReturn country for an entity with the provided /entities/{entitiyId/_mergePOST/entities/{entitiyId/_unmerge_byUrisMerge entity A with entity B using Reltio uris as IDs.Unmerge entity B from entity A using uris as ComplexREST /entities/_unmergeMerge entity A with entity B using request body (JSON) with ids.Unmerge entity B from entity A using request body (JSON) with eate/Update entityREST /hcpPATCH /hcpPOST /hcoPATCH /hcoCreate/partially update entityCreate/Update multiple entitiesREST /batch/hcpPATCH /batch/hcpPOST /batch/hcoPATCH /batch/hcoBatch create entitiesGet entity by /crosswalkGet entity by crosswalkDelete entity by crosswalkREST APIDELETE /entities/crosswalkDelete entityt by crosswalkCreate/Update relationREST /relations/_dbscanPATCH /relations/Create/update relationGet relationREST APIGET /relations/{relationId}Get relation by reltio URIGet relation by crosswalkREST APIGET /relations/crosswalkGet relation by crosswalkDelete relation by crosswalkREST APIDELETE /relations/crosswalkDelete relation by crosswalkBatch create relationREST /batch/relationBatch create relationCreate/replace/update profileREST APIPOST /mcoPATCH /mcoCreate, replace or partially update profileCreate/replace/update batch profileREST /batch/mcoPATCH /batch/mcoCreate, replace or partially update profilesUpdate Usage FlagsREST APIPOST /updateUsageFlagsCreate, Update, Remove UsageType UsageFlags of "Addresses' Address field of and entitiesSearch for change requestsREST /changeRequests/_byEntityCrosswalkSearch for change requests by entity crosswalkGet change request by uriREST /changeRequests/{uri}Get change request by change requestREST /changeRequestCreate change request - internalGet change requestREST /changeRequestGet change request - internalDependent componentsComponentInterfaceDescriptionReltio interfaceUsed to communicate with interfaceUsed to communicate with interfaceProvide user authorizationMDM interfaceProvides routingConfigurationThe configuration is a composition of dependent components configurations and parameters specifived nfig ParameterDefault valueDescriptionmongo.urlMongo urlmongo.dbNameMongo database namemongoConnectionProps.dbUrlMongo database urlmongoConnectionProps.dbNameMongo database erMongo sswordMongo user passwordmongoConnectionProps.entityCollectionNameEntity collection namemongoConnectionProps.lovCollectionNameLov collection name" }, { "title": "Authorization Engine", "": "", "pageLink": "/display/GMDM/Authorization+Engine", "content": "DescriptionAuthorization Engine is responsible for authorizing users executing operations. All operations are secured and can be executed only by users that have specific roles. The engine checks if a user has a role allowed access to operation. is engaged in all flows exposed by Manager interfacesInterface NameTypeJava class:methodDescriptionAuthorization ServiceJavaAuthorizationService:processCheck user permission to run a specific operation. If the user has granted a role to run this operation method will allow to call it. 
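A simplified sketch of that check follows; the class, registry and exception are illustrative rather than the engine's actual implementation, and role names such as CREATE_HCP or GET_ENTITIES come from the operation-to-role table in the next part of this section.

import java.util.Collections;
import java.util.Map;
import java.util.Set;

public class AuthorizationSketch {

    static class AuthorizationException extends RuntimeException {
        AuthorizationException(String message) { super(message); }
    }

    private final Map<String, Set<String>> rolesByUser; // user name -> granted roles

    AuthorizationSketch(Map<String, Set<String>> rolesByUser) {
        this.rolesByUser = rolesByUser;
    }

    // Allows the call when the user holds the role required by the operation,
    // otherwise throws an authorization exception.
    void process(String userName, String requiredRole) {
        Set<String> roles = rolesByUser.getOrDefault(userName, Collections.emptySet());
        if (!roles.contains(requiredRole)) {
            throw new AuthorizationException(
                    "User " + userName + " is not permitted to perform: " + requiredRole);
        }
    }
}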
In other case authorization exception will throwDependent componentsAll of the below operations are exposed by Manager component and details about was described here. Description column of below table has role names which have to be assigned to user permitted to use described ponentInterfaceDescriptionManagerGET /entities/*GET_ENTITIESGET /relations/*GET_RELATIONGET /changeRequests/*GET_CHANGE_REQUESTSDELETE /entities/crosswalkDELETE /relations/crosswalkDELETE_CROSSWALKPOST /hcpPOST /batch/hcpCREATE_HCPPATCH /hcpPATCH /batch/hcpUPDATE_HCPPOST /hcoPOST /batch/hcoCREATE_HCOPATCH /hcoPATCH /batch/hcoUPDATE_HCOPOST /mcoPOST /batch/mcoCREATE_MCOPATCH /mcoPATCH /batch/mcoUPDATE_MCOPOST /relationsCREATE_RELATIONPATCH /relationsUPDATE_RELATIONPOST /changeRequestCREATE_CHANGE_REQUESTPOST /updateUsageFlagsUSAGE_FLAG_UPDATEPOST /entities/{entityId}/_mergeMERGE_ENTITIESPOST /entities/{entityId}/_unmergeUNMERGE_ENTITIESGET /lookupLOOKUPSConfigurationConfiguration parameterDescriptionusers[].nameUser nameusers[].descriptionDescription of userusers[].defaultClientDefault client that is used in the case when the user doesn't specify countryusers[].rolesList of roles assigned to userusers[].countriesList of countries whose data can be managed by userusers[].sourcesList of sources (crosswalk types) whose can be used during manage data by the user" }, { "title": "MDM Routing Engine", "": "", "pageLink": "/display//MDM+Routing+Engine", "content": "DescriptionMDM Routing Engine is responsible for making a decision on which MDM system has to be used to process client requests. The call is made based on a decision table that maps MDM system with a  the case of multiple systems for the same market, the decision table contains a user dimension allowing to select MDM system by user name. is engaged in all flows supported by Manager  interfacesInterface NameTypeJava class:: default :getDefaultMDMClient(username)Get default client specified for the userJavaMDMClientFactory: client that supports the specified countryJavaMDMClientFactory:getMDMClient(country, user);Get client that  supported specified country and userDependent componentsComponentInterfaceDescriptionReltio AdapterJavaProvides integrations with AdapterJavaProvides integration with parameterDescriptionusers[].namename of userusers[].defaultClientdefault mdm client for userclientsDecisionTable.{selector name}.countries[]List of countriesclientsDecisionTable.{selector name}.clients[]Map where the key is username and value is client name that will be used to process data comes from defined countries.Special key "default" defines the default client which will be used in the case when there is no specific client for mFactoryConfig.{mdm client of client. Only two values are supported: "reltio" or "nucleus".mdmFactoryConfig.{mdm client name}.configMDM client configuration. It is based on adapter type: Reltio or " }, { "title": "Nucleus Adapter", "": "", "pageLink": "/display/GMDM/Nucleus+Adapter", "content": "DescriptionNucleus-adapter is a component of that is used to communicate with . It provides 4 types of operations:get entity,get entities,create/update entity,get relationNucleus 360 is an old COMPANY platform comparing to Reltio. 
It's used to store and manage data about ) and healthcare organizations(hco).It uses batch processing so the results of the operation are applied for the golden record after a certain period of cleus accepts requests with an XML formatted body and also sends responses in the same nology: java 8, nucleusCode link: nucleus-adapterFlowsCreate/update entityGet entityGet entitiesGet relationsExposed interfacesInterface NameTypeJava class:methodDescriptionget entityJavaNucleusMDMClient:getEntityProvides a mechanism to obtain information about the specified entity. Entity can be obtained by entity id, e.g. xyzf325Two Nucleuses methods are used to obtain detailed information about the rst is Look up method, thanks to which we can obtain basic information about entity(xml format) by its xt, we provide that information for the second method, Get Profile Details that sends a response with all available information (xml format).Finally, we gather all received information about the entity, convert it to Relto model(json format) and transfer it to a t entitiesJavaNucleusMDMClient:getEntitiesProvide a mechanism to obtain basic information about a group of entities. This entity group is determined based on the defined filters(e.g. first name, last name, professional type code).For this purpose only look up method is used. This way we receive only basic information about entities but it is performance-optimized and does not create unnecessary load on the eate/update entityJavaNucleusMDMClient:creteEntityUsing the Nucleus Add Update web service method nucleus-adapter provides a mechanism to create or update data present in the database according to the business rules(createEntity method).Nucleus-adapter accepts formatted requests body, maps it to xml format, and then sends it to relationsJavaNucleusMDMClient:getRelationTo get relations nucleus-adapter uses the affiliation cleus produces XML formatted response and nucleus-adapter transforms it to componentsComponentInterfaceDescriptionNucleushttps://{{ nuleus host }}/CustomerManage_COMPANY_EU_Prod/c?singleWsdlNucleus endpoint for Creating/updating hcp and hcohttps://{{ nuleus host }}/Nuc360ProfileDetails5.0/ endpoint for getting details about entityhttps://{{ nuleus host }}/Nuc360QuickSearch5.0/LookupNucleus endpoint for getting basic information about entityhttps://{{ nuleus host }}/Nuc360DbSearch5.0/api/affiliationNucleus endpoint for getting relations informationConfigurationConfig seURLnullBase url of endpoint for creating/updating fileDetailsUrlnullNucleus endpoint for getting detailed information about ditionalOptions.quickSearchUrlnullNucleus endpoint for getting basic information about filiationUrlnullNucleus endpoint for getting information about entities faultIdTypenullDefault for entities search(used if another not provided)" }, { "title": "Quality Engine and Rules", "": "", "pageLink": "/display//Quality+Engine+and+Rules", "content": " engine is used to verify data quality in entity attributes. It is used for , , entities.Quality engine is responsible for preprocessing Entity when a specific precondition is met. This engine is started in the following cases:Rest operation () on /hco endpoint on operation () on /hcp endpoint on operation () on /mco endpoint on has two two components quality-engine and quality-engine-integrationTechnology:fasterxmlCode link:quality-engine -  rules -  requirements (provided by → 20-Design → Hub → Global-MDM_DQ_*FlowsValidation by quality rules is done before sending entities to reltio. 
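In rough pseudo-Java terms (hypothetical names only, not the engine's real classes), that ordering looks like this: the entity is validated first, and only entities that are not rejected are forwarded to Reltio.

public class QualityFlowSketch {

    enum ValidationOutcome { ACCEPTED, REJECTED }

    // Hypothetical interfaces standing in for the quality engine and the Reltio adapter.
    interface QualityService {
        ValidationOutcome validate(Object entity); // evaluates preconditions, checks and actions
    }

    interface ReltioClient {
        void createOrUpdate(Object entity);
    }

    static void submit(Object entity, QualityService qualityService, ReltioClient reltio) {
        // Quality validation runs before the entity is sent to Reltio.
        if (qualityService.validate(entity) == ValidationOutcome.REJECTED) {
            throw new IllegalArgumentException("Entity rejected by quality rules");
        }
        reltio.createOrUpdate(entity);
    }
}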
Quality rules should be enabled in configuration.Data quality checking is started in rvice.QualityService. Whole rule flow for entity have one context (leContext)RuleRule have following configurationname - name of the rule - it is requiredpreconditions - preconditions that should be met to run the rulecheck - check that should be triggered if preconditions are metaction - action that should be triggered if check is evaluated to truePreconditionsStructure:Example:preconditions:    - type: source      values:          - CENTRISPossible types:not - it evaluates to true if all preconditions that are underneath evaluate to falsematch - it evaluate to true if given attribute value matches any of listed patterns to trueanyMatch - it evaluate to true if given array attribute value matches any of listed patterns to trueexistsInContext - it checks if given fieldName with specified value exists in contextcontext - check if entity context values contains only allowed once source - check if entity has source of given typeChecksStructure:Example:check:   type: match   attribute: FirstName   values:       - '[^0-9@#$%^&*~!"<>?/|\\_]+'Possible types:ageCheck - check if age specified in date or attribute is older than specified number of yearsmandatoryGroup - check if at least one from specified list of attributes existsmandatory - check if specified attribute existsmandatoryAll - check if all specified attributes existsmandatoryArray - check if specified nested attribute existsnot - check if opposite of the check is truegroupMatch - check of group of attributes matches specified valuesmatch - check if attribute value matches specified given valueempty - empty checkActionsStructure:Example:action:   type: add   attributes:      - DataQuality[].DQDescription   value: "{source}_005_02"Possible types:clean - cleans attribute value - replaces pattern with given stringreject - rejects entityremove - remove attributeset - sets attribut valuemodify - modify attribute valueadd - adds attribute valuechineseNameToEnglish - converts value to englishaddressDigest - calculate address digestaddressCrosswalkValue - sets digest valueconvertCase - convert case lower, upper, capitalizeremoveEmptyAttributes - removes empty attributesprefixByCountry - adds country prefix to attribute valuemakeSourceAddressInfo - adds attribute with source address infopadding - pads attribute value with specified characterassignId - assings id setContextValue - set value that will be stored in contextDependent componentsComponentInterfaceFlowDescriptionmanagerQualityServiceValidationRuns quality engine validationConfigurationConfig valueDescriptionvalidationOntrueIt turns on or off validation - it needs to specified in turns on or off validation for updateshcpQualityRulesConfigslist of files with quality rules for hcpIt contains a list of files with quality rules for hcphcoQualityRulesConfigslist of files with quality rules for hcoIt contains a list of files with quality rules for hcohcpAffiliatedHCOsQualityRulesConfigslist of files with quality rules for affilitated hcpIt contains a list of files with quality rules for affilitated HCOmcoQualityRulesConfigslist of files with quality rules for mcoIt contains a list of files with quality rules for " }, { "title": "Reltio Adapter", "": "", "pageLink": "/display/GMDM/Reltio+Adapter", "content": "DescriptionReltio-adapter is a component of of mdm-manager) that is used to communicate with Reltio. 
Technology: Java,Code link: reltio-adapterFlowsCreate/update entityGet entityGet entitiesMerge entityUnmerge entityCreate relationGet relationsCreate DCRGet DCRReject DCRDelete DCRExposed interfacesInterface NameTypeEndpoint patternDescriptionGet entityJavaReltioMDMClient:getEntityGet detailed entity information by entity URIGet entitiesJavaReltioMDMClient:getEntitiesGet basic information about a group of entities based on applied filtersCreate/Update entityJavaReltioMDMClient:createEntityCreate/partially update entity(HCO, , MCO)Create/Update multiple entitiesJavaReltioMDMClient:createEntitiesBatch create entitiesDelete entityJavaReltioMDMClient:deleteEntityDeletes entity by its URIFind entityJavaReltioMDMClient:findEntityFinds entity. The search mechanism is flexible and chooses the proper method:If URI applied in entityPattern then use the getEntity method.If URI not specified and finds crosswalks then uses getEntityByCrosswalk methodOtherwise, it uses the find matches methodMerge entitiesJavaReltioMDMClient:mergeEntitiesMerge two entities basing on reltio merging so accepts explicit winner as explicitWinnerEntityUri.Unmerge entitiesJavaReltioMDMClient:unmergeEntitiesUnmerge entitiesUnmerge Entity TreeJavaReltioMDMClient:treeUnmergeEntitiesUnmerge entities recursively(details in reltio treeunmerge entitiesJavaReltioMDMClient:scanEntitiesIterate entities of a specific type in a particular lete crosswalkJavaReltioMDMClient:deleteCrosswalkDeletes crosswalk from an objectFind matchesJavaReltioMDMClient:findMatchesReturns potential matches based on rules in entity type configurationGet entity connectionsJavaReltioMDMClient:getMultipleEntityConnectionsGet connected entity by a crosswalkJavaReltioMDMClient:getEntityByCrosswalkGet entity by the crosswalkDelete relation by a crosswalkJavaReltioMDMClient:deleteRelationDelete relation by relation URIGet relationJavaReltioMDMClient:getRelationGet relation by relation /Update relationJavaReltioMDMClient:createRelationCreate/update relationScan relationsJavaReltioMDMClient:scanRelationsIterate entities of a specific type in a particular t relation by a crosswalkJavaReltioMDMClient:getRelationByCrosswalkGet relation by the crosswalkBatch create relationJavaReltioMDMClient: create relationSearch for change requestsJavaReltioMDMClient:searchSearch for change requests by entity crosswalkGet change request by URIJavaReltioMDMClient:getChangeRequestGet change request by change requestJavaReltioMDMClient:createChangeRequestCreate change request - internalDelete change requestJavaReltioMDMClient:deleteChangeRequestDelete change requestApply change requestJavaReltioMDMClient:applyChangeRequestApply data change requestReject change requestJavaReltioMDMClient:rejectChangeRequestReject data change requestAdd/update external inforJavaReltioMDMClient:createOrUpdateExternalInfoAdd external info to specified DCRDependenciesComponentInterfaceDescriptionReltioGET {TenantURL}/entities/{Entity ID}Get detailed information about the entity {TenantURL}/entitiesGet basic( or chosen ) information about entity based on applied filters {TenantURL}/entities/_byCrosswalk/{crosswalkValue}?type={sourceType}Get entity by crosswalk {TenantURL}/{entity object URI}Delete entity {TenantURL}/entitiesCreate/update single or a bunch of entities {TenantURL}/entities/_dbscan {TenantURL}/entities/{winner}/_sameAs?uri=entities/{looser}Merge entities basing on looser and winner ID {TenantURL}//_unmerge?contributorURI=Unmerge entities {TenantURL}//_treeUnmerge?contributorURI=Tree unmerge 
entities {TenantURL}/relations/Get relation by relation URI {TenantURL}/relationsCreate relation {TenantURL}/relations/_dbscan GET {TenantURL}/changeRequests Get change request {TenantURL}/changeRequests/{id}Returns a data change request by {TenantURL}/changeRequests Create data change request {TenantURL}/changeRequests/{id} Delete data change request {TenantURL}/changeRequests/_byUris/_applyThis applies (commits) all changes inside a data change request to real entities and {TenantURL}/changeRequests/_byUris/_rejectReject data change request {TenantURL}/entities/_matches Returns potential matches based on rules in entity type { connected entities /{crosswalk URI}Delete crosswalk {TenantURL}/changeRequests/0000OVV/_externalInfoAdd/update external info to thURLnullReltio authentication seURLnullReltio base URLmdmConfig.rdmUrlnullReltio  RDM ernamenullReltio sswordnullReltio passwordmdmConfig.apiKeynullReltio apiKeymdmConfig.apiSecretnullReltio isecondsToExpiretranslateCache.objectsLimit" }, { "title": "Map Channel", "": "", "pageLink": "/display/GMDM/Map+Channel", "content": " integrates and systems data. External systems use the queue or REST to load data. The data is then copied to the internal queue. This allows to redo the processing at a later time. The identifier and market contained in the data are used to retrieve complete data via REST requests. The data is then sent to the Manager component to storage in the system. Application provides features for filtering events by country, status or permissions. This component uses different mappers to process data for the COMPANY or data nology: , , Apache CamelCode link: events processingExposed interfacesInterface nameTypeEndpoint patternDescriptioncreate contactREST /gcpcreate profile based on contact dataupdate contactREST APIPUT /gcp/{gcpId}update profile based on contact datacreate userREST /grvcreate profile based on user dataupdate userREST /grv/{grvId}update profile based on user dataDependent componentsComponentInterfaceDescriptionManagerREST APIcreate HCP, create , update , update HCOConfigurationThe configuration is a composition of dependent components configurations and parameters specifived below. 
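A hedged example of calling the create contact interface listed above is shown below; the base URL, authentication header and payload are placeholders, and the actual contact schema is owned by the source system.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MapChannelContactSketch {

    // mapChannelBaseUrl, accessToken and contactJson are illustrative placeholders only.
    static int createContact(String mapChannelBaseUrl, String accessToken, String contactJson)
            throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(mapChannelBaseUrl + "/gcp"))   // create contact endpoint
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + accessToken)
                .POST(HttpRequest.BodyPublishers.ofString(contactJson))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .statusCode();
    }
}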
processing configConfig paramDefault valueDescriptionkafkaProducerPropkafka producer propertieskafkaConsumerPropkafka consumer propertiesprocessing.endpointskafka internal topics configurationprocessing.endpoints.[endpoint-type].topickafka entpoint-type topic -type].activeOnStartupshould endpoint start on application startupprocessing.endpoints.[endpoint-type].consumerCountkafka endpoint consumer countprocessing.endpoints.[endpoint-type].breakOnFirstErrorshould kafka rebalance on errorprocessing.endpoints.[endpoint-type].autoCommitEnableshould kafka cuto commit enableDEG configConfig paramDefault valueDescriptionDEG.urllDEG gateway URLDEG.oAuth2ServiceDEG authorization service tocolDEG protocolDEG.portDEG efixDEG prefixTransaction log configConfig paramDefault valueDescriptiontransactionLogger.kafkaEfk.enableshould kafka efk transaction logger ickafka efk topic nametransactionLogger.kafkaEfk.logContentOnlyOnFailedLog request body only on failed mpleLog.enableshould simple console transaction logger enableFilter configConfig paramDefault Vlist of allowed Vlist of allowed GCP countriesdeactivatedStatuses.[Source].[Country]list of attribute values for which will be deleted for given country and sourcedeactivateGCPContactWhenInactivelst of countries for which will be deleted when contact is inactivedeactivatedWhenNoPermissionslst of countries for which will be deleted when contact permissions are missingdeleteOption.[Source].noneHCP will be sent to when deleted date is presentdeleteOption.[Source].hardcall delete crosswalk action when deleted date is presentdeleteOption.[Source].softcall update when delete date is presentMapper configConfig paramDefault valueDescriptiongcpMappername of mapper implenentationgrvMappername of mapper implenentationMappingsIQVIA mapping" }, { "title": "MDM Admin", "": "", "pageLink": "/display/GMDM/MDM+Admin", "content": "DescriptionMDM Admin exposes an of tools automating repetitive and/or difficult Operating Procedures and Tasks. It also aggregates APIs of various Hub components that should not be exposed to the world, while providing an authorization layer. Permissions to each Admin operation can be granted to client's user.FlowsKafka OffsetResend EventsPartial ListReconciliationExposed interfacesREST APISwagger: componentsComponentInterfaceFlowDescriptionReconciliation ServiceReconciliation Service APIEntities uses internal to trigger reconciliations. Passes the same inputs and returns the same lations ReconciliationPartials ReconciliationPrecallback ServicePrecallback Service APIPartials ListAdmin fetches a list of partials directly from and returns it to the user or uses it to reconcile all entities stuck in partial rtials ReconciliationAirflowAirflow APIEvents ResendAdmin allows triggering an Airflow DAG with request parameters/body and checking its Resend ComplexKafkaKafka Client/Admin APIKafka allows modifying topic/group nfigurationConfig ParameterDefault valueDescriptionairflow-config: url: user: admin password: ${ssword} dag: reconciliation_system_amer_dev-Dependent Airflow configuration including external URL, name and credentials. Entities Reload operation will trigger a DAG of configured name in the configured :services: reconciliationService: -service-svc:8081 precallbackService: mdmhub-precallback-service-svc:8081URLs of dependent services. Default values lead to internal services." }, { "title": "MDM Integration Tests", "": "", "pageLink": "/display//MDM+Integration+Tests", "content": "DescriptionThe module contains Integration Tests. 
All Integration Tests are divided into different categories based on environment on which are nology:JUnitSpring TestCitrusGradle tasksThe table shows which environment uses which gradle task.EnvironmentGradle taskConfiguration propertiesALLcommonIntegrationTests-GBLUSintegrationTestsForCOMPANYModelRegionUS script with configuration: tasks - IT categoriesThe table shows which test categories are included in gradle taskTest categorycommonIntegrationTestsCommonIntegrationTestintegrationTestsForCOMPANYModelRegionUSIntegrationTestForCOMPANYModelIntegrationTestForCOMPANYModelRegionUSintegrationTestsForCOMPANYModelChinaIntegrationTestForCOMPANYModelIntegrationTestForCOMPANYModelChinaintegrationTestsForCOMPANYModelIntegrationTestForCOMPANYModel●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●integrationTestsForCOMPANYModelRegionAMERIntegrationTestForCOMPANYModel●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●IntegrationTestForCOMPANYModelRegionAMERintegrationTestsForCOMPANYModelRegionAPACIntegrationTestForCOMPANYModel●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●integrationTestsForCOMPANYModelRegionEMEAIntegrationTestForCOMPANYModel●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●IntegrationTestForCOMPANYModelRegionEMEAintegrationTestsForIqviaModelIntegrationTestForIqiviaModelTests are configured in adle file: use cases included in categoriesTest categoryTest use casesCommonIntegrationTestCommon Integration TestIntegrationTestForIqiviaModelIntegration Test For Iqvia ModelIntegrationTestForCOMPANYModelIntegration Test For COMPANY ModelIntegrationTestForCOMPANYModelRegionUSIntegration Test For COMPANY Model Region USIntegrationTestForCOMPANYModelChinaIntegration Test For COMPANY Model ChinaIntegrationTestForCOMPANYModelRegionAMERIntegration Test For COMPANY Model Region AMER●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Integration Test For COMPANY Model DCR2ServiceIntegrationTestsForCOMPANYModelRegionEMEAIntegration Test For COMPANY Model Region EMEA" }, { "title": "Nucleus Subscriber", "": "", "pageLink": "/display/GMDM/Nucleus+Subscriber", "content": "DescriptionNucleus subscriber collects events from modifies it and then transfer to the right .Data changes are stored as archive files on from where they are then pulled byt the nucleus e next step is to modify the event from the format to one accepted by . The modified data is then transfered to the appropriate .Data pulls from are performed periodically so the changes made  are visible after some rt of: Streaming channgelTechnology: , , Apache CamelCode link: nucleus-subscriberFlowsEntity change events processing (Nucleus) Exposed interfacesInterface NameTypeEndpoint patternDescriptionKafka topic KAFKA{env}-internal-nucleus-eventsEnents pulled from are then transformed and published to kafka topicDependenciesComponentInterfaceFlowDescriptionAWS S3Entity change events processing (Nucleus)Stores events regarding data modification in reltioEntity enricherNucleus Subscriber downstream component. 
Collects events from and produces events enriched with the targetEntityConfigurationConfig subscriber portnucleus_rvers10.192.71.136:9094Kafka servernucleus_subscriber.lockingPolicy.zookeeperServernullZookeeper servernucleus_NamenullZookeeper group namenucleus_xSize100000nucleus_duplicationCache.expirationTimeSeconds3600nucleus_IdhubKafka group Idnucleus_ernamenullKafka usernamenucleus_sswordnullKafka user passwordnucleus_icdev-internal-integration-testsnucleus_icdev-internal-reltio-dictionaries-eventsnucleus_icdev-internal-integration-testsnucleus_ngoConnectionProp.dbUrlnullMongoDB urlnucleus_ngoConnectionProp.dbNamenullMongoDB database namenucleus_ernullMongoDB usernucleus_sswordnullMongoDB user passwordnucleus_echConnectionOnStartupnullCheck connection on startup( yes/no )nucleus_subscriber.poller.typefileSource typenucleus_subscriber.poller.enableOnStartupyesEnable on startup( yes/no )nucleus_leMasknullInput files masknucleus_subscriber.poller.bucketNamecandf-mesosName of bucketnucleus_cessingTimeoutMs3000000Timeout in milisecondsnucleus_putFolderC:/PROJECTS/COMPANY/GIT/mdm-publishing-hub/nucleus-subscriber/src/test/resources/dataInput directorynucleus_subscriber.poller.outputFoldernullOutput directorynucleus_ynullPoller keynucleus_cretnullPoller secretnucleus_gionEU_WEST_1Poller regionnucleus_loweSubDirsnullAllowed sub directories( e.g. by country code - AU, CA )nucleus_leFormat.hcp.*Professional.expInput fiile format for hcpnucleus_leFormat.hco.*Organization.expInput fiile format for hconucleus_leFormat.dictionary.*Code_Header.expInput fiile format for dictionarynucleus_leFormat.dictionaryItem.*Code_Item.expInput fiile format for dictionary Itemnucleus_leFormat.dictionaryItemDesc.*Code_Item_Description.expInput fiile format fornucleus_leFormat.dictionaryItemExternal.*Code_Item_External.expInput fiile format fornucleus_stomerMerge.*customer_merge.expInput fiile format for customer mergenucleus_leFormat.specialty.*Specialty.expInput fiile format for specialitynucleus_dress.*Address.expInput fiile format foraddressnucleus_.*Degree.expInput fiile format for degreenucleus_entifier.*Identifier.expInput fiile format foridentifiernucleus_munication.*Communication.expInput fiile format forcommunicationnucleus_leFormat.optout.*Optout.expInput fiile format for optoutnucleus_filiation.*Affiliation.expInput fiile format for affiliationnucleus_filiationRole.*AffiliationRole.expInput fiile format for affiliation role." }, { "title": "", "": "", "pageLink": "/display//OK+DCR+Service", "content": "DescriptionValidation of information regarding healthcare institutions and professionals based on ONE KEY webservices databaseTechnology: java 8, boot, mongodb, -streamsCode link: mdm-onekey-dcr-service FlowsData Steward ResponseSubmit Validation RequestTrace Validation RequestExposed interfacesInterface NameTypeEndpoint patternDescriptioninternal onekeyvr inputKAFKA${env}-internal-onekeyvr-inevents being sent by the event publisher component. 
Event types being considered: HCP_*, _*, ENTITY_MATCHES_CHANGEDinternal onekeyvr change requests inputKAFKA${env}-internal-onekeyvr-change-requests-inDependent componentsComponentInterfaceFlowDescriptionManagerGetEntitygetEntitygetting the entity from RELTIOMDMIntegrationServicegetMatchesgetting matches from RELTIOtranslateLookupstranslating lookup codescreateEntityDCR entity created in and the relation between the processed entity and the entitycreateResponsepatchEntityupdating the entity in RELTIOBoth service and the Manager service are called with the retry KeyIntegrationService.url${oneKeyClient.url}erName${erName}ssword${ssword}nnectionPoint${nnectionPoint}KeyIntegrationService.logMessages${oneKeyClient.logMessages}xAttemts22Limit to the number of attempts -> Exponential Back itialIntervalMs1000Initial interval -> Exponential Back ltiplier2.0Multiplier -> Exponential Back xIntervalMs3600000Max interval -> Exponential Back tewayIntegrationService.url${gateway.url}erName${erName}tewayIntegrationService.apiKey${gateway.apiKey}tewayIntegrationService.logMessages${gateway.logMessages}tewayIntegrationService.timeoutMs${gateway.timeoutMs}xAttemts22Limit to the number of attempts -> Exponential Back itialIntervalMs1000Initial interval -> Exponential Back ltiplier2.0Multiplier -> Exponential Back xIntervalMs3600000Max interval -> Exponential Back bmitVR.eventInputTopic${env}-internal-onekeyvr-inSubmit Validation input ipEventTypeSuffix_REMOVED_INACTIVATED_LOST_MERGESubmit Validation event type string endings to oreNamewindow-deduplication-storeInternal kafka topic that stores events to bmitVR.window.duration4hThe size of the windows in Internal kafka topic that stores events being grouped acePeriod0The grace period to admit out-of-order events to a teLimit107374182Maximum number of bytes the size-constrained suppression buffer will ppressNamedcr-suppressThe specified name for the suppression node in the on0 0 * ? * * # every stanceNamemdm-onekey-dcr-serviceCan be any string, and the value has no meaning to the scheduler itself - but rather serves as a mechanism for client code to distinguish schedulers when multiple instances are used within the same program. If you are using the clustering features, you must use the same name for every instance in the cluster that is ‘logically’ the same ipUpdateChecktrueWhether or not to skip running a quick web request to determine if there is an updated version of available for download. If the check runs, and an update is found, it will be reported as available in ’s logs. You can also disable the update check with the system property “ipUpdateCheck=true” (which you can set in your system environment or as a -D on the java command line). It is recommended that you disable the update check for production nameInstanceIdGeneratorOnly used if stanceId is set to “AUTO”. Defaults to “mpleInstanceIdGenerator”, which generates an instance id based upon host name and time stamp. Other implementations include (which gets the instance id from the system property “stanceId”, and HostnameInstanceIdGenerator which uses the local host name (tLocalHost().getHostName()). You can also implement the InstanceIdGenerator interface your ngoUri${mongo.url}bStore.dbName${mongo.dbName}llectionPrefix stanceIdAUTOCan be any string, but must be unique for all schedulers working as if they are the same ‘logical’ Scheduler within a cluster. You may use the value “” as the instanceId if you wish the Id to be generated for you. 
Or the value “SYS_PROP” if you want the value to come from the system property “stanceId”readCount1" }, { "title": "Publisher", "": "", "pageLink": "/display//Publisher", "content": " is member of channel. It distributes events to target client topics based on configured routing in tasks:Filtering events beased on their contentRouting events based publisher configurationEnriching nucleus eventsUpdating mongoTechnology: , , : event-publisherFlowsReltio events streamingNucleus Events StreamingCallbacksEvent filtering and routing rulesLOV update process (Nucleus)Data Steward ResponseSubmit Validation RequestSnowflake: Events publish flowExposed interfacesInterface NameTypeEndpoint patternDescriptionKafka - input topics for entities dataKAFKA${env_name}-internal-reltio-proc-events${env_name}-internal-nucleus-eventsStores events about entities, relations and change requests changes. - input topics for dicrtionaries dataKAFKA${env_name}-internal-reltio-dictionaries-events${env_name}-internal-nucleus-dictionaries-eventsStores events about lookup (LOV) changes. - output topicsKAFKA${env_name}-out-**(All topics that get events from publisher)Output topics for Publisher.Event after filtration process is then transferred on the appropriate topic based on routing rules defined in the configurationResend eventsRESTPOST /resendLastEventAllow triggering reconstruction event. Events are created based on the current state fetch for MongoDB and then forwarded according to defined routing 's collectionsMongo collectionentityHistoryCollection stored last known state of entities dataMongo collectionentityRelationsCollection stored last known state of relations dataMongo collectionLookupValuesCollection stored last known state of lookups (LOVs) dataDependenciesComponentInterfaceFlowDescriptionCallback ServiceKAFKAEntity change events processing (Reltio)Creates input for PublisherResponsible for following transformations: names calculationDangling affiliationsCrosswalk cleanerPrecallback streamMongoDBEntity change events processing (Reltio)Entity change events processing (Nucleus)Stores the last known state of objects such as: entities, relations. Used as cache data to reduce Reltio load. 
Is updated after every entity change eventKafka connectorKAFKASnowflake: Events publish flowReceives events from the publisher and loads it to Snowflake databaseClients of the that receive events from , , etcConfigurationConfig valueDescriptionevent_ersnullPublisher users dictionary used to authenticate user in er parameters:name,description,roles(list) - currently there is only one role which can be assign to user:RESEND_EVENT - user with this role is granted to use resend last event operationevent_tiveCountries- AD- BL- FR- GF- GP- MF- MQ- MU- NC- PF- PM- RE- WF- YT- CNList of active countriesevent_erval60mInterval of lookups (LOVs) from Reltioevent_tchSize1000Poller batch sizeevent_publisher.lookupValuesPoller.enableOnStartupyesEnable on startup( yes/no )event_publisher.lookupValuesPoller.dbCollectionNameLookupValuesMongo's collection name stored fetched lookup dataevent_comingEventsincomingEvents: reltio: topic: dev-internal-reltio-entity-and-relation-events enableOnStartup: no startupOrder: 10 properties: autoOffsetReset: latest consumersCount: 20 maxPollRecords: 50 pollTimeoutMs: 30000Configuration of the incoming topic with events regarding entities, relations etc.event_publisher.eventRouter.dictionaryEventsdictionaryEvents: reltio: topic: dev-internal-reltio-dictionaries-events enableOnStartup: true startupOrder: 30 properties: autoOffsetReset: earliest consumersCount: 10 maxPollRecords: 5 pollTimeoutMs: 30000Configuration of incoming topic with events regarding dictionary changes.event_publisher.eventRouter.historyCollectionNameentityHistoryName of collection stored entities stateevent_lationCollectionNameentityRelationsName of collection stored relations stateevent_utingRules.[]nullList of routing rules. Routing rule definition has following parametersid - unique identifier of rule,selector - conditional expression written in groovy which filters incoming events,destination - topic name." }, { "title": "Raw data service", "": "", "pageLink": "/display/GMDM/Raw+data+service", "content": " data service is the component used to process source data. Allows you to remove expired data in real time. Provides a REST interface for restoring source data on the nology:,,spring bootCode link: Raw data serviceFlows Raw data flowsExposed interfacesBatch Controller - manage batch instancesInterface nameTypeEndpoint patternDescriptionRestore entitiesREST /restore/entitiesRestore entities for selected parameters: entity types, sources, countries, date from1. Create consumer for entities topic and given offset - date from2. Poll and filter . Produce data to bundle input topicRestore relationsREST /restore/relationsRestore entities for selected parameters: sources, countries, relation types and date from1. Create consumer for relations topic and given offset - date from2. Poll and filter . 
Produce data to bundle input topicRestore entitiesREST /restore/entities/countCount entities for selected parameters: entity types, sources, countries, date fromRestore entitiesREST /restore/relations/countCount relations for selected parameters: sources, countries, relation types and date fromConfigurationConfig Idkafka group idkafkaOtherother kafka consumer/producer propertiesentityTopictopic used to store entity datarelationTopictopic used to store relation tchKeyStoreNamestate store name used to store entities patch lationStoreNamestate store name used to store relations patch keysstreamConfig.enabledis raw data stream processor enabledstreamConfig.kafkaOtherraw data processor stream kafka other propertiesrestoreConfig.enabledis restore topic consumer poll nsumer.kafkaOtherother kafka consumer ducer.outputrestore data producer output topic - manager bundle input producer properties" }, { "title": "Reconciliation Service", "": "", "pageLink": "/display//Reconciliation+Service", "content": "Reconciliation service is used to consume reconciliation event from reltio and decide is entity or relation should be refreshed in mongo cache. after reconsiliation this service also produce metrics from reconciliation, it counts changes and produce event with all metatdta and statistics about reconciliated entity/relationFlowsReconciliation+HUB-ClientReconciliation metricsConfigurationConfig ParameterDefault valueDescriptionreconciliation: eventInputTopic: eventOutputTopic:reconciliation: eventInputTopic: ${env}-internal-reltio-reconciliation-events eventOutputTopic: ${env}-internal-reltio-eventsConsumes event from eventInputTopic, decide about reconiliation and produce event to eventOutputTopicreconciliation: eventMetricsInputTopic: eventMetricsOutputTopic:metricRules: - name: operationRegexp: pathRegexp: valueRegexp: reconciliation: eventInputTopic: ${env}-internal-reltio-reconciliation-events eventOutputTopic: ${env}-internal-reltio-events eventMetricsInputTopic: ${env}-internal-reltio-reconciliation-metrics-event eventMetricsOutputTopic: ${env}-internal-reconciliation-metrics-efk-transactionsmetricRules: - name: reconciliation.object.missed operationRegexp: "remove" pathRegexp: "" valueRegexp: ".*" - name: ded operationRegexp: "add" pathRegexp: "" valueRegexp: ".*" - name: ror operationRegexp: "add" pathRegexp: "^.*/lookupCode$" valueRegexp: ".*" - name: anged operationRegexp: "replace" pathRegexp: "^.*/lookupCode$" valueRegexp: ".*" - name: anged operationRegexp: "add|replace|remove" pathRegexp: "^/attributes/.+$" valueRegexp: ".*" - name: ason operationRegexp: ".*" pathRegexp: ".*" valueRegexp: ".*"Consume event from eventMetricsInputTopic, then calculate diff betwent current and previous event, based on diff produce statisctis and metrics. After all produce event with all information to eventMetricsOutputTopic" }, { "title": "Reltio Subscriber", "": "", "pageLink": "/display//Reltio+Subscriber", "content": " subscriber is part of Reltio events streaming flow. It consumes Reltio events from , filters, maps, and transfers to the Kafka rt of: channelTechnology: , , Apache CamelCode link: reltio-subscriberFlowsEntity change events processing (Reltio)Exposed interfacesInterface NameTypeEndpoint patternDescriptionKafka topic KAFKA${env}-internal-reltio-eventsEnents pulled from are then transformed and published to - queueEntity change events processing (Reltio)It stores events about entities modification in reltioEntity enricherReltio Subscriber downstream component. 
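The Reconciliation Service metricRules listed above match every diff operation against operation/path/value regular expressions and increment a counter per matching rule. The following self-contained sketch illustrates that matching logic; the rule names are partly reconstructed and the diff entries are invented for the example.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MetricRulesSketch {

    record Rule(String name, String opRegexp, String pathRegexp, String valueRegexp) {}
    record DiffEntry(String operation, String path, String value) {}

    public static void main(String[] args) {
        // A subset of the rules from the configuration above (names partly reconstructed).
        List<Rule> rules = List.of(
                new Rule("reconciliation.object.missed", "remove", "", ".*"),
                new Rule("reconciliation.attribute.changed", "add|replace|remove", "^/attributes/.+$", ".*"));

        // Example diff between the previous and current state of an entity.
        List<DiffEntry> diff = List.of(
                new DiffEntry("replace", "/attributes/FirstName", "John"),
                new DiffEntry("remove", "", "entities/123"));

        Map<String, Integer> counters = new HashMap<>();
        for (DiffEntry entry : diff) {
            for (Rule rule : rules) {
                if (entry.operation().matches(rule.opRegexp())
                        && entry.path().matches(rule.pathRegexp())
                        && entry.value().matches(rule.valueRegexp())) {
                    counters.merge(rule.name(), 1, Integer::sum);
                }
            }
        }
        counters.forEach((name, count) -> System.out.println(name + " = " + count));
    }
}
```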
Collects events from and produces events enriched with the target entityConfigurationConfig ParameterDefault valueDescriptionreltio_ltio.queuempe-01_FLy4mo0XAh0YEbNReltio queue namereltio_ltio.queueOwner930358522410Reltio queue owner numberreltio_ncurrentConsumers1Max number of concurrent consumersreltio_ssagesPerPoll10Messages per pollreltio_icdev-internal-reltio-eventsPublisher kafka topicreltio_lisher.enableOnStartupyesEnable on startupreltio_lterSelfMergesnoFilter self merges( yes/no )reltio_icdev-internal-reltio-relations-eventsRelationship publisher topicreltio_icnullDCR publisher topicreltio_rvers10.192.71.136:9094Kafka serversreltio_IdhubKafka group Idreltio_slMechanismPLAINKafka sasl mechanismreltio_curityProtocolSASL_SSLKafka security protocolreltio_subscriber.kafka.sslTruststoreLocationsrc/test/resources/uststore.jksKafka truststore locationreltio_subscriber.kafka.sslTuststorePasswordkafka123Kafka truststore passwordreltio_ernamenullKafka usernamereltio_sswordnullKafka user passwordreltio_pressionCodecnullKafka compression codecreltio_subscriber.poller.types3Source typereltio_subscriber.poller.enableOnStartupnoEnable on startup( yes/no )reltio_leMask.*Input files maskreltio_subscriber.poller.bucketNamecandf-mesosName of bucketreltio_cessingTimeoutMs7200000Timeout in milisecondsreltio_putFoldernullInput directoryreltio_subscriber.poller.outputFoldernullOutput directoryreltio_ynullPoller keyreltio_cretnullPoller secretreltio_gionEU_WEST_1Poller regionreltio_lowedEventTypes- ENTITY_CREATED- ENTITY_REMOVED- ENTITY_CHANGED- ENTITY_LOST_MERGE- ENTITIES_MERGED- ENTITIES_SPLITTED- RELATIONSHIP_CREATED- RELATIONSHIP_CHANGED- RELATIONSHIP_REMOVED- RELATIONSHIP_MERGED- RELATION_LOST_MERGE- CHANGE_REQUEST_CHANGED- CHANGE_REQUEST_CREATED- CHANGE_REQUEST_REMOVED- ENTITIES_MATCHES_CHANGEDEvent types that are processed when received.Other event types are being rejectedreltio_ansactionLogger.kafkaEfk.enablenullTransaction logger enabled( true/false)reltio_ansactionLogger.kafkaEfk.logContentOnlyOnFailednullLog content only on failed( true/false)reltio_IdnullKafka consumer group Idreltio_OffsetResetnullKafka transaction logger topicreltio_nsumerCountnullreltio_ssionTimeoutMsnullSession timeoutreltio_xPollRecordsnullreltio_eakOnFirstErrornullreltio_nsumerRequestTimeoutMsnullreltio_mpleLog.enablenull" }, { "title": "Clients", "": "", "pageLink": "/display/GMDM/Clients", "content": "The section describes clients (systems) that publish or subscribe data to vis \n \n \n \n \n\n \n \n \n \n\n \n \n\n \n \n \n\n \n \n \n \n \n \n\n \n \n \n \nAggregated Contact ListCOMPANY MDM TeamNameContactAndrew J. Tirumalasowjanya.tirumala@John -INF_Support_PforceOL@Solanki, ( - Mumbai) <>;Yagnamurthy, Maanasa ( - Hyderabad) <>;ChinaMing Ming <, Dawei <>a@lfand@Dinesh.-Commercial_APAC@GRACEDL-AIS-Mule-Integration-Support@;hvaryu@dala@;alapati@MedicDL-F&BO-MEDIC@GBL USClientContactsCDWNarayanan, <>Raman, >ETLNayan, >, >, >Brahma, <>, > contactsDube, R <>, >, <>Business TeamMax, <>, >GIS(file transfer)Mandala, Venkata <>, <>" }, { "title": "KOL", "": "", "pageLink": "/display/GMDM/KOL", "content": "\nData pushing\n Figure 22. KOL authentication with Identity ManagerKOL system push data to using REST . To authenticate, uses external Oauth2 authorization service named Manager to fetch access token. Then system sends the REST request to integration service endpoint which validates access token using Manager API.\n\nKOL manage data for several countries. 
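The KOL data-pushing flow described above first fetches an OAuth2 access token from the external authorization service and then calls the integration service REST endpoint with it. Below is a minimal, hypothetical sketch using the JDK HTTP client; the URLs, client credentials and payload are placeholders and do not reflect the actual endpoints.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KolPushSketch {

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Fetch an access token with the client-credentials grant (placeholder URL/credentials).
        HttpRequest tokenRequest = HttpRequest.newBuilder(URI.create("https://auth.example.com/as/token.oauth2"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "grant_type=client_credentials&client_id=kol&client_secret=..."))
                .build();
        String tokenJson = http.send(tokenRequest, HttpResponse.BodyHandlers.ofString()).body();
        String accessToken = extractAccessToken(tokenJson);   // parse "access_token" from the JSON

        // 2. Call the integration service with the bearer token; the gateway validates
        //    the token before the request reaches the manager.
        HttpRequest push = HttpRequest.newBuilder(URI.create("https://hub.example.com/entities"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"type\":\"HCP\",\"country\":\"GB\"}"))
                .build();
        System.out.println(http.send(push, HttpResponse.BodyHandlers.ofString()).statusCode());
    }

    // Naive extraction to keep the sketch dependency-free; a real client would use a JSON parser.
    private static String extractAccessToken(String json) {
        int i = json.indexOf("\"access_token\"");
        int start = json.indexOf('"', json.indexOf(':', i) + 1) + 1;
        return json.substring(start, json.indexOf('"', start));
    }
}
```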
Data for many of these countries is loaded to the default MDM system (Reltio), supported by the integration service, but for , , and CA data is sent to Nucleus 360. The decision about where the data should be loaded is made by the manager's logic. Based on the Country attribute value, the manager selects the right . It is important to set the attribute value correctly when updating data. The same rule applies to the country query parameter when fetching data. Thanks to this, the manager is able to process the right data in the right MDM system. If data is updated with the attribute set incorrectly, the REST request will be rejected. When data is fetched without the country query parameter set, the default ) will be used to resolve the data.\n\nEvent processing\nThe KOL application receives events in one standard way – a Kafka topic. Events from the Reltio MDM system are published to this topic directly after has processed the changes, sent the events to and the Event Publisher has processed them. This means that changes are processed and events are sent in near real time. A client listening for events does not have to wait long to receive them.\n Figure 23. Difference between processing events in and 360. The situation changes when the entity changes are processed by Nucleus 360. That system publishes changes only periodically, so the events are delivered to the Kafka topic with a longer delay." }, { "title": " DWH", "": "", "pageLink": "/display//Japan+DWH", "content": "ContactsJapan DWH Feed Support DL:  - it is valid until 15/04/2023DL-ATP-SERVICEOPS-JPN-DATALAKE@ - it will be valid since  FlowsJapan DWH has only one batch process, which consumes the incremental file export from the data warehouse, processes it and loads the data to . This process is based on the incremental batch engine and runs on . Input filesThe input files are delivered by GIS to  UATPRODS3 service accountnot createdsvc_gbi-cc_mdm_japan_rw_s3S3 Access key IDdidn't -baiaes-eu--nprod-projectpfe-baiaes-eu--projectS3 /UAT/inbound//mdm/inbound//Input data file mask JPDWH_[0-9]+.zipJPDWH_[0-9]+.zipCompressionZipZipFormatFlat files, dedicated format Flat files, dedicated format ExampleJPDWH_20200421202224.zipJPDWH_20200421202224.zipSchedulenoneAt on every day-of-week from (0 8 * * 1-5). The input file is not delivered on 's holidays ()Airflow jobinc_batch_jp_stageinc_batch_jp_prodData mapping The detailed field mappings are presented in the . Mapping rules:Inactive HCPs, HCOs are not loaded in . They are filtered out using delete flags present in source files. Profiles being inactivated in the source are soft-deleted from Reltio. Affiliations between hospitals and departments are not delivered by the source directly. They are derived from the  file (doctor – institution association), with the department referring to a dictionary on affiliations. Each hospital in Reltio has dedicated department objects, although departments are a global dictionary in DWH. HCP addresses are copied from affiliated HCOs.  workplaces refer to departments. Departments point to Main HCOs using MainHCO relations. HCP affiliations pointing to inactive HCOs are skipped during the load, but the profiles themselves are loaded. Department names and hospital names are added to address attributes (, MainHcoName) associated with HCPs to allow searching by them. ConfigurationFlow configuration is stored in the configuration repository.
For each environment where the flow should be enabled the configuration file inc_batch_jp.yml has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_jp" has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Below table prresents the location of inc_batch_jp.yml file for and PROD env:UATPRODinc_batch_jp.yml configuration changes is done by executing the deploy 's components PsThere is no particular SOP procedure for this flow. All common SOPs was described in the "Airflow:" chapter." }, { "title": "Nucleus", "": "", "pageLink": "/display/GMDM/Nucleus", "content": "ContactsDelivering of data used by 's processes is maintained by are several batch processes that loads data extracted from MDM. Data are delivered for countries: , , , , and as zip archive available at put filesUATPRODS3 service accountdidn't createdsvc_mdm_project_nuc360_rw-s3S3 Access key IDdidn't -baiaes-eu--nprod-projectpfe-baiaes-eu--projectS3 /UAT/inbound/APAC_CCV/AU/mdm/UAT/inbound/APAC_CCV/KR/mdm/UAT/inbound/nuc360/inc-batch//mdm/UAT/inbound/nuc360/inc-batch//mdm/UAT/inbound/nuc360/inc-batch//mdm/UAT/inbound/nuc360/inc-batch/CA/mdm/inbound/nuc360/inc-batch/AU/mdm/inbound/nuc360/inc-batch/KR/mdm/inbound/nuc360/inc-batch/GB/mdm/inbound/nuc360/inc-batch//mdm/inbound/nuc360/inc-batch/DK/mdm/inbound/nuc360/inc-batch/ data file mask NUCLEUS_CCV_[0-9_]+.zipNUCLEUS_CCV_[0-9_]+.zipCompressionZipZipFormatFlat files in format Flat files in format ExampleNUCLEUS_CCV__20200609_211102.zipNUCLEUS_CCV__20200609_211102.zipSchedulenoneinc_batch_apac_ccv_au_prod - at on every day-of-week from (0 17 * * 1-5)inc_batch_apac_ccv_kr_prod - at on every day-of-week from (0 8 * * 1-5)inc_batch_eu_ccv_gb_stage - at on every day-of-week from (0 7 * * 1-5)inc_batch_eu_ccv_pt_stage - at on every day-of-week from (0 7 * * 1-5)inc_batch_eu_ccv_dk_stage - at on every day-of-week from (0 7 * * 1-5)inc_batch_amer_ccv_ca_prod - at on every day-of-week from (0 17 * * 1-5)Airflow's DAGSinc_batch_apac_ccv_au_stageinc_batch_apac_ccv_kr_stageinc_batch_eu_ccv_gb_stageinc_batch_eu_ccv_pt_stageinc_batch_eu_ccv_dk_stageinc_batch_amer_ccv_ca_stageinc_batch_apac_ccv_au_prodinc_batch_apac_ccv_kr_prodinc_batch_eu_ccv_gb_stageinc_batch_eu_ccv_pt_stageinc_batch_eu_ccv_dk_stageinc_batch_amer_ccv_ca_prodData mappingData mapping is described in the following nfigurationFlows configuration is stored in configuration repository. For each environment where the flows should be enabled configuration files has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Below table presents the location of flows configuration files for and PROD env:Flow configuration deploy changes of 's configuration you have to execute SOP Deploying DAGsSOPsThere is no particular SOP procedure for this flow. All common SOPs was described in the "Airflow:" chapter." }, { "title": "Veeva New Zealand", "": "", "pageLink": "/display/GMDM/Veeva+New+Zealand", "content": "ContactsDL- flow transforms the 's data to model and loads the result to . 
Data contains HCPs and HCOs from is flow is divided into two steps:Pre-proccessing - Copying source files from 's bucket, filtering once and uploading result to HUB's bucket,Incremental batch - Running the standard incremental batch process.Each of these steps are realized by separated 's put filesUATPRODVeeva's service accountSRVC-MDMHUB_GBL_NONPRODSRVC-MDMHUB_GBLVeeva's bucketapacdatalakeprcaspasp55737apacdatalakeprcaspasp63567Veeva's bucket regionap-southeast-1ap-southeast-1Veeva's Folderproject_kangaroo/landing/veeva/sf_account/project_kangaroo/landing/veeva/sf_address_vod__c/project_kangaroo/landing/veeva/sf_child_account_vod__c/project_kangaroo/landing/veeva/sf_account/project_kangaroo/landing/veeva/sf_address_vod__c/project_kangaroo/landing/veeva/sf_child_account_vod__c/'s Input data file mask * (all files inside above folders)* (all files inside above 's Input data file compressionnonenoneHUB's Bucketpfe-baiaes-eu--nprod-projectpfe-baiaes-eu--projectHUB's /UAT/inbound/APAC_VEEVA/mdm/inbound/APAC_PforceRx/'s input data file maskin_nz_[0-9]+.zipin_nz_[0-9]+.zipHUS's input data file compressionZipZipSchedule (is set only for pre-processing DAG)noneAt on every day-of-week from (0 8 * * 1-5)Pre-processing 's DAGinc_batch_apac_veeva_wrapper_stageinc_batch_apac_veeva_wrapper_prodIncremental batch 's DAGinc_batch_apac_veeva_stageinc_batch_apac_veeva_prodData mappingData mapping is described in the following nfigurationConfiguration of this flow is defined in two configuration files. First of these inc_batch_apac_veeva_wrapper.yml specifies the pre-processing DAG configuration and the second inc_batch_apac_veeva.yml defines configuration of DAG for standard incremental batch process. To activate the flow on environment files should be created in the following location inventory/[env name]/group_vars/gw-airflow-services/ and batch names "inc_batch_apac_veeva_wrapper" and "inc_batch_apac_veeva" have to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made in configuration are applied on environment by running low table presents the location of flows configuration files for and PROD env:Configuration fileUATPRODinc_batch_apac_veeva_wrapper.yml is no dedicated SOP procedures for this flow. However, you must remember that this flow consists of two DAGs which both have to finish l common SOPs was described in the "Incremental batch flows: SOP" chapter." }, { "title": "ODS", "": "", "pageLink": "/display/GMDM/ODS", "content": " - APAC ODS  - EU ODS , <>; velmurugan, Aarthi <> - AMER ODS SupportFlowThe flow transforms the 's data to model and loads the result to . 
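The Veeva New Zealand flow above begins with a pre-processing step that copies the source extracts, filters them and repackages the result into a single in_nz_<timestamp>.zip archive for the standard incremental batch. The sketch below shows that repackaging step against a local filesystem only; the real DAG works with S3 buckets, and the paths and filter predicate are placeholders.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class VeevaPreprocessSketch {

    public static void main(String[] args) throws IOException {
        Path sourceDir = Paths.get("/tmp/veeva-landing");                 // downloaded source extracts
        Path archive = Paths.get("/tmp/hub-inbound/in_nz_20240101.zip");  // HUB input file mask: in_nz_[0-9]+.zip
        Files.createDirectories(archive.getParent());

        try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(archive));
             DirectoryStream<Path> files = Files.newDirectoryStream(sourceDir, "*.csv")) {
            for (Path file : files) {
                if (!keep(file)) {
                    continue;                         // the filtering step mentioned above
                }
                zip.putNextEntry(new ZipEntry(file.getFileName().toString()));
                Files.copy(file, zip);
                zip.closeEntry();
            }
        }
    }

    // Placeholder filter; the real flow keeps only the files/records relevant for NZ.
    private static boolean keep(Path file) {
        return true;
    }
}
```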
Data contains HCPs and HCOs from: , ID, IN, MY, PH, PK, , , , , , , , , , , , , , PM, RE, , , , , RS is flow is divided into two steps:Pre-proccessing - Copying source files from 's bucket and then uploading these to HUB's bucket,Incremental batch - Running the standard incremental batch process.Each of these steps are realized by separated 's put filesUAT APACUAT EUPROD APACPROD EUSupported countriesHK, ID, IN, MY, PH, PK, , , , , , , , , , , , , PM, RE, , , , , , ID, IN, MY, PH, PK, , , , , , , , , , , , , PM, RE, , , , , service bucketapacdatalakeintaspasp100939apacdatalakeintaspasp100939apacdatalakeintaspasp104492pfe-gbi-eu--prod-partner-internalODS folder/-odsd-file-extracts/gateway/GATEWAY/ODS/PROD/GCMDM/ODS Input data file mask ****ODS Input data file compressionzipzipzipzipHUB's -baiaes-eu--nprod-projectpfe-baiaes-eu--nprod-projectpfe-baiaes-eu--projectpfe-baiaes-eu--projectHUB's /UAT/inbound/ODS//mdm/UAT/inbound/ODS//mdm/inbound/ODS//mdm/inbound/ODS//'s input data file mask****HUS's input data file compressionzipzipzipzipPre-processing 's DAGmove_ods_apac_export_stagemove_ods_eu_export_stagemove_ods_apac_export_prodmove_ods_eu_export_prodPre-processing 's DAG schedulenonenone0 6 * * 1-50 7 * * 2  (At on batch 's batch 's DAG schedulenonenone0 8 * * 1-50 8 * * 2 (At on Tuesday.)Data mappingData mapping is described in the following nfigurationConfiguration of this flow is defined in two configuration files. First of these specifies the pre-processing DAG configuration and the second inc_batch_apac_ods.yml defines configuration of DAG for standard incremental batch process. To activate the flow on environment files should be created in the following location inventory/[env name]/group_vars/gw-airflow-services/ and batch names "move_ods_apac_export" and "inc_batch_apac_ods" have to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made in configuration are applied on environment by running components low table presents the location of flows configuration files for and PROD env:Configuration fileUATPRODmove_ods_apac_export.yml is no dedicated SOP procedures for this flow. However, you must remember that this flow consists of two DAGs which both have to finish l common SOPs was described in the "Incremental batch flows: SOP" chapter." }, { "title": "", "": "", "pageLink": "/display/GMDM/", "content": "ACLsNameGateway User NameAuthenticationPing UserRolesCountriesSourcesTopicChina client "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"- "UPDATE_HCP"- "GET_ENTITIES"- CN- "CN3RDPARTY"- "MDE"- "FACE"- "EVR"- dev-out-full-mde-cn- stage-out-full-mde-cn- dev-out-full-mde-cnContactsQianRu. generation process ( DCR)[.1] update processesReportsReports" }, { "title": "Corrective batch process for EVR", "": "", "pageLink": "/display//Corrective+batch+process+for+EVR", "content": "Corrective batch process for fixes data using standard incremental batch mechanism. The process gets data from csv file, transforms to json model and loads to Reltio. 
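The corrective batch described above reads a CSV extract, converts each row into the Reltio JSON model and stamps the change with an EVR crosswalk whose value equals the Reltio id. A minimal Jackson-based sketch of that per-row transformation follows; the column layout, attribute names and crosswalk field names are assumptions for illustration, not the actual evr_corrective_file format.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class EvrCorrectiveRowSketch {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        // One CSV row: reltioId,name,title,subTypeCode (illustrative layout).
        String row = "entities/123,Zhang Wei,Dr,PHYSICIAN";
        String[] cols = row.split(",", -1);

        ObjectNode entity = MAPPER.createObjectNode();
        entity.put("type", "configuration/entityTypes/HCP");

        ObjectNode attributes = entity.putObject("attributes");
        attributes.putArray("Name").addObject().put("value", cols[1]);
        attributes.putArray("Title").addObject().put("value", cols[2]);
        attributes.putArray("SubTypeCode").addObject().put("value", cols[3]);

        // Crosswalk of type EVR whose value is the Reltio id, so corrective
        // changes remain easy to trace afterwards (field names illustrative).
        ObjectNode crosswalk = entity.putArray("crosswalks").addObject();
        crosswalk.put("type", "configuration/sources/EVR");
        crosswalk.put("value", cols[0]);
        crosswalk.put("sourceTable", "corrective");

        System.out.println(MAPPER.writerWithDefaultPrettyPrinter().writeValueAsString(entity));
    }
}
```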
During loading of changes, the following 's attributes can be changed: Name, Title, SubTypeCode, , Specific Workplace can be ignored or its  can be changed, Specific  can be . The load saves the changes in Reltio under a crosswalk where: the type of the crosswalk is EVR, the crosswalk's value is the same as the Reltio id, the crosswalk's source table is "corrective". Thanks to this, it is easy to find the changes that were made by this flow. Input filesThe input files are delivered to the bucket:UATPRODInput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectInput /UAT/inbound//EVR/mdm/inbound//EVR/Input data file mask evr_corrective_file_[0-9]*.zipevr_corrective_file_[0-9]*.zipCompressionzipzipFormatFlat files in format Flat files in format Exampleevr_corrective_file_20201109.zipevr_corrective_file_20201109.zipSchedulenonenoneAirflow's DAGSinc_batch_china_evr_stageinc_batch_china_evr_prodData mappingThe mapping from CSV to 's json was described in this document: evr_corrective_file_format_new.xlsxAn example file presenting input data: evr_corrective_file_20221215.csvConfigurationThe flow configuration is stored in the configuration repository. For each environment where the flow should be enabled, the configuration file inc_batch_china_evr.yml has to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_china" has to be added to the "airflow_components" list which is defined in the file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the flow configuration files for the UAT and PROD environments:UATPRODSOPsThere is no particular SOP procedure for this flow. All common SOPs were described in the "Incremental batch flows: SOP" chapter." }, { "title": "Reports", "": "", "pageLink": "/display/GMDM/Reports", "content": "Daily ReportsThere are 4 reports whose preparation is triggered by the china_generate_reports_[env] DAG. It starts all dependent report DAGs and then waits for the files they publish on .
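As noted above, the parent report DAG waits until every dependent DAG has published its file and only then sends the e-mail; the SOP section further down shows the timeout error raised when a file never appears. The following plain-Java sketch illustrates such a wait-with-timeout loop; the directory, file masks and timeout are placeholders (the real check runs against S3).

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class WaitForReportsSketch {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path outputDir = Paths.get("/tmp/china_reports/daily");        // placeholder for the S3 output prefix
        List<String> expectedMasks = List.of(
                "china_hcp_by_source_report_.*\\.xlsx",
                "china_total_entities_report_.*\\.xlsx");
        long timeoutMs = 300_000;                                      // 300 s, as in the SOP error message
        long deadline = System.currentTimeMillis() + timeoutMs;

        while (!allPresent(outputDir, expectedMasks)) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("ERROR: Elapsed time. Timeout exceeded: " + timeoutMs / 1000);
            }
            Thread.sleep(10_000);                                      // poke again in 10 s
        }
        System.out.println("All report files delivered - sending the e-mail would happen here.");
    }

    private static boolean allPresent(Path dir, List<String> masks) throws IOException {
        if (!Files.isDirectory(dir)) {
            return false;
        }
        for (String mask : masks) {
            boolean found = false;
            try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
                for (Path file : files) {
                    if (file.getFileName().toString().matches(mask)) {
                        found = true;
                        break;
                    }
                }
            }
            if (!found) {
                return false;
            }
        }
        return true;
    }
}
```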
When all required files are delivered to , DAG sents the email with generted reports to all configured ina_generate_reports_[env]|-- china_import_and_gen_dcr_statistics_report_[env] |-- import_pfdcr_from_reltio_[env] +-- china_dcr_statistics_report_[env]|-- china_import_and_gen_merge_report_[env] |-- import_merges_from_reltio_[env] +-- china_merge_report_[env]|-- china_total_entities_report_[env]+-- china_hcp_by_source_report_[env]Daily DAGs are triggered by DAG china_generate_reportsUATPRODParent at applied to all reports: by source reportThe Report shows how many HCPs was delivered to MDM by specific e Output  files are delivered to bucket:-baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_hcp_by_source_report_.*.xlsxchina_hcp_by_source_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_hcp_by_source_report_20201113093437.xlsxchina_hcp_by_source_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_hcp_by_source_report_stagechina_hcp_by_source_report_prodReport Templatechina_hcp_by_source_template.xlsxMongo scripthcp_by_source_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionSourceThe source which delivered HCPHCPNumber of all HCPs which has the sourceDaily IncrementalNumber of HCPs modified last utc tal entities reportThe report shows total entities count, grouped by entity type, theirs validation status and speaker e Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_total_entities_report_.*.xlsxchina_total_entities_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_total_entities_report_20201113093437.xlsxchina_total_entities_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_total_entities_report_stagechina_total_entities_report_prodReport Templatechina_total_entities_template.xlsxMongo scripttotal_entities_report.jsApplied filters"country" : "CN""status": "ACTIVE"Report fields description:ColumnDescriptionTotal_Hospital_MDMNumber of total hospital MDMTotal_Dept_MDMNumber of total department MDMTotal_HCP_MDMNumber of total HCP MDMValidated_HCPNumber of validated HCPPending_HCPNumber of pending HCPNot_Validated_HCPNumber of validated HCPOther_Status_HCP?Number of with other statusTotal_Speaker Number of total speakersTotal_Speaker_EnabledNumber of enabled speakersTotal_Speaker_DisabledNumber of disabled statistics reportThe report shows statistics about data change requests which were created in . Generating of this report is divided into two steps:Importing PfDataChengeRequest data from - this step is realized by import_pfdcr_from_reltio_[env] DAG. It schedules export data in using Export Entities operation and then waits for result. After export file is ready, DAG load its content to mongo,Generating report - generates report based on proviosly imported data. This step is perform by china_dcr_statistics_report_[env] h of above steps are run sequentially by china_import_and_gen_dcr_statistics_report_[env] DAG. 
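The daily HCP-by-source report above is produced by a Mongo script (hcp_by_source_report.js) that counts active Chinese HCPs per source. An equivalent aggregation expressed with the Java MongoDB driver is sketched below purely to illustrate the grouping; the database, collection and field names are assumptions, not the schema the actual script uses.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import org.bson.Document;

import java.util.List;

public class HcpBySourceReportSketch {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> entities = client
                    .getDatabase("hubstore")            // placeholder database name
                    .getCollection("entityHistory");    // placeholder collection name

            // Same filters as listed for the report (country CN, HCP entity type, ACTIVE status),
            // then one output row per source with the number of HCPs it delivered.
            for (Document row : entities.aggregate(List.of(
                    Aggregates.match(Filters.and(
                            Filters.eq("country", "CN"),
                            Filters.eq("entityType", "configuration/entityTypes/HCP"),
                            Filters.eq("status", "ACTIVE"))),
                    Aggregates.group("$source", Accumulators.sum("hcp", 1))))) {
                System.out.println(row.toJson());
            }
        }
    }
}
```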
The Output  files are delivered to bucket:-baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_dcr_statistics_report_.*.xlsxchina_dcr_statistics_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_dcr_statistics_report_20201113093437.xlsxchina_dcr_statistics_report_20201113093437.xlsxAirflow's DAGSchina_dcr_statistics_report_stagechina_dcr_statistics_report_prodReport Templatechina_dcr_statistics_template.xlsxMongo scriptchina_dcr_statistics_report.jsApplied filtersThere are no additional conditions applied to select dataReport fields description:ColumnDescriptionTotal_DCR_MDMTotal number of DCRsNew_HCP_DCRTotal number of DCRs of type NewHCPNew_HCO_L1_DCRTotal number of DCRs of type number of DCRs of type NewHCOL2MultiAffil_DCRTotal number of DCRs of type MultiAffilNew_HCP_DCR_CompletedTotal number of DCRs of type which have completed statusNew_HCO_L1_DCR_CompletedTotal number of DCRs of type NewHCOL1 which have completed statusNew_HCO_L2_DCR_CompletedTotal number of DCRs of type which have completed statusMultiAffil_DCR_CompletedTotal number of DCRs of type which have completed statusNew_HCP_AcceptTotal number of DCRs of type which were acceptedNew_HCP_UpdateTotal number of DCRs of type which were updated during responding for theseNew_HCP_MergeTotal number of DCRs of type which were accepted and response had entities to mergeNew_HCP_MergeUpdateTotal number of DCRs of type which were updated and response had entities to mergeNew_HCP_RejectTotal number of DCRs of type which were rejectedNew_HCP_CloseTotal number of closed DCRs of type NewHCPAffil_AcceptTotal number of DCRs of type which were acceptedAffil_RejectTotal number of DCRs of type which were rejectedAffil_AddTotal number of DCRs of type which data were updated during respondingMultiAffil_DCR_CloseTotal number of closed DCRs of type MultiAffilNew_HCO_L1_UpdateTotal number of closed DCRs of type NewHCOL1 which data were updated during respondingNew_HCO_L1_RejectTotal number of rejected DCRs of type NewHCOL1 New_HCO_L1_CloseTotal number of closed DCRs of type NewHCOL1 New_HCO_L2_AcceptTotal number of accepted DCRs of type NewHCOL2New_HCO_L2_UpdateTotal number of DCRs of type which data were updated during respondingNew_HCO_L2_RejectTotal number of rejected DCRs of type number of closed DCRs of type NewHCOL2New_HCP_DCR_OpenedTotal number of opend DCRs of type NewHCPMultiAffil_DCR_OpenedTotal number of opend DCRs of type number of opend DCRs of type number of opend DCRs of type number of failed DCRs of type NewHCPMultiAffil_DCR_FailedTotal number of failed DCRs of type number of failed DCRs of type NewHCOL1New_HCO_L2_DCR_FailedTotal number of failed DCRs of type NewHCOL2Merge reportThe report shows statistics about merges which were occurred in . Generating of this report, similar to statistics report, is divided into two steps:Importing merges data from - this step is performed by import_merges_from_reltio_[env] DAG. It schedules export data in unsing Export Merge Tree operation and then waits for result. After export file is ready, loads its content to mongo,Generating report - generates report based on previously imported data. This step is performed by china_merge_report_[env] h of above steps are run sequentially by china_import_and_gen_merge_report_[env] DAG. 
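Each report above is rendered as an .xlsx file based on a template workbook (the china_*_template.xlsx files listed in the tables). The sketch below shows, with Apache POI, how such a template might be filled with the counts returned by the Mongo script; the sheet layout, cell positions and values are invented for illustration.

```java
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Map;

public class ReportTemplateSketch {

    public static void main(String[] args) throws Exception {
        // Illustrative counts as they might come back from the Mongo script.
        Map<String, Long> hcpBySource = Map.of("ONEKEY", 1200L, "EVR", 340L);

        try (FileInputStream template = new FileInputStream("/tmp/china_hcp_by_source_template.xlsx");
             XSSFWorkbook workbook = new XSSFWorkbook(template)) {

            Sheet sheet = workbook.getSheetAt(0);
            int rowIndex = 1;                         // assume row 0 holds the header
            for (Map.Entry<String, Long> entry : hcpBySource.entrySet()) {
                Row row = sheet.createRow(rowIndex++);
                row.createCell(0).setCellValue(entry.getKey());     // Source column
                row.createCell(1).setCellValue(entry.getValue());   // HCP count column
            }

            try (FileOutputStream out = new FileOutputStream("/tmp/china_hcp_by_source_report_20240101.xlsx")) {
                workbook.write(out);
            }
        }
    }
}
```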
The Output  files are delivered to bucket:-baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_merge_report_.*.xlsxchina_merge_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_merge_report_20201113093437.xlsxchina_merge_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_import_and_gen_merge_report_stagechina_import_and_gen_merge_report_prodReport Templatechina_daily_merges_template.xlsxMongo scriptmerge_report.jsApplied filters"country" : "CN"Report fields description:ColumnDescriptionDateDate when merges occurredDaily_Merge_HosptialTotal number of merges on HCODaily_Merge_HCPTotal number of merges on number of manual merges on number of manual merges on are 8 reports. All of them are triggered by china_monthly_generate_reports_[env] which then waits for files, generated and published to bucket by each depended DAGs. When all required files exist on , prepares the email with all files and sents this defined ina_monthly_generate_reports_[env]|-- china_monthly_hcp_by_SubTypeCode_report_[env]|-- china_monthly_hcp_by_channel_report_[env]|-- china_monthly_hcp_by_city_type_report_[env]|-- china_monthly_hcp_by_department_report_[env]|-- china_monthly_hcp_by_gender_report_[env]|-- china_monthly_hcp_by_hospital_class_report_[env]|-- china_monthly_hcp_by_province_report_[env]+-- china_monthly_hcp_by_source_report_[env]Monthly DAGs are triggered by DAG china_monthly_generate_reportsUATPRODParent DAGchina_monthly_generate_reports_stagechina_monthly_generate_reports_prodHCP by source reportThe report shows how many HCPs were delivered by specific e Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_source_report_.*.xlsxchina_monthly_hcp_by_source_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_source_report_20201113093437.xlsxchina_monthly_hcp_by_source_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_source_report_stagechina_monthly_hcp_by_source_report_prodReport Templatechina_monthly_hcp_by_source_template.xlsxMongo scriptmonthly_hcp_by_source_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionSourceSource that delivered HCPHCPNumber of all HCPs which has the by channel reportThe report presents amount of HCPs which were delivered to MDM through specific e Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_channel_report_.*.xlsxchina_monthly_hcp_by_channel_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_channel_report_20201113093437.xlsxchina_monthly_hcp_by_channel_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_channel_report_stagechina_monthly_hcp_by_channel_report_prodReport Templatechina_monthly_hcp_by_channel_template.xlsxMongo scriptmonthly_hcp_by_channel_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionChannelChannel nameHCPNumber of all HCPs which match the by SubTypeCode reportThe report presents HCPs grouped by its Medical Title 
(SubTypeCode)The Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_SubTypeCode_report_.*.xlsxchina_monthly_hcp_by_SubTypeCode_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsxchina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_SubTypeCode_report_stage china_monthly_hcp_by_SubTypeCode_report_prodReport Templatechina_monthly_hcp_by_SubTypeCode_template.xlsxMongo scriptmonthly_hcp_by_SubTypeCode_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionMedical TitleMedical Title (SubTypeCode) of HCPHCPNumber of all HCPs which match the medical titleHCP by city type reportThe report shows amount of which works in specific city type. Type of city in not avaiable in data. To know what is type of specific citys report uses additional collection chinaGeography which has mapping between city's name and its type. Data in the collection can be updated on request of 's e Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_city_type_report_.*.xlsxchina_monthly_hcp_by_city_type_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_city_type_report_20201113093437.xlsxchina_monthly_hcp_by_city_type_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_city_type_report_stage china_monthly_hcp_by_city_type_report_prodReport Templatechina_monthly_hcp_by_city_type_template.xlsxMongo scriptmonthly_hcp_by_city_type_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionCity TypeCity Type taken from chinaGeography collection which match lueHCPNumber of all HCPs which match the city typeHCP by department reportThe report presents the HCPs grouped by department where they e Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_department_report_.*.xlsxchina_monthly_hcp_by_department_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_department_report_20201113093437.xlsxchina_monthly_hcp_by_department_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_department_report_stage china_monthly_hcp_by_department_report_prodReport Templatechina_monthly_hcp_by_department_template.xlsxMongo scriptmonthly_hcp_by_department_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionDeptDepartment's nameHCPNumber of all HCPs which match the deptHCP by gender reportThe report presents the HCPs grouped by e Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_gender_report_.*.xlsxchina_monthly_hcp_by_gender_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel 's 
DAGSchina_monthly_hcp_by_gender_report_stage china_monthly_hcp_by_gender_report_prodReport Templatechina_monthly_hcp_by_gender_template.xlsxMongo scriptmonthly_hcp_by_gender_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionGenderGenderHCPNumber of all HCPs which match the genderHCP by hospital class reportThe report presents the HCPs grouped by theirs e Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask  xlsxMicrosoft DAGSchina_monthly_hcp_by_hospital_class_report_stage china_monthly_hcp_by_hospital_class_report_prodReport Templatechina_monthly_hcp_by_hospital_class_template.xlsxMongo scriptmonthly_hcp_by_hospital_class_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionClassClassificationHCPNumber of all HCPs which match the classHCP by province reportThe report presents the HCPs grouped by province where they e Output  files are delivered to bucketUATPRODOutput -baiaes-eu--nprod-projectpfe-baiaes-eu--projectOutput /outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_province_report_.*.xlsxchina_monthly_hcp_by_province_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_province_report_20201113093437.xlsxchina_monthly_hcp_by_province_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_province_report_stage china_monthly_hcp_by_province_report_prodReport Templatechina_monthly_hcp_by_province_template.xlsxMongo scriptmonthly_hcp_by_province_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ProvinceName of provinceHCPNumber of all HCPs which match the ProvinceSOPsHow can I check the status of generating reports?Status of generating reports can be chacked by verification of task statuses on main DAGs - china_generate_reports_[env] for reports or china_monthly_generate_reports_[env] for reports. Both of these DAGs have task "sendEmailReports" which waits for files generated by dependent DAGs. If required files are not published to in confgured amount of time, the task will fail with following message:\n[ 12:12:54,085] {{docker_:252}} INFO - Caught: timeException: ERROR: Elapsed time . Timeout exceeded: 300\n[ 12:12:54,086] {{docker_:252}} timeException: ERROR: Elapsed time . Timeout exceeded: 300\n[ 12:12:54,086] {{docker_:252}} INFO - at tListOfFilesLoop(oovy:221)\n\tat cessReport(oovy:257)\n[ 12:12:54,290] {{docker_:252}} INFO - at (oovy:279)\n[ 12:12:55,552] {{:1058}} ERROR - docker container failed: {'StatusCode': 1}\nIn this case you have to check the status of all dependent DAGs to find the reason on failure, resolve the issue and retry all failed tasks starting by tasks in dependend DAGs and finishing by task in main DAG.Daily reports failed due to error durign importing data from . What to do?If you are able to see that DAGs import_pfdcr_from_reltio_[env] or import_merges_from_reltio_[env] in failed state, it probably means that export data from took longer then usual. To confirm this supposing you have to show details of importing DAG and check status of waitingForExportFile task. 
If it has failed state and in the logs you can see following messages:\n[ 12:09:10,957] {{s3_key_:88}} INFO - Poking for key : ://pfe-baiaes-eu--project/mdm/reltio_exports/merges_from_reltio_20201204T000718/_SUCCESS\n[ 12:09:11,074] {{:1047}} ERROR - Snap. Time is OUT.\nTraceback (most recent call last):\n File "/usr/local/lib/python3.7/site-packages/airflow/models/", line 922, in _run_raw_task\n result = task_copy.execute(context=context)\n File "/usr/local/lib/python3.7/site-packages/airflow/sensors/base_sensor_", line 116, in execute\n raise . is OUT.')\rflowSensorTimeout: Snap. is ] {{:1078}} INFO - Marking task as FAILED.\nYou can be pretty sure that the export is still processed on side. You can confirm this by using tasks api. If on the returned list you are able to see tasks in processing state, it means that still works on this export. To fix this issue in DAG you have to restart the failed task. The will start checking existance of export file once agine." }, { "title": "CDW ()", "": "", "pageLink": "/pages/tion?pageId=", "content": ", <>Balan, Sakthi <>, >GatewayAMER(manager)NameGateway User NameAuthenticationPing UserRolesCountriesDefaultCountrySourcesTopicCDW user (NPROD)cdwExternal OAuth2CDW-MDM_client["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]["US"]["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK","IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX","MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS ( user (PROD)cdwExternal OAuth2CDW-MDM_client["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]["US"]["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK","IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX","MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)","EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP","867","MYOV_VVA","COMPANY_ACCTS"]FlowsFlowDescriptionSnowflake: Events publish flowEvents are published to snowflakeSnowflake: Base tables refreshTable is refreshed ( in prod) with those eventsSnowflake MDMTable are read by an process implemented by used flag on addressesCDW docs: Best Address Data flowClient software  " }, { "title": " (GBLUS)", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsNayan, >, >ACLsNameGateway User NameAuthenticationPing UserRolesCountriesSourcesTopicBatchesETL batch load usermdmetl_nprodOAuth2SVC-MDMETL_client- "CREATE_HCP"- "CREATE_HCO"- "CREATE_MCO"- "CREATE_BATCH"- "GET_BATCH"- "MANAGE_STAGE"- "CLEAR_CACHE_BATCH"US- "SHS"- "SHS_MCO"- "IQVIA_MCO"- "CENTRIS"- "ENGAGE 1.0"- "GRV"- "IQVIA_DDD"- "SAP"- "ONEKEY"- "IQVIA_RAWDEA"- "IQVIA_PDRP"- "COV"- "IQVIA_RX"- "MILLIMAN_MCO"- "ICUE"- "KOL_OneView"- "SHS_RX"- "MMIT"- "INTEGRICHAIN"N/Abatches: "Symphony": - "" "Centris": - "" "IQVIA_DDD": - "HCOLoading" - "RelationLoading" "SAP": - "HCOLoading" "": - "" - "HCOLoading" - "RelationLoading" "IQVIA_RAWDEA": - "" "IQVIA_PDRP": - "" "PFZ_CUSTID_SYNC": - "COMPANYCustIDLoading" "OneView": - "HCOLoading" "HCPM": - "" "SHS_MCO": - "MCOLoading" - "RelationLoading" "IQVIA_MCO": - "MCOLoading" - "RelationLoading" "": - "" "MILLIMAN_MCO": - "MCOLoading" - "RelationLoading" "VEEVA": - "" - "HCOLoading" - "MCOLoading" - 
"RelationLoading" "SHS_RX": - "" "MMIT": - "MCOLoading" - "RelationLoading" "DDD_SAP": - "RelationLoading" "INTEGRICHAIN": - "HCOLoading"L Get/Resubmit Errorsmdmetl_nprodOAuth2SVC-MDMETL_client- "GET_ERRORS"- "RESUBMIT_ERRORS"USALLN/AN/AFlowsBatch Controller: creating and updating batch instance - the user invokes the batch-service to create a new batch instanceBulk Service: loading bulk data - the user invokes the batch-service to load the dataAfter load, the processing starts - ETL BatchesClient software  data loaderSOPsAdding a New BatchCache Address ID Clear (Remove Duplicates) ProcessCache Address ID Update ProcessManager: Resubmitting Failed RecordsSOP in ClearUpdating ETL Dictionaries in ConsulUpdating Dictionary" }, { "title": "KOL_ONEVIEW (GBLUS)", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsBrahma, Bagmita <>, >, >DL DL-iMed_L3@ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicKOL_OneView userkol_oneviewOAuth2KOL-MDM-PFORCEOL_client- "CREATE_HCP"- "UPDATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"- "LOOKUPS"USKOL_OneViewN/AKOL_OneView TOPICN/AKafka JassN/A"(conciliationTarget==null || conciliationTarget == 'KOL_ONEVIEW') .headers.eventType in ['full' [' .headers.objectType in ['', 'HCO']"USKOL_OneViewprod-out-full-koloneview-allFlowsCreate/Update /MCOGet software  connector" }, { "title": " ()", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsBablani, >, >, <>, >, >, >, <>ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicGRV UsergrvOAuth2GRV-MDM_client- "GET_ENTITIES"- "LOOKUPS"- "VALIDATE_HCP"- "CREATE_HCP"- "UPDATE_HCP"US- "GRV"N/AGRV-AIS-MDM Usergrv_aisOAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"- "LOOKUPS"- "VALIDATE_HCP"- "CREATE_HCP"- "UPDATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"US- "GRV"- "CENTRIS"- "ENGAGE"N/AGRV TOPICN/AKafka JassN/ in ['full_not_trimmed'] && ['GRV'].intersect(ource) .headers.objectType in [' in ['HCP_CHANGED']"USGRVprod-out-full-grv-allFlowsCreate/Update /MCOGet software  connector" }, { "title": " (GBLUS)", "": "", "pageLink": "/pages/", "content": ".anley@ACLsNameGateway User NameAuthenticationPing UsergraceOAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"- "LOOKUPS"US- "GRV"- "CENTRIS"- "ENGAGE"N/ software  - read only" }, { "title": "KOL_ONEVIEW (, , )", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsDL--INF_Support_PforceOL@Solanki, ( - Mumbai) <>, Maanasa ( - Hyderabad) <>ACLsEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_clientKOL-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL","BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI","CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK","DO","DZ","EC","EG","ES","ET","FI","FO","FR","GA","GB","GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN","IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW","LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ","MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM","PA","PE","PF","PL","PM","PT","PY","QA","RE","RU","RW","SA","SD","SE","SL","SM","SN","SV","SY","SZ","TD","TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE","YT","ZA","ZM","ZW"]GB- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal 
OAuth2KOL-MDM-PFORCEOL_clientKOL-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL","BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI","CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK","DO","DZ","EC","EG","ES","ET","FO","FR","GA","GB","GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN","IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW","LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ","MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM","PA","PE","PF","PL","PM","PT","PY","QA","RE","RU","RW","SA","SD","SL","SM","SN","SV","SY","SZ","TD","TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE","YT","ZA","ZM","ZW"]GB- "KOL_OneView"AMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AR","BR","CA","MX","UY"]CA- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AR","BR","CA","MX","UY"]CA- "KOL_OneView"APACNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AU","IN","KR","NZ","JP"]JP- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AU","IN","KR","NZ","JP"]JP- "KOL_OneView"KafkaEMEAEnvNameKafka routing ruleTopicPartitionsemea-prodKol_oneviewkol_oneview"(conciliationTarget==null || conciliationTarget == 'KOL_ONEVIEW') .headers.eventType in ['full'] && [') && .headers.objectType in ['', ''] && untry in ['ie', 'gb']"-${env}-out-full-koloneview-all3emea-devKol_oneviewkol_oneview-${env}-out-full-koloneview-all3emea-qaKol_oneviewkol_oneview-${env}-out-full-koloneview-all3emea-stageKol_oneviewkol_oneview-${env}-out-full-koloneview-all3AMEREnvNameKafka routing ruleTopicPartitionsgblus-prodKol_oneviewkol_oneview"(conciliationTarget==null || conciliationTarget == 'KOL_OneView') && .headers.eventType in ['full' [' .headers.objectType in ['', 'HCO']"-${env}-out-full-koloneview-all3gblus-devKol_oneviewkol_oneview-${env}-out-full-koloneview-all3gblus-qaKol_oneviewkol_oneview-${env}-out-full-koloneview-all3gblus-stageKol_oneviewkol_oneview-${env}-out-full-koloneview-all3" }, { "title": " (, )", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsTODOGatewayEMEANameGateway User NameAuthenticationPing UserRolesCountriesDefaultCountrySourcesTopicGRV user (NPROD)grvExternal OAuth2GRV-MDM_client- GET_ENTITIES- LOOKUPS- VALIDATE_HCP["CA"]GBGRVN/AGRV user (PROD)grvExternal OAuth2GRV-MDM_client- GET_ENTITIES- LOOKUPS- VALIDATE_HCP["CA"]GBGRVN/AAMER(manager)NameGateway User NameAuthenticationPing UserRolesCountriesDefaultCountrySourcesTopicGRV user (NPROD)grvExternal OAuth2GRV-MDM_client["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]["US"]GRVN/AGRV user (PROD)grvExternal OAuth2GRV-MDM_client["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]["US"]GRVN/AKafkaAMEREnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsgblus-prodGrvgrv"(conciliationTarget==null) && .headers.eventType in ['full_not_trimmed'] && ['GRV'].intersect(ource) && .headers.objectType in [' in 
['HCP_CHANGED']"- ${env}-out-full-grv-allgblus-devGrvgrv- ${local_env}-out-full-grv-allgblus-qaGrvgrv- ${local_env}-out-full-grv-allgblus-stageGrv grv- ${local_env}-out-full-grv-all" }, { "title": " (Global, , , )", "": "", "pageLink": "/pages/tion?pageId=", "content": ", ()GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["AD", "", "AI", "AM", "AN","AR", "AT", "AU", "AW", "BA","BB", "BE", "BG", "BL", "BM","BO", "", "BR", "BS", "BY","BZ", "CA", "CH", "CL", "CN","CO", "CP", "CR", "CW", "CY","CZ", "DE", "DK", "DO", "DZ","EC", "EE", "EG", "ES", "FI","FO", "FR", "", "GF", "GP","GR", "GT", "", "HK", "HN","HR", "", "ID", "IE", "IL","IN", "IT", "JM", "JP", "KR","KY", "KZ", "LC", "LT", "LU","LV", "MA", "MC", "MF", "MQ","MU", "MX", "MY", "", "NI","NL", "NO", "", "PA", "PE","PF", "PH", "PK", "PL", "PM","PN", "PT", "PY", "RE", "RO","RS", "RU", "", "SE", "SG","SI", "SK", "", "", "", "", "TR", "TT", "TW","UA", "UY", "VE", "VG", "VN","WF", "", "YT", "ZA"]GBGRVN/AAMERAction RequiredUser configurationPingFederate UsernameGANT-MDM_clientCountriesBrazilTenantAMEREnvironments (PROD/NON-PROD/ALL)ALLAPI Servicesext-api-gw-amer-stage/entities,  ext-api-gw-amer-stage/,, we are fetching hcp data from , Earlier It was instanceNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["BR"]BR- ONEKEY- CRMMI- MAPPN/AAPACAction RequiredUser configurationPingFederate UsernameGANT-MDM_clientCountriesIndiaTenantAPACEnvironments (PROD/NON-PROD/ALL)ALLAPI Servicesext-api-gw-apac-stage/entities,  ext-api-gw-apac-stage/,, we are fetching hcp data from , Earlier It was instanceNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["IN"]IN- ONEKEY- CRMMI- MAPPN/A" }, { "title": "Medic (, , )", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsDL-F&BO-MEDIC@GatewayEMEANameGateway User NameAuthenticationPing  user (NPROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IE["MEDIC"]Medic user (PROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IE["MEDIC"]AMERNameGateway User NameAuthenticationPing   user (NPROD)medicExternal ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●[" (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]Medic user (PROD)medicExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 
1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]APACNameGateway User NameAuthenticationPing  user (NPROD)medicExternal ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IN["MEDIC"]Medic user (PROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IN["MEDIC"]" }, { "title": "PTRS (, , )", "": "", "pageLink": "/pages/tion?pageId=", "content": "RequirementsEnvPublisher routing ruleTopicemea-prod(ptrs-eu)"(conciliationTarget==null || conciliationTarget == 'PTRS_RECONCILIATION') .headers.eventType in ['full'] && untry in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', '', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt'] && .headers.objectType in ['', 'HCO']"01/Mar/23 4:14 ] Shanbhag, BhushanOkay in that case we want market's events to come from emea-prod-out-full-ptrs-global2 topic only. ${env}-out-full-ptrs-euemea prod and nprodsAdding MC and to out-full-ptrs-eu15/05/2023Sagar: Hi ,Can you please add below counties for to country configuration list for (Prod, Stage QA & Dev)1. Monaco2. Andorra\n MR-6236\n -\n Getting issue details...\n STATUS\n ${env}-out-full-ptrs-euContactsAPI: ;unKumar@Kafka: dala@GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR","BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF","GP","GT","GY","HN","ID","IL","JM","KY","LC","MF","MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM","PN","PT","PY","RE","SV","SX","TF","TR","TT","UY","VE","VG","WF","YT"]["PTRS"]PTRS user (PROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR","BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF","GP","GT","GY","HN","ID","IL","JM","KY","LC","MF","MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM","PN","PT","PY","RE","SV","SX","TF","TR","TT","UY","VE","VG","WF","YT"]["PTRS"]AMER(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["MX","BR"]["PTRS"]PTRS user (PROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["MX","BR"]["PTRS"]APAC(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal OAuth2PTRS_RELTIO_ClientPTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]["ID","JP","PH"]["VOC","PTRS"]PTRS user (PROD)ptrsExternal OAuth2PTRS_RELTIO_ClientPTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]["JP"]["VOC","PTRS"]KafkaEMEAEnvNameKafka routing ruleTopicPartitionsemea-prod(ptrs-eu)Ptrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_RECONCILIATION') .headers.eventType in ['full'] && untry in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', '', 're', 'bl', 'mf', 'wf', 
'pm', 'tf', 'br', 'mx', 'id', 'pt', 'ad', 'mc'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-eu3emea-prod (ptrs-global2)Ptrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_GLOBAL2_REGENERATION') .headers.eventType in ['full'] && untry in ['tr'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-global23emea-dev (ptrs-global2)Ptrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_GLOBAL2_REGENERATION') .headers.eventType in ['full'] && untry in ['tr'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-global23emea-qa (ptrs-eu)Ptrsptrsemea-dev-ptrs-eu"(conciliationTarget==null || conciliationTarget == 'PTRS_EU_REGENERATION') .headers.eventType in ['full'] && untry in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', '', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-eu3emea-qa (ptrs-global2)Ptrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_GLOBAL2_REGENERATION') .headers.eventType in ['full'] && untry in ['tr'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-global23emea-stage (ptrs-eu)Ptrsptrsemea-stage-ptrs-eu"(conciliationTarget==null || conciliationTarget == 'PTRS_EU_REGENERATION') .headers.eventType in ['full'] && untry in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', '', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'pt', 'id', 'tr'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-eu3emea-stage (ptrs-global2)Ptrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_GLOBAL2_REGENERATION') .headers.eventType in ['full'] && untry in ['tr'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-global23AMEREnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsamer-prod(ptrs-amer)Ptrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_AMER_REGENERATION') .headers.eventType in ['full'] && untry in ['mx', 'br'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-amer3amer-dev (ptrs-amer)Ptrsptrsamer-dev-ptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_AMER_REGENERATION') .headers.eventType in ['full'] && untry in ['mx', 'br'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-amer3amer-qa (ptrs-amer)Ptrsptrsamer-qa-ptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_AMER_REGENERATION') .headers.eventType in ['full'] && untry in ['mx', 'br'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-amer3amer-stage (ptrs-amer)Ptrsptrsamer-stage-ptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_AMER_REGENERATION') .headers.eventType in ['full'] && untry in ['mx', 'br'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-amer3APACEnvNameKafka routing ruleTopicPartitionsapac-dev (ptrs-apac)Ptrsptrs"(conciliationTarget==null || conciliationTarget == '') .headers.eventType in ['full'] && untry in ['pk'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-apacapac-qa (ptrs-apac)Ptrsptrs"(conciliationTarget==null || conciliationTarget == '') .headers.eventType in ['full'] && untry in ['pk'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-apacapac-stage (ptrs-apac)Ptrsptrs"(conciliationTarget==null || conciliationTarget == '') .headers.eventType in ['full'] && untry in ['pk'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-apacGBLEnvNameKafka routing ruleTopicPartitionsgbl-prodPtrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_REGENERATION') .headers.eventType in ['full'] && untry in ['co', 'mx', 'br', 'ph'] && .headers.objectType in ['', 'HCO']"- 
${env}-out-full-ptrsgbl-prod (ptrs-eu)Ptrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_EU_REGENERATION') .headers.eventType in ['full'] && untry in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', '', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && .headers.objectType in ['', 'HCO']"${env}-out-full-ptrs-eugbl-prod (ptrs-porind).headers.eventType in ['full'] && untry in ['id', 'pt'] && .headers.objectType in ['', ''] && !ubtype.endsWith('_MATCHES_CHANGED') && (conciliationTarget==null || conciliationTarget == 'PTRS_PORIND_REGENERATION')"${env}-out-full-ptrs-porindgbl-devPtrsptrs".headers.eventType in ['full'] && untry in ['co', 'mx', 'br', 'ph', 'cl', 'tr'] && .headers.objectType in ['', ''] && !ubtype.endsWith('_MATCHES_CHANGED') && (conciliationTarget==null || conciliationTarget == 'PTRS_REGENERATION')"- ${env}-out-full-ptrs20gbl-dev (ptrs-eu)Ptrsptrsptrs_nprod".headers.eventType in ['full'] && untry in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', '', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && .headers.objectType in ['', ''] && !ubtype.endsWith('_MATCHES_CHANGED') && (conciliationTarget==null || conciliationTarget == 'PTRS_EU_REGENERATION')"- ${env}-out-full-ptrs-eugbl-dev (ptrs-porind)Ptrsptrs".headers.eventType in ['full'] && untry in ['id', 'pt'] && .headers.objectType in ['', ''] && !ubtype.endsWith('_MATCHES_CHANGED') && (conciliationTarget==null || conciliationTarget == 'PTRS_PORIND_REGENERATION')"- ${env}-out-full-ptrs-porindgbl-qa (ptrs-eu)Ptrsptrs".headers.eventType in ['full'] && untry in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', '', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && .headers.objectType in ['', ''] && (conciliationTarget==null)"- ${env}-out-full-ptrs-eu20gbl-stagePtrsptrs"(conciliationTarget==null || conciliationTarget == 'PTRS_LATAM') .headers.eventType in ['full'] in ['co', 'mx', 'br', 'ph', 'cl','tr'] && .headers.objectType in ['', 'HCO']"- ${env}-out-full-ptrsgbl-stage (ptrs-eu)Ptrsptrsptrs_nprod"(conciliationTarget==null || conciliationTarget == 'PTRS_EU') .headers.eventType in ['full'] in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', '', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && .headers.objectType in ['', 'HCO']"- ${env}-out-full-ptrs-eugbl-stage (ptrs-porind)Ptrsptrs".headers.eventType in ['full'] && untry in ['id', 'pt'] && .headers.objectType in ['', ''] && !ubtype.endsWith('_MATCHES_CHANGED') && (conciliationTarget==null || conciliationTarget == 'PTRS_PORIND_REGENERATION')"- ${env}-out-full-ptrs-porind" }, { "title": "OneMed (EMEA)", "": "", "pageLink": "/pages/tion?pageId=", "content": ";alapati@GatewayEMEANameGateway User NameAuthenticationPing UserRolesCountriesDefaultCountrySourcesTopicOneMed user (NPROD)onemedExternal OAuth2ONEMED-MDM_client["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE","IL","IN","IT","JP","MX","NZ","PL","SA","TR"]IE["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP","MDE","OK","Reltio","Rx_Audit"]OneMeduser (PROD)onemedExternal OAuth2ONEMED-MDM_client["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE","IL","IN","IT","JP","MX","NZ","PL","SA","TR"]IE["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP","MDE","OK","Reltio","Rx_Audit"]" }, { "title": "GRACE (, , )", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsDL-AIS-Mule-Integration-Support@RequirementsPartial requirementsSent by neededNeed Plugin Configuration for below usernamesusernameGRACE MAVENS SFDC - DEV - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - DevGRACE MAVENS SFDC - STG - 
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - StageGRACE MAVENS SFDC - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - ProdcountriesAU,,IN,, () and AR, , (AMER)tenantAPAC and (prod/nonprods/all)ALLAPI services exposedHCP , LookupsSourcesGraceBusiness justificationClient ID used by application to search and HCOsGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GD","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SR","SV","SX","TF","TH","TN","TR","TT","TW","UA","US","UY","VE","VG","VN","WF","XX","YT","ZA"]GB["NONE"]N/AGRACE UsergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GD","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SR","SV","SX","TF","TH","TN","TR","TT","TW","UA","US","UY","VE","VG","VN","WF","XX","YT"]GB["NONE"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal (all)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA","US","AR","UY","MX"]["NONE"]N/ (●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●External OAuth2 (gblus-stage)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●External OAuth2 (amer-stage)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●GRACE UsergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF","GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC","NZ","PF","PM","RE","SA","TR","US","UY"]["NONE"]N/AAPACNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal (all)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CA","HK","ID","IN","JP","KR","MX","MY","NZ","PH","PK","SG","TH","TW","US","UY","VN"]["NONE"]N/ ( UsergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF","GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC","NZ","PF","PM","RE","SA","TR","US","UY"]["NONE"]N/A" }, { "title": "Snowflake (Global, GBLUS)", "": "", "pageLink": "/pages/tion?pageId=", "content": ", <>ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicSnowflake topicSnowflake TopicKafka JAASN/.headers.eventType in ['full_not_trimmed'].headers.objectType in ['', '', '', 'RELATIONSHIP']) ||(.headers.eventType in ['simple' .headers.objectType in ['ENTITY'])) -out-full-snowflake-allFlowsSnowflake participate in two flows:Snowflake: Events publish flowEvent publisher pushes all events regarding entity/relation change to topic that is created for ( 
{{$env}}-out-full-snowflake-all }} ). Then component pulls those events and loads them to Snowflake table(Flat model).ReconciliationMain goal of reconciliation process is to synchronise Snowflake database with owflake periodically exports entities and creates csv file with their identifiers and checksums. The file is sent to from where it is then downloaded in the reconciliation process. This process compares the data in the file with the values stored in .A reconciliation event is created and posted on topic in two cases:the cheksum has changedthere is lack of entity in csv fileClient software  is responsible for collecting kafka events and loading them to database in flat PsCurrently there are no SOPs for snowflake." }, { "title": "Vaccine (GBLUS)", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsVajapeyajula, >BAVISHI, <>, >, >, >FlowsFlowDescriptionSnowflake: Events publish flowEvents AUTO_LINK_FOUND and POTENTIAL_LINK_FOUND are published to snowflakeSnowflake: Base tables refreshMATCHES table is refreshed ( in prod) with those eventsSnowflake table are read by an process implemented by process creates relations like  SAPtoHCOSAffiliations. FlextoDDDAffiliations, FlextoHCOSAffiliations through created relations, the callback is triggered and removes LINKS using callsClient software  clients links/software/description ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicDerivedAffilations Batch Load userderivedaffiliations_loadN/AN/A- "CREATE_RELATION"- "UPDATE_RELATION"- *" }, { "title": " ()", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsBrahma, Bagmita <>, >, > User NameAuthenticationPing  user (NPROD)icueExternal OAuth2ICUE-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]["US"]["ICUE"]consumer: regex: - "^.*-out-full-icue-all$" - "^.*-out-full-icue-grv-all$"groups: - icue_dev - icue_qa - icue_stage - dev_icue_grv - qa_icue_grv - stage_icue_grvICUE user (PROD)icueExternal OAuth2ICUE-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]["US"]["ICUE"]consumer: regex: - "^.*-out-full-icue-all$" - "^.*-out-full-icue-grv-all$"groups: - icue_prod - prod_icue_grvKafkaGBLUS (icue-grv-mule)NameKafka routing ruleTopicPartitionsicue - DEVicue_nprod".headers.eventType in ['full_not_trimmed'] && .headers.objectType in [''] && ['GRV'].intersect(ource) && !(['ICUE'].intersect(ource)) && ubtype in ['HCP_CREATED', 'HCP_CHANGED']"${local_env}-out-full-icue-grv-all"icue - QAicue_nprod${local_env}-out-full-icue-grv-allicue - STAGEicue_nprod${local_env}-out-full-icue-grv-allicue  - PRODicuex_prod${env}-out-full-icue-grv-allFlowsCreate/Update HCO/MCOGet software  connector" }, { "title": "ESAMPLES (GBLUS)", "": "", "pageLink": "/pages/tion?pageId=", "content": ", <>, >, >, >ACLsNameGateway User NameAuthenticationPing ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"USall_sourcesN/ software  - read only" }, { "title": "VEEVA_FIELD (, )", "": "", "pageLink": "/pages/tion?pageId=", "content": ", <>, <>GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicVEEVA_FIELD user (NPROD)veeva_fieldExternal 
OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UY","VE","VG","VN","WF","XX","YT"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AVEEVA_FIELD user (PROD)veeva_fieldExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UY","VE","VG","VN","WF","XX","YT"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicVEEVA_FIELD   user (NPROD)veeva_fieldExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AExternal OAuth2(GBLUS-STAGE)55062bae02364c7598bc3ffbfe38e07bVEEVA_FIELD user (PROD)veeva_fieldExternal (ALL)67b77aa7ecf045539237af0dec890e59726b6d341f994412a998a3e32fdec17a["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 
1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/ software  - read only" }, { "title": "PFORCEOL (, , )", "": "", "pageLink": "/pages/tion?pageId=", "content": ", <>, <>RequirementsPartial requirementsSent by AdhvaryuPforceOL Dev - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PforceOL Stage - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PforceOL Prod - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●  RO DK BR IL TR GR NO CA JP MX AT AR RU   IN   TH ES CZ LT   ID    FI CH SA  BE  IT    CL EE HR LV RS   CN SI FR BG  WA PKNew Requirements - 2024Action neededNeed Access to PFORCEOL - DEV, , , usernameDEV & QA: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PROD: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●CountriesAC, , , , AR, AT, , AW, , BE, , , BR, BS, , CA, CH, , , , , , , , , , , DO, , , , FI, , , , , , , , , , , HN, , , IE, , IN, IT, , , , , , , , , , , , , MY, NI, , , , , , PH, PL, , QA, , , , , , , , , , , TR, , , , , , , , , YE, : "Keep the other countries for now"Full list:, , , , , AM, AN, AR, AT, , AW, , , BE, , , , , , , BR, BS, BY, , CA, CH, , , , , , , , , , , , , DO, , , , , , FI, , , , , , , , , , , , , HN, HR, , , ID, IE, , IN, , IT, , , , , , , , , LT, , , , , , , , , MY, , , , , , , , , PF, PH, PK, , PM, , , PY, QA, RE, , , , , , , , , , , , , , , , TR, , , , , , , , , , , , , , , YE, , , , , , EX-USEnvironmentsDEV, QA, , rangeRead access for and and that are configured in OneMed:, ,OK, PFORCERX_ODS, , , LEGACY_SFA_IDL, PTRS, , iCUE, IQVIA_DDD, DCR_SYNC, , , justificationThese changes are required as part of . 
This project is responsible to ensure an improvised system due to which the proposed changes will help the OneMed technical team to build a better solution to search for data within system through integration.Point of contactAnvesh (), ()Excel sheet with countries: GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPFORCEOL user (NPROD)pforceolExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IR","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","false","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UK","US","UY","VE","VG","VN","WA","WF","XX","YT","ZA"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/APFORCEOL user (PROD)pforceolExternal OAuth2- ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IR","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","false","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UK","UY","VE","VG","VN","WA","WF","XX","YT","ZA"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPFORCEOL  user (NPROD)pforceolExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 
1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AExternal OAuth2(GBLUS-STAGE)223ca6b37aef4168afaa35aa2cf39a3ePFORCEOL user (PROD)pforceolExternal OAuth2 (ALL)e678c66c02c64b599b351e0ab02bae9fe6ece8da20284c6987ce3b8564fe9087["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/ software  - read only" }, { "title": "1CKOL (Global)", "": "", "pageLink": "/pages/tion?pageId=", "content": "Contacts:, <>; , >Old Contacts:Data load support:First Name: IlyaLast Name: EnkovichOffice:  ●●●●●●●●●●●●●●●●●●Mob: ●●●●●●●●●●●●●●●●●●Internet: E-mail: enkovich.i.s@Backup contact:First Name: SergeyLast Name: PortnovOffice: ●●●●●●●●●●●●●●●●●●Mob: ●●●●●●●●●●●●●●●●●●Internet: E-mail: portnov.s.a@Flows1CKOL has one batch process which consumes export files from data warehouse, process this, and loads data to . This process is base on incremental batch engine and run on put filesThe input files are delivered by 1CKOL to bucketMAPP Review - Europe - 1cKOL - All Documents ()UATPRODS3 service accountsvc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3svc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3S3 Access key IDAKIATCTZXPPJXRNSDOGNAKIATCTZXPPJXRNSDOGNS3 Bucketpfe-baiaes-eu--nprod-projectpfe-baiaes-eu--projectS3 /UAT/inbound/KOL/RU/mdm/inbound/KOL/RU/Input data file mask KOL_Extract_Russia_[0-9]+.zipKOL_Extract_Russia_[0-9]+.zipCompressionzipzipFormatFlat files, 1CKOL dedicated format Flat files, 1CKOL dedicated format ExampleKOL_Extract_Russia_.zipKOL_Extract_Russia_.zipSchedulenonenoneAirflow job inc_batch_eu_kol_ru_stage mapping Data mapping is described in the attached nfigurationFlow configuration is stored in configuration repository. For each environment where the flow should be enabled the configuration file inc_batch_eu_kol_ru.yml has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_eu_kol_ru" has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Below table prresents the location of inc_batch_jp.yml file for Test, Dev, , Stage and PROD envs:inc_batch_eu_kol_ruUAT configuration changes is done by executing the deploy 's components PsThere is no particular SOP procedure for this flow. All common SOPs was described in the "Incremental batch flows: SOP" chapter." }, { "title": "", "": "", "pageLink": "/display/GMDM/Snowflake+MDM+Data+Mart", "content": "The section describes    in . 
contains data from tenants published into via , permissions, warehouses used in in : NewMdmSfRoles_231017.xlsx" }, { "title": "Connect Guide", "": "", "pageLink": "/display//Connect+Guide", "content": "How to add a user to the DATA Role:  Users accessing snowflake have to create a ticket and add themselves to the DATA role. This will allow the user to view CUSTOMER_SL schema (users access layer to to  on the TOP: "Group Manager" -  on the "Distribution Lists"Search for the correct group you want to be added. Check the group name here: "List Of Groups With Access To The DataMart" In the search write Name" for selected Request AccessClick "Add Myself" and then save Go to "Cart" and click "Submit Request"How to connect to the DB:Go to the Environments oose the Environments that you want to view:e.g. EMEA - EMEAChoose the or PROD environmentse.g - EMEA STAGE this page go to the Snowflake MDM DataMartClick on the DB Urle.g. - The following page will open:Click "Sign in using COMPANY SSO"Open "New Worksheet"Choose:ROLE: WAREHOUSE:  COMM_MDM_DMART_WH                                          - this is based on the "Snowflake MDM DataMart" table - Default warehouse nameDATABASE:      COMM__MDM_DMART__DB          - this is based on the "Snowflake MDM DataMart" table - DB NameSCHEMA:        DataMartSince 1.xlsx[Expired ] Groups that have access to CUSTOMER_SL schema:Role NameSF InstanceDB NameCOMM_AMER_MDM_DMART_DEV_DATA_ROLEAMERAMERDEVsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_DEV_DATA_ROLECOMM_AMER_MDM_DMART_QA_DATA_ROLEAMERAMERQAsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_QA_DATA_ROLECOMM_AMER_MDM_DMART_STG_DATA_ROLEAMERAMERSTAGEsfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_STG_DATA_ROLECOMM_AMER_MDM_DMART_PROD_DATA_ROLEAMERAMERPRODsfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLECOMM_MDM_DMART_DEV_DATA_ROLEAMERUSDEVsfdb_us-east-1_amerdev01_COMM_DEV_MDM_DMART_DATA_ROLECOMM_MDM_DMART_QA_DATA_ROLEAMERUSQAsfdb_us-east-1_amerdev01_COMM_QA_MDM_DMART_DATA_ROLECOMM_MDM_DMART_STG_DATA_ROLEAMERUSSTAGEsfdb_us-east-1_amerdev01_COMM_STG_MDM_DMART_DATA_ROLECOMM_MDM_DMART_PROD_DATA_ROLEAMERUSPRODsfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLECOMM_APAC_MDM_DMART_DEV_DATA_ROLEEMEAAPACDEVsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_DEV_DATA_ROLECOMM_APAC_MDM_DMART_QA_DATA_ROLEEMEAAPACQAsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_QA_DATA_ROLECOMM_APAC_MDM_DMART_STG_DATA_ROLEEMEAAPACSTAGEsfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_STG_DATA_ROLECOMM_APAC_MDM_DMART_PROD_DATA_ROLEEMEAAPACPRODsfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLECOMM_EMEA_MDM_DMART_DEV_DATA_ROLEEMEAEMEADEVsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_DEV_DATA_ROLECOMM_EMEA_MDM_DMART_QA_DATA_ROLEEMEAEMEAQAsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_QA_DATA_ROLECOMM_EMEA_MDM_DMART_STG_DATA_ROLEEMEAEMEASTAGEsfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_STG_DATA_ROLECOMM_EMEA_MDM_DMART_PROD_DATA_ROLEEMEAEMEAPRODsfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLECOMM_MDM_DMART_DEV_DATA_ROLEEMEAEUDEVsfdb_eu-west-1_emeadev01_COMM_DEV_MDM_DMART_DATA_ROLECOMM_MDM_DMART_QA_DATA_ROLEEMEAEUQAsfdb_eu-west-1_emeadev01_COMM_QA_MDM_DMART_DATA_ROLECOMM_MDM_DMART_STG_DATA_ROLEEMEAEUSTAGEsfdb_eu-west-1_emeadev01_COMM_STG_MDM_DMART_DATA_ROLECOMM_MDM_DMART_PROD_DATA_ROLEEMEAEUPRODsfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLECOMM_GBL_MDM_DMART_DEV_DATA_ROLEEMEAGBLDEVsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_DATA_ROLECOMM_GBL_MDM_DMART_QA_DATA_ROLEEMEAGBLQAsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_DATA_
ROLECOMM_GBL_MDM_DMART_STG_DATA_ROLEEMEAGBLSTAGEsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_DATA_ROLECOMM_GBL_MDM_DMART_PROD_DATA_ROLEEMEAGBLPRODsfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLE" }, { "title": "Data model", "": "", "pageLink": "/display/GMDM/Data+model", "content": "The data mart contains data in both the object and the relational data model. A fragment of the model is presented in the picture below. The object data model includes the latest version of the Reltio JSON documents representing entities, relationships, LOVs and the merge tree. They are loaded into the ENTITIES, RELATIONS, LOV_DATA, MERGES and MATCHES tables from the MDM using the HUB streaming interface. The object model is transformed into the relational model by a set of dynamic views that process the JSON documents with the query language. The views are generated dynamically from the data model; the regeneration process is triggered periodically or on demand. The generation process starts from root objects (for example HCP, HCO), walks through the attribute tree and generates views according to the following rules:
- for simple attributes like first name, a column is generated in the current view
- for nested attributes like addresses, a new view is generated; the nested attribute uri and the parent key from the parent view become the primary key of the new view
- for lookup values like gender, the lookup id is generated
Model versions
There are two versions of the data model maintained in the data mart:
- COMPANY data model - the current model, maintained in all regional data marts that consume data from the regional instances
- IQVIA data model - the legacy model from the first Reltio instance, maintained in the regional data mart that consumes data from the (ex-US) instance
Key generation strategy
Object model:
Objects | Key columns | Description
ENTITIES, MATCHES, MERGES | entity_uri, country* | Reltio entity unique identifier and country
RELATIONS | relation_uri, country* | Reltio relationship unique identifier & country
LOV_DATA | id, mdm_region* | the concatenation of Reltio LOV name + ':' + canonical code as id & mdm region
* - only in the global data mart
Relational model:
Views | Key columns | Description
root objects like HCP, HCO and MERGE_HISTORY | entity_uri, country* | Reltio entity unique identifier and country
AFFILIATIONS | relation_uri, country* | Reltio relationship unique identifier and country
child views for nested attributes (Addresses, Specialties) | parent view keys, nested attribute uri, country* | parent view keys + nested attribute uri + country
* - only in the global data mart
Schemas: contains the following schemas:
Schema name | Description
LANDING | Schemas used by HUB ETL processes as a stage area
CUSTOMER | Main schema containing the data
CUSTOMER_SL | Access schema to the CUSTOMER schema data
AES_RS_SL | Contains views presenting data in the original Redshift data model" }, { "title": "AES_RS_SL", "": "", "pageLink": "/display//AES_RS_SL", "content": "The schema contains a set of views that mimic the original Redshift data mart. The views integrate both data models, COMPANY and IQVIA, and present data from all countries available in Reltio.
Differences from the original Redshift mart:
- Technical ids in the views keeping nested attribute values are different from the original ones. They are based on Reltio attribute uris instead of a checksum generated from the attribute values.
- Foreign keys for code values to be joined with the dictionary table are also generated using a different strategy." }, { "title": "CUSTOMER schema", "": "", "pageLink": "/display//CUSTOMER+schema", "content": "This is the main schema containing data in two formats. The object model represents the Reltio JSON format; data in this format are kept in the ENTITIES, RELATIONS and MERGE_TREE tables.
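A minimal sketch of querying the object model directly, assuming the snowflake-connector-python client, placeholder connection values, and the typical Reltio attribute layout (attributes.<Name>[0].value) inside the OBJECT column; the exact JSON paths depend on the tenant model and are only an assumption here:
```python
# Sketch only: connection values are placeholders and the JSON paths assume the
# usual Reltio attribute layout; adjust both to the actual environment and model.
import snowflake.connector

QUERY = """
SELECT e.entity_uri,
       e.country,
       e.object:attributes:FirstName[0]:value::string AS first_name,
       addr.value                                     AS address_json
FROM   customer.entities e,
       LATERAL FLATTEN(input => e.object:attributes:Addresses, OUTER => TRUE) addr
WHERE  e.entity_type ILIKE '%HCP'      -- entity type filter; exact value depends on the model
  AND  e.country = 'US'
LIMIT  10
"""

def main() -> None:
    # Role, warehouse and database follow the pattern from the Connect Guide;
    # the concrete names come from the environment tables above.
    conn = snowflake.connector.connect(
        account="<sf_account>",
        user="<user>",
        authenticator="externalbrowser",  # SSO sign-in
        role="<COMM_..._DATA_ROLE>",
        warehouse="COMM_MDM_DMART_WH",
        database="<COMM_..._MDM_DMART_DB>",
        schema="CUSTOMER",
    )
    try:
        cur = conn.cursor()
        cur.execute(QUERY)
        for entity_uri, country, first_name, address_json in cur.fetchall():
            print(entity_uri, country, first_name, address_json)
    finally:
        conn.close()

if __name__ == "__main__":
    main()
```
The generated relational views described below encapsulate the same dot-path and FLATTEN logic, so ad-hoc queries against ENTITIES and the generated views should return consistent values.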
Relation model is created as a part of views (standard or materialized) derived from the object model. Most of the views are generated in an automated way based on configuration. They directly reflect object model. There are two sets of views as there are two models in Reltio: COMPANY and ,  Those views can change dynamically as config is updated.\n\n \n \n \n \n \n \n\n \n \n \n \n\n \n \n\n \n \n \n\n \n \n \n \n \n \n\n \n \n \n \n" }, { "title": "Customer base objects", "": "", "pageLink": "/display/GMDM/Customer+base+objects", "content": "ENTITIESKeeps Relto entities objectsColumnTypeDescriptionENTITY_URITEXTReltio entityt uriCOUNTRYTEXTCountryENTITY_TYPETEXTEntity type for example: , flag CREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeOBJECTVARIANTJSON objectLAST_EVENT_TYPETEXTThe last event updated the objectLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTParent entity uriCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of lost merge that field store COMPANY Global Id of the winner entity else is emptyHIST_INACTIVE_ENTITIESUsed for history inactive onekey crosswals. Structure is a copy of entities lumnTypeDescriptionENTITY_URITEXTReltio entityt uriCOUNTRYTEXTCountryENTITY_TYPETEXTEntity type for example: , flag CREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeOBJECTVARIANTJSON objectLAST_EVENT_TYPETEXTThe last event updated the objectLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTParent entity uriCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of lost merge that field store COMPANY Global Id of the winner entity else is emptyRELATIONSKeeps relations objectsColumnTypeDescriptionRELATION_URITEXTReltio relation uriCOUNTRYTEXTCountryRELATION_TYPETEXTRelation typeACTIVEBOOLEANActive flagCREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeSTART_ENTITY_URITEXTSource entity uri END_ENTITY_URITEXTTarget entity uriOBJECTVARIANTJSON object LAST_EVENT_TYPETEXTThe last event type modified the recordLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTnot usedCHECKSUMNUMBERChecksumMATCHESThe table presents active and historical matches found in for all lumnTypeDescriptionENTITY_URITEXTReltio entity uriTARGET_ENTITY_URITEXTReltio entity uri to which matches ENTITY_URIMATCH_TYPETEXTMatch typeMATCH_RULE_NAMETEXTMatch rule nameCOUNTRYTEXTCountryLAST_EVENT_TYPETEXTThe last event type modified the recordLAST_EVENT_TIMETIMESTAMP_LTZLast event timeLAST_EVENT_CHECKSUMNUMBERThe last event checksumACTIVEBOOLEANActive flagMATCH_HISTORYThe view shows match history for active and inactive matches enriched by merge data. The merge info is available for matches that were inactivated by the merge action triggered by users or Reltio background processes.  
Column | Type | Description
ENTITY_URI | TEXT | Reltio entity uri
TARGET_ENTITY_URI | TEXT | Reltio entity uri to which ENTITY_URI matches
MATCH_TYPE | TEXT | Match type
MATCH_RULE_NAME | TEXT | Match rule name
COUNTRY | TEXT | Country
LAST_EVENT_TYPE | TEXT | The last event type that modified the record
LAST_EVENT_TIME | TIMESTAMP_LTZ | Last event time
LAST_EVENT_CHECKSUM | NUMBER | The last event checksum
ACTIVE | BOOLEAN | Active flag
MERGED | BOOLEAN | Merge indicator; true indicates that the merge happened for the match
MERGE_REASON | TEXT | Merge reason
MERGE_USER | TEXT | Reltio user name or process name that executed the merge
MERGE_DATE | TO_TIMESTAMP_LTZ | Merge date
MERGE_RULE | TEXT | Merge rule that triggered the merge
MERGES
The table presents active merges found in Reltio, based on the merge tree.
Column | Type | Description
ENTITY_URI | TEXT | Reltio entity uri
LAST_UPDATE_TIME | TO_TIMESTAMP_LTZ | Date of the last update on the selected row
CREATE_TIME | TO_TIMESTAMP_LTZ | Creation date of the selected row
OBJECT | VARIANT | JSON object
MERGE_HISTORY
The view shows the merge history for active entities. The merge history view is built based on the export.
Column | Type | Description
ENTITY_URI | TEXT | Reltio entity uri
LOSER_ENTITY_URI | TEXT | Reltio entity uri of the merge loser
MERGE_REASON | TEXT | Merge reason. Possible reasons:
- Merge on the fly - automatic match rules were able to find matches for a newly added entity, so the new entity was not created as a separate entity in the platform but was merged into an existing one.
- Merge by crosswalks - if a newly added entity has the same crosswalk as that of an existing entity in the platform, such entities are merged automatically on the fly because the platform does not allow multiple entities with the same crosswalk.
- Automatic merge by crosswalks - sometimes two entities with the same crosswalk may exist in the platform (simultaneously added entities); in this case such entities are merged automatically using a special background merge process.
- Group merge (matches found on object creation) - several entities are grouped into one merge request because all such entities will be merged at the same time to create a single entity in the platform. The reason for a group merge can be an automatic match rule or the same crosswalk, or both.
- Merges found by the background merge process - the background match thread (incremental match processor) modifies entities as a result of create/change/remove events and performs a rematch. During the rematch, if some entities match using the automatic match rules, such entities are merged.
- Merge by hand - a merge performed by a user by going through the potential matches.
MERGE_RULE | TEXT | Merge rule that triggered the merge
USER | TEXT | User name which executed the merge
MERGE_DATE | TO_TIMESTAMP_LTZ | Merge date
ENTITY_HISTORY
Keeps event history for entities and relations
Column | Type | Description
EVENT_KEY | TEXT | Event key
EVENT_PARTITION | NUMBER | Partition number in Kafka
EVENT_OFFSET | NUMBER | Offset in Kafka
EVENT_TOPIC | TEXT | Name of the Kafka topic where this event is stored
EVENT_TIME | TIMESTAMP_LTZ | Timestamp when the event was generated
EVENT_TYPE | TEXT | Event type
COUNTRY | TEXT | Country
ENTITY_URI | TEXT | Reltio entity uri
CHECKSUM | NUMBER | Checksum
LOV_DATA
Keeps LOV objects
Column | Type | Description
ID | TEXT | LOV identifier - RDM object in format
CODES
Column | Type | Description
SOURCE | TEXT | Source MDM system name
CODE_ID | TEXT | Code id - generated by concatenating the LOV name and the canonical code
CANONICAL_CODE | TEXT | Canonical code
LOV_NAME | TEXT | LOV (Dictionary) name
ACTIVE | BOOLEAN | Active flag
DESC | TEXT | English description
COUNTRY | TEXT | Code country
PARENTS | TEXT | Parent code id
CODE_TRANSLATIONS
RDM code translations
Column | Type | Description
SOURCE | TEXT | Source MDM system name
CODE_ID | TEXT | Code id
CANONICAL_CODE | TEXT | Canonical code
LOV_NAME | TEXT | LOV (Dictionary) name
ACTIVE | BOOLEAN | Active flag
LANG_CODE | TEXT | Language code
LAND_DESC | TEXT | Language description
COUNTRY | TEXT | Country
CODE_SOURCE_MAPPINGS
Source code mappings to canonical codes in RDM
Column | Type | Description
SOURCE | TEXT | Source MDM system name
CODE_ID | TEXT | Code id
SOURCE_NAME | TEXT | Source name
SOURCE_CODE | TEXT | Source code
ACTIVE | BOOLEAN | Active flag (true - active, false - inactive)
IS_CANONICAL | BOOLEAN | Is canonical
COUNTRY | TEXT | Country
LAST_MODIFIED | TIMESTAMP_LTZ | Last modified date
PARENT | TEXT | Parent code
ENTITY_CROSSWALKS
Keeps entity crosswalks
Column | Type | Description
CROSSWALK_URI | TEXT | Crosswalk uri
ENTITY_URI | TEXT | Entity uri
ENTITY_TYPE | TEXT | Entity type
ACTIVE | BOOLEAN | Active flag
TYPE | TEXT | Crosswalk type
VALUE | TEXT | Crosswalk table
CREATE_DATE | TIMESTAMP_NTZ | Create date
UPDATE_DATE | TIMESTAMP_NTZ | Update date
RELTIO_LOAD_DATE | TIMESTAMP_NTZ | Date when this crosswalk was loaded to Reltio
RELATION_CROSSWALKS
Keeps relation crosswalks
Column | Type | Description
CROSSWALK_URI | TEXT | Crosswalk URI
RELATION_URI | TEXT | Relation URI
RELATION_TYPE | TEXT | Relation type
ACTIVE | BOOLEAN | Active flag
TYPE | TEXT | Crosswalk type
VALUE | TEXT | Crosswalk table
CREATE_DATE | TIMESTAMP_NTZ | Create date
UPDATE_DATE | TIMESTAMP_NTZ | Update date
DELETE_DATE | TIMESTAMP_NTZ | Delete date
RELTIO_LOAD_DATE | TIMESTAMP_NTZ | Date when this relation was loaded to Reltio
ATTRIBUTE_SOURCE
Presents information about what crosswalk provided the given attribute. 
The view can be joined with views for nested attributes to get also attribute lumnTypeDescriptionATTTRIBUTE_URITEXTAttribute URIENTITY_URTEXTEntity URIACTIVEBOOLEANIs entity activeTYPETEXTCrosswalk typeVALUETEXTCrosswalk valueSOURCE_TABLETEXTCrosswalk source tableENTITY_UPDATE_DATESPresents information about updated dates of entities in Reltio MDM or SnowflakeThe view can be used to query updated records in a period of time including root objects like , , , and child objects like IDENTIFIERS, SPECIALTIES, ADDRESSED lumnTypeDescriptionENTITY_URITEXTEntity URIACTIVEBOOLEANIs entity of entityCOUNTRYTEXTCountry iso codeMDM_CREATE_TIMETIMESTAMP_LTZEntity create time in ReltioMDM_UPDATE_TIMETIMESAMP_LTZEntity update time in ReltioSF_CREATE_TIMETIMESTAMP_LTZEntity create time in last update time in SnowflakeLAST_EVENT_TIMETIMESTAMP_LTZLast event timestampCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of lost merge that field store COMPANY Global Id of the winner entity else is emptyRELATION_UPDATE_DATESPresents information about updated dates of relations or SnowflakeThe view can be used to query all updated entries in a period of time from  and child objects like AFFIL_RELATION_TYPEColumnTypeDescriptionRELATION_URITEXTEntity URIACTIVEBOOLEANIs entity activeRELATION_TYPETEXTType of entityCOUNTRYTEXTCountry iso codeMDM_CREATE_TIMETIMESTAMP_LTZRelation create time in ReltioMDM_UPDATE_TIMETIMESAMP_LTZRelation update time in ReltioSF_CREATE_TIMETIMESTAMP_LTZRelation create time in Snowflake DBSF_UPDATE_TIMETIMESTAMP_LTZRelation last update time in SnowflakeLAST_EVENT_TIMETIMESTAMP_LTZLast event timestampCHECKSUMNUMBERChecksum" }, { "title": "Data Materialization Process", "": "", "pageLink": "/display/GMDM/Data+Materialization+Process", "content": "" }, { "title": "Dynamic views for ", "": "", "pageLink": "/display/GMDM/Dynamic+views++for+IQVIA+MDM+Model", "content": " care providerReltio URI: configuration/entityTypes/HCPMaterialized: URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeFIRST_NAMEVARCHARFirst Nameconfiguration/entityTypes//attributes/FirstNameLAST_NAMEVARCHARLast Nameconfiguration/entityTypes//attributes//entityTypes//attributes/MiddleNameNAMEVARCHARNameconfiguration/entityTypes//attributes/NamePREFIXVARCHARconfiguration/entityTypes//attributes/PrefixLKUP_IMS_PREFIXSUFFIX_NAMEVARCHARGeneration Suffixconfiguration/entityTypes//attributes/SuffixNameLKUP_IMS_SUFFIXPREFERRED_NAMEVARCHARconfiguration/entityTypes//attributes/PreferredNameNICKNAMEVARCHARconfiguration/entityTypes//attributes/NicknameCOUNTRY_CODEVARCHARCountry Codeconfiguration/entityTypes//attributes/CountryLKUP_IMS_COUNTRY_CODEGENDERVARCHARconfiguration/entityTypes//attributes/GenderLKUP_IMS_GENDERTYPE_CODEVARCHARType codeconfiguration/entityTypes//attributes/TypeCodeLKUP_IMS_HCP_CUST_TYPEACCOUNT_TYPEVARCHARAccount Typeconfiguration/entityTypes//attributes/AccountTypeSUB_TYPE_CODEVARCHARSub type codeconfiguration/entityTypes//attributes/SubTypeCodeLKUP_IMS_HCP_SUBTYPETITLEVARCHARconfiguration/entityTypes//attributes/TitleLKUP_IMS_PROF_TITLEINITIALSVARCHARInitialsconfiguration/entityTypes//attributes/InitialsD_O_BDATEDate of Birthconfiguration/entityTypes//attributes/DoBY_O_BVARCHARBirth 
Yearconfiguration/entityTypes//attributes/YoBMAPP_HCP_STATUSVARCHARconfiguration/entityTypes//attributes/MAPPHcpStatusLKUP_MAPP_HCPSTATUSGO_STATUSVARCHARconfiguration/entityTypes//attributes/GOStatusLKUP_GOVOFF_GOSTATUSPIGO_STATUSVARCHARconfiguration/entityTypes//attributes/PIGOStatusLKUP_GOVOFF_PIGOSTATUSNIPPIGO_STATUSVARCHARconfiguration/entityTypes//attributes/NIPPIGOStatusLKUP_GOVOFF_NIPPIGOSTATUSPRIMARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes//attributes/PrimaryPIGORationaleLKUP_GOVOFF_PIGORATIONALESECONDARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes//attributes/SecondaryPIGORationaleLKUP_GOVOFF_PIGORATIONALEPIGOSME_REVIEWVARCHARconfiguration/entityTypes//attributes/PIGOSMEReviewLKUP_GOVOFF_PIGOSMEREVIEWGSQ_DATEDATEGSQDateconfiguration/entityTypes//attributes/GSQDateMAPP_DO_NOT_USEVARCHARconfiguration/entityTypes//attributes/MAPPDoNotUseLKUP_GOVOFF_DONOTUSEMAPP_CHANGE_DATEVARCHARconfiguration/entityTypes//attributes/MAPPChangeDateMAPP_CHANGE_REASONVARCHARconfiguration/entityTypes//attributes/MAPPChangeReasonIS_EMPLOYEEBOOLEANconfiguration/entityTypes//attributes/IsEmployeeVALIDATION_STATUSVARCHARValidation Status of the Customerconfiguration/entityTypes//attributes/ValidationStatusLKUP_IMS_VAL_STATUSSOURCE_CHANGE_DATEDATESourceChangeDateconfiguration/entityTypes//attributes/SourceChangeDateSOURCE_CHANGE_REASONVARCHARSourceChangeReasonconfiguration/entityTypes//attributes/SourceChangeReasonORIGIN_SOURCEVARCHAROriginating /entityTypes//attributes/OriginSourceOK_VR_TRIGGERVARCHARconfiguration/entityTypes//attributes/OK_VR_TriggerLKUP_IMS_SEND_FOR_VALIDATIONBIRTH_CITYVARCHARBirth Cityconfiguration/entityTypes//attributes/BirthCityBIRTH_STATEVARCHARBirth Stateconfiguration/entityTypes//attributes/BirthStateSTATE_CODEBIRTH_COUNTRYVARCHARBirth Countryconfiguration/entityTypes//attributes/BirthCountryCOUNTRY_CDD_O_DDATEconfiguration/entityTypes//attributes/DoDY_O_DVARCHARconfiguration/entityTypes//attributes/YoDTAX_IDVARCHARconfiguration/entityTypes//attributes/TaxIDSSN_LAST4VARCHARconfiguration/entityTypes//attributes/SSNLast4MEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/ NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/NPIUPINVARCHARDO NOT USE THIS ATTRIBUTE - will be 
deprecatedconfiguration/entityTypes//attributes/UPINKAISER_PROVIDERBOOLEANconfiguration/entityTypes//attributes/KaiserProviderMAJOR_PROFESSIONAL_ACTIVITYVARCHARconfiguration/entityTypes//attributes/MajorProfessionalActivityMPA_CDPRESENT_EMPLOYMENTVARCHARconfiguration/entityTypes//attributes/PresentEmploymentPE_CDTYPE_OF_PRACTICEVARCHARconfiguration/entityTypes//attributes/TypeOfPracticeTOP_CDSOLOBOOLEANconfiguration/entityTypes//attributes/SoloGROUPBOOLEANconfiguration/entityTypes//attributes/GroupADMINISTRATORBOOLEANconfiguration/entityTypes//attributes/AdministratorRESEARCHBOOLEANconfiguration/entityTypes//attributes/ResearchCLINICAL_TRIALSBOOLEANconfiguration/entityTypes//attributes/ClinicalTrialsWEBSITE_URLVARCHARconfiguration/entityTypes//attributes/WebsiteURLIMAGE_LINKSVARCHARconfiguration/entityTypes//attributes/ImageLinksDOCUMENT_LINKSVARCHARconfiguration/entityTypes//attributes/DocumentLinksVIDEO_LINKSVARCHARconfiguration/entityTypes//attributes/VideoLinksDESCRIPTIONVARCHARconfiguration/entityTypes//attributes/DescriptionCREDENTIALSVARCHARconfiguration/entityTypes//attributes/CredentialsCREDFORMER_FIRST_NAMEVARCHARconfiguration/entityTypes//attributes/FormerFirstNameFORMER_LAST_NAMEVARCHARconfiguration/entityTypes//attributes/FormerLastNameFORMER_MIDDLE_NAMEVARCHARconfiguration/entityTypes//attributes/FormerMiddleNameFORMER_SUFFIX_NAMEVARCHARconfiguration/entityTypes//attributes/FormerSuffixNameSSNVARCHARconfiguration/entityTypes//attributes/SSNPRESUMED_DEADBOOLEANconfiguration/entityTypes//attributes/PresumedDeadDEA_BUSINESS_ACTIVITYVARCHARconfiguration/entityTypes//attributes/DEABusinessActivitySTATUS_IMSVARCHARconfiguration/entityTypes//attributes/StatusIMSLKUP_IMS_STATUSSTATUS_UPDATE_DATEDATEconfiguration/entityTypes//attributes/StatusUpdateDateSTATUS_REASON_CODEVARCHARconfiguration/entityTypes//attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODECOMMENTERSVARCHARCommentersconfiguration/entityTypes//attributes/CommentersSOURCE_CREATION_DATEDATEconfiguration/entityTypes//attributes/SourceCreationDateSOURCE_NAMEVARCHARconfiguration/entityTypes//attributes/SourceNameSUB_SOURCE_NAMEVARCHARconfiguration/entityTypes//attributes/SubSourceNameEXCLUDE_FROM_MATCHVARCHARconfiguration/entityTypes//attributes/ExcludeFromMatchPROVIDER_IDENTIFIER_TYPEVARCHARProvider Identifier Typeconfiguration/entityTypes//attributes/ProviderIdentifierTypeLKUP_IMS_PROVIDER_IDENTIFIER_TYPECATEGORYVARCHARCategory Codeconfiguration/entityTypes//attributes/CategoryLKUP_IMS_HCP_CATEGORYDEGREE_CODEVARCHARDegree Codeconfiguration/entityTypes//attributes/DegreeCodeLKUP_IMS_DEGREESALUTATION_NAMEVARCHARSalutation Nameconfiguration/entityTypes//attributes/SalutationNameIS_BLACK_LISTEDBOOLEANIndicates to Blacklist the profileconfiguration/entityTypes//attributes/IsBlackListedTRAINING_HOSPITALVARCHARTraining /entityTypes//attributes/TrainingHospitalACRONYM_NAMEVARCHARAcronymNameconfiguration/entityTypes//attributes/AcronymNameFIRST_SET_DATEDATEDate of /entityTypes//attributes//entityTypes//attributes/CreateDateUPDATE_DATEDATEDate of /entityTypes//attributes/UpdateDateCHECK_DATEDATEDate of /entityTypes//attributes/CheckDateSTATE_CODEVARCHARSituation of Active, Inactive, Retired)configuration/entityTypes//attributes/StateCodeLKUP_IMS_PROFILE_STATESTATE_DATEDATEDate when state of the record was last nfiguration/entityTypes/HCP/attributes/StateDateVALIDATION_CHANGE_REASONVARCHARReason for Validation Status 
changeconfiguration/entityTypes//attributes/ValidationChangeReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEDate of Validation changeconfiguration/entityTypes//attributes/ whether sales reps need to make an appointment to see the nfiguration/entityTypes/HCP/attributes/AppointmentRequiredNHS_STATUSVARCHARNational Health System Statusconfiguration/entityTypes//attributes/NHSStatusLKUP_IMS_SECTOR_OF_CARENUM_OF_PATIENTSVARCHARNumber of attached patientsconfiguration/entityTypes//attributes/NumOfPatientsPRACTICE_SIZEVARCHARPractice Sizeconfiguration/entityTypes//attributes/PracticeSizePATIENTS_X_DAYVARCHARPatients Per Dayconfiguration/entityTypes//attributes/PatientsXDayPREFERRED_LANGUAGEVARCHARPreferred Spoken Languageconfiguration/entityTypes//attributes//entityTypes//attributes/PoliticalAffiliationLKUP_IMS_POL_AFFILPRESCRIBING_LEVELVARCHARPrescribing Levelconfiguration/entityTypes//attributes/PrescribingLevelLKUP_IMS_PRES_LEVELEXTERNAL_RATINGVARCHARExternal Ratingconfiguration/entityTypes//attributes/ExternalRatingTARGETING_CLASSIFICATIONVARCHARTargeting Classificationconfiguration/entityTypes//attributes/TargetingClassificationKOL_TITLEVARCHARKey Opinion Leader Titleconfiguration/entityTypes//attributes/KOLTitleSAMPLING_STATUSVARCHARSampling Status of HCPconfiguration/entityTypes//attributes//entityTypes//attributes/AdministrativeNamePROFESSIONAL_DESIGNATIONVARCHARconfiguration/entityTypes//attributes/ProfessionalDesignationLKUP_IMS_PROF_DESIGNATIONEXTERNAL_INFORMATION_URLVARCHARconfiguration/entityTypes//attributes/ExternalInformationURLMATCH_STATUS_CODEVARCHARconfiguration/entityTypes//attributes/MatchStatusCodeLKUP_IMS_MATCH_STATUS_CODESUBSCRIPTION_FLAG1BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag1SUBSCRIPTION_FLAG2BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag2SUBSCRIPTION_FLAG3BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag3SUBSCRIPTION_FLAG4BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag4SUBSCRIPTION_FLAG5BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag5SUBSCRIPTION_FLAG6BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag6SUBSCRIPTION_FLAG7BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag7SUBSCRIPTION_FLAG8BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag8SUBSCRIPTION_FLAG9BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag9SUBSCRIPTION_FLAG10BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag10MIDDLE_INITIALVARCHARMiddle Initial. 
This attribute is populated from … | configuration/entityTypes//attributes/MiddleInitial
DELETE_ENTITY | BOOLEAN | Property for removing | configuration/entityTypes//attributes/DeleteEntity
PARTY_ID | VARCHAR | | configuration/entityTypes//attributes/PartyID
LAST_VERIFICATION_STATUS | VARCHAR | | configuration/entityTypes//attributes/LastVerificationStatus
LAST_VERIFICATION_DATE | DATE | | configuration/entityTypes//attributes/LastVerificationDate
EFFECTIVE_DATE | DATE | | configuration/entityTypes//attributes/EffectiveDate
END_DATE | DATE | | configuration/entityTypes//attributes/EndDate
PARTY_LOCALIZATION_CODE | VARCHAR | | configuration/entityTypes//attributes/PartyLocalizationCode
MATCH_PARTY_NAME | VARCHAR | | configuration/entityTypes//attributes/MatchPartyName
LICENSE (Reltio URI: configuration/entityTypes//attributes/License)
Column | Type | Description | Reltio Attribute URI | LOV Name
LICENSE_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
CATEGORY | VARCHAR | | configuration/entityTypes//attributes/License/attributes/Category | LKUP_IMS_LIC_CATEGORY
NUMBER | VARCHAR | State License Number (INTEGER). A unique license is listed for each license the physician holds. There is no standard format or syntax. Format examples: 18986, … . There is also no limit to the number of licenses a physician can hold in a state. Example: a physician can have an inactive resident license plus unlimited active licenses. Residents can have as many as four licenses, since some states issue licenses every year. | configuration/entityTypes//attributes/License/attributes/Number
BOARD_EXTERNAL_ID | VARCHAR | Board External ID | configuration/entityTypes//attributes/License/attributes/BoardExternalID
BOARD_CODE | VARCHAR | State License Board Code. For …, the board code will always be … | configuration/entityTypes//attributes/License/attributes/BoardCode | STLIC_BRD_CD_LOV
STATE | VARCHAR | State License State. Two character field. | configuration/entityTypes//attributes/License/attributes/ISOCountryCode | LKUP_IMS_COUNTRY_CODE
DEGREE | VARCHAR | State License Degree. A physician may hold more than one license in a given state, but not more than one MD license or more than one DO license in the same state. | configuration/entityTypes/HCP/attributes/License/attributes/Degree | LKUP_IMS_DEGREE
AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/entityTypes//attributes/License/attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS
LICENSE_NUMBER_KEY | VARCHAR | State License Number Key | configuration/entityTypes//attributes/License/attributes/LicenseNumberKey
AUTHORITY_NAME | VARCHAR | Authority Name | configuration/entityTypes//attributes/License/attributes/AuthorityName
PROFESSION_CODE | VARCHAR | Profession | configuration/entityTypes//attributes/License/attributes/ProfessionCode | LKUP_IMS_PROFESSION
TYPE_ID | VARCHAR | Authorization Type id | configuration/entityTypes//attributes/License/attributes/TypeId
TYPE | VARCHAR | State License Type. U = Unlimited: there is no restriction on the physician to practice medicine; … implies restrictions of some sort (for example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license, in state or private hospitals or other settings); T = Temporary: issued to a physician temporarily practicing in an underserved area outside his/her state of licensure, and also granted between board meetings when new licenses are issued; the span of a temporary license varies from state to state, and temporary licenses typically expire … from the date they are issued; R = Resident: license granted to a physician in graduate medical education (e.g., residency training). | configuration/entityTypes//attributes/License/attributes/Type | LKUP_IMS_LICENSE_TYPE
PRIVILEGE_ID | VARCHAR | License Privilege | configuration/entityTypes//attributes/License/attributes/PrivilegeId
PRIVILEGE_NAME | VARCHAR | License Privilege Name | configuration/entityTypes//attributes/License/attributes/PrivilegeName
PRIVILEGE_RANK | VARCHAR | License Privilege Rank | configuration/entityTypes//attributes/License/attributes/PrivilegeRank
STATUS | VARCHAR | State License Status. A = Active: the physician is licensed to practice within the state; I = Inactive: the physician has not re-registered a state license OR the license has been suspended or revoked by …; X = Unknown: the state has not provided current information. Note: some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in … | |
DEACTIVATION_REASON_CODE | VARCHAR | | configuration/entityTypes//attributes/License/attributes/DeactivationReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
EXPIRATION_DATE | DATE | | configuration/entityTypes//attributes/License/attributes/ExpirationDate
ISSUE_DATE | DATE | State License Issue Date | configuration/entityTypes//attributes/License/attributes/IssueDate
BRD_DATE | DATE | State License as-of date or pull date. The as-of date (or stamp date) is the date the current license file is provided to the Database … | configuration/entityTypes/HCP/attributes/License/attributes/BrdDate
SAMPLE_ELIGIBILITY | VARCHAR | | configuration/entityTypes/HCP/attributes/License/attributes/SampleEligibility
SOURCE_CD | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | configuration/entityTypes//attributes/License/attributes/SourceCD
RANK | VARCHAR | License Rank | configuration/entityTypes//attributes/License/attributes/Rank
CERTIFICATION | VARCHAR | Certification | configuration/entityTypes//attributes/License/attributes/Certification
REQ_SAMPL_NON_CTRL | VARCHAR | Request Samples Non-Controlled | configuration/entityTypes//attributes/License/attributes/ReqSamplNonCtrl
REQ_SAMPL_CTRL | VARCHAR | Request Samples Controlled | configuration/entityTypes//attributes/License/attributes/ReqSamplCtrl
RECV_SAMPL_NON_CTRL | VARCHAR | Receives Samples Non-Controlled | configuration/entityTypes//attributes/License/attributes/RecvSamplNonCtrl
RECV_SAMPL_CTRL | VARCHAR | Receives Samples Controlled | configuration/entityTypes//attributes/License/attributes/RecvSamplCtrl
DISTR_SAMPL_NON_CTRL | VARCHAR | Distribute Samples Non-Controlled | configuration/entityTypes//attributes/License/attributes/DistrSamplNonCtrl
DISTR_SAMPL_CTRL | VARCHAR | Distribute Samples Controlled | configuration/entityTypes//attributes/License/attributes/DistrSamplCtrl
SAMP_DRUG_SCHED_I_FLAG | VARCHAR | Sample Drug Schedule I flag | configuration/entityTypes//attributes/License/attributes/SampDrugSchedIFlag
SAMP_DRUG_SCHED_II_FLAG | VARCHAR | Sample Drug Schedule II flag | configuration/entityTypes//attributes/License/attributes/SampDrugSchedIIFlag
SAMP_DRUG_SCHED_III_FLAG | VARCHAR | Sample Drug Schedule III flag | configuration/entityTypes//attributes/License/attributes/SampDrugSchedIIIFlag
SAMP_DRUG_SCHED_IV_FLAG | VARCHAR | Sample Drug Schedule IV flag | configuration/entityTypes//attributes/License/attributes/SampDrugSchedIVFlag
SAMP_DRUG_SCHED_V_FLAG | VARCHAR | Sample Drug Schedule V flag | configuration/entityTypes//attributes/License/attributes/SampDrugSchedVFlag
SAMP_DRUG_SCHED_VI_FLAG | VARCHAR | Sample Drug Schedule VI flag | configuration/entityTypes//attributes/License/attributes/SampDrugSchedVIFlag
PRESCR_NON_CTRL_FLAG | VARCHAR | Prescribe Non-controlled flag | configuration/entityTypes//attributes/License/attributes/PrescrNonCtrlFlag
PRESCR_APP_REQ_NON_CTRL_FLAG | VARCHAR | Prescribe Application Request for Non-controlled Substances
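The LICENSE columns above are flattened from a nested Reltio attribute, so one profile can produce many LICENSE rows, one per nested value. The Python sketch below illustrates that mapping only; the entity type name (HCP), the simplified payload shape and the handful of sub-attributes shown are assumptions for illustration, not the HUB's actual mapping code.

import json

# Minimal sketch (not the HUB's actual mapper): how a nested Reltio "License"
# attribute could map onto the flat LICENSE columns above. The entity type
# name ("HCP"), the simplified payload shape and the sample values are
# assumptions for illustration.
reltio_entity = {
    "type": "configuration/entityTypes/HCP",
    "attributes": {
        "License": [{
            "value": {
                "Number": [{"value": "18986"}],
                "State": [{"value": "NY"}],
                "Type": [{"value": "U"}],     # U = Unlimited, see TYPE above
                "Status": [{"value": "A"}],   # A = Active, see STATUS above
            }
        }]
    },
}

def first(nested: dict, name: str):
    """Return the first value of a sub-attribute, or None when it is absent."""
    values = nested.get(name, [])
    return values[0]["value"] if values else None

# One relational LICENSE row per nested License value.
rows = [
    {
        "NUMBER": first(lic["value"], "Number"),
        "STATE": first(lic["value"], "State"),
        "TYPE": first(lic["value"], "Type"),
        "STATUS": first(lic["value"], "Status"),
    }
    for lic in reltio_entity["attributes"]["License"]
]
print(json.dumps(rows, indent=2))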
Flagconfiguration/entityTypes//attributes/License/attributes/PrescrAppReqNonCtrlFlagPRESCR_CTRL_FLAGVARCHARPrescribe Controlled flagconfiguration/entityTypes//attributes/License/attributes/PrescrCtrlFlagPRESCR_APP_REQ_CTRL_FLAGVARCHARPrescribe Application Request for Controlled Substances Flagconfiguration/entityTypes//attributes/License/attributes/PrescrAppReqCtrlFlagPRESCR_DRUG_SCHED_I_FLAGVARCHARPrescrDrugSchedIFlagconfiguration/entityTypes//attributes/License/attributes/PrescrDrugSchedIFlagPRESCR_DRUG_SCHED_II_FLAGVARCHARPrescribe Schedule II Flagconfiguration/entityTypes//attributes/License/attributes/PrescrDrugSchedIIFlagPRESCR_DRUG_SCHED_III_FLAGVARCHARPrescribe Schedule III Flagconfiguration/entityTypes//attributes/License/attributes/PrescrDrugSchedIIIFlagPRESCR_DRUG_SCHED_IV_FLAGVARCHARPrescribe Schedule IV Flagconfiguration/entityTypes//attributes/License/attributes/PrescrDrugSchedIVFlagPRESCR_DRUG_SCHED_V_FLAGVARCHARPrescribe Schedule V Flagconfiguration/entityTypes//attributes/License/attributes/PrescrDrugSchedVFlagPRESCR_DRUG_SCHED_VI_FLAGVARCHARPrescribe Schedule /entityTypes//attributes/License/attributes/PrescrDrugSchedVIFlagSUPERVISORY_REL_CD_NON_CTRLVARCHARSupervisory Relationship for Non-Controlled Substancesconfiguration/entityTypes//attributes/License/attributes/SupervisoryRelCdNonCtrlSUPERVISORY_REL_CD_CTRLVARCHARSupervisoryRelCdCtrlconfiguration/entityTypes//attributes/License/attributes/SupervisoryRelCdCtrlCOLLABORATIVE_NONCTRLVARCHARCollaboration for /entityTypes//attributes/License/attributes/CollaborativeNonctrlCOLLABORATIVE_CTRLVARCHARCollaboration for Controlled Substancesconfiguration/entityTypes//attributes/License/attributes/CollaborativeCtrlINCLUSIONARYVARCHARInclusionaryconfiguration/entityTypes//attributes/License/attributes/InclusionaryEXCLUSIONARYVARCHARExclusionaryconfiguration/entityTypes//attributes/License/attributes/ExclusionaryDELEGATION_NON_CTRLVARCHARDelegationNonCtrlconfiguration/entityTypes//attributes/License/attributes/DelegationNonCtrlDELEGATION_CTRLVARCHARDelegation for /entityTypes//attributes/License/attributes/DelegationCtrlDISCIPLINARY_ACTION_STATUSVARCHARconfiguration/entityTypes//attributes/License/attributes/DisciplinaryActionStatusADDRESSReltio URI: configuration/entityTypes//attributes/Address, configuration/entityTypes//attributes/AddressMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePRIMARY_AFFILIATIONVARCHARconfiguration/relationTypes//attributes/PrimaryAffiliation, configuration/relationTypes//attributes/PrimaryAffiliationLKUP_IMS_YES_NOSOURCE_ADDRESS_IDVARCHARconfiguration/relationTypes//attributes/SourceAddressID, configuration/relationTypes//attributes/SourceAddressIDADDRESS_TYPEVARCHARconfiguration/relationTypes//attributes/, configuration/relationTypes//attributes/AddressTypeLKUP_IMS_ADDR_TYPECARE_OFVARCHARconfiguration/relationTypes//attributes/CareOf, configuration/relationTypes//attributes/CareOfPRIMARYBOOLEANconfiguration/relationTypes//attributes/Primary, configuration/relationTypes//attributes/PrimaryADDRESS_RANKVARCHARconfiguration/relationTypes//attributes/AddressRank, configuration/relationTypes//attributes/AddressRankSOURCE_NAMEVARCHARconfiguration/relationTypes//attributes/SourceAddressInfo/attributes/SourceName, 
configuration/relationTypes//attributes/SourceAddressInfo/attributes/SourceNameSOURCE_LOCATION_IDVARCHARconfiguration/relationTypes//attributes/SourceAddressInfo/attributes/SourceLocationId, configuration/relationTypes//attributes/SourceAddressInfo/attributes/SourceLocationIdADDRESS_LINE1VARCHARconfiguration/entityTypes/Location/attributes/AddressLine1, configuration/entityTypes/Location/attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes/Location/attributes/AddressLine2, configuration/entityTypes/Location/attributes/AddressLine2ADDRESS_LINE3VARCHARAddressLine3configuration/entityTypes/Location/attributes/AddressLine3, configuration/entityTypes/Location/attributes/AddressLine3ADDRESS_LINE4VARCHARAddressLine4configuration/entityTypes/Location/attributes/AddressLine4, configuration/entityTypes/Location/attributes/AddressLine4PREMISEVARCHARconfiguration/entityTypes/Location/attributes/Premise, configuration/entityTypes/Location/attributes/PremiseSTREETVARCHARconfiguration/entityTypes/Location/attributes/Street, configuration/entityTypes/Location/attributes/StreetFLOORVARCHARN/Aconfiguration/entityTypes/Location/attributes/Floor, configuration/entityTypes/Location/attributes/FloorBUILDINGVARCHARN/Aconfiguration/entityTypes/Location/attributes/Building, configuration/entityTypes/Location/attributes/BuildingCITYVARCHARconfiguration/entityTypes/Location/attributes/City, configuration/entityTypes/Location/attributes/CitySTATE_PROVINCEVARCHARconfiguration/entityTypes/Location/attributes/StateProvince, configuration/entityTypes/Location/attributes/StateProvinceSTATE_PROVINCE_CODEVARCHARconfiguration/entityTypes/Location/attributes/StateProvinceCode, configuration/entityTypes/Location/attributes/StateProvinceCodeLKUP_IMS_STATE_CODEPOSTAL_CODEVARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/, configuration/entityTypes/Location/attributes/Zip/attributes/PostalCodeZIP5VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip5, configuration/entityTypes/Location/attributes/Zip/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip4, configuration/entityTypes/Location/attributes/Zip/attributes/Zip4COUNTRYVARCHARconfiguration/entityTypes/Location/attributes/CountryLKUP_IMS_COUNTRY_CODECBSA_CODEVARCHARCore Based Statistical Areaconfiguration/entityTypes/Location/attributes/CBSACode, configuration/entityTypes/Location/attributes/CBSACodeCBSA_CDFIPS_COUNTY_CODEVARCHARFIPS county Codeconfiguration/entityTypes/Location/attributes/FIPSCountyCode, configuration/entityTypes/Location/attributes/FIPSCountyCodeFIPS_STATE_CODEVARCHARFIPS State Codeconfiguration/entityTypes/Location/attributes/, configuration/entityTypes/Location/attributes/FIPSStateCodeDPVVARCHARUSPS delivery point validation. 
R = Range Check; C = Clerk; F = Formally Valid; V = /entityTypes/Location/attributes/DPV, configuration/entityTypes/Location/attributes/DPVMSAVARCHARMetropolitan Statistical Area for a businessconfiguration/entityTypes/Location/attributes/, configuration/entityTypes/Location/attributes/MSALATITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LatitudeLONGITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LongitudeGEO_ACCURACYVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracyGEO_CODING_SYSTEMVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystemADDRESS_INPUTVARCHARconfiguration/entityTypes/Location/attributes/AddressInput, configuration/entityTypes/Location/attributes/AddressInputSUB_ADMINISTRATIVE_AREAVARCHARThis field holds the smallest geographic data element within a country. For instance, , configuration/entityTypes/Location/attributes/SubAdministrativeAreaPOSTAL_CITYVARCHARconfiguration/entityTypes/Location/attributes/PostalCity, configuration/entityTypes/Location/attributes/PostalCityLOCALITYVARCHARThis field holds the most common population center data element within a country. For instance, , nfiguration/entityTypes/Location/attributes/Locality, configuration/entityTypes/Location/attributes/LocalityVERIFICATION_STATUSVARCHARconfiguration/entityTypes/Location/attributes/, configuration/entityTypes/Location/attributes/VerificationStatusSTATUS_CHANGE_DATEDATEStatus Change Dateconfiguration/entityTypes/Location/attributes/StatusChangeDate, configuration/entityTypes/Location/attributes/StatusChangeDateADDRESS_STATUSVARCHARStatus of the Addressconfiguration/entityTypes/Location/attributes/, configuration/entityTypes/Location/attributes/AddressStatusACTIVE_ADDRESSBOOLEANconfiguration/relationTypes//attributes/Active, configuration/relationTypes//attributes/ActiveLOC_CONF_INDVARCHARconfiguration/relationTypes//attributes/LocConfInd, configuration/relationTypes//attributes/LocConfIndLKUP_IMS_LOCATION_CONFIDENCEBEST_RECORDVARCHARconfiguration/relationTypes//attributes/, configuration/relationTypes//attributes/BestRecordRELATION_STATUS_CHANGE_DATEDATEconfiguration/relationTypes//attributes/RelationStatusChangeDate, configuration/relationTypes//attributes/RelationStatusChangeDateVALIDATION_STATUSVARCHARValidation status of the Address. 
When Addresses are merged, the loser Address is set to , configuration/relationTypes//attributes/ValidationStatusLKUP_IMS_VAL_STATUSSTATUSVARCHARconfiguration/relationTypes//attributes/Status, configuration/relationTypes//attributes/StatusLKUP_IMS_ADDR_STATUSHCO_NAMEVARCHARconfiguration/relationTypes//attributes/, configuration/relationTypes//attributes/HcoNameMAIN_HCO_NAMEVARCHARconfiguration/relationTypes//attributes/MainHcoName, configuration/relationTypes//attributes/MainHcoNameBUILD_LABELVARCHARconfiguration/relationTypes//attributes/BuildLabel, configuration/relationTypes//attributes/BuildLabelPO_BOXVARCHARconfiguration/relationTypes//attributes/POBox, configuration/relationTypes//attributes/POBoxVALIDATION_REASONVARCHARconfiguration/relationTypes//attributes/, configuration/relationTypes//attributes/ValidationReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEconfiguration/relationTypes//attributes/ValidationChangeDate, configuration/relationTypes//attributes/ValidationChangeDateSTATUS_REASON_CODEVARCHARconfiguration/relationTypes//attributes/StatusReasonCode, configuration/relationTypes//attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEPRIMARY_MAILBOOLEANconfiguration/relationTypes//attributes/PrimaryMail, configuration/relationTypes//attributes/PrimaryMailVISIT_ACTIVITYVARCHARconfiguration/relationTypes//attributes/VisitActivity, configuration/relationTypes//attributes/VisitActivityDERIVED_ADDRESSVARCHARconfiguration/relationTypes//attributes/derivedAddress, configuration/relationTypes//attributes/derivedAddressNEIGHBORHOODVARCHARconfiguration/entityTypes/Location/attributes/Neighborhood, configuration/entityTypes/Location/attributes/NeighborhoodAVCVARCHARconfiguration/entityTypes/Location/attributes/, configuration/entityTypes/Location/attributes/AVCCOUNTRY_CODEVARCHARconfiguration/entityTypes/Location/attributes/CountryLKUP_IMS_COUNTRY_CODEGEO_ITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LatitudeGEO_LOCATION.LONGITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LongitudeGEO_O_ACCURACYVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracyGEO_O_CODING_SYSTEMVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystemADDRESS_PHONEReltio URI: configuration/relationTypes//attributes/Phone, configuration/relationTypes//attributes/PhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionPHONE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPE_IMSVARCHARconfiguration/relationTypes//attributes/Phone/attributes/TypeIMS, configuration/relationTypes//attributes/Phone/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPENUMBERVARCHARconfiguration/relationTypes//attributes/Phone/attributes/Number, configuration/relationTypes//attributes/Phone/attributes/NumberEXTENSIONVARCHARconfiguration/relationTypes//attributes/Phone/attributes/Extension, configuration/relationTypes//attributes/Phone/attributes/ExtensionRANKVARCHARconfiguration/relationTypes//attributes/Phone/attributes/Rank, configuration/relationTypes//attributes/Phone/attributes/RankACTIVE_ADDRESS_PHONEBOOLEANconfiguration/relationTypes//attributes/Phone/attributes/Active, 
configuration/relationTypes//attributes/Phone/attributes/Active
BEST_PHONE_INDICATOR | VARCHAR | | configuration/relationTypes//attributes/Phone/attributes/BestPhoneIndicator, configuration/relationTypes//attributes/Phone/attributes/BestPhoneIndicator
ADDRESS_DEA (Reltio URI: configuration/relationTypes//attributes/…, configuration/relationTypes//attributes/…)
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | generated key description | |
DEA_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
NUMBER | VARCHAR | | configuration/relationTypes//attributes//attributes/Number, configuration/relationTypes//attributes//attributes/Number
EXPIRATION_DATE | DATE | | configuration/relationTypes//attributes//attributes/ExpirationDate, configuration/relationTypes//attributes//attributes/ExpirationDate
STATUS | VARCHAR | | configuration/relationTypes//attributes//attributes/Status, configuration/relationTypes//attributes//attributes/Status | LKUP_IMS_IDENTIFIER_STATUS
DRUG_SCHEDULE | VARCHAR | | configuration/relationTypes//attributes//attributes/DrugSchedule, configuration/relationTypes//attributes//attributes/DrugSchedule
BUSINESS_ACTIVITY_CODE | VARCHAR | Business Activity Code | configuration/relationTypes//attributes//attributes/BusinessActivityCode, configuration/relationTypes//attributes//attributes/BusinessActivityCode
SUB_BUSINESS_ACTIVITY_CODE | VARCHAR | Sub Business Activity Code | configuration/relationTypes//attributes//attributes/SubBusinessActivityCode, configuration/relationTypes//attributes//attributes/SubBusinessActivityCode
DEA_CHANGE_REASON_CODE | VARCHAR | DEA Change Reason Code | configuration/relationTypes//attributes//attributes/DEAChangeReasonCode, configuration/relationTypes//attributes//attributes/DEAChangeReasonCode | LKUP_IMS_SRC_DEACTIVE_REASON_CODE
AUTHORIZATION_STATUS | VARCHAR | Authorization Status | configuration/relationTypes//attributes//attributes/…, configuration/relationTypes//attributes//attributes/AuthorizationStatus | LKUP_IMS_IDENTIFIER_STATUS
ADDRESS_OFFICE_INFORMATION (Reltio URI: configuration/relationTypes//attributes/OfficeInformation, configuration/relationTypes//attributes/OfficeInformation, Materialized: no)
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDRESS_URI | VARCHAR | generated key description | |
OFFICE_INFORMATION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
BEST_TIMES | VARCHAR | | configuration/relationTypes//attributes/OfficeInformation/attributes/…, configuration/relationTypes//attributes/OfficeInformation/attributes/BestTimes
APPT_REQUIRED | BOOLEAN | | configuration/relationTypes//attributes/OfficeInformation/attributes/…, configuration/relationTypes//attributes/OfficeInformation/attributes/ApptRequired
OFFICE_NOTES | VARCHAR | | configuration/relationTypes//attributes/OfficeInformation/attributes/OfficeNotes, configuration/relationTypes//attributes/OfficeInformation/attributes/OfficeNotes
SPECIALITIES (Reltio URI: configuration/entityTypes//attributes/…, configuration/entityTypes//attributes/Specialities, Materialized: no)
Column | Type | Description | Reltio Attribute URI | LOV Name
SPECIALITIES_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SPECIALTY_TYPE | VARCHAR | | configuration/entityTypes//attributes/Specialities/attributes/SpecialtyType, configuration/entityTypes//attributes/Specialities/attributes/SpecialtyType | LKUP_IMS_SPECIALTY_TYPE
SPECIALTY | VARCHAR | | configuration/entityTypes//attributes/Specialities/attributes/Specialty,
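Every nested-attribute table in this dictionary (ADDRESS_DEA, ADDRESS_OFFICE_INFORMATION, SPECIALITIES, and so on) carries a generated *_URI key of its own plus the parent ENTITY_URI, so consumers typically join it back to the main profile table. A minimal sketch of that join pattern follows; the table names (HCP, HCP_SPECIALITIES), the connection parameters and the literal flag value are assumptions for illustration, not the published schema.

import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical table names; the join key pattern (ENTITY_URI) is the point.
QUERY = """
    SELECT s.ENTITY_URI, s.SPECIALTY_TYPE, s.SPECIALTY
    FROM HCP_SPECIALITIES s
    JOIN HCP h ON h.ENTITY_URI = s.ENTITY_URI
    WHERE s.ACTIVE = 'true'      -- actual ACTIVE flag values may differ
      AND h.COUNTRY = %s
"""

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="MY_WH", database="MY_DB", schema="MY_SCHEMA",
)
try:
    cur = conn.cursor()
    try:
        cur.execute(QUERY, ("US",))
        for entity_uri, specialty_type, specialty in cur.fetchall():
            print(entity_uri, specialty_type, specialty)
    finally:
        cur.close()
finally:
    conn.close()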
configuration/entityTypes//attributes/Specialities/attributes/SpecialtyLKUP_IMS_SPECIALTYRANKVARCHARSpecialty Rankconfiguration/entityTypes//attributes/Specialities/attributes/Rank, configuration/entityTypes//attributes/Specialities/attributes/RankDESCVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Specialities/attributes/DescGROUPVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/Group, configuration/entityTypes//attributes/Specialities/attributes/GroupSOURCE_CDVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Specialities/attributes/SourceCDSPECIALTY_DETAILVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/, configuration/entityTypes//attributes/Specialities/attributes/SpecialtyDetailPROFESSION_CODEVARCHARProfessionconfiguration/entityTypes//attributes/Specialities/attributes/ProfessionCodeLKUP_IMS_PROFESSIONPRIMARY_SPECIALTY_FLAGBOOLEANconfiguration/entityTypes//attributes/Specialities/attributes/PrimarySpecialtyFlag, configuration/entityTypes//attributes/Specialities/attributes/PrimarySpecialtyFlagSORT_ORDERVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/, configuration/entityTypes//attributes/Specialities/attributes/SortOrderBEST_RECORDVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/, configuration/entityTypes//attributes/Specialities/attributes//entityTypes//attributes/Specialities/attributes/, configuration/entityTypes//attributes/Specialities/attributes/SubSpecialtyLKUP_IMS_SPECIALTYSUB_SPECIALTY_RANKVARCHARSubSpecialty Rankconfiguration/entityTypes//attributes/Specialities/attributes/SubSpecialtyRank, configuration/entityTypes//attributes/Specialities/attributes/SubSpecialtyRankTRUSTED_INDICATORVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/, configuration/entityTypes//attributes/Specialities/attributes/TrustedIndicatorLKUP_IMS_YES_NORAW_SPECIALTYVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/, configuration/entityTypes//attributes/Specialities/attributes/RawSpecialtyRAW_SPECIALTY_DESCRIPTIONVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/RawSpecialtyDescription, configuration/entityTypes//attributes/Specialities/attributes/RawSpecialtyDescriptionIDENTIFIERSReltio URI: configuration/entityTypes//attributes/Identifiers, configuration/entityTypes//attributes/: URILOV NameIDENTIFIERS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeTYPEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/Type, configuration/entityTypes//attributes/Identifiers/attributes/,/entityTypes//attributes/Identifiers/attributes/ID, configuration/entityTypes//attributes/Identifiers/attributes/IDORDERVARCHARDisplays the order of priority for an for those facilities that share an . Valid values are: P ?the on a business record is the primary identifier for the business and O ?the is a secondary identifier. (Using P for the supports aggregating clinical volumes and avoids double counting).configuration/entityTypes//attributes/Identifiers/attributes/Order, configuration/entityTypes//attributes/Identifiers/attributes/OrderCATEGORYVARCHARAdditional information about the identifer. For a identifer, the subcategory code (e.g. , , ). For a identifier, contains the activity code (e.g. 
M for Mid Level Practitioner)configuration/entityTypes//attributes/Identifiers/attributes/Category, configuration/entityTypes//attributes/Identifiers/attributes/CategoryLKUP_IMS_IDENTIFIERS_CATEGORYSTATUSVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/Status, configuration/entityTypes//attributes/Identifiers/attributes/StatusLKUP_IMS_IDENTIFIER_STATUSAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/entityTypes//attributes/Identifiers/attributes/, configuration/entityTypes//attributes/Identifiers/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSDEACTIVATION_REASON_CODEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/, configuration/entityTypes//attributes/Identifiers/attributes/DeactivationReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEDEACTIVATION_DATEDATEconfiguration/entityTypes//attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes//attributes/Identifiers/attributes/DeactivationDateREACTIVATION_DATEDATEconfiguration/entityTypes//attributes/Identifiers/attributes/ReactivationDate, configuration/entityTypes//attributes/Identifiers/attributes/ReactivationDateNATIONAL_ID_ATTRIBUTEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/NationalIdAttribute, configuration/entityTypes//attributes/Identifiers/attributes/NationalIdAttributeAMAMDDO_FLAGVARCHARAMA -DO Flagconfiguration/entityTypes//attributes/Identifiers/attributes/AMAMDDOFlagMAJOR_PROF_ACTVARCHARMajor /entityTypes//attributes/Identifiers/attributes/MajorProfActHOSPITAL_HOURSVARCHARHospitalHoursconfiguration/entityTypes//attributes/Identifiers/attributes/HospitalHoursAMA_HOSPITAL_IDVARCHARAMAHospitalIDconfiguration/entityTypes//attributes/Identifiers/attributes/AMAHospitalIDPRACTICE_TYPE_CODEVARCHARPracticeTypeCodeconfiguration/entityTypes//attributes/Identifiers/attributes/PracticeTypeCodeEMPLOYMENT_TYPE_CODEVARCHAREmploymentTypeCodeconfiguration/entityTypes//attributes/Identifiers/attributes/EmploymentTypeCodeBIRTH_CITYVARCHARBirthCityconfiguration/entityTypes//attributes/Identifiers/attributes/BirthCityBIRTH_STATEVARCHARBirthStateconfiguration/entityTypes//attributes/Identifiers/attributes/BirthStateBIRTH_COUNTRYVARCHARBirthCountryconfiguration/entityTypes//attributes/Identifiers/attributes/BirthCountryMEDICAL_SCHOOLVARCHARMedicalSchoolconfiguration/entityTypes//attributes/Identifiers/attributes/MedicalSchoolGRADUATION_YEARVARCHARGraduationYearconfiguration/entityTypes//attributes/Identifiers/attributes/GraduationYearNUM_OF_PYSICIANSVARCHARNumOfPysiciansconfiguration/entityTypes//attributes/Identifiers/attributes/NumOfPysiciansSTATEVARCHARLicenseStateconfiguration/entityTypes//attributes/Identifiers/attributes/State, configuration/entityTypes//attributes/Identifiers/attributes/StateLKUP_IMS_STATE_CODETRUSTED_INDICATORVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/, configuration/entityTypes//attributes/Identifiers/attributes/TrustedIndicatorLKUP_IMS_YES_NOHARD_LINK_INDICATORVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/HardLinkIndicator, configuration/entityTypes//attributes/Identifiers/attributes/HardLinkIndicatorLKUP_IMS_YES_NOLAST_VERIFICATION_STATUSVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/LastVerificationStatus, configuration/entityTypes//attributes/Identifiers/attributes/LastVerificationStatusLAST_VERIFICATION_DATEDATEconfiguration/entityTypes//attributes/Identifiers/attributes/LastVerificationDate, 
configuration/entityTypes//attributes/Identifiers/attributes/LastVerificationDateACTIVATION_DATEDATEconfiguration/entityTypes//attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes//attributes/Identifiers/attributes/ActivationDateSPEAKERReltio URI: configuration/entityTypes//attributes/SpeakerMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPEAKER_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeIS_SPEAKERBOOLEANconfiguration/entityTypes//attributes/Speaker/attributes/IsSpeakerIS_COMPANY_APPROVED_SPEAKERBOOLEANAttribute to track if an is a COMPANY approved speakerconfiguration/entityTypes//attributes/Speaker/attributes/IsCOMPANYApprovedSpeakerLAST_BRIEFING_DATEDATETrack that the received the briefing/training to be certified as an approved /entityTypes//attributes/Speaker/attributes/LastBriefingDateSPEAKER_STATUSVARCHARconfiguration/entityTypes//attributes/Speaker/attributes/SpeakerStatusLKUP_SPEAKERSTATUSSPEAKER_TYPEVARCHARconfiguration/entityTypes//attributes/Speaker/attributes/SpeakerTypeLKUP_SPEAKERTYPESPEAKER_LEVELVARCHARconfiguration/entityTypes//attributes/Speaker/attributes/SpeakerLevelLKUP_SPEAKERLEVELHCP_WORKPLACE_MAIN_HCOReltio URI: configuration/entityTypes//attributes/MainHCOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameWORKPLACE_URIVARCHARgenerated key descriptionMAINHCO_URIVARCHARgenerated key CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAMEVARCHARNameconfiguration/entityTypes//attributes/NameOTHER_NAMESVARCHAROther Namesconfiguration/entityTypes//attributes/OtherNamesTYPE_CODEVARCHARCustomer Typeconfiguration/entityTypes//attributes/TypeCodeLKUP_IMS_HCO_CUST_TYPESOURCE_IDVARCHARSource IDconfiguration/entityTypes//attributes/SourceIDVALIDATION_STATUSVARCHARconfiguration/relationTypes/I/attributes/ValidationStatusLKUP_IMS_VAL_STATUSVALIDATION_CHANGE_DATEDATEconfiguration/relationTypes/I/attributes/ValidationChangeDateAFFILIATION_STATUSVARCHARconfiguration/relationTypes/I/attributes/AffiliationStatusLKUP_IMS_STATUSCOUNTRYVARCHARCountry Codeconfiguration/relationTypes/I/attributes/CountryLKUP_IMS_COUNTRY_CODEHCP_WORKPLACE_MAIN_HCO_CLASSOF_TRADE_NReltio URI: configuration/entityTypes//attributes/ClassofTradeNMaterialized: URILOV NameWORKPLACE_URIVARCHARgenerated key descriptionMAINHCO_URIVARCHARgenerated key descriptionCLASSOFTRADEN_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypePRIORITYVARCHARNumeric code for the primary class of tradeconfiguration//attributes/ClassofTradeN/attributes/PriorityCLASSIFICATIONVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/ClassificationLKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATIONFACILITY_TYPEVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/FacilityTypeLKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPESPECIALTYVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/SpecialtyLKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTYHCP_MAIN_WORKPLACE_CLASSOF_TRADE_NReltio URI: configuration/entityTypes//attributes/ClassofTradeNMaterialized: NameMAINWORKPLACE_URIVARCHARgenerated key descriptionCLASSOFTRADEN_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypePRIORITYVARCHARNumeric code for the primary class of 
tradeconfiguration//attributes/ClassofTradeN/attributes/PriorityCLASSIFICATIONVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/ClassificationLKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATIONFACILITY_TYPEVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/FacilityTypeLKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPESPECIALTYVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/SpecialtyLKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTYPHONEReltio URI: configuration/entityTypes//attributes/Phone, configuration/entityTypes//attributes/PhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeTYPE_IMSVARCHARconfiguration/entityTypes//attributes/Phone/attributes/TypeIMS, configuration/entityTypes//attributes/Phone/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPENUMBERVARCHARconfiguration/entityTypes//attributes/Phone/attributes/Number, configuration/entityTypes//attributes/Phone/attributes/NumberEXTENSIONVARCHARconfiguration/entityTypes//attributes/Phone/attributes/Extension, configuration/entityTypes//attributes/Phone/attributes/ExtensionRANKVARCHARconfiguration/entityTypes//attributes/Phone/attributes/Rank, configuration/entityTypes//attributes/Phone/attributes/RankCOUNTRY_CODEVARCHARconfiguration/entityTypes//attributes/Phone/attributes/CountryCode, configuration/entityTypes//attributes/Phone/attributes/CountryCodeLKUP_IMS_COUNTRY_CODEAREA_CODEVARCHARconfiguration/entityTypes//attributes/Phone/attributes/AreaCode, configuration/entityTypes//attributes/Phone/attributes/AreaCodeLOCAL_NUMBERVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/LocalNumberFORMATTED_NUMBERVARCHARFormatted number of the phoneconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/FormattedNumberVALIDATION_STATUSVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/ValidationStatusVALIDATION_DATEDATEconfiguration/entityTypes//attributes/Phone/attributes/ValidationDate, configuration/entityTypes//attributes/Phone/attributes/ValidationDateLINE_TYPEVARCHARconfiguration/entityTypes//attributes/Phone/attributes/LineType, configuration/entityTypes//attributes/Phone/attributes/LineTypeFORMAT_MASKVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/FormatMaskDIGIT_COUNTVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/DigitCountGEO_AREAVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/GeoAreaGEO_COUNTRYVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/GeoCountryDQ_CODEVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/DQCodeACTIVE_PHONEBOOLEANDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Phone/attributes/ActiveBEST_PHONE_INDICATORVARCHARconfiguration/entityTypes//attributes/Phone/attributes/BestPhoneIndicator, configuration/entityTypes//attributes/Phone/attributes/BestPhoneIndicatorPHONE_SOURCE_DATAReltio URI: configuration/entityTypes//attributes/Phone/attributes/SourceData, 
configuration/entityTypes//attributes/Phone/attributes/SourceDataMaterialized: NamePHONE_URIVARCHARgenerated key descriptionSOURCE_DATA_URIVARCHARgenerated key CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDATASET_IDENTIFIERVARCHARconfiguration/entityTypes//attributes/Phone/attributes//attributes/, configuration/entityTypes//attributes/Phone/attributes//attributes/DatasetIdentifierDATASET_PARTY_IDENTIFIERVARCHARconfiguration/entityTypes//attributes/Phone/attributes//attributes/, configuration/entityTypes//attributes/Phone/attributes//attributes/DatasetPartyIdentifierDATASET_PHONE_TYPEVARCHARconfiguration/entityTypes//attributes/Phone/attributes//attributes/DatasetPhoneType, configuration/entityTypes//attributes/Phone/attributes//attributes/DatasetPhoneTypeLKUP_IMS_COMMUNICATION_TYPERAW_DATASET_PHONE_TYPEVARCHARconfiguration/entityTypes//attributes/Phone/attributes//attributes/RawDatasetPhoneType, configuration/entityTypes//attributes/Phone/attributes//attributes/RawDatasetPhoneTypeBEST_PHONE_INDICATORVARCHARconfiguration/entityTypes//attributes/Phone/attributes//attributes/BestPhoneIndicator, configuration/entityTypes//attributes/Phone/attributes//attributes/BestPhoneIndicatorEMAILReltio URI: configuration/entityTypes//attributes/Email, configuration/entityTypes//attributes/EmailMaterialized: NameEMAIL_URIVARCHARgenerated key CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPE_IMSVARCHARconfiguration/entityTypes//attributes//attributes/TypeIMS, configuration/entityTypes//attributes//attributes/TypeIMSLKUP_IMS_EMAIL_TYPEEMAILVARCHARconfiguration/entityTypes//attributes//attributes/Email, configuration/entityTypes//attributes//attributes/EmailDOMAINVARCHARconfiguration/entityTypes//attributes//attributes/Domain, configuration/entityTypes//attributes//attributes/DomainDOMAIN_TYPEVARCHARconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/DomainTypeUSERNAMEVARCHARconfiguration/entityTypes//attributes//attributes/Username, configuration/entityTypes//attributes//attributes/UsernameRANKVARCHARconfiguration/entityTypes//attributes//attributes/Rank, configuration/entityTypes//attributes//attributes/RankVALIDATION_STATUSVARCHARconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/ValidationStatusVALIDATION_DATEDATEconfiguration/entityTypes//attributes//attributes/ValidationDate, configuration/entityTypes//attributes//attributes/ValidationDateACTIVE_EMAIL_HCPVARCHARconfiguration/entityTypes//attributes//attributes/ActiveDQ_CODEVARCHARconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/DQCodeSOURCE_CDVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes//attributes/SourceCDACTIVE_EMAIL_HCOBOOLEANconfiguration/entityTypes//attributes//attributes/ActiveDISCLOSUREDisclosure - Reporting derived attributesReltio URI: configuration/entityTypes//attributes/Disclosure, configuration/entityTypes//attributes/DisclosureMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDISCLOSURE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeDGS_CATEGORYVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSCategory, 
configuration/entityTypes//attributes/Disclosure/attributes/DGSCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCODGS_TITLEVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLEDGS_QUALITYVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSQualityLKUP_BENEFITQUALITYDGS_SPECIALTYVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYCONTRACT_CLASSIFICATIONVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/ContractClassificationLKUP_CONTRACTCLASSIFICATIONCONTRACT_CLASSIFICATION_DATEDATEconfiguration/entityTypes//attributes/Disclosure/attributes/ContractClassificationDateMILITARYBOOLEANconfiguration/entityTypes//attributes/Disclosure/attributes/MilitaryLEGALSTATUSVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/LEGALSTATUSLKUP_LEGALSTATUSTHIRD_PARTY_VERIFYReltio URI: configuration/entityTypes//attributes/ThirdPartyVerify, configuration/entityTypes//attributes/ThirdPartyVerifyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTHIRD_PARTY_VERIFY_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSEND_FOR_VERIFYVARCHARconfiguration/entityTypes//attributes/ThirdPartyVerify/attributes/SendForVerify, configuration/entityTypes//attributes/ThirdPartyVerify/attributes/SendForVerifyLKUP_IMS_SEND_FOR_VALIDATIONVERIFY_DATEVARCHARconfiguration/entityTypes//attributes/ThirdPartyVerify/attributes/, configuration/entityTypes//attributes/ThirdPartyVerify/attributes/VerifyDatePRIVACY_PREFERENCESReltio URI: configuration/entityTypes//attributes/PrivacyPreferences, configuration/entityTypes//attributes/PrivacyPreferencesMaterialized: NamePRIVACY_PREFERENCES_URIVARCHARgenerated key CodeACTIVEVARCHARActive /entityTypes//attributes/PrivacyPreferences/attributes/OptOutOPT_OUT_START_DATEDATEconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/OptOutStartDateALLOWED_TO_CONTACTBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/AllowedToContactPHONE_OPT_OUTBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/PhoneOptOut, configuration/entityTypes//attributes/PrivacyPreferences/attributes/PhoneOptOutEMAIL_OPT_OUTBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/EmailOptOut, configuration/entityTypes//attributes/PrivacyPreferences/attributes/EmailOptOutFAX_OPT_OUTBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/FaxOptOut, configuration/entityTypes//attributes/PrivacyPreferences/attributes/FaxOptOutVISIT_OPT_OUTBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/VisitOptOut, configuration/entityTypes//attributes/PrivacyPreferences/attributes/VisitOptOutAMA_NO_CONTACTBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/AMANoContactPDRPBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/PDRPPDRP_DATEDATEconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/PDRPDateTEXT_MESSAGE_OPT_OUTBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/TextMessageOptOutMAIL_OPT_OUTBOOLEANconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/MailOptOut, configuration/entityTypes//attributes/PrivacyPreferences/attributes/MailOptOutOPT_OUT_CHANGE_DATEDATEThe date the opt out indicator was 
changed | configuration/entityTypes//attributes/PrivacyPreferences/attributes/OptOutChangeDate
REMOTE_OPT_OUT | BOOLEAN | | configuration/entityTypes//attributes/PrivacyPreferences/attributes/RemoteOptOut, configuration/entityTypes//attributes/PrivacyPreferences/attributes/RemoteOptOut
OPT_OUT_ONE_KEY | BOOLEAN | | configuration/entityTypes//attributes/PrivacyPreferences/attributes/OptOutOneKey, configuration/entityTypes//attributes/PrivacyPreferences/attributes/OptOutOneKey
OPT_OUT_SAFE_HARBOR | BOOLEAN | | configuration/entityTypes//attributes/PrivacyPreferences/attributes/OptOutSafeHarbor
KEY_OPINION_LEADER | BOOLEAN | | configuration/entityTypes//attributes/PrivacyPreferences/attributes/KeyOpinionLeader
RESIDENT_INDICATOR | BOOLEAN | | configuration/entityTypes//attributes/PrivacyPreferences/attributes/ResidentIndicator
ALLOW_SAFE_HARBOR | BOOLEAN | | configuration/entityTypes//attributes/PrivacyPreferences/attributes/AllowSafeHarbor
SANCTION (Reltio URI: configuration/entityTypes//attributes/Sanction, Materialized: no)
Column | Type | Description | Reltio Attribute URI | LOV Name
SANCTION_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
SANCTION_ID | VARCHAR | Court sanction Id for any … | configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionId
ACTION_CODE | VARCHAR | Court sanction code for a case | configuration/entityTypes//attributes/Sanction/attributes/ActionCode
ACTION_DESCRIPTION | VARCHAR | | configuration/entityTypes//attributes/Sanction/attributes/ActionDescription
BOARD_CODE | VARCHAR | Court case board id | configuration/entityTypes//attributes/Sanction/attributes/BoardCode
BOARD_DESC | VARCHAR | Court case board description | configuration/entityTypes//attributes/Sanction/attributes/BoardDesc
ACTION_DATE | DATE | | configuration/entityTypes//attributes/Sanction/attributes/ActionDate
SANCTION_PERIOD_START_DATE | DATE | | configuration/entityTypes//attributes/Sanction/attributes/SanctionPeriodStartDate
SANCTION_PERIOD_END_DATE | DATE | | configuration/entityTypes//attributes/Sanction/attributes/SanctionPeriodEndDate
MONTH_DURATION | VARCHAR | | configuration/entityTypes//attributes/Sanction/attributes/MonthDuration
FINE_AMOUNT | VARCHAR | | configuration/entityTypes//attributes/Sanction/attributes/FineAmount
OFFENSE_CODE | VARCHAR | | configuration/entityTypes//attributes/Sanction/attributes/OffenseCode
OFFENSE_DESCRIPTION | VARCHAR | | configuration/entityTypes//attributes/Sanction/attributes/OffenseDescription
OFFENSE_DATE | DATE | | configuration/entityTypes//attributes/Sanction/attributes/OffenseDate
HCP_SANCTIONS (Reltio URI: configuration/entityTypes//attributes/Sanctions)
Column | Type | Description | Reltio Attribute URI | LOV Name
SANCTIONS_URI | VARCHAR | generated key description | |
ENTITY_URI | VARCHAR | Reltio Entity URI | |
COUNTRY | VARCHAR | Country Code | |
ACTIVE | VARCHAR | Active Flag | |
ENTITY_TYPE | VARCHAR | Reltio Entity Type | |
IDENTIFIER_TYPE | VARCHAR | | configuration/entityTypes//attributes/Sanctions/attributes/IdentifierType | LKUP_IMS_HCP_IDENTIFIER_TYPE
IDENTIFIER_ID | VARCHAR | | configuration/entityTypes//attributes/Sanctions/attributes/IdentifierID
TYPE_CODE | VARCHAR | Type of sanction/restriction for a given
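Downstream consumers are expected to honour the PRIVACY_PREFERENCES flags listed above before contacting a profile. The snippet below is an illustrative sketch of such a check only; the row shape and the exact combination of flags that should block a channel are assumptions, not MDM HUB logic.

# Illustrative only: combine a few of the PRIVACY_PREFERENCES columns above
# into a simple e-mail contactability check. Column semantics are assumed.
def may_email(row: dict) -> bool:
    """True when none of the relevant opt-out flags block e-mail contact."""
    blocked = (
        row.get("EMAIL_OPT_OUT")
        or row.get("OPT_OUT")                    # global opt-out, if populated
        or not row.get("ALLOWED_TO_CONTACT", True)
    )
    return not blocked

profiles = [
    {"ENTITY_URI": "entities/0001", "EMAIL_OPT_OUT": False, "ALLOWED_TO_CONTACT": True},
    {"ENTITY_URI": "entities/0002", "EMAIL_OPT_OUT": True,  "ALLOWED_TO_CONTACT": True},
]
print([p["ENTITY_URI"] for p in profiles if may_email(p)])   # -> ['entities/0001']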
providedconfiguration/entityTypes//attributes/Sanctions/attributes/TypeCodeLKUP_IMS_SNCTN_RSTR_ACTNDEACTIVATION_REASON_CODEVARCHARconfiguration/entityTypes//attributes/Sanctions/attributes/DeactivationReasonCodeLKUP_IMS_SNCTN_RSTR_DACT_RSNDISPOSITION_CATEGORY_CODEVARCHARconfiguration/entityTypes//attributes/Sanctions/attributes/DispositionCategoryCodeLKUP_IMS_SNCTN_RSTR_DSP_CATGEXCLUSION_CODEVARCHARconfiguration/entityTypes//attributes/Sanctions/attributes/ExclusionCodeLKUP_IMS_SNCTN_RSTR_EXCLDESCRIPTIONVARCHARconfiguration/entityTypes//attributes/Sanctions/attributes/DescriptionURLVARCHARconfiguration/entityTypes//attributes/Sanctions/attributes/URLISSUED_DATEDATEconfiguration/entityTypes//attributes/Sanctions/attributes/IssuedDateEFFECTIVE_DATEDATEconfiguration/entityTypes//attributes/Sanctions/attributes/EffectiveDateREINSTATEMENT_DATEDATEconfiguration/entityTypes//attributes/Sanctions/attributes/ReinstatementDateIS_STATE_WAIVERBOOLEANconfiguration/entityTypes//attributes/Sanctions/attributes/IsStateWaiverSTATUS_CODEVARCHARconfiguration/entityTypes//attributes/Sanctions/attributes/StatusCodeLKUP_IMS_IDENTIFIER_STATUSSOURCE_CODEVARCHARconfiguration/entityTypes//attributes/Sanctions/attributes/SourceCodeLKUP_IMS_SNCTN_RSTR_SRCPUBLICATION_DATEDATEconfiguration/entityTypes//attributes/Sanctions/attributes/PublicationDateGOVERNMENT_LEVEL_CODEVARCHARconfiguration/entityTypes//attributes/Sanctions/attributes/GovernmentLevelCodeLKUP_IMS_GOVT_LVLHCP_GSA_SANCTIONReltio URI: configuration/entityTypes//attributes/GSASanctionMaterialized: NameGSA_SANCTION_URIVARCHARgenerated key CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/SanctionIdFIRST_NAMEVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/FirstNameMIDDLE_NAMEVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/MiddleNameLAST_NAMEVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/LastNameSUFFIX_NAMEVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/SuffixNameCITYVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/CitySTATEVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/StateZIPVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/ZipACTION_DATEVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/ActionDateTERM_DATEVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/TermDateAGENCYVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/AgencyCONFIDENCEVARCHARconfiguration/entityTypes//attributes/GSASanction/attributes/ConfidenceDEGREESDO NOT USE THIS ATTRIBUTE - will be deprecatedReltio URI: configuration/entityTypes//attributes/DegreesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDEGREES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeDEGREEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Degrees/attributes/DegreeDEGREEBEST_DEGREEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Degrees/attributes/BestDegreeCERTIFICATESReltio URI: configuration/entityTypes//attributes/CertificatesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCERTIFICATES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeCERTIFICATE_IDVARCHARconfiguration/entityTypes//attributes/Certificates/attributes/CertificateIdNAMEVARCHARconfiguration/entityTypes//attributes/Certificates/attributes/NameBOARD_IDVARCHARconfiguration/entityTypes//attributes/Certificates/attributes/BoardIdBOARD_NAMEVARCHARconfiguration/entityTypes//attributes/Certificates/attributes/BoardNameINTERNAL_HCP_STATUSVARCHARconfiguration/entityTypes//attributes/Certificates/attributes/InternalHCPStatusINTERNAL_HCP_INACTIVE_REASON_CODEVARCHARconfiguration/entityTypes//attributes/Certificates/attributes/InternalHCPInactiveReasonCodeINTERNAL_SAMPLING_STATUSVARCHARconfiguration/entityTypes//attributes/Certificates/attributes/InternalSamplingStatusPVS_ELIGIBILTYVARCHARconfiguration/entityTypes//attributes/Certificates/attributes/PVSEligibiltyEMPLOYMENTReltio URI: configuration/entityTypes//attributes/EmploymentMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMPLOYMENT_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTITLEVARCHARconfiguration/relationTypes/Employment/attributes/TitleSUMMARYVARCHARconfiguration/relationTypes/Employment/attributes/SummaryIS_CURRENTBOOLEANconfiguration/relationTypes/Employment/attributes/IsCurrentNAMEVARCHARNameconfiguration//attributes/NameCREDENTIALDO NOT USE THIS ATTRIBUTE - will be deprecatedReltio URI: configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameCREDENTIAL_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeRANKVARCHARconfiguration/entityTypes//attributes/Credential/attributes/RankCREDENTIALVARCHARconfiguration/entityTypes//attributes/Credential/attributes/CredentialCREDPROFESSIONReltio URI: configuration/entityTypes//attributes/ProfessionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePROFESSION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive Entity TypePROFESSION_CODEVARCHARProfessionconfiguration/entityTypes//attributes/Profession/attributes//entityTypes//attributes/Profession/attributes/RankEDUCATIONReltio URI: configuration/entityTypes//attributes/EducationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEDUCATION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeSCHOOL_NAMEVARCHARconfiguration/entityTypes//attributes/Education/attributes/SchoolNameLKUP_IMS_SCHOOL_CODETYPEVARCHARconfiguration/entityTypes//attributes/Education/attributes/TypeDEGREEVARCHARconfiguration/entityTypes//attributes/Education/attributes/DegreeYEAR_OF_GRADUATIONVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Education/attributes/YearOfGraduationGRADUATEDBOOLEANDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Education/attributes/GraduatedGPAVARCHARconfiguration/entityTypes//attributes/Education/attributes/GPAYEARS_IN_PROGRAMVARCHARYear in , in training in current programconfiguration/entityTypes//attributes/Education/attributes/YearsInProgramSTART_YEARVARCHARconfiguration/entityTypes//attributes/Education/attributes/StartYearEND_YEARVARCHARconfiguration/entityTypes//attributes/Education/attributes/EndYearFIELDOF_STUDYVARCHARSpecialty Focus or Specialty Trainingconfiguration/entityTypes//attributes/Education/attributes/ NOT 
USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Education/attributes/EligibilityEDUCATION_TYPEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Education/attributes/EducationTypeRANKVARCHARconfiguration/entityTypes//attributes/Education/attributes/RankMEDICAL_SCHOOLVARCHARconfiguration/entityTypes//attributes/Education/attributes/MedicalSchoolTAXONOMYReltio URI: configuration/entityTypes//attributes/Taxonomy, configuration/entityTypes//attributes/TaxonomyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTAXONOMY_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeTAXONOMYVARCHARconfiguration/entityTypes//attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes//attributes/Taxonomy/attributes/TaxonomyTAXONOMY_CD,LKUP_IMS_JURIDIC_CATEGORYTYPEVARCHARconfiguration/entityTypes//attributes/Taxonomy/attributes/Type, configuration/entityTypes//attributes/Taxonomy/attributes/TypeTAXONOMY_TYPEPROVIDER_TYPEVARCHARconfiguration/entityTypes//attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes//attributes/Taxonomy/attributes/ProviderTypeCLASSIFICATIONVARCHARconfiguration/entityTypes//attributes/Taxonomy/attributes/Classification, configuration/entityTypes//attributes/Taxonomy/attributes/ClassificationSPECIALIZATIONVARCHARconfiguration/entityTypes//attributes/Taxonomy/attributes/Specialization, configuration/entityTypes//attributes/Taxonomy/attributes/SpecializationPRIORITYVARCHARconfiguration/entityTypes//attributes/Taxonomy/attributes/Priority, configuration/entityTypes//attributes/Taxonomy/attributes/PriorityTAXONOMY_PRIORITYSTR_TYPEVARCHARconfiguration/entityTypes//attributes/Taxonomy/attributes/StrTypeLKUP_IMS_STRUCTURE_TYPEDP_PRESENCEReltio URI: configuration/entityTypes//attributes/DPPresence, configuration/entityTypes//attributes/DPPresenceMaterialized: NameDP_PRESENCE_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeCHANNEL_CODEVARCHARconfiguration/entityTypes//attributes/DPPresence/attributes/ChannelCode, configuration/entityTypes//attributes/DPPresence/attributes/ChannelCodeLKUP_IMS_DP_CHANNELCHANNEL_NAMEVARCHARconfiguration/entityTypes//attributes/DPPresence/attributes/ChannelName, configuration/entityTypes//attributes/DPPresence/attributes/ChannelNameCHANNEL_URLVARCHARconfiguration/entityTypes//attributes/DPPresence/attributes/ChannelURL, configuration/entityTypes//attributes/DPPresence/attributes/ChannelURLCHANNEL_REGISTRATION_DATEDATEconfiguration/entityTypes//attributes/DPPresence/attributes/ChannelRegistrationDate, configuration/entityTypes//attributes/DPPresence/attributes/ChannelRegistrationDatePRESENCE_TYPEVARCHARconfiguration/entityTypes//attributes/DPPresence/attributes/, configuration/entityTypes//attributes/DPPresence/attributes/PresenceTypeLKUP_IMS_DP_PRESENCE_TYPEACTIVITYVARCHARconfiguration/entityTypes//attributes/DPPresence/attributes/Activity, configuration/entityTypes//attributes/DPPresence/attributes/ActivityLKUP_IMS_DP_SCORE_CODEAUDIENCEVARCHARconfiguration/entityTypes//attributes/DPPresence/attributes/Audience, configuration/entityTypes//attributes/DPPresence/attributes/AudienceLKUP_IMS_DP_SCORE_CODEDP_SUMMARYReltio URI: configuration/entityTypes//attributes/, configuration/entityTypes//attributes/DPSummaryMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDP_SUMMARY_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeSUMMARY_TYPEVARCHARconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/SummaryTypeLKUP_IMS_DP_SUMMARY_TYPESCORE_CODEVARCHARconfiguration/entityTypes//attributes//attributes/ScoreCode, configuration/entityTypes//attributes//attributes/ScoreCodeLKUP_IMS_DP_SCORE_CODEADDITIONAL_ATTRIBUTESReltio URI: configuration/entityTypes//attributes/, configuration/entityTypes//attributes/AdditionalAttributesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDITIONAL_ATTRIBUTES_URIVARCHARgenerated key CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeATTRIBUTE_NAMEVARCHARconfiguration/entityTypes//attributes/AdditionalAttributes/attributes/AttributeName, configuration/entityTypes//attributes/AdditionalAttributes/attributes/AttributeNameATTRIBUTE_TYPEVARCHARconfiguration/entityTypes//attributes/AdditionalAttributes/attributes/, configuration/entityTypes//attributes/AdditionalAttributes/attributes/AttributeTypeLKUP_IMS_TYPE_CODEATTRIBUTE_VALUEVARCHARconfiguration/entityTypes//attributes/AdditionalAttributes/attributes/, configuration/entityTypes//attributes/AdditionalAttributes/attributes/AttributeValueATTRIBUTE_RANKVARCHARconfiguration/entityTypes//attributes/AdditionalAttributes/attributes/, configuration/entityTypes//attributes/AdditionalAttributes/attributes/AttributeRankADDITIONAL_INFOVARCHARconfiguration/entityTypes//attributes/AdditionalAttributes/attributes/, configuration/entityTypes//attributes/AdditionalAttributes/attributes/AdditionalInfoDATA_QUALITYData QualityReltio URI: configuration/entityTypes//attributes/, configuration/entityTypes//attributes/DataQualityMaterialized: NameDATA_QUALITY_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeSEVERITY_LEVELVARCHARconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/SeverityLevelLKUP_IMS_DQ_SEVERITYSOURCEVARCHARconfiguration/entityTypes//attributes//attributes/Source, configuration/entityTypes//attributes//attributes/SourceSCOREVARCHARconfiguration/entityTypes//attributes//attributes/Score, configuration/entityTypes//attributes//attributes/ScoreCLASSIFICATIONReltio URI: configuration/entityTypes//attributes/Classification, configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeCLASSIFICATION_TYPEVARCHARconfiguration/entityTypes//attributes/Classification/attributes/, configuration/entityTypes//attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_VALUEVARCHARconfiguration/entityTypes//attributes/Classification/attributes/, configuration/entityTypes//attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/entityTypes//attributes/Classification/attributes/ClassificationValueNumericQuantity, configuration/entityTypes//attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/entityTypes//attributes/Classification/attributes/Status, configuration/entityTypes//attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/entityTypes//attributes/Classification/attributes/EffectiveDate, configuration/entityTypes//attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/entityTypes//attributes/Classification/attributes/EndDate, 
configuration/entityTypes//attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/entityTypes//attributes/Classification/attributes/Notes, configuration/entityTypes//attributes/Classification/attributes/NotesTAGReltio URI: configuration/entityTypes//attributes/Tag, configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameTAG_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeTAG_TYPE_CODEVARCHARconfiguration/entityTypes//attributes/Tag/attributes/TagTypeCode, configuration/entityTypes//attributes/Tag/attributes/TagTypeCodeLKUP_IMS_TAG_TYPE_CODETAG_CODEVARCHARconfiguration/entityTypes//attributes/Tag/attributes/, configuration/entityTypes//attributes/Tag/attributes/TagCodeSTATUSVARCHARconfiguration/entityTypes//attributes/Tag/attributes/Status, configuration/entityTypes//attributes/Tag/attributes/StatusLKUP_IMS_TAG_STATUSEFFECTIVE_DATEDATEconfiguration/entityTypes//attributes/Tag/attributes/EffectiveDate, configuration/entityTypes//attributes/Tag/attributes/EffectiveDateEND_DATEDATEconfiguration/entityTypes//attributes/Tag/attributes/EndDate, configuration/entityTypes//attributes/Tag/attributes/EndDateNOTESVARCHARconfiguration/entityTypes//attributes/Tag/attributes/Notes, configuration/entityTypes//attributes/Tag/attributes/NotesEXCLUSIONSReltio URI: configuration/entityTypes//attributes/Exclusions, configuration/entityTypes//attributes/ExclusionsMaterialized: CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePRODUCT_IDVARCHARconfiguration/entityTypes//attributes/Exclusions/attributes/, configuration/entityTypes//attributes/Exclusions/attributes/ProductIdLKUP_IMS_PRODUCT_IDEXCLUSION_STATUS_CODEVARCHARconfiguration/entityTypes//attributes/Exclusions/attributes/ExclusionStatusCode, configuration/entityTypes//attributes/Exclusions/attributes/ExclusionStatusCodeLKUP_IMS_EXCL_STATUS_CODEEFFECTIVE_DATEDATEconfiguration/entityTypes//attributes/Exclusions/attributes/EffectiveDate, configuration/entityTypes//attributes/Exclusions/attributes/EffectiveDateEND_DATEDATEconfiguration/entityTypes//attributes/Exclusions/attributes/EndDate, configuration/entityTypes//attributes/Exclusions/attributes/EndDateNOTESVARCHARconfiguration/entityTypes//attributes/Exclusions/attributes/Notes, configuration/entityTypes//attributes/Exclusions/attributes/NotesEXCLUSION_RULE_IDVARCHARconfiguration/entityTypes//attributes/Exclusions/attributes/ExclusionRuleId, configuration/entityTypes//attributes/Exclusions/attributes/ExclusionRuleIdACTIONReltio URI: configuration/entityTypes//attributes/Action, configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameACTION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeACTION_CODEVARCHARconfiguration/entityTypes//attributes/Action/attributes/, configuration/entityTypes//attributes/Action/attributes/ActionCodeLKUP_IMS_ACTION_CODEACTION_NAMEVARCHARconfiguration/entityTypes//attributes/Action/attributes/ActionName, configuration/entityTypes//attributes/Action/attributes/ActionNameACTION_REQUESTED_DATEDATEconfiguration/entityTypes//attributes/Action/attributes/ActionRequestedDate, configuration/entityTypes//attributes/Action/attributes/ActionRequestedDateACTION_STATUSVARCHARconfiguration/entityTypes//attributes/Action/attributes/, 
configuration/entityTypes//attributes/Action/attributes/ActionStatusLKUP_IMS_ACTION_STATUSACTION_STATUS_DATEDATEconfiguration/entityTypes//attributes/Action/attributes/ActionStatusDate, configuration/entityTypes//attributes/Action/attributes/ActionStatusDateALTERNATE_NAMEReltio URI: configuration/entityTypes//attributes/AlternateName, configuration/entityTypes//attributes/AlternateNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameALTERNATE_NAME_URIVARCHARgenerated key CodeACTIVEVARCHARActive Entity TypeNAME_TYPE_CODEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/NameTypeCode, configuration/entityTypes//attributes/AlternateName/attributes/NameTypeCodeLKUP_IMS_NAME_TYPE_CODENAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/Name, configuration/entityTypes//attributes/AlternateName/attributes/NameFIRST_NAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/FirstName, configuration/entityTypes//attributes/AlternateName/attributes/FirstNameMIDDLE_NAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/MiddleName, configuration/entityTypes//attributes/AlternateName/attributes/MiddleNameLAST_NAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/LastName, configuration/entityTypes//attributes/AlternateName/attributes/LastNameSUFFIX_NAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/SuffixName, configuration/entityTypes//attributes/AlternateName/attributes/SuffixNameLANGUAGEReltio URI: configuration/entityTypes//attributes/Language, configuration/entityTypes//attributes/LanguageMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameLANGUAGE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeLANGUAGE_CODEVARCHARconfiguration/entityTypes//attributes/Language/attributes/, configuration/entityTypes//attributes/Language/attributes/LanguageCodePROFICIENCY_LEVELVARCHARconfiguration/entityTypes//attributes/Language/attributes/, configuration/entityTypes//attributes/Language/attributes/ProficiencyLevelSOURCE_DATAReltio URI: configuration/entityTypes//attributes/SourceData, configuration/entityTypes//attributes/SourceDataMaterialized: NameSOURCE_DATA_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeCLASS_OF_TRADE_CODEVARCHARconfiguration/entityTypes//attributes//attributes/ClassOfTradeCode, configuration/entityTypes//attributes//attributes/ClassOfTradeCodeRAW_CLASS_OF_TRADE_CODEVARCHARconfiguration/entityTypes//attributes//attributes/RawClassOfTradeCode, configuration/entityTypes//attributes//attributes/RawClassOfTradeCodeRAW_CLASS_OF_TRADE_DESCRIPTIONVARCHARconfiguration/entityTypes//attributes//attributes/RawClassOfTradeDescription, configuration/entityTypes//attributes//attributes/RawClassOfTradeDescriptionDATASET_IDENTIFIERVARCHARconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/DatasetIdentifierDATASET_PARTY_IDENTIFIERVARCHARconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/DatasetPartyIdentifierPARTY_STATUS_CODEVARCHARconfiguration/entityTypes//attributes//attributes/PartyStatusCode, configuration/entityTypes//attributes//attributes/PartyStatusCodeNOTESReltio URI: configuration/entityTypes//attributes/Notes, configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameNOTES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity 
URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNOTE_CODEVARCHARconfiguration/entityTypes//attributes/Notes/attributes/NoteCode, configuration/entityTypes//attributes/Notes/attributes/NoteCodeLKUP_IMS_NOTE_CODENOTE_TEXTVARCHARconfiguration/entityTypes//attributes/Notes/attributes/NoteText, configuration/entityTypes//attributes/Notes/attributes/NoteTextHCOHealth care providerReltio URI: configuration/entityTypes/HCOMaterialized: URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAMEVARCHARNameconfiguration/entityTypes//attributes//entityTypes//attributes/TypeCodeLKUP_IMS_HCO_CUST_TYPESUB_TYPE_CODEVARCHARCustomer /entityTypes//attributes/SubTypeCodeLKUP_IMS_HCO_SUBTYPEEXCLUDE_FROM_MATCHVARCHARconfiguration/entityTypes//attributes/ExcludeFromMatchOTHER_NAMESVARCHAROther Namesconfiguration/entityTypes//attributes/OtherNamesSOURCE_IDVARCHARSource IDconfiguration/entityTypes//attributes/SourceIDVALIDATION_STATUSVARCHARconfiguration/entityTypes//attributes/ValidationStatusLKUP_IMS_VAL_STATUSORIGIN_SOURCEVARCHAROriginating Sourceconfiguration/entityTypes//attributes/OriginSourceCOUNTRY_CODEVARCHARCountry Codeconfiguration/entityTypes//attributes/CountryLKUP_IMS_COUNTRY_CODEFISCALVARCHARconfiguration/entityTypes//attributes/FiscalSITEVARCHARconfiguration/entityTypes//attributes/SiteGROUP_PRACTICEBOOLEANconfiguration//attributes/GroupPracticeGEN_FIRSTVARCHARStringconfiguration/entityTypes//attributes/GenFirstLKUP_IMS_HCO_GENFIRSTSREP_ACCESSVARCHARStringconfiguration/entityTypes//attributes/SrepAccessLKUP_IMS_HCO_SREPACCESSACCEPT_MEDICAREBOOLEANconfiguration/entityTypes//attributes/AcceptMedicareACCEPT_MEDICAIDBOOLEANconfiguration/entityTypes//attributes/AcceptMedicaidPERCENT_MEDICAREVARCHARconfiguration/entityTypes//attributes/PercentMedicarePERCENT_MEDICAIDVARCHARconfiguration/entityTypes//attributes/PercentMedicaidPARENT_COMPANYVARCHARReplacement Parent Satelliteconfiguration/entityTypes//attributes/ParentCompanyHEALTH_SYSTEM_NAMEVARCHARconfiguration/entityTypes//attributes/HealthSystemNameVADODBOOLEANconfiguration/entityTypes//attributes/VADODGPO_MEMBERSHIPBOOLEANconfiguration/entityTypes//attributes/GPOMembershipACADEMICBOOLEANconfiguration/entityTypes//attributes/AcademicMKT_SEGMENT_CODEVARCHARconfiguration/entityTypes//attributes/MktSegmentCodeTOTAL_LICENSE_BEDSVARCHARconfiguration/entityTypes//attributes/TotalLicenseBedsTOTAL_CENSUS_BEDSVARCHARconfiguration/entityTypes//attributes/TotalCensusBedsNUM_PATIENTSVARCHARconfiguration/entityTypes//attributes/NumPatientsTOTAL_STAFFED_BEDSVARCHARconfiguration/entityTypes//attributes/TotalStaffedBedsTOTAL_SURGERIESVARCHARconfiguration/entityTypes//attributes/TotalSurgeriesTOTAL_PROCEDURESVARCHARconfiguration/entityTypes//attributes/TotalProceduresOR_SURGERIESVARCHARconfiguration/entityTypes//attributes/ORSurgeriesRESIDENT_PROGRAMBOOLEANconfiguration/entityTypes//attributes/ResidentProgramRESIDENT_COUNTVARCHARconfiguration/entityTypes//attributes/ResidentCountNUMS_OF_PROVIDERSVARCHARNum_of_providers displays the total number of distinct providers affiliated with a business. 
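A minimal sketch of how these views can be combined, assuming (purely as an illustration) that they are exposed under the section names used on this page in a SQL-accessible store; the view names, the ACTIVE flag literal and the selected columns below are assumptions, not deployed names. Every nested-attribute view listed above repeats ENTITY_URI, COUNTRY, ACTIVE and ENTITY_TYPE next to its own columns, so ENTITY_URI is the join key back to the main entity view.

# Illustrative sketch only: view names (HCO, TAXONOMY) follow the section
# names on this page; real schema/database qualifiers and the ACTIVE flag
# representation depend on the deployment.
ENTITY_VIEW = "HCO"        # main entity view: ENTITY_URI, NAME, COUNTRY, ACTIVE, ...
NESTED_VIEW = "TAXONOMY"   # nested-attribute view keyed by ENTITY_URI

def entity_with_taxonomy(country_code: str) -> str:
    """Build a query joining the entity view to one nested-attribute view."""
    # Parameter binding is omitted for brevity; use placeholders in real code.
    return f"""
        SELECT e.ENTITY_URI,
               e.NAME,
               t.TAXONOMY,
               t.PRIORITY
        FROM {ENTITY_VIEW} e
        JOIN {NESTED_VIEW} t
          ON t.ENTITY_URI = e.ENTITY_URI
        WHERE e.COUNTRY = '{country_code}'
          AND e.ACTIVE = 'true'
          AND t.ACTIVE = 'true'
    """

if __name__ == "__main__":
    print(entity_with_taxonomy("US"))

The same ENTITY_URI join applies to any of the nested views above (CLASSIFICATION, TAG, DP_PRESENCE, and so on); only the selected columns change.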
Current Data: Value /entityTypes//attributes/NumsOfProvidersCORP_PARENT_NAMEVARCHARCorporate Parent Nameconfiguration/entityTypes//attributes//entityTypes//attributes//entityTypes//attributes//entityTypes//attributes/OwnerSubNameFORMULARYVARCHARconfiguration/entityTypes//attributes/FormularyLKUP_IMS_HCO_FORMULARYE_MEDICAL_RECORDVARCHARconfiguration/entityTypes//attributes/EMedicalRecordLKUP_IMS_HCO_ERECE_PRESCRIBEVARCHARconfiguration/entityTypes//attributes/EPrescribeLKUP_IMS_HCO_ERECPAY_PERFORMVARCHARconfiguration/entityTypes//attributes/PayPerformLKUP_IMS_HCO_PAYPERFORMCMS_COVERED_FOR_TEACHINGBOOLEANconfiguration/entityTypes//attributes/CMSCoveredForTeachingCOMM_HOSPBOOLEANIndicates whether the facility is a short-term (average length of stay is ) acute care, or non-federal hospital. Values: Yes and Nullconfiguration/entityTypes//attributes/CommHospEMAIL_DOMAINVARCHARconfiguration/entityTypes//attributes/EmailDomainSTATUS_IMSVARCHARconfiguration/entityTypes//attributes/StatusIMSLKUP_IMS_STATUSDOING_BUSINESS_AS_NAMEVARCHARconfiguration/entityTypes//attributes/DoingBusinessAsNameCOMPANY_TYPEVARCHARconfiguration//attributes/CompanyTypeLKUP_IMS_ORG_TYPECUSIPVARCHARconfiguration//attributes/CUSIPSECTOR_IMSVARCHARSectorconfiguration/entityTypes//attributes/SectorIMSLKUP_IMS_HCO_SECTORIMSINDUSTRYVARCHARconfiguration/entityTypes//attributes/IndustryFOUNDED_YEARVARCHARconfiguration/entityTypes//attributes/FoundedYearEND_YEARVARCHARconfiguration/entityTypes//attributes/EndYearIPO_YEARVARCHARconfiguration/entityTypes//attributes/IPOYearLEGAL_DOMICILEVARCHARState of Legal Domicileconfiguration/entityTypes//attributes/LegalDomicileOWNERSHIP_STATUSVARCHARconfiguration/entityTypes//attributes/OwnershipStatusLKUP_IMS_HCO_OWNERSHIPSTATUSPROFIT_STATUSVARCHARThe profit status of the facility. Values include: For Profit, Not For Profit, Government, Armed Forces, or NULL (If data is unknown or Not Applicable).configuration/entityTypes//attributes/ProfitStatusLKUP_IMS_HCO_PROFITSTATUSCMIVARCHARCMI is the Case Mix Index for an organization. This is a government-assigned measure of the complexity of medical and surgical care provided to inpatients by a hospital under the prospective payment system (). It factors in a hospital's use of technology for patient care and medical services'
level of acuity required by the patient nfiguration/entityTypes/HCO/attributes/CMISOURCE_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/SourceNameSUB_SOURCE_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/SubSourceNameDEA_BUSINESS_ACTIVITYVARCHARconfiguration/entityTypes/HCO/attributes/DEABusinessActivityIMAGE_LINKSVARCHARconfiguration/entityTypes/HCO/attributes/ImageLinksVIDEO_LINKSVARCHARconfiguration/entityTypes/HCO/attributes/VideoLinksDOCUMENT_LINKSVARCHARconfiguration/entityTypes/HCO/attributes/DocumentLinksWEBSITE_URLVARCHARconfiguration/entityTypes/HCO/attributes/WebsiteURLTAX_IDVARCHARconfiguration/entityTypes/HCO/attributes/TaxIDDESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/DescriptionSTATUS_UPDATE_DATEDATEconfiguration/entityTypes/HCO/attributes/StatusUpdateDateSTATUS_REASON_CODEVARCHARconfiguration/entityTypes/HCO/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODECOMMENTERSVARCHARCommentersconfiguration/entityTypes/HCO/attributes/CommentersCLIENT_TYPE_CODEVARCHARClient /entityTypes//attributes//entityTypes//attributes/OfficialNameVALIDATION_CHANGE_REASONVARCHARconfiguration/entityTypes//attributes/ValidationChangeReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEconfiguration//attributes/ValidationChangeDateCREATE_DATEDATEconfiguration//attributes/CreateDateUPDATE_DATEDATEconfiguration/entityTypes//attributes/UpdateDateCHECK_DATEDATEconfiguration//attributes/CheckDateSTATE_CODEVARCHARSituation of the workplace: /entityTypes//attributes/StateCodeLKUP_IMS_PROFILE_STATESTATE_DATEDATEDate when state of the record was last the status of the Organization changedconfiguration/entityTypes//attributes/StatusChangeReasonNUM_EMPLOYEESVARCHARconfiguration/entityTypes//attributes/NumEmployeesNUM_MED_EMPLOYEESVARCHARconfiguration/entityTypes//attributes/NumMedEmployeesTOTAL_BEDS_INTENSIVE_CAREVARCHARconfiguration/entityTypes//attributes/TotalBedsIntensiveCareNUM_EXAMINATION_ROOMVARCHARconfiguration/entityTypes//attributes/NumExaminationRoomNUM_AFFILIATED_SITESVARCHARconfiguration/entityTypes//attributes/NumAffiliatedSitesNUM_ENROLLED_MEMBERSVARCHARconfiguration//attributes/NumEnrolledMembersNUM_IN_PATIENTSVARCHARconfiguration/entityTypes//attributes/NumInPatientsNUM_OUT_PATIENTSVARCHARconfiguration/entityTypes//attributes/NumOutPatientsNUM_OPERATING_ROOMSVARCHARconfiguration/entityTypes//attributes/NumOperatingRoomsNUM_PATIENTS_X_WEEKVARCHARconfiguration/entityTypes//attributes/NumPatientsXWeekACT_TYPE_CODEVARCHARconfiguration/entityTypes//attributes/ActTypeCodeLKUP_IMS_ACTIVITY_TYPEDISPENSE_DRUGSBOOLEANconfiguration/entityTypes//attributes/DispenseDrugsNUM_PRESCRIBERSVARCHARconfiguration/entityTypes//attributes/NumPrescribersPATIENTS_X_YEARVARCHARconfiguration/entityTypes//attributes/PatientsXYearACCEPTS_NEW_PATIENTSVARCHARY/N field indicating whether the workplace accepts new patientsconfiguration/entityTypes//attributes/AcceptsNewPatientsEXTERNAL_INFORMATION_URLVARCHARconfiguration//attributes/ExternalInformationURLMATCH_STATUS_CODEVARCHARconfiguration/entityTypes//attributes/MatchStatusCodeLKUP_IMS_MATCH_STATUS_CODESUBSCRIPTION_FLAG1BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag1SUBSCRIPTION_FLAG2BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag2SUBSCRIPTION_FLAG3BOOLEANUsed for setting a profile eligible for certain 
subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag3SUBSCRIPTION_FLAG4BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag4SUBSCRIPTION_FLAG5BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag5SUBSCRIPTION_FLAG6BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag6SUBSCRIPTION_FLAG7BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag7SUBSCRIPTION_FLAG8BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag8SUBSCRIPTION_FLAG9BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag9SUBSCRIPTION_FLAG10BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes//attributes/SubscriptionFlag10ROLE_CODEVARCHARconfiguration/entityTypes//attributes/RoleCodeLKUP_IMS_ORG_ROLE_CODEACTIVATION_DATEVARCHARconfiguration/entityTypes//attributes/ActivationDatePARTY_IDVARCHARconfiguration//attributes/PartyIDLAST_VERIFICATION_STATUSVARCHARconfiguration/entityTypes//attributes/LastVerificationStatusLAST_VERIFICATION_DATEDATEconfiguration/entityTypes//attributes/LastVerificationDateEFFECTIVE_DATEDATEconfiguration/entityTypes//attributes/EffectiveDateEND_DATEDATEconfiguration/entityTypes//attributes/EndDatePARTY_LOCALIZATION_CODEVARCHARconfiguration/entityTypes//attributes/PartyLocalizationCodeMATCH_PARTY_NAMEVARCHARconfiguration/entityTypes//attributes/MatchPartyNameDELETE_ENTITYBOOLEANDeleteEntity flag to identify compliant dataconfiguration/entityTypes//attributes/DeleteEntityOK_VR_TRIGGERVARCHARconfiguration/entityTypes//attributes/OK_VR_TriggerLKUP_IMS_SEND_FOR_VALIDATIONHCO_MAIN_HCO_CLASSOF_TRADE_NReltio URI: configuration/entityTypes//attributes/ClassofTradeNMaterialized: NameMAINHCO_URIVARCHARgenerated key descriptionCLASSOFTRADEN_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypePRIORITYVARCHARNumeric code for the primary class of tradeconfiguration//attributes/ClassofTradeN/attributes/PriorityCLASSIFICATIONVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/ClassificationLKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATIONFACILITY_TYPEVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/FacilityTypeLKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPESPECIALTYVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/SpecialtyLKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTYHCO_ADDRESS_UNITReltio URI: configuration/entityTypes/Location/attributes/UnitMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionUNIT_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeUNIT_NAMEVARCHARconfiguration/entityTypes/Location/attributes/Unit/attributes/UnitNameUNIT_VALUEVARCHARconfiguration/entityTypes/Location/attributes/Unit/attributes/UnitValueHCO_ADDRESS_BRICKReltio URI: configuration/entityTypes/Location/attributes/BrickMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionBRICK_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeTYPEVARCHARconfiguration/entityTypes/Location/attributes/Brick/attributes/TypeLKUP_IMS_BRICK_TYPEBRICK_VALUEVARCHARconfiguration/entityTypes/Location/attributes/Brick/attributes/BrickValueLKUP_IMS_BRICK_VALUESORT_ORDERVARCHARconfiguration/entityTypes/Location/attributes/Brick/attributes/SortOrderKEY_FINANCIAL_FIGURES_OVERVIEWReltio URI: configuration/entityTypes//attributes/KeyFinancialFiguresOverviewMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameKEY_FINANCIAL_FIGURES_OVERVIEW_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeFINANCIAL_STATEMENT_TO_DATEDATEconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDateFINANCIAL_PERIOD_DURATIONVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDurationSALES_REVENUE_CURRENCYVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencySALES_REVENUE_CURRENCY_CODEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCodeSALES_REVENUE_RELIABILITY_CODEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCodeSALES_REVENUE_UNIT_OF_SIZEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSizeSALES_REVENUE_AMOUNTVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmountPROFIT_OR_LOSS_CURRENCYVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrencyPROFIT_OR_LOSS_RELIABILITY_TEXTVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityTextPROFIT_OR_LOSS_UNIT_OF_SIZEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSizePROFIT_OR_LOSS_AMOUNTVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmountSALES_TURNOVER_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRateSALES3YRY_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRateSALES5YRY_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRateEMPLOYEE3YRY_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRateEMPLOYEE5YRY_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRateCLASSOF_TRADE_NReltio URI: configuration/entityTypes//attributes/ClassofTradeNMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSOF_TRADE_N_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypePRIORITYVARCHARNumeric code for the primary class of tradeconfiguration//attributes/ClassofTradeN/attributes/PriorityCLASSIFICATIONVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/ClassificationLKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATIONFACILITY_TYPEVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/FacilityTypeLKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPESPECIALTYVARCHARconfiguration/entityTypes//attributes/ClassofTradeN/attributes/SpecialtyLKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTYSPECIALTYDO NOT USE THIS ATTRIBUTE - will be 
deprecatedReltio URI: configuration/entityTypes//attributes/SpecialtyMaterialized: NameSPECIALTY_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeSPECIALTYVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration//attributes/Specialty/attributes/SpecialtyTYPEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration//attributes/Specialty/attributes/TypeGSA_EXCLUSIONReltio URI: configuration/entityTypes//attributes/GSAExclusionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameGSA_EXCLUSION_URIVARCHARgenerated key CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARconfiguration//attributes//attributes/SanctionIdORGANIZATION_NAMEVARCHARconfiguration/entityTypes//attributes//attributes/OrganizationNameADDRESS_LINE1VARCHARconfiguration/entityTypes//attributes//attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes//attributes//attributes/AddressLine2CITYVARCHARconfiguration/entityTypes//attributes//attributes/CitySTATEVARCHARconfiguration/entityTypes//attributes//attributes/StateZIPVARCHARconfiguration/entityTypes//attributes//attributes/ZipACTION_DATEVARCHARconfiguration/entityTypes//attributes//attributes/ActionDateTERM_DATEVARCHARconfiguration/entityTypes//attributes//attributes/TermDateAGENCYVARCHARconfiguration/entityTypes//attributes//attributes/AgencyCONFIDENCEVARCHARconfiguration/entityTypes//attributes//attributes/ConfidenceOIG_EXCLUSIONReltio URI: configuration/entityTypes//attributes/OIGExclusionMaterialized: NameOIG_EXCLUSION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARconfiguration//attributes/OIGExclusion/attributes/SanctionIdACTION_CODEVARCHARconfiguration/entityTypes//attributes/OIGExclusion/attributes/ActionCodeACTION_DESCRIPTIONVARCHARconfiguration//attributes/OIGExclusion/attributes/ActionDescriptionBOARD_CODEVARCHARCourt case board idconfiguration/entityTypes//attributes/OIGExclusion/attributes/BoardCodeBOARD_DESCVARCHARcourt case board descriptionconfiguration//attributes/OIGExclusion/attributes/BoardDescACTION_DATEDATEconfiguration/entityTypes//attributes/OIGExclusion/attributes/ActionDateOFFENSE_CODEVARCHARconfiguration//attributes/OIGExclusion/attributes/OffenseCodeOFFENSE_DESCRIPTIONVARCHARconfiguration/entityTypes//attributes/OIGExclusion/attributes/OffenseDescriptionBRICKReltio URI: configuration/entityTypes//attributes/BrickMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBRICK_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeTYPEVARCHARconfiguration/entityTypes//attributes/Brick/attributes/TypeLKUP_IMS_BRICK_TYPEBRICK_VALUEVARCHARconfiguration/entityTypes//attributes/Brick/attributes/BrickValueLKUP_IMS_BRICK_VALUEEMRReltio URI: configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameEMR_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive Entity TypeNOTESBOOLEANY/N field indicating whether workplace uses software to write notesconfiguration/entityTypes//attributes//attributes/NotesPRESCRIBESBOOLEANY/N field indicating whether the workplace uses software to write a prescriptionsconfiguration/entityTypes//attributes//attributes/PrescribesLKUP_IMS_EMR_PRESCRIBESELABS_X_RAYSBOOLEANY/N indicating whether the workplace uses software for /entityTypes//attributes//attributes/ElabsXRaysLKUP_IMS_EMR_ELABS_XRAYSNUMBER_OF_PHYSICIANSVARCHARNumber of physicians 
that use EMR software in the workplaceconfiguration/entityTypes//attributes//attributes/NumberOfPhysiciansPOLICYMAKERVARCHARIndividual who makes decisions regarding softwareconfiguration/entityTypes//attributes//attributes/PolicymakerSOFTWARE_TYPEVARCHARName of the software used at the workplaceconfiguration/entityTypes//attributes//attributes/SoftwareTypeADOPTIONVARCHARWhen the software was adopted at the workplaceconfiguration/entityTypes//attributes//attributes/AdoptionBUYING_FACTORVARCHARBuying factor which influenced the workplace's decision to purchase the /entityTypes//attributes//attributes/BuyingFactorOWNERVARCHARIndividual who made the decision to purchase softwareconfiguration/entityTypes//attributes//attributes/OwnerAWAREBOOLEANconfiguration/entityTypes//attributes//attributes/AwareLKUP_IMS_EMR_AWARESOFTWAREBOOLEANconfiguration//attributes//attributes/SoftwareLKUP_IMS_EMR_SOFTWAREVENDORVARCHARconfiguration/entityTypes//attributes//attributes/VendorLKUP_IMS_EMR_VENDORBUSINESS_HOURSReltio URI: configuration/entityTypes//attributes/BusinessHoursMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESS_HOURS_URIVARCHARgenerated key CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDAYVARCHARconfiguration/entityTypes//attributes//attributes/DayPERIODVARCHARconfiguration/entityTypes//attributes//attributes/PeriodTIME_SLOTVARCHARconfiguration/entityTypes//attributes//attributes/TimeSlotSTART_TIMEVARCHARconfiguration/entityTypes//attributes//attributes/StartTimeEND_TIMEVARCHARconfiguration/entityTypes//attributes//attributes/EndTimeAPPOINTMENT_ONLYBOOLEANconfiguration/entityTypes//attributes//attributes/AppointmentOnlyPERIOD_STARTVARCHARconfiguration//attributes//attributes/PeriodStartPERIOD_ENDVARCHARconfiguration/entityTypes//attributes//attributes/PeriodEndACO_DETAILSACO DetailsReltio URI: configuration/entityTypes//attributes/ACODetailsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACO_DETAILS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeACO_TYPE_CODEVARCHARAcoTypeCodeconfiguration/entityTypes//attributes/ACODetails/attributes/AcoTypeCodeLKUP_IMS_ACO_TYPEACO_TYPE_CATGVARCHARAcoTypeCatgconfiguration/entityTypes//attributes/ACODetails/attributes/AcoTypeCatgACO_TYPE_MDELVARCHARAcoTypeMdelconfiguration/entityTypes//attributes/ACODetails/attributes/AcoTypeMdelACO_DETAIL_IDVARCHARAcoDetailIdconfiguration//attributes/ACODetails/attributes/AcoDetailIdACO_DETAIL_CODEVARCHARAcoDetailCodeconfiguration/entityTypes//attributes/ACODetails/attributes/AcoDetailCodeLKUP_IMS_ACO_DETAILACO_DETAIL_GROUP_CODEVARCHARAcoDetailGroupCodeconfiguration/entityTypes//attributes/ACODetails/attributes/AcoDetailGroupCodeLKUP_IMS_ACO_DETAIL_GROUPACO_VALVARCHARAcoValconfiguration//attributes/ACODetails/attributes/AcoValTRADE_STYLE_NAMEReltio URI: configuration/entityTypes//attributes/TradeStyleNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTRADE_STYLE_NAME_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive 
TypeORGANIZATION_NAMEVARCHARconfiguration/entityTypes//attributes/TradeStyleName/attributes/OrganizationNameLANGUAGE_CODEVARCHARconfiguration/entityTypes//attributes/TradeStyleName/attributes/LanguageCodeFORMER_ORGANIZATION_PRIMARY_NAMEVARCHARconfiguration//attributes/TradeStyleName/attributes/FormerOrganizationPrimaryNameDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes//attributes/TradeStyleName/attributes/DisplaySequenceTYPEVARCHARconfiguration/entityTypes//attributes/TradeStyleName/attributes/TypePRIOR_DUNS_NUMBERReltio URI: configuration/entityTypes//attributes/PriorDUNSNUmberMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIOR_DUNSN_UMBER_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeTRANSFER_DUNS_NUMBERVARCHARconfiguration//attributes/PriorDUNSNUmber/attributes/TransferDUNSNumberTRANSFER_REASON_TEXTVARCHARconfiguration/entityTypes//attributes/PriorDUNSNUmber/attributes/TransferReasonTextTRANSFER_REASON_CODEVARCHARconfiguration/entityTypes//attributes/PriorDUNSNUmber/attributes/TransferReasonCodeTRANSFER_DATEVARCHARconfiguration/entityTypes//attributes/PriorDUNSNUmber/attributes/TransferDateTRANSFERRED_FROM_DUNS_NUMBERVARCHARconfiguration//attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumberTRANSFERRED_TO_DUNS_NUMBERVARCHARconfiguration/entityTypes//attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumberINDUSTRY_CODEReltio URI: configuration/entityTypes//attributes/IndustryCodeMaterialized: NameINDUSTRY_CODE_URIVARCHARgenerated key CodeACTIVEVARCHARActive /entityTypes//attributes/IndustryCode/attributes/DNBCodeINDUSTRY_CODEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/IndustryCodeINDUSTRY_CODE_DESCRIPTIONVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/IndustryCodeDescriptionINDUSTRY_CODE_LANGUAGE_CODEVARCHARconfiguration//attributes/IndustryCode/attributes/IndustryCodeLanguageCodeINDUSTRY_CODE_WRITING_SCRIPTVARCHARconfiguration//attributes/IndustryCode/attributes/IndustryCodeWritingScriptDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/DisplaySequenceSALES_PERCENTAGEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/SalesPercentageTYPEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/TypeINDUSTRY_TYPE_CODEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/IndustryTypeCodeIMPORT_EXPORT_AGENTVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/ImportExportAgentACTIVITIES_AND_OPERATIONSReltio URI: configuration/entityTypes//attributes/ActivitiesAndOperationsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACTIVITIES_AND_OPERATIONS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive /entityTypes//attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescriptionLANGUAGE_CODEVARCHARconfiguration//attributes/ActivitiesAndOperations/attributes/LanguageCodeWRITING_SCRIPT_CODEVARCHARconfiguration//attributes/ActivitiesAndOperations/attributes/WritingScriptCodeIMPORT_INDICATORBOOLEANconfiguration/entityTypes//attributes/ActivitiesAndOperations/attributes/ImportIndicatorEXPORT_INDICATORBOOLEANconfiguration/entityTypes//attributes/ActivitiesAndOperations/attributes/ExportIndicatorAGENT_INDICATORBOOLEANconfiguration/entityTypes//attributes/ActivitiesAndOperations/attributes/AgentIndicatorEMPLOYEE_DETAILSReltio URI: configuration/entityTypes//attributes/EmployeeDetailsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NameEMPLOYEE_DETAILS_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeINDIVIDUAL_EMPLOYEE_FIGURES_DATEVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDateINDIVIDUAL_TOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantityINDIVIDUAL_RELIABILITY_TEXTVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/IndividualReliabilityTextTOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/TotalEmployeeQuantityTOTAL_EMPLOYEE_RELIABILITYVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/TotalEmployeeReliabilityPRINCIPALS_INCLUDEDVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/PrincipalsIncludedMATCH_QUALITYReltio URI: configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameMATCH_QUALITY_URIVARCHARgenerated key CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCONFIDENCE_CODEVARCHARDnB Match Quality Confidence Codeconfiguration/entityTypes//attributes/MatchQuality/attributes//entityTypes//attributes/MatchQuality/attributes/DisplaySequenceMATCH_CODEVARCHARconfiguration/entityTypes//attributes/MatchQuality/attributes/MatchCodeBEMFABVARCHARconfiguration/entityTypes//attributes/MatchQuality/attributes/BEMFABMATCH_GRADEVARCHARconfiguration/entityTypes//attributes/MatchQuality/attributes/: configuration/entityTypes//attributes/OrganizationDetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameORGANIZATION_DETAIL_URIVARCHARgenerated key CodeACTIVEVARCHARActive Entity TypeMEMBER_ROLEVARCHARconfiguration//attributes//attributes/MemberRoleSTANDALONEBOOLEANconfiguration//attributes//attributes/StandaloneCONTROL_OWNERSHIP_DATEDATEconfiguration/entityTypes//attributes//attributes/ControlOwnershipDateOPERATING_STATUSVARCHARconfiguration/entityTypes//attributes//attributes/OperatingStatusSTART_YEARVARCHARconfiguration/entityTypes//attributes//attributes/StartYearFRANCHISE_OPERATION_TYPEVARCHARconfiguration/entityTypes//attributes//attributes/FranchiseOperationTypeBONEYARD_ORGANIZATIONBOOLEANconfiguration/entityTypes//attributes//attributes/BoneyardOrganizationOPERATING_STATUS_COMMENTVARCHARconfiguration/entityTypes//attributes//attributes/OperatingStatusCommentDUNS_HIERARCHYReltio URI: configuration/entityTypes//attributes/DUNSHierarchyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDUNS_HIERARCHY_URIVARCHARgenerated key CodeACTIVEVARCHARActive TypeGLOBAL_ULTIMATE_DUNSVARCHARconfiguration/entityTypes//attributes//attributes/GlobalUltimateDUNSGLOBAL_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes//attributes//attributes/GlobalUltimateOrganizationDOMESTIC_ULTIMATE_DUNSVARCHARconfiguration/entityTypes//attributes//attributes/DomesticUltimateDUNSDOMESTIC_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes//attributes//attributes/DomesticUltimateOrganizationPARENT_DUNSVARCHARconfiguration/entityTypes//attributes//attributes/ParentDUNSPARENT_ORGANIZATIONVARCHARconfiguration/entityTypes//attributes//attributes/ParentOrganizationHEADQUARTERS_DUNSVARCHARconfiguration/entityTypes//attributes//attributes/HeadquartersDUNSHEADQUARTERS_ORGANIZATIONVARCHARconfiguration/entityTypes//attributes//attributes/HeadquartersOrganizationAFFILIATIONSReltio URI: configuration/relationTypes/HasHealthCareRole, configuration/relationTypes/AffiliatedPurchasing, configuration/relationTypes/Activity, 
configuration/relationTypes/ManagedMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_URIVARCHARReltio Relation URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagRELATION_TYPEVARCHARReltio Relation TypeSTART_ENTITY_URIVARCHARReltio Start Entity URIEND_ENTITY_URIVARCHARReltio End Entity URIREL_GROUPVARCHARHCRS relation group from the relationship type, each rel group refers to one relation idconfiguration/relationTypes/AffiliatedPurchasing/attributes/, configuration/relationTypes/Managed/attributes/RelGroupLKUP_IMS_RELGROUP_TYPEREL_ORDER_AFFILIATEDPURCHASINGVARCHAROrderconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelOrderSTATUS_REASON_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/StatusReasonCode, configuration/relationTypes/Activity/attributes/StatusReasonCode, configuration/relationTypes/Managed/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODESTATUS_UPDATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/StatusUpdateDate, configuration/relationTypes/Activity/attributes/StatusUpdateDate, configuration/relationTypes/Managed/attributes/StatusUpdateDateVALIDATION_CHANGE_REASONVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeReason, configuration/relationTypes/Activity/attributes/ValidationChangeReason, configuration/relationTypes/Managed/attributes/ValidationChangeReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeDate, configuration/relationTypes/Activity/attributes/ValidationChangeDate, configuration/relationTypes/Managed/attributes/ValidationChangeDateVALIDATION_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/, configuration/relationTypes/Activity/attributes/, configuration/relationTypes/Managed/attributes/ValidationStatusLKUP_IMS_VAL_STATUSAFFILIATION_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/, configuration/relationTypes/Activity/attributes/, configuration/relationTypes/Managed/attributes/AffiliationStatusLKUP_IMS_STATUSCOUNTRYVARCHARCountry Codeconfiguration/relationTypes/AffiliatedPurchasing/attributes/Country, configuration/relationTypes/Activity/attributes/Country, configuration/relationTypes/Managed/attributes//relationTypes/AffiliatedPurchasing/attributes/, configuration/relationTypes/Activity/attributes/AffiliationNameSUBSCRIPTION_FLAG1BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag1, configuration/relationTypes/Activity/attributes/SubscriptionFlag1, configuration/relationTypes/Managed/attributes/SubscriptionFlag1SUBSCRIPTION_FLAG2BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag2, configuration/relationTypes/Activity/attributes/SubscriptionFlag2, configuration/relationTypes/Managed/attributes/SubscriptionFlag2SUBSCRIPTION_FLAG3BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag3, configuration/relationTypes/Activity/attributes/SubscriptionFlag3, configuration/relationTypes/Managed/attributes/SubscriptionFlag3SUBSCRIPTION_FLAG4BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag4, 
configuration/relationTypes/Activity/attributes/SubscriptionFlag4, configuration/relationTypes/Managed/attributes/SubscriptionFlag4SUBSCRIPTION_FLAG5BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag5, configuration/relationTypes/Activity/attributes/SubscriptionFlag5, configuration/relationTypes/Managed/attributes/SubscriptionFlag5SUBSCRIPTION_FLAG6BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag6, configuration/relationTypes/Activity/attributes/SubscriptionFlag6, configuration/relationTypes/Managed/attributes/SubscriptionFlag6SUBSCRIPTION_FLAG7BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag7, configuration/relationTypes/Activity/attributes/SubscriptionFlag7, configuration/relationTypes/Managed/attributes/SubscriptionFlag7SUBSCRIPTION_FLAG8BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag8, configuration/relationTypes/Activity/attributes/SubscriptionFlag8, configuration/relationTypes/Managed/attributes/SubscriptionFlag8SUBSCRIPTION_FLAG9BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag9, configuration/relationTypes/Activity/attributes/SubscriptionFlag9, configuration/relationTypes/Managed/attributes/SubscriptionFlag9SUBSCRIPTION_FLAG10BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag10, configuration/relationTypes/Activity/attributes/SubscriptionFlag10, configuration/relationTypes/Managed/attributes/SubscriptionFlag10BEST_RELATIONSHIP_INDICATORVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/BestRelationshipIndicator, configuration/relationTypes/Activity/attributes/BestRelationshipIndicator, configuration/relationTypes/Managed/attributes/BestRelationshipIndicatorLKUP_IMS_YES_NORELATIONSHIP_RANKVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/, configuration/relationTypes/Activity/attributes/, configuration/relationTypes/Managed/attributes/RelationshipRankRELATIONSHIP_VIEW_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewCode, configuration/relationTypes/Activity/attributes/RelationshipViewCode, configuration/relationTypes/Managed/attributes/RelationshipViewCodeRELATIONSHIP_VIEW_TYPE_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewTypeCode, configuration/relationTypes/Activity/attributes/RelationshipViewTypeCode, configuration/relationTypes/Managed/attributes/RelationshipViewTypeCodeRELATIONSHIP_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/, configuration/relationTypes/Activity/attributes/, configuration/relationTypes/Managed/attributes/RelationshipStatusLKUP_IMS_RELATIONSHIP_STATUSRELATIONSHIP_CREATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipCreateDate, configuration/relationTypes/Activity/attributes/RelationshipCreateDate, configuration/relationTypes/Managed/attributes/RelationshipCreateDateUPDATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/UpdateDate, configuration/relationTypes/Activity/attributes/UpdateDate, 
configuration/relationTypes/Managed/attributes/UpdateDateRELATIONSHIP_START_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStartDate, configuration/relationTypes/Activity/attributes/RelationshipStartDate, configuration/relationTypes/Managed/attributes/RelationshipStartDateRELATIONSHIP_END_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipEndDate, configuration/relationTypes/Activity/attributes/RelationshipEndDate, configuration/relationTypes/Managed/attributes/RelationshipEndDateCHECKED_DATEDATEconfiguration/relationTypes/Activity/attributes/CheckedDatePREFERRED_MAIL_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PreferredMailIndicatorPREFERRED_VISIT_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PreferredVisitIndicatorCOMMITTEE_MEMBERVARCHARconfiguration/relationTypes/Activity/attributes/CommitteeMemberLKUP_IMS_MEMBER_MED_COMMITTEEAPPOINTMENT_REQUIREDBOOLEANconfiguration/relationTypes/Activity/attributes/AppointmentRequiredAFFILIATION_TYPE_CODEVARCHARAffiliation Type Codeconfiguration/relationTypes/Activity/attributes/AffiliationTypeCodeWORKING_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/WorkingStatusLKUP_IMS_WORKING_STATUSTITLEVARCHARconfiguration/relationTypes/Activity/attributes/TitleLKUP_IMS_PROF_TITLERANKVARCHARconfiguration/relationTypes/Activity/attributes/RankPRIMARY_AFFILIATION_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PrimaryAffiliationIndicatorACT_WEBSITE_URLVARCHARconfiguration/relationTypes/Activity/attributes/ActWebsiteURLACT_VALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/ActValidationStatusLKUP_IMS_VAL_STATUSPREF_OR_ACTIVEVARCHARconfiguration/relationTypes/Activity/attributes/PrefOrActiveCOMMENTERSVARCHARCommentersconfiguration/relationTypes/Activity/attributes/CommentersREL_ORDER_MANAGEDBOOLEANOrderconfiguration/relationTypes/Managed/attributes/RelOrderPURCHASING_CLASSIFICATIONReltio URI: configuration/relationTypes/AffiliatedPurchasing/attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/NotesPURCHASING_SOURCE_DATAReltio URI: configuration/relationTypes/AffiliatedPurchasing/attributes/SourceDataMaterialized: URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes//attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes//attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes//attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes//attributes/RankACTIVITY_PHONEReltio URI: configuration/relationTypes/Activity/attributes/ActPhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACT_PHONE_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URITYPE_IMSVARCHARconfiguration/relationTypes/Activity/attributes//attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPENUMBERVARCHARconfiguration/relationTypes/Activity/attributes//attributes/NumberEXTENSIONVARCHARconfiguration/relationTypes/Activity/attributes//attributes/ExtensionRANKVARCHARconfiguration/relationTypes/Activity/attributes//attributes/RankCOUNTRY_CODEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/CountryCodeLKUP_IMS_COUNTRY_CODEAREA_CODEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/AreaCodeLOCAL_NUMBERVARCHARconfiguration/relationTypes/Activity/attributes//attributes/LocalNumberFORMATTED_NUMBERVARCHARFormatted number of the phoneconfiguration/relationTypes/Activity/attributes//attributes/FormattedNumberVALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes//attributes/ValidationStatusLINE_TYPEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/LineTypeFORMAT_MASKVARCHARconfiguration/relationTypes/Activity/attributes//attributes/FormatMaskDIGIT_COUNTVARCHARconfiguration/relationTypes/Activity/attributes//attributes/DigitCountGEO_AREAVARCHARconfiguration/relationTypes/Activity/attributes//attributes/GeoAreaGEO_COUNTRYVARCHARconfiguration/relationTypes/Activity/attributes//attributes/GeoCountryACTIVEBOOLEANDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/relationTypes/Activity/attributes//attributes/ActiveACTIVITY_PRIVACY_PREFERENCESReltio URI: configuration/relationTypes/Activity/attributes/PrivacyPreferencesMaterialized: NamePRIVACY_PREFERENCES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIPHONE_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/PhoneOptOutALLOWED_TO_CONTACTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/AllowedToContactEMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/EmailOptOutMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/MailOptOutFAX_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/FaxOptOutREMOTE_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/RemoteOptOutOPT_OUT_ONEKEYBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/OptOutOnekeyVISIT_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/VisitOptOutACTIVITY_SPECIALITIESReltio URI: configuration/relationTypes/Activity/attributes/SpecialitiesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URISPECIALTY_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyTypeLKUP_IMS_SPECIALTY_TYPESPECIALTYVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyLKUP_IMS_SPECIALTYEMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/Specialities/attributes/EmailOptOutDESCVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/DescGROUPVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/GroupSOURCE_CDVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SourceCDSPECIALTY_DETAILVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyDetailPROFESSION_CODEVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/ProfessionCodeRANKVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/RankPRIMARY_SPECIALTY_FLAGBOOLEANPrimary Specialty flag to be populated by client teams according to business rulesconfiguration/relationTypes/Activity/attributes/Specialities/attributes/PrimarySpecialtyFlagSORT_ORDERVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SortOrderBEST_RECORDVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/BestRecordSUB_SPECIALTYVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialtyLKUP_IMS_SPECIALTYSUB_SPECIALTY_RANKVARCHARSubSpecialty Rankconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialtyRankACTIVITY_IDENTIFIERSReltio URI: configuration/relationTypes/Activity/attributes/ActIdentifiersMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACT_IDENTIFIERS_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIIDVARCHARconfiguration/relationTypes/Activity/attributes//attributes/IDTYPEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/TypeLKUP_IMS_HCP_IDENTIFIER_TYPEORDERVARCHARDisplays the order of priority for an for those facilities that share an . Valid values are: P ?the on a business record is the primary identifier for the business and O ?the is a secondary identifier. 
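A similar hedged sketch for the relation views, under the same naming assumption (illustrative only): AFFILIATIONS carries the relation-level keys (RELATION_URI, RELATION_TYPE, START_ENTITY_URI, END_ENTITY_URI), and the ACTIVITY_* views attach to it through RELATION_URI. Whether RELATION_TYPE holds the short relation name or the full relation-type URI is deployment-specific, so the filter is left out and the sample entity URI below is a placeholder.

# Illustrative sketch only: view names follow the section names on this page;
# the sample entity URI and the ACTIVE flag literal are placeholders.
AFFILIATIONS_VIEW = "AFFILIATIONS"
SPECIALITIES_VIEW = "ACTIVITY_SPECIALITIES"   # keyed by RELATION_URI

def affiliations_with_specialities(start_entity_uri: str) -> str:
    """Affiliations of one entity together with their nested Specialities rows."""
    return f"""
        SELECT a.RELATION_URI,
               a.RELATION_TYPE,
               a.END_ENTITY_URI,
               s.SPECIALTY,
               s.PRIMARY_SPECIALTY_FLAG
        FROM {AFFILIATIONS_VIEW} a
        LEFT JOIN {SPECIALITIES_VIEW} s
               ON s.RELATION_URI = a.RELATION_URI
        WHERE a.START_ENTITY_URI = '{start_entity_uri}'
          AND a.ACTIVE = 'true'
    """

if __name__ == "__main__":
    print(affiliations_with_specialities("entities/example-entity-id"))

The other relation-scoped views on this page (ACTIVITY_PHONE, PURCHASING_CLASSIFICATION, and so on) follow the same RELATION_URI pattern; only the selected columns differ.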
(Using P for the supports aggregating clinical volumes and avoids double counting).configuration/relationTypes/Activity/attributes//attributes/OrderAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/relationTypes/Activity/attributes//attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSNATIONAL_ID_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/NationalIdAttributeACTIVITY_ADDITIONAL_ATTRIBUTESReltio URI: configuration/relationTypes/Activity/attributes/AdditionalAttributesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDITIONAL_ATTRIBUTES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIATTRIBUTE_NAMEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeNameATTRIBUTE_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeTypeLKUP_IMS_TYPE_CODEATTRIBUTE_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeValueATTRIBUTE_RANKVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeRankADDITIONAL_INFOVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AdditionalInfoACTIVITY_BUSINESS_HOURSReltio URI: configuration/relationTypes/Activity/attributes/BusinessHoursMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESS_HOURS_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDAYVARCHARconfiguration/relationTypes/Activity/attributes//attributes/DayPERIODVARCHARconfiguration/relationTypes/Activity/attributes//attributes/PeriodTIME_SLOTVARCHARconfiguration/relationTypes/Activity/attributes//attributes/TimeSlotSTART_TIMEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/StartTimeEND_TIMEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/EndTimeAPPOINTMENT_ONLYBOOLEANconfiguration/relationTypes/Activity/attributes//attributes/AppointmentOnlyPERIOD_STARTVARCHARconfiguration/relationTypes/Activity/attributes//attributes/PeriodStartPERIOD_ENDVARCHARconfiguration/relationTypes/Activity/attributes//attributes/PeriodEndPERIOD_OF_DAYVARCHARconfiguration/relationTypes/Activity/attributes//attributes/PeriodOfDayACTIVITY_AFFILIATION_ROLEReltio URI: configuration/relationTypes/Activity/attributes/AffiliationRoleMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameAFFILIATION_ROLE_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIROLE_RANKVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleRankROLE_NAMEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleNameLKUP_IMS_ROLEROLE_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleAttributeROLE_TYPE_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleTypeAttributeROLE_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleStatusBEST_ROLE_INDICATORVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/BestRoleIndicatorACTIVITY_EMAILReltio URI: configuration/relationTypes/Activity/attributes/ActEmailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACT_EMAIL_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URITYPE_IMSVARCHARconfiguration/relationTypes/Activity/attributes//attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPEEMAILVARCHARconfiguration/relationTypes/Activity/attributes//attributes/EmailDOMAINVARCHARconfiguration/relationTypes/Activity/attributes//attributes/DomainDOMAIN_TYPEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/DomainTypeUSERNAMEVARCHARconfiguration/relationTypes/Activity/attributes//attributes/UsernameRANKVARCHARconfiguration/relationTypes/Activity/attributes//attributes/RankVALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes//attributes/ValidationStatusACTIVEBOOLEANconfiguration/relationTypes/Activity/attributes//attributes/ActiveACTIVITY_BRICKReltio URI: configuration/relationTypes/Activity/attributes/BrickMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBRICK_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URITYPEVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/TypeLKUP_IMS_BRICK_TYPEBRICK_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/BrickValueLKUP_IMS_BRICK_VALUESORT_ORDERVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/SortOrderACTIVITY_CLASSIFICATIONReltio URI: configuration/relationTypes/Activity/attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/Activity/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/Activity/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/NotesACTIVITY_SOURCE_DATAReltio URI: configuration/relationTypes/Activity/attributes/SourceDataMaterialized: URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes//attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes//attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes//attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/Activity/attributes//attributes/RankMANAGED_CLASSIFICATIONReltio URI: configuration/relationTypes/Managed/attributes/: NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/Managed/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/Managed/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/NotesMANAGED_SOURCE_DATAReltio URI: configuration/relationTypes/Managed/attributes/SourceDataMaterialized: URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes//attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes//attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes//attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/Managed/attributes//attributes/Rank" }, { "title": "Dynamic views for ", "": "", "pageLink": "/display//Dynamic+views+for+COMPANY+MDM+Model", "content": " care providerReltio URI: configuration/entityTypes/HCPMaterialized: URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive /entityTypes//attributes/CountryCOMPANY_CUST_IDVARCHARAn auto-generated unique COMPANY id assigned to an /entityTypes//attributes/COMPANYCustIDPREFIXVARCHARPrefix added before the name, e.g., Mr, Ms, Drconfiguration/entityTypes//attributes/PrefixHCPPrefixNAMEVARCHARNameconfiguration/entityTypes//attributes/NameFIRST_NAMEVARCHARFirst Nameconfiguration/entityTypes//attributes/FirstNameLAST_NAMEVARCHARLast Nameconfiguration/entityTypes//attributes//entityTypes//attributes//entityTypes//attributes/CleansedMiddleNameSTATUSVARCHARStatus, e.g., /entityTypes//attributes/StatusHCPStatusSTATUS_DETAILVARCHARDeactivation reasonconfiguration/entityTypes//attributes/StatusDetailHCPStatusDetailDEACTIVATION_CODEVARCHARDeactivation reasonconfiguration/entityTypes//attributes/DeactivationCodeHCPDeactivationReasonCodeSUFFIX_NAMEVARCHARGeneration /entityTypes//attributes/SuffixNameSuffixNameGENDERVARCHARGenderconfiguration/entityTypes//attributes/GenderGenderNICKNAMEVARCHARNicknameconfiguration/entityTypes//attributes/NicknamePREFERRED_NAMEVARCHARPreferred Nameconfiguration/entityTypes//attributes/PreferredNameFORMATTED_NAMEVARCHARFormatted /entityTypes//attributes/FormattedNameTYPE_CODEVARCHARHCP Type Codeconfiguration/entityTypes//attributes//entityTypes//attributes/SubTypeCodeHCPSubTypeCodeIS_COMPANY_APPROVED_SPEAKERBOOLEANIs COMPANY Approved Speakerconfiguration/entityTypes//attributes/IsCOMPANYApprovedSpeakerSPEAKER_LAST_BRIEFING_DATEDATELast Briefing Dateconfiguration/entityTypes//attributes/SpeakerLastBriefingDateSPEAKER_TYPEVARCHARSpeaker typeconfiguration/entityTypes//attributes/SpeakerTypeSPEAKER_STATUSVARCHARSpeaker 
/entityTypes//attributes/SpeakerStatusHCPSpeakerStatusSPEAKER_LEVELVARCHARSpeaker Statusconfiguration/entityTypes//attributes/SpeakerLevelSPEAKER_EFFECTIVE_DATEDATESpeaker Effective Dateconfiguration/entityTypes//attributes/SpeakerEffectiveDateSPEAKER_DEACTIVATE_REASONVARCHARSpeaker Effective Dateconfiguration/entityTypes//attributes/SpeakerDeactivateReasonDELETION_DATEDATEDeletion Dataconfiguration/entityTypes//attributes/DeletionDateACCOUNT_BLOCKEDBOOLEANIndicator of account blocked or notconfiguration/entityTypes//attributes/AccountBlockedY_O_BVARCHARBirth Yearconfiguration/entityTypes//attributes/YoBD_O_DDATEconfiguration/entityTypes//attributes/DoDY_O_DVARCHARconfiguration/entityTypes//attributes/YoDTERRITORY_NUMBERVARCHARTitle of /entityTypes//attributes/TerritoryNumberWEBSITE_URLVARCHARWebsite URLconfiguration/entityTypes//attributes/WebsiteURLTITLEVARCHARTitle of /entityTypes//attributes/TitleHCPTitleEFFECTIVE_END_DATEDATEconfiguration/entityTypes//attributes/EffectiveEndDateCOMPANY_WATCH_INDBOOLEANCOMPANY Watch Indconfiguration/entityTypes//attributes/COMPANYWatchIndKOL_STATUSBOOLEANKOL Statusconfiguration/entityTypes//attributes/KOLStatusTHIRD_PARTY_DECILVARCHARThird Party Decilconfiguration/entityTypes//attributes/ThirdPartyDecilFEDERAL_EMP_LETTER_DATEDATEFederal Emp Letter Dateconfiguration/entityTypes//attributes//entityTypes//attributes//entityTypes//attributes//entityTypes//attributes/SpeakerTravelIndicatorSPEAKER_INFOVARCHARSpeaker Informationconfiguration/entityTypes//attributes/SpeakerInfoDEGREEVARCHARDegree /entityTypes//attributes/DegreePRESENT_EMPLOYMENTVARCHARPresent Employmentconfiguration/entityTypes//attributes/PresentEmploymentPE_CDEMPLOYMENT_TYPE_CODEVARCHAREmployment Type Codeconfiguration/entityTypes//attributes/EmploymentTypeCodeEMPLOYMENT_TYPE_DESCVARCHAREmployment Type Descriptionconfiguration/entityTypes//attributes/EmploymentTypeDescTYPE_OF_PRACTICEVARCHARType Of Practiceconfiguration/entityTypes//attributes/TypeOfPracticeTOP_CDTYPE_OF_PRACTICE_DESCVARCHARType Of Practice Descriptionconfiguration/entityTypes//attributes/TypeOfPracticeDescSCHOOL_SEQ_NUMBERVARCHARSchool Sequence Numberconfiguration/entityTypes//attributes/SchoolSeqNumberMRM_DELETE_FLAGBOOLEANMRM Delete Flagconfiguration/entityTypes//attributes/MRMDeleteFlagMRM_DELETE_DATEDATEMRM Delete Dateconfiguration/entityTypes//attributes//entityTypes//attributes/CNCYDateAMA_HOSPITALVARCHARAMA Hospital Infoconfiguration/entityTypes//attributes/AMAHospitalAMA_HOSPITAL_DESCVARCHARAMA Hospital Descconfiguration/entityTypes//attributes/AMAHospitalDescPRACTISE_AT_HOSPITALVARCHARPractise At /entityTypes//attributes/PractiseAtHospitalSEGMENT_IDVARCHARSegment IDconfiguration/entityTypes//attributes/SegmentIDSEGMENT_DESCVARCHARSegment Descconfiguration/entityTypes//attributes/SegmentDescDCR_STATUSVARCHARStatus of profileconfiguration/entityTypes//attributes/DCRStatusDCRStatusPREFERRED_LANGUAGEVARCHARLanguage preferenceconfiguration/entityTypes//attributes/PreferredLanguageSOURCE_TYPEVARCHARType of the sourceconfiguration/entityTypes//attributes/SourceTypeSTATE_UPDATE_DATEDATEUpdate date of stateconfiguration/entityTypes//attributes/StateUpdateDateSOURCE_UPDATE_DATEDATEUpdate date at sourceconfiguration/entityTypes//attributes/SourceUpdateDateCOMMENTERSVARCHARCommentersconfiguration/entityTypes//attributes/CommentersIMAGE_GALLERYVARCHARconfiguration/entityTypes//attributes//entityTypes//attributes/BirthCityBIRTH_STATEVARCHARBirth Stateconfiguration/entityTypes//attributes/BirthStateStateBIRTH_COUNTRYVARCHARBirth 
/entityTypes//attributes/BirthCountryCountryD_O_BDATEDate of /entityTypes//attributes/DoBORIGINAL_SOURCE_NAMEVARCHAROriginal Source Nameconfiguration/entityTypes//attributes/OriginalSourceNameSOURCE_MATCH_CATEGORYVARCHARSource Match Categoryconfiguration/entityTypes//attributes/SourceMatchCategoryALTERNATE_NAMEReltio URI: configuration/entityTypes//attributes/AlternateNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameALTERNATE_NAME_URIVARCHARGenerated CodeACTIVEVARCHARActive Entity TypeNAME_TYPE_CODEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/NameTypeCodeHCPAlternateNameTypeFULL_NAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/FullNameFIRST_NAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/FirstNameMIDDLE_NAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/MiddleNameLAST_NAMEVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/LastNameVERSIONVARCHARconfiguration/entityTypes//attributes/AlternateName/attributes/VersionADDRESSESReltio URI: configuration/entityTypes//attributes/Addresses, configuration/entityTypes//attributes/Addresses, configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESSES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeADDRESS_TYPEVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/AddressTypeAddressTypeCOMPANY_ADDRESS_IDVARCHARCOMPANY Address IDconfiguration/entityTypes//attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes//attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes//attributes/Addresses/attributes/COMPANYAddressIDADDRESS_LINE1VARCHARconfiguration/entityTypes//attributes/Addresses/attributes/AddressLine1, configuration/entityTypes//attributes/Addresses/attributes/AddressLine1, configuration/entityTypes//attributes/Addresses/attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes//attributes/Addresses/attributes/AddressLine2, configuration/entityTypes//attributes/Addresses/attributes/AddressLine2, configuration/entityTypes//attributes/Addresses/attributes/AddressLine2ADDRESS_LINE3VARCHARconfiguration/entityTypes//attributes/Addresses/attributes/AddressLine3, configuration/entityTypes//attributes/Addresses/attributes/AddressLine3, configuration/entityTypes//attributes/Addresses/attributes/AddressLine3ADDRESS_LINE4VARCHARconfiguration/entityTypes//attributes/Addresses/attributes/AddressLine4, configuration/entityTypes//attributes/Addresses/attributes/AddressLine4, configuration/entityTypes//attributes/Addresses/attributes/AddressLine4CITYVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/City, configuration/entityTypes//attributes/Addresses/attributes/City, configuration/entityTypes//attributes/Addresses/attributes/CitySTATE_PROVINCEVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/StateProvince, configuration/entityTypes//attributes/Addresses/attributes/StateProvince, configuration/entityTypes//attributes/Addresses/attributes/StateProvinceStateCOUNTRY_ADDRESSESVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/Country, configuration/entityTypes//attributes/Addresses/attributes/Country, 
configuration/entityTypes//attributes/Addresses/attributes/CountryCountryPO_BOXVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/POBox, configuration/entityTypes//attributes/Addresses/attributes/POBox, configuration/entityTypes//attributes/Addresses/attributes/POBoxZIP5VARCHARconfiguration/entityTypes//attributes/Addresses/attributes/Zip5, configuration/entityTypes//attributes/Addresses/attributes/Zip5, configuration/entityTypes//attributes/Addresses/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes//attributes/Addresses/attributes/Zip4, configuration/entityTypes//attributes/Addresses/attributes/Zip4, configuration/entityTypes//attributes/Addresses/attributes/Zip4STREETVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/Street, configuration/entityTypes//attributes/Addresses/attributes/Street, configuration/entityTypes//attributes/Addresses/attributes/StreetPOSTAL_CODE_EXTENSIONVARCHARPostal Code Extensionconfiguration/entityTypes//attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes//attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes//attributes/Addresses/attributes/PostalCodeExtensionADDRESS_USAGE_TAGVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/AddressUsageTag, configuration/entityTypes//attributes/Addresses/attributes/AddressUsageTagAddressUsageTagCNCY_DATEDATECNCY Dateconfiguration/entityTypes//attributes/Addresses/attributes/CNCYDate, configuration/entityTypes//attributes/Addresses/attributes/CNCYDateCBSA_CODEVARCHARCore Based Statistical Areaconfiguration/entityTypes//attributes/Addresses/attributes/CBSACode, configuration/entityTypes//attributes/Addresses/attributes/CBSACode, configuration/entityTypes//attributes/Addresses/attributes/CBSACodePREMISEVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/Premise, configuration/entityTypes//attributes/Addresses/attributes/PremiseISO3166-2VARCHARThis field holds the ISO 3166 2-character country nfiguration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes//attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes//attributes/Addresses/attributes/ISO3166-2ISO3166-3VARCHARThis field holds the ISO 3166 3-character country nfiguration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes//attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes//attributes/Addresses/attributes/ISO3166-3ISO3166-NVARCHARThis field holds the ISO 3166 N-digit numeric country nfiguration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes//attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes//attributes/Addresses/attributes/ISO3166-NLATITUDEVARCHARLatitudeconfiguration/entityTypes//attributes/Addresses/attributes/Latitude, configuration/entityTypes//attributes/Addresses/attributes/Latitude, configuration/entityTypes//attributes/Addresses/attributes/LatitudeLONGITUDEVARCHARLongitudeconfiguration/entityTypes//attributes/Addresses/attributes/Longitude, configuration/entityTypes//attributes/Addresses/attributes/Longitude, configuration/entityTypes//attributes/Addresses/attributes/LongitudeGEO_ACCURACYVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/GeoAccuracyVERIFICATION_STATUSVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/, 
configuration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/VerificationStatusVERIFICATION_STATUS_DETAILSVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes//attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes//attributes/Addresses/attributes/VerificationStatusDetailsAVCVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/AVCSETTING_TYPEVARCHARSetting Typeconfiguration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/SettingTypeADDRESS_SETTING_TYPE_DESCVARCHARAddress Setting Type Descconfiguration/entityTypes//attributes/Addresses/attributes/AddressSettingTypeDesc, configuration/entityTypes//attributes/Addresses/attributes/AddressSettingTypeDescCATEGORYVARCHARCategoryconfiguration/entityTypes//attributes/Addresses/attributes/Category, configuration/entityTypes//attributes/Addresses/attributes/CategoryAddressCategoryFIPS_CODEVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/FIPSCode, configuration/entityTypes//attributes/Addresses/attributes/FIPSCodeFIPS_COUNTY_CODEVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/FIPSCountyCode, configuration/entityTypes//attributes/Addresses/attributes/FIPSCountyCodeFIPS_COUNTY_CODE_DESCVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/FIPSCountyCodeDesc, configuration/entityTypes//attributes/Addresses/attributes/FIPSCountyCodeDescFIPS_STATE_CODEVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/FIPSStateCodeFIPS_STATE_CODE_DESCVARCHARconfiguration/entityTypes//attributes/Addresses/attributes/FIPSStateCodeDesc, configuration/entityTypes//attributes/Addresses/attributes/FIPSStateCodeDescCARE_OFVARCHARCare Ofconfiguration/entityTypes//attributes/Addresses/attributes/CareOf, configuration/entityTypes//attributes/Addresses/attributes/CareOfMAIN_PHYSICAL_OFFICEVARCHARMain Physical Officeconfiguration/entityTypes//attributes/Addresses/attributes/MainPhysicalOffice, configuration/entityTypes//attributes/Addresses/attributes/MainPhysicalOfficeDELIVERABILITY_CONFIDENCEVARCHARDeliverability Confidenceconfiguration/entityTypes//attributes/Addresses/attributes/DeliverabilityConfidence, configuration/entityTypes//attributes/Addresses/attributes/DeliverabilityConfidenceAPPLIDVARCHARAPPLIDconfiguration/entityTypes//attributes/Addresses/attributes/APPLID, configuration/entityTypes//attributes/Addresses/attributes/APPLIDSMPLDLV_INDBOOLEANSMPLDLV /entityTypes//attributes/Addresses/attributes/SMPLDLVInd, configuration/entityTypes//attributes/Addresses/attributes/SMPLDLVIndSTATUSVARCHARStatusconfiguration/entityTypes//attributes/Addresses/attributes/Status, configuration/entityTypes//attributes/Addresses/attributes/StatusAddressStatusSTARTER_ELIGIBLE_FLAGVARCHARStarterEligibleFlagconfiguration/entityTypes//attributes/Addresses/attributes/StarterEligibleFlag, configuration/entityTypes//attributes/Addresses/attributes/StarterEligibleFlagDEA_FLAGBOOLEANDEA Flagconfiguration/entityTypes//attributes/Addresses/attributes/DEAFlag, configuration/entityTypes//attributes/Addresses/attributes/DEAFlagUSAGE_TYPEVARCHARUsage Typeconfiguration/entityTypes//attributes/Addresses/attributes/, 
configuration/entityTypes//attributes/Addresses/attributes/UsageTypePRIMARYBOOLEANPrimary /entityTypes//attributes/Addresses/attributes/Primary, configuration/entityTypes//attributes/Addresses/attributes/PrimaryEFFECTIVE_START_DATEDATEEffective Start Dateconfiguration/entityTypes//attributes/Addresses/attributes/EffectiveStartDate, configuration/entityTypes//attributes/Addresses/attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEEffective End Dateconfiguration/entityTypes//attributes/Addresses/attributes/EffectiveEndDate, configuration/entityTypes//attributes/Addresses/attributes/EffectiveEndDateADDRESS_RANKVARCHARAddress Rank for priorityconfiguration/entityTypes//attributes/Addresses/attributes/AddressRank, configuration/entityTypes//attributes/Addresses/attributes/AddressRank, configuration/entityTypes//attributes/Addresses/attributes/AddressRankSOURCE_SEGMENT_CODEVARCHARSource Segment Codeconfiguration/entityTypes//attributes/Addresses/attributes/SourceSegmentCode, configuration/entityTypes//attributes/Addresses/attributes/SourceSegmentCodeSEGMENT1VARCHARSegment1configuration/entityTypes//attributes/Addresses/attributes/Segment1, configuration/entityTypes//attributes/Addresses/attributes/Segment1SEGMENT2VARCHARSegment2configuration/entityTypes//attributes/Addresses/attributes/Segment2, configuration/entityTypes//attributes/Addresses/attributes/Segment2SEGMENT3VARCHARSegment3configuration/entityTypes//attributes/Addresses/attributes/Segment3, configuration/entityTypes//attributes/Addresses/attributes/Segment3ADDRESS_INDBOOLEANAddressIndconfiguration/entityTypes//attributes/Addresses/attributes/AddressInd, configuration/entityTypes//attributes/Addresses/attributes/AddressIndSCRIPT_UTILIZATION_WEIGHTVARCHARScript Utilization Weightconfiguration/entityTypes//attributes/Addresses/attributes/ScriptUtilizationWeight, configuration/entityTypes//attributes/Addresses/attributes/ScriptUtilizationWeightBUSINESS_ACTIVITY_CODEVARCHARBusiness Activity Codeconfiguration/entityTypes//attributes/Addresses/attributes/BusinessActivityCode, configuration/entityTypes//attributes/Addresses/attributes/BusinessActivityCodeBUSINESS_ACTIVITY_DESCVARCHARBusiness Activity Descconfiguration/entityTypes//attributes/Addresses/attributes/BusinessActivityDesc, configuration/entityTypes//attributes/Addresses/attributes//entityTypes//attributes/Addresses/attributes/PracticeLocationRank, configuration/entityTypes//attributes/Addresses/attributes/PracticeLocationRankPracticeLocationRankPRACTICE_LOCATION_CONFIDENCE_INDVARCHARPractice /entityTypes//attributes/Addresses/attributes/PracticeLocationConfidenceInd, configuration/entityTypes//attributes/Addresses/attributes/PracticeLocationConfidenceIndPRACTICE_LOCATION_CONFIDENCE_DESCVARCHARPractice /entityTypes//attributes/Addresses/attributes/PracticeLocationConfidenceDesc, configuration/entityTypes//attributes/Addresses/attributes/PracticeLocationConfidenceDescSINGLE_ADDRESS_INDBOOLEANSingle Address Indconfiguration/entityTypes//attributes/Addresses/attributes/SingleAddressInd, configuration/entityTypes//attributes/Addresses/attributes/SingleAddressIndSUB_ADMINISTRATIVE_AREAVARCHARThis field holds the smallest geographic data element within a country. 
For instance, nfiguration/entityTypes/HCP/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes//attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes//attributes/Addresses/attributes/SubAdministrativeAreaSUPER_ADMINISTRATIVE_AREAVARCHARThis field holds the largest geographic data element within a nfiguration/entityTypes/HCO/attributes/Addresses/attributes/SuperAdministrativeAreaADMINISTRATIVE_AREAVARCHARThis field holds the most common geographic data element within a country. For instance, , and nfiguration/entityTypes/HCO/attributes/Addresses/attributes/AdministrativeAreaUNIT_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/UnitNameUNIT_VALUEVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/UnitValueFLOORVARCHARN/Aconfiguration/entityTypes/HCO/attributes/Addresses/attributes/FloorBUILDINGVARCHARN/Aconfiguration/entityTypes/HCO/attributes/Addresses/attributes/BuildingSUB_BUILDINGVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/SubBuildingNEIGHBORHOODVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/NeighborhoodPREMISE_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/PremiseNumberADDRESSES_SOURCESourceReltio URI: configuration/entityTypes//attributes/Addresses/attributes/Source, configuration/entityTypes//attributes/Addresses/attributes/Source, configuration/entityTypes//attributes/Addresses/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESSES_URIVARCHARGenerated CodeACTIVEVARCHARActive TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes//attributes/Addresses/attributes/Source/attributes/, configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/, configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceRankSOURCE_ADDRESS_IDVARCHARSource Address IDconfiguration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/SourceAddressIDLEGACY_IQVIA_ADDRESS_IDVARCHARLegacy address idconfiguration/entityTypes//attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressID, configuration/entityTypes//attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressIDADDRESSES_DEADEAReltio URI: configuration/entityTypes//attributes/Addresses/attributes/, configuration/entityTypes//attributes/Addresses/attributes/: KeyDEA_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNUMBERVARCHARNumberconfiguration/entityTypes//attributes/Addresses/attributes//attributes/Number, configuration/entityTypes//attributes/Addresses/attributes//attributes/NumberEXPIRATION_DATEDATEExpiration Dateconfiguration/entityTypes//attributes/Addresses/attributes//attributes/ExpirationDate, configuration/entityTypes//attributes/Addresses/attributes//attributes/ExpirationDateSTATUSVARCHARStatusconfiguration/entityTypes//attributes/Addresses/attributes//attributes/Status, 
configuration/entityTypes//attributes/Addresses/attributes//attributes/StatusAddressDEAStatusSTATUSVARCHARStatusconfiguration/entityTypes//attributes/Addresses/attributes//attributes/Status, configuration/entityTypes//attributes/Addresses/attributes//attributes/StatusSTATUS_DETAILVARCHARDeactivation Reason Codeconfiguration/entityTypes//attributes/Addresses/attributes//attributes/, configuration/entityTypes//attributes/Addresses/attributes//attributes/StatusDetailHCPDEAStatusDetailSTATUS_DETAILVARCHARDeactivation Reason Codeconfiguration/entityTypes//attributes/Addresses/attributes//attributes/, configuration/entityTypes//attributes/Addresses/attributes//attributes/StatusDetailDRUG_SCHEDULEVARCHARDrug Scheduleconfiguration/entityTypes//attributes/Addresses/attributes//attributes/DrugSchedule, configuration/entityTypes//attributes/Addresses/attributes//attributes/DrugScheduleDRUG_SCHEDULEVARCHARDrug Scheduleconfiguration/entityTypes//attributes/Addresses/attributes//attributes/DrugSchedule, configuration/entityTypes//attributes/Addresses/attributes//attributes/DrugScheduleApp-LSCustomer360DEADrugScheduleEFFECTIVE_DATEDATEEffective Dateconfiguration/entityTypes//attributes/Addresses/attributes//attributes/EffectiveDate, configuration/entityTypes//attributes/Addresses/attributes//attributes/EffectiveDateSTATUS_DATEDATEStatus Dateconfiguration/entityTypes//attributes/Addresses/attributes//attributes/StatusDate, configuration/entityTypes//attributes/Addresses/attributes//attributes/StatusDateDEA_BUSINESS_ACTIVITYVARCHARBusiness Activityconfiguration/entityTypes//attributes/Addresses/attributes//attributes/DEABusinessActivity, configuration/entityTypes//attributes/Addresses/attributes//attributes/DEABusinessActivityDEABusinessActivityDEA_BUSINESS_ACTIVITYVARCHARBusiness Activityconfiguration/entityTypes//attributes/Addresses/attributes//attributes/DEABusinessActivity, configuration/entityTypes//attributes/Addresses/attributes//attributes/DEABusinessActivitySUB_BUSINESS_ACTIVITYVARCHARSub Business Activityconfiguration/entityTypes//attributes/Addresses/attributes//attributes/SubBusinessActivity, configuration/entityTypes//attributes/Addresses/attributes//attributes/SubBusinessActivityDEABusinessSubActivitySUB_BUSINESS_ACTIVITYVARCHARSub Business Activityconfiguration/entityTypes//attributes/Addresses/attributes//attributes/SubBusinessActivity, configuration/entityTypes//attributes/Addresses/attributes//attributes//entityTypes//attributes/Addresses/attributes//attributes/BusinessActivityDescSUB_BUSINESS_ACTIVITY_DESCVARCHARSub Business Activity Descconfiguration/entityTypes//attributes/Addresses/attributes//attributes/SubBusinessActivityDescADDRESSES_OFFICE_INFORMATIONReltio URI: configuration/entityTypes//attributes/Addresses/attributes/OfficeInformation, configuration/entityTypes//attributes/Addresses/attributes/OfficeInformationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESSES_URIVARCHARGenerated KeyOFFICE_INFORMATION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeBEST_TIMESVARCHARBest Timesconfiguration/entityTypes//attributes/Addresses/attributes/OfficeInformation/attributes/, configuration/entityTypes//attributes/Addresses/attributes/OfficeInformation/attributes/BestTimesAPPT_REQUIREDBOOLEANAppointment Required or notconfiguration/entityTypes//attributes/Addresses/attributes/OfficeInformation/attributes/, 
configuration/entityTypes//attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequiredOFFICE_NOTESVARCHAROffice Notesconfiguration/entityTypes//attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotes, configuration/entityTypes//attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotesCOMPLIANCEComplianceReltio URI: configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameCOMPLIANCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeGO_STATUSVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/GOStatusHCPComplianceGOStatusPIGO_STATUSVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/PIGOStatusHCPPIGOStatusNIPPIGO_STATUSVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/NIPPIGOStatusHCPNIPPIGOStatusPRIMARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/PrimaryPIGORationaleHCPPIGORationaleSECONDARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/SecondaryPIGORationaleHCPPIGORationalePIGOSME_REVIEWVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/PIGOSMEReviewHCPPIGOSMEReviewGSQ_DATEDATEconfiguration/entityTypes//attributes/Compliance/attributes/GSQDateDO_NOT_USEBOOLEANconfiguration/entityTypes//attributes/Compliance/attributes/DoNotUseCHANGE_DATEDATEconfiguration/entityTypes//attributes/Compliance/attributes/ChangeDateCHANGE_REASONVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/ChangeReasonMAPPHCP_STATUSVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/MAPPHCPStatusMAPP_MAILVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/MAPPMailDISCLOSUREDisclosureReltio URI: configuration/entityTypes//attributes/DisclosureMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDISCLOSURE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeBENEFIT_CATEGORYVARCHARBenefit Categoryconfiguration/entityTypes//attributes/Disclosure/attributes/BenefitCategoryHCPBenefitCategoryBENEFIT_TITLEVARCHARBenefit Titleconfiguration/entityTypes//attributes/Disclosure/attributes//entityTypes//attributes/Disclosure/attributes//entityTypes//attributes/Disclosure/attributes/BenefitSpecialtyHCPBenefitSpecialtyCONTRACT_CLASSIFICATIONVARCHARContract Classificationconfiguration/entityTypes//attributes/Disclosure/attributes/ContractClassificationCONTRACT_CLASSIFICATION_DATEDATEContract Classification Dateconfiguration/entityTypes//attributes/Disclosure/attributes/ContractClassificationDateMILITARYBOOLEANMilitaryconfiguration/entityTypes//attributes/Disclosure/attributes/MilitaryCIVIL_SERVANTBOOLEANCivil Servantconfiguration/entityTypes//attributes/Disclosure/attributes/CivilServantCREDENTIALCredential URI: configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameCREDENTIAL_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCREDENTIALVARCHARconfiguration/entityTypes//attributes/Credential/attributes/CredentialCredentialOTHER_CDTL_TXTVARCHAROther Credential Textconfiguration/entityTypes//attributes/Credential/attributes/OtherCdtlTxtPRIMARY_FLAGBOOLEANPrimary Flagconfiguration/entityTypes//attributes/Credential/attributes/PrimaryFlagEFFECTIVE_END_DATEDATEEffective End 
Dateconfiguration/entityTypes//attributes/Credential/attributes/EffectiveEndDatePROFESSIONProfession InformationReltio URI: configuration/entityTypes//attributes/ProfessionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePROFESSION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypePROFESSIONVARCHARconfiguration/entityTypes//attributes/Profession/attributes/ProfessionHCPSpecialtyProfessionPROFESSION_SOURCESourceReltio URI: configuration/entityTypes//attributes/Profession/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePROFESSION_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive /entityTypes//attributes/Profession/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes//attributes/Profession/attributes/Source/attributes/SourceRankSPECIALITIESReltio URI: configuration/entityTypes//attributes/, configuration/entityTypes//attributes/SpecialitiesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeSPECIALTYVARCHARSpecialty of the entity, e.g., Adult Congenital Heart Diseaseconfiguration/entityTypes//attributes/Specialities/attributes/Specialty, configuration/entityTypes//attributes/Specialities/attributes/SpecialtyHCPSpecialty,App-LSCustomer360SpecialtyPROFESSIONVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/ProfessionHCPSpecialtyProfessionPRIMARYBOOLEANWhether Primary Specialty or notconfiguration/entityTypes//attributes/Specialities/attributes/Primary, configuration/entityTypes//attributes/Specialities/attributes/PrimaryRANKVARCHARRankconfiguration/entityTypes//attributes/Specialities/attributes/RankTRUST_INDICATORVARCHARconfiguration/entityTypes//attributes/Specialities/attributes/TrustIndicatorDESCVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Specialities/attributes/DescSPECIALTY_TYPEVARCHARType of , e.g. 
/entityTypes//attributes/Specialities/attributes/SpecialtyTypeApp-LSCustomer360SpecialtyTypeGROUPVARCHARGroup, Specialty belongs toconfiguration//attributes/Specialities/attributes//entityTypes//attributes/Specialities/attributes/SpecialtyDetailSPECIALITIES_SOURCEReltio URI: configuration/entityTypes//attributes/Specialities/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive /entityTypes//attributes/Specialities/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARRankconfiguration/entityTypes//attributes/Specialities/attributes/Source/attributes/SourceRankSUB_SPECIALITIESReltio URI: configuration/entityTypes//attributes/SubSpecialitiesMaterialized: CodeACTIVEVARCHARActive TypeSPECIALTY_CODEVARCHARSub specialty code of the entityconfiguration/entityTypes//attributes//attributes/SpecialtyCodeSUB_SPECIALTYVARCHARSub specialty of the entityconfiguration/entityTypes//attributes//attributes/SubSpecialtyPROFESSION_CODEVARCHARProfession Codeconfiguration/entityTypes//attributes//attributes/ProfessionCodeSUB_SPECIALITIES_SOURCEReltio URI: configuration/entityTypes//attributes//attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSUB_SPECIALITIES_URIVARCHARGenerated CodeACTIVEVARCHARActive /entityTypes//attributes//attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARRankconfiguration/entityTypes//attributes//attributes/Source/attributes/SourceRankEDUCATIONReltio URI: configuration/entityTypes//attributes/EducationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEDUCATION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSCHOOL_CDVARCHARconfiguration/entityTypes//attributes/Education/attributes/SchoolCDSCHOOL_NAMEVARCHARconfiguration/entityTypes//attributes/Education/attributes/SchoolNameYEAR_OF_GRADUATIONVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Education/attributes/YearOfGraduationSTATEVARCHARconfiguration/entityTypes//attributes/Education/attributes/StateCOUNTRY_EDUCATIONVARCHARconfiguration/entityTypes//attributes/Education/attributes/CountryTYPEVARCHARconfiguration/entityTypes//attributes/Education/attributes/TypeGPAVARCHARconfiguration/entityTypes//attributes/Education/attributes/ NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes//attributes/Education/attributes/GraduatedEMAILReltio URI: configuration/entityTypes//attributes/Email, configuration/entityTypes//attributes/Email, configuration/entityTypes//attributes/EmailMaterialized: CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARType of Email, e.g., /entityTypes//attributes//attributes/Type, configuration/entityTypes//attributes//attributes/Type, configuration/entityTypes//attributes//attributes/TypeEmailTypeEMAILVARCHAREmail addressconfiguration/entityTypes//attributes//attributes/Email, configuration/entityTypes//attributes//attributes/Email, configuration/entityTypes//attributes//attributes/EmailRANKVARCHARRank used to assign priority to a Emailconfiguration/entityTypes//attributes//attributes/Rank, configuration/entityTypes//attributes//attributes/Rank, configuration/entityTypes//attributes//attributes/RankEMAIL_USAGE_TAGVARCHARconfiguration/entityTypes//attributes//attributes/EmailUsageTag, 
configuration/entityTypes//attributes//attributes/EmailUsageTag, configuration/entityTypes//attributes//attributes/EmailUsageTagEmailUsageTagUSAGE_TYPEVARCHARUsage Type of an Emailconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/UsageTypeDOMAINVARCHARconfiguration/entityTypes//attributes//attributes/Domain, configuration/entityTypes//attributes//attributes/Domain, configuration/entityTypes//attributes//attributes/DomainVALIDATION_STATUSVARCHARconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/ValidationStatusDOMAIN_TYPEVARCHARStatus of Emailconfiguration/entityTypes//attributes//attributes/, configuration/entityTypes//attributes//attributes/DomainTypeUSERNAMEVARCHARDomain on which is createdconfiguration/entityTypes//attributes//attributes/Username, configuration/entityTypes//attributes//attributes/UsernameEMAIL_SOURCESourceReltio URI: configuration/entityTypes//attributes//attributes/Source, configuration/entityTypes//attributes//attributes/Source, configuration/entityTypes//attributes//attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMAIL_URIVARCHARGenerated CodeACTIVEVARCHARActive /entityTypes//attributes//attributes/Source/attributes/SourceName, configuration/entityTypes//attributes//attributes/Source/attributes/SourceName, configuration/entityTypes//attributes//attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes//attributes//attributes/Source/attributes/, configuration/entityTypes//attributes//attributes/Source/attributes/, configuration/entityTypes//attributes//attributes/Source/attributes/SourceRankIDENTIFIERSReltio URI: configuration/entityTypes//attributes/Identifiers, configuration/entityTypes//attributes/Identifiers, configuration/entityTypes//attributes/: CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARIdentifier Typeconfiguration/entityTypes//attributes/Identifiers/attributes/Type, configuration/entityTypes//attributes/Identifiers/attributes/Type, configuration/entityTypes//attributes/Identifiers/attributes/TypeHCPIdentifierType,HCOIdentifierTypeIDVARCHARIdentifier IDconfiguration/entityTypes//attributes/Identifiers/attributes/ID, configuration/entityTypes//attributes/Identifiers/attributes/ID, configuration/entityTypes//attributes/Identifiers/attributes/IDEXTL_DATEDATEExternal Dateconfiguration/entityTypes//attributes/Identifiers/attributes/EXTLDateACTIVATION_DATEDATEActivation Dateconfiguration/entityTypes//attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes//attributes/Identifiers/attributes/ActivationDateREFER_BACK_ID_STATUSVARCHARStatusconfiguration/entityTypes//attributes/Identifiers/attributes/ReferBackIDStatus, configuration/entityTypes//attributes/Identifiers/attributes/ReferBackIDStatusDEACTIVATION_DATEDATEIdentifier Deactivation Dateconfiguration/entityTypes//attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes//attributes/Identifiers/attributes/DeactivationDateSTATEVARCHARIdentifier Stateconfiguration/entityTypes//attributes/Identifiers/attributes/StateStateSOURCE_NAMEVARCHARName of the Identifier sourceconfiguration/entityTypes//attributes/Identifiers/attributes/SourceName, configuration/entityTypes//attributes/Identifiers/attributes/SourceName, 
configuration/entityTypes//attributes/Identifiers/attributes/SourceNameTRUSTVARCHARTrustconfiguration/entityTypes//attributes/Identifiers/attributes/Trust, configuration/entityTypes//attributes/Identifiers/attributes/Trust, configuration/entityTypes//attributes/Identifiers/attributes/TrustSOURCE_START_DATEDATEStart date at sourceconfiguration/entityTypes//attributes/Identifiers/attributes/SourceStartDateSOURCE_UPDATE_DATEDATEUpdate date at sourceconfiguration/entityTypes//attributes/Identifiers/attributes/SourceUpdateDate, configuration/entityTypes//attributes/Identifiers/attributes/SourceUpdateDateSTATUSVARCHARStatusconfiguration/entityTypes//attributes/Identifiers/attributes/Status, configuration/entityTypes//attributes/Identifiers/attributes/StatusHCPIdentifierStatus,HCOIdentifierStatusSTATUS_DETAILVARCHARIdentifier Deactivation Reason Codeconfiguration/entityTypes//attributes/Identifiers/attributes/, configuration/entityTypes//attributes/Identifiers/attributes/StatusDetailHCPIdentifierStatusDetail,HCOIdentifierStatusDetailDRUG_SCHEDULEVARCHARStatusconfiguration/entityTypes//attributes/Identifiers/attributes/DrugScheduleTAXONOMYVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/TaxonomySEQUENCE_NUMBERVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/SequenceNumberMCRPE_CODEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/MCRPECodeMCRPE_START_DATEDATEconfiguration/entityTypes//attributes/Identifiers/attributes/MCRPEStartDateMCRPE_END_DATEDATEconfiguration/entityTypes//attributes/Identifiers/attributes/MCRPEEndDateMCRPE_IS_OPTEDBOOLEANconfiguration/entityTypes//attributes/Identifiers/attributes/MCRPEIsOptedEXPIRATION_DATEDATEconfiguration/entityTypes//attributes/Identifiers/attributes/ExpirationDateORDERVARCHAROrderconfiguration/entityTypes//attributes/Identifiers/attributes/OrderREASONVARCHARReasonconfiguration/entityTypes//attributes/Identifiers/attributes//entityTypes//attributes/Identifiers/attributes/StartDateEND_DATEDATEIdentifier End Dateconfiguration/entityTypes//attributes/Identifiers/attributes/EndDateDATA_QUALITYReltio URI: configuration/entityTypes//attributes/, configuration/entityTypes//attributes/, configuration/entityTypes//attributes/DataQualityMaterialized: CodeACTIVEVARCHARActive TypeDQ_DESCRIPTIONVARCHARDQ Descriptionconfiguration/entityTypes//attributes//attributes/DQDescription, configuration/entityTypes//attributes//attributes/DQDescription, configuration/entityTypes//attributes//attributes/DQDescriptionDQDescriptionLICENSEReltio URI: configuration/entityTypes//attributes/License, configuration/entityTypes//attributes/: NameLICENSE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCATEGORYVARCHARCategory License belongs to, e.g., /entityTypes//attributes/License/attributes/CategoryPROFESSION_CODEVARCHARProfession Informationconfiguration/entityTypes//attributes/License/attributes/ProfessionCodeHCPProfessionNUMBERVARCHARState License INTEGER. A unique license is listed for each license the physician holds. There is no standard format syntax. Format examples: 18986, , . There is also no limit to the of licenses a physician can hold in a state. Example: A physician can have an inactive resident license plus unlimited active licenses. 
Residents can have as many as four licenses since some states issue licenses every yearconfiguration/entityTypes//attributes/License/attributes/Number, configuration/entityTypes//attributes/License/attributes/NumberREG_AUTH_IDVARCHARRegAuthIDconfiguration/entityTypes//attributes/License/attributes/RegAuthIDSTATE_BOARDVARCHARState Boardconfiguration/entityTypes//attributes/License/attributes//entityTypes//attributes/License/attributes/StateBoardNameSTATEVARCHARState License State. Two character field. standard configuration/entityTypes/HCP/attributes/License/attributes/State, configuration/entityTypes//attributes/License/attributes/StateTYPEVARCHARState License Type. U = Unlimited: there is no restriction on the physician to practice medicine; implies restrictions of some sort. For example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license in state or private hospitals or other settings; T = Temporary: issued to a physician temporarily practicing in an underserved area outside his/her state of licensure. Also granted between board meetings when new licenses are issued. The span for a temporary license varies from state to state. Temporary licenses typically expire from the date they are issued; R = Resident: license granted to a physician in graduate medical education (e.g., residency training).configuration/entityTypes//attributes/License/attributes/TypeST_LIC_TYPESTATUSVARCHARState License Status. A = Active. Physician is licensed to practice within the state; I = Inactive. If the physician has not reregistered a state license OR if the license has been suspended or revoked by ; X = Unknown. If the state has not provided current information. Note: Some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in /entityTypes//attributes/License/attributes/StatusDetailHCPLicenseStatusDetailTRUSTVARCHARTrust flagconfiguration/entityTypes//attributes/License/attributes/TrustDEACTIVATION_REASON_CODEVARCHARDeactivation Reason Codeconfiguration/entityTypes//attributes/License/attributes/DeactivationReasonCodeHCPLicenseDeactivationReasonCodeEXPIRATION_DATEDATELicense Expiration Dateconfiguration/entityTypes//attributes/License/attributes/ExpirationDateISSUE_DATEDATEState License Issue Dateconfiguration/entityTypes//attributes/License/attributes/IssueDateSTATE_LICENSE_PRIVILEGEVARCHARState License Privilegeconfiguration/entityTypes//attributes/License/attributes/StateLicensePrivilegeSTATE_LICENSE_PRIVILEGE_NAMEVARCHARState License Privilege Nameconfiguration/entityTypes//attributes/License/attributes/StateLicensePrivilegeNameSTATE_LICENSE_STATUS_DATEDATEState License Status Dateconfiguration/entityTypes//attributes/License/attributes/StateLicenseStatusDateRANKVARCHARRank of Licenseconfiguration/entityTypes//attributes/License/attributes/RankCERTIFICATION_CODEVARCHARCertification Codeconfiguration/entityTypes//attributes/License/attributes/CertificationCodeHCPLicenseCertificationLICENSE_SOURCESourceReltio URI: configuration/entityTypes//attributes/License/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameLICENSE_URIVARCHARGenerated CodeACTIVEVARCHARActive /entityTypes//attributes/License/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes//attributes/License/attributes/Source/attributes/SourceRankLICENSE_REGULATORYLicense RegulatoryReltio URI: 
configuration/entityTypes//attributes/License/attributes/RegulatoryMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameLICENSE_URIVARCHARGenerated KeyREGULATORY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive Sampl Non Ctrlconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/ReqSamplNonCtrlREQ_SAMPL_CTRLVARCHARReq /entityTypes//attributes/License/attributes/Regulatory/attributes/ReqSamplCtrlRECV_SAMPL_NON_CTRLVARCHARRecv Sampl Non Ctrlconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/RecvSamplNonCtrlRECV_SAMPL_CTRLVARCHARRecv Sampl Ctrlconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/RecvSamplCtrlDISTR_SAMPL_NON_CTRLVARCHARDistr /entityTypes//attributes/License/attributes/Regulatory/attributes//entityTypes//attributes/License/attributes/Regulatory/attributes/ I Flagconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/SampDrugSchedIFlagSAMP_DRUG_SCHED_II_FLAGVARCHARSamp Drug Sched II Flagconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIFlagSAMP_DRUG_SCHED_III_FLAGVARCHARSamp Drug Sched III Flagconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIIFlagSAMP_DRUG_SCHED_IV_FLAGVARCHARSamp /entityTypes//attributes/License/attributes/Regulatory/attributes/SampDrugSchedIVFlagSAMP_DRUG_SCHED_V_FLAGVARCHARSamp /entityTypes//attributes/License/attributes/Regulatory/attributes//entityTypes//attributes/License/attributes/Regulatory/attributes/SampDrugSchedVIFlagPRESCR_NON_CTRL_FLAGVARCHARPrescr /entityTypes//attributes/License/attributes/Regulatory/attributes/PrescrNonCtrlFlagPRESCR_APP_REQ_NON_CTRL_FLAGVARCHARPrescr App /entityTypes//attributes/License/attributes/Regulatory/attributes//entityTypes//attributes/License/attributes/Regulatory/attributes/PrescrCtrlFlagPRESCR_APP_REQ_CTRL_FLAGVARCHARPrescr App /entityTypes//attributes/License/attributes/Regulatory/attributes/ I Flagconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIFlagPRESCR_DRUG_SCHED_II_FLAGVARCHARPrescr Drug Sched II Flagconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIFlagPRESCR_DRUG_SCHED_III_FLAGVARCHARPrescr Drug Sched III Flagconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlagPRESCR_DRUG_SCHED_IV_FLAGVARCHARPrescr Drug Sched IV Flagconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIVFlagPRESCR_DRUG_SCHED_V_FLAGVARCHARPrescr Drug Sched V Flagconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVFlagPRESCR_DRUG_SCHED_VI_FLAGVARCHARPrescr /entityTypes//attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVIFlagSUPERVISORY_REL_CD_NON_CTRLVARCHARSupervisory /entityTypes//attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrlSUPERVISORY_REL_CD_CTRLVARCHARSupervisory Rel Cd Ctrlconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdCtrlCOLLABORATIVE_NONCTRLVARCHARCollaborative Non ctrlconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/CollaborativeNonctrlCOLLABORATIVE_CTRLVARCHARCollaborative 
ctrlconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/CollaborativeCtrlINCLUSIONARYVARCHARInclusionaryconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/InclusionaryEXCLUSIONARYVARCHARExclusionaryconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/ExclusionaryDELEGATION_NON_CTRLVARCHARDelegation Non Ctrlconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/DelegationNonCtrlDELEGATION_CTRLVARCHARDelegation Ctrlconfiguration/entityTypes//attributes/License/attributes/Regulatory/attributes/DelegationCtrlCSRReltio URI: configuration/entityTypes//attributes/CSRMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCSR_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive /entityTypes//attributes//attributes/ProfessionCodeHCPProfessionAUTHORIZATION_NUMBERVARCHARAutorization number of CSRconfiguration/entityTypes//attributes//attributes/AuthorizationNumberREG_AUTH_IDVARCHARRegAuthIDconfiguration/entityTypes//attributes//attributes/RegAuthIDSTATE_BOARDVARCHARState Boardconfiguration/entityTypes//attributes//attributes//entityTypes//attributes//attributes/StateBoardNameSTATEVARCHARState of nfiguration/entityTypes/HCP/attributes/CSR/attributes/StateCSR_LICENSE_TYPEVARCHARCSR License Typeconfiguration/entityTypes//attributes//attributes/CSRLicenseTypeCSR_LICENSE_TYPE_NAMEVARCHARCSR License Type Nameconfiguration/entityTypes//attributes//attributes/CSRLicenseTypeNameCSR_LICENSE_PRIVILEGEVARCHARCSR License Privilegeconfiguration/entityTypes//attributes//attributes/CSRLicensePrivilegeCSR_LICENSE_PRIVILEGE_NAMEVARCHARCSR License Privilege Nameconfiguration/entityTypes//attributes//attributes/CSRLicensePrivilegeNameCSR_LICENSE_EFFECTIVE_DATEDATECSR License Effective Dateconfiguration/entityTypes//attributes//attributes/CSRLicenseEffectiveDateCSR_LICENSE_EXPIRATION_DATEDATECSR License Expiration Dateconfiguration/entityTypes//attributes//attributes/CSRLicenseExpirationDateCSR_LICENSE_STATUSVARCHARCSR License Statusconfiguration/entityTypes//attributes//attributes/CSRLicenseStatusHCPLicenseStatusSTATUS_DETAILVARCHARCSRLicenseDeactivationReasonconfiguration/entityTypes//attributes//attributes/StatusDetailHCPLicenseStatusDetailCSR_LICENSE_DEACTIVATION_REASONVARCHARCSR License Deactivation Reasonconfiguration/entityTypes//attributes//attributes/CSRLicenseDeactivationReasonHCPCSRLicenseDeactivationReasonCSR_LICENSE_CERTIFICATIONVARCHARCSR License Certificationconfiguration/entityTypes//attributes//attributes/CSRLicenseCertificationHCPLicenseCertificationCSR_LICENSE_TYPE_PRIVILEGE_RANKVARCHARCSR License Type Privilege Rankconfiguration/entityTypes//attributes//attributes/CSRLicenseTypePrivilegeRankCSR_REGULATORYCSR RegulatoryReltio URI: configuration/entityTypes//attributes//attributes/RegulatoryMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCSR_URIVARCHARGenerated KeyREGULATORY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeREQ_SAMPL_NON_CTRLVARCHARReq Sampl Non Ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/ReqSamplNonCtrlREQ_SAMPL_CTRLVARCHARReq /entityTypes//attributes//attributes/Regulatory/attributes/ReqSamplCtrlRECV_SAMPL_NON_CTRLVARCHARRecv Sampl Non Ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/RecvSamplNonCtrlRECV_SAMPL_CTRLVARCHARRecv Sampl 
Ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/RecvSamplCtrlDISTR_SAMPL_NON_CTRLVARCHARDistr /entityTypes//attributes//attributes/Regulatory/attributes/DistrSamplNonCtrlDISTR_SAMPL_CTRLVARCHARDistr Sampl Ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/ I Flagconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/SampDrugSchedIFlagSAMP_DRUG_SCHED_II_FLAGVARCHARSamp Drug Sched II Flagconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/SampDrugSchedIIFlagSAMP_DRUG_SCHED_III_FLAGVARCHARSamp Drug Sched III Flagconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/SampDrugSchedIIIFlagSAMP_DRUG_SCHED_IV_FLAGVARCHARSamp /entityTypes//attributes//attributes/Regulatory/attributes/SampDrugSchedIVFlagSAMP_DRUG_SCHED_V_FLAGVARCHARSamp /entityTypes//attributes//attributes/Regulatory/attributes//entityTypes//attributes//attributes/Regulatory/attributes/SampDrugSchedVIFlagPRESCR_NON_CTRL_FLAGVARCHARPrescr /entityTypes//attributes//attributes/Regulatory/attributes/PrescrNonCtrlFlagPRESCR_APP_REQ_NON_CTRL_FLAGVARCHARPrescr App /entityTypes//attributes//attributes/Regulatory/attributes//entityTypes//attributes//attributes/Regulatory/attributes/PrescrCtrlFlagPRESCR_APP_REQ_CTRL_FLAGVARCHARPrescr App /entityTypes//attributes//attributes/Regulatory/attributes/ I Flagconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/PrescrDrugSchedIFlagPRESCR_DRUG_SCHED_II_FLAGVARCHARPrescr Drug Sched II Flagconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/PrescrDrugSchedIIFlagPRESCR_DRUG_SCHED_III_FLAGVARCHARPrescr Drug Sched III Flagconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/PrescrDrugSchedIIIFlagPRESCR_DRUG_SCHED_IV_FLAGVARCHARPrescr Drug Sched IV Flagconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/PrescrDrugSchedIVFlagPRESCR_DRUG_SCHED_V_FLAGVARCHARPrescr Drug Sched V Flagconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/PrescrDrugSchedVFlagPRESCR_DRUG_SCHED_VI_FLAGVARCHARPrescr /entityTypes//attributes//attributes/Regulatory/attributes/PrescrDrugSchedVIFlagSUPERVISORY_REL_CD_NON_CTRLVARCHARSupervisory /entityTypes//attributes//attributes/Regulatory/attributes/SupervisoryRelCdNonCtrlSUPERVISORY_REL_CD_CTRLVARCHARSupervisory Rel Cd Ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/SupervisoryRelCdCtrlCOLLABORATIVE_NONCTRLVARCHARCollaborative Non ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/CollaborativeNonctrlCOLLABORATIVE_CTRLVARCHARCollaborative ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/CollaborativeCtrlINCLUSIONARYVARCHARInclusionaryconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/InclusionaryEXCLUSIONARYVARCHARExclusionaryconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/ExclusionaryDELEGATION_NON_CTRLVARCHARDelegation Non Ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/DelegationNonCtrlDELEGATION_CTRLVARCHARDelegation Ctrlconfiguration/entityTypes//attributes//attributes/Regulatory/attributes/DelegationCtrlPRIVACY_PREFERENCESReltio URI: configuration/entityTypes//attributes/PrivacyPreferencesMaterialized: NamePRIVACY_PREFERENCES_URIVARCHARGenerated CodeACTIVEVARCHARActive Entity TypeAMA_NO_CONTACTBOOLEANCan be Contacted through or 
notconfiguration/entityTypes//attributes/PrivacyPreferences/attributes/AMANoContact
FTC_NO_CONTACT | BOOLEAN | Can be Contacted through or not | configuration/entityTypes//attributes/PrivacyPreferences/attributes/FTCNoContact
PDRP | BOOLEAN | Physician Data Restriction Program enrolled or not | configuration/entityTypes//attributes/PrivacyPreferences/attributes/PDRP
PDRP_DATE | DATE | Physician Data Restriction Program enrolment date | configuration/entityTypes//attributes/PrivacyPreferences/attributes/PDRPDate
OPT_OUT_START_DATE | DATE | Opt Out Start Date | configuration/entityTypes//attributes/PrivacyPreferences/attributes/OptOutStartDate
ALLOWED_TO_CONTACT | BOOLEAN | Indicator whether allowed to contact | configuration/entityTypes//attributes/PrivacyPreferences/attributes/AllowedToContact
PHONE_OPT_OUT | BOOLEAN | Opted Out for being contacted on Phone or not | configuration/entityTypes//attributes/PrivacyPreferences/attributes/PhoneOptOut
EMAIL_OPT_OUT | BOOLEAN | Opted Out for being contacted through or not | configuration/entityTypes//attributes/PrivacyPreferences/attributes/EmailOptOut
FAX_OPT_OUT | BOOLEAN | Opted Out for being contacted through Fax or not | configuration/entityTypes//attributes/PrivacyPreferences/attributes/FaxOptOut
MAIL_OPT_OUT | BOOLEAN | Opted Out for being contacted through Mail or not | configuration/entityTypes//attributes/PrivacyPreferences/attributes/MailOptOut
NO_CONTACT_REASON | VARCHAR | Reason for no contact | configuration/entityTypes//attributes/PrivacyPreferences/attributes/NoContactReason
NO_CONTACT_EFFECTIVE_DATE | DATE | Effective date of no contact | configuration/entityTypes//attributes/PrivacyPreferences/attributes/NoContactEffectiveDate
CERTIFICATES
Reltio URI: configuration/entityTypes//attributes/Certificates
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CERTIFICATES_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Entity Type
CERTIFICATE_ID | VARCHAR | Certificate Id of Certificate received by | /entityTypes//attributes/Certificates/attributes/CertificateId
SPEAKER
Reltio URI: configuration/entityTypes//attributes/Speaker
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SPEAKER_URI | VARCHAR | Generated Key
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
LEVEL | VARCHAR | Level | configuration/entityTypes//attributes/Speaker/attributes/Level | HCPTierLevel
TIER_STATUS | VARCHAR | Tier | /entityTypes//attributes/Speaker/attributes/TierStatus | HCPTierStatus
TIER_APPROVAL_DATE | DATE | Tier Approval Date | configuration/entityTypes//attributes/Speaker/attributes/TierApprovalDate
TIER_UPDATED_DATE | DATE | Tier Updated Date | configuration/entityTypes//attributes/Speaker/attributes/TierUpdatedDate
TIER_APPROVER | VARCHAR | Tier Approver | configuration/entityTypes//attributes/Speaker/attributes/TierApprover
EFFECTIVE_DATE | DATE | Speaker Effective Date | configuration/entityTypes//attributes/Speaker/attributes/EffectiveDate
DEACTIVATE_REASON | VARCHAR | Speaker Deactivate Reason | configuration/entityTypes//attributes/Speaker/attributes/DeactivateReason
IS_SPEAKER | BOOLEAN | | configuration/entityTypes//attributes/Speaker/attributes/IsSpeaker
SPEAKER_TIER_RATIONALE
Tier URI: configuration/entityTypes//attributes/Speaker/attributes/TierRationale
Materialized: CodeACTIVEVARCHARActive Type
TIER_RATIONALE | VARCHAR | Tier Rationale | configuration/entityTypes//attributes/Speaker/attributes//attributes/TierRationale | HCPTierRational
RAWDEA
Reltio URI: configuration/entityTypes//attributes/RAWDEA
Materialized: CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity Type
DEA_NUMBERVARCHARRAW DEA 
Numberconfiguration/entityTypes//attributes/RAWDEA/attributes/DEANumberDEA_BUSINESS_ACTIVITYVARCHARDEA Business Activityconfiguration/entityTypes//attributes/RAWDEA/attributes/DEABusinessActivityEFFECTIVE_DATEDATERAW /entityTypes//attributes/RAWDEA/attributes/EffectiveDateEXPIRATION_DATEDATERAW DEA Expiration Dateconfiguration/entityTypes//attributes/RAWDEA/attributes/ExpirationDateNAMEVARCHARRAW /entityTypes//attributes/RAWDEA/attributes/NameADDITIONAL_COMPANY_INFOVARCHARAdditional Company Infoconfiguration/entityTypes//attributes/RAWDEA/attributes/AdditionalCompanyInfoADDRESS1VARCHARRAW DEA Address 1configuration/entityTypes//attributes/RAWDEA/attributes/Address1ADDRESS2VARCHARRAW DEA Address 2configuration/entityTypes//attributes/RAWDEA/attributes/Address2CITYVARCHARRAW /entityTypes//attributes/RAWDEA/attributes/CitySTATEVARCHARRAW DEA Stateconfiguration/entityTypes//attributes/RAWDEA/attributes/StateZIPVARCHARRAW /entityTypes//attributes/RAWDEA/attributes/ZipBUSINESS_ACTIVITY_SUB_CDVARCHARBusiness Activity Sub Cdconfiguration/entityTypes//attributes/RAWDEA/attributes/BusinessActivitySubCdPAYMT_INDVARCHARPaymt Indicatorconfiguration/entityTypes//attributes/RAWDEA/attributes/PaymtIndHCPRAWDEAPaymtIndRAW_DEA_SCHD_CLAS_CDVARCHARRaw /entityTypes//attributes/RAWDEA/attributes//entityTypes//attributes/RAWDEA/attributes/StatusPHONEReltio URI: configuration/entityTypes//attributes/Phone, configuration/entityTypes//attributes/Phone, configuration/entityTypes//attributes/PhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeTYPEVARCHARconfiguration/entityTypes//attributes/Phone/attributes/Type, configuration/entityTypes//attributes/Phone/attributes/Type, configuration/entityTypes//attributes/Phone/attributes/TypePhoneTypeNUMBERVARCHARPhone numberconfiguration/entityTypes//attributes/Phone/attributes/Number, configuration/entityTypes//attributes/Phone/attributes/Number, configuration/entityTypes//attributes/Phone/attributes/NumberFORMATTED_NUMBERVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/FormattedNumberEXTENSIONVARCHARExtension, if anyconfiguration/entityTypes//attributes/Phone/attributes/Extension, configuration/entityTypes//attributes/Phone/attributes/Extension, configuration/entityTypes//attributes/Phone/attributes/ExtensionRANKVARCHARRank used to assign priority to a Phone numberconfiguration/entityTypes//attributes/Phone/attributes/Rank, configuration/entityTypes//attributes/Phone/attributes/Rank, configuration/entityTypes//attributes/Phone/attributes/RankPHONE_USAGE_TAGVARCHARconfiguration/entityTypes//attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes//attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes//attributes/Phone/attributes/PhoneUsageTagPhoneUsageTagUSAGE_TYPEVARCHARUsage Type of a Phone numberconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/UsageTypeAREA_CODEVARCHARconfiguration/entityTypes//attributes/Phone/attributes/AreaCode, configuration/entityTypes//attributes/Phone/attributes/AreaCode, configuration/entityTypes//attributes/Phone/attributes/AreaCodeLOCAL_NUMBERVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, 
configuration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/LocalNumberVALIDATION_STATUSVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/ValidationStatusLINE_TYPEVARCHARconfiguration/entityTypes//attributes/Phone/attributes/LineType, configuration/entityTypes//attributes/Phone/attributes/LineType, configuration/entityTypes//attributes/Phone/attributes/LineTypeFORMAT_MASKVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/FormatMaskDIGIT_COUNTVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/DigitCountGEO_AREAVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/GeoAreaGEO_COUNTRYVARCHARconfiguration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/, configuration/entityTypes//attributes/Phone/attributes/GeoCountryCOUNTRY_CODEVARCHARTwo digit code for a Countryconfiguration/entityTypes//attributes/Phone/attributes/CountryCode, configuration/entityTypes//attributes/Phone/attributes/CountryCodePHONE_SOURCESourceReltio URI: configuration/entityTypes//attributes/Phone/attributes/Source, configuration/entityTypes//attributes/Phone/attributes/Source, configuration/entityTypes//attributes/Phone/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive /entityTypes//attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes//attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes//attributes/Phone/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes//attributes/Phone/attributes/Source/attributes/, configuration/entityTypes//attributes/Phone/attributes/Source/attributes/, configuration/entityTypes//attributes/Phone/attributes/Source/attributes/SourceRankSOURCE_ADDRESS_IDVARCHARSourceAddressIDconfiguration/entityTypes//attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes//attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes//attributes/Phone/attributes/Source/attributes/SourceAddressIDHCP_ADDRESS_ZIPReltio URI: configuration/entityTypes/Location/attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARGenerated CodeACTIVEVARCHARActive TypePOSTAL_CODEVARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/PostalCodeZIP5VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip4DEAReltio URI: configuration/entityTypes//attributes/, configuration/entityTypes//attributes/: CodeACTIVEVARCHARActive TypeNUMBERVARCHARconfiguration/entityTypes//attributes//attributes/Number, configuration/entityTypes//attributes//attributes/NumberSTATUSVARCHARconfiguration/entityTypes//attributes//attributes/Status, 
configuration/entityTypes//attributes//attributes/StatusSTATUSVARCHARconfiguration/entityTypes//attributes//attributes/Status, configuration/entityTypes//attributes//attributes/StatusApp-LSCustomer360DEAStatusEXPIRATION_DATEDATEconfiguration/entityTypes//attributes//attributes/ExpirationDate, configuration/entityTypes//attributes//attributes/ExpirationDateDRUG_SCHEDULEVARCHARconfiguration/entityTypes//attributes//attributes/DrugSchedule, configuration/entityTypes//attributes//attributes/DrugScheduleApp-LSCustomer360DEADrugScheduleDRUG_SCHEDULE_DESCRIPTIONVARCHARconfiguration/entityTypes//attributes//attributes/DrugScheduleDescription, configuration/entityTypes//attributes//attributes/DrugScheduleDescriptionBUSINESS_ACTIVITYVARCHARconfiguration/entityTypes//attributes//attributes/BusinessActivity, configuration/entityTypes//attributes//attributes/BusinessActivityApp-LSCustomer360DEABusinessActivityBUSINESS_ACTIVITY_PLUS_SUB_CODEVARCHARBusiness Activity SubCodeconfiguration/entityTypes//attributes//attributes/BusinessActivityPlusSubCode, configuration/entityTypes//attributes//attributes/BusinessActivityPlusSubCodeApp-LSCustomer360DEABusinessActivitySubcodeBUSINESS_ACTIVITY_DESCRIPTIONVARCHARStringconfiguration/entityTypes//attributes//attributes/BusinessActivityDescription, configuration/entityTypes//attributes//attributes/BusinessActivityDescriptionApp-LSCustomer360DEABusinessActivityDescriptionPAYMENT_INDICATORVARCHARStringconfiguration/entityTypes//attributes//attributes/PaymentIndicator, configuration/entityTypes//attributes//attributes/PaymentIndicatorApp-LSCustomer360DEAPaymentIndicatorTAXONOMYReltio URI: configuration/entityTypes//attributes/Taxonomy, configuration/entityTypes//attributes/TaxonomyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTAXONOMY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTAXONOMYVARCHARTaxonomy related to , e.g., /entityTypes//attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes//attributes/Taxonomy/attributes/TaxonomyApp-LSCustomer360Taxonomy,TAXONOMY_CDTYPEVARCHARType of Taxonomy, e.g., Primaryconfiguration/entityTypes//attributes/Taxonomy/attributes/Type, configuration/entityTypes//attributes/Taxonomy/attributes/,TAXONOMY_TYPESTATE_CODEVARCHARconfiguration/entityTypes//attributes/Taxonomy/attributes/StateCodeGROUPVARCHARGroup Taxonomy belongs toconfiguration/entityTypes//attributes/Taxonomy/attributes/GroupPROVIDER_TYPEVARCHARTaxonomy /entityTypes//attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes//attributes/Taxonomy/attributes/ProviderTypeCLASSIFICATIONVARCHARClassification of /entityTypes//attributes/Taxonomy/attributes/Classification, configuration/entityTypes//attributes/Taxonomy/attributes/ClassificationSPECIALIZATIONVARCHARSpecialization of Taxonomyconfiguration/entityTypes//attributes/Taxonomy/attributes/Specialization, configuration/entityTypes//attributes/Taxonomy/attributes/SpecializationPRIORITYVARCHARTaxonomy Priorityconfiguration/entityTypes//attributes/Taxonomy/attributes/Priority, configuration/entityTypes//attributes/Taxonomy/attributes/PriorityTAXONOMY_PRIORITYSANCTIONReltio URI: configuration/entityTypes//attributes/SanctionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSANCTION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive Entity TypeSANCTION_IDVARCHARCourt sanction Id for any 
nfiguration/entityTypes/HCP/attributes/Sanction/attributes/SanctionIdACTION_CODEVARCHARCourt sanction code for a caseconfiguration/entityTypes//attributes/Sanction/attributes/ActionCodeACTION_DESCRIPTIONVARCHARCourt sanction Action Descriptionconfiguration/entityTypes//attributes/Sanction/attributes/ActionDescriptionBOARD_CODEVARCHARCourt case board idconfiguration/entityTypes//attributes/Sanction/attributes/BoardCodeBOARD_DESCVARCHARcourt case board descriptionconfiguration/entityTypes//attributes/Sanction/attributes/BoardDescACTION_DATEDATECourt sanction Action Dateconfiguration/entityTypes//attributes/Sanction/attributes/ActionDateSANCTION_PERIOD_START_DATEDATESanction Period Start Dateconfiguration/entityTypes//attributes/Sanction/attributes/SanctionPeriodStartDateSANCTION_PERIOD_END_DATEDATESanction Period End Dateconfiguration/entityTypes//attributes/Sanction/attributes/SanctionPeriodEndDateMONTH_DURATIONVARCHARSanction Duration in Monthsconfiguration/entityTypes//attributes/Sanction/attributes/MonthDurationFINE_AMOUNTVARCHARFine Amount for Sanctionconfiguration/entityTypes//attributes/Sanction/attributes/FineAmountOFFENSE_CODEVARCHAROffense Code for /entityTypes//attributes/Sanction/attributes/OffenseCodeOFFENSE_DESCRIPTIONVARCHAROffense /entityTypes//attributes/Sanction/attributes/OffenseDescriptionOFFENSE_DATEDATEOffense Date for Sanctionconfiguration/entityTypes//attributes/Sanction/attributes/OffenseDateGSA_SANCTIONReltio URI: configuration/entityTypes//attributes/GSASanctionMaterialized: CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARSanction Id of as per listconfiguration/entityTypes//attributes/GSASanction/attributes/SanctionIdFIRST_NAMEVARCHARFirst Name of as per listconfiguration/entityTypes//attributes/GSASanction/attributes/FirstNameMIDDLE_NAMEVARCHARMiddle Name of as per listconfiguration/entityTypes//attributes/GSASanction/attributes/MiddleNameLAST_NAMEVARCHARLast Name of as per listconfiguration/entityTypes//attributes/GSASanction/attributes/LastNameSUFFIX_NAMEVARCHARSuffix Name of as per listconfiguration/entityTypes//attributes/GSASanction/attributes/SuffixNameCITYVARCHARCity of as per listconfiguration/entityTypes//attributes/GSASanction/attributes/CitySTATEVARCHARState of as per listconfiguration/entityTypes//attributes/GSASanction/attributes/StateZIPVARCHARZip of HCP as per listconfiguration/entityTypes//attributes/GSASanction/attributes/ZipACTION_DATEVARCHARAction Date for /entityTypes//attributes/GSASanction/attributes/ActionDateTERM_DATEVARCHARTerm Date for /entityTypes//attributes/GSASanction/attributes/TermDateAGENCYVARCHARAgency that imposed /entityTypes//attributes/GSASanction/attributes/AgencyCONFIDENCEVARCHARConfidence as per listconfiguration/entityTypes//attributes/GSASanction/attributes/ConfidenceMULTI_CHANNEL_COMMUNICATION_CONSENTReltio URI: configuration/entityTypes//attributes/MultiChannelCommunicationConsentMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameMULTI_CHANNEL_COMMUNICATION_CONSENT_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive Entity TypeCHANNEL_TYPEVARCHARChannel type for the consent, e.g. 
email, , nfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelTypeCHANNEL_VALUEVARCHARValue of the channel for consent - e@configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelValueCHANNEL_CONSENTVARCHARThe consent for the corresponding channel and the id - yes or noconfiguration/entityTypes//attributes/MultiChannelCommunicationConsent/attributes/ChannelConsentChannelConsentSTART_DATEDATEStart date of the consentconfiguration/entityTypes//attributes/MultiChannelCommunicationConsent/attributes/StartDateEXPIRATION_DATEDATEExpiration date of the consentconfiguration/entityTypes//attributes/MultiChannelCommunicationConsent/attributes/ExpirationDateCOMMUNICATION_TYPEVARCHARDifferent communication type that the individual prefers, for e.g. - New Product Launches, Sales/Discounts, Brand-level Newsconfiguration/entityTypes//attributes/MultiChannelCommunicationConsent/attributes/CommunicationTypeCOMMUNICATION_FREQUENCYVARCHARHow frequently can the individual be communicated to. monthly/weeklyconfiguration/entityTypes//attributes/MultiChannelCommunicationConsent/attributes/CommunicationFrequencyCHANNEL_PREFERENCE_FLAGBOOLEANWhen checked denotes the preferred channel of communicationconfiguration/entityTypes//attributes/MultiChannelCommunicationConsent/attributes/ChannelPreferenceFlagEMPLOYMENTReltio URI: configuration/entityTypes//attributes/EmploymentMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMPLOYMENT_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAMEVARCHARNameconfiguration/entityTypes/Organization/attributes/NameTITLEVARCHARconfiguration/relationTypes/Employment/attributes/TitleSUMMARYVARCHARconfiguration/relationTypes/Employment/attributes/SummaryIS_CURRENTBOOLEANconfiguration/relationTypes/Employment/attributes/IsCurrentHCOHealth care organizationReltio URI: configuration/entityTypes/HCOMaterialized: URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPE_CODEVARCHARType Codeconfiguration/entityTypes//attributes/TypeCodeHCOTypeCOMPANY_CUST_IDVARCHARCOMPANY Customer IDconfiguration/entityTypes//attributes/COMPANYCustIDSUB_TYPE_CODEVARCHARSubType Codeconfiguration/entityTypes//attributes/SubTypeCodeHCOSubTypeSUB_CATEGORYVARCHARSubCategoryconfiguration/entityTypes//attributes//entityTypes//attributes/StructureTypeCodeHCOStructureTypeCodeNAMEVARCHARNameconfiguration/entityTypes//attributes/NameDOING_BUSINESS_AS_NAMEVARCHARconfiguration/entityTypes//attributes/DoingBusinessAsNameFLEX_RESTRICTED_PARTY_INDVARCHARparty indicator for /entityTypes//attributes/FlexRestrictedPartyIndTRADE_PARTNERVARCHARStringconfiguration/entityTypes//attributes/TradePartnerSHIP_TO_SR_PARENT_NAMEVARCHARStringconfiguration/entityTypes//attributes/ShipToSrParentNameSHIP_TO_JR_PARENT_NAMEVARCHARStringconfiguration/entityTypes//attributes/ShipToJrParentNameSHIP_FROM_JR_PARENT_NAMEVARCHARStringconfiguration/entityTypes//attributes/ShipFromJrParentNameTEACHING_HOSPITALVARCHARTeaching /entityTypes//attributes/TeachingHospitalOWNERSHIP_STATUSVARCHARconfiguration/entityTypes//attributes//entityTypes//attributes/ProfitStatusHCOProfitStatusCMIVARCHARCMIconfiguration/entityTypes//attributes/CMICOMPANY_HCOS_FLAGVARCHARCOMPANY HCOS Flagconfiguration/entityTypes//attributes/COMPANYHCOSFlagSOURCE_MATCH_CATEGORYVARCHARSource Match 
Categoryconfiguration/entityTypes//attributes/SourceMatchCategoryCOMM_HOSPVARCHARCommHospconfiguration/entityTypes//attributes/CommHospGEN_FIRSTVARCHARStringconfiguration/entityTypes//attributes/GenFirstHCOGenFirstSREP_ACCESSVARCHARStringconfiguration/entityTypes//attributes/SrepAccessHCOSrepAccessOUT_PATIENTS_NUMBERSVARCHARconfiguration/entityTypes//attributes/OutPatientsNumbersUNIT_OPER_ROOM_NUMBERVARCHARconfiguration/entityTypes//attributes/UnitOperRoomNumberPRIMARY_GPOVARCHARPrimary GPOconfiguration/entityTypes//attributes/PrimaryGPOTOTAL_PRESCRIBERSVARCHARTotal Prescribersconfiguration/entityTypes//attributes/TotalPrescribersNUM_IN_PATIENTSVARCHARTotal InPatientsconfiguration/entityTypes//attributes/NumInPatientsTOTAL_LIVESVARCHARTotal Livesconfiguration/entityTypes//attributes/entityTypes//attributes/TotalPharmacistsTOTAL_M_DSVARCHARTotal MDsconfiguration/entityTypes//attributes/TotalMDsTOTAL_REVENUEVARCHARTotal /entityTypes//attributes/TotalRevenueSTATUSVARCHARconfiguration/entityTypes//attributes/StatusHCOStatusSTATUS_DETAILVARCHARDeactivation Reasonconfiguration/entityTypes//attributes//entityTypes//attributes/AccountBlockCodeTOTAL_LICENSE_BEDSVARCHARTotal License Bedsconfiguration/entityTypes//attributes/TotalLicenseBedsTOTAL_CENSUS_BEDSVARCHARconfiguration/entityTypes//attributes/TotalCensusBedsTOTAL_STAFFED_BEDSVARCHARconfiguration/entityTypes//attributes//entityTypes//attributes/TotalSurgeriesTOTAL_PROCEDURESVARCHARTotal Proceduresconfiguration/entityTypes//attributes/TotalProceduresNUM_EMPLOYEESVARCHARNumber of Proceduresconfiguration/entityTypes//attributes/NumEmployeesRESIDENT_COUNTVARCHARResident Countconfiguration/entityTypes//attributes/ResidentCountFORMULARYVARCHARFormularyconfiguration/entityTypes//attributes/FormularyHCOFormularyE_MEDICAL_RECORDVARCHARe-Medical Recordconfiguration/entityTypes//attributes//entityTypes//attributes/EPrescribeHCOEPrescribePAY_PERFORMVARCHARPay Performconfiguration/entityTypes//attributes/PayPerformHCOPayPerformDEACTIVATION_REASONVARCHARDeactivation Reasonconfiguration/entityTypes//attributes/DeactivationReasonHCODeactivationReasonINTERNATIONAL_LOCATION_NUMBERVARCHARInternational location number (part 1)configuration/entityTypes//attributes/InternationalLocationNumberDCR_STATUSVARCHARStatus of profileconfiguration//attributes/DCRStatusDCRStatusCOUNTRY_HCOVARCHARCountryconfiguration/entityTypes//attributes/CountryORIGINAL_SOURCE_NAMEVARCHAROriginal /entityTypes//attributes/OriginalSourceNameSOURCE_UPDATE_DATEDATEconfiguration//attributes/SourceUpdateDateCLASSOF_TRADE_NReltio URI: configuration/entityTypes//attributes/ClassofTradeNMaterialized: NameCLASSOF_TRADE_N_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive Entity TypeSOURCE_COTIDVARCHARSource COT IDconfiguration/entityTypes//attributes/ClassofTradeN/attributes/SourceCOTIDCOTPRIORITYVARCHARPriorityconfiguration/entityTypes//attributes/ClassofTradeN/attributes/PrioritySPECIALTYVARCHARSpecialty of Class of Tradeconfiguration/entityTypes//attributes/ClassofTradeN/attributes/SpecialtyCOTSpecialtyCLASSIFICATIONVARCHARClassification of Class of Tradeconfiguration/entityTypes//attributes/ClassofTradeN/attributes/ClassificationCOTClassificationFACILITY_TYPEVARCHARFacility Type of Class of Tradeconfiguration/entityTypes//attributes/ClassofTradeN/attributes/FacilityTypeCOTFacilityTypeCOT_ORDERVARCHARCOT Orderconfiguration/entityTypes//attributes/ClassofTradeN/attributes/COTOrderSTART_DATEDATEStart 
Dateconfiguration/entityTypes//attributes/ClassofTradeN/attributes/StartDateSOURCEVARCHARSourceconfiguration//attributes/ClassofTradeN/attributes/SourcePRIMARYVARCHARPrimaryconfiguration/entityTypes//attributes/ClassofTradeN/attributes/PrimaryHCO_ADDRESS_ZIPReltio URI: configuration/entityTypes/Location/attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARGenerated CodeACTIVEVARCHARActive TypePOSTAL_CODEVARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/PostalCodeZIP5VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip4340BReltio URI: configuration//attributes/340bMaterialized: noColumnTypeDescriptionReltio Attribute URILOV Name340B_URIVARCHARGenerated CodeACTIVEVARCHARActive /entityTypes//attributes/340b/attributes/340BIDENTITY_SUB_DIVISION_NAMEVARCHAREntity /entityTypes//attributes/340b/attributes/EntitySubDivisionNamePROGRAM_CODEVARCHARProgram Codeconfiguration/entityTypes//attributes/340b/attributes/ProgramCode340BProgramCodePARTICIPATINGBOOLEANParticipatingconfiguration/entityTypes//attributes/340b/attributes/ParticipatingAUTHORIZING_OFFICIAL_NAMEVARCHARAuthorizing Official Nameconfiguration/entityTypes//attributes/340b/attributes//entityTypes//attributes/340b/attributes/AuthorizingOfficialTitleAUTHORIZING_OFFICIAL_TELVARCHARAuthorizing Official Telconfiguration/entityTypes//attributes/340b/attributes/AuthorizingOfficialTelAUTHORIZING_OFFICIAL_TEL_EXTVARCHARAuthorizing Official Tel Extconfiguration/entityTypes//attributes/340b/attributes/AuthorizingOfficialTelExtCONTACT_NAMEVARCHARContact Nameconfiguration/entityTypes//attributes/340b/attributes/ContactNameCONTACT_TITLEVARCHARContact /entityTypes//attributes/340b/attributes/ContactTitleCONTACT_TELEPHONEVARCHARContact Telephoneconfiguration/entityTypes//attributes/340b/attributes//entityTypes//attributes/340b/attributes/ContactTelephoneExtSIGNED_BY_NAMEVARCHARSigned By /entityTypes//attributes/340b/attributes/SignedByNameSIGNED_BY_TITLEVARCHARSigned By Titleconfiguration/entityTypes//attributes/340b/attributes/SignedByTitleSIGNED_BY_TELEPHONEVARCHARSigned By Telephoneconfiguration/entityTypes//attributes/340b/attributes/SignedByTelephoneSIGNED_BY_TELEPHONE_EXTVARCHARSigned By /entityTypes//attributes/340b/attributes/SignedByTelephoneExtSIGNED_BY_DATEDATESigned By /attributes/340b/attributes/SignedByDateCERTIFIED_DECERTIFIED_DATEDATECertified/Decertified Dateconfiguration/entityTypes//attributes/340b/attributes/CertifiedDecertifiedDateRURALVARCHARRuralconfiguration/entityTypes//attributes/340b/attributes/RuralENTRY_COMMENTSVARCHAREntry Commentsconfiguration/entityTypes//attributes/340b/attributes/EntryCommentsNATURE_OF_SUPPORTVARCHARNature Of Supportconfiguration/entityTypes//attributes/340b/attributes/NatureOfSupportEDIT_DATEVARCHAREdit Dateconfiguration/entityTypes//attributes/340b/attributes/EditDate340B_PARTICIPATION_DATESReltio URI: configuration/entityTypes//attributes/340b/attributes/ParticipationDatesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV Name340B_URIVARCHARGenerated KeyPARTICIPATION_DATES_URIVARCHARGenerated CodeACTIVEVARCHARActive TypePARTICIPATING_START_DATEDATEParticipating Start Dateconfiguration/entityTypes//attributes/340b/attributes/ParticipationDates/attributes/ParticipatingStartDateTERMINATION_DATEDATETermination Dateconfiguration/entityTypes//attributes/340b/attributes/ParticipationDates/attributes/TerminationDateTERMINATION_CODEVARCHARTermination 
Codeconfiguration/entityTypes//attributes/340b/attributes/ParticipationDates/attributes/TerminationCode340BTerminationCodeOTHER_NAMESReltio URI: configuration/entityTypes//attributes/OtherNamesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameOTHER_NAMES_URIVARCHARGenerated CodeACTIVEVARCHARActive TypeTYPEVARCHARTypeconfiguration//attributes//attributes/TypeNAMEVARCHARNameconfiguration/entityTypes//attributes//attributes/NameACOReltio URI: configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameACO_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeTYPEVARCHARTypeconfiguration//attributes//attributes/TypeHCOACOTypeACO_TYPE_CATEGORYVARCHARType Categoryconfiguration/entityTypes//attributes//attributes/ACOTypeCategoryHCOACOTypeCategoryACO_TYPE_GROUPVARCHARType Group of ACOconfiguration/entityTypes//attributes//attributes/ACOTypeGroupHCOACOTypeGroupACO_ACODETAILReltio URI: configuration/entityTypes//attributes//attributes/ACODetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACO_URIVARCHARGenerated KeyACO_DETAIL_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeACO_DETAIL_CODEVARCHARDetail Code for /entityTypes//attributes//attributes//attributes/ACODetailCodeHCOACODetailACO_DETAIL_VALUEVARCHARDetail Value for ACOconfiguration/entityTypes//attributes//attributes//attributes/ACODetailValueACO_DETAIL_GROUP_CODEVARCHARDetail Value for ACOconfiguration/entityTypes//attributes//attributes//attributes/ACODetailGroupCodeHCOACODetailGroupWEBSITEReltio URI: configuration/entityTypes//attributes/WebsiteMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameWEBSITE_URIVARCHARGenerated CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeWEBSITE_URLVARCHARUrl of the websiteconfiguration//attributes/Website/attributes/WebsiteURLWEBSITE_SOURCESourceReltio URI: configuration/entityTypes//attributes/Website/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameWEBSITE_URIVARCHARGenerated CodeACTIVEVARCHARActive /entityTypes//attributes/Website/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes//attributes/Website/attributes/Source/attributes/SourceRankSALES_ORGANIZATIONSales OrganizationReltio URI: configuration/entityTypes//attributes/SalesOrganizationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSALES_ORGANIZATION_URIVARCHARGenerated CodeACTIVEVARCHARActive TypeSALES_ORGANIZATION_CODEVARCHARSales Organization Codeconfiguration/entityTypes//attributes/SalesOrganization/attributes/SalesOrganizationCodeCUSTOMER_ORDER_BLOCKVARCHARCustomer Order Blockconfiguration/entityTypes//attributes/SalesOrganization/attributes//entityTypes//attributes/SalesOrganization/attributes/CustomerGroupHCO_BUSINESS_UNIT_TAGReltio URI: configuration/entityTypes//attributes/BusinessUnitTAGMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESSUNITTAG_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive /entityTypes//attributes/BusinessUnitTAG/attributes/BusinessUnitSEGMENTVARCHARSegmentconfiguration/entityTypes//attributes/BusinessUnitTAG/attributes/SegmentCONTRACT_TYPEVARCHARContract Typeconfiguration/entityTypes//attributes/BusinessUnitTAG/attributes/ContractTypeGLNReltio URI: configuration/entityTypes//attributes/GLNMaterialized: CodeACTIVEVARCHARActive TypeTYPEVARCHARGLN 
Typeconfiguration/entityTypes//attributes/GLN/attributes/TypeIDVARCHARGLN IDconfiguration/entityTypes//attributes/GLN/attributes/IDSTATUSVARCHARGLN Statusconfiguration/entityTypes//attributes/GLN/attributes/StatusHCOGLNStatusSTATUS_DETAILVARCHARGLN Statusconfiguration/entityTypes//attributes/GLN/attributes/StatusDetailHCOGLNStatusDetailHCO_REFER_BACKReltio URI: configuration/entityTypes//attributes/ReferBackMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameREFERBACK_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive Entity TypeREFER_BACK_IDVARCHARRefer Back IDconfiguration/entityTypes//attributes/ReferBack/attributes/ReferBackIDREFER_BACK_HCOSIDVARCHARGLN IDconfiguration/entityTypes//attributes/ReferBack/attributes/ReferBackHCOSIDDEACTIVATION_REASONVARCHARDeactivation Reasonconfiguration/entityTypes//attributes/ReferBack/attributes/DeactivationReasonBEDReltio URI: configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameBED_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeTYPEVARCHARTypeconfiguration//attributes/Bed/attributes/TypeHCOBedTypeLICENSE_BEDSVARCHARLicense Bedsconfiguration/entityTypes//attributes/Bed/attributes/LicenseBedsCENSUS_BEDSVARCHARCensus Bedsconfiguration/entityTypes//attributes/Bed/attributes/CensusBedsSTAFFED_BEDSVARCHARStaffed Bedsconfiguration/entityTypes//attributes/Bed/attributes/StaffedBedsGSA_EXCLUSIONReltio URI: configuration/entityTypes//attributes/GSAExclusionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameGSA_EXCLUSION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARconfiguration//attributes//attributes/SanctionIdORGANIZATION_NAMEVARCHARconfiguration/entityTypes//attributes//attributes/OrganizationNameADDRESS_LINE1VARCHARconfiguration/entityTypes//attributes//attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes//attributes//attributes/AddressLine2CITYVARCHARconfiguration/entityTypes//attributes//attributes/CitySTATEVARCHARconfiguration/entityTypes//attributes//attributes/StateZIPVARCHARconfiguration/entityTypes//attributes//attributes/ZipACTION_DATEVARCHARconfiguration/entityTypes//attributes//attributes/ActionDateTERM_DATEVARCHARconfiguration/entityTypes//attributes//attributes/TermDateAGENCYVARCHARconfiguration/entityTypes//attributes//attributes/AgencyCONFIDENCEVARCHARconfiguration/entityTypes//attributes//attributes/ConfidenceOIG_EXCLUSIONReltio URI: configuration/entityTypes//attributes/OIGExclusionMaterialized: CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARconfiguration//attributes/OIGExclusion/attributes/SanctionIdACTION_CODEVARCHARconfiguration/entityTypes//attributes/OIGExclusion/attributes/ActionCodeACTION_DESCRIPTIONVARCHARconfiguration//attributes/OIGExclusion/attributes/ActionDescriptionBOARD_CODEVARCHARCourt case board idconfiguration/entityTypes//attributes/OIGExclusion/attributes/BoardCodeBOARD_DESCVARCHARcourt case board descriptionconfiguration//attributes/OIGExclusion/attributes/BoardDescACTION_DATEDATEconfiguration/entityTypes//attributes/OIGExclusion/attributes/ActionDateOFFENSE_CODEVARCHARconfiguration//attributes/OIGExclusion/attributes/OffenseCodeOFFENSE_DESCRIPTIONVARCHARconfiguration/entityTypes//attributes/OIGExclusion/attributes/OffenseDescriptionBUSINESS_DETAILReltio URI: 
configuration/entityTypes//attributes/BusinessDetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESS_DETAIL_URIVARCHARGenerated CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDETAILVARCHARDetailconfiguration/entityTypes//attributes/BusinessDetail/attributes/DetailHCOBusinessDetailGROUPVARCHARGroupconfiguration/entityTypes//attributes/BusinessDetail/attributes/GroupHCOBusinessDetailGroupDETAIL_VALUEVARCHARDetail Valueconfiguration/entityTypes//attributes/BusinessDetail/attributes/DetailValueDETAIL_COUNTVARCHARDetail Countconfiguration/entityTypes//attributes/BusinessDetail/attributes/DetailCountHINHINReltio URI: configuration/entityTypes//attributes/: CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeHINVARCHARHINconfiguration//attributes/HIN/attributes/HINTICKERReltio URI: configuration/entityTypes//attributes/TickerMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTICKER_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSYMBOLVARCHARconfiguration/entityTypes//attributes/Ticker/attributes/SymbolSTOCK_EXCHANGEVARCHARconfiguration/entityTypes//attributes/Ticker/attributes/StockExchangeTRADE_STYLE_NAMEReltio URI: configuration/entityTypes//attributes/TradeStyleNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTRADE_STYLE_NAME_URIVARCHARGenerated CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeORGANIZATION_NAMEVARCHARconfiguration/entityTypes//attributes/TradeStyleName/attributes/OrganizationNameLANGUAGE_CODEVARCHARconfiguration/entityTypes//attributes/TradeStyleName/attributes/LanguageCodeFORMER_ORGANIZATION_PRIMARY_NAMEVARCHARconfiguration//attributes/TradeStyleName/attributes/FormerOrganizationPrimaryNameDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes//attributes/TradeStyleName/attributes/DisplaySequenceTYPEVARCHARconfiguration/entityTypes//attributes/TradeStyleName/attributes/TypeHRIOR_DUNS_NUMBERReltio URI: configuration/entityTypes//attributes/PriorDUNSNUmberMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIOR_DUNS_NUMBER_URIVARCHARGenerated CodeACTIVEVARCHARActive TypeTRANSFER_DUNS_NUMBERVARCHARconfiguration//attributes/PriorDUNSNUmber/attributes/TransferDUNSNumberTRANSFER_REASON_TEXTVARCHARconfiguration/entityTypes//attributes/PriorDUNSNUmber/attributes/TransferReasonTextTRANSFER_REASON_CODEVARCHARconfiguration/entityTypes//attributes/PriorDUNSNUmber/attributes/TransferReasonCodeTRANSFER_DATEVARCHARconfiguration/entityTypes//attributes/PriorDUNSNUmber/attributes/TransferDateTRANSFERRED_FROM_DUNS_NUMBERVARCHARconfiguration//attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumberTRANSFERRED_TO_DUNS_NUMBERVARCHARconfiguration/entityTypes//attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumberINDUSTRY_CODEReltio URI: configuration/entityTypes//attributes/IndustryCodeMaterialized: CodeACTIVEVARCHARActive 
/entityTypes//attributes/IndustryCode/attributes/DNBCodeINDUSTRY_CODEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/IndustryCodeINDUSTRY_CODE_DESCRIPTIONVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/IndustryCodeDescriptionINDUSTRY_CODE_LANGUAGE_CODEVARCHARconfiguration//attributes/IndustryCode/attributes/IndustryCodeLanguageCodeINDUSTRY_CODE_WRITING_SCRIPTVARCHARconfiguration//attributes/IndustryCode/attributes/IndustryCodeWritingScriptDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/DisplaySequenceSALES_PERCENTAGEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/SalesPercentageTYPEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/TypeINDUSTRY_TYPE_CODEVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/IndustryTypeCodeIMPORT_EXPORT_AGENTVARCHARconfiguration/entityTypes//attributes/IndustryCode/attributes/ImportExportAgentACTIVITIES_AND_OPERATIONSReltio URI: configuration/entityTypes//attributes/ActivitiesAndOperationsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACTIVITIES_AND_OPERATIONS_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeLINE_OF_BUSINESS_DESCRIPTIONVARCHARconfiguration/entityTypes//attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescriptionLANGUAGE_CODEVARCHARconfiguration//attributes/ActivitiesAndOperations/attributes/LanguageCodeWRITING_SCRIPT_CODEVARCHARconfiguration//attributes/ActivitiesAndOperations/attributes/WritingScriptCodeIMPORT_INDICATORBOOLEANconfiguration/entityTypes//attributes/ActivitiesAndOperations/attributes/ImportIndicatorEXPORT_INDICATORBOOLEANconfiguration/entityTypes//attributes/ActivitiesAndOperations/attributes/ExportIndicatorAGENT_INDICATORBOOLEANconfiguration/entityTypes//attributes/ActivitiesAndOperations/attributes/AgentIndicatorEMPLOYEE_DETAILSReltio URI: configuration/entityTypes//attributes/EmployeeDetailsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMPLOYEE_DETAILS_URIVARCHARGenerated CodeACTIVEVARCHARActive Entity TypeINDIVIDUAL_EMPLOYEE_FIGURES_DATEVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDateINDIVIDUAL_TOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantityINDIVIDUAL_RELIABILITY_TEXTVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/IndividualReliabilityTextTOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/TotalEmployeeQuantityTOTAL_EMPLOYEE_RELIABILITYVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/TotalEmployeeReliabilityPRINCIPALS_INCLUDEDVARCHARconfiguration/entityTypes//attributes/EmployeeDetails/attributes/PrincipalsIncludedKEY_FINANCIAL_FIGURES_OVERVIEWReltio URI: configuration/entityTypes//attributes/KeyFinancialFiguresOverviewMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameKEY_FINANCIAL_FIGURES_OVERVIEW_URIVARCHARGenerated CodeACTIVEVARCHARActive 
TypeFINANCIAL_STATEMENT_TO_DATEDATEconfiguration//attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDateFINANCIAL_PERIOD_DURATIONVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDurationSALES_REVENUE_CURRENCYVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencySALES_REVENUE_CURRENCY_CODEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCodeSALES_REVENUE_RELIABILITY_CODEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCodeSALES_REVENUE_UNIT_OF_SIZEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSizeSALES_REVENUE_AMOUNTVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmountPROFIT_OR_LOSS_CURRENCYVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrencyPROFIT_OR_LOSS_RELIABILITY_TEXTVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityTextPROFIT_OR_LOSS_UNIT_OF_SIZEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSizePROFIT_OR_LOSS_AMOUNTVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmountSALES_TURNOVER_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRateSALES3YRY_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRateSALES5YRY_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRateEMPLOYEE3YRY_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRateEMPLOYEE5YRY_GROWTH_RATEVARCHARconfiguration/entityTypes//attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRateMATCH_QUALITYReltio URI: configuration/entityTypes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameMATCH_QUALITY_URIVARCHARGenerated CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCONFIDENCE_CODEVARCHARDnB Match Quality Confidence Codeconfiguration/entityTypes//attributes/MatchQuality/attributes//entityTypes//attributes/MatchQuality/attributes/DisplaySequenceMATCH_CODEVARCHARconfiguration/entityTypes//attributes/MatchQuality/attributes/MatchCodeBEMFABVARCHARconfiguration/entityTypes//attributes/MatchQuality/attributes/BEMFABMATCH_GRADEVARCHARconfiguration/entityTypes//attributes/MatchQuality/attributes/: configuration/entityTypes//attributes/OrganizationDetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameORGANIZATION_DETAIL_URIVARCHARGenerated CodeACTIVEVARCHARActive Entity 
TypeMEMBER_ROLEVARCHARconfiguration//attributes//attributes/MemberRoleSTANDALONEBOOLEANconfiguration//attributes//attributes/StandaloneCONTROL_OWNERSHIP_DATEDATEconfiguration/entityTypes//attributes//attributes/ControlOwnershipDateOPERATING_STATUSVARCHARconfiguration/entityTypes//attributes//attributes/OperatingStatusSTART_YEARVARCHARconfiguration/entityTypes//attributes//attributes/StartYearFRANCHISE_OPERATION_TYPEVARCHARconfiguration/entityTypes//attributes//attributes/FranchiseOperationTypeBONEYARD_ORGANIZATIONBOOLEANconfiguration/entityTypes//attributes//attributes/BoneyardOrganizationOPERATING_STATUS_COMMENTVARCHARconfiguration/entityTypes//attributes//attributes/OperatingStatusCommentDUNS_HIERARCHYReltio URI: configuration/entityTypes//attributes/DUNSHierarchyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDUNS_HIERARCHY_URIVARCHARGenerated CodeACTIVEVARCHARActive TypeGLOBAL_ULTIMATE_DUNSVARCHARconfiguration/entityTypes//attributes//attributes/GlobalUltimateDUNSGLOBAL_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes//attributes//attributes/GlobalUltimateOrganizationDOMESTIC_ULTIMATE_DUNSVARCHARconfiguration/entityTypes//attributes//attributes/DomesticUltimateDUNSDOMESTIC_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes//attributes//attributes/DomesticUltimateOrganizationPARENT_DUNSVARCHARconfiguration/entityTypes//attributes//attributes/ParentDUNSPARENT_ORGANIZATIONVARCHARconfiguration/entityTypes//attributes//attributes/ParentOrganizationHEADQUARTERS_DUNSVARCHARconfiguration/entityTypes//attributes//attributes/HeadquartersDUNSHEADQUARTERS_ORGANIZATIONVARCHARconfiguration/entityTypes//attributes//attributes/HeadquartersOrganizationMCOManaged Care OrganizationReltio URI: configuration/entityTypes/MCOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive TypeCOMPANY_CUST_IDVARCHARCOMPANY Customer IDconfiguration/entityTypes//attributes/COMPANYCustIDNAMEVARCHARNameconfiguration/entityTypes//attributes/NameTYPEVARCHARTypeconfiguration/entityTypes//attributes/TypeMCOTypeMANAGED_CARE_CHANNELVARCHARManaged Care Channelconfiguration/entityTypes//attributes/ManagedCareChannelMCOManagedCareChannelPLAN_MODEL_TYPEVARCHARPlanModelTypeconfiguration/entityTypes//attributes/PlanModelTypeMCOPlanModelTypeSUB_TYPEVARCHARSubTypeconfiguration/entityTypes//attributes/SubTypeMCOSubTypeSUB_TYPE2VARCHARSubType2configuration/entityTypes//attributes/SubType2SUB_TYPE3VARCHARSub Type 3configuration/entityTypes//attributes/SubType3NUM_LIVES_MEDICAREVARCHARMedicare Number of Livesconfiguration/entityTypes//attributes/NumLives_MedicareNUM_LIVES_MEDICALVARCHARMedical Number of Livesconfiguration/entityTypes//attributes/NumLives_MedicalNUM_LIVES_PHARMACYVARCHARPharmacy Number of Livesconfiguration/entityTypes//attributes/NumLives_PharmacyOPERATING_STATEVARCHARState Operating fromconfiguration/entityTypes//attributes/Operating_StateORIGINAL_SOURCE_NAMEVARCHAROriginal Source Nameconfiguration/entityTypes//attributes/OriginalSourceNameDISTRIBUTION_CHANNELVARCHARDistribution Channelconfiguration/entityTypes//attributes/DistributionChannelACCESS_LANDSCAPE_FORMULARY_CHANNELVARCHARAccess /entityTypes//attributes/AccessLandscapeFormularyChannelEFFECTIVE_START_DATEDATEEffective Start Dateconfiguration/entityTypes//attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEEffective End 
Dateconfiguration/entityTypes//attributes/EffectiveEndDateSTATUSVARCHARStatusconfiguration/entityTypes//attributes/StatusMCOStatusSOURCE_MATCH_CATEGORYVARCHARSource Match Categoryconfiguration/entityTypes//attributes/SourceMatchCategoryCOUNTRY_MCOVARCHARCountryconfiguration/entityTypes//attributes/CountryAFFILIATIONSReltio URI: configuration/relationTypes/FlextoDDDAffiliations, configuration/relationTypes/Ownership, configuration/relationTypes/PAYERtoPLAN, configuration/relationTypes/PBMVendortoMCO, configuration/relationTypes/, configuration/relationTypes/MCOtoPLAN, configuration/relationTypes/FlextoHCOSAffiliations, configuration/relationTypes/FlextoSAPAffiliations, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, configuration/relationTypes/HCOStoDDDAffiliations, configuration/relationTypes/EnterprisetoBOB, configuration/relationTypes/OtherHCOtoHCOAffiliations, configuration/relationTypes/ContactAffiliations, configuration/relationTypes/VAAffiliations, configuration/relationTypes/PBMtoPLAN, configuration/relationTypes/Purchasing, configuration/relationTypes/BOBtoMCO, configuration/relationTypes/DDDtoSAPAffiliations, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, configuration/relationTypes/, configuration/relationTypes/SAPtoHCOSAffiliationsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_URIVARCHARReltio Relation URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagRELATION_TYPEVARCHARReltio Relation TypeSTART_ENTITY_URIVARCHARReltio Start Entity URIEND_ENTITY_URIVARCHARReltio End Entity URISOURCEVARCHARconfiguration/relationTypes/FlextoDDDAffiliations/attributes/Source, configuration/relationTypes/Ownership/attributes/Source, configuration/relationTypes/PAYERtoPLAN/attributes/Source, configuration/relationTypes/PBMVendortoMCO/attributes/Source, configuration/relationTypes/ACOAffiliations/attributes/Source, configuration/relationTypes/MCOtoPLAN/attributes/Source, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Source, configuration/relationTypes/FlextoSAPAffiliations/attributes/Source, configuration/relationTypes/attributes/Source, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Source, configuration/relationTypes/EnterprisetoBOB/attributes/Source, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Source, configuration/relationTypes/ContactAffiliations/attributes/Source, configuration/relationTypes/VAAffiliations/attributes/Source, configuration/relationTypes/PBMtoPLAN/attributes/Source, configuration/relationTypes/Purchasing/attributes/Source, configuration/relationTypes/BOBtoMCO/attributes/Source, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Source, configuration/relationTypes/Distribution/attributes/Source, configuration/relationTypes/ProviderAffiliations/attributes/Source, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/SourceLINKED_BYVARCHARconfiguration/relationTypes/FlextoDDDAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoHCOSAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoSAPAffiliations/attributes/LinkedBy, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/LinkedByCOUNTRY_AFFILIATIONSVARCHARconfiguration/relationTypes/FlextoDDDAffiliations/attributes/Country, configuration/relationTypes/Ownership/attributes/Country, configuration/relationTypes/PAYERtoPLAN/attributes/Country, configuration/relationTypes/PBMVendortoMCO/attributes/Country, configuration/relationTypes/ACOAffiliations/attributes/Country, 
configuration/relationTypes/MCOtoPLAN/attributes/Country, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Country, configuration/relationTypes/FlextoSAPAffiliations/attributes/Country, configuration/relationTypes/attributes/Country, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Country, configuration/relationTypes/EnterprisetoBOB/attributes/Country, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Country, configuration/relationTypes/ContactAffiliations/attributes/Country, configuration/relationTypes/VAAffiliations/attributes/Country, configuration/relationTypes/PBMtoPLAN/attributes/Country, configuration/relationTypes/Purchasing/attributes/Country, configuration/relationTypes/BOBtoMCO/attributes/Country, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Country, configuration/relationTypes/Distribution/attributes/Country, configuration/relationTypes/ProviderAffiliations/attributes/Country, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/CountryAFFILIATION_TYPEVARCHARconfiguration/relationTypes/PAYERtoPLAN/attributes/, configuration/relationTypes/PBMVendortoMCO/attributes/, configuration/relationTypes/MCOtoPLAN/attributes/, configuration/relationTypes/attributes/, configuration/relationTypes/EnterprisetoBOB/attributes/, configuration/relationTypes/VAAffiliations/attributes/, configuration/relationTypes/PBMtoPLAN/attributes/, configuration/relationTypes/BOBtoMCO/attributes/AffiliationTypePBM_AFFILIATION_TYPEVARCHARconfiguration/relationTypes/PAYERtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/attributes/PBMAffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/PBMAffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/BOBtoMCO/attributes/PBMAffiliationTypePLAN_MODEL_TYPEVARCHARconfiguration/relationTypes/PAYERtoPLAN/attributes/PlanModelType, configuration/relationTypes/PBMVendortoMCO/attributes/PlanModelType, configuration/relationTypes/MCOtoPLAN/attributes/PlanModelType, configuration/relationTypes/attributes/PlanModelType, configuration/relationTypes/EnterprisetoBOB/attributes/PlanModelType, configuration/relationTypes/PBMtoPLAN/attributes/PlanModelType, configuration/relationTypes/BOBtoMCO/attributes/PlanModelTypeMCOPlanModelTypeMANAGED_CARE_CHANNELVARCHARconfiguration/relationTypes/PAYERtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/PBMVendortoMCO/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/attributes/ManagedCareChannel, configuration/relationTypes/EnterprisetoBOB/attributes/ManagedCareChannel, configuration/relationTypes/PBMtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/BOBtoMCO/attributes/ManagedCareChannelMCOManagedCareChannelEFFECTIVE_START_DATEDATEconfiguration/relationTypes/MCOtoPLAN/attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEconfiguration/relationTypes/MCOtoPLAN/attributes/EffectiveEndDateSTATUSVARCHARconfiguration/relationTypes/VAAffiliations/attributes/StatusAFFIL_RELATION_TYPEReltio URI: configuration/relationTypes/Ownership/attributes/, configuration/relationTypes/ACOAffiliations/attributes/, configuration/relationTypes/HCOStoDDDAffiliations/attributes/, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/, 
configuration/relationTypes/ContactAffiliations/attributes/, configuration/relationTypes/Purchasing/attributes/, configuration/relationTypes/DDDtoSAPAffiliations/attributes/, configuration/relationTypes/Distribution/attributes/, configuration/relationTypes/ProviderAffiliations/attributes/RelationTypeMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_TYPE_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIRELATIONSHIP_GROUP_OWNERSHIPVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_OWNERSHIPVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_ORDERVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/RelationshipOrder, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/RelationshipOrder, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/RelationshipOrder, configuration/relationTypes/Purchasing/attributes//attributes/RelationshipOrder, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/RelationshipOrder, configuration/relationTypes/Distribution/attributes//attributes/RelationshipOrderRANKVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/Rank, configuration/relationTypes/ACOAffiliations/attributes//attributes/Rank, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/Rank, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/Rank, configuration/relationTypes/ContactAffiliations/attributes//attributes/Rank, configuration/relationTypes/Purchasing/attributes//attributes/Rank, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/Rank, configuration/relationTypes/Distribution/attributes//attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes//attributes/RankAMA_HOSPITAL_IDVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/, configuration/relationTypes/Purchasing/attributes//attributes/, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/, configuration/relationTypes/Distribution/attributes//attributes/AMAHospitalIDAMA_HOSPITAL_HOURSVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/AMAHospitalHours, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/AMAHospitalHours, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/AMAHospitalHours, configuration/relationTypes/Purchasing/attributes//attributes/AMAHospitalHours, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/AMAHospitalHours, configuration/relationTypes/Distribution/attributes//attributes/AMAHospitalHoursEFFECTIVE_START_DATEDATEconfiguration/relationTypes/Ownership/attributes//attributes/EffectiveStartDate, configuration/relationTypes/ACOAffiliations/attributes//attributes/EffectiveStartDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/EffectiveStartDate, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/EffectiveStartDate, configuration/relationTypes/ContactAffiliations/attributes//attributes/EffectiveStartDate, configuration/relationTypes/Purchasing/attributes//attributes/EffectiveStartDate, 
configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/EffectiveStartDate, configuration/relationTypes/Distribution/attributes//attributes/EffectiveStartDate, configuration/relationTypes/ProviderAffiliations/attributes//attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEconfiguration/relationTypes/Ownership/attributes//attributes/EffectiveEndDate, configuration/relationTypes/ACOAffiliations/attributes//attributes/EffectiveEndDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/EffectiveEndDate, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/EffectiveEndDate, configuration/relationTypes/ContactAffiliations/attributes//attributes/EffectiveEndDate, configuration/relationTypes/Purchasing/attributes//attributes/EffectiveEndDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/EffectiveEndDate, configuration/relationTypes/Distribution/attributes//attributes/EffectiveEndDate, configuration/relationTypes/ProviderAffiliations/attributes//attributes/EffectiveEndDateACTIVE_FLAGBOOLEANconfiguration/relationTypes/Ownership/attributes//attributes/, configuration/relationTypes/ACOAffiliations/attributes//attributes/, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/, configuration/relationTypes/ContactAffiliations/attributes//attributes/, configuration/relationTypes/Purchasing/attributes//attributes/, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/, configuration/relationTypes/Distribution/attributes//attributes/, configuration/relationTypes/ProviderAffiliations/attributes//attributes/ActiveFlagPRIMARY_AFFILIATIONVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/PrimaryAffiliation, configuration/relationTypes/ACOAffiliations/attributes//attributes/PrimaryAffiliation, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/PrimaryAffiliation, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/PrimaryAffiliation, configuration/relationTypes/ContactAffiliations/attributes//attributes/PrimaryAffiliation, configuration/relationTypes/Purchasing/attributes//attributes/PrimaryAffiliation, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/PrimaryAffiliation, configuration/relationTypes/Distribution/attributes//attributes/PrimaryAffiliation, configuration/relationTypes/ProviderAffiliations/attributes//attributes/PrimaryAffiliationAFFILIATION_CONFIDENCE_CODEVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/AffiliationConfidenceCode, configuration/relationTypes/ACOAffiliations/attributes//attributes/AffiliationConfidenceCode, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/AffiliationConfidenceCode, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/AffiliationConfidenceCode, configuration/relationTypes/ContactAffiliations/attributes//attributes/AffiliationConfidenceCode, configuration/relationTypes/Purchasing/attributes//attributes/AffiliationConfidenceCode, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/AffiliationConfidenceCode, configuration/relationTypes/Distribution/attributes//attributes/AffiliationConfidenceCode, 
configuration/relationTypes/ProviderAffiliations/attributes//attributes/AffiliationConfidenceCodeRELATIONSHIP_GROUP_ACOAFFILIATIONSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes//attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_ACOAFFILIATIONSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes//attributes/RelationshipDescriptionHCPRelationshipDescriptionRELATIONSHIP_STATUS_CODEVARCHARconfiguration/relationTypes/ACOAffiliations/attributes//attributes/RelationshipStatusCode, configuration/relationTypes/ContactAffiliations/attributes//attributes/RelationshipStatusCode, configuration/relationTypes/ProviderAffiliations/attributes//attributes/RelationshipStatusCodeHCPtoHCORelationshipStatusRELATIONSHIP_STATUS_REASON_CODEVARCHARconfiguration/relationTypes/ACOAffiliations/attributes//attributes/RelationshipStatusReasonCode, configuration/relationTypes/ContactAffiliations/attributes//attributes/RelationshipStatusReasonCode, configuration/relationTypes/ProviderAffiliations/attributes//attributes/RelationshipStatusReasonCodeHCPtoHCORelationshipStatusReasonCodeWORKING_STATUSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes//attributes/, configuration/relationTypes/ContactAffiliations/attributes//attributes/, configuration/relationTypes/ProviderAffiliations/attributes//attributes/WorkingStatusWorkingStatusRELATIONSHIP_GROUP_HCOSTODDDAFFILIATIONSVARCHARconfiguration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_HCOSTODDDAFFILIATIONSVARCHARconfiguration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_OTHERHCOTOHCOAFFILIATIONSVARCHARconfiguration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_OTHERHCOTOHCOAFFILIATIONSVARCHARconfiguration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_CONTACTAFFILIATIONSVARCHARconfiguration/relationTypes/ContactAffiliations/attributes//attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_CONTACTAFFILIATIONSVARCHARconfiguration/relationTypes/ContactAffiliations/attributes//attributes/RelationshipDescriptionHCPRelationshipDescriptionRELATIONSHIP_GROUP_PURCHASINGVARCHARconfiguration/relationTypes/Purchasing/attributes//attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_PURCHASINGVARCHARconfiguration/relationTypes/Purchasing/attributes//attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_DDDTOSAPAFFILIATIONSVARCHARconfiguration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_DDDTOSAPAFFILIATIONSVARCHARconfiguration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_DISTRIBUTIONVARCHARconfiguration/relationTypes/Distribution/attributes//attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_DISTRIBUTIONVARCHARconfiguration/relationTypes/Distribution/attributes//attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_PROVIDERAFFILIATIONSVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes//attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_PROVIDERAFFILIATIONSVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes//attributes/RelationshipDescriptionHCPRelationshi
pDescriptionAFFIL_ACOReltio URI: configuration/relationTypes/Ownership/attributes/, configuration/relationTypes/ACOAffiliations/attributes/, configuration/relationTypes/HCOStoDDDAffiliations/attributes/, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/, configuration/relationTypes/ContactAffiliations/attributes/, configuration/relationTypes/Purchasing/attributes/, configuration/relationTypes/DDDtoSAPAffiliations/attributes/, configuration/relationTypes/Distribution/attributes/, configuration/relationTypes/ProviderAffiliations/attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameACO_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIACO_TYPEVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/ACOType, configuration/relationTypes/ACOAffiliations/attributes//attributes/ACOType, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/ACOType, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/ACOType, configuration/relationTypes/ContactAffiliations/attributes//attributes/ACOType, configuration/relationTypes/Purchasing/attributes//attributes/ACOType, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/ACOType, configuration/relationTypes/Distribution/attributes//attributes/ACOType, configuration/relationTypes/ProviderAffiliations/attributes//attributes/ACOTypeHCOACOTypeACO_TYPE_CATEGORYVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/ACOTypeCategory, configuration/relationTypes/ACOAffiliations/attributes//attributes/ACOTypeCategory, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/ACOTypeCategory, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/ACOTypeCategory, configuration/relationTypes/ContactAffiliations/attributes//attributes/ACOTypeCategory, configuration/relationTypes/Purchasing/attributes//attributes/ACOTypeCategory, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/ACOTypeCategory, configuration/relationTypes/Distribution/attributes//attributes/ACOTypeCategory, configuration/relationTypes/ProviderAffiliations/attributes//attributes/ACOTypeCategoryHCOACOTypeCategoryACO_TYPE_GROUPVARCHARconfiguration/relationTypes/Ownership/attributes//attributes/ACOTypeGroup, configuration/relationTypes/ACOAffiliations/attributes//attributes/ACOTypeGroup, configuration/relationTypes/HCOStoDDDAffiliations/attributes//attributes/ACOTypeGroup, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes//attributes/ACOTypeGroup, configuration/relationTypes/ContactAffiliations/attributes//attributes/ACOTypeGroup, configuration/relationTypes/Purchasing/attributes//attributes/ACOTypeGroup, configuration/relationTypes/DDDtoSAPAffiliations/attributes//attributes/ACOTypeGroup, configuration/relationTypes/Distribution/attributes//attributes/ACOTypeGroup, configuration/relationTypes/ProviderAffiliations/attributes//attributes/ACOTypeGroupHCOACOTypeGroupAFFIL_RELATION_TYPE_ROLEReltio URI: configuration/relationTypes/ACOAffiliations/attributes//attributes/Role, configuration/relationTypes/ContactAffiliations/attributes//attributes/Role, configuration/relationTypes/ProviderAffiliations/attributes//attributes/: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_TYPE_URIVARCHARGenerated /relationTypes/ACOAffiliations/attributes//attributes/Role/attributes/Role, configuration/relationTypes/ContactAffiliations/attributes//attributes/Role/attributes/Role, 
configuration/relationTypes/ProviderAffiliations/attributes//attributes/Role/attributes/RoleRoleTypeRANKVARCHARconfiguration/relationTypes/ACOAffiliations/attributes//attributes/Role/attributes/Rank, configuration/relationTypes/ContactAffiliations/attributes//attributes/Role/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes//attributes/Role/attributes/RankAFFIL_USAGE_TAGReltio URI: configuration/relationTypes/ProviderAffiliations/attributes/UsageTagMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameUSAGE_TAG_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIUSAGE_TAGVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes//attributes/UsageTag" }, { "title": "CUSTOMER_SL schema", "": "", "pageLink": "/display//CUSTOMER_SL+schema", "content": "The schema plays the role of access layer for clients reading data. It includes a set of views that are directly inherited from CUSTOMER have the same structure as views in CUSTOMER schemat. To learn about view definitions please see CUSTOMER schema. In regional data marts, the schema views have MDM prefix. In CUSTOMER_SL schema in views are prefixed with 'P'  for COMPANY Reltio Model,'I' for model, and 'P_HI' for data for speed up access, most views are being materialized to physical tables. The process is transparent to users. Access views are being switched to physical tables automatically if they are available.  The refresh process is incremental and connected with the loading process. " }, { "title": "LANDING schema", "": "", "pageLink": "/display//LANDING+schema", "content": "LANDING schema plays a role of the staging database for publishing   data from tenants table for events published through lumnTypeDescriptionRECORD_METADATAVARIANTMetadata of event like key, topic, partition, create timeRECORD_CONTENTVARIANTEvent payloadLOV_DATATarget table for LOV data publish ColumnTypeDescription IDTEXTLOV object idOBJECTVARIANTRelto RDM json objectMERGE_TREE_DATATarget table for merge_tree exports from ReltioColumnTypeDescription  file pathOBJECTVARIANTRelto json objectHI_DATATarget table for ad-hoc historical inactive dataColumnTypeDescription OBJECTVARIANTHistorical Inactive json object" }, { "title": "PTE_SL", "": "", "pageLink": "/display/GMDM/PTE_SL", "content": "The schema plays the role of access layer for Clients reading data required for reports. It mimics its structure and logic. To make a connection to the PTE_SL schema you need to have a proper role assigned:COMM_GBL_MDM_DMART_DEV_PTE_ROLECOMM_GBL_MDM_DMART_QA_PTE_ROLECOMM_GBL_MDM_DMART_STG_PTE_ROLECOMM_GBL_MDM_DMART_PROD_PTE_ROLEthat are connected with groups:sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_PTE_ROLE\nsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_PTE_ROLE\nsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_PTE_ROLE\nsfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLEInformation how to request for an acces is described here: Snowflake - connection guidSnowflake path to the client report: "COMM_GBL_MDM_DMART_PROD_DB"."PTE_SL"."PTE_REPORT"General assumptions for view creation:The views integrate both data models COMPANY and IQIVIA via a Union function. Meaning that they're calculated separately and then joined together.  
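A minimal sketch of that union, for orientation only: the sub-select names below are illustrative (they are not actual object names), and the real column-by-column derivation is documented in the PTE_SL IQVIA MODEL and PTE_SL COMPANY MODEL pages.
-- Hedged sketch of the PTE_REPORT union described above; whether UNION or UNION ALL
-- is used, and the names of the two model-specific sub-selects, are assumptions.
CREATE OR REPLACE VIEW PTE_SL.PTE_REPORT AS
SELECT hcp_id, hco_id, first_name, last_name      -- plus the remaining report columns
FROM   pte_report_iqvia_model                     -- branch calculated from the I_ views
UNION
SELECT hcp_id, hco_id, first_name, last_name
FROM   pte_report_company_model;                  -- branch calculated from the P_ views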
The lang_code from the code translations is always 'en'In case the hcp identifiers aren't provided by the client there is an option to calculate them dynamically by the number of HCPs having the identifier.Driven tables:DRIVEN_TABLE1This is a view selecting data from the country_config table for countries that need to be added to the PTE_REPORTColumn nameDescriptionISO_CODEISO2 code of the countryNAMECountry nameLABELCountry label (name + iso_code)RELTIO_TENANTEither 'IQVIA' or the region of the tenant (...)HUB_TENANTIndicator of the HUB database the date comes fromSF_INSTANCEName of the instance the data comes from (-west-1...)SF_TENANTDATABASEFull database name form which the data comes fromCUSTOMERSL_PREFIXeither 'i_' for the IQVIA data model or 'p_' for the COMPANY data modelDRIVEN_TABLEV2 / DRIVEN_TABLE2_STATICDRIVEN_TABLEV2 is a view used to get the HCP identifiers and sort them by the count of HCPs that have the identifier. DRIVEN_TABLE2_STATIC is a table containing the list of identifiers used per country and the order in which they're placed in the PTE_REPORT view. If the country isn't available in DRIVEN_TABLE2_STATIC the report will use DRIVEN_TABLEV2 to get them calculated dynamically every time the report is lumn nameDescriptionISO_CDOEISO2 code of the countryCANONICAL_CODECanonical code of the description in EnglishCODE_IDCode idMODELeither 'i' for the IQVIA data model or 'p' for the COMPANY data modelORDER_IDOrder in which the identifier will be available in the PTE_REPORT view. Only identifiers from 1 to 5 will be used.DRIVEN_TABLE3Specialty dictionary provided by the client for the IQVIA data model only. Used for calculating the data.'IS PRESCRIBER' calculation method for IQIVIA modelThe path to the dictionary files on : -baiaes-eu--project/mdm/config/PTE_DictionariesColumn nameDescriptionCOUNTRY_CODEISO2 code of the nameMDM_CODECode idCANONICAL_CODECanonical code of the identifierLONG_DESCRIPTIONCode description in the specialty is a prescriber or not PTE_REPORT:The PTE_REPORT is the view from which the clients should get their data. It's an UNION of the reports for the IQVIA data model and the COMPANY data model. Calculation detail may be found in the respective articles:IQVIA: PTE_SL IQVIA MODELCOMPANY: PTE_SL COMPANY MODEL" }, { "title": "Data Sourcing", "": "", "pageLink": "/display//Data+Sourcing", "content": " CodeMDM and PolynesiaPFEMEACOMPANYPTE_REPORTFrench GuianaGFEMEACOMPANYPTE_REPORTWallis and FutunaWFEMEACOMPANYPTE_REPORTGuadeloupeGPEMEACOMPANYPTE_REPORTNew CaledoniaNCEMEACOMPANYPTE_REPORTMartiniqueMQEMEACOMPANYPTE_REPORTMauritiusMUEMEACOMPANYPTE_REPORTMonacoMCEMEACOMPANYPTE_REPORTAndorraADEMEACOMPANYPTE_REPORTTurkeyTREMEACOMPANYPTE_REPORT_TRSouth KoreaKRAPACCOMPANYPTE_REPORT_KRAll views are available in the global database in the PTE_SL schema." 
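For orientation, a hedged example of reading the report with the role and Snowflake path given above. The COUNTRY_CODE filter column is an assumption taken from the model specification and may differ in the actual view.
-- Requires one of the PTE roles listed above (PROD shown); warehouse selection is omitted.
USE ROLE COMM_GBL_MDM_DMART_PROD_PTE_ROLE;
SELECT *
FROM "COMM_GBL_MDM_DMART_PROD_DB"."PTE_SL"."PTE_REPORT"
WHERE COUNTRY_CODE = 'MC';   -- Monaco is sourced from the generic PTE_REPORT view
-- Turkey and South Korea are exposed through dedicated views instead:
SELECT * FROM "COMM_GBL_MDM_DMART_PROD_DB"."PTE_SL"."PTE_REPORT_TR";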
}, { "title": "PTE_SL IQVIA MODEL", "": "", "pageLink": "/display/GMDM/PTE_SL+IQVIA+MODEL", "content": " data model specification:name typedescription Reltio attribute URILOV Name additional querry conditions ( model)additional querry conditions (COMPANY model)HCP_IDVARCHARReltio Entity URIi_hcp.entity_uri or i_art_entity_urionly active hcp are returned (customer_sl.i_tive =' or i_art_entity_urionly active hcp are returnedHCO_IDVARCHARReltio Entity URIFor the model, all affiliation with i_tive = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned.i_hco.entity_uri select END_ENTITY_URI from customer_sl.i_affiliations where start_entity_uri ='T9u7Ej4'and active = 'TRUE'and relation_type in ('Activity','HasHealthCareRole') ;select * from customer_sl.p_affiliations where active=TRUE and relation_type = 'ContactAffiliations';WORKPLACE_NAMEVARCHARReltio workplace name or reltio workplace parent nfiguration/entityTypes/HCO/attributes/NameFor the model, all affiliation with i_tive = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned.i_ must be returnedselect from customer_sl.i_affiliations a,customer_sl.i_hco hcowhere a.end_entity_uri = hco.entity_uri and art_entity_uri ='T9u7Ej4'and tive = 'TRUE'and in ('Activity','HasHealthCareRole') ;For the COMPANY model, all affiliation with p_tive=TRUE and relation_type = 'ContactAffiliations'i_STATUSBOOLEANReltio Entity statusi_customer_sl.i_tivemapping rule TRUE = ACTIVEi_customer_sl.p_tivemapping rule TRUE = ACTIVELAST_MODIFICATION_DATETIMESAMP_LTZEntity update time in /entityTypes/HCP/updateTimecustomer_sl.i_entity_update_dates.SF_UPDATE_TIMEi_customer_sl.p_entity_update.SF_UPDATE_TIMEFIRST_NAMEVARCHARconfiguration/entityTypes//attributes/FirstNamei_customer_sl.i_rst_namei_customer_sl.p_rst_nameLAST_NAMEVARCHARconfiguration/entityTypes//attributes/LastNamei_customer_sl.i_st_namei_customer_sl.p_st_nameTITLE_CODEVARCHARconfiguration/entityTypes//attributes/TitleLOV Name COMPANY = HCPTitleLOV Name IQIVIA = LKUP_IMS_PROF_TITLEselect  nonical_code  from customer_sl.i_hcp hcp,customer_sl.i_codetranslations cwhere hcp.title_lkp = de_lect nonical_code fromcustomer_sl.i_hcp hcp,customer_sl.i_code_translations cwherehcp.title_lkp = de_idand hcp.entity_uri='T9u7Ej4'and untry='FR';select nonical_code from customer_sl.p_hcp hcp,customer_sl.p_codes cwhere hcp.title_lkp = de_idTITLE_DESCVARCHARconfiguration/entityTypes//attributes/TitleLOV Name COMPANY = THCPTitleLOV Name IQIVIA = LKUP_IMS_PROF_TITLEselect  ng_desc  from customer_sl.i_hcp hcp,customer_sl.i_code_translations cwhere hcp.title_lkp = de_lect ng_desc fromcustomer_sl.i_hcp hcp,customer_sl.i_code_translations cwherehcp.title_lkp = de_idand hcp.entity_uri='T9u7Ej4'and untry='FR';select sc from customer_sl.p_hcp hcp,customer_sl.p_codes cwhere hcp.title_lkp = de_idIS_PRESCRIBER'IS PRESCRIBER' calculation method for modelCASEWhen p_hcp.TYPE_CODE_LKP = 'HCPType:ES' then YCASEWhen p_hcp. 
= 'HCPType:RS' then NELSETo define                                                 codeconfiguration/entityTypes/Location/attributes/countrycustomer_sl.i_untrycustomer_sl.p_untryPRIMARY_ADDRESS_LINE_1IQIVIA: configuration/entityTypes/Location/attributes/AddressLine1COMPANY: configuration/entityTypes//attributes/Addresses/attributes/AddressLine1select address_line1 from customer_sl.i_address where address_rank=1select address_line1 from customer_sl.i_address where and from customer_sl.p_addresses a where =1PRIMARY_ADDRESS_LINE_2IQIVIA: configuration/entityTypes/Location/attributes/AddressLine2COMPANY: configuration/entityTypes//attributes/Addresses/attributes/AddressLine2select address_line2 from customer_sl.i_address where address_rank=1select a. address_line2 from customer_sl.p_addresses a where =1PRIMARY_ADDRESS_CITYIQIVIA: configuration/entityTypes/Location/attributes/CityCOMPANY: configuration/entityTypes//attributes/Addresses/attributes/Cityselect cityfrom customer_sl.i_address where address_rank=1select from customer_sl.p_addresses a where =1PRIMARY_ADDRESS_POSTAL_CODEIQIVIA: configuration/entityTypes/Location/attributes/Zip/attributes/: configuration/entityTypes//attributes/Addresses/attributes/Zip5select ZIP5 from customer_sl.i_address where address_rank=1select a.ZIP5 from customer_sl.p_addresses a where =1PRIMARY_ADDRESS_STATEIQIVIA: configuration/entityTypes/Location/attributes/: configuration/entityTypes//attributes/Addresses/attributes/StateProvinceLOV Name COMPANY = from customer_sl.i_address where address_rank=1select sc fromcustomer_sl.p_codes c,customer_sl.p_addresses awhere dress_rank=ATE_PROVINCE_LKP = de_id PRIMARY_ADDR_STATUSIQIVIA: configuration/entityTypes/Location/attributes/VerificationStatusCOMPANY: configuration/entityTypes//attributes/Addresses/attributes/VerificationStatuscustomer_sl.i_rification_statuscustomer_sl.p_rification_statusPRIMARY_SPECIALTY_CODEconfiguration/entityTypes//attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = HCPSpecialtyLOV Name IQIVIA =LKUP_IMS_lect nonical_code from customer_sl.i_specialities s,customer_sl.i_code_translations cwhere s.specialty_lkp = de_idand s.entity_uri ='T9liLpi'and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:' and ng_code = 'en'and untry = 'FR';select nonical_code from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =de_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the environment that parameter always has a NULL value. PRIMARY_SPECIALTY_DESCconfiguration/entityTypes//attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = LKUP_IMS_SPECIALTYLOV Name IQIVIA =LKUP_IMS_elect  ng_desc from customer_sl.i_specialities s,customer_sl.i_code_translations cwhere s.specialty_lkp = de_idand s.entity_uri ='T9liLpi'and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:' and ng_code = 'en'and untry = 'FR';select sc from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =de_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the environment that parameter always has a NULL value. 
GO_STATUSVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/GOStatusgo_status <> ''CASEWhen i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' then YesCASEWhen i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' then NoELSENULLgo_status <> ''CASEWhen p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' then YCASEWhen p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' then NELSE Not defined(now this is an empty tabel)IDENTIFIER1_CODEVARCHARReltio identyfier nfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect nonical_code from customer_sl.i_code_translations ct,customer_sl.i_identifiers de_id = d.TYPE_LKPThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. This is a place for the first lect nonical_code, ng_desc, , ct.*,d.* from customer_sl.i_code_translations ct,customer_sl.i_identifiers de_id = d.TYPE_LKPand  untry ='FR';select nonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers = d.TYPE_LKPThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. This is a place for the first ENTIFIER1_CODE_DESCVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/Typeselect ng_desc from customer_sl.i_code_translations ct,customer_sl.i_identifiers de_id = d.TYPE_LKPselect sc from customer_sl.p_codes ct,customer_sl.p_identifiers = d.TYPE_LKPIDENTIFIER1_VALUEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/IDselect id from customer_sl.i_ select id from customer_sl.p_identifiersIDENTIFIER2_CODEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/ nonical_code from customer_sl.i_code_translations ct,customer_sl.i_identifiers de_id = TYPE_LKPMaximum two identyfiers can be returnedThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. This is a place for the second lect nonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers = TYPE_LKPMaximum two identifiers can be returnedThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. 
This is a place for the second ENTIFIER2_CODE_DESCVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/Typeselect ng_desc from customer_sl.i_code_translations ct,customer_sl.i_identifiers de_id = d.TYPE_LKPselect sc from customer_sl.p_codes ct,customer_sl.p_identifiers = d.TYPE_LKPIDENTIFIER2_VALUEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/ from customer_sl.i_select id from customer_sl.p_identifiersDGSCATEGORYVARCHARIQIVIA: configuration/entityTypes//attributes/Disclosure/attributes/DGSCategoryCOMPANY: configuration/entityTypes//attributes/DisclosureBenefitCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect ng_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure de_id = d.dgs_category_lkpselect DisclosureBenefitCategory from p_hcpDGSCATEGORY_CODEVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect nonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure de_id = d.dgs_category_lkpcomment: select i_nonical_code for a valu returned from DisclosureBenefitCategory DGSTITLEVARCHARIQIVIA: configuration/entityTypes//attributes/Disclosure/attributes/DGSTitleCOMPANY: configuration/entityTypes//attributes/DisclosureBenefitTitleLKUP_BENEFITTITLEselect ng_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure de_id = d.DGS_TITLE_LKPselect DisclosureBenefitTitle from p_hcpDGSTITLE_CODEVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLEselect nonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure de_id = DGS_TITLE_LKPcomment: select i_nonical_code for a valu returned from DisclosureBenefitTitle DGSQUALITYVARCHARIQIVIA: configuration/entityTypes//attributes/Disclosure/attributes/DGSQualityCOMPANY: configuration/entityTypes//attributes/DisclosureBenefitQualityLKUP_BENEFITQUALITYselect ng_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure de_id = DisclosureBenefitQuality from p_hcpDGSQUALITY_CODEVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSQualityLKUP_BENEFITQUALITYselect nonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure de_id = DGS_QUALITY_LKPcomment: select i_nonical_code for a valu returned from  DGSSPECIALTYVARCHARIQIVIA: configuration/entityTypes//attributes/Disclosure/attributes/: configuration/entityTypes//attributes/DisclosureBenefitSpecialtyLKUP_BENEFITSPECIALTYselect ng_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure de_id = /entityTypes//attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYselect canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure de_id = DGS_SPECIALTY_LKPcomment: select i_nonical_code for a valu returned from query should return values like:select from "COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_SPECIALITIES" s,"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_CODE_TRANSLATIONS" cwhere SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC'and ='en' ← lang code ='PH' ← country conditionand s.ENTITY_URI ='ENTITI_URI'; ← entity uri conditionEMAILVARCHARA query should return values like:select EMAIL from "COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_EMAIL" where rank= 1 and entity_uri ='ENTITI_URI';  ← entity uri conditionCAUTION: In case when multiple values are returned, the first one must be returned as a query query should return values like:select FORMATTED_NUMBER from 
"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_PHONE" where RANK=1 and entity_uri ='ENTITI_URI'; entity uri conditionCAUTION: In case when multiple values are returned, the first one must be returned as a query result." }, { "title": "'IS PRESCRIBER' calculation method for model", "": "", "pageLink": "/display/GMDM/%27IS+PRESCRIBER%27+calculation+method+for+IQIVIA+model", "content": "Parameters contains in model: xml parameter name in calculation metode.g. value from modelcustomer_sl.i_hcp.type_code_lkp fessional_type_cdi_hcp.type_code_lkp LKUP_IMS_HCP_CUST_TYPE:PRESselect nonical_code from ,customer_sl.i_codes B_TYPE_CODE_LKP = de_id fessional_subtype_cdprof_subtype_elect nonical_code from customer_sl.i_specialities s,customer_sl.i_codes cwheres.specialty_lkp = de_id and s.rank=1 and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:' and rents='SPEC'spec.specialty_codespec_customer_sl.i_untryi_untryFRDictionaries parameters:profesion_type_subtype.csv as dict_subtypesprofesion_type_subtype_fr.csv as dict_subtypesprofessions_type_subtype.xlsxxmlvalue from file to calculate viewe.g. value to calculate viewmdm_codedict_m_codecanonical_codeWAR.TYP.Aprofessional_typedict_fessional_typeprofessional_typeNon-Prescriber, Prescribercountry_codedict_untry_codecountry_codeFRprofesion_type_speciality.csv as dict_specialtiesprofesion_type_speciality_fr.csv as from file to calculate viewe.g. value to calculate , Prescribercountry_codedict_untry_codecountry_codeFRIn a new PTE_SL view the files mentions above are migrated to driven_tabel3. So in a method description, there is an extra condition that matches a dependence with profession subtype or thod description:Query condition: driven_untry_code = i_untry and driven_nonical_code = prof_subtype_code and driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE'driven_untry_code = i_untry and driven_nonical_code = spec_code and driven_tabel3.header_name='LKUP_IMS_SPECIALTY'CASE         WHEN i_hcp.type_code_lkp ='LKUP_IMS_HCP_CUST_TYPE:PRES' THEN 'Y'         WHEN    coalesce(prof_subtype_code,spec_code,'') = '' THEN 'N'         WHEN    coalesce(prof_subtype_code,'') <> '' THEN                    CASE                             WHEN coalesce(driven_nonical_code,'') = '' THEN 'N@1'                             –- for driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE', this is a profession subtype checking condition                             WHEN coalesce(driven_nonical_code,'') <> '' THEN                                      –- for driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE', this is a profession subtype checking condition                                        CASE                                                 WHEN driven_fessional_type = 'Prescriber' THEN 'Y'              –- for driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE', this is a profession subtype checking condition                                                 WHEN driven_fessional_type = 'Non-Prescriber' THEN 'N'     –- for driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE', this is a profession subtype checking condition                                                 ELSE 'N@2'                                        END                     END          WHEN    coalesce(spec_code,'') <> '' THEN                     CASE                              WHEN coalesce(driven_nonical_code,'') = '' THEN 'N@3'                                –- for driven_tabel3.header_name = '', this is a specialty checking condition                              WHEN coalesce(driven_nonical_code,'') <> '' THEN          
                              –- for driven_tabel3.header_name = '', this is a specialty checking condition                                         CASE                                                  WHEN driven_fessional_type = 'Prescriber' THEN 'Y'                 –- for driven_tabel3.header_name = '', this is a specialty checking condition                                                  WHEN driven_fessional_type = 'Non-Prescriber' THEN 'N'        –- for driven_tabel3.header_name = '', this is a specialty checking condition                                                  ELSE 'N@4'                                          END                     END           ELSE 'N@99'END AS IS_PRESCRIBER" }, { "title": "PTE_SL COMPANY MODEL", "": "", "pageLink": "/display/GMDM/PTE_SL+COMPANY+MODEL", "content": "COMPANY data model specification:name typedescription Reltio attribute URILOV Name additional querry conditions (COMPANY model)HCP_IDVARCHARReltio Entity URIi_hcp.entity_uri or i_art_entity_urionly active hcp are returned (customer_sl.i_tive ='TRUE')HCO_IDVARCHARReltio Entity URISELECT HCO.ENTITY_URIFROM CUSTOMER_SL.P_HCP HCPINNER JOIN CUSTOMER_SL.P_AFFILIATIONS     ON HCP.ENTITY_URI= ART_ENTITY_URIINNER JOIN CUSTOMER_SL.P_HCO     = HCO.ENTITY_URIWHERE lation_type = 'ContactAffiliations'AND TIVE = 'TRUE';TO - DO An additional conditions that should be included:querry need to return only pairs for witch "P_AFFIL_RELATION_LATIONSHIPDESCRIPTION_LKP" = 'HCPRelationshipDescription:CON' A Pair HCP plus must be PLACE_NAMEVARCHARReltio workplace name or reltio workplace parent nfiguration/entityTypes/HCO/attributes/NameSELECT     ON HCP.ENTITY_URI= ART_ENTITY_URIINNER JOIN CUSTOMER_SL.P_HCO     = HCO.ENTITY_URIWHERE lation_type = 'ContactAffiliations'AND TIVE = 'TRUE';A Pair HCP plus must be Entity statusi_customer_sl.p_tivemapping rule TRUE = ACTIVELAST_MODIFICATION_DATETIMESAMP_LTZEntity update time in /entityTypes/HCP/updateTimep_entity_update.SF_UPDATE_TIMEFIRST_NAMEVARCHARconfiguration/entityTypes//attributes/FirstNamei_customer_sl.p_rst_nameLAST_NAMEVARCHARconfiguration/entityTypes//attributes/LastNamei_customer_sl.p_st_nameTITLE_CODEVARCHARconfiguration/entityTypes//attributes/TitleLOV Name COMPANY = HCPTitleLOV Name IQIVIA = LKUP_IMS_PROF_TITLEselect nonical_code from customer_sl.p_hcp hcp,customer_sl.p_codes cwhere hcp.title_lkp = de_idTITLE_DESCVARCHARconfiguration/entityTypes//attributes/TitleLOV Name COMPANY = THCPTitleLOV Name IQIVIA = LKUP_IMS_PROF_TITLEselect sc from customer_sl.p_hcp hcp,customer_sl.p_codes cwhere hcp.title_lkp = de_idIS_PRESCRIBERCASEWhen p_hcp. = 'HCPType:ES' then YCASEWhen p_hcp. = 'HCPType:RS' then NELSETo define                                                 codeconfiguration/entityTypes/Location/attributes/countrycustomer_sl.p_untryPRIMARY_ADDRESS_LINE_1IQIVIA: configuration/entityTypes/Location/attributes/AddressLine1COMPANY: configuration/entityTypes//attributes/Addresses/attributes/AddressLine1select a. address_line1 from customer_sl.p_addresses a where =1PRIMARY_ADDRESS_LINE_2IQIVIA: configuration/entityTypes/Location/attributes/AddressLine2COMPANY: configuration/entityTypes//attributes/Addresses/attributes/AddressLine2select a. 
address_line2 from customer_sl.p_addresses a where =1PRIMARY_ADDRESS_CITYIQIVIA: configuration/entityTypes/Location/attributes/CityCOMPANY: configuration/entityTypes//attributes/Addresses/attributes/ from customer_sl.p_addresses a where =1PRIMARY_ADDRESS_POSTAL_CODEIQIVIA: configuration/entityTypes/Location/attributes/Zip/attributes/: configuration/entityTypes//attributes/Addresses/attributes/Zip5select a.ZIP5 from customer_sl.p_addresses a where =1PRIMARY_ADDRESS_STATEIQIVIA: configuration/entityTypes/Location/attributes/: configuration/entityTypes//attributes/Addresses/attributes/StateProvinceLOV Name COMPANY = fromcustomer_sl.p_codes c,customer_sl.p_addresses awhere dress_rank=ATE_PROVINCE_LKP = de_id PRIMARY_ADDR_STATUSIQIVIA: configuration/entityTypes/Location/attributes/VerificationStatusCOMPANY: configuration/entityTypes//attributes/Addresses/attributes/VerificationStatuscustomer_sl.p_rification_statusPRIMARY_SPECIALTY_CODEconfiguration//attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = HCPSpecialtyLOV Name IQIVIA =LKUP_IMS_SPECIALTYselect nonical_code from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =de_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the environment that parameter always has a NULL value. PRIMARY_SPECIALTY_DESCconfiguration/entityTypes//attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = LKUP_IMS_SPECIALTYLOV Name IQIVIA =LKUP_IMS_SPECIALTYselect sc from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =de_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the environment that parameter always has a NULL value. GO_STATUSVARCHARconfiguration/entityTypes//attributes/Compliance/attributes/GOStatusgo_status <> ''CASEWhen p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' then YCASEWhen p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' then NELSE Not defined(now this is an empty tabel)IDENTIFIER1_CODEVARCHARReltio identyfier nfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect nonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers = d.TYPE_LKPThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. This is a place for the first ENTIFIER1_CODE_DESCVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/Typeselect sc from customer_sl.p_codes ct,customer_sl.p_identifiers = d.TYPE_LKPIDENTIFIER1_VALUEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/IDselect id from customer_sl.p_identifiersIDENTIFIER2_CODEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/ nonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers = TYPE_LKPMaximum two identifiers can be returnedThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 describes below. 
This is a place for the second ENTIFIER2_CODE_DESCVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/Typeselect sc from customer_sl.p_codes ct,customer_sl.p_identifiers = d.TYPE_LKPIDENTIFIER2_VALUEVARCHARconfiguration/entityTypes//attributes/Identifiers/attributes/IDselect id from customer_sl.p_identifiersDGSCATEGORYVARCHARIQIVIA: configuration/entityTypes//attributes/Disclosure/attributes/DGSCategoryCOMPANY: configuration/entityTypes//attributes/DisclosureBenefitCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect DisclosureBenefitCategory from p_hcpDGSCATEGORY_CODEVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOcomment: select i_nonical_code for a valu returned from DisclosureBenefitCategory DGSTITLEVARCHARIQIVIA: configuration/entityTypes//attributes/Disclosure/attributes/DGSTitleCOMPANY: configuration/entityTypes//attributes/DisclosureBenefitTitleLKUP_BENEFITTITLEselect from p_hcpDGSTITLE_CODEVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLEcomment: select i_nonical_code for a valu returned from DisclosureBenefitTitle DGSQUALITYVARCHARIQIVIA: configuration/entityTypes//attributes/Disclosure/attributes/DGSQualityCOMPANY: configuration/entityTypes//attributes/DisclosureBenefitQualityLKUP_BENEFITQUALITYselect from p_hcpDGSQUALITY_CODEVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSQualityLKUP_BENEFITQUALITYcomment: select i_nonical_code for a valu returned from  DGSSPECIALTYVARCHARIQIVIA: configuration/entityTypes//attributes/Disclosure/attributes/: configuration/entityTypes//attributes/DisclosureBenefitSpecialtyLKUP_BENEFITSPECIALTYDisclosureBenefitSpecialtyDGSSPECIALTY_CODEVARCHARconfiguration/entityTypes//attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYcomment: select i_nonical_code for a valu returned from DisclosureBenefitSpecialtySECONDARY_SPECIALTY_DESCVARCHAREMAILVARCHARPHONEVARCHAR" }, { "title": "", "": "", "pageLink": "/display/GMDM/Global+Data+Mart", "content": "The section describes the structure of  in . The contains consolidated data from multiple regional data marts.Databases:The connects all markets using Snowflake DB Replication (if in the different zone) or Local DB (if in the same zone): detailsSnowflake  DB nameTypeModelEMEAlinkCOMM_EMEA_MDM_DMART__DBlocalP / P_HIAMERlinkCOMM_AMER_MDM_DMART__DBreplicaP / P_HIUSlinkCOMM_GBL_MDM_DMART_replicaP / P_HIAPAClinkCOMM_APAC_MDM_DMART__DBlocalP / P_HIEUlinkCOMM_EU_MDM_DMART__DBlocalIConsolidated GLOBAL Schema:The COMM_GBL_MDM_DMART__DB database includes the following schema:CUSTOMER - main schema containing consolidated views for all COMPANY STOMER_SL - access schema for users containing a set of views accessing CUSTOMER schema objectsP_ - COMPANY Reltio Model and are prefixed with 'P'P_HI - COMPANY Reltio Model with Historical Inactive onekey crosswalksI_  - Ex-US data are in the model and are prefixed with 'I'AES_RS_SL - schema containing views that mimic er accessing the CUSTOMER_SL schema can query across all markets, having in mind the following details:P_ prefixed viewsP_HI prefixed viewsI_ prefixed viewsConsolidated view from all markets that are from "P" e first column in each view is the MDM_REGION representing the information about the connection of the specific row to the market. Each market may contain a different number of columns and also some columns that exist in one market may not be available in the other. 
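As a hedged illustration of how the MDM_REGION column is typically used when querying a consolidated view (P_HCP is an example name; any P_-prefixed CUSTOMER_SL view follows the same pattern):
-- MDM_REGION is the first column of each consolidated view and identifies the market
-- a row comes from, so cross-market queries can filter or group on it.
SELECT mdm_region, COUNT(*) AS row_count
FROM   COMM_GBL_MDM_DMART_PROD_DB.CUSTOMER_SL.P_HCP
GROUP BY mdm_region
ORDER BY mdm_region;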
The views aggregate all columns from all rresponding data model: Dynamic views for COMPANY view from all markets that are from "P_HI" e first column in each view is the MDM_REGION representing the information about the connection of the specific row to the market. Each market may contain a different number of columns and also some columns that exist in one market may not be available in the other. The views aggregate all columns from all ew build based on , from market that is using "I" Model"Corresponding data model: Dynamic views for IQIVIA MDM ModelGLOBALInstance detailsENVSnowflake DB NameReltio TenantRefresh timeDEVCOMM_GBL_MDM_DMART_DEV_DBEMEA + AMER + US+ APAC + EUonce per dayQACOMM_GBL_MDM_DMART_QA_DBEMEA + AMER + US+ APAC + EUonce per daySTGCOMM_GBL_MDM_DMART_STG_DBEMEA + AMER + US+ APAC + EUonce per dayPRODCOMM_GBL_MDM_DMART_PROD_DBEMEA + AMER + US+ APAC + EUevery 2hRolesNPROD = /STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxPTE_SLWarehouseAD Group NameCOMM_GBL_MDM_DMART__DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__DEVOPS_ROLECOMM_GBL_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__MTCH_AFFIL_ROLECOMM_GBL_MDM_DMART__METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__METRIC_ROLECOMM_GBL_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__MDM_ROLECOMM_GBL_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__READ_ROLECOMM_GBL_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__DATA_ROLECOMM_GBL_MDM_DMART__PTE_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART__PTE_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxPTE_SLWarehouseAD Group NameCOMM_GBL_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLECOMM_GBL_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_GBL_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_METRIC_ROLECOMM_GBL_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_MDM_ROLECOMM_GBL_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_READ_ROLECOMM_GBL_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLECOMM_GBL_MDM_DMART_PROD_PTE_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLE" }, { "title": "", "": "", "pageLink": "/display//Global+Data+Materialization+Process", "content": "" }, { "title": "Regional Data Marts", "": "", "pageLink": "/display/GMDM/Regional+Data+Marts", "content": "The regional data mart is presenting data from one 
region.  Data are loaded from one selected instance. They are being refreshed more frequently than the global mart. They are a good choice for clients operating in local markets. detailsENVSnowflake DB NameReltio TenantRefresh timeDEVCOMM_EMEA_MDM_DMART_DEV_DBwn60kG248ziQSMWevery day between am ESTQACOMM_EMEA_MDM_DMART_QA_DBvke5zyYwTifyeJSevery day between between *Due to many projects running on the environment the refresh time has been temporarily changed to "" for the client's D 2 hoursRolesNPROD = /STGRole NameLandingCustomerCustomer SLAES RS NameCOMM_EMEA_MDM_DMART__DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__DEVOPS_ROLECOMM_EMEA_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__MTCH_AFFIL_ROLECOMM_EMEA_MDM_DMART__METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__METRIC_ROLECOMM_EMEA_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__MDM_ROLECOMM_EMEA_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__READ_ROLECOMM_EMEA_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART__DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS -west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLECOMM_EMEA_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_EMEA_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_METRIC_ROLECOMM_EMEA_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_MDM_ROLECOMM_EMEA_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_READ_ROLECOMM_EMEA_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLEAMERInstance detailsENVSnowflake DB NameReltio TenantRefresh timeDEV day between between between ESTPRODCOMM_AMER_MDM_DMART_PROD_DBYs7joaPjhr9DwBJevery 2 hoursRolesNPROD = /STGRole NameLandingCustomerCustomer SLAES RS 
NameCOMM_AMER_MDM_DMART__DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__DEVOPS_ROLECOMM_AMER_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__MTCH_AFFIL_ROLECOMM_AMER_MDM_DMART__METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__METRIC_ROLECOMM_AMER_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__MDM_ROLECOMM_AMER_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__READ_ROLECOMM_AMER_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART__DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group NameCOMM_AMER_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLECOMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_RORead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_ROCOMM_AMER_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_METRIC_ROLECOMM_AMER_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MDM_ROLECOMM_AMER_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_READ_ROLECOMM_AMER_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLEUSInstance detailsENVSnowflake DB NameReltio TenantRefresh timeDEVCOMM_GBL_MDM_DMART_DEVsw8BkTZqjzGr7hnevery day between ESTQACOMM_GBL_MDM_DMART_QArEAXRHas2ovllvTevery day between ESTSTGCOMM_GBL_MDM_DMART_STG48ElTIteZz05XwTevery day ESTPRODCOMM_GBL_MDM_DMART_PROD9kL30u7lFoDHp6Xevery 2 hoursRolesNPROD = /STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group NameCOMM__MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_MTCH_AFFIL_ROLECOMM__MDM_DMART_ANALYSIS_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Onlysfdb_us-east-1_amerdev01_COMM__MDM_DMART_ANALYSIS_ROLECOMM__MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_METRIC_ROLECOMM_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_MDM_ROLECOMM__MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_READ_ROLECOMM_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM__MDM_DMART_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount 
MappingMetricsSandboxWarehouseAD Group NameCOMM_PROD_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLECOMM_PROD_MDM_DMART_ANALYSIS_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_ANALYSIS_ROLECOMM_PROD_MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_METRIC_ROLECOMM_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MDM_ROLECOMM_PROD_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_READ_ROLECOMM_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLEAPACInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVCOMM_APAC_MDM_DMART_DEV_DBw2NBAwv1z2AvlkgSevery day between am ESTQACOMM_APAC_MDM_DMART_QA_DBxs4oRCXpCKewNDKevery day between am ESTSTGCOMM_APAC_MDM_DMART_STG_DBY4StMNK3b0AGDf6every day between ESTPROD 2 hoursRolesNPROD = /STGRole NameLandingCustomerCustomer SLAES RS OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__MTCH_AFFIL_ROLECOMM_APAC_MDM_DMART__METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__METRIC_ROLECOMM_APAC_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__MDM_ROLECOMM_APAC_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__READ_ROLECOMM_APAC_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART__DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group NameCOMM_APAC_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLECOMM_APAC_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_APAC_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_METRIC_ROLECOMM_APAC_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_MDM_ROLECOMM_APAC_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_READ_ROLECOMM_APAC_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLEEU (ex-us)Instance detailsENVSnowflake DB NameReltio TenantRefresh timeDEVCOMM_EU_MDM_DMART_DEV_DBFLy4mo0XAh0YEbNevery day between am ESTQACOMM_EU_MDM_DMART_QA_DBAwFwKWinxbarC0Zevery day between am 
ESTSTGCOMM_EU_MDM_DMART_STG_DBFW4YTaNQTJEcN2gevery day between hoursRolesNPROD = /STGRole NameLandingCustomerCustomer SLAES RS NameCOMM__MDM_DMART_OPS_ROLEDEVFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART__MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_MTCH_AFFIL_ROLECOMM_EU__MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EU__MDM_DMART_METRIC_ROLECOMM_MDM_DMART__MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_MDM_ROLECOMM_EU_MDM_DMART__READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_READ_ROLECOMM_MDM_DMART__DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM__MDM_DMART_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group NameCOMM_PROD_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLECOMM_EU_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EU_PROD_MDM_DMART_METRIC_ROLECOMM_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MDM_ROLECOMM_PROD_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_READ_ROLECOMM_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLE" }, { "title": " API", "": "", "pageLink": "/display/GMDM/MDM+Admin+Management+API", "content": "" }, { "title": "Description", "": "", "pageLink": "/display/GMDM/Description", "content": "MDM Admin is a management , automating numerous repeatable tasks and enabling the end user to perform them, without the need to make a request and wait for one of engineers to pick it its current state, provides below services:Modify offsetGenerate outbound eventsReconcile an entity/relation (only used by Team)Each functionality is described in detail in the following chapters. listTenantEnvironmentMDM Admin API Base URLSwagger URL - API DocumentationGBL (EX-US)DEV QA STAGE PROD GBLUSDEV QA STAGE PROD EMEADEV QA STAGE PROD AMERDEV QA STAGE PROD APACDEV QA STAGE  Modify offsetIf you are consuming from outbound topic, you can now modify the offsets to skip/re-send messages. Please refer to the Swagger Documentation for additional details.Example 1Environment is . User wants to consume the last 100 messages from his topic again. He is using topic "emea-dev-out-full-test-topic-1" and consumer-group "emea-dev-consumergroup-1"Steps:Disable the consumer. will not allow offset manipulation, if the topic/consumergroup is being usedSend below request:\nPOST "topic": "emea-dev-out-full-test-topic-1", "groupId": "emea-dev-consumergroup-1",\n "shiftBy": -100\n}\nEnable the consumer. 
Last 100 events will be re-consumed.Example 2User wants to consume all available messages from the topic eps:Disable the consumer. will not allow offset manipulation, if the topic/consumergroup is being nd below request:\nPOST "topic": "emea-dev-out-full-test-topic-1", "groupId": "emea-dev-consumergroup-1",\n "offset": earliest\n}\nEnable the consumer. All events from the topic will be available for consumption send EventsAllows re-sending events to outbound topics, with filtering by Entity Type (entity or relation), modification date, country and source. Please refer to the Swagger Documentation for more details. Example use scenario is described nerated events are filtered by the topic routing rule (by country, event type etc.). Generating events for some country may not result in anything being produced on the topic, if this country is not added to the fore starting a Resend Events job, please make sure that the country is already added to the routing rule. Otherwise, request additional country to be added (: link to the instruction).ExampleFor development purposes, user needs to generate 10k of events to his "emea-dev-out-full-test-topic-1" topic for the new market - (BE).Steps:Send below request:\nPOST "countries": [\n "be"\n ], "objectType": "ENTITY",\n "limit": 10000,\n "reconciliationTarget": "emea-dev-out-full-test-topic-1"\n}\nA process will start on side, generating events on this topic. Response to the request will contain the process ID (dag_run_id):\n{\n "dag_id": "reconciliation_system_amer_dev",\n "dag_run_id": "manual__2022-11-30T14:12:07.+00:00",\n "execution_date": "",\n "state": "queued"\n}\nYou can check the status of this process by sending below request:\nGET "dag_id": "reconciliation_system_amer_dev",\n "dag_run_id": "manual__2022-11-30T14:12:07.+00:00",\n "execution_date": "",\n "state": "started"\n}\nOnce the process is completed, all the requested events will have been sent to the topic." }, { "title": "Requesting Access", "": "", "pageLink": "/display/GMDM/Requesting+Access", "content": "Access to should be requested via email sent to DL: low chapters contain required details and email dify OffsetRequired details:Team name (including Person of Contact)List of topicsList of consumergroupsUsername (already used for , etc.)Email template:\nHi provide us with access to . Details below:\n\nAPI: name: -out-full-test-topic\n - emea-qa-out-full-test-topic \n - emea-stage-out-full-test-topic \nConsumergroups: \n - emea-dev-hub \n - emea-qa-hub \n - emea-stage-hub \nUsername: mdm-hub-user\n\nBest Regards,\nPiotr\nResend EventsRequired details:Team name (including Person of Contact)List of topicsUsername (already used for , etc.)Email template:\nHi provide us with access to . Details below:\n\nAPI: Resend Events\nTeam name: MDM Hub\nTopics: \n - emea-dev-out-full-test-topic\nUsername: mdm-hub-user\n\nBest Regards,\nPiotr\n" }, { "title": "Flows", "": "", "pageLink": "/display/GMDM/Flows", "content": "" }, { "title": "Batch clear data load cache", "": "", "pageLink": "/display//Batch+clear+ETL+data+load+cache", "content": "DescriptionThis is the batch operation to clear batch cache. The process was design to clear mongo cache (removes records from batchEntityProcessStatus) for specified batch name, sourceId type and value. 
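In practice, each row of the delivered file maps to one cache-clear call against the batch service. A minimal Kotlin sketch of that mapping, assuming a hypothetical base URL and request-body shape and the semicolon-separated SourceType;SourceValue layout described further down this page; in the real flow the calls are orchestrated by Airflow rather than run by hand:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.file.Files
import java.nio.file.Path

// Hypothetical illustration of the clear-cache adapter: one POST per CSV row.
// BASE_URL and the JSON body shape are assumptions, not the documented contract.
const val BASE_URL = "https://batch-service.example.internal"

fun clearBatchCache(batchName: String, csvFile: Path) {
    val client = HttpClient.newHttpClient()
    Files.readAllLines(csvFile)
        .drop(1)                                 // skip the SourceType;SourceValue header
        .map { it.split(";") }
        .filter { it.size == 2 }
        .forEach { (sourceType, sourceValue) ->
            // /batchController/{batchName}/_clearCache is the operation named on this page;
            // passing the crosswalk as a JSON body is an assumption.
            val body = """{"sourceType":"$sourceType","sourceValue":"$sourceValue"}"""
            val request = HttpRequest.newBuilder()
                .uri(URI.create("$BASE_URL/batchController/$batchName/_clearCache"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build()
            val response = client.send(request, HttpResponse.BodyHandlers.ofString())
            println("$sourceType;$sourceValue -> HTTP ${response.statusCode()}")
        }
}
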
This process is an adapter to the /batchController/{batchName}/_clearCache operation exposed by service that allows user to clear to clear batch cache by crosswalk documentation exposed by by croswalksLink to HUB UI documentation: HUB UI User Guide Flow: The client delivers file including the list of source types and values to be cleared by HUB. File is uploaded to resource by e clear batch process is triggered by e process parses the input files and calls to clear le load through details: file size is 128MBHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without put fileFile format: CSV Encoding: UTF-8EOL: to setup this using encoding:Set EOL to Unix:Check (bottom right corner):Column headers:SourceType - source crosswalk type that describes entitySourceValue - source crosswalk value that describes entityInput file example123SourceType;SourceValueReltio;upIP01WSAP;3000201428clear_cache_ex.csvInternalsAirflow process name: clear_batch_service_cache_{{ env }}" }, { "title": "Batch merge & unmerge", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThis is the batch operation to merge/unmerge entities in Reltio. The process was designed to execute the force merge operation between objects. In , there are merge rules that automatically merge objects, but the user may explicitly define the merge between objects. This process is the adapter to the _merge or _unmerge operation that allows the user to specify the file with multi entries so there is no need to execute multiple times.  Flow: The client delivers files including the list of merge/unmerge operations to be executed by HUB. Files must be placed in resource controlled by either by a client or support via HUB UI. The batch process is triggered by directly or by HUB UIThe process parses the input files and calls to merge or unmerge e result of the process is the report file generated and published to S3File load through details: file size is 128 or 10k recordsHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without bom. Merge operation Input fileFile format: CSV Encoding: UTF-8EOL: to setup this using encoding:Set EOL to Unix:Check (bottom right corner):File name format: merge_YYYYMMDD.csvDrop location: DEV: -baiaes-eu--nprod-project/mdm/DEV/merge_unmerge_entities/input/STAGE: ://pfe-baiaes-eu--nprod-project/mdm/STAGE/merge_unmerge_entities/input/PROD: Column headers:The column names are kept for backward compatibility. The winner of the merge is always the entity that was created earlier. There is currently no possibility to select an explicit winner via the merge_unmerge batch.WinnerSourceName - source name of the source entity: the survivor of the merge operation or the entity that will be splitWinnerId - id of the source entity: the survivor of the merge operation or the entity that will be splitLoserSourceName - source name of the target entity: the looser of the merge operation  the target entity: the loser of the merge operation In the output file there are two additional fields:responseStatus - the response statusresponseErrorMessage - the error messageMerge input file example\nWinnerSourceName;WinnerId;LoserSourceName;LoserId\nRELTIO;15hgDlsd;RELTIO;1JRPpffH\nRELTI;15hgDlsd;RELTIO;1JRPpffH\nOutput fileFile format: CSV Encoding: UTF-8File name format: status_merge_YYYYMMDD_.csv   - the number of the file process in . Starting with 1 to n. 
Drop location: DEV: -baiaes-eu--nprod-project/mdm/DEV/merge_unmerge_entities/output/YYYYMMDD_hhmmss/STAGE: ://pfe-baiaes-eu--nprod-project/mdm/DEV/merge_unmerge_entities/output/YYYYMMDD_hhmmss/PROD: Column headers:sourceId.type - source name of the source entity: the survivor of the merge operation or the entity that will be lue - id of the source entity: the survivor of the merge operation or the entity that will be splittedstatus - the response statuserrorCode - the error codeerrorMessage - the error meesageMerge output file example\nsourceId.type,lue,status,errorCode,errorMessage\nmerge_RELTIO_RELTIO,0009e93_00Ff82E,updated,,\nmerge_GRV_GRV,6422af22f7c95392db313216_23f45427-8cdc-43e6-9aea-0896d4cae5f8,updated,,\nmerge_RELTI_RELTIO,15hgDlsd_1JRPpffH,notFound,EntityNotFoundByCrosswalk,Entity not found by crosswalk in getEntityByCrosswalk [Type:RELTI Value:15hgDlsd]\nUnmerge operation Input fileFile format: CSV Encoding: UTF-8File name format: unmerge_YYYYMMDD_.csv   - the number of the file process in . Starting with 1 to n. Drop location: DEV: -baiaes-eu--nprod-project/mdm/DEV/merge_unmerge_entities/input/STAGE: ://pfe-baiaes-eu--nprod-project/mdm/STAGE/merge_unmerge_entities/input/Column headers:SourceURI - uri of the source entityTargetURI - uri of the extracted entityUnmerge input file example\nSourceURI;TargetURI\n15hgG6nP;15hgG6nQ1\n15hgG6qc;15hgG6rq\nOutput fileFile format: CSV Encoding: UTF-8File name format: status_umerge_YYYYMMDD_.csv   - the number of the file process in . Starting with 1 to n. Column headers:SourceURI - uri of the source entityTargetURI - uri of the extracted entityresponseStatus - the response statusresponseErrorMessage - the error messageUnmerge output file example\nsourceId.type,lue,status,errorCode,errorMessage\nunmerge_RELTIO_RELTIO,01lAEll_01jIfxx,updated,,\nunmerge_RELTIO_RELTIO,0144V4D_01EFVyb,updated,,\nInternalsAirflow process name: merge_unmerge_entities" }, { "title": "Batch reload data", "": "", "pageLink": "/display//Batch+reload+MapChannel+data", "content": "DescriptionThis process is used to reload source data from / systems. The user has two ways to indicate the data he wants to reload:CSV file - contains lines with entity uri or crosswalk valuesQuery mongo - only entities meeting the criteria will be reloadedIn process is used to control the flow  Flow: The client delivers files including the list of entity uris/crosswalk values. Files must be placed in resource controlled by either by a client via HUB or is triggered:The process parses the input and query mongo for selected entitiesFor each entity - sending events to raw / input topicsThe result of the process is the report file generated and published to S3File load through details: file size is 128MBInput file examplereload_map_channel_data.csv Output fileFile format: CSV Encoding: UTF-8File name format: report__reload_map_channel_data_YYYYMMDD_.csv   - the number of the file process in . Starting with 1 to n. 
Column headers: TODOOutput file example TODOSourceCrosswalkType,,IdentifierType,,status,errorCode,errorMessageReltio,upIP01W,ORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)SAP,,P,,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:]InternalsAirflow process name: reload_map_channel_data_{{ env }}" }, { "title": "Batch Reltio Reindex", "": "", "pageLink": "/display//Batch+Reltio+Reindex", "content": "DescriptionThis is the operation to execute Reltio Reindex API. The process was designed to get the input file with entities and schedule the Reltio Reindex API. More details about is available here: 5. Reltio ReindexHUB wraps the Entity URIs and schedules Reltio Task.  Flow: The client delivers files including the list of entity uris. The file is uploaded to the resource by process is triggered by e process parses the input files and calls Reltio le load through details: file size is 128MB. The user should be able to load around 7.4M entity uris lines in one file to fit into a 128 file size. Please check the file size before uploading. Larger files will be ease be aware that 128 file upload may take depending on the user network performance. Please wait until processing is finished and the response to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without put fileFile format: CSV Encoding: UTF-8EOL: to setup this using encoding:Set EOL to Unix:Check (bottom right corner):Column headers:N/A - do not add headersInput file example123entities/E0pV5Xmentities/1CsgdXN4entities/2O5RmRireltio_reindex.csvInternalsAirflow process name: reindex_entities_mdm_{{ env }}" }, { "title": "Batch update identifiers", "": "", "pageLink": "/display/GMDM/Batch+update+identifiers", "content": "DescriptionThis is the batch operation to update identifiers in Reltio. The process was design to update selected identifiers selected by identifier lookup code. This process is an adapter to the /entities/_updateAttributes operation exposed by manager service that allows user to modify nested attributes using specific urce for the batch process is csv in which one row corresponds with single identifiers that should be process batch service is used to control the flow  Flow: The client delivers files including the list of identifiers that should be updated. Files must be placed in resource controlled by either by a client via HUB or e batch process is triggered by manually or scheduled wayThe process parses the input files and calls to update identifiersThe result of the process is the report file generated and published to S3File load through details: file size is 128 or 10k recordsHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without bom. Input fileFile format: CSV Encoding: UTF-8EOL: to setup this using encoding:Set EOL to Unix:Check (bottom right corner):File name format: update_identifiers_YYYYMMDD_.csv   - the number of the file process in . Starting with 1 to n. Drop location: GBL:DEV: /gbl/dev/inbound/update_identifiersSTAGE: /gbl/stage/inbound/update_identifiersPROD: -baiaes-eu--project/mdm/inbound/update_identifiersEMEA:DEV: /emea/dev/inbound/update_identifiersQA: /emea/qa/inbound/update_identifiersSTAGE: /emea/stage/inbound/update_identifiersPROD: /emea/prod/inbound/update_identifiersColumn headers:SourceCrosswalkType - source crosswalk type that describes entity. 
If you use "Reltio" then you should use entity uri in column. For every other crosswalk type use SourceCrosswalkValue - source crosswalk value that describes entityIdentifierType - identifier type that you want to - identifier values that you want to set(update/insert/merge). More information in /entities/_updateAttributes documentationIdentifierTrust - trust flag for given identifier, accepted values: Yes, No and . In case of , default value No for , , and null for will be - source name of updated identifier. In case of , default value for , , and null for will be tion - action you want to perform on attribute. More information in /entities/_updateAttributes documentationdelete - IGNORE_ATTRIBUTE - IdentifierType has to exists - if it does not exists do not delete and share the information in the "details" attribute that the target key does not exist This operation works like DELETE FROM Identifiers WHERE key=(key)update - UPDATE_ATTRIBUTE - IdentifierType have to exists - if it does not exist return share the information in the "details" attribute that the target key does not exist   This operation works like UPDATE Identifiers SET (set) WHERE key=(key)Only allows updating existing attributes ( for example if the  ID  does not exist in the target - do not update this Identifier and share the information in the details that "ID" does not exist in the target)insert - INSERT_ATTRIBUTE  only allows to insert new attributes, if the "set" exists in the target return the information in the "details" element that such object already exists  This operation work like INSERT INTO Identifiers values (set)      Adds only a new element to the target rge - (insert or update) (similar to "update" but it makes an insert if "set" elements do not exist in target) - update attributes matched by the key or inserts a new one. If there are multiple keys related to one filter, it updates all matches or inserts a new one. In this case, we are checking the target array. For example, we matched multiple target Identifiers by the "key" and we want to "set" the "ID". If the target identifier does not have the "ID" we are making an INSERT_ATTRIBUTE, if the target attribute contains the "ID" we are making the UPDATE_ATTRIBUTEreplace -(delete or insert) - delete (IGNORE_ATTRIBUTE) attributes matched by the "key" and insert the new is operation works in a way that it will delete all target attributes matched by the "key" and put only one new Identifier in that place. For example, we had 3 Identifiers in the target matching by the "key". Replace will cause that now in the target we have 1 new Identifier. 3 old ones are removed (IGNORE_ATTRIBUTE) and a new one is inserted (INSERT_ATTRIBUTE).TargetCrosswalkType - HUB_ID is a default source that updates the data in /A - keep empty and add just this put file example123SourceCrosswalkType;SourceCrosswalkValue;IdentifierType;IdentifierValue;IdentifierTrust;IdentifierSourceName;Action;TargetCrosswalkTypeReltio;upIP01W;ORCERX;TEST9_OEG_1000005218888;;;update;SAP;;P;;Yes;SAP;update;update_identifier_20220323.csvOutput fileFile format: CSV Encoding: UTF-8File name format: report__update_identifiers_YYYYMMDD_.csv   - the number of the file process in . Starting with 1 to n. Column headers:SourceCrosswalkType - source crosswalk type that describes entity. If you use "Reltio" then you should use entity uri in column. 
For every other crosswalk type use SourceCrosswalkValue - source crosswalk value that describes entityIdentifierType - identifier type that you want to - identifier values that you want to set(update/insert/merge). More information in /entities/_updateAttributes documentationstatus- the response statuserrorCode - the error codeerrorMessage- the error messageOutput file example\nSourceCrosswalkType,,IdentifierType,,status,errorCode,errorMessage\nReltio,,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)\nSAP,,P,,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:]\nInternalsAirflow process name: update_identifiers_{{ env }}" }, { "title": "Callbacks", "": "", "pageLink": "/display/GMDM/Callbacks", "content": "DescriptionThe HUB Callbacks are divided into the following two sections: process is responsible for the Ranking of the selected attributes . This callback is based on the full enriched events from the "${env}-internal-reltio-full-events". Only events that do not require additional ranking updates in are published to the next processing stage. Some rankings calculations - like OtherHCOtoHCO is delayed and processed in PreDylayCallbackService - such functionality was required to gather all changes for relations in time windows and send events to Reltio only after the aggregation window is closed. This limits the number of events and updates to Reltio. OtherHCOtoHCOAffiliations Rankings - more details related to the OtherHCOtoHCO relation ranking with all PreDylayCallbackService  and DelayRankActivationProcessorrank details OtherHCOtoHCOAffiliations RankSorter"Post" Callback process is responsible for the specific logic and is based on the events published by the Event Publisher component. Here are the processes executed in the post callback process: - based on the "{env}-internal--callback-attributes-setter-in" events. Sets additional attributes for market  e.g. ComplianceMAPPHCPStatusCrosswalkActivator Callback  - based on the "${env}-internal-callback-activator-in" events. Activates selected crosswalk or soft-delete specific crosswalks based on the configuration. CrosswalkCleaner Callback - based on the "${env}-internal-callback-cleaner-in" events. Cleans orphan HUB_Callback crosswalk or soft-delete specific crosswalks based on the configuration. CrosswalkCleanerWithDelay Callback - based on the "${env}-internal-callback-cleaner-with-delay-in" events. Cleans orphan HUB_Callback crosswalk or soft-delete specific crosswalks based on the configuration with delay (aggregate events in time window)DanglingAffiliations Callback - based on the "${env}-internal-callback-orphan-clean-in" events. Removes orphan affiliations once one of the start or end objects was removed. Derived Addresses Callback  - based on the "${env}-internal-callback-derived-addresses-in" events. Rewrites an Address from to , connected to each other with some type of . used on IQVIA tenantHCONames Callback for IQVIA model - based on the "${env}-internal-callback-hconame-in" events. Caclucate HCO Names.  for COMPANY model -  based on the "${env}-internal-callback-hconame-in" events. in tMatch Callback - based on the "${env}-internal-callback-potential-match-cleaner-in" events. Based on the created relationships between two matched objects, removes the match using _notMatch operation. More details about the HUB callbacks are described in the sub-pages. 
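All of the post callbacks listed above share the same skeleton: consume an event from a ${env}-internal-callback-*-in topic, filter it by event type and country, apply the callback-specific rule, and publish a request for Manager to process asynchronously. A minimal Kotlin sketch of that shared shape, using plain Kafka consumer/producer APIs with placeholder topic names and payload types; the real callback services are stream topologies (the *Stream components described in the sub-pages), whereas this sketch uses a simple poll loop for brevity:

import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord

// Hypothetical, simplified event shape; real events carry the full entity payload.
data class CallbackEvent(val subtype: String, val country: String, val entityUri: String, val payload: String)

// Illustrative skeleton shared by the post callbacks: filter, evaluate, forward to Manager.
// consumerProps/producerProps must configure brokers and (de)serializers for these types.
fun runCallbackLoop(
    env: String,
    allowedEventTypes: Set<String>,
    allowedCountries: Set<String>,
    evaluate: (CallbackEvent) -> String?,          // returns a Manager request payload, or null to skip
    consumerProps: Properties,
    producerProps: Properties
) {
    val inTopic = "$env-internal-callback-example-in"            // placeholder topic name
    val outTopic = "$env-internal-async-all-example-callbacks"   // placeholder Manager async topic
    KafkaConsumer<String, CallbackEvent>(consumerProps).use { consumer ->
        KafkaProducer<String, String>(producerProps).use { producer ->
            consumer.subscribe(listOf(inTopic))
            while (true) {
                for (record in consumer.poll(Duration.ofSeconds(1))) {
                    val event = record.value()
                    if (event.subtype !in allowedEventTypes) continue      // e.g. only *_CHANGED events
                    if (event.country !in allowedCountries) continue       // configured market list
                    val request = evaluate(event) ?: continue              // callback-specific business rule
                    producer.send(ProducerRecord(outTopic, event.entityUri, request))
                }
            }
        }
    }
}
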
Flow diagram​" }, { "title": "AttributeSetter Callback", "": "", "pageLink": "/display/GMDM/AttributeSetter+Callback", "content": "DescriptionCallback auto-fills configured static Attributes, as long as the profile's attribute values meet the requirements. If no requirement (rule) is met, an optional cleaner deletes the existing, Hub-provided value for this attribute. AttributeSetter uses Manager's Update Attributes async interface. event has been routed from EventPublisher, check the following:Entity must be active and have at least one active crosswalk Event Type must match configured allowedEventTypesCountry must match configured allowedCountriesFor each configured do the following:Check if the entityType matches For each rules do the following:Check if criteria are metIf criteria are met:Check if Hub crosswalk already provides the AutoFill value (either Attribute's value or lookupCode must match)If attribute value is already present, do nothingIf attribute is not present:Add inserting AutoFill attribute to the list of changesCheck if Hub crosswalk provides another value for this attributeIf Hub crosswalk provides another value, add deleting that attribute value to the list of changesIf no rules were matched for this and cleaner is enabled:Find the Hub-provided value of this attribute and add deleting this value to the list of changes (if exists)Map the list of changes into a single AttributeUpdateRequest object and send to Manager inbound nfigurationExample AttributeSetter rule (multiple allowed):\n - setAttribute: "ComplianceMAPPHCPStatus"\n entityType: "HCP"\n cleanerEnabled: true\n rules:\n - name: "AutoFill IF SubTypeCode = Administrator (HCPST.A) / Researcher/Scientist (HCPST.C) / Counselor/Social Worker () / ()"\n setValue: "n-HCP"\n where:\n - attribute: "SubTypeCode"\n values: [ "HCPST.A", "HCPST.C", , ]\n\n - name: "AutoFill IF SubTypeCode = ) AND PrimarySpecialty = Psychology (Y)"\n setValue: "n-HCP"\n where:\n - attribute: "SubTypeCode"\n values: [ "HCPST.R" ]\n - attribute: "Specialities"\n nested:\n - attribute: "Primary"\n values: [ "true" ]\n - attribute: "Specialty"\n values: [ "Y" ]\n\n - name: "AutoFill HCPMHS.HCP for all others"\n setValue: "HCPMHS.HCP"\nRule inserts ComplianceMAPPHCPStatus attribute for every HCP:"n-HCP" for every profile having SubTypeCode in [ "HCPST.A", "HCPST.C", , ]"n-HCP" for every profile having SubTypeCode == "HCPST.R" where one of == "Y" and has Primary flag"HCPMHS.HCP" in all other scenariosDependent ComponentsComponentUsageCallback ServiceMain component with flow implementationPublisherGeneration of incoming eventsManagerAsynchronous processing of generated AttributeUpdateRequest events" }, { "title": "CrosswalkActivator Callback", "": "", "pageLink": "/display/GMDM/CrosswalkActivator+Callback", "content": " is the opposite of . 
There are 4 main processing branches (described in more detail in the "Algorithm" section):WhenOneKeyExistsAndActive - activate all crosswalks having:crosswalk type as in the configuration,crosswalk value same as an existing, active crosswalk in this profile.WhenAnyOneKeyExistsAndActive - activate all crosswalks of types same as in configuration, as long as there is at least one active crosswalk present in this profile.WhenAnyCrosswalksExistsAndActive - activate all crosswalks of types same as in configuration, as long as there is at least one active crosswalk present in this profile (crosswalk types in the except section of configuration are not considered as active crosswalks).ActivateOneKeyReferbackCrosswalkWhenRelatedOneKeyCrosswalkExistsAndActive - activate referback crosswalk (with lookupCode in configuration), as long as there is at least one active crosswalk present in this profileAlgorithmFor each event from ${env}-internal-callback-activator-in topic, do:filter by event country (configured),filter by event type (configured, usually only CHANGED events),Processing: WhenOneKeyExistsAndActivefind all active crosswalks (exact source name is fetched from configuration)for each crosswalk in the input event entity do:if crosswalk type is in the configured list (getWhenOneKeyExistsAndActive) and crosswalk value is the same as one of active crosswalks, send activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as cessing: WhenAnyOneKeyExistsAndActivefind all active crosswalks (exact source name is fetched from configuration)for each crosswalk in the input event entity do:if crosswalk type is in the configured list (getWhenAnyOneKeyExistsAndActive) and active crosswalks list is not empty, send activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as cessing: WhenAnyCrosswalksExistsAndActivefind all active crosswalks (sources in the configuration except list are filtered out)for each crosswalk in the input event entity do:if crosswalk type is in the configured list (getWhenAnyCrosswalksExistsAndActive) and active crosswalks list is not empty, send activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as cessing: ActivateOneKeyReferbackCrosswalkWhenRelatedOneKeyCrosswalkExistsAndActivefind all crosswalks,check for active OneKey crosswalk with lookupCode included in the configured list oneKeyLookupCodes,check for related inactive referback crosswalk with lookupCode included in the configured list referbackLookupCodes,if above conditions are met, send activator request to Manager,activator request contains:entityType,activated referback crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as pendent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated activator requests" }, { "title": "CrosswalkCleaner Callback", "": "", "pageLink": "/display/GMDM/CrosswalkCleaner+Callback", "content": "DescriptionThis process removes using the hard delete or soft-delete operation crosswalks on Entity or 
Relation objects. There are the following sections in this process.Hard Delete Crosswalks - EntitiesBased on the input configuration removes the crosswalk from once all other crosswalks were removed or inactivated.  Once the source decides to inactivated the crosswalk, associated attributes are removed from the Golden Profile (OV), and in that case Rank attributes delivered by the HUB have to be removed. The process is used to remove orphan HUB_CALLBACK crosswalks that are used in the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) processHard Delete Crosswalks - RelationshipsThis is similar to the above. The only difference here is that the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) process is adding new Rank attributes to the relationship between two objects. Once the relationship is deactivated by the , the orphan HUB_CALLBACK crosswalk is removed. Soft Delete Crosswalks This process does not remove the crosswalk from Reltio. It updates the existing providing additional deleteDate attribute on the soft-deleting crosswalk. In that case in Reltio the corresponding crosswalk becomes inactive. There are three types of soft-deletes:always - soft-delete crosswalks based on the configuration once all other crosswalks are removed or inactivated,whenOneKeyNotExists - soft-delete crosswalks based on the configuration once crosswalk is removed or inactivated. This process is similar to the "always" process by the activation is only based on the crosswalk inactivation,softDeleteOneKeyReferbackCrosswalkWhenOneKeyCrosswalkIsInactive - soft-delete referback crosswalk (lookupCode in configuration) once crosswalk is inactivated.Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-cleaner-in including 'HCO_CHANGED', 'HCP_CHANGED', 'MCO_CHANGED', 'RELATIONSHIP_CHANGED' eventsOnly events with the correct event type are en the checks are activated checking if it is possible to: hard delete entity crosswalkshard delete relationship crosswalkssoft delete crosswalksIt is possible that for one event multiple checks are going to be activated, in that case, multiple output events will be generated. Once the criteria are successfully fulfilled, the events are generated to the "${env}-internal-async-all-cleaner-callbacks" topic to the next processing step in the Manager component. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:CrosswalkCleanerStream (callback package)Process events and calculate hard or soft-delete requests and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated events" }, { "title": "CrosswalkCleanerWithDelay Callback", "": "", "pageLink": "/display/GMDM/CrosswalkCleanerWithDelay+Callback", "content": "DescriptionCrosswalkCleanerWithDelay works similarly to . It is using the same topology, but events are trimmed (eliminateNeedlessData parameter - all the fields other than crosswalks are removed), and, which is most important, deduplication window is duplication window's parameters are configured, there are no default parameters. 
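A sketch of how these two window properties might be set in the callback service configuration; the duplication.duration and duplication.pingInterval keys and the eliminateNeedlessData parameter are named on this page, while the enclosing YAML structure, section name and units are assumptions:

# Hypothetical placement inside the callback service application.yml;
# only the property names are taken from this page.
callback:
  crosswalkCleanerWithDelay:
    eliminateNeedlessData: true   # keep only crosswalks in the deduplicated events
    duplication:
      duration: 8h                # deduplication window size (matches the example below)
      pingInterval: 1h            # how often the window is advanced; unit assumed
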
example:8 hour window ( config: duplication.duration)1 interval ( config: duplication.pingInterval)This means, that the delay is equal to more details on algorithm steps, see CrosswalkCleaner pendenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests" }, { "title": "DanglingAffiliations Callback", "": "", "pageLink": "/display/GMDM/DanglingAffiliations+Callback", "content": "DescriptionDanglingAffiliation Callback consists of two sub-processes:DanglingAffiliations Based On Inactive Objects (legacy)DanglingAffiliations Based On Same Start And End Objects (added in )" }, { "title": "DanglingAffiliations Based On Inactive Objects", "": "", "pageLink": "/display//DanglingAffiliations+Based+On+Inactive+Objects", "content": "DescriptionThe process soft-deletes active relationships between inactivated start or end objects. Based on the configuration only REMOVED or INACTIVATE events are processed. It means that once the Start or End objects becomes inactive process checks the orphan relationship and sends the soft-delete request to the next processing stage. Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-orphanClean-in including 'HCP_REMOVED', 'HCO_REMOVED', 'MCO_REMOVED', 'HCP_INACTIVATED', 'HCO_INACTIVATED', 'MCO_INACTIVATED' eventsOnly events with the correct event type are the next step, the is retrieved from the HUB by StartObjectURI or EndObjectURI.Once the relationship exists and is ACTIVE the Soft-Delete Request is generated to the "${env}-internal-async-all-cleaner-callbacks" topic to the next processing step in the Manager component. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service: (callback package)Process events for inactive entities and calculate soft-delete requests and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated eventsHub StoreRelationship Cache" }, { "title": "DanglingAffiliations Based On Same Start And End Objects", "": "", "pageLink": "/display//DanglingAffiliations+Based+On+Same+Start+And+End+Objects", "content": "DescriptionThis process soft-deletes looping relations - active relations having the same startObject and ch loops can be created in one of two ways:merge-on-the-fly of two entities,manual merge of two entitiesboth of these create a RELATIONSHIP_CHANGED event, so the process is based off of and RELATIONSHIP_CHANGED events.Unlike the other DanglingAffiliations sub-process, this one does not query the cache for relations, because all the required information is in the processed event.Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-orphanClean-in including and RELATIONSHIP_CHANGED eventsOnly events with the correct event type are processed.If there is a country list configured, the event country is also checked before rrent state of relation in the event is checked for the following:is startObject.objectURI the same as endObject.objectURI?is relation active (no endDate is set)?does the relation type match the configured list of relationTypes (only if configured list is not empty)?If all of the above are true, a soft-delete request is generated to the ${env}-internal-async-all-cleaner-callbacks topic to the next processing step in the Manager component. 
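The checks above reduce to a small predicate over the relation carried in the event. A Kotlin sketch of that decision; the data shape and helper names are assumptions, and only the three conditions mirror the steps listed here:

// Hypothetical, trimmed-down view of the relation carried in a RELATIONSHIP event.
data class RelationState(
    val type: String,              // e.g. "configuration/relationTypes/..."
    val startObjectUri: String,
    val endObjectUri: String,
    val endDate: String?           // null or empty means the relation is still active
)

/**
 * Mirrors the checks described above: the relation is a self-loop, it is still active,
 * and (when a relationTypes list is configured) its type is on that list.
 * Returns true when a soft-delete request should be generated for the
 * ${env}-internal-async-all-cleaner-callbacks topic.
 */
fun isDanglingSelfLoop(relation: RelationState, configuredRelationTypes: List<String>): Boolean {
    val isLoop = relation.startObjectUri == relation.endObjectUri
    val isActive = relation.endDate.isNullOrEmpty()
    val typeMatches = configuredRelationTypes.isEmpty() || relation.type in configuredRelationTypes
    return isLoop && isActive && typeMatches
}
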
TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service: (callback package)Process events for relations and calculate soft-delete requests and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated events" }, { "title": "Derived Addresses Callback", "": "", "pageLink": "/display//Derived+Addresses+Callback", "content": "DescriptionThe Callback is a tool for rewriting an Address from to , connected to each other with some type of quence DiagramFlowProcess is a callback. It operates on four topics:${env}-internal-callback-derived-addresses-in – input topic, containing simple events:HCP_CREATEDHCP_CHANGEDHCO_CREATEDHCO_CHANGEDHCO_REMOVEDHCO_INACTIVATEDRELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDRELATIONSHIP_REMOVED${env}-internal-callback-derived-addresses-hcp4calc – internal topic, containing URIs${env}- internal-derived-addresses-hcp-create – Manager bundle topic, processes Addresses sent${env}-internal-async-all-cleaner-callbacks – Manager async topic, cleans orphaned crosswalksStepsAlgorithm has 3 stages: Stage I – Event PublisherEvent Publisher routes all above event types to ${env}-internal-callback-derived-addresses-in topic, optional filtering by country/source. Stage II – Callback Service – Preprocessing StageIf event subType ~ HCP_*:pass targetEntity URI to ${env}-internal-callback-derived-addresses-hcp4calcIf event subtype ~ HCO_*:Find all ACTIVE relations of types ${walkRelationType} ending at this in entityRelations collection.Extract URIs of all HCPs at starts of these relations and send them to topic ${env}-internal-callback-derived-addresses-hcp4calcIf event subtype ~ RELATIONSHIP_*:Find the relation by URI in entityRelations eck if relation type matches the configured ${walkRelationType}Extract URI of the startObject () and send it to the topic ${env}-internal-callback-derived-addresses-hcp4calc Stage III – Callback Service – Main StageInput is HCP by URI in entityHistory collection. 
Check:If we cannot find entity in entityHistory, log error and skipIf found entity has other type than “configuration/entityTypes/”, log error and skipIf entity has status LOST_MERGE/DELETED/INACTIVE, skipIn entityHistory, find all relations of types ${walkRelationType} starting at this , extract at the end of each extracted ) do:Find HCO in entityHistory collectionWrap in a Create HCP Request:Rewrite all sub-attributes from each attributes from ${staticAddedFields}, according to strategy: overwrite or underwrite (add if missing)Add the required Country attribute (rewrite from two crosswalks:Data provider ${hubCrosswalk} with value: ${hcpId}_${hcoId}.Contributor provider Reltio type with HCP nd Create HPC Request to Manager through bundle topicIf has a crosswalk of type and sourceTable as below:type: ${hubCrosswalk.type}sourceTable: ${urceTable}value: ${hcpId}_${hcoId}but its does not match any found, send request to delete the crosswalk to MDM configurations have to be made (examples are for GBL tenants).Callback and handle following section to application.yml in GBL:\ncallback:\n...\n derivedAddresses:\n enabled: true\n walkRelationType: \n - configuration/relationTypes/HasHealthCareRole\n hubCrosswalk:\n type: HUB_Callback\n sourceTable: DerivedAddresses\n staticAddedFields:\n - attributeName: AddressType\n attributeValue: TYS.P\n strategy: over\n inputTopic: ${env}-internal-callback-derived-addresses-in\n hcp4calcTopic: ${env}-internal-callback-derived-addresses-hcp4calc\n outputTopic: ${env}-internal-derived-addresses-hcp-create\n cleanerTopic: ${env}-internal-async-all-cleaner-callbacks\nSince we are adding a new crosswalk, cleaning of which will be handled by the Derived Addresses callback itself, we should exclude this crosswalk from the Crosswalk Cleaner config (similar to crosswalkCleaner:\n ...\n hardDeleteCrosswalkTypes:\n ...\n exclude:\n - type: configuration/sources/HUB_Callback\n sourceTable: DerivedAddresses\nManagerAdd below to the Manager bundle config:\nbundle:\n...\n inputs:\n...\n - topic: "${env}-internal-derived-addresses-hcp-create"\n username: "mdm_callback_service_user"\n defaultOperation: hcp-create\nCheck DQ Rules configuration.If there are any rules that may reject the HUB_Callback/DerivedAddresses HCP Create, an exception should be made. 
Example: Validation Status is required.If is configured to be surrogate, add an exception and new rule, adding MD5 crosswalk to the Address:\n- name: generate address relation and refEnity crosswalk\n preconditions:\n - type: sourceAndSourceTable\n values:\n - source: HUB_Callback\n sourceTable: "DerivedAddresses"\n action:\n type: addressDigest\n value: MD5\n skipRefEntityCreation: false\n skipRefRelationCreation: false\n\n- name: Make surrogate crosswalk on address\n preconditions:\n - type: not\n preconditions:\n - type: sourceAndSourceTable\n values:\n - source: sourceTable: "DerivedAddresses"\n action:\n type: addressCrosswalkValue\n value: surrogate\nEvent PublisherRouting rule has to be added:\n- id: derived_addresses_callback\n destination: "${env}-internal-derived-addresses-in"\n selector: "(conciliationTarget==null)\n && .headers.eventType in ['simple']\n && untry in ['cn']\n && ubtype in ['HCP_CREATED', 'HCP_CHANGED', 'HCO_CREATED', 'HCO_CHANGED', 'HCO_REMOVED', 'HCO_INACTIVATED', 'RELATIONSHIP_CREATED', 'RELATIONSHIP_CHANGED', 'RELATIONSHIP_REMOVED']"\nDependent ComponentsComponentUsageCallback ServiceMain component with flow implementationManagerProcessing HCP Create, Crosswalk Delete operationsEvent PublisherGeneration of incoming events" }, { "title": " for IQVIA model", "": "", "pageLink": "/display/GMDM/HCONames+Callback+for+IQVIA+model", "content": "DescriptionThe names callback is responsible for calculating Names. At first events are filtered, deduplicated and the list of impacted hcp is being evaluated. Then the new are calculated. And finally if there is a need for update, the updates are being send for asynchronous processing in HUB Callback SourceFlow diagramSteps1. Impacted HCP GeneratorListen for the events on the ${env}-internal-callback-hconame-in lter out against the list of predefined countries (, AN, , AR, AW, BS, , , , , BR, , , , , DO, , , , HN, , , , , , , PY, , , , , , , , VE).Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED, , RELATIONSHIP_CHANGED).Split into two following branches. Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc. extract the "Name" attribute from the target entity.2. reject the event if "Name" does not . check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the . find the list of impacted HCPs based on the . return a flat stream of the key and the . key: entities/dL144Hk, impactedHCP: 1, , 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, . map Event to RelationWrapper(type,uRI,country,startURI,endURI,active,,endObjectTyp)2. reject if any of fields missing3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the . find the list of impacted HCPs based on the . return a flat stream of the key and the . key: entities/dL144Hk, impactedHCP: 1, , 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)2.  Names Update StreamListen for the events on the ${env}e incoming list of HCPs is passed to the calculator (described below).The HcoMainCalculatorResult contains hcpUri, a list of entityAddresses and the mainWorkplaceUri (to update)The result is being mapped to the RelationRequest The RelationRequest is generated to the "${env}-internal-hconames-rel-create" topic.3. 
: get from mongo where uri equals calculate MainHCONameget all target for relations (paremeter traverseRelationTypes) when start object id equals r each target (curHCO) doif target is last in hierarchy thenreturn  if target tributes. is on the configured list defined by parameter mainHCOTypeCodes for selected countryreturn if target is on the configured list defined by parameter mainHCOStructurTypeCodes for selected countryreturn if target assofTradeN.FacilityType.lookupCode is on the configured list defined by parameter mainHCOFacilityTypeCodes for selected countryreturn all target when start object id is curHCO.uri (recursive call)update HCP addressesfind address in dress when fEntity.uri= found and address.HCOName<>calcHCOName or thencreate/update relation using sourceTriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:HCONamesUpdateStream (callback package)Evaluates the list of affected HCPs. Based on that the updates being sent when altime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated eventsHub StoreCache" }, { "title": " for COMPANY model", "": "", "pageLink": "/display//HCONames+Callback+for+COMPANY+model", "content": "DescriptionHCONames Callback for COMPANY data model differs from the one for IQVIA llback consists of two stages: preprocessing and main processing. Main processing stage takes in HCP URIs, so the preprocessing stage logic extracts such affected HCPs from , , RELATIONSHIP events.During main processing, Callback calculates trees, where nodes are HCOs (tree root is always the input ) and edges are Relationships. HCOs and MainHCOs are extracted from this tree. MainHCOs are chosen following some business specification from the config. Direct Relationships from HCPs to MainHCOs are created (or cleaned if no longer applicable). If any of 's Addresses matches , adequate sub-attribute is added to this gorithmStage I - preprocessingInput topic: ${env}-internal-callback-hconame-inInput event types:HCO_CREATEDHCO_CHANGEDHCP_CREATEDHCP_CHANGEDRELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDFor each event from the topic:Deduplicate events by key (deduplication window size is configurable),using MongoDB entityRelations collection, build maximum dependency tree (recursive algorithm) consisting of HCPs and HCOs connected with:relations of type equal to hcoHcoTraverseRelationTypes from configuration,relations of type equal to hcoHcpTraverseRelationTypes from configuration,return all HCPs from the dependency tree (all visited HCPs),generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).For each RELATIONSHIP event from the topic:Deduplicate events by key (deduplication window size is configurable),if relation's startObject is :add 's entityURI to result list,if relation's startObject is : similarly to events preprocessing, build dependency tree and return all HCPs from the tree. 
HCP URIs are added to the result list,for each HCP on the result list, generate an event and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).For each event from the topic:Deduplicate events by key (deduplication window size is configurable),generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).Stage II - main processingInput topic: ${env}-internal-callback-hconame-hcp4calcFor each HCP from the topic:Deduplicate by entity URI (deduplication window size is configurable),fetch current state of from MongoDB, entityHistory collection,traversing by relation type from config, find all affiliated HCOs with "CON" descriptors,traversing by relation type from config, find all affiliated HCOs with MainHCO: "I" or "REL.HIE" descriptors,from the "CON" list, find all MainHCO candidates - MainHCO candidate must pass the configured specification. Below is MainHCO spec in EMEA PROD:if not yet existing, create new relationship to MainHCO candidates by generating a request and sending to Manager async topic: ${env}-internal-hconames-rel-create,if existing, but not on candidates list, delete the relationship by generating a request and sending to Manager async topic: ${env}-internal-async-all-cleaner-callbacks,if one of input 's Addresses matches or MainHCO Address, generate a request adding "" or "MainHCO" sub-attribute to the Address and send to Manager async topic: ${env}cessing events1. Find Impacted HCPListen for the events on the ${env}-internal-callback-hconame-in lter out against the list of predefined countries (, IE).Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED, , RELATIONSHIP_CHANGED).Split into two following branches. Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc. extract the "Name" attribute from the target entity.2. reject the event if "Name" does not . check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the . find the list of impacted HCPs based on the . return a flat stream of the key and the . key: entities/dL144Hk, impactedHCP: 1, , 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, . map Event to RelationWrapper(type,uRI,country,startURI,endURI,active,,endObjectTyp)2. reject if any of fields missing3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the . find the list of impacted HCPs based on the . return a flat stream of the key and the . key: entities/dL144Hk, impactedHCP: 1, , 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)2. Select HCOs affiliated with for incoming list of HCPs on the ${env}r each HCP a list of affiliated HCOs is retrieved from a database. relation is based on type:configuration/relationTypes/ContactAffiliationsand description:"CON"3. Find Main HCO traversing hierarchyFor each from the list of selected HCOs above a list of is retrieved from the database.  relation is based on type:configuration/relationTypes/OtherHCOtoHCOAffiliationsand description:"I", "RLE.HIE"The step is being repeated recursively until there are no affiliated HCOs or the Subtype matches the one provided in bTypeCode (STOP condition)The result is being mapped to the RelationRequest The RelationRequest is generated to the "${env}-internal-hconames-rel-create" topic.4.  
in HCP addresses if required So far there are two lists: HCOs affiliated with and ere's a check if HCP fields HCOName and MainHCOName which are also two lists match the names.If not, then the update event is being dress is nested attribute in the model ​Matching by uri must be replaced by matching by the key on attribute values. ​The match key will include , AddressLine1, AddressLine2,City,, Zip5.​The same key is configured in Reltio for address deduping. ​Changes the address key in Reltio must be consulted with HUB team​The target attributes in addresses will be populated by creating new HCP address having the same match key + HCOName and MainHCOName by source. Reltio will match the new address with the existing based on the match key.​Each HCP address will have own HUBCallback crosswalk {type=HUB_Callback, value={Address Attribute URI}, sourceTable=HCO_NAME}​4. Create HCO -> affiliation if not exist Also there's a check if the HCP outgoing relations point to HCOs. Only relations with the type "configuration/relationTypes/ContactAffiliations"and description"MainHCO" are being ropriate relations need to be created and not appropriate removed.Data model DependenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests" }, { "title": "NotMatch Callback", "": "", "pageLink": "/display/GMDM/NotMatch+Callback", "content": "DescriptionThe NotMatch callback was created to clear the potential match queue for the suspect matches when the has been created by the DerivedAffiliationsbatch process. During this batch process, affiliations are created between and objects. The potential match queue is not cleared and this impacts the process because DS does not know what matches have to be processed through the . Potential match queue is cleared during RELATIONSHIP events processing using the "NotMatch callback" process. The process invokes _notMatch operation in and removed these matches from Reltio. All "_notMatch" matches are visible in the in the "Potental Matches"."Not a Match" TAB. Flow diagramStepsEvent publisher publishes simple events to $env-internal-callback-potentialMatchCleaner-in including RELATIONSHIP_CHANGED and RELATIONSHIP_CREATED events with source (limit to only the one loaded through DA batch)Only events with the correct event type are processed: RELATIONSHIP_CHANGED and RELATIONSHIP_CREATEDOnly events with the correct relationship type are processed. Accepted relationship types:FlextoHCOSAffiliationsFlextoDDDAffiliationsFlextoDDDAffiliationsThe HUB AUTOLINK Store is searchedif match exists in the store _notMatch operation is executed in asynchronous modeelse event is skippedAll _notMatch operations are published to the $env-internal-async-all-notmatch-callbacks topic and the Manager process these operations in asynchronous mode. 
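Condensed, the flow above is: filter the relationship event, look the pair of object URIs up in the match store, and emit a not-match request for Manager only when a stored match is found. A Kotlin sketch of that decision; the store interface and request shape are assumptions, while the accepted event and relationship types are the ones listed above:

// Hypothetical view of the match store (MongoDB in the real flow) and of the generated request.
interface MatchStore {
    /** Returns true when an unhandled potential match between the two URIs is cached. */
    fun hasPendingMatch(startObjectUri: String, endObjectUri: String): Boolean
}

data class EntitiesNotMatchRequest(val sourceEntityUri: String, val targetEntityUri: String)

val acceptedEventTypes = setOf("RELATIONSHIP_CREATED", "RELATIONSHIP_CHANGED")
val acceptedRelationTypes = setOf("FlextoHCOSAffiliations", "FlextoDDDAffiliations")

/**
 * Mirrors the steps above: only accepted event and relationship types are considered,
 * and a request is produced only when the match store still holds the pair.
 * The resulting request would be published to the ${env}-internal-async-all-notmatch-callbacks topic.
 */
fun toNotMatchRequest(
    eventType: String,
    relationType: String,
    startObjectUri: String,
    endObjectUri: String,
    store: MatchStore
): EntitiesNotMatchRequest? {
    if (eventType !in acceptedEventTypes) return null
    if (relationType !in acceptedRelationTypes) return null
    if (!store.hasPendingMatch(startObjectUri, endObjectUri)) return null
    return EntitiesNotMatchRequest(startObjectUri, endObjectUri)
}
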
TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:PotentialMatchLinkCleanerStreamprocess relationship events in streaming mode and sets _notMatch in MDMrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerReltio Adapter for _notMatch operation in asynchronous modeHub StoreMatches Store" }, { "title": "PotentialMatchLinkCleaner Callback", "": "", "pageLink": "/display/GMDM/PotentialMatchLinkCleaner+Callback", "content": " accepts relationship events - this is configurable, usually:RELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDFor each event from inbound topic (${env}-internal-callback-potential-match-cleaner-in):event is filtered by eventType (acceptedRelationEventTypes list in configuration),event is filtered by relationship type (acceptedRelationObjectTypes list in configuration),extract startObjectURI and endObjectURI from event targetRelation,search MongoDB, collection entityMatchesHistory, for records having both URIs in matches and having same matchType (matchTypesInCache list in configuration),if found a record in cache, check if it has already been sent (boolean field in the document),if record has not been yet sent, generate a EntitiesNotMatchRequest containing two fields:sourceEntityURI,,add the operation header and send the Request to pendenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests" }, { "title": "PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType)", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThe main part of the process is responsible for setting up the attributes on the specific Attributes in Reltio. Based on the input JSON events, the difference between the RAW entity and the Ranked entity is calculated and changes shared through the asynchronous topic to Manager. Only events that contain no changes are published to the next processing stage, it limits the number of events sent to the external Clients. Only data that is ranked and contains the correct callback is shared further. During processing, if changes are detected main events are skipped and a callback is executed. This will cause the generation of new events in and the next calculation. The next calculation should detect 0 changes but that may occur that process will fall into an infinity loop. Due to this, the MD5 checksum is implemented on the Entity and AttributeUpdate request to percent such a situation. The is the setup with the chain of responsibility with the following steps:Enricher Processor Enrich object with serviceMultMergeProcessor - change the ID of the main entity to the loser Id when is different from - it means that the merge happened between timestamp when generated the EVENT and HUB retrieved the Entitty from Reltio. In that case the outcome entity contains 3 ID rankings - transform entity with correct Ranks attributesBased on the calculated rank generate pre-callback events that will be sent to callback Generation of changes on COMPANYGlobalCustomerIDs Autofill -BricksHCPType Callback Calculate HCPType attribute based on and SubTypeCode canonical Reltio codes.  
reference attributes enriched in the first step (save in mongo only when cleanAdditionalRefAttributes is of inactivated events (for each changed event)OtherHCOtoHCOAffiliations Rankings Generation of the event to full-delay topic to process Ranking changes on relationships objects Flow diagramStepsEntity publishes full enriched events to ${env}-internal-reltio-full-eventsThe event is enriched with additional data required in the ranking process. More details in  that require enrichment of the objects once ranking the on . Rankings are calculated based on the implemented . Based on the activation criteria and the environment configuration the following Rank Sorters are activated:Address RankSorterAddresses RankSorterAffiliation RankSorterEmail RankSorterPhone RankSorterSpecialty RankSorterIdentifier RankSorterBased on the changes between sorted Entity and input entity, Callback is published to the next processing stage. In that case, is skipped.If no new changes are detected, is forwarder to further e enriched data required in the ranking is cleaned. This last step check the incoming event and generates an additional *_INACTIVATED event type once object contains EndDate (is inactive) TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service: (precallback package)Process full events, execute ranking services, generates callbacks, and published calculated events to the componentrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationEntity EnricherGenerates incoming events full eventsManagerProcess callbacks generated by this serviceHub " }, { "title": "Global COMPANY ID callback", "": "", "pageLink": "/display/GMDM/Global+COMPANY+ID+callback", "content": "Proces provides a unique to each entity. The current solution on the side overwrites an entity's when it loses a merge. Global COMPANY ID pre-callback solution was created to contain Global COMPANY Id as a unique value for entity_ fulfill the requirement a solution based on is prepared. It includes elements like below:Modification on Orchestrator/Manager side - during the entity creation processCreation of COMPANYGloballId Pre-callback Modification on entity history to enrich search processLogical ArchitectureModification on Orchestrator/Manager side - during the entity creation processProcess descriptionThe request is sent to the HUB Manager - it may come from each source allowed. Like loading or direct channel. getCOMPANYIdOrRegister service is call and entityURI with is stored in COMPANYIdRegistry From an external system point of view, the response to a client is modified. COMPANY Global Id is a part of the main attributes section in the JSON file (not in a nest). 
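Before the response examples below, the register-or-get step can be sketched in a few lines of Kotlin. The in-memory map, the ID format and the method name getCompanyIdOrRegister are assumptions for illustration; in the real flow the registry (COMPANYIdRegistry) is persisted and the call happens inside the HUB Manager during entity creation.

import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong

// In-memory stand-in for the COMPANYIdRegistry persistence.
class CompanyIdRegistry {
    private val idByEntityUri = ConcurrentHashMap<String, String>()
    private val sequence = AtomicLong(0)

    // Returns the Global COMPANY ID already registered for the entity URI, or registers a new one.
    fun getCompanyIdOrRegister(entityUri: String): String =
        idByEntityUri.computeIfAbsent(entityUri) { "04-%010d".format(sequence.incrementAndGet()) }
}

fun main() {
    val registry = CompanyIdRegistry()
    val first = registry.getCompanyIdOrRegister("entities/19EaDJ5L")
    // A repeated call for the same URI returns the same ID instead of generating a new one.
    check(first == registry.getCompanyIdOrRegister("entities/19EaDJ5L"))
}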
In response, there are information about true and false{    "uri": "entities/19EaDJ5L",    "status": "created",    "errorCode": null,    "errorMessage": null,    "COMPANYGlobalCustomerID": "",    "crosswalk": {        "type": "configuration/sources/RX_AUDIT",        "value": "test1_104421022022_RX_AUDIT_1",        "deleteDate": ""    }}{    "uri": "entities/entityURI",    "type": "configuration/entityTypes/",    "createdBy": "username",    "createdTime": ,    "updatedBy": "username",    "updatedTime": ,"attributes": "COMPANYGlobalCustomerID": [            {                "type": "configuration/entityTypes//attributes/COMPANYGlobalCustomerID",                "ov": true,                "value": "04-",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrkG2D"            },            {                "type": "configuration/entityTypes//attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrosrm"            },            {                "type": "configuration/entityTypes//attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrhcNY"            },            {                "type": "configuration/entityTypes//attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrnM10"            },            {                "type": "configuration/entityTypes//attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1mrOsvf6P"            },            {                "type": "configuration/entityTypes//attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2ZNzEowk3"            },            {                "type": "configuration/entityTypes//attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "04-",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2LG7Grmul"            }        ],3. How to store GlobalCOMPANYId process diagram - business eation of COMPANYGlobalId Pre-callbackA publisher event model is extended with two new values:COMPANYGlobalCustomerIDs - list of ID. For some merge events, there is two entityURI ID. The order of the IDs must match the order of the IDs in entitiURI rentCOMPANYGlobalCustomerID - it has value only for the event type. It contains winner entityURI.data class PublisherEvent(val eventType: ?,                          val eventTime: Long? = null,                          val entityModificationTime: Long? = null,                          val countryCode: String? = null,                          val entitiesURIs: List = emptyList(),                          val targetEntity: Entity? = null,                          val targetRelation: Relation? = null,                          val targetChangeRequest: ? = null,                          val dictionaryItem: DictionaryItem? = null,                          val mdmSource: String?,                          val viewName: String? = ,                          val matches: List? 
= null,                          val COMPANYGlobalCustomerIDs: List = emptyList(),                          val parentCOMPANYGlobalCustomerID: String? = null,                          @JsonIgnore                          val checksumChanged: = false,                          @JsonIgnore                          val isPartialUpdate: = false,                          @JsonIgnore                          val isReconciliation: = falseThere are made changes in  entityHistory collection on MongoDB sideFor each object in a collection, we store also COMPANYGlobalCustomerID:to have a relation between entityURI and COMPANYGLobalCustomerId to make a possible search for an entity that lost merge Additionally, new fields are stored in the Snowflake structure in %_ and %_ views in CUSTOMER_SL schema, like:COMPANY_GLOBAL_CUSTOMER_IDPARENT_COMPANY_GLOBAL_CUSTOMER_IDFrom an external system point of view, those internal changes are prepared to make a GlobalCOMPANYID filed case of overwriting GLobalCOMPANYID on side (lost merge) pre-callback main task is to search for an original value in . It will then insert this value into that entity in that has been overwritten due to lost cess diagram: Search LOST_MERGE entity with its first Global COMPANY IDProcess diagram:Process description:MDM HUB gets SEARCH calls from an external system. The search parameter is Global COMPANY rification entity status.  If entity status is 'LOST_MERGE' then replace in search request PfiezrGlobalCustomerId to parentCOMPANYGlobalCustomerIdMake a search call in Reltio with enriched dataDependent components" }, { "title": "", "": "", "pageLink": "/display/GMDM/Canada+Micro-Bricks", "content": "DescriptionThe process was designed to auto-fill the values on Addresses for market entities. The process is based on the events streaming, the main event is recalculated based on the current state and during comparison, the current mapping file the changes are generated. The generated change (partial event) updates the which leads to another change. Only when the entity is fully updated the main event is published to the output topic and processed in the next stage in the event publisher. The process also registers the Changelog events on the topic. the Changelog events are saved only when the state of the entity is not partial. The Changelog events are required in the that is triggered by the Airflow DAG. Business users may change the mapping file, this triggers the reload process, changelog events are processed and the updates are generated in r , we created a new brick type "Micro Brick" and implemented a new pre-callback service to populate the brick codes based on the postal code mapping file:95% of postal codes won't be in the file and the code should be set to the first characters of the postal codeThe mapping file will contain postal code - MicroBrick code pairsThe mapping file will be delivered , usually with no change.  However, 1-2 a year the Business will go thru a re-mapping exercise that could cause significant change.  Also, a few minor changes may happen (e.g., add new pair, etc.). A change process will be added to the scheduler as a DAG. This will be scheduled and will generate the export from the , when there will be mapping changes changelog events will trigger update to the existing codes in Reltio. 
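The mapping lookup and the resulting change types described here (and detailed in the PreCallback Logic steps below) can be sketched as follows. MicroBrickMapping, BrickDecision and the numberOfPostalCodeCharacters default of 3 are illustrative assumptions; the real mapping file is loaded from Consul.

// Possible outcomes of checking a single address against the mapping file.
sealed interface BrickDecision
data class InsertAttribute(val addressUri: String, val brickValue: String) : BrickDecision
data class UpdateAttribute(val brickUri: String, val brickValue: String) : BrickDecision
data class ChangelogOnly(val addressUri: String, val brickValue: String) : BrickDecision

class MicroBrickMapping(
    private val postalToBrick: Map<String, String>,
    private val numberOfPostalCodeCharacters: Int = 3
) {
    // Roughly 95% of postal codes are not in the file; the fallback is the first N characters.
    fun brickFor(postalCode: String): String =
        postalToBrick[postalCode] ?: postalCode.take(numberOfPostalCodeCharacters)

    fun decide(addressUri: String, postalCode: String, currentBrickUri: String?, currentBrickValue: String?): BrickDecision {
        val expected = brickFor(postalCode)
        return when {
            currentBrickValue == null -> InsertAttribute(addressUri, expected)                          // no Micro Brick yet
            currentBrickValue != expected -> UpdateAttribute(requireNotNull(currentBrickUri), expected) // out of sync: partial update
            else -> ChangelogOnly(addressUri, expected)                                                 // in sync: only a changelog event
        }
    }
}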
A new code has been added for - "UGM"Flow diagramLogical ArchitecturePreCallback LogicReload LogicStepsOverview Reltio attributesBrick"uri": "configuration/entityTypes//attributes/Addresses/attributes/Brick",                Brick Type:                RDM: A new code has been added for - "UGM"                                    "uri": "configuration/entityTypes//attributes/Addresses/attributes/Brick/attributes/Type",                                    "lookupCode": "rdm/lookupTypes/BrickType",                Brick Value:                                    "uri": "configuration/entityTypes//attributes/Addresses/attributes/Brick/attributes/Value",                                    "lookupCode": "rdm/lookupTypes/BrickValue",PostalCode:"uri": "configuration/entityTypes//attributes/Addresses/attributes/Zip5",Canada postal codes format:e.g: K1A 0B1PreCallback LogicFlow:Activation:Check if feature flag activation is true and the acceptedCountires list contains entity countryTake into account only the CHANGED and CREATED events in this pre-callback implementationSteps:For each address in the entity check:Check if the contains BrickType= microBrickType and BrickValue!=null and PostalCode!=nullCheck if is in the fileif true compareif different generate UPDATE_ATTRIBUTEif in sync add with all attributes to MicroBrickChangelogif false compare with “numberOfPostalCodeCharacters” from different generate UPDATE_ATTRIBUTEif in sync add with all attributes to MicroBrickChangelogCheck if Address does not contain BrickType= microBrickType and BrickValue==null and !=nullcheck if is in the fileif true generate INSERT_ATTRIBUTEif false get “numberOfPostalCodeCharacters” from and generate INSERT_ATTRIBUTEAfter the Addresses array is checked, the main event is blocked when partial. Only when there are 0 changes main event is forwardedif there are changes send partialUpdate and skip the main event depending on the forwardMainEventsDuringPartialUpdateif there are 0 changes send and push to the changelog topicNote: The service contains 2 roles – the main role is to check for each address with a mapping file and generate MicroBrick Changes (INSERT (initial) UPDATE (changes)). The second role is to push events when we detected 0 changes. It means this flow should keep in sync the changelog topic with all changes that are happening in Reltio (address was added/removed/changed). Because will work on these changelog events and requires the exact URI to the this service needs to push all events with calculatedMicroBrickUri and calculatedMicroBrickValue and current value on postalCode for specific address represented by the address load Logic (Airflow DAG)Flow:  users make changes on the side to micro bricks epsDAG is scheduled once a month and process changes made by the Business users, this triggers the Reload Logic on Callback-Service componentsGet changes from snowflake and generate the fileIf there are 0 changes END the processIf there are change in the file push the changes to the Consul. Load current Configuration to GIT and push micro-bricks-mapping.csv to igger API call on to reload Consul configuration - this will cause that Pre-Callback processors and the will now use new mapping files. 
Only after this operation is successful go to the next step:Copy events from current topic to reload topic using tmp fileCopy events from current topic to reload topic using temporary fileNote: the micro-brick process is divided into 2 steps Pre-Callback generated ChangeLog events to the $env-internal-microbricks-changelog-eventsReload service is reading the events from $env-internal-microbricks-changelog-reload-eventsThe main goal here is to copy events from one topic to another using Console Producer and Consumer. Copy is made by the Console Consumer, we are generating a temporary file with all events, has to poll all events, and wait until no new events are in the topic. After this time Console Producer should send all events to the target ter events are in the target $env-internal-microbricks-changelog-reload-events topic the next step described below starts automatically. Reload Logic (Callback-Service)Flow:Activation: Exposes to reload Consul Configuration - because these changes are made once per month , there is no need to schedule this process in service internally. Reload is made by the and reloads mapping file inside y after Consul Configuration is reloaded the events are pushed from the $env-internal-microbricks-changelog-events to the $is triggers the MicroBrickReloadService because it is based on the Kafka-Streams – service is subscribing to events in real-timeSteps:New events to the $env-internal-microbricks-changelog-reload-events will trigger the following: consumer that will read the each event check:for each address in addresses changes check:check if is in the fileif true and the current mapping value is different than calculatedMicroBrickValue  → generate UPDATE_ATTRIBUTEif false and calculatedMicroBrickValue is different than “numberOfPostalCodeCharacters” from → generate UPDATE_ATTRIBUTEGather all changes and push them to the $env-internal-async-all-bulk-callbacksThe reload is required because it may happen that:A new row was addedThen AddressChange.postalCode will be in the which means that calculatedMicroBrickValue will be different than the one that we now have in the mapping file so we need to trigger UPDATE_e existing row was updatedThen AddressChange.postalCode will be in the and the calculatedMicroBrickValue will be different than the one that we now have in the mapping file so we need to trigger UPDATE_ATTRIBUTEThe existing row was removedThen AddressChange.postalCode will be missing in the mapping file, then we are going to compare calculatedMicroBrickValue with “numberOfPostalCodeCharacters” from , this will be a difference so UPDATE_ATTRIBUTE will be generatedNote: The data model requires the calculatedMicroBrickUri because we need to trigger UPDATE_ATTRIBUE on the specified BrickValue on a specific Address so an exact URI is required to work properly with the Reltio UPDATE_ATTRIBUTE operation. Only INSERT_ATTRIBUTE requires the only on the address attribute, and the body will contain and (this insert is handled in the pre-callback implementation). The changes made by will generate the next changes after the mapping file was updated. Once we trigger this event Reltio will generate the change, this change will be processed by the pre-callback service (MicroBrickProcessor). The result of this processor will be no-change-detected (entity and mapping file are in sync) and new CHANGELOG event generation. 
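A minimal sketch of that reload comparison, assuming AddressChange mirrors the changelog payload documented further below and fallbackChars stands for the numberOfPostalCodeCharacters default. It covers all three business cases (row added, row updated, row removed): in each of them the newly expected value differs from the previously calculated one, so an UPDATE_ATTRIBUTE callback is produced.

// Mirrors the MicroBrickChangelog / AddressChange payload (field names simplified).
data class AddressChange(
    val addressUri: String,
    val postalCode: String,
    val calculatedMicroBrickUri: String,
    val calculatedMicroBrickValue: String
)

data class UpdateAttributeCallback(val brickUri: String, val newValue: String)

// newMapping is the refreshed postal code -> Micro Brick mapping loaded from Consul.
fun reloadChanges(changes: List<AddressChange>, newMapping: Map<String, String>, fallbackChars: Int = 3): List<UpdateAttributeCallback> =
    changes.mapNotNull { change ->
        val expected = newMapping[change.postalCode] ?: change.postalCode.take(fallbackChars)
        if (expected != change.calculatedMicroBrickValue)
            UpdateAttributeCallback(change.calculatedMicroBrickUri, expected)  // out of sync after reload
        else
            null                                                               // still in sync: nothing to do
    }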
It may happen that during run new Changelog events will be constantly generated, but this will not impact the current process because events from the original topic to the target topic are triggered by the manual copy during reloading. Additionally, compaction window on will overwrite old changes with new changes generated from pre-callback. So we will have only one newest key on kafka topic after this time, and these changes will be copied to reload process after the next business change (1-2 times a year)Attachment docs with more details:IMPL: TEST: and ConfigurationChangeLog Event\nCHANGELOG Event:\n\nKafka KEY: entityUri\n\nBody:\ndata class MicroBrickChangelog(\n val entityUri: addressesChanges: List,\n)\ndata class AddressChange(\n val addressUri: postalCode: calculatedMicroBrickUri: calculatedMicroBrickValue: String,\n)\n\n\nTriggersTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback: LogicFull events trigger pre-callback stream and during processing, partial events are processed with generated changes. If data is in sync partial event is not generated, and the main event is forwarded to external clientsrealtime - events streamUser  - triggers a change in mappingAPI: Callback-service - sync consul :ReloadService - streamingThe business user changes the mapping file. Process refreshed Consul store, copies data to changelog topic and this triggers real-time processing on Reload serviceManual Trigger by - events streamDependent componentsComponentUsageCallback ServiceMain component of flow implementationEntity EnricherGenerates incoming events full eventsManagerProcess callbacks generated by this service" }, { "title": "RankSorters", "": "", "pageLink": "/display/GMDM/RankSorters", "content": "" }, { "title": "Address RankSorter", "": "", "pageLink": "/display//Address+RankSorter", "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Address provided by source "Reltio" is higher in the hierarchy than the Address provided by "" source. Based on this configuration, each specialty will be sorted in the following order:addressSource: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "": 5 "NUCLEUS": 6 "": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "": 14 "FACE": 15 "KOL_OneView": 16 "": 17 "GCP": 18 "": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": , Address Rank Sorting is based on the following configuration:Address will be sorted based on attribute in the following order:addressType: "[TYS.P]": 1 "[YS]": 2 "[TYS.S]": 3 "[TYS.L]": 4 "[TYS.M]": 5 "[Mailing]": 6 "[TYS.F]": 7 "[TYS.HEAD]": 8 "[AR]": 9 "[Unknown]": 10Address will be sorted based on attribute in the following order:addressValidationStatus: "[STA.3]": 1 "[validated]": 2 "[Y]": 3 "[STA.0]": 4 "[pending]": 5 "[NEW]": 6 "[RNEW]": 7 "[selfvalidated]": 8 "[SVALD]": 9 "[preregister]": 10 "[notapplicable]": 11 "[N]": 97 "[notvalidated]": 98 "[STA.9]": 99Address will be sorted based on Status attribute in the following order:addressStatus: "[VALD]": 1 "[]": 2 "[INAC]": 98 "[]": 99Address rank sort process operates under the following conditions:First, before address ranking the Affiliation RankSorter have to be executed. It is required to get the appropriate value on the imaryAffiliationIndicator attribute valueEach address is sorted with the following rules:sort by the PrimaryAffiliationIndicator value. 
The address with "true" values is ranked higher in the hierarchy. The attribute used in this step is taken from the imaryAffiliationIndicatorsort by Validation Status (lowest rank from the configuration on TOP) - attribute lidationStatussort by Status (lowest rank from the configuration on TOP) - attribute atussort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the osswalks, means that each address is associated with the appropriate crosswalk and based on the input configuration the order is rt by (true value wins against false value) - attribute imaryAffiliationsort by Address Type (lowest rank from the configuration on TOP) - attribute  by Rank (lowers rank on TOP) in descending order 1 -> 99 - attribute  by (highest date on TOP) in descending order 2017.07 -> 2017.06 - attribute osswalks.updateDatesort by Label value alphabetically in ascending order A -> Z - attribute belSorted addresses are recalculated for the new Rank – each Address Rank is reassigned with an appropriate number from lowest to ditionally:When leteDate exists, then the address is excluded from the sorting processWhen recalculated has a value equal to "1" then attribute is added with the value set to "true"Address rank sort process fallback operates under the following conditions:During Validation Status from configuration (, 1.b) sorting, when attribute is missing address, is placed on 90 position ( which means that empty validation status is higher in the ranking than e.g. STA.9 status)During Status from configuration (1.c) sorting when the attribute is missing address is placed on 90 position (which means that empty status is higher in the ranking than e.g. status)When Source system name (1.d) is missing address, address is placed on 99 positionWhen address Type (1.e) is empty, address is placed on 99 positionWhen Rank (1.f) is empty, address is placed on 99 positionFor multiple Address Types for the same relation – an address with a higher rank is takenBusiness requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Addresses RankSorter", "": "", "pageLink": "/display//Addresses+RankSorter", "content": "GLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Address provided by source "" is higher in the hierarchy than the Address provided by "COV" source. Configuration is divided by country and source lists, for which this order is applicable.  
Based on this configuration, each address will be sorted in the following order:addressesSource: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "IQVIA_RAWDEA" : 3 "IQVIA_DDD" : 4 "HCOS" : 5 "SAP" : 6 "SAPVENDOR" : 7 "COV" : 8 "DVA" : 9 "ENGAGE" : 10 "KOL_OneView" : 11 "ONEMED" : 11 "ICUE" : 12 "DDDV" : 13 "MMIT" : 14 "MILLIMAN_MCO" : 15 "SHS": 16 "COMPANY_ACCTS" : 17 "" : 18 "SEAGEN": 19 "CENTRIS" : 20 "ASTELAS" : 21 "EMD_SERONO" : 22 "MAPP" : 23 "" : 24 "VALKRE" : 25 "THUB" : 26 "PTRS" : 27 "MEDISPEND" : 28 "PORZIO" : 29 Additionally, Addresses Rank Sorting is based on the following configuration:The address will be sorted based on attribute in the following order:addressType: "[OFFICE]": 1 "[PHYSICAL]": 2 "[MAIN]": 3 "[SHIPPING]": 4 "[MAILING]": 5 "[BILLING]": 6 "[SOLD_TO]": 7 "[HOME]": 8 "[PO_BOX]": 9Address rank sort process operates under the following conditions:Each address is sorted with the following rules:sort by address status (active addresses on top) - attribute Status (is Active)sort by the source order number from input source order configuration (lowest rank from the configuration on TOP) - source is taken from last updated crosswalk osswalks.updateDate once multiple from the same sourcesort by flag ( only with flag set to true on top) - attribute DEAFlagsort by (true on top) - attribute SingleAddressIndsort by Source Rank (lowers rank on TOP) in descending order 1 -> 99 - for rank is calculated with minus sign - attribute by address type of and only (lowest rank from the configuration on TOP) - attribute AddressTypesort by COMPANYAddressId (addresses with this attribute are on top) - attribute COMPANYAddressIDSorted addresses are recalculated for new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRankAdditionally:When leteDate exists, then the address is excluded from the sorting explaining reverse rankings for Addresses:Here is the clarification:The minus rank can be related only to source and will be related to the lowest precedence l other sources, different than , contains the normal source precedence - it means that the SourceRank 1 will be on top. We will sort attribute in ascending order 1 -> 99 (lowest source rank on TOP), so SourceRank 1 will be first, SourceRank and so on.Due to the data in - That rank code is a number from 10 to -10 with the larger number (i.e., 10) being the top ranked. We have a logic that makes an opposite ranking on attribute. We are sorting in descending order …, meaning that the rank 10 will be on TOP (highest source rank on have reverse the logic for , otherwise it led to -10 ranked on contains minus sign and are ranked in descending order. (10,9,8…-1,-2..-10)I am sorry for the confusion that was made in previous is opposite logic for data is in:Addresses:  feature requires the following configuration:Address SourceThis map contains sources with appropriate sort numbers, which means e.g. Configuration is divided by country and source lists, for which this order is applicable. Address provided by source "Reltio" is higher in the hierarchy than the Address provided by "" source. 
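The reverse-ranking note above reduces to negating that one source's SourceRank before the single ascending sort, so that its value 10 sorts first while every other source keeps the usual 1-is-best order. A minimal sketch; specialSource is a placeholder for the source name redacted in this export.

// Negate the rank only for the source whose provider ranks from 10 (best) down to -10.
fun effectiveSourceRank(source: String, sourceRank: Int, specialSource: String): Int =
    if (source == specialSource) -sourceRank else sourceRank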
Based on this configuration, each address will be sorted in the following order:EMEAaddressesSource: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 SAP: 3 SAPVENDOR: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 : 12 BIODOSE: 13 : 14 CH: 15 HCH: 16 CSL: 17 1CKOL: 18 VEEVALINK: 19 VALKRE: 201 THUB: 21 PTRS: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 SAP: 4 SAPVENDOR: 5 ENGAGE: 6 MAPP: 7 PFORCERX: 8 PFORCERX_ODS: 8 KOL_OneView: 9 ONEMED: 9 SEAGEN: 10 GRV: 11 GCP: 12 : 13 : 14 PULSE_KAM: 15 WEBINAR: 16 DREAMWEAVER: 17 EVENTHUB: 18 SPRINKLR: 19 VEEVALINK: 20 VALKRE: 21 THUB: 22 PTRS: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAMERaddressesSource: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 : 3 IMSO: 4 : 5 PFCA: 6 : 7 PFORCERX: 8 PFORCERX_ODS: 8 : 9 SAPVENDOR: 10 LEGACY_SFA_IDL: 11 ENGAGE: 12 : 13 : 14 : 15 KOL_OneView: 16 ONEMED: 16 : 17 : 18 RX_AUDIT: 19 : 20 VALKRE: 21 THUB: 22 PTRS: 23 : 24 : 25 sources: - ALLAPACaddressesSource: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 : 3 FACE: 4 : 5 CN3RDPARTY: 6 : 7 PFORCERX_ODS: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 : 10 GCP: 11 : 12 : 13 : 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 : 2 : 3 : 4 : 5 PFORCERX_ODS: 5 SAP: 6 SAPVENDOR: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 : 10 SEAGEN: 11 : 12 : 13 : 14 PCMS: 15 WEBINAR: 16 DREAMWEAVER: 17 EVENTHUB: 18 : 19 : 20 VALKRE: 21 THUB: 22 PTRS: 23 : 24 : 25 sources: - ALLAddress Type attribute:This map contains attribute values with appropriate sort numbers, which means e.g. Address Type AT.OFF is higher in the hierarchy than . Based on this configuration, each address will be sorted in the following order:addressType: "[OFF]": 1 "[BUS]": 2 "[DEL]": 3 "[LGL]": 4 "[MAIL]": 5 "[BILL]": 6 "[HOM]": 7 "[UNSP]": 99 Address Status attributeThis map contains Address Status attribute values with appropriate sort numbers, which means e.g. Address Status VALID is higher in the hierarchy than the Address Status ACTV. 
Based on this configuration, each address will be sorted in the following order:addressStatus: "[AS.VLD]": 1 "[TV]": 1   NULL: 90 "[AC]": 99 "[VLD]": 99Address rank sort process operates under the following conditions:Each address is sorted with the following rules: sort by Primary affiliation indicator - address related to affiliation with primary usage tag on top, and addresses are compared by fields: , AddressLine1, , StateProvince and Zip5sort by imary attribute - primary addresses on TOP - applicable only for entitiessort by address status atus (contains the configuration)sort by the source order number from input source order configuration (lowest rank from the configuration on TOP) - source is taken from the last updated crosswalk osswalks.updateDate once multiple from the same sourcesort by address type (lowest rank from the configuration on TOP) - attribute dressTypesort by Source Rank (lowers rank on TOP) in descending order 1 -> 99 - attribute by COMPANYAddressId (addresses with this attribute are on top) - attribute PANYAddressIDsort by address label (alphabetically from A to Z)Sorted addresses are recalculated for new Rank – each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRankAdditionally:When leteDate exists, then the address is excluded from the sorting processBusiness requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Affiliation RankSorter", "": "", "pageLink": "/display//Affiliation+RankSorter", "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. provided by source "Reltio" is higher in the hierarchy than the provided by "" source. Based on this configuration, each specialty will be sorted in the following order:affiliation: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "": 5 "NUCLEUS": 6 "": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "": 14 "FACE": 15 "KOL_OneView": 16 "": 17 "GCP": 18 "": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23The affiliation rank sort process operates under the following conditions:Each workplace is sorted with the following rules:sort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the osswalks, means that each address is associated with the appropriate crosswalk, and based on the input configuration the order is rt by (highest date on TOP) in descending order 2017.07 -> 2017.06 - attribute osswalks.updateDatesort by Label value alphabetically in ascending order A -> Z - attribute belSorted workplaces are recalculated for the new PrimaryAffiliationIndicator attribute – each is reassigned with an appropriate value. The winner gets the "true" on the PrimaryAffiliationIndicator. Any looser, if exists is reasigned to "false"Additionally:When leteDate exists, then the workplace is excluded from the sorting processGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. with name "35" is higher in the hierarchy than with the name "27". 
Based on this configuration, each affiliation will be sorted in the following order:facilityType: "35": 1 "": 1 "34": 1 "27": 2Each affiliation before sorting is enriched with the attribute which contains information about because there are attributes that are needed during filiation rank sort process operates under the following conditions:Each affiliation is sorted with the following rulessort by facility type (the lower number is on top) - attribute ClassofTradeN.FacilityTypesort by affiliation confidence code DESC(the higher number or if exists it is on top) - attribute filiationConfidenceCodesort by staffed beds (if it exists it is higher and higher number on top) - attribute Bed.Type("StaffedBeds").Totalsort by total prescribers (if it exists it is higher and higher number on top) - attribute by org identifier (if it exists it is higher and if not it compares is as a string) - attribute Identifiers.Type("HCOS_ORG_ID").IDSorted affiliation are recalculated for new - each is reassigned with an appropriate number from lowest to highest - attribute RankAffiliation with Rank = "1" is enriched with the  attribute with the "Primary" ditionally:If facility type is not found it is set to 99EMEA//APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. provided by source "Reltio" is higher in the hierarchy than provided by "" source.  Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each specialty will be sorted in the following order:EMEAaffiliation: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 SAP: 3 SAPVENDOR: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 VALKRE: 10 GRV: 11 GCP: 12 : 13 BIODOSE: 14 BUPA: 15 CH: 16 HCH: 17 CSL: 18 THUB: 19 PTRS: 20 1CKOL: 21 MEDISPEND: 22 VEEVALINK: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 SAP: 4 SAPVENDOR: 5 PFORCERX: 6 PFORCERX_ODS: 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 VALKRE: 11 GRV: 12 GCP: 13 : 14 : 15 PULSE_KAM: 16 WEBINAR: 17 DREAMWEAVER: 18 EVENTHUB: 19 SPRINKLR: 20 THUB: 21 PTRS: 22 VEEVALINK: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALL AMERaffiliation: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 : 3 : 4 SAPVENDOR: 5 : 6 PFORCERX_ODS: 6 KOL_OneView: 7 ONEMED: 7 LEGACY_SFA_IDL: 8 ENGAGE: 9 : 10 SEAGEN: 11 VALKRE: 12 : 13 : 14 : 15 IMSO: 16 : 17 PFCA: 18 : 19 THUB: 20 PTRS: 21 RX_AUDIT: 22 : 23 : 24 : 25 sources: - ALLAPACaffiliation: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 : 3 FACE: 4 : 5 CN3RDPARTY: 6 GCP: 7 : 8 : 9 PFORCERX_ODS: 9 KOL_OneView: 10 ONEMED: 10 ENGAGE: 11 : 12 VALKRE: 13 : 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 : 2 : 3 : 4 : 5 SAPVENDOR: 6 : 7 PFORCERX_ODS: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 : 10 SEAGEN: 11 VALKRE: 12 : 13 : 14 : 15 : 16 WEBINAR: 17 DREAMWEAVER: 18 EVENTHUB: 19 : 20 THUB: 21 PTRS: 22 : 23 : 24 : 25 sources: - ALLThe affiliation rank sort process operates under the following conditions:Each contact affiliation is sorted with the following rules:sort by affiliation status - active on topsort by source prioritysort by source rank - attribute , ascendingsort by confidence level - attribute filiationConfidenceCodesort by attribute last updated date - newest at the topsort by Label value 
alphabetically in ascending order A -> Z - attribute belSorted contact affiliations are recalculated for the new primary usage tag attribute – each contact affiliation is reassigned with an appropriate value. The winner gets the "true" on the primary usage ditionally:When leteDate exists, then the workplace is excluded from the sorting processBusiness requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Email RankSorter", "": "", "pageLink": "/display/GMDM/Email+RankSorter", "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. provided by source "1CKOL" is higher in the hierarchy than Email provided by any other source. Based on this configuration, each email address will be sorted in the following order:email: - countries: - "ALL" sources: - "ALL" rankSortOrder: "1CKOL": 1Email rank sort process operates under the following conditions:Each email is sorted with the following rulesGroup by the TypeIMS attribute and sort each group:sort by source rank (the lower number on top of the one with this attribute)sort by the validation status (VALID value is the winner) - attribute ValidationStatussort by (highest date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDatesort by email value alphabetically in ascending order A -> Z - attribute Sorted emails are recalculated for the new Rank - each Email Rank is reassigned with an appropriate numberGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. provided by source "" is higher in the hierarchy than Email provided by "" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each email address will be sorted in the following order:email: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "" : 2 "ENGAGE" : 3 "KOL_OneView" : 4 "ONEMED" : 4 "ICUE" : 5 "MAPP" : 6 "ONEKEY" : 7 "SHS" : 8 "": 9 "SEAGEN": 10 "CENTRIS" : 11 "ASTELAS" : 12 "EMD_SERONO" : 13 "" : 14 "IQVIA_RAWDEA" : 15 "COV" : 16 "THUB" : 17 "PTRS" : 18 "SAP" : 19 "SAPVENDOR": 20 "IQVIA_DDD" : 22 "VALKRE": 23 "MEDISPEND" : 24 "PORZIO" : 25Email rank sort process operates under the following conditions:Each email is sorted with the following rulessort by source order (the lower number on top)sort by source rank (the lower number on top of the one with this attribute)Sorted email are recalculated for new Rank - each Email Rank is reassigned with an appropriate numberEMEA/AMER/APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. provided by source "Reltio" is higher in the hierarchy than Email provided by "GCP" source. Configuration is divided by country and source lists, for which this order is applicable. 
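The Email RankSorter above follows the same group-then-rank pattern as the other sorters: emails are grouped by TypeIMS and the Rank is reassigned within each group. A simplified Kotlin sketch, with the Email fields and the VALID check as assumptions (the source-order lookup and update-date key are omitted for brevity):

// Illustrative email nest (GLOBAL - IQVIA model).
data class Email(
    val typeIms: String?,
    val validationStatus: String?,
    val sourceRank: Int?,
    val address: String,
    var rank: Int = 0
)

fun rankEmails(emails: List<Email>): List<Email> =
    emails.groupBy { it.typeIms }.values.flatMap { group ->
        group.sortedWith(
            compareBy<Email> { it.sourceRank ?: 99 }                        // lower source rank wins
                .thenBy { if (it.validationStatus == "VALID") 0 else 1 }    // VALID beats everything else
                .thenBy { it.address }                                      // alphabetical fallback
        ).onEachIndexed { index, email -> email.rank = index + 1 }          // Rank reassigned per group
    }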
Based on this configuration, each email address will be sorted in the following order:: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 1CKOL: 2 GCP: 3 GRV: 4 : 5 ENGAGE: 6 MAPP: 7 VEEVALINK: 8 SEAGEN: 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 THUB: 12 PTRS: 13 ONEKEY: 14 SAP: 15 SAPVENDOR: 16 : 17 BIODOSE: 18 BUPA: 19 CH: 20 HCH: 21 CSL: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 GCP: 2 GRV: 3 : 4 ENGAGE: 5 MAPP: 6 VEEVALINK: 7 SEAGEN: 8 KOL_OneView: 9 ONEMED: 9 PULSE_KAM: 10 SPRINKLR: 11 WEBINAR: 12 DREAMWEAVER: 13 EVENTHUB: 14 PFORCERX: 15 PFORCERX_ODS: 15 THUB: 16 PTRS: 17 ONEKEY: 18 MEDPAGESHCP: 19 MEDPAGESHCO: 19 SAP: 20 SAPVENDOR: 21 : 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLAMERemail: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 GCP: 3 : 4 : 5 ENGAGE: 6 : 7 : 8 SEAGEN: 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 : 12 IMSO: 13 : 14 PFCA: 15 : 16 THUB: 17 PTRS: 18 : 19 SAPVENDOR: 20 LEGACY_SFA_IDL: 21 RX_AUDIT: 22 : 23 : 24 sources: - ALLAPACemail: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 : 3 FACE: 4 : 5 CN3RDPARTY: 6 ENGAGE: 7 : 8 : 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 THUB: 12 PTRS: 13 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 : 2 PCMS: 3 GCP: 4 : 5 : 6 ENGAGE: 7 : 8 : 9 SEAGEN: 10 KOL_OneView: 11 ONEMED: 11 : 12 WEBINAR: 13 : 14 EVENTHUB: 15 : 16 PFORCERX_ODS: 16 THUB: 17 PTRS: 18 : 19 : 20 : 21 SAPVENDOR: 22 : 23 : 24 sources: - rank sort process operates under the following conditions:Each email is sorted with the following rules sort by cleanser status - valid/invalidsort by source order (the lower number on top)sort by source rank (the lower number on top of the one with this attribute)sort by last updated date - newest at the topsort by email value alphabetically in ascending order A -> Z - attribute belSorted email are recalculated for new Rank - each Email Rank is reassigned with an appropriate numberBusiness requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Identifier RankSorter", "": "", "pageLink": "/display//Identifier+RankSorter", "content": "IQVIA Model (Global)AlgorithmThe identifier rank sort process operates under the following conditions:Each Identifier is grouped by Identifier Type: e.g GRV_ID / GCP ID / MI_ID / Physician_Code /. .. – each group is sorted separately.Each group is sorted with the following rules:By identifier "Source System order configuration" (lowest rank from the configuration on TOP)By identifier Order (lower ranks on TOP) in descending order 1 -> 99 - attribute OrderBy update date () (highest date on TOP) in descending order 2017.07 -> 2017.06  - attribute crosswalks.updateDateBy Identifier value (alphabetically in ascending order A -> Z)Sorted identifiers are optionally deduplicated (by Identifier Type in each group) – from each group, the lowest in rank and the duplicated identifier is removed. Currently the ( isIgnoreAndRemoveDuplicates = False) is set to False, which means that groups are not deduplicated. Duplicates are removed by rted identifiers are recalculated for the new Rank – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. - attribute - OrderIdentifier rank sort process fallback operates under the following conditions:When Identifier Type is empty – each empty identifier is grouped together. 
Each identifier with an empty type is added to the "EMPTY" group and sorted and duplicated separately.During source system from configuration (2.a) sorting when Source system is missing identifier is placed on 99 positionDuring (, ) sorting when the Source system is missing identifier is placed on 99 positionSource Order Configuration This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Identifier provided by source "Reltio" is higher in the hierarchy than the Identifier provided by the "" source. Based on this configuration each identifier will be sorted in the following order:Updated: (EX-US)Countries(in environment)CNOthersSource OrderReltio: 1EVR: 2MDE: 3MAPP: 4FACE: 5CRMMI: 6KOL_OneView: 7GRV: 8CN3RDPARTY: 9Reltio: 1EVR: 2OK: 3AMPCO: 4JPDWH: 5NUCLEUS: 6CMM: 7MDE: 8LocalMDM: 9PFORCERX: 10VEEVA_NZ: 11VEEVA_AU: 12VEEVA_PHARMACY_AU: 13CRMMI: 14FACE: 15KOL_OneView: 16GRV: 17GCP: 18MAPP: 19CN3RDPARTY: 20Rx_Audit: 21PCMS: 22CICR: 23COMPANY ModelAlgorithmIdentifier Rank sort algorithm slightly varies from the IQVIA model one:Identifiers are grouped by Type (Identifiers.Type field). Identifiers without a Type count as a separate group.Each group is sorted separately according to following rules:By Trust flag (ust field). "Yes" takes precedence over "No". If Trust flag is missing, it's as if it was equal to "No".By Source Order (table below). Lowest rank from configuration takes precedence. If a Source is missing in configuration, it gets the lowest possible order (99).By Status (atus). Valid/Active status takes precedence over Invalid/Inactive/missing status. List of status codes is configurable. Currently (), the following codes are configured in all COMPANY environments:Valid codes: [HCPIS.VLD], [TV], [HCOIS.VLD], [TV]Invalid codes: [AC], [VLD], [AC], [VLD]By Source Rank ( field). Lowest rank takes . Latest takes precedence. is equal to the highest of 3 dates: providing crosswalk's createDateproviding crosswalk's updateDateproviding crosswalk's singleAttributeUpdateDate for this Identifier (if present)By ID alphabetically. This is a fallback rted identifiers are recalculated for the new Rank – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. 
- attribute - urce Order ConfigurationUpdated: (in environment)ALLALLEU:GBIEFRBLGPMFMQNCPFPMRETFWFESDEITVASMTRRUOthers (AfME)CNOthersSource OrderReltio: 1ONEKEY: 2ICUE: 3ENGAGE: 4KOL_OneView: 5ONEMED: 5GRV: 6SHS: 7IQVIA_RX: 8IQVIA_RAWDEA: 9SEAGEN: 10CENTRIS: 11MAPP: 12ASTELAS: 13EMD_SERONO: 14COV: 15SAP: 16SAPVENDOR: 17IQVIA_DDD: 18PTRS: 19Reltio: 1ONEKEY: 2PFORCERX: 3PFORCERX_ODS: 3KOL_OneView: 4ONEMED: 4LEGACY_SFA_IDL: 5ENGAGE: 6MAPP: 7SEAGEN: 8GRV: 9GCP: 10SSE: 11IMSO: 12CS: 13PFCA: 14SAP: 15SAPVENDOR: 16PTRS: 17RX_AUDIT: 18Reltio: 1ONEKEY: 2PFORCERX: 3PFORCERX_ODS: 3KOL_ONEVIEW: 4ENGAGE: 5MAPP: 6SEAGEN: 7GRV: 8GCP: 9SSE: 101CKOL: 11SAP: 12SAPVENDOR: 13BIODOSE: 14BUPA: 15CH: 16HCH: 17CSL: 18Reltio: 1ONEKEY: 2MEDPAGES: 3MEDPAGESHCP: 3MEDPAGESHCO: 3PFORCERX: 4PFORCERX_ODS: 4KOL_ONEVIEW: 5ENGAGE: 6MAPP: 7SEAGEN: 8GRV: 9GCP: 10SSE: 11PULSE_KAM: 12WEBINAR: 13SAP: 14SAPVENDOR: 15SDM: 16PTRS: 17Reltio: 1EVR: 2MDE: 3FACE: 4GRV: 5CN3RDPARTY: 6GCP: 7PFORCERX: 8PFORCERX_ODS: 8KOL_OneView: 9ONEMED: 9ENGAGE: 10MAPP: 11PTRS: 12Reltio: 1ONEKEY: 2JPDWH: 3VOD: 4PFORCERX: 5PFORCERX_ODS: 5KOL_OneView: 6ONEMED: 6ENGAGE: 7MAPP: 8SEAGEN: 9GRV: 10GCP: 11SSE: 12PCMS: 13PTRS: 14SAP: 15SAPVENDOR: 16Business requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "OtherHCOtoHCOAffiliations ", "": "", "pageLink": "/display//OtherHCOtoHCOAffiliations+RankSorter", "content": " (currently for and NZ)Business requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*The functionality is configured in the callback delay service. Allows you to set different types of sorting for each country. The configuration for and is shown below.rankSortOrder: affiliation: - countries: - AU - NZ rankExecutionOrder: - type: ATTRIBUTE attributeName: lookupCode: true order: REL.HIE: 1 I: 2 REL.FPA: 3 G: 4 REL.BUY: 5 N: 6 R: 7 REL.MBR: 8 M: 9 SS: 10 REL.WPC: 11 REL.WPIC: 12 U: 13 - type: ACTIVE - type: SOURCE order: Reltio: 1 ONEKEY: 2 : 3 SAP: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 GRV: 9 GCP: 10 : 11 PCMS: 12 PTRS: 13 - type: are grouped by endObjectId, then the whole bundle is sorted and ranked. The relationship's position on the list (its rank) for and is calculated based on the following algorithm:sorting by RelationshipDescription attribute  - relationship with REL.HIE value on topsorting by relationship activity - active at the topsort by source position - Reltio source on topsort by (newest on top)" }, { "title": "Phone RankSorter", "": "", "pageLink": "/display//Phone+RankSorter", "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phones provided by source "Reltio" is higher in the hierarchy than the Address provided by "EVR" source. 
Based on this configuration, each phonewill be sorted in the following order:phone: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "": 14 "FACE": 15 "KOL_OneView": 16 "": 17 "GCP": 18 "": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23Phone rank sort process operates under the following conditions:Each phone is sorted with the following rulesGroup by the TypeIMS attribute and sort each group:sort by "Source System order configuration" (lowest rank from the configuration on TOP)sort by source rank (the lower number on top of the one with this attribute)sort by the validation status (VALID value is the winner) - attribute ValidationStatussort by (highest date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDatesort by number value alphabetically in ascending order A -> Z - attribute mberSorted phones are recalculated for the new Rank - each is reassigned with an appropriate numberGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phone provided by source "" is higher in the hierarchy than the Phone provided by "ENGAGE" source.  Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each phone number will be sorted in the following order:phone: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "ICUE" : 3 "" : 4 "ENGAGE" : 5 "KOL_OneView" : 6 "ONEMED" : 6 "" : 7 "SHS" : 8 "IQVIA_RX" : 9 "IQVIA_RAWDEA" : 10 "SEAGEN": 11 "CENTRIS" : 12 "MAPP" : 13 "ASTELAS" : 14 "EMD_SERONO" : 15 "COV" : 16 "SAP" : 17 "SAPVENDOR": 18 "IQVIA_DDD" : 19 "VALKRE" : 20 "THUB" : 21 "PTRS" : 22 "MEDISPEND" : 23 "PORZIO" : 24Phone number rank sort process operates under the following conditions:Each phone number is sorted with the following rules, on top, it is grouped by by the Type attribute and sort each group sort by source order (the lower number on top) - source name is taken from the last updated crosswalk for this Phone attributesort by source rank (the lower number on top or the one with this attribute) - attribute for this Phone attributeSorted phone numbers are recalculated for new Rank - each is reassigned with an appropriate number - attribute Rank for /APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phone provided by source "" is higher in the hierarchy than the Phone provided by "ENGAGE" source.  Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each phone number will be sorted in the following order:EMEAphone: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 PFORCERX: 3 PFORCERX_ODS: 3 VEEVALINK: 4 KOL_OneView: 5 ONEMED: 5 ENGAGE: 6 MAPP: 7 SEAGEN: 8 GRV: 9 GCP: 10 : 11 1CKOL: 12 THUB: 13 PTRS: 14 SAP: 15 SAPVENDOR: 16 BIODOSE: 17 : 18 CH: 19 HCH: 20 CSL: 21 MEDISPEND: 22 PORZIO: 23 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 : 12 PULSE_KAM: 13 SPRINKLR: 14 WEBINAR: 15 DREAMWEAVER: 16 EVENTHUB: 17 SAP: 18 SAPVENDOR: 19 : 20 THUB: 21 PTRS: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLAMERphone: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 : 3 : 4 PFORCERX_ODS: 4 : 5 KOL_OneView: 6 ONEMED: 6 LEGACY_SFA_IDL: 7 ENGAGE: 8 : 8 SEAGEN: 9 : 10 GCP: 11 : 12 IMSO: 13 : 14 PFCA: 15 : 16 : 17 SAPVENDOR: 18 THUB: 19 PTRS: 20 RX_AUDIT: 21 : 22 PORZIO: 23 sources: - ALLAPACphone: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 : 3 FACE: 4 : 5 CN3RDPARTY: 6 GCP: 7 PFORCERX: 8 PFORCERX_ODS: 8 : 9 KOL_OneView: 10 ONEMED: 10 ENGAGE: 11 : 12 PTRS: 13 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 : 2 : 3 : 4 : 5 PFORCERX_ODS: 5 : 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 : 9 SEAGEN: 10 : 11 : 12 : 13 : 14 THUB: 15 PTRS: 16 : 17 SAPVENDOR: 18 : 19 WEBINAR: 20 DREAMWEAVER: 21 EVENTHUB: 22 : 23 : 24 sources: - ALLPhone number rank sort process operates under the following conditions:Each phone number is sorted with the following rules, on top, it is grouped by by the Type attribute and sort each group  sort by cleanser status - valid/invalidsort by source order (the lower number on top) - source name is taken from the last updated crosswalk for this Phone attributesort by source rank (the lower number on top or the one with this attribute) - attribute for this Phone attributelast update date - newest to oldestsort by label - alphabetical order A-ZSorted phone numbers are recalculated for new Rank - each is reassigned with an appropriate number - attribute Rank for Phone attributeBusiness requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Speaker ", "": "", "pageLink": "/display//Speaker+RankSorter", "content": "DescriptionUnlike other , Speaker Rank is expressed not by a nested "Rank" or "Order" field, but by the "ignore" flag."Ignore" flag sets the attribute's "ov" to false. By operating this flag, we assure that only the most valuable attribute is visible and sent downstream from gorithmSort all Speaker nestsSort by source hierarchyIf same source, sort by (higher of crosswalk.updateDate / ngleAttributeUpdateDates/{speaker attribute uri})If same source and , sort by attribute URI (fallback sorted groupIf first Speaker nest has ignored == true, set ignored := false for that nestIf every next Speaker nest does not have ignored == true, set ignored := true for that nestPost the list of changes to Manager's async interface using topicGlobal - IQVIA ModelSpeaker RankSorter is active only for . 
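A compact sketch of the ignore-flag assignment in the Speaker algorithm above: nests are sorted by source hierarchy, then update date, then URI, and only the winner stays visible (ignored == false). SpeakerNest and its fields are illustrative; in the real flow the resulting changes are posted to the Manager's async interface.

// Illustrative Speaker nest; "ignored" ultimately drives the attribute's "ov" flag downstream.
data class SpeakerNest(val uri: String, val source: String, val updateDate: Long, var ignored: Boolean = true)

fun rankSpeakers(nests: List<SpeakerNest>, sourceOrder: Map<String, Int>): List<SpeakerNest> =
    nests.sortedWith(
        compareBy<SpeakerNest> { sourceOrder[it.source] ?: 99 }  // source hierarchy first
            .thenByDescending { it.updateDate }                  // newest update wins within a source
            .thenBy { it.uri }                                   // attribute URI as a final fallback
    ).onEachIndexed { index, nest -> nest.ignored = index != 0 } // only the first nest stays visible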
Source hierarchy is as follows:speaker: "Reltio": 1 "": 2 "FACE": 3 "EVR": 4 "MDE": 5 "": 6 "KOL_OneView": 7 "": 8 "CN3RDPARTY": 9Specific ConfigurationUnlike other flows, Speaker requires both =true and ov=false attribute values to work is is why:Entity configuration must be altered, to enrich entities with ov&nonOv values of "Speaker" attribute:\nbundle:\n nonOv: false\n ov: false\n nonOvAttributesToInclude:\n - "Speaker"\nPreCallback Service configuration must be altered to assure that nonOv values are cleaned from the event before passing it further down to the Event Publisher\ncleanOvFalseAttributeValues:\n - "Speaker"\nBusiness requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Specialty RankSorter", "": "", "pageLink": "/display//Specialty+RankSorter", "content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Specialty provided by source "Reltio" is higher in the hierarchy than the provided by the "" source. Additionally, for , there is a difference between countries. The configuration for RU and contains only 4 sources and is different than the base configuration. Based on this configuration each specialty will be sorted in the following order:specialities: - countries: - "RU" - "TR" sources: - "ALL" rankSortOrder: "": 1 "GCP": 2 "OK": 3 "KOL_OneView": 4 - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "": 14 "FACE": 15 "KOL_OneView": 16 "": 17 "GCP": 18 "": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23The specialty rank sort process operates under the following conditions:Each Specialty is grouped by SPEC/TEND/QUAL/EDUC – each group is sorted separately.Each group is sorted with the following rules:By specialty "Source System order configuration" (lowest rank from the configuration on TOP)By specialty Rank (lower ranks on TOP) in descending order 1 -> 99By update date () (highest date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDateBy Specialty Value (alphabetically in ascending order A -> Z)Sorted specialties are optionally deduplicated (by in each group) – from each group, the lowest in rank and the duplicated specialty is removed. Currently the ( isIgnoreAndRemoveDuplicates = False) is set to False, which means that groups are not deduplicated. Duplicates are removed by rted specialties are recalculated for the new Ranks – each Rank (for each sorted group) is reassigned with an appropriate number from lowest to ditionally, for the = 1 the best record is set to true - attribute - PrimarySpecialtyFlagSpecialty rank sort process fallback operates under the following conditions:When Specialty Type is empty – each empty specialty is grouped together. Each specialty with an empty type is added to the "EMPTY" group and sorted and duplicated separately.During source system from configuration (2.a) sorting when Source system is missing specialty is placed on 99 positionDuring (, ) sorting when the Source system is missing specialty is placed on 99 feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. provided by source "" is higher in the hierarchy than the provided by the "ENGAGE" source. 
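A reduced sketch of the specialty grouping and ranking above: specialties are grouped by type (empty types share an EMPTY group), missing sources fall back to position 99, ranks are reassigned per group and rank 1 gets the PrimarySpecialtyFlag. The specialty Rank and update-date sort keys are omitted for brevity and the names are illustrative.

// Illustrative specialty nest; Type groups e.g. SPEC/TEND/QUAL/EDUC.
data class Specialty(
    val type: String?,
    val source: String?,
    val value: String,
    var rank: Int = 0,
    var primary: Boolean = false
)

fun rankSpecialties(specialties: List<Specialty>, sourceOrder: Map<String, Int>): List<Specialty> =
    specialties.groupBy { it.type ?: "EMPTY" }.values.flatMap { group ->
        group.sortedWith(
            compareBy<Specialty> { sourceOrder[it.source] ?: 99 }  // missing source -> position 99
                .thenBy { it.value }                               // alphabetical fallback
        ).onEachIndexed { index, specialty ->
            specialty.rank = index + 1
            specialty.primary = index == 0                         // rank 1 -> PrimarySpecialtyFlag = true
        }
    }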
Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each will be sorted in the following order:specialities: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "IQVIA_RAWDEA" : 3 "" : 4 "ENGAGE" : 5 "KOL_OneView" : 6 "ONEMED" : 6 "SPEAKER" : 7 "ICUE" : 8 "SHS" : 9 "IQVIA_RX" : 10 "SEAGEN": 11 "CENTRIS" : 12 "ASTELAS" : 13 "EMD_SERONO" : 14 "MAPP" : 15 "" : 16 "THUB" : 17 "PTRS" : 18 "VALKRE" : 19 "MEDISPEND" : 20 "PORZIO" : 21The specialty rank sort process operates under the following conditions:Specialty is sorted with the following rules, but on the top, it is grouped by .SpecialityType attribute:Group by .SpecialityType attribute and sort each group: sort by specialty unspecified status value (higher value on the top) - attribute Specialty with value by source order number (the lower number on the top) - source name is taken from crosswalk that was last updatedsort by source rank (the lower on the top) - attribute by (the earliest on the top) - last update date is taken from lately updated crosswalksort by specialty attribute value (string comparison) - attribute SpecialtySorted specialties are recalculated for new Rank - each is reassigned with an appropriate number - attribute RankAdditionally:If the source is not found it is set to 99If specialty unspecified attribute name or value is not set it is set to 99EMEA//APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. provided by source "" is higher in the hierarchy than the provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. Based on this configuration, each will be sorted in the following order:: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 PFORCERX: 3 PFORCERX_ODS: 3 VEEVALINK: 4 KOL_OneView: 5 ONEMED: 5 ENGAGE: 6 MAPP: 7 SEAGEN: 8 GRV: 9 GCP: 10 : 11 THUB: 12 PTRS: 13 1CKOL: 14 MEDISPEND: 15 PORZIO: 16 sources: - ALL - countries: - ALL sources: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 : 12 PULSE_KAM: 13 WEBINAR: 14 DREAMWEAVER: 15 EVENTHUB: 16 SPRINKLR: 17 THUB: 18 PTRS: 19 MEDISPEND: 20 PORZIO: 21AMERspecialities: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 : 3 : 4 PFORCERX_ODS: 4 : 5 KOL_OneView: 6 ONEMED: 6 LEGACY_SFA_IDL: 7 ENGAGE: 8 : 9 SEAGEN: 10 : 11 : 12 : 13 : 14 PTRS: 15 RX_AUDIT: 16 PFCA: 17 : 18 : 19 : 20 sources: - ALLAPACspecialities: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 : 3 FACE: 4 : 5 CN3RDPARTY: 6 GCP: 7 : 8 : 9 PFORCERX_ODS: 9 : 10 KOL_OneView: 11 ONEMED: 11 ENGAGE: 12 : 13 : 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 : 2 : 3 : 4 : 5 PFORCERX_ODS: 5 : 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 : 9 SEAGEN: 10 : 11 : 12 : 13 : 14 WEBINAR: 15 DREAMWEAVER: 16 EVENTHUB: 17 : 18 THUB: 19 PTRS: 20 : 21 : 22 sources: - ALLThe specialty rank sort process operates under the following conditions:Specialty is sorted with the following rules, but on the top, it is grouped by .SpecialityType attribute:Group by .SpecialityType attribute and sort each group: sort by specialty unspecified status value (higher value on the top) - attribute Specialty with value by source order number (the lower number on 
the top) - source name is taken from the crosswalk that was last updated; sort by source rank (the lower on the top) - attribute Rank; sort by update date (the earliest on the top) - the last update date is taken from the most recently updated crosswalk; sort by specialty attribute value (string comparison) - attribute Specialty. Sorted specialties are recalculated for the new Rank - each is reassigned with an appropriate number - attribute Rank. The primary flag is set for the top-ranked specialty. Additionally: if the source is not found it is set to 99; if the specialty unspecified attribute name or value is not set it is set to 99. Business requirements (provided by for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*" }, { "title": "Enricher Processor", "": "", "pageLink": "/display//Enricher+Processor", "content": "The Enricher Processor is the first processor applied to incoming events. It enriches reference attributes with refEntity attributes for Rank calculation purposes. Usually, enriched attributes are removed after all calculations have finished - this is configurable using the cleanAdditionalRefAttributes flag. The only exception is (EX-US), where attributes remain for CN. Removing "borrowed" attributes is carried out by the Cleaner Processor. For the targetEntity: find reference attributes matching the configuration; for each such attribute: (a) walk the relation to get the referenced entity, (b) fetch the entity's current state through Manager (using cache), (c) rewrite the entity's attributes into this reference attribute, inserting them in the attributes path. Steps a-b are applied recursively, according to the configured maxDepth. Below is the config from the Precallback Service:\nrefLookupConfig:\n - cleanAdditionalRefAttributes: true\n country:\n - AU\n - IN\n - JP\n - KR\n - NZ\n entities:\n - attributes:\n - ContactAffiliations\n type: HCP\n maxDepth: 2\nHow to read the config: for entities with Country AU, IN, JP, KR, or NZ, of entity type HCP, enrich ContactAffiliations so that it contains the refEntity's attributes as sub-attributes; do that with depth 2 - so simply take the refEntity's attributes and insert them into the ContactAffiliations sub-attributes; after all calculations have finished, remove the "borrowed" attributes, so that the event passed to the Event Publisher does not have them." }, { "title": "Cleaner Processor", "": "", "pageLink": "/display/GMDM/Cleaner+Processor", "content": "Cleaner Processor removes attributes enriched by the Enricher Processor. It is one of the last processors in the execution order. The processor checks the cleanAdditionalRefAttributes flag. For the targetEntity: find all refLookupConfig entries applicable for this entity; for all attributes in the found entries, remove them from the attributes map." }, { "title": "Inactivation Generator", "": "", "pageLink": "/display//Inactivation+Generator", "content": "Inactivation Generator is one of the event Processors. It checks the input event's targetEntity and changes the event type to INACTIVATED if it detects one of the below: for entities: targetEntity's endDate is set; for relations: targetRelation's endDate is set, targetRelation's startRefIgnored == true, targetRelation's endRefIgnored == true. For each event: if targetEntity is not null and targetEntity.endDate is null, skip the event; if targetRelation is not null: if targetRelation.endDate is null or targetRelation.startRefIgnored is null or targetRelation.endRefIgnored is null, skip the event; search the mapping for the adequate output event type, according to the table below - if no match is found, skip the event. Inbound event type -> Outbound event type: HCP_CREATED, HCP_CHANGED -> HCP_INACTIVATED; HCO_CREATED, HCO_CHANGED -> HCO_INACTIVATED; MCO_CREATED, MCO_CHANGED -> MCO_INACTIVATED; RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED -> RELATIONSHIP_INACTIVATED. Return the same event with the new event type, according to the table above."
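As a reading aid for the Inactivation Generator rules above, the following is a minimal Java sketch of the event-type remapping; the class, method and boolean flags are assumptions for the example, not the real processor implementation.

// Illustrative sketch: rewrite *_CREATED / *_CHANGED event types to *_INACTIVATED
// when the end date or ref-ignored flags indicate inactivation, as described above.
import java.util.Map;
import java.util.Optional;

public class InactivationGeneratorSketch {

    private static final Map<String, String> TYPE_MAPPING = Map.of(
            "HCP_CREATED", "HCP_INACTIVATED",
            "HCP_CHANGED", "HCP_INACTIVATED",
            "HCO_CREATED", "HCO_INACTIVATED",
            "HCO_CHANGED", "HCO_INACTIVATED",
            "MCO_CREATED", "MCO_INACTIVATED",
            "MCO_CHANGED", "MCO_INACTIVATED",
            "RELATIONSHIP_CREATED", "RELATIONSHIP_INACTIVATED",
            "RELATIONSHIP_CHANGED", "RELATIONSHIP_INACTIVATED");

    /** Returns the inactivated event type if the event qualifies, otherwise empty (skip the event). */
    public static Optional<String> remapEventType(String inboundType, boolean endDateSet,
                                                  boolean startRefIgnored, boolean endRefIgnored) {
        boolean inactivationDetected = endDateSet || startRefIgnored || endRefIgnored;
        if (!inactivationDetected) {
            return Optional.empty(); // nothing indicates inactivation - keep the original event
        }
        return Optional.ofNullable(TYPE_MAPPING.get(inboundType)); // no mapping found -> skip event
    }
}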
}, { "title": "MultiMerge Processor", "": "", "pageLink": "/display/GMDM/MultiMerge+Processor", "content": " is one of event r MERGED events, it checks if targetEntity.uri is equal to first URI from entitiesURIs. If it is different, is adjusted, by inserting targetEntity.uri in the beginning. This is to assure, that targetEntity.uri[0] always contains a merge winner, even in cases of multiple each event of type:HCP_MERGED,HCO_MERGED,MCO_MERGED,do:if targetEntity.uri is null, skip event,if entitiesURIs[0] and targetEntity.uri are equal, skip event,insert targetEntity.uri at the beginning of entitiesURIs and return the event." }, { "title": "OtherHCOtoHCOAffiliations Rankings", "": "", "pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+Rankings", "content": "DescriptionThe process was designed to rank OtherHCOtoHCOAffiliation with rules that are specific to the country. The current configuration contains Activator and Rankers available for and countries and the OtherHCOtoHCOAffiliationsType. The process (compared to the ) was designed to process RELATIONSHIP_CHANGE events, which are single events that contain one piece of information about specific relation. The process builds the cache with the hierarchy of objects when the main object is (The direction that we check and implement the Rankins: (child)END_OBJECT -> START_OBJECT(parent).  Change in the relation is not generating the HCO_CHANGE events so we need to check relations events. Relation change/create/remove events may change the hierarchy and ranking paring this to the ranking logic, change on object had whole information about the whole hierarchy in one event, this caused we could count and generate events based on is new logic builds this hierarchy based on RELATIONSHIP events, compact the changes in the time window, and generates events after aggregation to limit the number of changes in and calls. DATA VERIFICATION:Snowflake queries:\nSELECT COUNT(*) FROM (\n\nSELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_STOMER_M_RELATIONS\n\nWHERE COUNTRY = 'AU' and RELATION_TYPE ='OtherHCOtoHCOAffiliations' and ACTIVE = TRUE\n\nGROUP BY END_ENTITY_URI\n\n)\n\n\n\n\nSELECT COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_STOMER_M_ENTITIES\n\nWHERE ENTITY_TYPE='HCO' and COUNTRY ='AU' AND ACTIVE = TRUE\n\nSELECT COUNT(*) FROM (\n\nSELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_STOMER_M_RELATIONS\n\nWHERE COUNTRY = '' and RELATION_TYPE ='OtherHCOtoHCOAffiliations' and ACTIVE = TRUE\n\nGROUP BY END_ENTITY_URI\n\n)\nExample few cases from QA:010Xcxi           200zxT2O                        2008NxIA                        21CVfmxOm                  2VCMuTvz                      2cvoyNhG                       2VCMnOvP                    200yZOis                          200JoRnN                        2\nSELECT END_ENTITY_URI, COUNTRY, COUNT(*) AS count FROM CUSTOMER_M_RELATIONS\n\nWHERE RELATION_TYPE ='OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE\n\nAND COUNTRY IN ('AU','NZ')\n\nGROUP BY , COUNTRY\n\nORDER BY count DESC\nCq2pWio                       500KcdEA                        3T5NxyUa                       3ZsTdYcS                         3XhGoqwo                     300wMWdy                   3Cq1wjj8                         3The direction that we should check and implement the Rankins:(child)END_OBJECT -> START_OBJECT(parent)We are starting with objects and checking if this child is connected to multiple parents and we are ranking. 
In most cases, 99% of these will be one relation that will auto-filled with rank=1 during load. If not we are going to rank this using below implementation:Example: diagramLogical ArchitecturePreDelayCallback LogicStepsOverview Reltio attributes\nATTRIBUTES TO UPDATE/INSERT\nRANK\n {\n "label": "Rank",\n "name": "Rank",\n "description": "Rank",\n "type": "Int",\n "hidden": false,\n "important": false,\n "system": false,\n "required": false,\n "faceted": true,\n "searchable": true,\n "attributeOrdering": {\n "orderType": "ASC",\n "orderingStrategy": "LUD"\n },\n "uri": "configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Rank",\n "skipInDataAccess": false\n },\nPreCallback Logic - RANK ActivatorDelayRankActivationProcessor:The purpose of this activator is to pick specific events and push them to delay-events topics, events from this topic will be ranked using the algorithm described on this page (OtherHCOtoHCOAffiliations Rankings), the flow is also described below.Logic:Check the activation criteria, when true process the event to the delay topic, otherwise, push the main event as is to proc-events topic to next HUB processing phase (event publishing)When all activation criteria are met:acceptedEventTypes – events are RELATION types from the listacceptedRelationObjectTypes – the event is relation type and is the type specified – OtherHCOToHCOacceptedCountries – relation is from a specified countryDo:pick the eventscopy the main event to the delayedEventsclear the mainEvents (do not push events to next publishing phase)Before sending apply the additionalFunctions (specify the interface/process and run all selected)Here change the Kafka Key and put the relation.endObject.objectURI as a RELATION event key.Example configuration for and :delayRankActivationCallback: featureActivation: true activators: - description: "Delay OtherHCOtoHCOAffiliations RELATION events from and country to calculate Rank in delay service" acceptedEventTypes: - RELATIONSHIP_CHANGED - RELATIONSHIP_CREATED - RELATIONSHIP_REMOVED - RELATIONSHIP_INACTIVATED acceptedRelationObjectTypes: - configuration/relationTypes/OtherHCOtoHCOAffiliations acceptedCountries: - AU - NZ additionalFunctions: - RelationEndObjectAsKafkaKeyPreDelayCallback - RANK LogicThe purpose of this pre-delay-callback service is to Rank specific objects (currently available OtherHCOToHCO ranking for and - OtherHCOtoHCOAffiliations Rankings)CallbackWithDelay and advantages:The cache is build on the fly based on (one-time GET of each end Object) and enriched by events during a lifetime - logic is in and we are using store in KafkaStreams.(optional) Model change (re-ranking) will cause the cache removal and regeneration of events – cache will be rebuilt with a new model so in case of future changes we can re-rank based on new e cache contains only required attributes and is updated in real-timeIn most cases it will happen that the relations are in sync so no changes will be pushed to the delay-events topic – everything will be pushed in real-time to target systems (Snowflake)In case of any change in any relation, we will aggregate all relations by the EndObjectId. This allows us to emit an aggregation window one time for each so that changes are generated for one entity in one run. It may also happen that one new relation is re-ranking whole objects hierarchy. Using this logic one event goes to the Delay logic, one event triggers the difference comparison and generation of multiple updates. 
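The aggregation by EndObjectId described above can be pictured with the following Kafka Streams sketch. It is a simplified, hypothetical topology, not the production one; the topic names, the 1h inactivity gap, the grace period and the String value type are assumptions for the example.

// Illustrative sketch: aggregate delayed relation events per end object with a session window,
// so that one batch per end object is emitted only after a period of inactivity.
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Suppressed;

public class DelayAggregationSketch {

    public static void buildTopology(StreamsBuilder builder) {
        // key = endObjectId (set earlier by RelationEndObjectAsKafkaKey), value = serialized event
        KStream<String, String> delayedEvents = builder.stream("internal-reltio-full-callback-delay-events");

        delayedEvents
                .groupByKey()
                // a new event extends the window; it closes only after 1h of inactivity for that key
                .windowedBy(SessionWindows.ofInactivityGapAndGrace(Duration.ofHours(1), Duration.ofMinutes(5)))
                // keep only the latest payload per key - the post-delay step re-reads and re-sorts the full state anyway
                .reduce((previous, latest) -> latest)
                // emit a single record per end object once its window has closed
                .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
                .toStream()
                .map((windowedKey, value) -> KeyValue.pair(windowedKey.key(), value))
                .to("internal-reltio-post-delay-events"); // hypothetical output topic for the sketch
    }
}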
These updates (after publishing) will go to the state and we are going to check if the data is in sync and if we generated all events. In that case, all events should flow to proc-events and to set a 1h window to aggregate multiple changes (relationship updates) and emit windows in 1h owflake is refreshed on PROD in 2h windows - we fit into this so that all events are ready and do not contain the partial state in ate (but Snowflake it may happen in some edge cases) The advantage of this solution is that all RELATIONS will have Rank in Snowflake, so there will be no state without Rank.Logic: PreDelayPoll event from internal-reltio-full-delay-eventsFor each rank sorter (currently OtherHCOToHCO) execute the logicWe need a state store that will contain the   cache of all relation e event key that will be moved here will be endObjectId so that all events related to the specific end object will be on one partition – so that we will ask to mongo one time (no parallelism by endObjectId)Check if “CurrentStateCache” contains the state for endObjectIdIf not – execute (This returns a list of relations)Transform the output to the modelIf exists – update (join) the current by and update relations if is in sync with and if true we are going to push such event to outputTopic (reltio-proc-events)execute function isRelationRankInSyncWithCurrentSortedState (, CurrentStateCache)If Relation.Rank ==null -> falseIf .Rank !=nullSort CurrentStateCacheCheck if RelatioID Rank is the same as (it means we need to check if the current is correct)If the function returns true – publish the Relationship event to OUTPUT TOPIC – Push events with equal to the relation (reverse logic of - RelationEndObjectAsKafkaKey)If the function returns false go to Delay stepPush event (end object id) to ${env}-internal-reltio-full-callback-delay-eventsDelayAggregate all events in the time window (configurable) by end object – check the closing window for a selected key after the inactivity period – extend the window for the selected key if a new event is in. To save space in the delay/suppress window store only endObjectIDsPostDelayWhen the aggregation window is closed do:Execute the activation rt(CurrentState) – check the whole hierarchy and sort the state to a desired stateThe result of this function is of related to the relations that have to be a result, push all events to bulk-callback topics that will cause an update in . and Configuration\nRelationData cache model:\n[\n Id: endObjectId\n relations:\n     - relationUri: relations/13pTXPR0\n       endObjectUri: endObjectId"      \n          country: AU \n         crosswalks:\n - type: ONEKEY\n value: WSK123sdcF\n deleteDate: \n : e.g. 
relations/13pTXPR0/attributes/Rank\n Rank: null\n \t Attributes:\n Status:\n \t - ACTIVE                     \n        RelationType/RelationshipDescription:\n - I\n - N\n\n]\n\n\nTriggersRankActivationTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback: DelayRankActivationProcessor$env-internal-reltio-full-eventsFull events trigger pre-callback stream and the activation logic that will route the events to next processing staterealtime - events streamOUT Activated events to be sortedCallback Service: Pre-Callback: DelayRankActivationProcessor $env-internal-reltio-full-delay-eventsOutput topicrealtime - events streamTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-delay-service: Pre-Delay-Callback: PreCallbackDelayStream$env-internal-reltio-full-delay-eventsDELAY: ${env}-internal-reltio-full-callback-delay-eventsFull events trigger pre-delay-callback stream and the ranking logicrealtime - events streamOUT Sorted events with the correct state mdm-callback-delay-service: Pre-Delay-Callback: PreCallbackDelayStream$env-internal-reltio-proc-eventsOutput topic with correct eventsrealtime - events streamOUT Reltio Updatesmdm-callback-delay-service: Pre-Delay-Callback: PostCallbackStream$env-internal-async-all-bulk-callbacksOutput topic with Reltio updatesrealtime - events streamDependent componentsComponentUsageCallback ServiceRELATION ranking activator that push events to delay serviceCallback Delay ServiceMain Service with OtherHCOtoHCOAffiliations Rankings logicEntity EnricherGenerates incoming events full eventsManagerProcess callbacks generated by this serviceAttachment docs with more technical implementation details:example-reqeusts.json" }, { "title": "HCPType Callback", "": "", "pageLink": "/display//HCPType+Callback", "content": "DescriptionThe process was designed to update HCPType RDM code in TypeCode attribute on profiles. The process is based on the events streaming, the main event is recalculated based on the current state and during comparison of existing TypeCode on and calculated value the callback is generated. This process (like all processes in ) blocks the main event and will send the update to external clients only when the update is visible in and contains correct code. The process uses the as a internal cache and calculates the output value based on current mapping. To limit the number of requests to RDM we are using the internal and we refresh this cache on PROD. Additionally we designed the in-memory cache to store 2 required codes (PRES/NON-PRESC) with HUB_CALLBACK source code is logic is related to these 2 values in Reltio HCP profiles:Type-  Prescriber (ES)Type - Non-Prescriber (RS)Why this process was designed:With the addition of the LOVs, we have hit the limit/issue where -Prescriber canonical codes no longer into is a size limit in ’s underlying tech stack It is a GCP physical limitation and cannot be increased. We cannot add new RDM codes to codes and this will cause issues in e previous logic:In the ingestion service layer (all calls) there was a rule called “HCP TypeCode”. This logic adds the as a concatenation of SubTypeCode and Ranked 1. Logic get source code and puts the concatenation in TypeCode attribute. The number of combination on source codes is reaching the limit so we are building new r future reference adding old rules that will be removed after we deploy the new process. 
rules (sort rank):- name: Sort specialities by source rank category: OTHER createdDate: modifiedDate: preconditions: - type: operationType values: - create - update - type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_ID - type: not preconditions: - type: match attribute: TypeCode values: - "^.+$" action: type: sort key: Specialities sorter: SourceRankSorterDQ rules (add sub type code):- name: Autofill sub type code when sub type is null/empty category: AUTOFILL_BASE createdDate: modifiedDate: preconditions: - type: operationType values: - create - update - type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_ID - KOL_OneView action: type: modify attributes: - TypeCode value: "{.Specialty}" replaceNulls: true when: - "" - "NULL"Example of previous input values:attributes: "TypeCode": [ { "value": "TYP.M-SP.WDE.04" } ]TYP.M is a is a value - PRESC:As we can see on this screenshot on there are 2920 combinations for one source that generates PRESC value. The new logic:The new logic was designed in pre callback service in hybrid mode. The logic uses the same assumptions like are made in previous version, but instead we are using Reltio Canonical codes, and this limits the number of combinations. We are providing this value using only one Source HUB_CALLBACK so there is no need to configure , and all other sources that provides multiple vantages:Service populates with canonical codesHCP Type LOVs reduced to single source (HUB_CALLBACK) and canonical codesThe change in HCP Type RDM will be processed using standard reindex is change is impacting the Historical Inactive flow – change described : HI HCPType enrichment. Key features in new logic and what you should know:The change in HCP Type RDM will be processed using standard reindex lculate the HCP TypeCode is based on the profile and Reltio canonical codesPreviously each source delivered data and the ingestion service calculated TypeCode based on data delivered by the we calculate on , not on the source level.We deliver only one value using HUB_CALLBACK once we receive the event we have access to ov:true – golden profileSpecialties, this is the list, each source has the and , so we pick with Rank 1 for selected bTypeCode is a single attribute, and can pick only ov:true value.2 canonical cocdes are mapped to TypeCode attribute like on the below example Activation/Deactivation profiles in and Historical Inactive flowSnowflake: HI HCPType enrichmentSnowflake: History Inactive When the whole profile is deactivated HUB_CALLBACK technical crosswalks are hard-deleted, will be hard-deletedThis is impact HI Views because the HUB_CALLBACK value will be droppedWe implemented a logic in HI view that will rebuild TypeCode attribute and put this PRES/NON-PRESC in JSON file visible in HI view. Reltio contains the checksum logic and is not generating the event when the sourceCode changes but is mapped to the same canonical codeWe implemented a delta detection logic and we are sending an update only when change is detected Lookup to RDM, requeiers the logic to resolve HUB_CALLBACK code to canonical code. 
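A minimal Java sketch of the TypeCode calculation and delta detection around this logic (the exact change conditions are listed just below); the method name, the CallbackAction enum and the rdmCache map are assumptions for the example, not the real pre-callback code.

// Illustrative sketch: combine the canonical SubTypeCode and the Rank-1 Specialty canonical code
// with a hyphen, resolve it through the cached RDM mapping, and only emit a callback on a real change.
import java.util.Map;
import java.util.Optional;

public class HcpTypeDeltaSketch {

    enum CallbackAction { INSERT_ATTRIBUTE, UPDATE_ATTRIBUTE, FORWARD_UNCHANGED }

    public static CallbackAction decide(String subTypeCanonical,        // "" when SubTypeCode is missing
                                        String rank1SpecialtyCanonical, // "" when no Rank-1 specialty
                                        Optional<String> currentTypeCode,
                                        Map<String, String> rdmCache) { // combined code -> PRESC / non-PRESC code
        String combined = subTypeCanonical + "-" + rank1SpecialtyCanonical;
        String calculated = rdmCache.getOrDefault(combined, "");

        if (currentTypeCode.isEmpty()) {
            return CallbackAction.INSERT_ATTRIBUTE;      // TypeCode does not exist yet
        }
        if (!currentTypeCode.get().equals(calculated)) {
            return CallbackAction.UPDATE_ATTRIBUTE;      // PRESC <-> non-PRESC change detected
        }
        return CallbackAction.FORWARD_UNCHANGED;         // in sync - forward the main event as-is
    }
}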
Change only when Type does not exists Type changes from PRESC to NON-PRESC Type changes from NON-PRESC to of new input values:attributes: "TypeCode": [ { "value": } ]TYP.M is a SubTypeCode source code mapped to P.WDE.04 is a source code mapped to rdm/lookupTypes/HCPSubTypeCode:dm/lookupTypes/HCPSpecialty:Flow diagramLogical ArchitectureHCPType attributes and RDM                {                    "label": "Type",                    "name": "TypeCode",                    "description": "HCP Type Code",                    "type": "String",                    "hidden": false,                    "important": false,                    "system": false,                    "required": false,                    "faceted": true,                    "searchable": true,                    "attributeOrdering": {                        "orderType": "ASC",                        "orderingStrategy": "LUD"                    },                    "uri": "configuration/entityTypes//attributes/TypeCode",                    "lookupCode": "rdm/lookupTypes/HCPType",                    "skipInDataAccess": false                },Based on:SubTypeCode:                {                    "label": "Sub Type",                    "name": "SubTypeCode",                    "description": "HCP SubType Code",                    "type": "String",                    "hidden": false,                    "important": false,                    "system": false,                    "required": false,                    "faceted": true,                    "searchable": true,                    "attributeOrdering": {                        "orderType": "ASC",                        "orderingStrategy": "LUD"                    },                    "uri": "configuration/entityTypes//attributes/SubTypeCode",                    "lookupCode": "rdm/lookupTypes/HCPSubTypeCode",                    "skipInDataAccess": false                },Speciality:                        {                            "label": "Specialty",                            "name": "Specialty",                            "description": "Specialty of the entity, e.g., Adult Congenital Heart Disease",                            "type": "String",                            "hidden": false,                            "important": false,                            "system": false,                            "required": false,                            "faceted": true,                            "searchable": true,                            "attributeOrdering": {                                "orderingStrategy": "LUD"                            },                            "cardinality": {                                "minValue": 0,                                "maxValue": 1                            },                            "uri": "configuration/entityTypes//attributes/Specialities/attributes/Specialty",                            "lookupCode": "rdm/lookupTypes/HCPSpecialty",                            "skipInDataAccess": false                        },RDMCodes:rdm/lookupTypes/HCPType:RSrdm/lookupTypes/HCPType:ESHCPType LogicFlow:Component Startupduring the Pre-Callback component startup we are initializing in memory cache to store 2 PRESC and values for HUB_CALLBACK soruceThis implementation limits number of requests to through managerAlso this limit number of call manager service from pre-callback serviceThe Cache contains configuration and is invalidated after TTLActivationCheck if feature flag activation is trueTake into account only the CHANGED and 
CREATED events in this pre-callback implementation limited to objectsTake into account only profiles that crosswalks are not on the following list. When contains the crosswalks that are related to this configuration list skip the TypeCode generation. When the contains the following crosswalk and additionally valid crosswalk like generate a TypeCode.- type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_IDStepsEach CHANGE or CREATE event triggers the following logic:Get the canonical code from /attributes/SubTypeCode pick a lookupCode if lookupCode is missing and lookupError exists pick a value if the SupTypeCode does not exists put an empty value = ""Get the canonical code from /attributes/Specialities/attributes/Specialty arraypick a speciality with Rank equal to 1pick a lookupCode  if lookupCode is missing and lookupError exists pick a value if the does not exists put an empty value = ""Combine to canonical codes, using "-" hyphen character as a concatenation.possible values:--""""-""-""Execute delta detection logic:: using the cache translate the generated value to PRESC or NPRES codeCompare the generated value with /attributes/TypeCodepick a lookupCode and compare to generated and translated value if lookupCode is missing and lookupError exists pick a value and compare to generated and not translated valueGenerate:INSERT_ATTRIBUTE: when does not exitsUPDATE_ATTRIBUTE: when value is differentForward main event to next processing topic when there are 0 iggersTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback:HCP Type Callback logicFull events trigger pre-callback stream and during processing, partial events are processed with generated changes. If data is in sync partial event is not generated, and the main event is forwarded to external clientsrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component of flow implementationEntity EnricherGenerates incoming events full eventsManagerProcess callbacks generated by this serviceHub StoreHUB Mongo CacheLOV readLookup RDM values flow" }, { "title": " IQVIA<->COMPANY", "": "", "pageLink": "/display//China+IQVIA%3C-%3ECOMPANY", "content": "DescriptionThe section and all subpages describe HUB adjustments for clients with transformation to the COMPANY model. HUB created a logic to allow clients to make a transparent transition between IQVIA and COMPANY Models. Additionally, the process will be adjusted to the new COMPANY model. The New process will eliminate a lot of DCRs that are currently created in the IQVIA tenant. The description of changes and all flows are described in this section and the subpages, links are displayed below. HUB processed all the changes in MR-4191 – the task, To verify and track please check Changes: is now using the IQVIA model (createHCP operation)The goal realized in these changes is to have the same features as COMPANY model but will use the IQVIA model (for change should be transparent)current IQVIA PROD - ( PROD)new COMPANY PROD -  ( in ) (input IQVIA model -> output COMPANY model transformation)Changes in Events Streaming (events) (input COMPANY model -> output IQVIA model transformation)Changes in map-channel. 
data in IQVIA model loaded to COMPANY modelCreate a common transformation class:transformIqviaToCOMPANYtransformCOMPANYToIqviaDCR adjustments to the COMPANY modelFlowsChina IQVIA - current flow and user properties + COMPANY changesOn this page, the current IQVIA flow for users is er properties for users, the activation B components and configuration used in HUBThe page contains also COMPANY changes and affected components that will be changedCreate complex methods - IQVIA model (legacy)This page describes the create operations used in IQVIA, based on this logic new COMPANY logic was adjusted.Old logic is complicated and will be deprecated in the logic contains the new solutions and was written in a more readable format. In the new logic, the process is moved outside of the to the external dcr-service-2 eate complex methods - COMPANY modelNew COMPANY logic for the creation of the and objects.Logic is divided into two sectionssimple - create an object without affiliationscomplex - create an object with affiliations Logic also triggers the process if e new COMPANY code changes add the and prefixes to the API.Existing COMPANY model operations will be switched to APIsIQVIA users will use - this is required to keep the old logic, in the future old will be deprecated and removed./ APIs are transparent for the external clients, this is handled on the HUB sideDCR IQVIA flowOLD IQVIA model logicDCR model logicChina - model transformation flowAdditionall, microservice used to transform COMPANY model events to IQVIA modelThe microservice used the predefined mapping and transforms the output events to the target output topicThe logic contains also the Reference Attributes lookup like: - get HCP → (Workplaces using COMPANY ContactAffiliations) - get (MainHCO using COMPANY OtherHCOtoHCOAffiliations)The output is combined and contains full information about all and objects (same as on IQVIA)Model Mapping (IQVIA<->COMPANY)Model mapping documentTransformation used during calls or events streaming processing User Profile ( user)User Profile for usercontains all details and configuration properties in one l /CrosswalkGeneratrs are configured in one file and are shared across all HUB microservices. TriggersDescribed in the separated sub-pages for each pendent componentsDescribed in the separated sub-pages for each cuments with HUB detailsmapping China_attributes.xlsxAPI: China_HUB_cxdcr: China_HUB_DCR_cx" }, { "title": " IQVIA - current flow and user properties + COMPANY changes", "": "", "pageLink": "/pages/tion?pageId=", "content": " this page, the current IQVIA flow is described. Contains the full description, and complex on IQVIA end with all details about HUB configuration and properties used for the IQVIA the next section of this page, the COMPANY changes are described in a generic way. More details of the new COMPANY complex model and adjustments were described in other subpages. 
IQVIACurrent process notes: uses the createHCP operation (the object with affiliation to HCO(Workplace) and source is the only source that creates DCRsCurrent operations used by details: (only used by event hub user)CreateHCORoute (china_apps)CreateHCPRoute (china_apps and (as a part of a createHCP route where is executed)UpdateHCPRoute (china_apps)Users:eventhubchina_appsmap_channelSources:GRVEVRMDEFACECN3RDPARTYMap_ChannelGRV source is there with CN countryManagerManager affiliations activation and configuration\naffiliationConfig:\n hcpToL1HcoRefAttributeName:\n Workplace:\n - country: "CN"\n hcpToL2HcoRefAttributeName:\n MainWorkplace:\n - country: "CN"\n hcoToHcoRefAttributeName:\n MainHCO:\n - country: "CN"\n waitForNewHcoDCRApprove:\n - country: "CN"\n\n\nDCRs current legacy config\ndcrConfig:\n dcrProcessing: routeEnableOnStartup: deadLetterEndpoint: "file:///opt/app/log/rejected/"\n externalLogActive: activationCriteria:\n NEW_HCO:\n - country: "CN"\n sources:\n - "CN3RDPARTY"\n - "FACE"\n - "GRV"\n NEW_HCP:\n - country: "CN"\n sources:\n - "GRV"\n NEW_WORKPLACE:\n - country: "CN"\n sources:\n - "GRV"\n - "MDE"\n - "FACE"\n - "CN3RDPARTY"\n - "EVR"\n\n externalDCRActivationCriteria:\n - country: "CN"\n sources:\n - "CN3RDPARTY"\n - "FACE"\n - "GRV"\n\n continueOnHCONotFoundActivationCriteria:\n - country: "CN"\n sources:\n - "GCP"\n - countries:\n - AD\n - BL\n - BR\n - DE\n - ES\n - FR\n - FR\n - GF\n - GP\n - IT\n - MC\n - MF\n - MQ\n - MU\n - MX\n - NC\n - NL\n - PF\n - PM\n - RE\n - RU\n - TR\n - WF\n - YT\n sources:\n - GRV\n - GCP\n validationStatusesMap:\n VALID: validated\n NOT_VALID: notvalidated\n PENDING: pending\n\n delayPrcInSeconds: 3600\n : "{{env_name}}-gw-dcr-requests"\n\n\nUsers that use country in HUB:china_apps\n- name: "china_apps"\n description: " applications access user"\n defaultClient: "ReltioAll"\n roles:\n - "CREATE_HCP"\n - "CREATE_HCO"\n - "UPDATE_HCO"\n - "UPDATE_HCP"\n - "GET_ENTITIES"\n - "RESPONSE_DCR"\n - "LOOKUPS"\n countries:\n - "CN"\n sources:\n - "CN3RDPARTY"\n - "MDE"\n - "FACE"\n - "EVR"\n\n\n\nmap_channel\n- name: "map_channel"\n description: "Map Channel (Handler) account"\n defaultClient: "ReltioAll"\n roles:\n - "UPDATE_HCP"\n - "CREATE_HCP"\n - "CREATE_HCO"\n - "DELETE_CROSSWALK"\n countries:\n - "CN"\n - "AD"\n…\n sources:\n - "GRV"\n - "GCP"\n\n\nCallback-Service:refLookupConfig\nrefLookupConfig:\n - country: CN\n maxDepth: 2\n useCache: true\n entities:\n - type: HCP\n attributes:\n - Workplace\n - type: HCO\n attributes:\n - MainHCO\nThe callback service is adding enrichment to . Workplace and inHCO objects – In mongo and in published events we are storing more information than the Reltio. The result is that we have the full data and and full data and inHCO full data. The MainHCO Workplace is enriched by references. 
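The enrichment just described can be pictured with the following simplified Java sketch: for the configured reference attributes (e.g. Workplace on HCP, MainHCO on HCO for CN), fetch the referenced entity through the manager and embed its attributes under the reference attribute, recursing up to maxDepth. The Entity/RefAttribute records and the fetchEntity function are assumptions for the example, not the callback-service model.

// Illustrative sketch of depth-limited reference-attribute enrichment.
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class RefEnrichmentSketch {

    record RefAttribute(String name, String refEntityUri, Map<String, Object> embeddedAttributes) {}
    record Entity(String uri, String type, List<RefAttribute> refAttributes) {}

    public static void enrich(Entity entity,
                              Map<String, List<String>> attributesPerType, // e.g. HCP -> [Workplace]
                              Function<String, Entity> fetchEntity,        // manager lookup, ideally cached
                              int maxDepth) {
        if (maxDepth <= 0) {
            return; // configured depth reached - stop walking references
        }
        List<String> configured = attributesPerType.getOrDefault(entity.type(), List.of());
        for (RefAttribute ref : entity.refAttributes()) {
            if (!configured.contains(ref.name()) || ref.refEntityUri() == null) {
                continue; // only enrich attributes named in refLookupConfig
            }
            Entity referenced = fetchEntity.apply(ref.refEntityUri());
            // copy the referenced entity's state under this reference attribute (mutable map assumed)
            ref.embeddedAttributes().put("refEntityAttributes", referenced);
            // recurse so that e.g. Workplace -> MainHCO is also enriched (depth 2)
            enrich(referenced, attributesPerType, fetchEntity, maxDepth - 1);
        }
    }
}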
The and Publisher move to data that contains full information in these lished events and are enriched with this data.Event publisher:\n- id: hcp-china\n selector: "(conciliationTarget==null)\n && .headers.eventType in [' in ['cn']\n && ['CN3RDPARTY', '', 'FACE', 'EVR', '', 'GCP', ' destination: "prod-out-full-mde-cn"\nPublishing of events and sources entities, full events (data is trimmed)COMPANYThe key concepts and general description of COMPANY adjustments:Current IQVIA flow should work only on old and will be deprecated in the futureOn the new COMPANY model there will be and APIs versions transparent for the external client, the is a new logic that will be used by all clients and also a client with the IQVIA modelOptimization of /batch/hcp method is made as a part of these changes because now all APIs allow to the provision of the list of entities. Created methods:New Service ( input bulk or single entity)- (simple method without affiliated ) (array of entities)- (complex method with affiliated ) (array of entities)- (array of entities)- (array of entities)Transformation executed if:Source: IQVIA (user profile configuration)Target: COMPANY (user profile configuration)Then execute the transformation and complex with affiliated -router service will be used to make a transparent transition between and APIs2 methods and v2All COMPANY clients using the COMPANY model will be switched to V2V1 will be removed in the future after IQVIA will be deprecatedTransformation LIB (full description on the different subpage):transformIqviaToCOMPANYtransformCOMPANYToIqviaUser Profile - Feature switchIQVIA vs COMPANY model on user configuration:User Profile objects will be provided. In one file whole configuration shared across all components will be present. Publishing changes: Selective Router - new microservice – translates events from the IQVIA model to the COMPANY modelInput: COMPANY model topicEnrich with data (workplace/mainHCO)Output: target COMPANY modelOpen API Documentation on contains the whole description, and documentation is managed in code and automatically generated.  processIntegrate manager complex method with dcr-service-2 (using triggers) Create requests that have the model in dcr-service-2K8s separated environmentAPAC-China-DEV is a separate environment used for the testing. The environment is set up dynamically on K8sThe component changes related to this adjustment:Reltio-Subscriber component is working on DEV as an events router:There is only one queue, but 2 output topics in the subscriber publisher. The event router makes a decision if we need to move this event to -DEV or -DEV (e.g. profiles tagged with -test-cases). Reltio-subscriber reads the tag name and pushes this event to topic {tag-name} – specified number of tag names allowed in publishing to output topic 2 profiles – test mode. 
PROD – normal mode by default normal PROD mode Manager Changes Create HCP/ operations used by HUB automated integration tests adding the -TEST tag that is routed only to -DEV environment  () Key concepts and changesCrosswalk Generator - configured in User Profile -allows to automatically generate a crosswalk when missing:(common) CrosswalkGenerator – first type (implementation) UUID generator (autofill: Type <>, Value: , SourceTable:)associated with the and User (when the user does not provide the crosswalk we can generate an or crosswalk)For example – if the missing filaitedHCO crosswalk then we will generate a new oneFind Service - configured in User Profile - contains the implementation of multiple search cases. User can be configured to use a specific set of searches. Used for example to find related to the HCP in Complex nd Object Method (_findObject (getByUri/getByCrosswalk/getByName e.t.c.):UserProfile configuration drivenInput entity objectSearch ObjectURICrosswalkSearch method (Reltio (?filter) ) – getByName (search by Reltio Name attribute - configurable)ere is a possibility to add multiple different searches or configure current searches by defining the attributes namesTrigger - configured in User Profile. Contains the mode implementation. The trigger is executed in the following situation:Find Service execution → result → decision to be madeDecisionFoundCreate with and ( create ) -> HCPNotFoundUserProfile: TriggerType configurationFunction result – (ACCEPT OR REJECT + ObjectToCreate)TriggerTypeCREATE (ACCEPT , object)IGNORE (ACCEPT , nullObject)REJECT  (REJECT , nullObject)DCR (ACCEPT, DCRObject)(custom function – can be Lookup) (customFunction(Object) (return CREATE/IGNORE/REJECT)) - for example used in to lookup to the STD_DPT name in and make a decision based on lookup result. " }, { "title": "China Selective Router - model transformation flow", "": "", "pageLink": "/display//China+Selective+Router+-+model+transformation+flow", "content": " selective router was created to enrich and transform event from COMPANY model to IQIVIA model. Component is also able to connect related mainHco with , based on reltio connections , in model its reflected as in attribute.Flow diagramStepsCollect event from input topicEnrich event - based on configuration collect and main entitiesfind attribute with refEntity uri call reltio thrue mdm-manager to collect all related and mainHco entities return event with list of , and list of with mainHco based on reltio connections and put mainHco attribute to hcoiterate by list of and call reltio to list of connection for current hcoif connection list is not empty and contains entity uri from list of mainHcoput exisitng mainhco to in 'OherHcoToHco' attribure (Name of attibute can be changed in configuration)Transform event from COMPANY model to modelinvoke HCPModelConverter wiht base evnet, list of and list of mainHcoresult of converter will be entity in modelput entity in output event to output topicTriggersTrigger actionComponentActionDefault timekafka messageeventTransformerTopologytransform event to modelrealtimeDependent componentsComponentUsageMdm managergetEntitisByUrigetEntityConnectionsByUriHCPModelConvertertoIqviaModel" }, { "title": "Create HCP/HCO complex methods - IQVIA model (legacy)", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThe IQVIA user uses the following methods to create the HCP HCO objects - Create/Update . On this linked page the calls flow is described. 
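As background for the create flows detailed in the steps below, here is a minimal Java sketch of the shared [SEARCH LOGIC] fallback (object URI, then crosswalk, then match API restricted to auto rules); the MdmClient interface and its method names are assumptions for the example, not the real manager client.

// Illustrative sketch: find an entity by URI, then by crosswalk, then by a single unambiguous auto match.
import java.util.List;
import java.util.Optional;

public class FindEntitySketch {

    interface MdmClient {
        Optional<Object> getByUri(String objectUri);
        Optional<Object> getByCrosswalk(String type, String value);
        List<Object> findAutoMatches(Object entity); // entities/_matches result filtered to *Auto* rules
    }

    public static Optional<Object> findEntity(MdmClient client, String objectUri,
                                              String crosswalkType, String crosswalkValue,
                                              Object entityForMatching) {
        if (objectUri != null) {
            Optional<Object> byUri = client.getByUri(objectUri);
            if (byUri.isPresent()) return byUri;
        }
        if (crosswalkType != null && crosswalkValue != null) {
            Optional<Object> byCrosswalk = client.getByCrosswalk(crosswalkType, crosswalkValue);
            if (byCrosswalk.isPresent()) return byCrosswalk;
        }
        List<Object> matches = client.findAutoMatches(entityForMatching);
        if (matches.size() == 1) {
            return Optional.of(matches.get(0)); // single auto match - safe to reuse
        }
        return Optional.empty(); // no match, or ambiguous matches - treat as not found
    }
}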
The most complex and important thing is the following sections for users:Additional logic that is activated in the following cases:3 - during update parentHCO attribute is delivered in the request4 - during create/update affiliations are delivered in the request5 - during creation based on the configuration-specific sources are enriched with cached Relation objects and this object is injected into the main Entity as the reference attributeIQVIA user also activates the logic using this Create HCP method. The complex description of this flow is here  IQVIA flowCurrently, the activation process from the IQVIA flow is described here - generation process ( flow is described here:  COMPANY flowThe below flow diagram and steps description contain the detailed description of all cases used in and methods in legacy code. = logic / STEPS:The following files contain the rules in IQVIA - executed once HUB receives the JSON from the Client. rules are self-documented, details can be found in the following files: affiliatedHCO : affiliatedhco-country--quality-rules.yamlHCP:hcp-country--quality-rules.yaml(common) qualityServicePipelineProvider – execute rules file(common) dataProviderCrosswalkGuardrail – execute GuardRailsAffiliatedHCO LOGIC (affiliatedHCOs attribute):DQ Rules check and validation on affiliatedHCOIf empty -> add only Country from and Crosswalk from HCPIf not empty -> affiliatedHCOsEntity is combined as one entity from all attributes from all arrays with from and Crosswalk from HCPCreating affiliation logic is activated when affiliatedHCOs exist and is not emptyCreate :Update (true/false) (PATCH/POST)autoCreateHCO is used in isAutoCreateHCO method below. It activates create operation for and for all countries when affiliatedHCO is not found. \naffiliationConfig:\n autoCreateHCO:\n - country: "ALL"\n sources:\n - "MAPP"\n - "CRMMI"\n\n\nRUN pAndReplaceHospitalThe logic was designed to get MainHCO from affiliatedHCO and find this in Reltio. Only 1 element of MainHCO can en executes the SEARCH LOGIC (by uri/crosswalk/attributes) and gets AUTO rules e result is set the MainHCO.objectUri=Reltio found URI (object from the request is assigned and exists id)Then in the next methods, MainHCO contains the copy of all attributes from Reltio (the object is different than received from the client)For each affiliatedHCOs do:extractL2HCO [MainHCO] from affiliatedHCOs: (it means get MainHCO - Hospital - from > 1 -> Exception HCPMappingException(rmat("HCO has more than 1 affiliated HCO")when =1 – assign to new Entity object:attributes (copy tributes)crosswalk = osswalkuri = MainHCO. refEntity. 
on returned Hospiatl do:[SEARCH LOGIC] ltioMDMClient#findEntity[SEARCH LOGIC] shared across all searches on and servicesFind by ObjectURIOrFind by CrosswalkOrFind by Match API (entities/_matches) where JSON body in MainHCO entity:Verify matches resultCheck only .*Auto.* rulesresultSize > 1 - return nullif there are more than 2 entities with different uris - return  return nullif 1 match – returns entityIf Search result == null -> EntityNotFoundxception – hospital not found If found result then: set in fEntity.objectUri, and copy all attributes from to MainHCO(replace MainHCO + trim)Hospital is found and have the Reltio URIRUN pAndCreateHCO – returns the mappedHCOs arrayThe main logic of this method is to create a with MainHCO in and assign the received from () or Create affilaitedHCO object ( and each affiliatedHCOs doFirst Check - " map dict is set, map and create standardized HCO"if (tHCORDMMDict() ( means if CN then return LKUP_STD_DEPARTMENTS )logic:add do mappedHCOs (mapAndCreateStandardizedHCO)The result of this function is to set the AffilaitedHCO(Workplace).URI based on the search.We translate using LKUP_STD_DEPARTMENTS  code and then make a search in .If found set URI from not found execute CreateHCO method and assign URI from based on created objects.IF is null, – translate the using the lookup function to Reltio with LKUP_STD_DEPARTMENTS and Source=osswalkIf OK and the code exitsSet Department name to response code ( is not found in RDM break and exit. This may cause that the will be not found and you will receive the error - HCO Entity no foundFind entity (affiliatedHCO) (logic same as [SEARCH LOGIC]) (here we search affiliatedHCO with MainHCO attribute)If found set affiliateHCO.uri = ( ) automatically” for inHCO object and assing to inHCO- NULL/CLEARThis clear/null on inHCO is required because we are executing the CREATE_HCO operation with 2 objects. 1. affiliatedHCO 2. MainHCO (parentHCO in operation)This will create an object with MainHCO in inHCO- SET crosswalk to EVR with Random UUIDExecute logic – [ = logic / STEPS (check below)] (parameters 1= procEntity(affiliatedHCO), 2=MainHCO)check creation result:notFound -> NotFoundExceptionfailed -> RuntimeExceptionOK, -> set affiliateHCO.uri = reltioFoundUriSecond Check – “Create or update affiliated HCO”FOR and for affiliatedHCOs create the in and assign the Reltio URI to affilaitedHCOs automatically without search and DPT AutoCreateHCO logic based on param – currently PROD activated for and for all countrieslogic:Execute logic – [ = logic / STEPS (check below)] (parameters 1= procEntity(affiliatedHCO), 2=null) - send only without HospitalHere we are adding parentHCO to the request. is affiliatedHCO eck creation result:failed -> RuntimeOK -> set affiliateHCO.uri = reltioFoundUriThird Check – “ auto-creation is disabled”just return the affiliatedHCO without the Reltio URI assingRUN createHCOAffiliations (Create affiliation to and ) creating affiliation to HCOExtends HCP object with ) and Workplace(affiliatedHCO) referenced each affiliatedHCOs doExtract MainHCO object (this will be on HCP)If empty throw RuntimeExceptionIf existsRUN createAffilationAsRef - l2HCORefName = ----------- Creating MainWorkplace relation from to MainHCOLogic that creates affiliation between and MainHCO or affiliation between and affiliatedHCO (used here and below)Below we add 2 more attributes to and lidationChangeDateIf MainHCO.objectURi exits. 
OKELSE search - (here objectUri will be, this search is used in CREATE_HCO method)If still not found throw NotFoundExceptionElse assign and attributeson MainWorkplaceRefEntity – MainHCO.ObjectURIRefRelation – Crosswalk (sourceTable=,type=osswalk.type,value=HASH)Attributes - emptyThen check if the same relation on already exists comparing the attribute with generated crosswalkIf this is a new add to a new attribute that is MainWorkplaceRewriting validation status from main entity or set from entity – preprare reference attributes on WorkplaceRefEntity attributes set from: or lidationStatusValidationChangeDate or lidationChangeDateRUN createAffilationAsRef - l2HCORefName = WorkplaceSame logic as above but:----------- Creating Workplace relation from to affiliatedHCOResult – contains and refRelation attributesAffiliatedHCO LOGIC throws in some places EntityNotFoundException - process this exception here:activate LOGICCreate NEW_HCO("NewHCO") with entity and affiliatedHCOs Check if is in activationCriteria for (/FACE/CN3RDPARTY) Then check continueOnHCONotFoundActivationCriteria for only GCP – this will create (continue) without affiliation(common) Reference Relation Attributes for (relations taken from method - Main HCP create an object in ReltioCheck response:(common) COMPANYGlobalCustomerIdactivate LOGIC If NEW_HCO – send Request related to affiliatedHCOs and put this to dcrRequestIf does not contains NEW_HCO NEW_HCP Request with affiliatedHCO and send RequestIf does not contains NEW_HCO and send REQUEST(common) resolve status – set created/update/failed/e.t.c(common) ValidationException/EntityNotFoundException/HCPMappingException/ExceptionEND  = logic / STEPS:The following files contain the rules in IQVIA - executed once HUB receives the JSON from the Client. rules are self-documented, details can be found in the following files: : hco-country--quality-rules.yaml(common) qualityServicePipelineProvider – execute DQ rules(common) dataProviderCrosswalkGuardrail – execute GuardRailsParentHCO ↔ AffiliatedHCO LOGIC (parentHCO attribute processing):RUN createAffilationAsRef - = MainHCO ----------- Creating MainHCO relation from to parentHCOIf parentHCO.objectURi exits, ok. 
(the objectURi can be from create methods but can be also emptu)ELSE -> [SEARCH LOGIC]ltioMDMClient#findEntity (described in still not found throw NotFoundException -> Parent not foundElse if found in object and put MainHCO ref attribute:  – parentHCO.ObjectURIRefRelation – Crosswalk (sourceTable=MainHCO,type=osswalk.type,value=HASH)Attributes - emptyThen check if the same relation on already exists comparing the MainHCO attribute with generated crosswalkIf this is a new add to a new attribute that is MainHCO(common) Reference Attributes for method - create an object in ReltioCheck response:(common) Register COMPANYGlobalCustomerId(common) resolve status – set created/update/failed/e.t.c(common) ValidationException/EntityNotFoundException/HCPMappingException/ExceptionENDTriggersTrigger actionComponentActionDefault timeoperation linkREST callManager: /hco /hcp /mcocreate specific objects in MDM systemAPI synchronous requests - realtimeCreate/Update callManager: GET /lookupget lookup Code from ReltioAPI synchronous requests - realtimeLOV readREST callManager: GET /entity?filter=(criteria)search the specific objects in the systemAPI synchronous requests - realtimeSearch EntityREST callManager: GET /entityget Object from RetlioAPI synchronous requests - realtimeGet EntityKafka Request DCRManager: DCR eventpush Kafka DCR EventKafka asynchronous event - realtimeDCR IQVIA flowDependent componentsComponentUsageManagersearch entities in MDM systemsAPI REST and secure accessReltioReltio MDM systemDCR legacy processor" }, { "title": "Create complex methods - COMPANY model", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThis is used to process complex requests. It supports the management of entities with the relationships between them. The user can provide data in the IQVIA or COMPANY model.Flow diagramFlow diagram (overview)(details on main diagram)Steps HCP Map HCP to COMPANY modelExtract parent - MainHCO attribute of affiliated entityExecute search service for affiliated and parent HCOIf affiliated or parent not found in MDM system: execute trigger serviceOtherwise set entity URI for found objectsExecute complex service for request - affiliated  and parent entitiesMap response to contact affiliations attributecreate relation between and affiliated relation between and parent HCOExecute HCP simple serviceHCP search entity serviceSearch entity service is used to search for existing entities in the system. This feature is configured for user via searchConfigHcpApi attribute. This configuration is divided for and affiliated entities and contains a list of searcher implementations - searcher tributedescriptionHCOsearch configuration for affiliated entityMAIN_HCO search configuration for parent entitysearcherTypetype of searcher implementationattributesattributes used for attribute search implementationHCP trigger serviceTrigger service is used to execute action when entities are missing in MDM system. 
This feature is configured for user via triggerType igger typedescriptionCREATEcreate missing or parent via complex request for missing objectsIGNOREignore missing objects, flow will continue, missing objects and relations will not be createdREJECTreject request, stop processing and return response to clientFlow diagram  (overview)(details on main diagram)Steps HCOMap request to COMPANY modelIf hco.uri attribute is null then create entityCreate relationif is not null then use to create other affiliationsif parentHCO.uri is null then use search service to find entityif entity is found then use is to create other affiliationsif entity is not found then create parentHCO and use to create other affiliationsif Relation exists then do nothingif doesn't exist then create relationTriggersTrigger actionComponentActionDefault timeREST callmanager hcp/complexcreate , objects and relationsAPI synchronous requests - realtimeREST callmanager complexcreate objects and relationsAPI synchronous requests - realtimeDependent componentsComponentUsageEntity search servicesearch entity HCP API opertaionTrigger serviceget trigger result opertaionEntity management serviceget entity connections" }, { "title": "Create HCP/HCO simple methods - COMPANY model", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionV2 API simple methods are used to manage the entities - HCP/HCO/ey support basic request with COMPANY model.Flow diagramSteps Crosswalk generator - auto-create crosswalk - if not exists Entity validationAuthorize request - check if user has appropriate permission, country, sourceGetEntityByCrosswalk operaion-  check if entity exists in reltio, applicable for PATCH operationQuality service - checks entity attributes against validation pipelineDataProviderCrosswalkCheck - check if entity contributor provider exists in reltioExecute HTTP request - post entities Reltio operationExecute GetOrRegister COMPANYGlobalCustomerID operation Crosswalk generator serviceCrosswalk generator service is used for creating crosswalk when entity crosswalk is missing. This feature is configured for user via crosswalkGeneratorConfig tributedescriptioncrosswalkGeneratorTypecrosswalk generator implementation typecrosswalk type valuesourceTablecrosswalk source table valueTriggersTrigger actionComponentActionDefault timeREST callManager: /hcpcreate HCP objects in MDM systemAPI synchronous requests - realtimeREST callManager: /hcocreate objects in MDM systemAPI synchronous requests - realtimeREST callManager: /mcocreate objects in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageCOMPANY RegistrygetOrRegister operationCrosswalk generator servicegenerate crosswalk opertaion" }, { "title": " IQVIA flow", "": "", "pageLink": "/display//DCR+IQVIA+flow", "content": "DescriptionThe following page contains a detailed description of flow for clients. 
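A minimal Java sketch of the crosswalk generator service described in the simple methods above: when the client does not provide a crosswalk, one is generated from the user-profile configuration with a random UUID value; the record and configuration names are assumptions for the example.

// Illustrative sketch: auto-create a UUID crosswalk when the incoming entity has none.
import java.util.List;
import java.util.UUID;

public class CrosswalkGeneratorSketch {

    record Crosswalk(String type, String value, String sourceTable) {}

    record CrosswalkGeneratorConfig(String crosswalkGeneratorType, String type, String sourceTable) {}

    /** Returns the existing crosswalks, or a single generated UUID crosswalk when none were provided. */
    public static List<Crosswalk> ensureCrosswalk(List<Crosswalk> provided, CrosswalkGeneratorConfig config) {
        if (provided != null && !provided.isEmpty()) {
            return provided; // client supplied a crosswalk - nothing to generate
        }
        Crosswalk generated = new Crosswalk(config.type(), UUID.randomUUID().toString(), config.sourceTable());
        return List.of(generated);
    }
}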
The logic is complicated and contains multiple rrently, it contains the following:Complex business rules for generating DCRs,Limited flexibility with IQVIA tenants, end-to-end technical processes (e.g., hand-offs, transfers, etc.)The flow is related to numerous file transfers & e idea is to make a simplified flow in the COMPANY model - details described here - COMPANY flowThe below diagrams and description contain the current state that will be deprecated in the future.Flow diagram - Overview - high levelFlow diagram - Overview - simplified viewStepsHUB LOGICHUB Configuration overview: CONFIG AND CLASSES:Logic is in the MDM-MANAGERNewHCODCRService - related to NEW_HCO, NEW_HCO_L1, NEW_HCO_L2NewHCPDCRService - related to NEW_HCPNewWorkplaceDCRService - related to  Config:\ndcrConfig:  \n dcrProcessing:   routeEnableOnStartup:   deadLetterEndpoint: "file:///opt/app/log/rejected/"\n  externalLogActive:   activationCriteria:\n          - country: "CN"\n        sources:\n          - "CN3RDPARTY"\n          - "FACE"\n          - "GRV"\n    NEW_HCP:\n      - country: "CN"\n        sources:\n          - "GRV"\n    NEW_WORKPLACE:\n      - country: "CN"\n        sources:\n          - "GRV"\n          - "MDE"\n          - "FACE"\n          - "CN3RDPARTY"\n          - "EVR"\n\n  continueOnHCONotFoundActivationCriteria:\n    - country: "CN"\n      sources:\n        - "GCP"\n    - countries:\n        - AD\n        - BL\n        - BR\n        - DE\n        - ES\n        - FR\n        - FR\n        - GF\n        - GP\n        - IT\n        - MC\n        - MF\n        - MQ\n        - MU\n        - MX\n        - NC\n        - NL\n        - PF\n        - PM\n        - RE\n        - RU\n        - TR\n        - WF\n        - YT\n      sources:\n        - GRV\n        - GCP\n  validationStatusesMap:\n    VALID: validated\n    NOT_VALID: : pending\nFlow diagram - DCR ActivationStepsIQVIA/  ACTIVATION LOGIC/ACTIVATION CRITERIA:HCPDCRService#isActive :(common) on IQVIA the first check is on the source and country(common) is activated for CN for source (TRUE – ACTIVATE)(common) NEW_HCO is activated for CN for CN3RDPARTY, FACE, source (TRUE – ACTIVATE)(common) is activated for CN for , , CN3RDPARTY, FACE, source (TRUE – ACTIVATE)The first 3 isActive checks are related to common checks, here we are checking the country and source of the and then we can verify more details.(REVALIDATION LOGIC) Then we check if the flag on is revalidated=trueIf trueGet From Reltio the current state by entityUri( Reltio Change requests connected to the all AWAITING_REVIEW with type NEW_HCPCheck HCP validation statusesConfigured statuses: "pending", "partial-validated", "partialValidated"From Entity get attributeCompare valuesIf match foundGet EVR crosswalksPatch entity using EVR crosswalk set to pending(NEW HCP isActive LOGIC) activation logic check (detailed):NEW_HCP detailed ACTIVATORCheck if is pendingIf False: is NOT pending:Check current valueIf OV ValidationStatus is "notvalidated" or "partialValidated" do further checks:Get LUD CrosswalkGet (EVR)DCR LUD Crosswalk(Check) if EVR changes are fresher then the changes on return FALSEGet current valueIf pending or partialValidated go to “If true, next”else return reject return FALSEIf true, next(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSE(Check)Get Change Requests from Reltio with AWAITING_REVIEW if found return from , if null return FALSE(Check) Get For the and check if exists, if null return if above checks were not fulfilled return (TRUE – 
ACTIVATE)(NEW isActive LOGIC) activation logic check cd: detailed ACTIVATORGet ValidationStatus value from source HCP entityCheck if is equal to "enabled","validated","pending","A.3", "partial-validated", "partialValidated"If true return FALSE – is not activated for these statusesNext go to next Check(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSEGET attributeGet attributeNow once we have Name we need to:Get details from Reltio related to this specific if any info in containsHospital nameOr nameIf true it means that there are already some DCRs created in for this HCP in relation to this Department/WorkplaceReturn REQUST_ALREADY_EXISTS and return FALSE (not activated)Finally, if above checks were not fulfilled return (TRUE – ACTIVATE)(NEW WORKPLACE isActive LOGIC) activation logic check cd:  detailed ACTIVATORGet ValidationStatus value from source HCP entityCheck if is equal to "enabled","validated", " true return FALSE – is not activated for these statusesNext go to next Check(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSE(Check) Verify places – if null - return FALSE (not activate)Next check places, check all elements andRemove duplicated refEntity.objectUrisRemove Workplaces with "enabled","validated","pending" ValidationStatusesCheck the output list – if there are 0 Workplaces or ze() <2 then return FALSE, there are less than 2 workplaces so rejectNow filter Workplaces and find , check all elements andIf there are any workplaces related to (EMPTY) crosswalk name then filter them out, currently make for all because the condition is not metCheck ChangeRequests connected with the current HCPGet details from Reltio related to this specific if any info in contains created for the current for which we are trying to create true it means that there are already some DCRs created in for this HCP in relation to this REQUST_ALREADY_EXISTS and return FALSE (not activated)Finally, if the above checks were not fulfilled return (TRUE – ACTIVATE)Kafka sender - produce event to rvice.dcr.AbstractDCRService#sendDCRRequest EVENT DCR SENDSend a request from : class published to topic prod-gw-dcr-requestsFlow ( processor)StepsReceiver ( processor) (Camel) - ute.DCRServiceRoute LOGIC:DCRServiceRouteReceive request: ${body} – log input bodyCheck Delay time and postpone the to next runtimeDelay = Current Time – DCR Create Time (in new object initialization time)if timeDelay < 240 minDelay based on session or delayTime (depending what is lower value)Thread SleepNote: current sessionTimeout on PROD is 30 secondsElse Proceed the rvice.dcr.AbstractDCRService#processDCRRequest LOGIC:(common) Get From Reltio current ) Check Activation (only abstract, by source and country) criteria, if active true:(common) Start processing request(common) Create Change Request in Reltio (empty container)(common) Add External InfoHCPWithHCOExternalInfo objectSet NEW_HCP/ typeSet Reltio HCP RUISet Source entity crosswalkProcess DCR Custom Logic (NEW_HCP/NEW_HCO/NEW_WORKPALCE),Description belowUpdate in with created PfDataChangeRequest objectPfDataChangeRequest object is used by IQVIA and this is exported in excel file to = CreatedCrosswalk EVRIn case of error delete Reltio ChangeRequest (container) and throw ExceptionIf ok set the status to ACCEPTEDOtherwise REJECTEDNewHCPDCRService - STEPS  - Process DCR Custom Logic (NEW_HCP)NEW_HCP custom logicCreate a new type Entity (java object) to validatedSet Crosswalk = EVR – get existing or create newPATCH Entity HCP Object to 
Reltio using change request id (update existing container only)In ExternalInfo set affilaitedHCOs objectNewHCODCRService - STEPS  - Process DCR Custom Logic (NEW_HCO, custom logicCreate a new type Entity (java object)Set crosswalks from the entitySet ExternalInfo department and hospital names Get department name from Request form hospital name from Request from inHCOExecute HCODCRService#processAffiliations (method return status: 1 – NEW_HCO_L1(Workplace) or 2 – ), logic:Get affiliatedHCOs, for each element doFind L2HCO entity:Get MainHCO element from affiliatedHCO objectIf is null, return nullIf not nullFind object in using operationIf not foundSet EVR crosswalk on MainHCOPOST Entity HCO(MainHCO) Object to Reltio using change request id (update existing container only)And return object/entityURIIf found return object/entityURIFind L1HCO entity:Check if L2HCO is not null, then replace MainHCO attributes using the one found from and set refEntity uriFind Entity using standard search /crosswalk/match)If not foundSet EVR crosswalkRemove MainHCO() from objectsetup affiliation l1HCO -l2HCO (using reference attributes add to Workplace MainHCO reference attribute to create a relation between these 2 objectsPOST Entity HCO(Workplace with MainHCO) Object to Reltio using change request id (update existing container only)And return object/entityURIIf found return object/entityURISet enrich with:affliatedHCO that contains + objectsSet status:2 - If L2HCO URI is null1 – if is nullclear MainHCO to avoid Reltio errorif L2HCO existsadd reference attribute to HCP with reference to object (MainHCO)add Workplace reference attribute to HCP with reference to object (affiliatedHCO)PATCH Entity HCP Object to Reltio using change request id (update existing container 2 If status = 1 – set NEW_HCO_L1 type in externalInfoIf status =2 – set NEW_HCO_L2 type in externalInfoOtherwise, is not valid, all affiliations found, create affiliation without an entity in ( is not valid in that case)NewWorkplaceDCRService - STEPS  - Process DCR Custom Logic (NEW_WORKPLACE)Get entity from objectGet Workplace attributesRemove duplicated entityUris objectsFind workplaces in using GET operation and save EntityURisExecute the WorkplaceDCRService#updateAffiliationsLogic (response = false)The method input is HCPDCR IDList of AffiliatedHCOs(Workplaces) found in by GET operationThe result is HCP+HCO created in the Change requestFlowGet the Change request parameterGet source Entity from ReltioRemove changes from Change RequestCreate HCP Object new empty elementSet crosswalk to acceptedWorkplaces(SET) and add all Workplaces found in Workplaces from HCP object from workpalcesURIS toIf response=true – get from from affiiatedHCOs URISIf response-false – get from Workpalces from object from each WrokplaceURI do:Get Entity HCO from Reltio ObjectPATCH Entity HCP Object to Reltio using change request id (update existing container only) – the input request is HCP object + affilaitedHCOs object found from the affiliatedHCOsUris with new ids created in ReltioIn the set the affilaitedHCOs array to EntityURIS found in diagram - Response - process Response from DCRResponseRoute: response processing:REST apiActivated by china_apps user based on the IQVIA EVRs export Used by to accept/reject(Action) in ReltioDCRResponse () route, possible operations:POST (dcr_id,action)Dcr_id – Reltio Change Request IdAction – accept/updateHCP/updateHCO/ updateAffiliations/reject/merge/mergeUpdateAuthentication service, check user and roleCheck headersDcr_id is 
mandatorymergeUris structure is winner,looser with 2 idsCheck if in exists, otherwise throw NotFoundException and update the PfDataChangeRequest object in to closedLogic:If in is other than AWAITING_REVIEW throw BadRequestException with details that is already closed (because it means it is now ACCEPTED or the PfDataChangeRequest object in to completedCheck Action and do (FOR NEW_HCP):Accept: NEW_HCP acceptDCRCompose Entity and setValidationStatus = partialValidated (if partial flag in method)ValidationStatus = validated (if not partial)Set ValidationChangeDate to current dateGet From Reltio with id from current state from from current data from entity and enrich the Workplaces objects from using GET operation – retrieve method inputHCP with ValidationStatus/ValidationChangeDate/CountryAffiliatedHCOs from Reltio (Workplaces that were get from info)Exectue NewHCPDCRService#updateHCP LOGIC:Common updateHCP object method that updates in and closes the DCRUsed in NEW_ceptDCR/rejectDCR/updateHCO method andGet From Reltio with id from the current state from from current EntitySet EVR crosswalkSet (validated) and (current date) if missing / If not get from requestIf input exists (only when Workplaces are in request)mapAndCreateHCO (create HCOs in Reltio)execute modifyAffiliationStatusThis method checks if in all Workplaces were created and compares it to the list of Workplaces in input objectset validated or notvalidated statuses on depending on found in ReltioThe result of these 2 methods are Workplaces created in with parameterCreate HCP with affiliated Workplaces(optionally) in – execute complex updateHCP method -> now data is created in ReltioRemove changes from from – because changes were applied manually and had only a container for changes, we need to clear this to not apply it one more ly in – CLOSEDCheck the merge entities parameter and merge ject: NEW_HCP rejectDCRCompose Entity and setValidationStatus = notvalidatedSet to the current dateupdateHCP method inputHCP with ValidationStatus/ValidationChangeDate/CountryExecute NewHCPDCRService#updateHCPUpdateAffiliation: NEW_HCP updateAffilations logic:(input Entity object from Client)N/A for : updateHCO logic:(input Entity object from Client)N/A for NEW_HCPUpdateHCP: NEW_HCP updateHCP:What is the difference between acceptDCR and updateHCP ?In accept we can set to validated or partialValidate and we get all Workplaces from ReltioIn updateHCP we receive the from client together with Id. 
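To illustrate the two response paths side by side, a minimal Java sketch follows (class, method and field names are hypothetical and simplified, not the actual MDM-MANAGER code): acceptDCR composes the validation status change on the server, while updateHCP forwards the entity supplied by the client.

```java
// Hypothetical, simplified sketch of the two response paths; not the actual MDM-MANAGER code.
import java.time.LocalDate;
import java.util.Map;

public class DcrResponsePathsSketch {

    /** Minimal stand-in for the HCP attribute patch applied through the change request. */
    record HcpPatch(Map<String, String> attributes) {}

    /** acceptDCR: the HUB composes the validation status change itself. */
    static HcpPatch acceptDcr(boolean partial) {
        String status = partial ? "partialValidated" : "validated";
        return new HcpPatch(Map.of(
                "ValidationStatus", status,
                "ValidationChangeDate", LocalDate.now().toString()));
    }

    /** updateHCP: the entity changes are supplied by the client together with the DCR id. */
    static HcpPatch updateHcp(Map<String, String> clientEntity) {
        return new HcpPatch(clientEntity);   // may touch attributes unrelated to the original DCR
    }

    public static void main(String[] args) {
        System.out.println(acceptDcr(false).attributes());
        System.out.println(updateHcp(Map.of("FirstName", "Li")).attributes());
    }
}
```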
We can apply changes generated by the Client, not related to the object that is currently in the end in both cases we close and accept the ChangeRequest(input Entity object from Client)Execute NewHCPDCRService#updateHCP method (described above)Check Action and do (FOR NEW_HCO):Accept: NEW_ HCO acceptDCRN/A – only user can use this by (updateHCP operation)Reject: NEW_HCO rejectDCRExecute _reject – Change Request is REJECTED in ReltioUpdateAffiliation: NEW_HCO updateAffilations logic:N/A for this requestUpdateHCO: updateHCO logic:Get From Reltio with id from current state from from current EntityGet List of Entities from Client request and execute the:WorkplaceDCRService#updateAffiliationsLogic: (response = true)logic described aboveTrue logic activates the following:Create HCO 1 outside of – object created in outside of - object create in affiliations are made and an object created in Reltio ( with id in Reltio with affiliations to already created objects in ( and ) but the still in DCR)UpdateHCP: NEW_HCO updateHCP logic:N/A for and do (FOR NEW_WORKPLACE):Accept: acceptDCRGet From Reltio with id from current state from from current EntityGet List of Workplaces from the Change Request HCP WorkplaceDCRService#updateAffiliationsLogic: (response = true)logic described aboveTrue logic activates the following:Create HCO 1 outside of – object created in outside of - object create in ReltioThen affilaitions are made and object created in Reltio ( with id in Reltio with affilaitions to already created objects in ( and ) but the still in DCR)Apply ChanteRequest in Reltio - ACCEPTEDReject: NEW_WORKPLACE rejectDCRApply Reltio Change Request with creation only object in ReltioUpdateAffiliation: NEW_WORKPLACE updateAffilations logic:Same as acceptDCR but the Workplaces list is received from The Client requestUpdateHCO: NEW_WORKPLACE updateHCO logicN/AUpdateHCP: NEW_WORKPLACE updateHCP logic: the PfDataChangeRequest object in to closedTriggersTrigger actionComponentActionDefault timeOperation linkDetailsREST callManager: /hcpcreate specific objects in MDM systemAPI synchronous requests - realtimeCreate/Update the requestKafka Request DCRManager: DCR eventpush Kafka DCR EventKafka asynchronous event - realtimeDCR IQVIA flowPush event to processorKafka Request DCRDCRServiceRoute: Poll Kafka evenConsumes eventsKafka asynchronous event - realtimeDCR IQVIA flowPoll/Consumes events and process itRest call - responseManager:DCRResponseRoute /dcrResponse/{id}/acceptupdates by (accept/reject .)API synchronous requests - realtimeDCR IQVIA flowAPI to accept DCRRest call - DCR responseManager:DCRResponseRoute /dcrResponse/{id}/updateHCPupdates by (accept/reject .)API synchronous requests - realtimeDCR IQVIA flowAPI to update HCP through DCRRest call - DCR responseManager:DCRResponseRoute /dcrResponse/{id}/updateHCOupdates by (accept/reject .)API synchronous requests - realtimeDCR IQVIA flowAPI to update through DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/updateAffiliationsupdates by (accept/reject .)API synchronous requests - realtimeDCR IQVIA flowAPI to update to affiliations through DCRRest call - DCR responseManager:DCRResponseRoute /dcrResponse/{id}/rejectupdates by (accept/reject .)API synchronous requests - realtimeDCR IQVIA flowAPI to reject DCRRest call - DCR responseManager:DCRResponseRoute /dcrResponse/{id}/mergeupdates by (accept/reject .)API synchronous requests - realtimeDCR IQVIA flowAPI to merge HCP entitiesDependent componentsComponentUsageManagersearch entities in MDM 
systemsAPI REST and secure accessReltioReltio legacy processor" }, { "title": " COMPANY flow", "": "", "pageLink": "/display/GMDM/DCR+COMPANY+flow", "content": "DescriptionTBD Flow diagram (drafts)StepsTBDTriggersTrigger actionComponentActionDefault timeDependent componentsComponentUsage" }, { "title": "Model Mapping (IQVIA<->COMPANY)", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThe interface is used to map between IQIVIA and COMPANY model.Flow diagram-MappingAddress ↔ Addresses attribute mappingIQIVIA MODEL ATTRIBUTE [Address]COMPANY MODEL ATTRIBUTE [Addresses]AddressPremiseAddressesPremiseAddressBuildingAddressesBuildingAddressVerificationStatusAddressesVerificationStatusAddressStateProvinceAddressesStateProvinceAddressCountryAddressesCountryAddressAddressLine1AddressesAddressLine1AddressAddressLine2AddressesAddressLine2AddressAVCAddressesAVCAddressCityAddressesCityAddressNeighborhoodAddressesNeighborhoodAddressStreetAddressesStreetAddressGeolocationLatitudeAddressesLatitudeAddressGeolocationLongitudeAddressesLongitudeAddressGeolocationGeoAccuracyAddressesGeoAccuracyAddressZipZip4AddressesZip4AddressZipZip5AddressesZip5AddressZipPostalCodeAddressesPOBoxPhone attribute mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEPhoneLineTypePhoneLineTypePhoneLocalNumberPhoneLocalNumberPhoneNumberPhoneNumberPhoneFormatMaskPhoneFormatMaskPhoneGeoCountryPhoneGeoCountryPhoneDigitCountPhoneDigitCountPhoneCountryCodePhoneCountryCodePhoneGeoAreaPhoneGeoAreaPhoneFormattedNumberPhoneFormattedNumberPhoneAreaCodePhoneAreaCodePhoneValidationStatusPhoneValidationStatusPhoneTypeIMSPhoneTypePhoneActivePhonePrivacyOptOutEmail attribute mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEEmailEmailEmailDomainEmailDomainEmailDomainTypeEmailDomainTypeEmailValidationStatusEmailValidationStatusEmailTypeIMSEmailTypeEmailActiveEmailPrivacyOptOutEmailUsernameEmailSourceSourceNameHCO mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTECountryCountryNameNameTypeCodeTypeCodeSubTypeCodeSubTypeCodeCMSCoveredForTeachingCMSCoveredForTeachingCommentersCommentersCommHospCommHospDescriptionDescriptionFiscalFiscalGPOMembershipGPOMembershipHealthSystemNameHealthSystemNameNumInPatientsNumInPatientsResidentProgramResidentProgramTotalLicenseBedsTotalLicenseBedsTotalSurgeriesTotalSurgeriesVADODVADODAcademicAcademicKeyFinancialFiguresOverviewSalesRevenueUnitOfSizeKeyFinancialFiguresOverviewSalesRevenueUnitOfSizeClassofTradeNSpecialtyClassofTradeNSpecialtyClassofTradeNClassificationClassofTradeNClassificationIdentifiersIDIdentifiersIDIdentifiersTypeIdentifiersTypeSourceNameOriginalSourceNameNumOutPatientsOutPatientsNumbersStatusValidationStatusUpdateDateSourceUpdateDateWebsiteURLWebsiteWebsiteURLOtherNames-OtherNamesName-Type (constant: OTHER_NAMES)OfficialName-OtherNamesName-Type (constant: OFFICIAL_NAME)Address*Addresses*Phone*Phone*HCP mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEDESCRIPTIONCountryCountryDoBDoBFirstNameFirstNamecase: (IQVIA -> COMPANY), if IQIVIA(FirstName) is empty then IQIVIA(Name) is used as COMPANY(FirstName) mapping resultLastNameLastNamecase: (IQVIA -> COMPANY), if IQIVIA(LastName) is empty then IQIVIA(Name) is used as COMPANY(LastName) mapping 
resultNameNameNickNameNickNameGenderGenderPrefferedLanguagePrefferedLanguagePrefixPrefixSubTypeCodeSubTypeCodeTitleTitleTypeCodeTypeCodePresentEmploymentPresentEmploymentCertificatesCertificatesLicenseLicenseIdentifiersIDIdentifiersIDIdentifiersTypeIdentifiersTypeUpdateDateSourceUpdateDateSourceNameSourceValidationSourceNameValidationChangeDateSourceValidationChangeDateValidationStatusSourceValidationStatusSpeakerSpeakerLevelSpeakerLevelSpeakerSpeakerTypeSpeakerTypeSpeakerSpeakerStatusSpeakerStatusSpeakerIsSpeakerIsSpeakerDPPresenceChannelCodeDigitalPresenceChannelCodeMETHOD PARAMContactAffiliationscase: (IQVIA -> COMPANY), param workplaces is converted to and added to ContactAffiliationsMETHOD PARAMContactAffiliationscase: (IQVIA -> COMPANY), param main workplaces are converted to and added to ContactAffiliationsWorkplaceMETHOD PARAMcase: (COMPANY → IQIVIA), param workplaces is converted to and assigned to WorkplaceMainWorkplaceMETHOD PARAMcase: (COMPANY → IQIVIA),  param main workplaces are converted to and assigned to MainWorkplaceAddress*Addresses*Phone*Phone*Email*Email*TriggersTrigger actionComponentActionDefault timeMethod , List workplaces, List mainWorkplaces, List addresses)realtimeMethod asstoCOMPANYModel(EntityKt  iqiviaModel, List workplaces, List mainWorkplaces)realtimeMethod asstoIqiviaModel(EntityKt  , List workplaces, List mainWorkplaces)realtimeMethod asstoCOMPANYModel(EntityKt iqiviaModel)realtimeMethod asstoIqiviaModel(EntityKt  COMPANYModel)realtimeDependent componentsComponentUsagedata-modelMapper uses models to convert between them" }, { "title": "User Profile ( user)", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionUser profile got new attributes used in tributeDescriptionsearchConfigHcpApiconfig search entity service for contains /MAIN_HCO search entity type configurationsearchConfigHcoApiconfig search entity service for APIsearcherTypetype of searcher implementationavailable values: [UriEntitySearch/CrosswalkEntitySearch/AttributesEntitySearch]attributesattribute names used in AttributesEntitySearchtriggerTypeV2 complex trigger configuration - action executed when there are missing entities in requestavailable values: [REJECT/IGNORE//CREATE]crosswalkGeneratorConfigauto-create entity crosswalk - if missing in requestcrosswalkGeneratorTypetype of crosswalk generator, available values: [UUID]typeauto-generated crosswalk type valuesoruceTableauto-generated crosswalk source table valuesourceModelsource model of entity provided by user for HCP/HCO complex,available values: [COMPANY,IQIVIA] Flow diagramTBDStepsTBDTriggersTrigger actionComponentActionDefault timeDependent componentsComponentUsage" }, { "title": "User", "": "", "pageLink": "/display/GMDM/User", "content": "The user is configured with a profile that is shared between all services. Configuration is provided via yaml files and loaded at boot time. To use the profile in any application, import the erConfiguration configuration from the mdm-user module. 
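For illustration only, a self-contained Java sketch of how a service could consume such a shared profile (roles, allowed countries and sources, default country - see the attribute table below); all class and field names are assumptions, not the actual mdm-user API.

```java
// Hypothetical, self-contained sketch; the real mdm-user classes are not shown on this page.
import java.util.Set;

public class UserProfileSketch {

    /** A few of the profile attributes listed in the table below. */
    record UserProfile(String name,
                       Set<String> roles,
                       Set<String> countries,     // allowed countries
                       Set<String> sources,       // allowed source crosswalks
                       String defaultCountry) {}

    /** A request passes only when both the country and the source are on the caller's allow-lists. */
    static boolean canUpdate(UserProfile profile, String country, String source) {
        return profile.countries().contains(country) && profile.sources().contains(source);
    }

    public static void main(String[] args) {
        UserProfile p = new UserProfile("example-client", Set.of("ENTITY_WRITE"),
                Set.of("CN", "FR"), Set.of("GRV", "FACE"), "CN");
        System.out.println(canUpdate(p, "CN", "GRV"));   // true
        System.out.println(canUpdate(p, "DE", "GRV"));   // false
    }
}
```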
This operation will allow you to use the class, which is used to retrieve er profile used for authenticationgetEntityUsesMongoCacheretrive entity from mongo cache in get entity operationlookupsUseMongoCacheretrive lookups from mongo cache in entities/relationships in response to the clientguardrailsEnabledcheck if contributor provider crosswalk exists with data provider crosswalkrolesuser permissionscountriesuser allowed countriessourcesuser allowed crosswalksdefaultClientdefault mdm client namevalidationRulesForValidateEntityServicevalidation rules configurationbatchesuser allowed batches configurationdefaultCountryuser default country, used in api-router, when country is not provided in requestoverrideZonesuser country-zone configuration that overwrites default api-router behaviorkafkauser kafka configuration, used in kafka management servicereconciliationTargetsreconciliation targets, used in event resend service" }, { "title": "Country Cluster", "": "", "pageLink": "/display//Country+Cluster", "content": "General assumptionsMDM HUB will be populating country cluster itially, only default cluster country will be sent. In future, other clusters can be calculated and distributed to downstream the current phase, the default clustering model is based on country anges are backward compatible for downstream systems if they are not interested in consuming the cluster faultCountrycluster is an optional attribute. In case of lack of mapping, it will not be included in .Example of mapping: CountrycountryClusterAndorra (AD)France (FR)Maroco (MC)France ( in . Enrichment of  events  with extra parameter defaultClusterCoutryIt will be calculated based on a new config table that maps countries to cluster countriesconfiguration table must be implemented on sideIt can be used in routing rules in filtering events based on defaultCountryCluster2. Add a new column COUNTRY_CLUSTER representing the default country cluster  in views:ENTITIES, , , ENTITY_UPDATE_DATES, MDM_ENTITY_CROSSWALKSAdd country cluster config table 3. Handling cluster country sent by PforceRx in process in a transparent wayIf a new entity then the country will be set based on the address country.If an entity exists then the country will be set based on the existing country in in the event model{  "eventType": "HCP_CHANGED",  "eventTime": ,  "countryCode": "MC",  “defaultCountryCluster": "FR",   "entitiesURIs": ["entities/ysCkGNx“  ] ,  "targetEntity":  {  "uri": "entities/ytY3wd9",  "type": "configuration/entityTypes/HCP",Changes on client-sideMULEMULE must map defaultCountryCluster to country sent to PforceRx in the pipeline.ODSODS ETL process must use column cluster_country instead of country while reading data from " }, { "title": "Create/Update ", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThe REST interfaces exposed through the Manager component used by clients to update or create objects. The update process is supported by all connected MDMs – Reltio and Nucleus360 with some limitations. At this moment is fully supported for entity types: , , . The supports only the update process. The decision which should be selected to process the update request is controlled by configuration. Configuration map defines country assignment to which stores country's data. Based on this map, selects the correct MDM system to forward the update e difference between Create and Update operations is the additional request during the update operation. 
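A minimal sketch, assuming a hypothetical findByCrosswalk adapter method, of the additional lookup that an update performs before validation; the create path skips it.

```java
// Sketch with hypothetical names of the extra lookup that distinguishes Update from Create.
import java.util.Map;
import java.util.Optional;

public class UpdateLookupSketch {

    interface MdmAdapter {
        Optional<Map<String, Object>> findByCrosswalk(String source, String value);
    }

    static Map<String, String> prepareUpdate(MdmAdapter mdm, String source, String value,
                                             Map<String, String> requestedChanges) {
        // Create needs no existing entity; Update first fetches the target by its crosswalk.
        mdm.findByCrosswalk(source, value)
           .orElseThrow(() -> new IllegalStateException(
                   "Entity not found for crosswalk " + source + "/" + value));
        // Validation against the retrieved state would happen here before forwarding the update.
        return requestedChanges;
    }

    public static void main(String[] args) {
        MdmAdapter stub = (s, v) -> Optional.of(Map.<String, Object>of("uri", "entities/123"));
        System.out.println(prepareUpdate(stub, "GRV", "ABC-1", Map.of("FirstName", "Li")));
    }
}
```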
During the update, an entity is retrieved from the by the crosswalk value for validation purposes. Diagrams 1 and 2 presents standard flow. On diagrams 3, , , 6 additional logic is optional and activated once the specific condition or attribute is provided. The diagrams below present a sequence of steps in processing client calls.Update :To increase Update / performance, the logic was slightly altered:ContributorProvider crosswalk is now looked up in entity not found by this crosswalk, fallback lookup using confirming that the ContributorProvider crosswalk exists in , add "partialOverride" to the request and continue processing with / logicFlow diagram1Create HCP/HCO/MCO2 Update (additional optional logic) Create/Update HCO with ParentHCO 4 (additional optional logic) Create/Update HCP with AffiliatedHCO&Relation5 (additional optional logic) Create/Update HCO with ParentHCO 6 (additional optional logic) Create/Update HCP with source crosswalk replace  client sends HTTP request to MDM Manager endpoint.Kong API Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to MDM Manager M Manager checks user permissions to call createEntity () operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with creating the specific object and returns created object in to the Client.During partialUpdate before update entity is retrieved from logic will be activated in the following cases:3 - during update parentHCO attribute is delivered in the request4 - during create/update affiliations are delivered in the request5 - during creation based on the configuration-specific sources are enriched with cached Relation objects and this object is injected to the main Entity as the reference attribute6 - during create/update when conditions are met, source crosswalk is replaced from to MAPP_ATTENDEETriggersTrigger actionComponentActionDefault timeREST callManager: /hco /hcp /mcocreate specific objects in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagercreate update Entities in MDM systemsAPI REST and secure accessReltioReltio MDM systemNucleusNucleus MDM system" }, { "title": "", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThe operation creates or updates the Relation of MDM Manager manages the relations in the MDM system. User can update the specific relation using a crosswalk to match or create a new object using unique crosswalks and information about start and end objectThe detailed process flow is shown below.Flow diagramCreate/Update RelationStepsThe client sends requests to the MDM Manager endpoint. receives requests and handles authentication.If the authentication succeeds, the request is forwarded to the  Manager Manager checks user permissions to call createRelation/updateRelation operation and the correctness of the request.If the user's permissions are correct, proceeds with the create/update operation.: after successfully update ( != failed), relations are cached in the MongoDB, the relations are then reused in (currently configured for the GBLUS ). 
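A hedged Java sketch of the caching behaviour just described, with hypothetical names; non-failed relations are stored per start-object crosswalk and later re-attached to the entity as reference attributes during the subsequent create.

```java
// Illustrative sketch (hypothetical names) of the relation cache described above.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RelationCacheSketch {

    record Relation(String startCrosswalk, String endCrosswalk, String type) {}

    static final Map<String, List<Relation>> CACHE = new HashMap<>();   // keyed by start crosswalk

    static void onRelationUpdated(String status, Relation relation) {
        if (!"failed".equalsIgnoreCase(status)) {
            CACHE.computeIfAbsent(relation.startCrosswalk(), k -> new ArrayList<>()).add(relation);
        }
    }

    static Map<String, Object> enrichHcpCreate(String hcpCrosswalk, Map<String, ?> request) {
        // Cached relations are injected as reference attributes so they are not lost on create.
        Map<String, Object> enriched = new HashMap<>(request);
        enriched.put("affiliations", CACHE.getOrDefault(hcpCrosswalk, List.of()));
        return enriched;
    }

    public static void main(String[] args) {
        onRelationUpdated("ok", new Relation("HCP-1", "HCO-9", "HCPtoHCO"));
        System.out.println(enrichHcpCreate("HCP-1", Map.of("FirstName", "Li")));
    }
}
```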
This is required to enrich these relations to the objects during the update, this prevents losing reference attributes duringHCP create operation.OPTIONALLY: PATCH operation adds the PARTIAL_OVERRIDE header to switching the request to the partial update iggersTrigger actionComponentActionDefault timeREST callManager: or updates the in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagercreate or updates the in MDM system" }, { "title": "Create/Update/Delete tags", "": "", "pageLink": "/pages/tion?pageId=", "content": "The REST interfaces exposed through the Manager component used by clients to update, delete or create tags assigned to entity objects. Difference between create and update is that tags are added and if the option returnObjects is set to true all previously added and new tags will be returned. Delete action removes one e diagrams below present a sequence of steps in processing client calls.Flow diagramCreate tagUpdate tagDelete tagStepsThe client sends HTTP request to MDM Manager endpoint.Kong API Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to MDM Manager M Manager checks user permissions to call createEntityTags operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with creating tags for entity and returns created tags in to the iggersTrigger actionComponentActionDefault timeREST callManager: /DELETE /entityTagscreate specific objects in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagercreate update delete Entity Tags in MDM systemsAPI REST and secure accessReltioReltio MDM system" }, { "title": " flows", "": "", "pageLink": "/display/GMDM/DCR+flows", "content": "\n\n\n\nOverviewDCR (Data Change Request) process helps to improve existing data in source systems. Proposal for change is being created by source systems a as object (sometimes also called VR - Validation Request) which is usually being routed by to DS () either in or in Third party validators (, ). Response is provided twofold:response for specific - metadataprofile data update as a direct effect of a processing - payloadGeneral process flow High level solution architecture for flowSource: Lucid\n\n\n\n\n\nSolution for OneKey (OK)\n\n\n\nSolution for Veeva OpenData (VOD)\n\n\n\n\n\nArchitecture highlightsActors involved: PforceRX, Reltio, HUB, OneKeyKey components: (second version) for , , , tenantsProcess details:DCRs are created directly by using 's HUB APIPforceRx checks for status updates every 24h → finds out which DCRs has been updated (since last check ) and the pulls details from each one with with is realized by APIs - DCRs are created with /vr/submit and their status is verified every 8h with /vr/traceData profile updates (payload) are being delivered via CSV and and ( batch) to with COMPANY's DCRRegistryVeeva collections are used in for tracking purposes\n\n\n\nArchitecture highlightsActors involved: in , HUB, (VOD)Key components: (second version) for , , , tenantsProcess details:DCRs are created by (DSRs) in Reltio via 3rd Party Validation - input for DSRs is being provided by reports from PforceRxCommunication with via <>SFTP and synchronization jobs. 
DCRs are sent and received in batches every 24h DCRs metadata is being exchanged via multiple CSV files ZIPedData profile updates (payload) are being delivered via CSV and and ( batch) to with COMPANY's help   collections are used in tracking purposes\n\n\n\n\n\nSolution for (HL) \n\n\n\nSolution for on GBLUS - sources ICEU, GRV\n\n\n\n\n\nArchitecture highlightsActors involved: on behalf of PforceRX, Reltio, HUB, IQVIA wrapperKey components: (first version) for GBLUS tenantProcess details:DCRs are created by sending requests by - based on information acquired from PforceRxIntegration HUB <> → via files and <>SFTP. HUB confirms creation by returning file reports back to VeevaIntegration HUB <> IQVIA wrapper → via files and is responsible for translation of Veeva DCR CSV format to IQVIA CSV wrapper which then creates in approve or reject the DCRs in which updates data profiles accordingly. PforceRx receives update about changes in ReltioDCRRequest collection is used in for tracking purposes\n\n\n\nArchitecture highlights (draft)Actors involved: HUB, IQVIA wrapperKey components: (first version) for GBLUS tenantProcess details:POST events from sources are captured - some of them are translated to direct DCRs, some of them are gathered and then pushed via flat files to be transformed into DCRs to  \n\n\n" }, { "title": " generation process ( )", "": "", "pageLink": "/pages/tion?pageId=", "content": "The gateway supports following types: – created when new is registered in and requires external validationNewHCOL1 – created when not found in ReltioNewHCOL2 – created when not found in – created when a profile has multiple affiliations  generation processes are handled in two steps:During modification – if initial activation criteria are met, then a request is generated and published to -gw-dcr-requests the next step, the internal route DCRServiceRoute reads requests generated from the topic and processes as follows:checks if the time specified by delayPrcInSeconds elapsed since request generation – it makes sure that batch match process has finished and newly inserted profiles merge with the existing ecks if an entity, that caused generation, still exists;checks full activation criteria (table below) on the latest state of the target entity, if criteria are not met then the request is in external infocreates COMPANYDataChangeRequest entity in Reltio for tracking and exporting eated DCRs are exported by the ETL process managed by applying process (reject/approve actions) are executed through response executed by the external app manged by e table below presents activation criteria handled by system.Table 9. 
activation inCNCNCNCNSource , , FACE, , , FACE, , FACE, CN3RDPARTYValidationStatus inpending, partial-validatedor, if merged:OV: notvalidated, nonOV: pending/partial-validatedvalidated, pendingvalidated, pendingvalidated, pendingSpeakerStatus inenabled, nullenabled, nullenabled, nullenabled, nullWorkplaces foundtruetruefalsetrueDepartment foundtruetruefalseSimilar created in the pastfalsefalsefalsefalseUpdate: is now created if is pending or partial-validatedNewHCP is also created if is notvalidated, but most-recently updated crosswalk provides non-ov ValidationStatus as pending or partial-validated - in case gets merged into another entity upon creation/modification: request processing history is now available in via Transaction Log - dashboard , transaction type "CreateDCRRoute"DCR response processing history ( approve/reject flow) is now available in via Transaction Log - dashboard , transaction type """ }, { "title": "HL [Decommissioned ]", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsVendorContactPforceRXDL-PForceRx-SUPPORT@IQVIA ( Wrapper) As a part of project, the processing flow was created which realizes following scenarios:Update HCP account details i.e. specialty, address, name (different sources of new account with primary affiliation to an existing organization,Add new account with a new business account,Update and add affiliation to a new ,Update HCP account details and remove existing details i.e. birth date, national id, …,Update HCP account and add new non primary affiliation to an existing organization,Update HCP account and add new primary affiliation to an existing organization,Update HCP account inactivate primary affiliation. Person account has more than 1 affiliation,Update HCP account inactivate non primary affiliation. Person account has more than 1 affiliation,Inactivate HCP account,Update and add a private address,Update and update existing private address,Update HCP and inactivate a private address, details i.e. address, name (different sources of account,Update and remove details,Inactivate account,Update address,Update and add new address,Update and inactivate address,Update 's existing affiliation.Above cases has been aggregated into six generic types in internal HUB model:NEW_HCP_GENERIC - represents cases when the new HCP object is created with or without affiliation to ,UPDATE_HCP_GENERIC - aggregates cases when the existing HCP object is changed,DELETE_HCP_GENERIC - represents the case when is deactivating,NEW_HCO_GENERIC - aggregates scenarios when new object is created with or without affiliations to parent ,UPDATE_HCO_GENERIC - represents cases when existing object is changing,DELETE_HCO_GENERIC - represents the case when is neral Process OverviewProcess steps: uploads request file to FTP location,PforceRx Channel component downloads the request file, validates and maps each requests to internal model, sends the request to , process the request: validating, enriching and mapping to , prepares the report file containing technical status of processing - at this time, report will contain only requests which don't pass the validation,Scheduled process in , prepares the requests file and uploads this to location. processes the file: creating DCRs in or rejecting the request due to errors. 
After that, the response file is published to location, downloads the response and updates the DCR statuses,A scheduled process in gets requests and prepares the next technical report - at this time the report has the technical status which comes from the DCR Wrapper,DCRs that were created by are reviewed by and can be accepted or rejected,After accepting or rejecting , publishes the message about this event, consumes the message and updates the status, gets data to prepare a response file. The response file contains the final status of DCR processing in request file specificationThe specification is available at the following location: Wrapper request file specificationThe specification is available at the following link:" }, { "title": "OK flows (GBLUS)", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThe process is responsible for creating DCRs in and starting the Change Requests Workflow for singleton entities created in Reltio. During this process, the communication to the IQVIA OneKey VR API is established. The SubmitVR operation is executed to create a new Validation Request. The TraceVR operation is executed to check the status of the VR in . All DCRs are saved in the dedicated collection in HUB Mongo DB, required to gather metadata and trace the changes for each request. Some changes can be suggested by the DS using the "Suggest" operation in and the "Send to Third Party Validation" button; the process " OK Validation Request" processes these changes and sends them to the service. The process is divided into 4 sections:Submit Validation RequestTrace Validation RequestData Steward ResponseData Steward OK Validation RequestThe below diagram presents an overview of the entire process. Detailed descriptions are available in the separated subpages.Flow diagramModel diagramStepsSubmitVRThe process of submitting is triggered by the events. The process aggregates events in a time window and once the window is closed the processing is started.During the SubmitVR process checks are executed and the getMatches operation in is invoked to verify potential matches for the singleton entities. Once all checks are correct, a new submitVR request is created in and is saved in and in .TraceVRThe process of tracing is triggered each hours on cache . For each , the operation is executed in to verify the current status for the specific validation request.Once the checks are correct, the is updated in and in .Data Steward ResponseThe process is responsible for gathering changes on Change Requests objects from ; the process is only accepting events without the ThirdPartyValidation flag. Based on the received change invoked by the , the is updated in and in .Data Steward OK Validation RequestThe process is responsible for processing changes on Change Requests objects from ; the process is only accepting events with the ThirdPartyValidation flag. This event is generated after the DS clicks the "Send to Third Party Validation" button in Reltio. The DS is "Suggesting" changes on the specified profile, and these changes are next sent to HUB with the event. The changes are not visible in Reltio, it is just a container that keeps the changes. HUB retrieves the "Preview" state from and calculates the changes that will be sent to using the operation. After a successful response, HUB closes/rejects the existing in . The _reject operation has to be invoked on the current in because the changes should not be applied to the profile. 
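A compile-only Java sketch of that sequence (interfaces, method names and the delta representation are assumptions; the real HUB services are not shown on this page): read the preview state, derive the delta, submit it for external validation, then reject the Reltio change-request container.

```java
// Hedged sketch; interface and method names are assumptions, not the actual HUB components.
import java.util.Map;

public class ThirdPartyValidationSketch {

    interface ReltioClient {
        Map<String, Object> getEntity(String entityUri);
        Map<String, Object> previewWithChangeRequest(String entityUri, String changeRequestId);
        void rejectChangeRequest(String changeRequestId);
    }

    interface ValidationClient {              // e.g. a wrapper around the OneKey /vr/submit call
        String submitVr(Map<String, Object> delta);
    }

    static String handleSuggestion(ReltioClient reltio, ValidationClient validator,
                                   String entityUri, String changeRequestId) {
        Map<String, Object> current = reltio.getEntity(entityUri);
        Map<String, Object> suggested = reltio.previewWithChangeRequest(entityUri, changeRequestId);
        Map<String, Object> delta = Map.of("current", current, "suggested", suggested); // diff logic elided
        String externalRequestId = validator.submitVr(delta);   // validation happens externally
        reltio.rejectChangeRequest(changeRequestId);             // container must not be applied locally
        return externalRequestId;
    }
}
```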
Changes are now being validated in the system, and appropriate steps will be taken in the next phase (export changed data to or reject the suggestion).TriggersDescribed in the separated sub-pages for each process.Dependent componentsDescribed in the separated sub-pages for each process." }, { "title": " OK Validation Request", "": "", "pageLink": "/display/", "content": "DescriptionThe process handles the DS suggested changes based on the Change Request events received from Reltio (publishing) that are marked with the ThirdPartyValidation flag. The "suggested" changes are retrieved using the "preview" method and sent to IQVIA OneKey or for validation. After a successful submitVR response, HUB closes/rejects the existing in and additionally creates a new object with a relation to the entity in Reltio for tracking and status purposes. Because of the interface limitation, removal of attributes is sent to IQVIA as a comment.Flow diagramStepsEvent publisher publishes full enriched events to $env-internal-[onekeyvr|thirdparty]-ds-requests-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_CREATED("CHANGE_REQUEST_CREATED")Only events with the ThirdPartyValidation flag set to true and the Change Request status equal to AWAITING_REVIEW are accepted in this process; otherwise, the event is rejected and processing ends.HUB DCR Cache is verified; if any ReltioDCR requests exist and are not in a FAILED status, then processing goes to the next step.The request that contains targetChangeRequest is enriched with the current Entity data using HUB CacheVeeva specific: The entity is checked; if no crosswalk exists, then "golden profile" parameters should be used with the logic belowThe entity is checked; if an active [ONEKEY|VOD] crosswalk exists, the following steps are executed:The suggested state of the entity is retrieved from using the getEntityWithChangeRequests operation (parameters - entityUri and the from the event). and are compared using the following rules: (the full list of attributes that are part of the comparing process is described here)Simple attributes (like FirstName/LastName):Values are compared using the equals method. If differences are found, the suggested value is taken. If no differences are found:for mandatory, the current value is takenfor optional, the none value is takenNested attributes (like Specialties/Addresses):Whole nested attributes are matched using the Reltio "uri" attribute as the key.If there is a new , the new suggested nested attribute is takenVeeva specific: If there is a new degree*/HCP Focus area*, the new suggested nested attribute is taken. Since uses a flat structure for these attributes, we need to calculate the specialty attribute number (like specialty_5__v) to use when sending the request. Attribute number = count of existing attributes + 1.If there is no new  and there is a change in the existing attribute, the suggested nested change is taken. 
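A small sketch of the simple-attribute rule above (method names are illustrative); the nested-attribute rules continue right after the example.

```java
// Illustrative only: equal values survive for mandatory attributes, differing suggestions win.
import java.util.Objects;
import java.util.Optional;

public class SimpleAttributeCompareSketch {

    static Optional<String> resolve(String current, String suggested, boolean mandatory) {
        if (!Objects.equals(current, suggested)) {
            return Optional.ofNullable(suggested);          // the DS-suggested value is taken
        }
        return mandatory ? Optional.ofNullable(current)     // mandatory: keep the current value
                         : Optional.empty();                // optional: nothing is sent
    }

    public static void main(String[] args) {
        System.out.println(resolve("Anna", "Anne", true));   // Optional[Anne]
        System.out.println(resolve("Anna", "Anna", true));   // Optional[Anna]
        System.out.println(resolve("Anna", "Anna", false));  // Optional.empty
    }
}
```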
If there are multiple suggested changes, the one with the highest is taken.If there are no changesfor mandatory, the current nested attribute that is connected with the crosswalk is r optional, the none nested attribute is taken (no need to / OtherHCOtoHCOAffiliation:If there are no changes, return current listIf there is new with crosswalk, add it to current listAdditional checks:If there are changes associated with the other source (different than the [ONEKEY|VOD]), then these changes are ignored and the is saved in with comment listing what attributes were ignored e.g.: "Attributes: [YoB: ], [Email: ] ignored due to update on non-[onekey|VOD] attribute."If attribute associated with [ONEKY|VOD] source is removed, a comment specifying what should be removed on [ONEKY|VOD] side is generated and sent to [ONEKY|VOD], e.g.: "Please remove attributes: [Address: 10648 Savannah Plantation Ct, , , object is created in for the flow state recording and generation of the new unique ID for validation requests and data tracing. cache attributesValues for for for ()typeOK_VRPFORCERX_DCRRELTIO_SUGGESTstatusDCRRequestStatusDetails (, currentDate)createdByonekey-dcr-serviceUser which creates via Suggest button in which creates via Suggest button in ReltiodatenowSendTo3PartyValidationtrue (flag that indicates the objects created by this process)Calculated changes are mapped to the  submitVR Request and it's submitted using REST method /vr/ specific:  submitting request to requires creation of CSV files with agreed structure and placed on bucketIf the submission is successful then:atus is updated to SENT with [OK|VOD] request and response details  entity is created in and the relation between the processed entity and the entityReltio source name (crosswalk.type):  relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes: entity attributesMapping for OneKeyMapping for VeevaDCRIDOK VR Reqeust Id (cegedimRequestEid)ID assigned by  EntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"SENT"Commentsoptionally commentsSentDatecurrent timeSendTo3PartyValidationtrueOtherwise (FAILED)atus is updated to FAILED with OK request and exception response details  entity is created in and the relation between the processed entity and the entityReltio source name (crosswalk.type):  relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes: entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"FAILED"CommentsONEKEY service failed [exception details]SentDatecurrent timeSendTo3PartyValidationtrueThe current object in is closed using the _reject operation - POST - /changeRequests//_rejectOtherwise, If crosswalk does not exist, or the crosswalk is soft-deleted, or entity is : the following steps are executed: object is created in for the flow state recording and generation of the new unique ID for validation requests and data tracing. 
cache attributesvaluestypeDCRType.OK_VRstatusDCRRequestStatusDetails (, currentDate)created byonekey-dcr-servicedatenowSendTo3PartyValidationtrue (flag that indicates the objects created by this process)atus is updated to FAILED and comment "No OK crosswalk available"DCR entity is created in and the relation between the processed entity and the entityReltio source name (crosswalk.type):  relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes: entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"REJECTED"CommentsNo crosswalk availableCreatedByMDM HUBSentDatecurrent timeSendTo3PartyValidationtrueThe current object in is closed using the _reject operation - POST - /changeRequests//_rejectEND  (suggested changes)HCPReltio AttributeONEKEY attributemandatory valueCountryisoCod2mandatorysimple nderCodeoptionalsimple efixNameCodeoptionalsimple lookupTitleindividual.titleCodeoptionalsimple lookupMiddleNameindividual.middleNameoptionalsimple rthYearoptionalsimple rthDayoptionalsimple valueTypeCodeindividual.typeCodeoptionalsimple nguageEidoptionalsimple optionalsimple valueIdentifier value 1individial.externalId1optionalsimple valueIdentifier value . (nested)Specialities[]individual.speciality1 / 2 / 3optionalcomplex (nested)Phone[]oneoptionalcomplex (nested)Email[]optionalcomplex (nested)Contact Affiliations[]placeEidoptionalContact AffiliationONEKEY dividualEidmandatoryIDHCOReltio AttributeONEKEY attributemandatory typeattribute ualNameworkplace.officialNameoptionalsimple valueCountryisoCod2mandatorysimple ualName2optionalcomplex ( optionalcomplex (nested)Addresses[]dressLine2address. (nested)Specialities[]workplace.speciality1 / 2 / 3optionalcomplex (nested)Phone[] (!FAX)ephoneoptionalcomplex (nested)Phone[] (FAX)workplace.faxoptionalcomplex (nested)Email[]optionalcomplex ( Events incoming mdm-onekey-dcr-service:ChangeRequestStreamprocess publisher full change request events in the stream that contain ThirdPartyValidation flagrealtime: events stream processing Dependent componentsComponentUsageOK ServiceMain component with flow implementationVeeva ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsHub and Entities Cache " }, { "title": "Data Steward Response", "": "", "pageLink": "/display/GMDM/Data+Steward+Response", "content": "DescriptionThe process updates the 's based on the Change Request events received from Reltio(publishing). 
Based on the decision, the state attribute contains the relevant information to update the status.Flow diagramStepsEvent publisher publishes simple events to $env-internal-[onekeyvr|veeva]-change-requests-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_REMOVED("CHANGE_REQUEST_REMOVED")Only the events without the ThirdPartyValidation flag are accepted; otherwise, the event is rejected and the process is ended.Events are processed in the and, based on the state attribute, a decision is madeIf the state is APPLIED or REJECTED, the is retrieved from the cache based on the changeRequestURIIf it exists in the Cache, the status in is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: APPLIED → ACCEPTEDstate: REJECTED → , the events are rejected and the transaction is endedOtherwise, the events are rejected and the transaction is ended.TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-onekey-dcr-service:OneKeyResponseStreammdm-veeva-dcr-service:veevaResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing Dependent componentsComponentUsageOK ServiceMain component with flow implementationVeeva ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsHub and Entities Cache " }, { "title": "Submit Validation Request", "": "", "pageLink": "/display/GMDM/Submit+Validation+Request", "content": "DescriptionThe process of submitting new validation requests to the service based on the change events aggregated in time windows. During this process, new DCRs are created in Reltio.Flow diagramStepsEvent publisher publishes simple events to $env-internal-onekeyvr-in including HCP_*, _*, ENTITY_MATCHES_CHANGED Events are aggregated in a time window (the recommended window length is ) and the last event is returned to the process after the window is closed.Simple events are enriched with the data using HUB CacheThen, the following checks are executed:check if at least one crosswalk create date is equal to or above the cut-off date specified in the configuration for a given source name - section submitVR/crosswalkDecisionTablescheck if entity attribute values match those specified in the configurationcheck if there is no valid created for the entity check if the entity is activecheck if the OK crosswalk doesn't exist after the full entity retrieval from the HUB cachecheck if the match category is not 99check if the GetMatches operation from returns 0 potential matchesIf any check is negative, then the process is aborted. 
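A sketch of the pre-submit guard chain listed above; the record fields and method names are assumptions, only the individual checks come from this page.

```java
// Hedged sketch of the submit pre-checks; field and method names are assumed.
import java.time.LocalDate;

public class SubmitVrChecksSketch {

    record Candidate(LocalDate crosswalkCreateDate,
                     boolean attributesMatchConfig,
                     boolean openDcrExists,
                     boolean active,
                     boolean hasOkCrosswalk,
                     int matchCategory,
                     int potentialMatches) {}

    static boolean shouldSubmit(Candidate c, LocalDate cutOffDate) {
        return !c.crosswalkCreateDate().isBefore(cutOffDate)   // created on or after the cut-off
                && c.attributesMatchConfig()                   // configured attribute values match
                && !c.openDcrExists()                          // no valid DCR already open
                && c.active()                                  // entity is active
                && !c.hasOkCrosswalk()                         // no OK crosswalk on the full entity
                && c.matchCategory() != 99                     // match category is not 99
                && c.potentialMatches() == 0;                  // getMatches returned no candidates
    }

    public static void main(String[] args) {
        Candidate ok = new Candidate(LocalDate.now(), true, false, true, false, 0, 0);
        System.out.println(shouldSubmit(ok, LocalDate.now().minusDays(30)));   // true
    }
}
```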
object is created in for the flow state recording and generation of the new unique ID for validation request and data e entity is mapped to OK VR Request and it's submitted using REST method /vr/submit.If the submission is successful then:atus is updated to SENT with OK request and response details  entity is created in and the relation between the processed entity and the entityReltio source name (crosswalk.type): relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes: entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus""OPEN"VRStatusDetail"SENT"CreatedByMDM HUBSentDatecurrent timeOtherwise FAILED status is recorded in  with an OK error atus is updated to FAILED with OK request and exception response details  entity is created in and the relation between the processed entity and the entityReltio source name (crosswalk.type):  relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes: entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"FAILED"CommentsONEKEY service failed [exception HUBSentDatecurrent timeTriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-onekey-dcr-service:OneKeyStreamprocess publisher simple events in streamevents stream processing with 4h time window events aggregationOUT requestone-key-client:bmitValidationsubmit VR request to request for each accepted eventDependent componentsComponentUsageOK component with flow implementationPublisherEvents publisher generates incoming eventsManagerReltio Adapter for and created operationsOneKey AdapterSubmits Validation RequestHub StoreDCR and  → OK mapping file: onkey_mappings.xlsxOK mandatory / required fields: VR - Business Fields Requirements(COMPANY).xlsxOneKey Documentation" }, { "title": "Trace Validation Request", "": "", "pageLink": "/display/GMDM/Trace+Validation+Request", "content": "DescriptionThe process of tracing the changes based on the OneKey VR changes. During this process HUB, is triggered every hour for SENT 's and check VR status using web service. After verification is updated in or a new is started in for the manual validation.   hours OK VR requests with status SENT are queried in r each open requests, its status is checked it OK using REST method /vr/traceThe first check is the atus attribute, checking if the status is SUCCESSNext, if the process status (cessStatus) is REQUEST_PENDING_OKE | REQUEST_PENDING_JMS | REQUEST_PROCESSED or OK data export date () is then the processing of the request is postponed to the next checkexportDate or processStatus are optional and can be e process goes to the next step only if processStatus  is  REQUEST_RESPONDED | RESPONSE_SENTThe process is blocked to next check only if  trace6CegedimOkcExportDate is not null and is earlier than 24hIf the processStatus is validated and  is VAS_NOT_FOUND | VAS_INCOHERENT_REQUEST | VAS_DUPLICATE_PROCESS then is being closed with status REJECTEDDCR entity attributesMappingVRStatus""CLOSED"VRStatusDetail"REJECTED"ReceivedDatecurrent sponseCommentBefore these 2 next steps, the current Entity status is retrieved from . This is required to check if the entity was merged with OK entity. 
if responseStatus is VAS_FOUND | and OK crosswalk exists in entity which value equals to OK validated id (individualEidValidated or workplaceEidValidated) then is closed with status ACCEPTED.DCR entity attributesMappingVRStatus""CLOSED"VRStatusDetail"ACCEPTED"ReceivedDatecurrent sponseComment if responseStatus is VAS_FOUND | but OK crosswalk doesn't exist in then is created and workflow task is triggered for review. status entity is updated with status.  entity attributesMappingVRStatus""OPEN"VRStatusDetail"DS_ACTION_REQUIRED "ReceivedDatecurrent sponseCommentGET /changeRequests operation is invoked to get a new change request ID and start a new workflowPOST /workflow/_initiate operation is invoked to init new in attributesMappingchangeRequest.uriChangeRequest Reltio angesEntity URIcommentindividualEidValidated or workplaceEidValidatedPOST /entities?changeRequestId= - operation is invoked to update change request Entity container with Status to Closed, this change is only visible in once accepts the . Body attributesMappingattributes"DCRRequests": [ { "value": { "": [ { "value": "CLOSED" } ] }, "refEntity": { "crosswalks": [ { "type": "configuration/sources/", "value": "$requestId", "dataProvider": false, "contributorProvider": true }, { "type": "configuration/sources/", "value": "$requestId_REF", "dataProvider": true, "contributorProvider": false } ] }, "refRelation": { "crosswalks": [ { "type": "configuration/sources/", "value": "$requestId_REF" } ] } }]crosswalks"crosswalks": [ { "type": "configuration/sources/", "value": "", "dataProvider": false, "contributorProvider": true, "deleteDate": "" }, { "type": "configuration/sources/", "value": "$requestId_CR", "dataProvider": true, "contributorProvider": false, "deleteDate": "" }]TriggersTrigger actionComponentActionDefault timeIN Timer (cron)mdm-onekey-dcr-service:TraceVRServicequery mongo to get all SENT 's related to OK_VR processevery hourOUT requestone-key-client:aceValidationtrace VR request to request for each componentsComponentUsageOK component with flow implementationManagerReltio Adapter for GET /changeRequests and POST /workflow/_initiate operations OneKey AdapterTraceValidation RequestHub StoreDCR and  " }, { "title": "PforceRx DCR flows", "": "", "pageLink": "/display//PforceRx+DCR+flows", "content": "DescriptionMDM HUB exposes Rest to create and check the status of . The process is responsible for creating DCRs in and starting Change Requests Workflow DCRs created in or creating the DCRs (submitVR operation) in . requests can be routed to an external instance handling the requested country. The action is transparent to the caller. During this process, the communication to IQVIA OneKey VR API / Reltio API is established. The routing decision depends on the market, operation type, or changed profile ltio API:  createEntity (with ) operation is executed to create a completely new entity in the new Change Request in Reltio. (with ) operation is executed after calculation of the specific changes on complex or simple attributes on existing entity - this also creates a new Change Request.  Start Workflow operation is requested at the end, this starts the for the in so the change requests are started in the Reltio Inbox for VIA API: SubmitVR operation is executed to create a new Validation Request. The TraceVR operation is executed to check the status of the VR in l DCRs are saved in the dedicated collection in HUB Mongo DB, required to gather metadata and trace the changes for each request. 
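A hedged sketch (class and field names assumed) of such a registry entry and of how an asynchronous outcome, for example a Data Steward decision or a traceVR result, moves it to a closed state.

```java
// Hypothetical registry record; the real HUB collection schema is not shown on this page.
import java.time.Instant;

public class DcrRegistrySketch {

    static final class DcrRecord {
        final String dcrId;            // e.g. OK VR request id or Reltio change request id
        final String entityUri;
        String vrStatus = "OPEN";
        String vrStatusDetail = "SENT";
        Instant closeDate;

        DcrRecord(String dcrId, String entityUri) {
            this.dcrId = dcrId;
            this.entityUri = entityUri;
        }

        void close(String detail) {    // detail: ACCEPTED / REJECTED / FAILED
            this.vrStatus = "CLOSED";
            this.vrStatusDetail = detail;
            this.closeDate = Instant.now();
        }
    }

    public static void main(String[] args) {
        DcrRecord record = new DcrRecord("VR-42", "entities/ytY3wd9");
        record.close("ACCEPTED");      // e.g. after a VAS_FOUND trace result with a matching crosswalk
        System.out.println(record.dcrId + " " + record.vrStatus + "/" + record.vrStatusDetail
                + " at " + record.closeDate);
    }
}
```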
The statuses are updated by consuming events generated by Reltio or periodic query action of open DCRs in can decide to route a to IQVIA as well - some changes can be suggested by the DS using the "Suggest" operation in and "Send to Third Party Validation" button, the process " OK Validation Request" is processing these changes and sends them to the service. The below diagram presents an overview of the entire process. Detailed descriptions are available in the separated subpages. doc URL: diagramDCR Service High-Level ArchitectureDCR HUB Logical ArchitectureModel diagramFlows:Create DCRThe client call /dcr method and pass the request in format to request is validated against the following rules:mandatory fields are setreference object , are available in attributes like specialties, addresses are in the changed objectThe service evaluates the target system based on country, operation type (create, update), changed attributes. The process is controlled by the decision table stored in the e is created in the target system through the APIThe result is stored in the registry. information entity is created in Reltio for e status with created object ids are returned in response to the statusThe client calls GET methodThe service queries registry in and returns the status to the ere are processes updating dcr status in the registry: change events are generated by when is accepted or rejected by DS. Events are processed by the ltio: process DCR Change EventsDCR change events are generated by when is accepted or rejected by DS. Events are processed by the Key: process DCR Change EventsDCR change events are generated by the service when is accepted or rejected by DS. Events are processed by the Key: generate DCR Change Events (traceVR)Every x configured the status method is queried to get status for open validation ltio: create method - directdirect method that creates in (contains mapping and logic description)OneKey: create method (submitVR) - directdirect method that creates in - executes the submitVR operation (contains mapping and logic description)TriggersDescribed in the separated sub-pages for each pendent componentsDescribed in the separated sub-pages for each process." }, { "title": "Create DCR", "": "", "pageLink": "/display//Create+DCR", "content": "DescriptionThe process creates change requests received from and sends the to the specified target service - Reltio, or (). is created in the system and then processed by the data stewards. The status is asynchronously updated by the HUB processes, Client represents the using a unique extDCRRequestId value. Using this value Client can check the status of the (Get status). Flow diagramSource: : component perspective StepsClients execute requestKong receives requests and handles authenticationIf the authentication succeeds the request is forwarded to the dcr-service-2 component, checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executed:Parse and validate the request. 
The validation logic checks the following: Check if the list of contains unique quests that are duplicate will be rejected with the error message - "Found duplicated request(s)"For each in the input list execute the following checks:Users can define the following number of entities in the Request:at least one entity has to be defined, otherwise, the request will be rejected with an error message - "No entities found in the request"single HCPsinge HCOsinge HCP with single HCOsCheck if the main reference objects exist in for update and delete fId or fId, user have to specify one of:CrosswalkTargetObjectId - then the entity is retrieved from using get entity by crosswalk operationEntityURITargetObjectId - then the entity is retrieved from using get entity by uri operationCOMPANYCustomerIdTargetObjectId - then the entity is retrieved from using search operation by the COMPANYGlobalCustomerIDAttributes validation:Simple attributes - like firstName/lastName or update action on the main object:if the input parameter is defined with an empty value - "" - this will result in the removal of the target attributeif the input parameter is defined with a non-empty value - this will result in the update of the target attributeNested attributes - like Specialties/Addresses or each attribute, the user has to define the refId to uniquely identify the attributeFor action "update" - if the refId is not found in the target object request will be rejected with a detailed error message For action "insert" - the refId is not required - new reference attribute will be added to the target objectChanges validation:If the validation detected 0 changes (during comparison of applying changes and the target entity) -  the request is rejected with an error message - "No changes detected"Evaluate dcr service (based on the decision table config)The following decision table is defined to choose the target serviceLIST OF the following combination of attributes:attributedescriptionuserName the user name that executes the requestsourceNamethe source name of the Main objectcountrythe county defined in the requestoperationTypethe operation type for the object{ insert, update, delete }affectedAttributesthe list of attributes that the user is changingaffectedObjects{ , , HCP_HCO }RESULT →  {Reltio, , attribute in the configuration is optional. The decision table is making the validation based on the input request and the main object- the main object is , if the is empty then the decision table is checking . The result of the decision table is the , the routing to the Reltio MDM system, or service. 
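A sketch of how a routing decision table with these attributes could be evaluated; the rule representation and the matching helper are assumptions, only the attribute names and the optional (wildcard) behaviour come from this page.

```java
// Hedged sketch of decision-table routing; rule storage and loading are not shown.
import java.util.List;
import java.util.Set;

public class DcrRoutingSketch {

    record Rule(String userName, String sourceName, String country, String operationType,
                Set<String> affectedAttributes, String affectedObjects, String targetService) {}

    record Request(String userName, String sourceName, String country, String operationType,
                   Set<String> affectedAttributes, String affectedObjects) {}

    /** Every populated rule field must match; null rule fields act as wildcards. */
    static String route(List<Rule> rules, Request r, String fallback) {
        return rules.stream()
                .filter(rule -> matches(rule.userName(), r.userName())
                        && matches(rule.sourceName(), r.sourceName())
                        && matches(rule.country(), r.country())
                        && matches(rule.operationType(), r.operationType())
                        && matches(rule.affectedObjects(), r.affectedObjects())
                        && (rule.affectedAttributes() == null
                            || rule.affectedAttributes().containsAll(r.affectedAttributes())))
                .map(Rule::targetService)
                .findFirst()
                .orElse(fallback);
    }

    static boolean matches(String ruleValue, String requestValue) {
        return ruleValue == null || ruleValue.equals(requestValue);
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
                new Rule(null, null, "CN", "insert", null, "HCP", "ONEKEY"),
                new Rule(null, null, null, null, null, null, "RELTIO"));
        Request req = new Request("pforcerx", "GRV", "CN", "insert", Set.of("FirstName"), "HCP");
        System.out.println(route(rules, req, "RELTIO"));   // ONEKEY
    }
}
```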
Execute target service (reltio/onekey/veeva)Reltio: create method - directOneKey: create method (submitVR) - directVeeva: create method (storeVR)Create in and save in  If the submission is successful then:  entity is created in and the relation between the processed entity and the entityReltio source name (crosswalk.type):  relation type: HCPtoDCR or HCOtoDCR (depending on the object type)for "create" and "delete" operation the have to be created between objectsif this is just the "insert" operation the will be created after the acceptance of the Change Request in Reltio - Reltio: process DCR Change EventsDCR entity attributes once sent to OneKeyDCR entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"SENT_TO_OK"CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: m.api.dcr2.HCPDCR entity attributes once sent to VeevaDCR entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"SENT_TO_VEEVA"CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: m.api.dcr2.HCPDCR entity attributes once sent to Reltio → action is passed to DS and workflow is started.  entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"DS_ACTION_REQUIRED "CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: m.api.dcr2.HCPMongo Update: atus is updated to SENT with or request and response details or DS_ACTION_REQURIED with all Reltio detailsOtherwise FAILED status is recorded in  with a detailed error ngo Update:  atus is updated to FAILED with all required attributes, request, and exception response details  in Reltio (only requests that is  /workflow/_initiate operation is invoked to init new in attributesMappingchangeRequest.uriChangeRequest Reltio angesEntity URIThen Auto close logic is invoked to evaluate whether request meets conditions to be auto accepted or auto rejected. Logic is based on decision table PreCloseConfig. If untry is contained in ceptCountries or jectCountries then is accepted or rejected respectively. return to Client - During the flow, may be returned to Client with the specific errorCode or requestStatus. The description for all response codes is presented on this page: Get statusTriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcrcreate DCRs in the , or systemAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow ServiceOneKey operationsVeeva operations and /SFTP communication ManagerReltio Adapter - API operationsHub and Entities Cache " }, { "title": " state change", "": "", "pageLink": "/display//DCR+state+change", "content": "DescriptionThe following diagram represents the state changes. object stat is saved in HUB and in Reltio DCR entity object. The state of the is changed based on the Reltio/IQVIA/Veeva Data Steward action. 
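The auto-close step of the Create DCR flow above (PreCloseConfig with acceptCountries and rejectCountries, leading to PRE_ACCEPT or PRE_REJECT) can be pictured with the following minimal sketch; the config shape and the decision enum are assumptions used only for illustration.

// Hypothetical sketch of the PreClose (auto accept / auto reject) decision.
enum class PreCloseDecision { AUTO_ACCEPT, AUTO_REJECT, LEAVE_OPEN }

data class PreCloseConfig(
    val acceptCountries: Set<String> = emptySet(),
    val rejectCountries: Set<String> = emptySet()
)

fun preClose(config: PreCloseConfig, country: String): PreCloseDecision = when {
    country in config.acceptCountries -> PreCloseDecision.AUTO_ACCEPT   // DCR closed as PRE_ACCEPTED
    country in config.rejectCountries -> PreCloseDecision.AUTO_REJECT   // DCR closed as PRE_REJECTED
    else -> PreCloseDecision.LEAVE_OPEN                                  // DCR stays with the Data Steward
}

fun main() {
    val config = PreCloseConfig(acceptCountries = setOf("PL"), rejectCountries = setOf("DE"))
    println(preClose(config, "PL")) // AUTO_ACCEPT
    println(preClose(config, "FR")) // LEAVE_OPEN
}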
is created (OPEN)  - Create DCRDCR is sent to , or VeevaWhen sent to logic is invoked to auto accept (PRE_ACCEPT) or auto reject (PRE_REJECT) Data Steward process the : process Change EventsOneKey process the - OneKey: process Change EventsVeeva Data Steward process the - Veeva: process Change EventsData Steward status change perspectiveTransaction are the following main assumptions regarding the transaction log in service: Main transaction The user sends to the service list of the Requests and receives the list of the DCR ResponsesTransaction service generates the transaction ID for the input request - this is used as the correlation ID for each separated Request in the listTransaction service save: (list of all) BODYthe Requests list and the Response change transactionDCR object state may change depending on the DS decision, for each state change (represented as a green box in the above diagram) the transaction is saved with the following attributes:Transaction METADATAmain transaction IDextDCRRequestIddcrRequestIdReltio:VRStatusVRStatusDetailHUB:DCRRequestStatusDetailsoptionally:errorMessageerrorCodeTransaction BODY:Input EventLog appenders: Transaction appender - saves whole events(metadata+body) to - data presented in the Kibana Dashboard >Simple Transaction logger - saves the transactions details to the file in the following format:{ID}    {extDCRRequestId}   {dcrRequestId}   {}   {VRStatusDetail}   {DCRRequestStatusDetails}   {errorCode}   {errorMessage}TriggersTrigger Service: POST /dcrcreate DCRs in the system or in OneKeyAPI synchronous requests - realtimeIN Events incoming dcr-service-2:DCRReltioResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing IN Events incoming dcr-service-2:DCROneKeyResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing IN Events incoming dcr-service-2:DCRVeevaResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing Dependent componentsComponentUsageDCR ServiceMain component with flow ServiceOneKey   - operationsVeeva   - API operationsManagerReltio operationsHub and Entities Cache " }, { "title": "Get status", "": "", "pageLink": "/display//Get+DCR+status", "content": "DescriptionThe client creates DCRs in , or Veeva OpenData using the Create DCR operation. The status is then asynchronously updated in . The operation retrieves the current status of the DCRs that the updated date is between 'updateFrom' and 'updateTo' input parameters. 
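For the Simple Transaction logger described in the DCR state change section above, which writes one line per state change in the format {ID} {extDCRRequestId} {dcrRequestId} {VRStatus} {VRStatusDetail} {DCRRequestStatusDetails} {errorCode} {errorMessage}, a minimal sketch of producing such a line could look as follows; the DcrTransaction class and the tab separator are assumptions.

// Minimal sketch of the simple transaction log line; field order follows the documented format.
data class DcrTransaction(
    val id: String,
    val extDCRRequestId: String,
    val dcrRequestId: String,
    val vrStatus: String?,             // Reltio VRStatus
    val vrStatusDetail: String?,
    val dcrRequestStatusDetails: String?,
    val errorCode: String? = null,
    val errorMessage: String? = null
)

fun DcrTransaction.toLogLine(): String = listOf(
    id, extDCRRequestId, dcrRequestId,
    vrStatus ?: "", vrStatusDetail ?: "", dcrRequestStatusDetails ?: "",
    errorCode ?: "", errorMessage ?: ""
).joinToString(separator = "\t")       // tab-separated; the real appender may use fixed-width spacing

fun main() {
    val tx = DcrTransaction("42", "ext-123", "dcr-7", "OPEN", "DS_ACTION_REQUIRED", "SENT", null, null)
    println(tx.toLogLine())
}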
PforceRx first asks what DCRs have been changed since last time they checked (usually ) and then iterate for each they get detailed info.Flow diagram,Source: LucidDependent flows:The DCRRegistry is enriched by the events that are generated by Reltio - the flow description is here - Reltio: process DCR Change EventsThe DCRRegistry is enriched by the events generated in service component - after operation is invoked to , each is traced asynchronously in this process - : process DCR Change EventsThe DCRRegistry is enriched by the events generated in service component - after operation is invoked to , each is traced asynchronously in this process - Veeva: process DCR Change EventsStepsStatusThere are the following request statuses that users may receive during Create operation or during checking the updated status using GET /dcr/_status operation described below:RequestStatusDCRStatus  statusDescriptionREQUEST_ACCEPTEDCREATEDSENT_TO_OKDCR was sent to the system for validation and pending the processing by in the systemREQUEST_ACCEPTEDCREATEDSENT_TO_VEEVADCR was sent to the VEEVA system for validation and pending the processing by in the systemREQUEST_ACCEPTEDCREATEDDS_ACTION_REQUIREDDCR is pending validation in , waiting for approval or rejectionREQUEST_ACCEPTEDCREATEDOK_NOT_FOUNDUsed when profile was not found after X retriesREQUEST_ACCEPTEDCREATEDVEEVA_NOT_FOUNDUsed when profile was not found after X retriesREQUEST_ACCEPTEDCREATEDWAITING_FOR_ETL_DATA_LOADUsed when waiting for actual data profile load from to appear in ReltioREQUEST_ACCEPTEDACCEPTEDACCEPTEDData Steward accepted the , changes were appliedREQUEST_ACCEPTEDACCEPTEDPRE_ACCEPTEDPreClose logic was invoked and automatically accepted according to decision table in PreCloseConfigREQUEST_REJECTEDREJECTED  rejected the changes presented in the Change RequestREQUEST_REJECTEDREJECTED PRE_REJECTEDPreClose logic was invoked and automatically rejected according to decision table in PreCloseConfigREQUEST_FAILED-FAILEDDCR requests failed due to: validation error/ unexpected error e.t.d - details in the errorCode and errorMessageError codes:There are the following classes of exception that users may receive during Create operation:ClasserrorCodeDescriptionHTTP code1DUPLICATE_REQUESTrequest rejected - extDCRRequestId  is registered - this is a duplicate request4032NO_CHANGES_DETECTEDentities are the same (request is the same) - no changes4003VALIDATION_ERRORref object does not exist (not able to find target object4043VALIDATION_ERRORref attribute does not exist - not able to find nested attribute in the target object4003VALIDATION_ERRORwrong number of entities in the input request400Clients execute the GET/dcr/_status requestKong receives requests and handles authenticationIf the authentication succeeds the request is forwarded to the dcr-service-2 component, checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executedQuery on mongo is executed to get all DCRs matching input parameters:updateFrom (date-time) - last update from - angeDateupdateTo (date-time) - last update to - angeDatelimit (int) the maximum number of results returned through - the recommended value is 25. The max value for a single request is 50.offset(int) - result offset - the parameter used to query through results that exceeded the limit. 
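Before the results are aggregated and returned, the selection itself boils down to filtering the registry by the last change date and applying limit/offset. Below is a sketch under the assumption of an in-memory registry and an illustrative entry class; a real implementation would run an equivalent query against the Mongo DCR registry.

import java.time.ZonedDateTime

// Illustrative registry entry; field names are assumptions.
data class DcrStatusEntry(
    val extDCRRequestId: String,
    val status: String,                // e.g. SENT_TO_OK, DS_ACTION_REQUIRED, ACCEPTED ...
    val lastChangeDate: ZonedDateTime
)

fun queryStatuses(
    registry: List<DcrStatusEntry>,
    updateFrom: ZonedDateTime,
    updateTo: ZonedDateTime,
    limit: Int = 25,                   // recommended page size; 50 is the documented maximum
    offset: Int = 0
): List<DcrStatusEntry> {
    require(limit in 1..50) { "limit must be between 1 and 50" }
    return registry
        .filter { !it.lastChangeDate.isBefore(updateFrom) && !it.lastChangeDate.isAfter(updateTo) }
        .sortedBy { it.lastChangeDate }
        .drop(offset)
        .take(limit)
}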
Resulted values are aggregated and returned to the e client receives the List iggersTrigger actionComponentActionDefault timeREST callDCR Service: GET/dcr/_statusget status of created DCRs. Limit the results using query parameters like dates and offsetAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub and  " }, { "title": "OneKey: create method (submitVR) - direct", "": "", "pageLink": "/display//OneKey%3A+create+DCR+method+%28submitVR%29+-+direct", "content": " method exposed in the  component responsible for submitting the to OneKeyFlow diagramStepsReceive the requestValidate - check if the onekey crosswalk exists once there is an update on the profile, otherwise reject the requestThe is mapped to OK VR Request and it's submitted using REST method /vr/submit. (mapping described below)If the submission is successful then:DCRRequesti updated to SENT_TO_OK with OK request and response details. in saved for tracing purposes. The process that reads and check is described here: OneKey: generate DCR Change Events (traceVR)Otherwise FAILED status is recorded and the response is returned with an OK error responseMappingVR - Business Fields Requirements_UK.xlsx - file that contains VR requirements and mapping to IQVIA ientRequestIdHUB_GENERATED_questDate1970-01-01T00:llDate1970-01-01T00:questCommentcountryYisoCod2reference rentUsualNamesubTypeCodeCOTFacilityType(TET.W.*)workplace.typeCodetypeCodeno value in // address with rank=1 emailstypeN/rankget email with rank=1 otherHCOAffiliationstypeN/Arankget affiliation with rank=1 reference EntityotherHCOAffiliations reference entity onekeyID rentWorkplaceEidphonestypecontains phone with rank=1  contains FAXnumberworkplace.faxrankget phone with rank=1 ientRequestIdHUB_GENERATED_questDate1970-01-01T00:llDate1970-01-01T00:questCommentcountryYisoCod2reference stNamemiddleNameindividual.middleNametypeCodeN/AsubTypeCodeHCPSubTypeCode(TYP..*)individual.typeCodetitleHCPTitle(TIT.*)individual.titleCodeprefixHCPPrefix(APP.*)efixNameCodesuffixN/AgenderGender(.*)nderCodespecialtiestypeCodeHCPSpecialty(SP.W.*)individual.speciality1typeN/Arankget speciality with rank=1 typeCodeHCPSpecialty(SP.W.*)individual.speciality2typeN/Arankget speciality with rank=2 typeCodeHCPSpecialty(SP.W.*)individual.speciality3typeN/Arankget speciality with rank=3 addressessourceAddressIdN/AaddressTypeN// address with rank=1 identifierstypeN/AidN/AphonestypeN/lePhonerankget phone with rank=1 emailstypeN/rankget phone with rank=1 contactAffiliationsno value in PFORCERXtypeRoleType(TIH.W.*)leprimaryN/Arankget affiliation with rank=1 contactAffiliations reference HCP full mapping check the section ientRequestIdHUB_GENERATED_IDFor full mapping check the section questDate1970-01-01T00:llDate1970-01-01T00:questCommentcountryYisoCod2addressesIf the address exists map to addressaddress (mapping HCO)elseIf the HCP address exists map to addressaddress (mapping HCP)contactAffiliationsno value in PFORCERXtypeRoleType(TIH.W.*)leprimaryN/Arankget affiliation with rank=1 TriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcrcreate DCRs in the ONEKEYAPI synchronous requests - realtimeDependent componentsComponentUsageDCR Service 2Main component with flow implementationHub and  " }, { "title": "OneKey: generate DCR Change Events (traceVR)", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThis process is triggered after the was routed to based on the decision table configuration. 
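Several fields in the submitVR mapping above are resolved by taking the nested attribute with rank = 1 (or the best available rank): phones, faxes, emails, addresses, specialties and affiliations. A small sketch of that selection is shown below; the RankedAttribute class is an assumption used only for illustration.

// "Best rank" is read here as the lowest numeric rank; unranked attributes are considered last.
data class RankedAttribute(val type: String?, val value: String, val rank: Int?)

fun bestRanked(attributes: List<RankedAttribute>, type: String? = null): RankedAttribute? =
    attributes
        .filter { type == null || it.type == type }
        .minByOrNull { it.rank ?: Int.MAX_VALUE }

fun main() {
    val phones = listOf(
        RankedAttribute("TEL.OFFICE", "+44 20 0000 0001", 2),
        RankedAttribute("TEL.OFFICE", "+44 20 0000 0002", 1),
        RankedAttribute("TEL.FAX", "+44 20 0000 0003", 1)
    )
    println(bestRanked(phones, "TEL.OFFICE")?.value) // +44 20 0000 0002
    println(bestRanked(phones, "TEL.FAX")?.value)    // +44 20 0000 0003
}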
The process of tracing the VR changes is based on the OneKey VR changes. During this process HUB, is triggered every hour for SENT 's and check VR status using web service. After verification, the DCR Change event is generated. The event is processed in the : process DCR Change Events and the is updated in with Accepted or Rejected status.  hours OK VR requests with status SENT are queried in r each open requests, its status is checked it OK using REST method /vr/traceThe first check is the atus attribute, checking if the status is SUCCESSNext, if the process status (cessStatus) is REQUEST_PENDING_OKE | REQUEST_PENDING_JMS | REQUEST_PROCESSED or OK data export date () is then the processing of the request is postponed to the next checkexportDate or processStatus are optional and can be e process goes to the next step only if processStatus  is  REQUEST_RESPONDED | RESPONSE_SENTThe process is blocked to next check only if  trace6CegedimOkcExportDate is not null and is earlier than 24hIf the processStatus is validated and  is VAS_NOT_FOUND | VAS_INCOHERENT_REQUEST | VAS_DUPLICATE_PROCESS then OneKeyDCREvent is being generated with status REJECTEDOneKeyChangeRequest attributesMappingvrStatus"CLOSED"vrStatusDetail"REJECTED"traceResponseReceivedDatecurrent sponseCommentNext. if responseStatus is VAS_FOUND | VAS_FOUND_BUT_INVALID then OneKeyDCREvent is being generated with status ACCEPTED. ( now the new profile will be loaded to Reltio using data load. The : process DCR Change Events is processing this events ad checks the if the is created and COMPANYCustomerGlobalId is assigned, this process will wait until is in so the client received the ACCEPTED only after this condition is met)  entity attributesMappingvrStatus"CLOSED"vrStatusDetail"ACCEPTED"traceResponseReceivedDatecurrent sponseComment \\nONEKEY ID = individualEidValidated or workplaceEidValidatedevents are published to the $env-internal-onekey-dcr-change-events-in topicEvent class OneKeyDCREvent(val eventType: String? = null, val eventTime: Long? = null, val eventPublishingTime: Long? = null, val countryCode: String? = null, val dcrId: String? = null, val targetChangeRequest: OneKeyChangeRequest,)data class OneKeyChangeRequest( val vrStatus : String? = null, val vrStatusDetail : String? = null, val oneKeyComment : String? = null, val individualEidValidated : String? = null, val workplaceEidValidated : String? = null, val vrTraceRequest : String? = null, val vrTraceResponse : String? = null,)TriggersTrigger actionComponentActionDefault timeIN Timer (cron)dcr-service:TraceVRServicequery mongo to get all SENT 's related to the processevery hourOUT Eventsdcr-service:TraceVRServicegenerate the OneKeyDCREventevery hourDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub and  " }, { "title": "OneKey: process DCR Change Events", "": "", "pageLink": "/display/GMDM/OneKey%3A+process+DCR+Change+Events", "content": "\n\n\n\nDescriptionThe process updates the 's based on the Change Request events received from [ONEKEY|VOD] (after trace VR method result). Based on the [IQVIA|VEEVA] Data Steward decision the state attribute contains relevant information to update status. During this process also the comments created by IQVIA DS are retrieved and the relationship (optional step) between the object and the newly created entity is created. status is accepted only after the [ONEKEY|VOD] profile is created in , only then the will receive the ACCEPTED status. 
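The traceVR evaluation described above (postpone while the request is still pending on the OneKey side, accept on VAS_FOUND or VAS_FOUND_BUT_INVALID, reject on VAS_NOT_FOUND, VAS_INCOHERENT_REQUEST or VAS_DUPLICATE_PROCESS) can be condensed into the following sketch. The status constants come from this page; the function shape is an assumption and the 24h export-date guard is omitted.

enum class TraceOutcome { POSTPONE, ACCEPTED, REJECTED }

fun evaluateTrace(processStatus: String?, responseStatus: String?): TraceOutcome {
    // Request still being processed on the OneKey side - check again on the next run.
    val pending = setOf("REQUEST_PENDING_OKE", "REQUEST_PENDING_JMS", "REQUEST_PROCESSED")
    if (processStatus != null && processStatus in pending) return TraceOutcome.POSTPONE
    // Only a responded / sent request can be closed.
    val responded = setOf("REQUEST_RESPONDED", "RESPONSE_SENT")
    if (processStatus != null && processStatus !in responded) return TraceOutcome.POSTPONE
    return when (responseStatus) {
        "VAS_FOUND", "VAS_FOUND_BUT_INVALID" -> TraceOutcome.ACCEPTED      // OneKeyDCREvent with ACCEPTED
        "VAS_NOT_FOUND", "VAS_INCOHERENT_REQUEST", "VAS_DUPLICATE_PROCESS" -> TraceOutcome.REJECTED
        else -> TraceOutcome.POSTPONE                                      // unknown or missing - try again later
    }
}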
The process is checking Reltio with delay and retries if the load is still in progress waiting for [ONEKEY|VOD] profile. Flow diagram\n\n\n\n\n\nOneKey variant\n\n\n\nVeeva variant: \n\n\n\n\n\nStepsOneKey: generate DCR Change Events () publishes simple events to $env-internal-onekey-dcr-change-events-in: DCR_CHANGEDVeeva specific: : generate DCR Change Events () publishes simple events to $env-internal-veeva-dcr-change-events-in: DCR_CHANGEDEvents are aggregated in a time window (recommended the window length ) and the last event is returned to the process after the window is are processed in the and based on the KeyChangeRequest.vrStatus | evaChangeRequestDetails.vrStatus attribute decision is madeDCR is retrieved from the cache based on the _id of the DCRIf the event state is by [ONEKEY|VOD] crosswalkIf such crosswalk entity exists in Reltio:COMPANYGlobalCustomerId is saved in and will be returned to the Client During the process, the optional check is triggered - create the relation between the object and newly created entitiesif DCRRegistry contain an empty list of entityUris, or some of the newly created entity is not present in the list, the between this object and the has to be createdDCR entity is updated in and the relation between the processed entity and the entityReltio source name (crosswalk. type):  relation type: HCPtoDCR or HCOtoDCR (depending on the object created entities uris should be retrieved by the individualEidValidated or workplaceEidValidated (it may be both) attributes from the events that represent the or e status in and in is updatedDCR entity attributesMapping for OneKeyMapping for VeevaVRStatusCLOSEDVRStatusDetailstate: comments ({sponseComments})ONEKEY ID = individualEidValidated or workplaceEidValidatedVEEVA comments = sponseCommentsVEEVA ID = entityUrisCOMPANYGlobalCustomerIdThis is required in ACCEPTED status If the [ONEKEY|VOD] does not exist in ReltioRegenerate the Event with a new timestamp to the input topic so this will be processed in the next hoursUpdate the DCR statusDCR entity attributesMappingVRStatusOPENVRStatusDetailACCEPTEDupdate the status to the OK_NOT_FOUND | VEEVA_NOT_FOUND and increase the "retryCounter" attributeIf the event state is REJECTEDIf a Reltio DS has already seen this request, REJECT the and end the flow (if the initial target type is Reltio)The status in and in is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: REJECTEDComments[ONEKEY|VOD] comments ({sponseComments})If this is based on the routing table and it was never sent to the Reltio DS, then create the workflow and send this to the DS. Add the information comment that this was Rejected by the , so now Reltio DS has to decide if this should be REJECTED or APPLIED in Reltio. Add the comment that this is not possible to execute the sendTo3PartyValidation button in this case. Steps:Check if the initial target type is [ONEKEY|VOD]Use the DCR Request that was initially received from PforceRx and is a request (after validation) Send the to Reltio the service returns the following response:ACCEPTED (change request accepted by Reltio)update the status to DS_ACTION_REQUIERED and in the comment add the following: "This was REJECTED by the [ONEKEY|VOD] Data Steward with the following comment: <[ONEKEY|VOD] reject comment>. Please review this in and APPLY or REJECT. 
It is not possible to execute the sendTo3PartyValidation button in this case"initialize new Workflow in Reltio with the ve data in the entity status in and update with workflow ID and other attributes that were used in this JECTED  (failure or error response from Reltio)CLOSE the with the information that was REJECTED by the [ONEKEY|VOD] and Reltio also REJECTED the . Add the error message from both systems in the comment. TriggersTrigger actionComponentActionDefault timeIN Events incoming dcr-service-2:DCROneKeyResponseStreamdcr-service-2:DCRVeevaResponseStream ($env-internal-veeva-dcr-change-events-in)process publisher full change request events in the streamrealtime: events stream processing  2Main component with flow implementationManagerReltio Adapter  - API operationsPublisherEvents publisher generates incoming eventsHub and  \n\n\n" }, { "title": "Reltio: create method - direct", "": "", "pageLink": "/display//Reltio%3A+create+DCR+method+-+direct", "content": " method exposed in the Manager component responsible for submitting the Change Request to ReltioFlow diagramStepsReceive the request generated by componentDepending on the Action execute the method in the Manager component:insert - Execute standard Create/Update operation with additional parameterupdate - Execute Update Attributes operation with additional parameterthe combination of once updating existing parameter in Reltiothe INSERT_ATTRIBUTE once adding new attribute to Update Attribute operation with additional parameterthe UPDATE_END_DATE on the entity to inactivate this profileBased on the response is returned: processed the request successfully  Reltio returned the exception, Client will receive the detailed description in the errorMessageTriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcr2Create change Requests in ReltioAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub and  " }, { "title": "Reltio: process DCR Change Events", "": "", "pageLink": "/display//Reltio%3A+process+DCR+Change+Events", "content": "DescriptionThe process updates the 's based on the Change Request events received from Reltio(publishing). Based on the decision the state attribute contains relevant information to update status. 
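For the Reltio: create DCR method described above, the DCR action decides which Manager operation is executed (insert → standard Create/Update with the additional parameter, update → Update Attributes with the change-request parameter or INSERT_ATTRIBUTE when adding a new attribute, delete → an end-date update that inactivates the profile). A rough sketch of that mapping is shown below; the enum and the operation classes are illustrative assumptions, not the Manager API.

enum class DcrAction { INSERT, UPDATE, DELETE }

sealed class ReltioOperation {
    data class CreateOrUpdateEntity(val asChangeRequest: Boolean = true) : ReltioOperation()
    data class UpdateAttributes(val insertNewAttribute: Boolean, val asChangeRequest: Boolean = true) : ReltioOperation()
    data class UpdateEndDate(val asChangeRequest: Boolean = true) : ReltioOperation()   // inactivates the profile
}

fun toReltioOperation(action: DcrAction, addsNewAttribute: Boolean = false): ReltioOperation = when (action) {
    DcrAction.INSERT -> ReltioOperation.CreateOrUpdateEntity()
    DcrAction.UPDATE -> ReltioOperation.UpdateAttributes(insertNewAttribute = addsNewAttribute)
    DcrAction.DELETE -> ReltioOperation.UpdateEndDate()
}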
During this process also the comments created by DS are retrieved and the relationship (optional step) between the object and the newly created entity is created.Flow diagramStepsEvent publisher publishes simple events to $env-internal-reltio-dcr-change-events-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_REMOVED("CHANGE_REQUEST_REMOVED")When the events do not contain the ThirdPartyValidation flag it means that DS APPLIED or REJECTED the , the following logic is appliedEvents are processed in the and based on the ate attribute decision is madeIf the state is APPLIED or REJECTS, is retrieved from the cache based on the changeRequestURIIf exists in Cache The status in is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: APPLIED → ACCEPTEDstate: REJECTED → , the events are rejected and the transaction is endedThe COMPANYCustomerGlobalId is retrieved for newly created entities in based on the main entity URI.During the process, the optional check is triggered - create the relation between the object and newly created entitiesif DCRRegistry contain an empty list of entityUris, or some of the newly created entity is not present in the list, the between this object and the has to be createdDCR entity is updated in and the relation between the processed entity and the entityReltio source name (crosswalk. type):  relation type: HCPtoDCR or HCOtoDCR (depending on the object type)The comments added by the during the processing of the Change request is retrieved using the following operation:GET /tasks?objectURI=entities/The processInstanceComments is retrieved from the response and added to angeRequestComment Otherwise, when the events contain the ThirdPartyValidation flag it means that DS decided to send the to IQVIA or for the validation, the following logic is applied:If the current targetType is | the and add the comment on the in that " was already processed by [ONEKEY|VEEVA] Data Stewards, REJECT because it is not allowed to send this one more time to [IQVIA|VEEVA]"If the current targetType is , it means that we can send this to [IQVIA|VEEVA] for validation Use the DCR Request that was initially received from PforceRx and is a request (after /dcr method in [ONEKEY|VEEVA] , the service returns the following response:ACCEPTED - update the status to [SENT_TO_OK|SENT_TO_VEEVA]REJECTED - it means that some unexpected exception occurred in [ONEKEY|VEEVA], or request was rejected by [ONEKEY|VEEVA], or the crosswalk does not exist in , and [ONEKEY|VEEVA]service rejected this requestVeeva specific: When crosswalk does not exist in , current version of profile is being sent to for validation independent from initial changes which where incorporated within DCRTriggersTrigger actionComponentActionDefault timeIN Events incoming dcr-service-2:DCRReltioResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing  2Main component with flow implementationManagerReltio Adapter  - API operationsPublisherEvents publisher generates incoming eventsHub and Entities Cache " }, { "title": "Reltio: Profiles created by ", "": "", "pageLink": "/display/GMDM/Reltio%3A+Profiles+created+by+DCR", "content": " typeApproval/Reject Record visibility in MDMCrosswalk TypeCrosswalk ValueSourceDCR create for by created in MDMONEKEY|VODonekey id ONEKEY|VODApproved by created in source name from (KOL_OneView, PforceRx, etc)DCR source name from (KOL_OneView, , etc)DCR edit for by requested attribute updated in MDMONEKEY|VODONEKEY|VODApproved by requested attribute 
updated in uriReltioDCR edit for addressApproved by /VODNew address created in , existing address marked as inactiveONEKEY|VODONEKEY|VODApproved by DSRNew address created in , existing address marked as inactiveReltioentity " }, { "title": "Veeva DCR flows", "": "", "pageLink": "/display/GMDM/Veeva+DCR+flows", "content": "DescriptionThe process is responsible for creating DCRs which are stored (Store VR) to be further transferred and processed by . Changes can be suggested by the DS using "Suggest" operation in and "Send to Third Party Validation" button. All DCRs are saved in the dedicated collection in HUB Mongo DB, required to gather metadata and trace the changes for each request. During this process, the communication to is established via /SFTP communication. SubmitVR operation is executed to create a new ZIP files with requests spread across multiple CSV files. The TraceVR operation is executed to check if responded to initial Requests via ZIP file placed Inbound dir. The process is divided into 3 sections:Create request - VeevaSubmit DCR Request - VeevaTrace Validation Request - VeevaThe below diagram presents an overview of the entire process. Detailed descriptions are available in the separated process diagram for phaseFlow diagramStepsCreateVRProcess of saving requests in after being triggered by request information is translated to 's model and stored in dedicated collection for Veeva bmitVRThe process of submitting stored in Mongo Cache to Veeva's SFTP via bucket. The process aggregates events stored in since last ZIP is created with files containing request for . ZIP is placed in outbound dir in bucket which is further synchronized to Veeva's SFTP. Each is updated with ZIP file name which was used to transfer request to aceVRThe process of tracing is triggered each by bound bucket is searched for ZIP files with CSVs containing responses from . There are multiple dirs in buckets, each for specific group of countries (currently CN and of responses are spread across multiple files. Combined information is being nally information about is updated in and events are produced to dedicated topic for for further iggersDCR service 2 is being triggered via /dcr calls which are triggered by actions ( phase) → "Suggests 3rd party validation" which pushes from to componentsDescribed in the separated sub-pages for each gn document for HUB development Design → cxReltio mapping → VeevaOpenDataAPACDataDictionary.xlsxVOD model description () → Veeva_OpenData_APAC_Data_Dictionary .xlsx" }, { "title": "Create request - Veeva", "": "", "pageLink": "/display//Create+DCR+request+-+Veeva", "content": "DescriptionThe process of creating new requests to . During this process, new DCRs are created in DCRregistryVeeva mongo collection.Flow diagramStepsService is called by request is validated. If request is invalid - return response with status input request to modeltranslate lookup codes to source codesfill the model with input request valuesSave request to DCRRegistryVeeva mongo collection with status NEWMappingsDCR domain model→ mapping file: VeevaOpenDataAPACDataDictionary-mmor-mapping.xlsxVeeva integration guide" }, { "title": "Submit DCR Request - Veeva", "": "", "pageLink": "/display/GMDM/Submit+DCR+Request+-+Veeva", "content": "DescriptionThe process of submitting new validation requests to the OpenData service via VeevaAdapter (communication with ) based on DCRRegistryVeeva mongo collection . During this process, new DCRs are created in system. 
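The submit step described in the steps that follow groups all NEW Veeva requests by country and produces one ZIP per country before it is placed in the outbound location. A sketch of that grouping, assuming a simplified request class, a single CSV entry per ZIP and an illustrative header:

import java.io.ByteArrayOutputStream
import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

// Simplified request; in the real flow each DCR maps to several CSV files inside the ZIP.
data class VeevaRequest(val dcrKey: String, val country: String, val changeRequestCsvLine: String)

fun buildZipPerCountry(newRequests: List<VeevaRequest>): Map<String, ByteArray> =
    newRequests.groupBy { it.country }.mapValues { (_, requests) ->
        ByteArrayOutputStream().use { bytes ->
            ZipOutputStream(bytes).use { zip ->
                zip.putNextEntry(ZipEntry("change_request.csv"))
                val header = "dcr_key,country"                      // illustrative header only
                val body = requests.joinToString("\n") { it.changeRequestCsvLine }
                zip.write((header + "\n" + body).toByteArray())
                zip.closeEntry()
            }
            bytes.toByteArray()
        }
    }

Each resulting ZIP would then be uploaded to the country-specific outbound location and the corresponding requests marked as SENT, as described in the steps below.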
service flow: Veeva requests with status NEW are queried in the DCRRegistryVeeva store and are grouped by country. For each country: merge the requests - create one zip file per country, upload the zip file to the location, and update the status to SENT if the upload is successful. DCR entity attributesMappingDCRIDVeeva VR Request IdVRStatus"OPEN"VRStatusDetail"SENT"CreatedByMDM HUBSentDatecurrent timeSFTP integration service flow: Every N hours grab all zip files from the locations and upload the files to the corresponding SFTP server. TriggersTrigger actionComponentActionDefault timeSpring schedulermdm-veeva-dcr-service:VeevaDCRRequestSenderprepare ZIP files for the systemCalled every specified intervalDependent componentsComponentUsageVeeva adapterUpload request to location" }, { "title": "Trace Validation Request - Veeva", "": "", "pageLink": "/display//Trace+Validation+Request+-+Veeva", "content": "DescriptionThe process of tracing the DCR changes based on the Veeva VR changes. During this process the HUB DCRRegistryVeeva Cache is queried every hour for SENT DCRs and the VR status is checked using the (/SFTP integration). After verification an event is sent to the response stream. Get all responses using the Adapter. For each response: check if the status is terminal (CHANGE_ACCEPTED, CHANGE_PARTIAL, CHANGE_REJECTED, CHANGE_CANCELLED); if not - go to the next response; query the DCRRegistryVeeva mongo collection for the DCR with the given key and SENT status; get the ID (vid__v) from the response file; generate the change event; update the status in the DCRRegistryVeeva mongo collection. If the resolution is CHANGE_ACCEPTED or CHANGE_PARTIAL: DCR entity attributesMappingVRStatus"CLOSED"VRStatusDetail"ACCEPTED"ResponseTimeveeva response completed dateCommentsveeva response resolution notes. If the resolution is CHANGE_REJECTED or CHANGE_CANCELLED: DCR entity attributesMappingVRStatus"CLOSED"VRStatusDetail"REJECTED"ResponseTimeveeva response completed dateCommentsveeva response resolution notesTriggersTrigger actionComponentActionDefault timeIN schedulermdm-veeva-dcr-service:VeevaDCRRequestTracestart trace validation request processevery hourOUT topicmdm-dcr-service-2:VeevaResponseStreamupdate status in , create relationsinvokes producer for each veeva responseDependent componentsComponentUsageDCR Service 2Process response event" }, { "title": "Veeva: create method ()", "": "", "pageLink": "/pages/tion?pageId=", "content": " method exposed in the component responsible for creating new requests specific to () and storing them in a dedicated collection for further submission. Since enables communication only via , a dedicated mechanism is required to actually trigger the CSV/ZIP file creation and file placement in the outbound directory. A periodic call to this method will be scheduled once a day (with cron), which will in the end call the VeevaAdapter with the createChangeRequest method.Flow diagramStepsReceive the request. Validate the initial request: check if the Veeva crosswalk exists when there is an update on the profile; otherwise it is required to prepare the creation of a new profile. If any formal attribute is missing or incorrect: skip the request. Then the DCR is mapped to a Veeva Request by invoking the mapper between the HUB DCR and the model. For mapping purposes the mapping table below should be used. If there is no proper LOV mapping between HUB and , the default fallback should be set to a question mark → ?  
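The lookup-code translation mentioned above (country-specific defaults, then RDM lookups, then the "?" fallback, detailed later in the Mapping Reltio canonical codes → source codes section) could be sketched as follows; the data shapes and the naive country-based filtering of RDM candidates are assumptions made only for illustration.

data class DefaultMapping(val country: String, val canonicalCode: String, val sourceCode: String)

fun resolveSourceCode(
    country: String,
    canonicalCode: String,
    defaults: List<DefaultMapping>,                  // e.g. parsed from a Country;Canonical;Source CSV
    rdmSourceMappings: Map<String, List<String>>     // canonical code -> candidate source codes from RDM
): String {
    // 1. Country-specific default, e.g. "IN;SP.PD;PD".
    defaults.firstOrNull { it.country == country && it.canonicalCode == canonicalCode }
        ?.let { return it.sourceCode }
    // 2. RDM lookup; here we naively keep the first candidate that mentions the country code.
    rdmSourceMappings[canonicalCode]
        ?.firstOrNull { it.contains(country) }
        ?.let { return it }
    // 3. Fallback: the question mark documented for unmapped codes.
    return "?"
}

fun main() {
    val defaults = listOf(DefaultMapping("IN", "SP.PD", "PD"))
    println(resolveSourceCode("IN", "SP.PD", defaults, emptyMap())) // PD
    println(resolveSourceCode("SG", "SP.XX", defaults, emptyMap())) // ?
}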
Once proper request has been created, it should be stored as a entry in dedicated DCRRegistryVeeva collection to be ready for actually send via job and for future tracing purposesPrepare return response for initial request with below logicGenerate sample request after successful mongo insert →  generateResponse(dcrRequest, , null, null)Generate error when validation or exception →  generateResponse(dcrRequest, QUEST_FAILED, getErrorDetails(), null);Mapping HUB DCR → model Below table does not contain all new attributes which are new in Reltio. Only the most important ones were mentioned le STTM Stats_SG_HK_v3.xlsx contains full mapping requirements from data model. It does contain full data mapping which should be covered in target process for ltioHUBVEEVAAttribute PathDetailsDCR Request pathDetailsFile for Add Request?Required for Change Request?DescriptionReference (RDM/LOV)NOTEHCON/AMongo Generated ID for this | mapping from HUB Domain DCRRequest take this from DCRRequestD.dcrRequestId: String, // HUB request - required in servicechange_requestdcr_keyYYCustomer's internal identifier for this requestChange Requests comments extDCRCommentchange_requestdescriptionYYRequester free-text comments explaining the eatedBycreatedBychange_requestcreated_byYYFor requestor identificationN/ new objects - ADD, if veeva ID CHANGEchange_requestchange_request_typeYYADD_REQUEST or CHANGE_REQUESTN/Adepends on suggested changes (check use-cases)main entity object type or or HCOEntityTypeN/ Generated ID for this | KEYchange_request_hcodcr_keyYYCustomer's internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities.HCO.updateCrosswalk.type (Reltio)lue (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_hcoentity_keyYYCustomer's internal identifierCrosswalks - VEEVA crosswalkwhen update on VEEVAentities.HCO.updateCrosswalk.type (VEEVA)lue ( ID)change_request_hcovid__vYNVeeva ID of existing to update; if blank, the request will be interpreted as an add requestconfiguration/entityTypes//attributes//attributes/ elementTODO - add new attributechange_request_hcoalternate_name_1__vYN????change_request_hcobusiness_type__vYNHCOBusinessTypeTO BE CONFIRMEDconfiguration/entityTypes//attributes/ClassofTradeN/attributes/bTypeCodechange_request_hcpmajor_class_of_trade__vNNCOTFacilityTypeIn PforceRx - Account Type, more info: \n MR-9512\n -\n Getting issue details...\n STATUS\n configuration/entityTypes//attributes/Namenamechange_request_hcocorporate_name__vNYconfiguration/entityTypes//attributes/TotalLicenseBedsTODO - add new attributechange_request_hcocount_beds__vNYconfiguration/entityTypes//attributes//attributes/ with rank 1emailschange_request_hcoemail_1__vNNconfiguration/entityTypes//attributes//attributes/ with rank 2change_request_hcoemail_2__vNNconfiguration/entityTypes//attributes/Phone/attributes/ type TEL.FAX with best rankphoneschange_request_hcofax_1__vNNconfiguration/entityTypes//attributes/Phone/attributes/ type TEL.FAX with worst rankchange_request_hcofax_2__vNNconfiguration//attributes/ - add new attributechange_request_hcohco_status__vNNHCOStatusconfiguration/entityTypes//attributes/TypeCodetypecodechange_request_hcohco_type__vNNHCOTypeconfiguration/entityTypes//attributes/Phone/attributes/Numberphone type TEL.OFFICE with best rankphoneschange_request_hcophone_1__vNNconfiguration/entityTypes//attributes/Phone/attributes/Numberphone type TEL.OFFICE with worst 
rankchange_request_hcophone_2__vNNconfiguration/entityTypes//attributes/Phone/attributes/Numberphone type TEL.OFFICE with worst rankchange_request_hcophone_3__vNNconfiguration//attributes/untrychange_request_hcoprimary_country__vNNconfiguration/entityTypes//attributes/ClassofTradeN/attributes/ from COT specialtieschange_request_hcospecialty_1__vNNconfiguration/entityTypes//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_10__vNNSpecialityconfiguration/entityTypes//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_2__vNNconfiguration/entityTypes//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_3__vNNconfiguration/entityTypes//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_4__vNNconfiguration//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_5__vNNconfiguration/entityTypes//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_6__vNNconfiguration/entityTypes//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_7__vNNconfiguration/entityTypes//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_8__vNNconfiguration/entityTypes//attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_9__vNNconfiguration/entityTypes//attributes/Website/attributes/WebsiteURLfirst elementwebsiteURLchange_request_hcoURL_1__vNNconfiguration/entityTypes//attributes/Website/attributes/WebsiteURLN/AN/Achange_request_hcoURL_2__vNNHCP N/AMongo Generated ID for this | KEYchange_request_hcpdcr_keyYYCustomer's internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities.HCO.updateCrosswalk.type (Reltio)lue (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_hcpentity_keyYYCustomer's internal identifierconfiguration/entityTypes//attributes/untrychange_request_hcpprimary_country__vYYCrosswalks - VEEVA crosswalkwhen update on VEEVAentities.HCO.updateCrosswalk.type (VEEVA)lue (VEEVA ID)change_request_hcpvid__vNYconfiguration/entityTypes//attributes/FirstNamefirstNamechange_request_hcpfirst_name__vYNconfiguration/entityTypes//attributes/MiddlemiddleNamechange_request_hcpmiddle_name__vNNconfiguration/entityTypes//attributes/LastNamelastNamechange_request_hcplast_name__vYNconfiguration/entityTypes//attributes/NicknameTODO - add new attributechange_request_hcpnickname__vNNconfiguration/entityTypes//attributes/Prefixprefixchange_request_hcpprefix__vNNHCPPrefixconfiguration/entityTypes//attributes/SuffixNamesuffixchange_request_hcpsuffix__vNNconfiguration/entityTypes//attributes/Titletitlechange_request_hcpprofessional_title__vNNHCPProfessionalTitleconfiguration/entityTypes//attributes/SubTypeCodesubTypeCodechange_request_hcphcp_type__vYNHCPTypeconfiguration/entityTypes//attributes/ - add new attributechange_request_hcphcp_status__vNNHCPStatusconfiguration/entityTypes//attributes/AlternateName/attributes/FirstNameTODO - add new attributechange_request_hcpalternate_first_name__vNNconfiguration/entityTypes//attributes/AlternateName/attributes/LastNameTODO - add new attributechange_request_hcpalternate_last_name__vNNconfiguration/entityTypes//attributes/AlternateName/attributes/MiddleNameTODO - add new - add new attributechange_request_hcpfamily_full_name__vNNTO BE CONFRIMEDconfiguration/entityTypes//attributes/DoBbirthYearchange_request_hcpbirth_year__vNNconfiguration/entityTypes//attributes/Credential/attributes/ rank 1TODO - add new attributechange_request_hcpcredentials_1__vNNTO BE 
CONFIRMEDconfiguration/entityTypes//attributes/Credential/attributes/ - add new attributechange_request_hcpcredentials_2__vNNIn reltio there is attribute but not usedconfiguration/entityTypes//attributes/Credential/attributes/Credential3TODO - add new attributechange_request_hcpcredentials_3__vNN                            "uri": "configuration/entityTypes//attributes/Credential/attributes/Credential",configuration/entityTypes//attributes/Credential/attributes/ - add new attributechange_request_hcpcredentials_4__vNN                            "lookupCode": "rdm/lookupTypes/Credential",configuration/entityTypes//attributes/Credential/attributes/Credential5TODO - add new attributechange_request_hcpcredentials_5__vNNHCPCredentials                            "skipInDataAccess": false??TODO - add new attributechange_request_hcpfellow__vNNBooleanReferenceTO BE CONFRIMEDconfiguration/entityTypes//attributes/Gendergenderchange_request_hcpgender__vNNHCPGender?? Education ?? - add new attributechange_request_hcpeducation_level__vNNHCPEducationLevelTO BE CONFRIMEDconfiguration/entityTypes//attributes/Education/attributes/SchoolNameTODO - add new attributechange_request_hcpgrad_school__vNNconfiguration/entityTypes//attributes/Education/attributes/YearOfGraduationTODO - add new attributechange_request_hcpgrad_year__vNN??change_request_hcphcp_focus_area_10__vNNTO BE CONFRIMED??change_request_hcphcp_focus_area_1__vNN??change_request_hcphcp_focus_area_2__vNN??change_request_hcphcp_focus_area_3__vNN??change_request_hcphcp_focus_area_4__vNN??change_request_hcphcp_focus_area_5__vNN??change_request_hcphcp_focus_area_6__vNN??change_request_hcphcp_focus_area_7__vNN??change_request_hcphcp_focus_area_8__vNN??change_request_hcphcp_focus_area_9__vNNHCPFocusArea??change_request_hcpmedical_degree_1__vNNTO BE CONFRIMED??change_request_hcpmedical_degree_2__vNNHCPMedicalDegreeconfiguration/entityTypes//attributes/Specialities/attributes/ rank from 1 to 100specialtieschange_request_hcpspecialty_1__vYNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_10__vNNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_2__vNNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_3__vNNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_4__vNNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_5__vNNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_6__vNNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_7__vNNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_8__vNNconfiguration/entityTypes//attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_9__vNNSpecialtyconfiguration/entityTypes//attributes/WebsiteURLTODO - add new attributechange_request_hcpURL_1__vNNADDRESSMongo Generated ID for this | internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities. 
OR HCO.updateCrosswalk.type (Reltio)entities.HCP OR lue (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_addressentity_keyYYCustomer's internal identifierattributes/Addresses/attributes/'s internal address identifierattributes/Addresses/attributes/AddressLine1addressLine1change_request_addressaddress_line_1__vYNattributes/Addresses/attributes/AddressLine2addressLine2change_request_addressaddress_line_2__vNNattributes/Addresses/attributes/AddressLine3addressLine3change_request_addressaddress_line_3__vNNN/AN/AAchange_request_addressaddress_status__vNNAddressStatusattributes/Addresses/attributes/AddressTypeaddressTypechange_request_addressaddress_type__vYNAddressTypeattributes/Addresses/attributes/StateProvincestateProvincechange_request_addressadministrative_area__vYNAddressAdminAreaattributes/Addresses/attributes/Countrycountrychange_request_addresscountry__vYNattributes/Addresses/attributes/Citycitychange_request_addresslocality__vYYattributes/Addresses/attributes/Zip5zipchange_request_addresspostal_code__vYNattributes/Addresses/attributes/Source/attributes/SourceNameattributes/Addresses/attributes/Source/attributes/SourceAddressIDwhen VEEVA map VEEVA ID to sourceAddressIdchange_request_addressvid__vNYmap fromrelationTypes/OtherHCOtoHCOAffiliationsor relationTypes/ContactAffiliationsThis will be Affiliation or HCO.OtherHcoToHCO affiliationMongo Generated ID for this | KEYchange_request_parenthcodcr_keyYYCustomer's internal identifier for this lationUri (from Domain model)information about Reltio Relation IDchange_request_parenthcoparenthco_keyYYCustomer's internal identifier for this relationshipRELATION IDKEY entity_key from or (start object)change_request_parenthcochild_entity_keyYYChild Identifier in the /HCP fileSTART OBJECT IDendObject entity uri mapped to refId.EntityURITargetObjectIdKEY entity_key from or (end object, by affiliation)change_request_parenthcoparent_entity_keyYYParent identifier in the fileEND OBJECT IDchanges in Domain model mappingmap urceName - VEEVAmap urceValue - VEEVA IDadd to Domain modelmap if relation is from ID change_request_parenthcovid__vNYstart object entity type change_request_parenthcoentity_type__vYNattributes/RelationType/attributes/PrimaryAffiliationif is primaryTODO - add new attribute to otherHcoToHCOchange_request_parenthcois_primary_relationship__vNNBooleanReferenceHCO_HCO or HCP_HCOchange_request_parenthcohierarchy_type__vRelationHierarchyTypeattributes//attributes/RelationshipDescriptiontype from affiliationbased on or OtherHCOToHCO affiliationI think it will be 14-Emploted for HCP_HCOand 4-Manages for HCO_HCObut maybe we can map from affiliation.typechange_request_parenthcorelationship_type__vYNRelationTypeMongo collectionAll DCRs initiated by the dcr-service-2 and to be sent to will be stored in in new collection DCRRegistryVeeva. 
The idea is to gather all DCRs requested by the client through and schedule the ‘SubmitVR’ process that will communicate with the adapter. Typical use case: the Client requests 3 DCRs during the day; SubmitVR contains the schedule that gathers all DCRs with NEW status created during and uses the VeevaAdapter to push the requests to /. In this store we are going to keep both types of DCRs:
initiated by PforceRX - PFORCERX_DCR("PforceRxDCR")
initiated by Reltio SubmitVR - SENDTO3PART_DCR("ReltioSuggestedAndSendTo3PartyDCR");
Store class idea: _id – this is the same ID that was assigned to the DCR in dcr-service-2. VeevaVRDetails (reconstructed below from the broken export; the nested VeevaRequestFiles/VeevaResponseFiles names and the String element types are assumptions, the field names are preserved):
@Document("DCRRegistryVEEVA")
@JsonIgnoreProperties(ignoreUnknown = true)
@JsonInclude(JsonInclude.Include.NON_NULL)
data class VeevaVRDetails(
    @Id
    val id: String? = null,
    val type: DCRType,
    val status: DCRRequestStatusDetails,
    val createdBy: String? = null,
    val createTime: ZonedDateTime? = null,
    val endTime: ZonedDateTime? = null,
    val veevaRequestTime: ZonedDateTime? = null,
    val veevaResponseTime: ZonedDateTime? = null,
    val veevaRequestFileName: String? = null,
    val veevaResponseFileName: String? = null,
    val veevaResponseFileTime: ZonedDateTime? = null,
    val country: String? = null,
    val source: String? = null,
    val extDCRComment: String? = null, // external Comment (client)
    val trackingDetails: List<String> = mutableListOf(), // element type was lost in the export

    // RAW FILE LINES mapped from DCRRequestD to the Veeva model
    val veevaRequest: VeevaRequestFiles? = null,

    // RAW FILE LINES mapped from the Response model
    val veevaResponse: VeevaResponseFiles? = null
)

data class VeevaRequestFiles(
    val change_request_csv: String,
    val change_request_hcp_csv: String,
    val change_request_hco_csv: List<String>,
    val change_request_address_csv: List<String>,
    val change_request_parenthco_csv: List<String>
)

data class VeevaResponseFiles(
    val change_request_response_csv: String,
    val change_request_response_hcp_csv: String,
    val change_request_response_hco_csv: List<String>,
    val change_request_response_address_csv: List<String>,
    val change_request_response_parenthco_csv: List<String>
)
Mapping Reltio canonical codes → source codes: There are a couple of steps performed to find a mapping from a canonical code to a source code understood by . The steps below are performed (in this order) until a code is found. Veeva Defaults: Configuration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/defaults. The purpose of this logic is to select one of possibly multiple source codes on the end for a single code on the COMPANY side (1:N). The other scenario is when there is no actual source code for a canonical code on the end (1:0), however this is usually covered by the fallback code. There are a couple of files, each containing source codes for a specific attribute. The ones related to HCO.Specialty and have logic which selects the proper one. Usually they are constructed as a three-column CSV: Country, Canonical Code, Source Code. For a specific Country we look for the Canonical code and then we send the Source code as it is (no trim required). Example: IN;SP.PD;PD → the source code PD will be sent to VOD. RDM lookups with RegExp: The main logic used to find the proper source code for a canonical code. We use the codes configured in , however the mongo collection LookupValues is used. For a specific canonical code (code) we look for sourceMappings with source = .
Often country is embedded within source code so we're applying regexpConfig (more in Veeva Fallback section) to extract specific source code for particular eva FallbackConfiguration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/fallbackAvailable for a couple of attributes: -specialty.csvCOTSpecialtyhco-type-code.csvHCOTypehcp-specialty.csvHCPSpecialtyhcp-title.csvHCPTitlehcp-type-code.csvHCPSubTypeCodeUsually files are constructed as a one column , however the logic for extracting source code may be differentSource code is extracted using RegExp for each parameter. Check application.yml for this mdm-veeva-dcr-server component - -services > mdm-veeva-dcr-service/src/main/resources/application.yml to find out proper line and extract code sent to VOD.Example value for hco-specialty-type.csv: IN_?Regexp value for HCP.specialty: regexpConfig > HCPSpecialty: code sent to for country: "?" (only question mark without country callmdm-veeva-dcr-service: POST /dcr → eateChangeRequest(request)Creates and stores it in collection without actual send to synchronous requests - realtimeDependent componentsComponentUsageDCR Service 2Main component with flow implementationHub and  " }, { "title": "Veeva: create method (submitVR)", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionGather all stored entities in DCRRegistryVeeva collection (status = NEW) and sends them via /SFTP to (). This method triggers CSV/ZIP file creation and file placement in outbound directory. This method is triggered from cron which invokes ndDCRs() from  Flow diagramStepsReceive the request via scheduled trigger, usually every 24h (xedDelay) at specific time of day (itDelay)All entities (VeevaVRDetails) with status NEW are being retrieved from DCRRegistryVeeva collection Then VeevaCreateChangeRequest object is created which aggregates all content which should be placed in actual files. Each object contains only DCRs specific for countryEach country has its own /SFTP directory structure as well as dedicated server instanceOnce files are created with header and content, they are packed into single ZIP fileFinally ZIP file is placed in outbound directoryIf file was placedsuccessfuly - then VeevaChangeRequestACK status = SUCCESSotherwise - then VeevaChangeRequestACK status = FAILURE and process endsFinally, status of VeevaVRDetails entity in DCRRegistryVeeva collection is updated and set to SENT_TO_VEEVATriggersTrigger actionComponentActionDefault timeTimer (cron)mdm-veeva-dcr-service: ndDCRs()Takes all unsent entities (status = NEW) from collection and actually puts file on /SFTP directory via eateDCRsUsually every 24h (xedDelay) at specific time of day (itDelay)Dependent componentsComponentUsageDCR Service 2Main component with flow implementationHub and  " }, { "title": "Veeva: generate DCR Change Events (traceVR)", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThe process is responsible for gathering responses from (). Responses are provided via CSV/ZIP files placed on /SFTP server in inbound directory which are specific for each country. During this process files should be retrieved, mapped from to model and published to topic to be properly processed by , : process DCR Change Events.Flow diagramSource: is trigger via cron, usually every 24h (xedDelay) at specific time of day (itDelay)For each country, each inbound directory in scanned for ZIP filesEach ZIP files (_DCR_Response_.zip) should be unpacked and processed. A bunch of files should be extracted. 
Specifically:change_request_response.csv → it's a manifest file with general information in specific columnsdcr_key → ID of which was established during request creation entity_key → ID of entity in , the same one we provided during request creationentity_type → type of entity (, ) which is being modified via this DCRresolution → has information whether was accepted or rejected. Full list of values is solution valueDescriptionCHANGE_PENDINGThis change is still processing and hasn't been resolvedCHANGE_ACCEPTEDThis change has been accepted without modificationCHANGE_PARTIALThis change has been accepted with additional changes made by the steward, or some parts of the change request have been rejectedCHANGE_REJECTEDThis change has been rejected in its entiretyCHANGE_CANCELLEDThis change has been cancelledchange_request_type change_request_type valueDescriptionADD_REQUESTwhether caused to create new profile in with new vid__v  (Veeva id)CHANGE_REQUESTjust update of existing profile in with existing and already known vid__v ( id)change_request_hcp_response.csv - contains information about related to HCPchange_request_hco_response.csv - contains information about related to HCOchange_request_address_response.csv - contains information about related to addresses which are related to specific or HCOchange_request_parenthco_response.csv - contains information about which correspond to relations between and , and and HCOFile with log: _DCR_Request_Job_Log.csv can be skipped. It does not contain any useful information to be processed automaticallyFor all responses from , we need to get corresponding entity (VeevaVRDetails)from collection DCRRegistryVeeva should be selected. In general, specific response files are not that important ( profiles updates will be ingested to HUB via channel) however when new profiles are created (change_request_ange_request_type = ADD_REQUEST) we need to extract theirs ID. We need to deep dive into change_request_hcp_response.csv or change_request_hco_response.csv to find vid__v (Veeva ID) for specific dcr_key This new ID should be stored in evaHCPIdsIt should be further used as a crosswalk value in Reltio:entities.HCO.updateCrosswalk.type (VEEVA)lue (VEEVA ID)Once data has been properly mapped from to HUB model, new VeevaDCREvent entity should be created and published to dedicated topic $env-internal-veeva-dcr-change-events-inPlease be advised, when the status of resolution is not final (CHANGE_ACCEPTED, CHANGE_REJECTED, CHANGE_CANCELLED, CHANGE_PARTIAL) we should not sent event to -service-2Then for each successfully processed entity (VeevaVRDetails) in  DCRRegistryVeeva collection should be updated Veeva CSV: resolutionMongo: DCRRegistryVeeva Entity: atus: DCRRequestStatusDetailsTopic: $env-internal-veeva-dcr-change-events-inEvent: VeevaDCREvent.vrDetails.vrStatusTopic: $env-internal-veeva-dcr-change-events-inEvent: should not be updated at all (stays as SENT)do not send events to -service-2 do not send events to -service-2 CHANGE_ACCEPTEDACCEPTEDCLOSEDACCEPTEDCHANGE_PARTIALACCEPTEDCLOSEDACCEPTEDresolutionNotes / veevaComment should contain more information what was rejected by DSCHANGE_REJECTEDREJECTEDCLOSEDREJECTEDCHANGE_CANCELLEDREJECTEDCLOSEDREJECTEDOnce files are processed, ZIP file should be moved from inbound to VeevaDCREvent Model\ndata class VeevaDCREvent (val eventType: String? =                           val eventTime: Long? =                           val eventPublishingTime: Long? =                           val countryCode: String? 
= null,
                          val dcrId: String? = null,
                          val vrDetails: VeevaChangeRequestDetails)

data class VeevaChangeRequestDetails (
    val vrStatus: String? = null,        // HUB codes
    val vrStatusDetail: String? = null,  // HUB codes
    val veevaComment: String? = null,
    val veevaHCPIds: List<String>? = null, // element type reconstructed as String (lost in the export)
    val veevaHCOIds: List<String>? = null)
TriggersTrigger actionComponentActionDefault timeIN Timer (cron)mdm-veeva-dcr-service: aceDCRs()get responses from the /SFTP directory, extract CSV files from the ZIP file and publish events to the kafka topicevery hour / usually every 6h (xedDelay) at a specific time of day (itDelay)OUT Events on -veeva-dcr-service: aceDCRs()$env-internal-veeva-dcr-change-events-inVeevaDCREvent event published to the topic to be consumed by 2every hour / usually every 6h (xedDelay) at a specific time of day (itDelay)Dependent componentsComponentUsageDCR Service 2Main component with flow implementationHub and  " }, { "title": "ETL Batches", "": "", "pageLink": "/display/GMDM/ETL+Batches", "content": "DescriptionThe process is responsible for managing the batch instances/stages and loading data received from the channel to the system. The Batch service is a complex component that contains predefined JOBS and configuration that uses the JOBS implementations; using asynchronous communication with topics it updates data in the MDM system and gathers the acknowledgment events. The Mongo cache stores the batch instances with corresponding stages and objects that contain metadata information about the loaded data. The below diagram presents an overview of the entire process. Detailed descriptions are available in the separated subpages.Flow diagramModel diagramStepsThe client is able to create a new instance of the batch using - Batch Controller: creating and updating batch instance flow. Once the batch instance is created the client is able to load the data using - Bulk Service: loading bulk data flow. During data load, the following processes start: Sending JOB - send data received from REST to topics; Processing JOB - check the status for the specific load and whether all acknowledgments were received; SoftDeleting JOB - an optional job that is triggered at the end of a batch that was configured to use full file load - this starts the delta detection process and soft-deletes the objects; ACK Collector - a streaming process that gathers events and updates the Cache with the response status. For support purposes the additional Clear Cache operation is exposed.TriggersDescribed in the separated sub-pages for each process.Dependent componentsComponentUsageBatch ServiceMain component with flow implementationManagerAsynchronous events processingHub and cache" }, { "title": "ACK Collector", "": "", "pageLink": "/display//ACK+Collector", "content": "DescriptionThe flow processes the response messages and updates the cache. Based on these responses the Processing flow checks the Cache status and blocks the workflow until all responses are received. This process updates the "status" attribute with the system response and the "updateDateMDM" with the corresponding update timestamp. Flow diagramStepsManager publishes responses to the ACK queue for each object processed through the batch-service. The ACK Collector processes the events in streaming mode and updates the status in the cache. 
The following attributes are updated: status - the MDM status that HUB received after the entity/relationship object was created/updated/soft-deleted; updateDateMDM - the timestamp recorded once the acknowledgment was received; entityId - the corresponding entity/relation URI that is given by the system; errorCode - optional MDM error code once the status is failed; errorMessage - optional MDM error message that contains a detailed description once the status is failed. TriggersTrigger actionComponentActionDefault timeIN Events incoming batch-service: updates the cache based on the ACK responserealtimeDependent componentsComponentUsageBatch ServiceThe main componentManagerAsync route with ACK responsesHub StoreCache" }, { "title": "Batch Controller: creating and updating batch instance", "": "", "pageLink": "/display/GMDM/Batch+Controller%3A+creating+and+updating+batch+instance", "content": "DescriptionThe batch controller is responsible for managing the Batch Instances. The service allows the creation of a new batch instance for a specific Batch, the creation of a new Stage in the batch, and the update of a stage with statistics. The controller component manages the batch instances and validates the requests. Only authorized users are allowed to manage specific batches or stages. Additionally, it is not possible to START multiple instances of the same batch at the same time. Once the batch is started, the Client should load the data and at the end complete the current batch instance. Once the user creates a new batch instance, a new unique ID is assigned; in the next requests the user has to use this ID to update the workflow. By default, once the batch instance is created, all stages are initialized with status PENDING. The Batch controller also manages the dependent stages and marks the whole batch as COMPLETED at the end. Flow diagramStepsThe first step that the User has to make is the initialization of the new batch instance; during this operation the process starts and a new unique ID is assigned. Using the Unique ID and an available stage name the user is able to start the STAGE (by design users have access only to the first "Loading" stage, but this can be changed in the configuration if required). In this request, the objects may be empty. It will cause the initialization of this specific STAGE (its status is changed). From that moment the user is able to load data - the description is available in the next flow - Bulk Service: loading bulk data. After data loading, the User has to complete the STAGE. In this request, the objects have to be delivered.
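A hedged sketch of how a client might drive this lifecycle over REST (the base URL, endpoint paths and JSON bodies shown here are illustrative assumptions, not the documented batch-controller API); the statistics and error fields carried by the completion request are described next.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Illustrative only: base URL and paths are placeholders for the sketch.
val client: HttpClient = HttpClient.newHttpClient()
const val BASE = "https://hub.example.com/batch-controller"   // hypothetical

fun post(path: String, body: String): String =
    client.send(
        HttpRequest.newBuilder(URI.create("$BASE$path"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build(),
        HttpResponse.BodyHandlers.ofString()
    ).body()

fun main() {
    // 1. Initialize a new batch instance (response assumed to carry the new unique instance ID).
    val instanceId = post("/MY_BATCH/instances", "{}")
    // 2. Start the first "Loading" stage for that instance.
    post("/MY_BATCH/instances/$instanceId/stages/HCPLoading/_start", "{}")
    // 3. ...load data through the Bulk Service (separate flow)...
    // 4. Complete the stage, delivering statistics (or errors, which would mark it FAILED).
    post(
        "/MY_BATCH/instances/$instanceId/stages/HCPLoading/_complete",
        """{"statistics":{"receivedCount":1000},"errors":[]}"""
    )
}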
In the request, the User provides the statistics about the load or optionally errors.if there are errors during loading - BatchStageStatus = FAILEDif the load ended with success -    BatchStageStatus = COMPLETEDIn the end, the user should trigger the GET batch instance details operation and wait for the Batch completion ( after stage all dependent stages are started)To get more details about next internal steps check:Processing JOBSending JOBSoftDeleting JOBACK CollectorTriggersTrigger actionComponentActionDefault timeAPI BatchControllerRouteUser initializes the new batch instance, updates the STAGE, saves the statistics, and completes the corresponding is able to get batch instance details and wait for the load completionmuser request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch Instances Cache" }, { "title": "Batches registry", "": "", "pageLink": "/display/GMDM/Batches+registry", "content": "There is a list of batches configured from will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, , , MCONEKEYONEKEY_FRHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, () = RE,,,,,,PM,,, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, ( and Greenland)ONEKEYONEKEY_DKHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, ZealandONEKEYONEKEY_NZHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, KoreaONEKEYONEKEY_KRHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, /UruguayONEKEYONEKEY_ARHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, 
NameStageDetailsAMERBrazilPFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (, /UruguayCanadaAPACJapan PFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (,  /New ZealandIndiaSouth KoreaEMEASaudi ArabiaPFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be incremental file load and don’t need to enable the soft-delete process for entities (, ) and relations (,  DenmarkPortugalGRVTenantCountrySource NameBatch NameStageEMEAGRGCPGCPHCPLoadingITFRESRUTRSADKGLFOPTAMERCAGCPGCPHCPLoadingBRMXARAPACAUGCPGCPHCPLoadingNZINJPKRENGAGETenantCountrySource NameBatch NameStageAMERCAENGAGEENGAGEHCPLoadingHCOLoadingRelationLoading" }, { "title": "Bulk Service: loading bulk data", "": "", "pageLink": "/display//Bulk+Service%3A+loading+bulk+data", "content": "DescriptionThe bulk service is responsible for loading the bundled data using REST as the input and stage topics as the output. This process is strictly connected to the Batch Controller: creating and updating batch instance flow, which means that the Client should first initialize the new batch instance and stage. Using requests data is loaded to the next processing stages. Flow diagramStepsThe batch controller part is described in the Batch Controller: creating and updating batch instance ter the User starts the stage it is now possible to load the data. (Loading STAGE part on the diagram)Depending on the batch workflow configuration it is possible to load entities or relationsPOST /entities - create entities in updated entities in , in that case, the partialOverride option is usedPOST /relations - create relations in MDMPATCH /tags - add tags to objects in /tags - remove tags from objects in MDMPOST /entities/_merge - merges 2 entities in MDMPOST /entities/_unmerge -  unmerges entity B from entity A in , based on the configuration, there is a limitation of the objects in one call - by default user is allowed to send the list of 25 objects in one e response is the HTTP 200 code with an empty e API Loading stage is the synchronous operation, the rest of the process uses the Kafka Topics and all data is shared to the system in an asynchronous way. After all data through the specific STAGE, the Client should complete the STAGE, this will trigger the next processing steps described on the ELT Batch sub-pages. TriggersTrigger actionComponentActionDefault timeAPI BulkControllerRouteClients send the data to the bulk er request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch Instances Cache" }, { "title": "Clear Cache", "": "", "pageLink": "/display/GMDM/Clear+Cache", "content": "DescriptionThis flow is used to clear mongo cache (removes records from batchEntityProcessStatus) for specified batch name, object type and entity type. 
Optional list of countries (comma-separated) allows filtering by countries.Flow diagramStepsclient sends the request to the batch controller with specified parameters like batchName, objectType and entityType example: {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCache?objectType=RELATION&entityType=configuration/relationTypes/ContactAffiliationsexample: {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCache?objectType=ENTITY&entityType=configuration/entityTypes/HCP&countries=,IE,,,DKthe service checks if client is allowed to do this action - has appropriate role CLEAR_CACHE_BATCH the service process client request and executes mongo query with specified parametersthe service returns number of removed iggersTrigger actionComponentActionDefault timeAPI client calls request to clear the cacheuser request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub entities/relations cache" }, { "title": "Clear Cache by croswalks", "": "", "pageLink": "/display//Clear+Cache+by+croswalks", "content": "DescriptionThis flow is used to clear mongo cache (removes records from batchEntityProcessStatus) for specified batch name, sourceId type or/and valueFlow diagramStepsclient sends the request to the batch controller with specified parameters like batchName, sourceId type or/and valueexample: PATCH {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCachebody: "sourceId": [\n {\n "type": "ABC",\n "value": "TEST:123"\n },{\n "type": "DEF"\n },{\n "value": "TEST:456"\n }\n ]\n}\nthe service checks if client is allowed to do this action - has appropriate role CLEAR_CACHE_BATCH the service process client request and executes mongo query with specified parametersthe service returns number of removed iggersTrigger actionComponentActionDefault timeAPI client calls request to clear the cacheuser request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub entities/relations cache" }, { "title": "PATCH Operation", "": "", "pageLink": "/display//PATCH+Operation", "content": "DescriptionEntity PATCH (UpdateHCP/UpdateHCO/UpdateMCO) operation differs slightly from the standard POST (CreateHCP/CreateHCO/CreateMCO) operation:PATCH operation includes contributor crosswalk verification - MDM is searched to make sure that the updated entity exists (to prevent creations of operation uses 's partialOverride parameter. It allows sending only a portion of attributes (usually only the ones that have changed since the last load). Existing attribute values that have not been provided in the request will not be wiped from gorithmPATCH operation logic consists of following steps:For each entity in the bundle (depending on the configuration, usually around 50 requests):Find contributor crosswalk - if contributor crosswalk cannot be determined, throw an exceptionSearch all the contributor crosswalks in - single search requestsFilter results - assign each found entity to corresponding crosswalkIf no entity found for a crosswalk - perform a fallback search by crosswalk using MDM APIFor every entity where contributor crosswalk was not found in above steps, generate a "Not Found" r remaining entities, perform /CreateMCO rge response from /CreateMCO with "Not Found" messages in correct order, return." }, { "title": "Processing JOB", "": "", "pageLink": "/display//Processing+JOB", "content": "DescriptionThe flow checks the using a poller that executes the query each minutes. 
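A minimal sketch of such a poller loop (all names here, including the countUndelivered stub, are assumptions); the exact Mongo query the job runs against the batch cache is quoted just below.

// Illustrative poller: keeps checking the number of objects that have not yet been
// acknowledged and lets the workflow continue only when that count reaches zero.
fun waitUntilAllAcked(
    batchName: String,
    batchStartDate: Long,
    countUndelivered: (batchName: String, since: Long) -> Long,
    pollIntervalMs: Long = 60_000
) {
    while (true) {
        val remaining = countUndelivered(batchName, batchStartDate)
        if (remaining == 0L) break            // every object got its ACK - go to the next stage
        println("Still waiting for $remaining objects of batch $batchName")
        Thread.sleep(pollIntervalMs)          // poll again after the configured interval
    }
}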
During this processing, the count decreases until it reaches 0. The following query is used to check the count of objects that were not delivered. The process ends if the query returns 0 objects - it means that we received an ACK for each object and it is possible to go to the next dependent stage. "{'batchName': ?0 ,'':{ $gt: ?1 }, '$or':[ {'updateDateMDM':{ $lt: ?1 } }, { 'updateDateMDM':{ $exists : false } } ] }"Using this query it is possible to find what objects are still not processed. In that case, the user should provide batchName==" currently loading batch " and use the date that is the batch start date. Flow diagramStepsThe process starts once the activation criteria are successful, which means that the dependent JOB is COMPLETED. Using the trigger mechanism, data is polled from the cache and counted. If the number of not yet processed entities is equal to 0, the process ends; otherwise the process is triggered again after the configured interval. If this is the last stage in the current batch workflow, statistics are calculated (it means that there may be multiple processing jobs in one workflow, but only the last one calculates all gathered statistics). The LAST stage will always contain the following statistics. Each statistic key is divided into 3 sections using the "/" separator: (1) entities or relations, depending on the loaded data; (2) object type - it can be an entity type or any relationType; (3) the counter name. {entities | relations}/{object type}/receivedCount - number of objects received; {entities | relations}/{object type}/skippedCount - number of objects skipped because of delta detection; {entities | relations}/{object type}/failedCount - number of objects that got "failed" status from MDM; {entities | relations}/{object type}/updatedCount - number of objects that got "updated" status from MDM; {entities | relations}/{object type}/createdCount - number of objects that got "created" status from MDM; {entities | relations}/{object type}/notFoundCount - number of objects that got "notFound" status from MDM (may occur when using the partialOverride operation); {entities | relations}/{object type}/deletedCount - number of objects that got "deleted" status from MDM (may occur when the update targeted an already deleted entity); {entities | relations}/{object type}/softDeletedCount - number of objects removed by the SoftDeleting JOB - used only during full file loads. Example statistics:TriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. Triggered by the mechanismbatch-service:ProcessingJobTriggers mongo and checks the number of objects that are not yet processed.every 60 secondsDependent componentsComponentUsageBatch ServiceThe main component with implementationHub StoreThe cache that stores all information about the loaded objects" }, { "title": "Sending JOB", "": "", "pageLink": "/display//Sending+JOB", "content": "DescriptionThe JOB is responsible for sending the data from the Stage Kafka topics to the manager component. During this process data is checked, the checksum is calculated and compared to the previous state, so only the changes are applied to MDM. The Cache - Batch data store - contains multiple metadata attributes like sourceIngestionDate - the time when this entity was most recently shared by the Client - and the response status (create/update/failed). The checksum calculation is skipped for the "failed" objects. It means there is no need to clear the cache for the failed objects, the user just needs to reload the data. The JOB is triggered once the previous dependent job is completed or is started.
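A rough sketch of the checksum-based delta detection mentioned above (the field and function names are assumptions): a new checksum is compared with the one cached from the previous load, unchanged payloads are skipped, and "failed" records are always re-sent.

import java.security.MessageDigest

fun md5(payload: String): String =
    MessageDigest.getInstance("MD5").digest(payload.toByteArray())
        .joinToString("") { "%02x".format(it) }

data class CachedState(val checksum: String?, val status: String?)

fun shouldSendToMdm(payload: String, cached: CachedState?): Boolean {
    if (cached == null) return true                // brand new object, no previous state
    if (cached.status == "failed") return true     // failed objects are always retried
    return md5(payload) != cached.checksum         // send only when the content changed
}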
There are two modes of dependencies between a STAGE and the Sending STAGE: (hard) dependentStages - the Sending stage will start once the previous dependent JOB is COMPLETED; softDependentStages - the Sending stage will start in parallel to the stage. It means that all loaded data will be immediately sent to Reltio. The purpose of the hard dependency is the case when the user has to load entity and Relations objects - the sending of relations has to start only after the corresponding entity load is completed. The process finishes once the stage queue is empty for a configured period (no new events are in the queue). The following query is used to retrieve the processing object from the cache, where batchName is the corresponding batch name and sourceId is the information about the loaded source crosswalk. {'batchName': ?0, {'sourceId.type': ?1, 'sourceId.value': ?2, 'sourceId.sourceTable': ?3 } }Flow diagramStepsThe process starts once the activation criteria are successful, which means that the (hard) dependent JOB is COMPLETED or the soft dependent JOB is STARTED. All entities or relations are polled from the stage topic. If objects exist on the topic, then for each of them: the current state is retrieved from the Batch Cache; if this is a new one, the object is initialized with all required attributes and a checksum; the checksum is calculated (for failed status the checksum calculation is skipped); the sourceIngestionDate is updated to the current date (required to track the object and generate soft-deletes once the entity was not received); updateDate and other attributes are updated and the "deleted" flag is set to false. Once no new objects are on the stage topic the process is finished and the STAGE is updated. TriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. Triggered by the mechanismbatch-service:SendingJobGet entries from the stage topic, save data in mongo and create/update profiles using the producer (asynchronous channel)once the dependent JOB is completedDependent componentsComponentUsageBatch ServiceThe main component with the Sending JOB implementationHub StoreThe cache that stores all information about the loaded objects" }, { "title": "SoftDeleting JOB", "": "", "pageLink": "/display//SoftDeleting+JOB", "content": "DescriptionThis JOB is responsible for the soft-delete process for the full file loads. Batches that are configured with this JOB always have to deliver the full set of data. The process is triggered at the end of the workflow and soft-deletes objects in the system. The following query is used to check how many objects are going to be removed and also to get all these objects and send the soft-delete requests. {'batchName': ?0, 'deleted': false, 'objectType': 'ENTITY OR RELATION', 'sourceIngestionDate':{ $lt: ?1 } }Once the object is soft deleted, the "deleted" flag is changed to "true". Using the mongo query it is possible to check what objects were soft-deleted by this process. In that case, the Administrator should provide the batchName=" currently loading batch" and the deleted parameter =" true". The process removes all objects that were not delivered in the current load, which means that the "SourceIngestionDate" is lower than the "BatchStartDate". It may occur that the number of objects to soft-delete exceeds the limit; in that case, the process is aborted and the Administrator should verify what objects are blocked and notify the client.
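A hedged sketch of this guarded soft-delete sweep (the CachedObject shape and function names are illustrative): candidates are records of the batch that were not seen in the current load and are not yet marked as deleted, and the configured limit - the production value is quoted just below - acts as a safety switch against corrupted or partial input files.

data class CachedObject(
    val crosswalkValue: String,
    val sourceIngestionDate: Long,
    val deleted: Boolean
)

fun softDeleteCandidates(cache: List<CachedObject>, batchStartDate: Long): List<CachedObject> =
    cache.filter { !it.deleted && it.sourceIngestionDate < batchStartDate }

fun runSoftDelete(
    cache: List<CachedObject>,
    batchStartDate: Long,
    maxDeletesLimit: Int,
    sendSoftDelete: (CachedObject) -> Unit
): Boolean {
    val candidates = softDeleteCandidates(cache, batchStartDate)
    if (candidates.size > maxDeletesLimit) {
        // abort: the Administrator has to verify the blocked objects and notify the client
        return false
    }
    candidates.forEach(sendSoftDelete)   // each request carries a deleteDate on the crosswalk
    return true
}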
The production limit is a maximum of 10000 objects in one load.Flow diagramSteps The process starts once the activation criteria are successful, which means that the dependent JOB is ing a query in the first step the process counts the number of entities to be soft-deletedIf the limit is exceeded the process is aborted and status with reason is saved in . The limit is a safety switch in case if we get a corrupted file (empty or partial). It prevents from deleting all   profiles in such the "RelationsUnseenDeletion" STAGE the following information is saved:statistics:maxDeletesLimit - currently configured limitentitiesUnseenResultCount - number of entities that process indicated to soft-deleteerrors:errorCode - 400 errorMessage - Entities delete limit exceeded, aborting soft delete sending.example:Else the Cache is queried and returned objects are sent Manager for removalIn the loop, all objects are queried from Cache and the data is sent to the corresponding topic. During this operation, the cache is updated and is preparedMDMRequest:entityTypecountryCrosswalktypevaluedeleteDate - current timestampCache attributes to update:updateDate = current time - cache object update = current time - date that contains the delete date of corresponding = current time - date that contains the time when the profile was sent to MDMdeleted = true - flag indicates that the profile was soft-deleted2023-07 Update: Set Soft-Delete Limit by CountryDeletingJob now allows additional configuration:\ndeletingJob:\n "TestDeletesPerCountryBatch":\n "EntitiesUnseenDeletion":\n maxDeletesLimit: 20\n queryBatchSize: 5\n reltioRequestTopic: "local-internal-async-all-testbatch"\n reltioResponseTopic: "local-internal-async-all-testbatch-ack"\n>     maxDeletesLimitPerCountry:\n> enabled: true\n> overrides:\n> CA: 10\n> BR: 30\nIf maxDeletesLimitPerCountry.enabled == true (default false):soft-deletes limit in is applied per country. Number of records to delete is fetched from Cache for each country, and if any of the countries exceeds the limit, the batch is failed with appropriate error ft-deletes limit can be changed for each country using the maxDeletesLimitPerCountry.overrides map. If country is not present in the overrides, default value from  is consideredTriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. Triggered by the mechanismbatch-service:AbstractDeletingJob (DeletingJob/DeletingRelationJob)Triggers mongo and soft-delete profiles using producer (asynchronous channel)once the dependence JOB is completedDependent componentsComponentUsageBatch ServiceThe main component with the JOB implementationManagerAsynchronous channel Hub StoreThe cache that stores all information about the loaded objects" }, { "title": "Event filtering and routing rules", "": "", "pageLink": "/display//Event+filtering+and+routing+rules", "content": "At various stages of processing events can be filtered based on some configurable criteria. This helps to lessen the load on the Hub and client systems, as well as simplifies processing on client side by avoiding the types of events that are of no interest to the target application. There are three places where event filtering is applied:Reltio Subscriber – filters events based on their (Reltio-defined) typeNucleus Subscriber – filters out duplicate events, based on event type and entityUriEvent Publisher – filters events based on their contentEvent type filteringEach event received from queue has a "type" attribute. 
Reltio Subscriber has a "allowedEventTypes" configuration parameter (in application.yml config file) that lists event types which are processed by application. Currently, complete list of supported types is:ENTITY_CREATEDENTITY_REMOVEDENTITY_CHANGEDENTITY_LOST_MERGEENTITIES_MERGEDENTITIES_SPLITTEDAn event that does not match this list is ignored, and "Message skipped" entry is added to a log ease keep in mind that while it is easy to remove an event type from this list in order to ignore it, adding new event type is a whole different story – it might not be possible without changes to the application source code.Duplicate detection (Nucleus)There's an in-memory cache maintained that stores entityUri and type of an event previously sent for that uri. This allows duplicate detection. The cache is cleared after successful processing of the whole zip file.Entity data-based filteringEvent Publisher component receives events from internal topic. After fetching current state from (via ) it imposes few additional filtering rules based on fetched data. Those rules are:Filtering based on that entity belongs to. This is based on value of country code, extracted from Country attribute of an entity. List of allowed codes is maintained as "activeCountries" parameter in application.yml config ltering based on Entity type. This is controlled by "allowedEntityTypes" configuration parameter, which currently lists two values: "HCP" and "". Those values are matched against "entityType" attribute of Entity (prefix "configuration/entityTypes/" is added automatically, so it does not need to be included in configuration file)Filtering out events that have empty "targetEntity" attribute – such events are considered outdated, plus they lack some mandatory information that would normally be extracted from targetEntity, such as originating country and source system. They are filtered out because Hub would not be able to process them correctly ltering out events that have value mismatch between "entitiesURIs" attribute of an event and "uri" attribute of targetEntity – for all event types except HCP_LOST_MERGE and HCO_LOST_MERGE. mismatch may arise when is processing events with significant delay (e.g. due to downtime, or when reprocessing events) – Event Publisher might be processing HCP_CHANGED (HCO_CHANGED) event for an Entity that was merged with another Entity since then, so HCP_CHANGED event is considered outdated, and we are expecting HCP_LOST_MERGE event for the same is filter is controlled by lterMismatchedURIs configuration parameter, which takes values (yes/no, true/false)Filtering out events based on timestamps. When HCP_CHANGED or HCO_CHANGED event arrives that has "eventTime" timestamp older than "updatedTime" of the targetEntity, it is assumed that another change for the same entity has already happened and that another event is waiting in the queue to be processed. By ignoring current event Event Publisher is ensuring that only the most recent change is forwarded to client is filter is controlled by lterOutdatedChanges configuration parameter, which can take values (yes/no, true/false)Event routingPublishing Hub supports multiple client systems subscribing for Entity change events. Since those clients might be interested in different subset of Events, the event routing mechanism was created to allow configurable, content-based routing of the events to specific client systems. 
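A conceptual Kotlin sketch of this content-based routing (the RoutingRule shape, header keys and topic names are assumptions; in the real Event Publisher the rules live in application.yml and use Groovy selectors, as described in the following paragraphs). Each rule pairs a boolean selector with a destination topic, and an event is published to every destination whose selector matches its headers.

data class RoutingRule(
    val id: String,
    val selector: (headers: Map<String, Any?>) -> Boolean,
    val destination: String
)

fun route(headers: Map<String, Any?>, rules: List<RoutingRule>): List<String> =
    rules.filter { it.selector(headers) }.map { it.destination }

fun main() {
    val rules = listOf(
        RoutingRule("cn-hcp-full",
            { h -> h["country"] == "CN" && h["eventType"] == "full" },
            "cn-client-events"),                               // hypothetical topic name
        RoutingRule("skip-self-merge",
            { h -> h["selfMerge"] == false },
            "generic-events")                                  // hypothetical topic name
    )
    println(route(mapOf("country" to "CN", "eventType" to "full", "selfMerge" to false), rules))
    // -> [cn-client-events, generic-events]
}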
Routing mechanics consists of three main parts: topics – each client system can has one or more dedicated topics where events of interest for that system are publishedMetadata extraction – as one of the processing steps, there are some pieces of information extracted from the Event and related Entity and put in processing context (as headers), so they can be easily nfigurable routing rules – Event Publisher's configuration file contains the whole section for defining rules that facilitates Groovy scripting language and the metadata.Available metadata is described in the table below.Table 10. Routing headersHeaderTypeValuesSource FieldDescriptioneventTypeStringfull simplenoneType of an event. "full" means Event Sourcing mode, with full targetEntity data. "simple" is just an event with basic data, without targetEntityeventSubtypeStringHCP_CREATED, HCP_CHANGED, ….event.eventTypeFor the full list of available event subtypes is specified in untryStringCN tributes .Country.lookupCodeCountry of origin for the of String["OK", "GRV"]event. osswalks.typeArray containing names of all the source systems as defined by Reltio crosswalksmdmSourceString["RELTIO", NUCLEUS"]NoneSystem of origin for the lfMergeBooleantrue, falseNoneIs the event "self-merge"? Enables filtering out merges on the uting rules configuration is found in utingRules section of application.yml configuration file. Here's an example of such rule: Elements of this configuration are described – unique identifier of the ruleselector – snippet of Groovy code, which should return true or false depending on whether or not message should be forwarded to the stination – name of the topic that message should be sent lector syntax can include, among the others, the elements listed in the table below.Table 11. Selector syntaxElementExampleDescriptioncomparison operators==, !=, <, > syntaxboolean operators&&,set operatorsin, intersectMessage untrySee Table 10 for list of available headers. ".headers" is the standard prefix that must be used do access themFull syntax reference can be found in Apache Camel documentation: . The limitation here is that the whole snippet should return a single boolean stination name can be literal, but can also reference any of the message headers from Table 10, with the following syntax: " }, { "title": "FLEX COV Flows", "": "", "pageLink": "/display//FLEX+COV+Flows", "content": "" }, { "title": "Address rank callback", "": "", "pageLink": "/display/GMDM/Address+rank+callback", "content": "The Address Rank Callback is used only in the FLEX COV environment to update the Rank attribute on Addresses. This process sends the callback to Reltio only when the specific source exists on the profile. The is used then by or in or by the downstream FLEX system. Address Rank Callback is triggered always when operation is invoked. The purpose of this process is to synchronize Reltio with correct address rank sort rrently the functionality is configured only for Trade Instance. Below is the diagram outlining the whole process. Process steps description:Event Publisher receives events from internal topic and calls to retrieve latest state of .Event Publisher internal user is authorized in to check source, country and appropriate access roles. invokes get entity operation in . Returned is then added to the Address Rank sort process, so the client will always get entity with sorted address rank order, but only when this feature is activated in configuration.When Address Rank Sort process is activated, each address in entity is sorted. 
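A minimal sketch of such an address rank sort (the Address shape and the ordering criterion are illustrative assumptions): addresses are sorted, AddressRank is assigned from the resulting order, and the top-ranked address is flagged as the best record.

data class Address(
    val addressLine: String,
    val lastUpdated: Long,
    var addressRank: Int? = null,
    var bestRecord: Boolean = false
)

fun rankAddresses(addresses: List<Address>): List<Address> =
    addresses.sortedByDescending { it.lastUpdated }      // illustrative sort criterion
        .onEachIndexed { index, address ->
            address.addressRank = index + 1
            address.bestRecord = (index == 0)            // rank "1" is always the best record
        }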
In this case "AddressRank" and "BestRecord" attributes are set. When is equal to "1" attribute will always have "1" value.When Address Rank Callback process is activated, relation operation is invoked in . The Relation Request object contains Relation object for each sorted address. Each Relation will be created with "AddrCalc" source, where the start object is current entity id and the end object is id of the entity. In that case relation between entity and is created with additional rank attributes. There is no need to send multiple callback requests every time when get entity operation is invoked, so the Callback operation is invoked only when address rank sort order have changed.Entity data is stored in MongoDB database, for later use in Simple mode (publication of events that entityURI and require client to retrieve full Entity via REST API).For every Reltio event there are two events created: one in Simple mode and one in (full) mode. Based on metadata, and Routing Rules provided as a part of application configuration, the list of the target destinations for those events is created. Event is sent to all matched destinations." }, { "title": "DEA Flow", "": "", "pageLink": "/display//DEA+Flow", "content": "This flow processes files published by to Bucket. Flow steps are presented on the sequence diagram below.  Process steps description: files are uploaded to storage bucket to the appropriate directory intended only for component is monitoring location and processes the files uploaded to lder structure for is divided on "inbound" and "archive" directories. component is polling data from inbound directory, after successful processing the file is copied to "archive directory"Files downloaded from is processed in streaming mode. The processing of the file can be stared before full download of the file. Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully file load Start is saved for the specific load – as loadStartDate.Each line in file is parsed in component and mapped to the dedicated object. file is saved in , in that case one record is saved in one line in the file so there is no need to use record aggregator. Each line has specified length, each column has specified star and end point number in the is downloaded from MongoDB for each record. This context contains crosswalk ID, line from file, MD5 checksum, last modification date, delete flag. When is empty it means that this record is initially created – such object is send to . When is not empty the MD5 form the source file is compared to the MD5 from the (mongo). If MD5 checksums are equals – such object is skipped, otherwise – such object is send to . For each modified object, lastModificationDate is updated in – it is required to detected delete records as the final y when record MD5 checksum is not changed, record will be published to topic dedicated for events for records. They will be processed by component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for creation is stared. The full description of this process is in is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. 
Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between and Manager ter file is successfully processed, delete record processor is started. From Mongo Database each record with lastModificationDate less than loadStartDate and delete flag equal to false is downloaded. When the result count is grater that 1000, delete record processor is stoped – it is a protector feature in case of wrong file uploade which can generate multiple unexpected profiles deletion. Otherwise, when result count is less than 1000, each record from MongoDB is parsed and send to with deleteDate attribute on crosswalk. Then they will be processed by component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for creation is stared. The full description of this process is in section. Profiles created with deleteDate attribute on crosswalk are soft deleted in nally file is moved to archive subtree in bucket." }, { "title": "FLEX Flow", "": "", "pageLink": "/display/GMDM/FLEX+Flow", "content": "This flow processes FLEX files published by to Bucket. Flow steps are presented on the sequence diagram below. Process steps description:FLEX files are uploaded to storage bucket to the appropriate directory intended only for tch Channel component is monitoring location and processes the files uploaded to lder structure for is divided on "inbound" and "archive" directories. component is polling data from inbound directory, after successful processing the file is copied to "archive directory"Files downloaded from is processed in streaming mode. The processing of the file can be stared before full download of the file. Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully downloaded.Each line in file is parsed in component and mapped to the dedicated FLEX object. FLEX file is saved in CSV Data Format, in that case one FLEX record is saved in one line in the file so there is no need to use record aggregator. The first line in the file is always the header line with column names, each next line is the records with "," (comma character) delimiter. The most complex thing in FLEX mapping is Identifiers mapping. When Flex records contain "GROUP_KEY" ("Address Key") attribute it means that Identifiers saved in "Other Active IDs" will be added to entifiers nested attributes. "Other Active IDs" is one line string with key value pairs separated by "," (comma character), and key-value delimiter ":" (colon character). Additionally for each type of customer identifier is always saved in FlexID section.FLEX record will be published to topic dedicated for events for FLEX records. They will be processed by component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for creation is stared. The full description of this process is in is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between and Manager ter file is successfully processed, it is moved to archive subtree in bucket." 
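A hedged sketch of the "Other Active IDs" parsing described in the FLEX mapping above: a single string of key:value pairs separated by "," (comma), with ":" (colon) as the key-value delimiter. The function name, tolerant trimming and the example keys are assumptions for illustration.

fun parseOtherActiveIds(raw: String): Map<String, String> =
    raw.split(",")
        .mapNotNull { pair ->
            val idx = pair.indexOf(':')
            if (idx <= 0) null                       // skip malformed fragments
            else pair.substring(0, idx).trim() to pair.substring(idx + 1).trim()
        }
        .toMap()

// Example (illustrative keys): parseOtherActiveIds("NPI:123,SLN:A-9")
// -> mapOf("NPI" to "123", "SLN" to "A-9")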
}, { "title": "HIN Flow", "": "", "pageLink": "/display//HIN+Flow", "content": "This flow processes HIN files published by to Bucket. Flow steps are presented on the sequence diagram below. Process steps description:HIN files are uploaded to storage bucket to the appropriate directory intended only for HIN tch Channel component is monitoring location and processes the files uploaded to lder structure for HIN is divided on "inbound" and "archive" directories. component is polling data from inbound directory, after successful processing the file is copied to "archive directory"Files downloaded from is processed in streaming mode. The processing of the file can be stared before full download of the file. Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully downloaded.Each line in file is parsed in component and mapped to the dedicated HIN object. HIN file is saved in , in that case one HIN record is saved in one line in the file so there is no need to use record aggregator. Each line has specified length, each column has specified star and end point number in the row.HIN record will be published to topic dedicated for events for FLEX records. They will be processed by component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for creation is stared. The full description of this process is in is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between and Manager ter HIN file is successfully processed, it is moved to archive subtree in bucket." }, { "title": "SAP Flow", "": "", "pageLink": "/display/", "content": "This flow processes files published by GIS system to Bucket. Flow steps are presented on the sequence diagram below. Process steps description: files are uploaded to storage bucket to the appropriate directory intended only for component is monitoring location and processes the files uploaded to portant note: To facilitate fault tolerance the component will be deployed on multiple instances on different machines. However, to avoid conflicts, such as processing the same file twice, only one instance is allowed to do the processing at any given time. This is implemented via standard Apache Camel mechanism of , which is backed by Zookeeper distributed key-value store. When a new file is picked up by instance, the first processing step would be to create a key in Zookeeper, acting as a lock. Only one instance will succeed in creating the key, therefore only one instance will be allowed to lder structure for is divided on "inbound" and "archive" directories. component is polling data from inbound directory, after successful processing the file is copied to "archive directory"Files downloaded from is processed in streaming mode. The processing of the file can be stared before full download of the file. Such solution is dedicated to speed up processing of the big files, because there is no need to wait until the file will be fully downloaded.Each line in file is parsed in component and mapped to the dedicated object. In case of files where one record is saved in multiple lines in the file there is need to use SAPRecordAggregator. 
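A rough sketch of what such a multi-line record aggregator could do (the class name and the header-detection rule shown here are assumptions); the actual record-type characters and the "~" separator are described right after.

class SapRecordAggregator {
    private val buffer = mutableListOf<String>()

    // Feeds one line; returns a complete record when a new header line closes the previous one.
    fun offer(line: String): List<String>? {
        val isHeader = line.startsWith("1~")          // assumed header marker for the sketch
        if (isHeader && buffer.isNotEmpty()) {
            val complete = buffer.toList()
            buffer.clear()
            buffer.add(line)
            return complete
        }
        buffer.add(line)
        return null
    }

    // Flushes the last, still-open record at end of file.
    fun flush(): List<String>? =
        if (buffer.isEmpty()) null else buffer.toList().also { buffer.clear() }
}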
This class will read each line of the file and aggregate each line to create full record. Each line starts with Record Type character, the separator for is "~" (tilde character). Only lines that start with the following character are parsed and create full SAP record:1 – Header4 – Sales OrganizationE – LicenseC – NotesWhen header line is parsed Account Type attribute is checked. Only records with "" type are filtered and post to is downloaded from MongoDB for each record. This context contains Start Date for and 340B Identifiers. When is empty current timestamp is saved for each of the Identifiers, otherwise the start date for the identifiers is changed for the one saved in the cache. This Start Date always must be overwritten with the initial dates from mongo gregated SAP record will be published to topic dedicated for events for records. They will be processed by component. The first step is authorization check to verify if this event was produced by Batch Channel component with appropriate source name and country and roles. Then the standard process for creation is stared. The full description of this process is in is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in transaction log. Additionally each log is saved in MongoDB to create a full report from current load and to correlate record flow between and Manager ter file is successfully processed, it is moved to archive subtree in bucket." }, { "title": " overview", "": "", "pageLink": "/display/GMDM/US+overview", "content": "" }, { "title": "Generic Batch", "": "", "pageLink": "/display/GMDM/Generic+Batch", "content": "The generic batch offers the functionality of configuring processes of data loading from text files () into loading processes are defined in the configuration, without the need for changes in the scription of the processDefinition of single data flow Configuration (definition) od each data flow contains:Data flow name Definition of data files. Each file is described by: File name patternMappings for each column Columns in file definition are described by: Column index and name Column type (string, date, number, fixed value)Attribute of the entity to which the value from the column is mappedConditional mapping parametersAmazon resources and local temporary directory configurationAmazon  input directory   archive directory Local temporary directory  topic names for sending asynchronous requests  database connection parameters (common for all flow definitions) Currently defined data flows: files (with names required after preprocessing stage)Detailed columns to entity attribute mapping fileTH HCPTHCICRhcpEntitiesfileNamePattern: '(TH_Contact_In)+(\\.(?i)(txt))$'hcpAddressesfileNamePattern: '(TH_Contact_Address_In_JOINED)+(\\.(?i)(txt))$'hcpSpecialtiesfileNamePattern: '(TH_Contact_Speciality_In)+(\\.(?i)(txt))$'mdm-gateway\\batch-channel\\src\\main\\resources\\flows.ymlSA HCPSALocalMDMhcpEntitiesfileNamePattern: '(KSA_HCPs)+(\\.(?i)(csv))$'mdm-gateway\\batch-channel\\src\\main\\resources\\flows.yml" }, { "title": "Get Entity", "": "", "pageLink": "/display/GMDM/Get+Entity", "content": "DescriptionOperation getEntity of Manager fetches current state of from MongoDB e detailed process flow is shown below.Flow diagramGet EntityStepsClient sends HTTP request to endpoint. 
receives requests and handles authentication.If the authentication succeeds, the request is forwarded to Manager checks user permissions to call getEntity operation and the correctness of the request.If user's permissions are correct, proceeds with searching for the specified entity by Manager checks user profile configuration for operation to determine whether to return results based on MongoDB state or call Reltio r clients configured to use MongoDB – if the entity is found, then its status is checked. For entities with LOST_MERGE status parentEntityId attribute is used to fetch and return the parent Entity instead. This is in line with default Reltio behavior since Manager is supposed to mirror iggersTrigger actionComponentActionDefault timeREST callManager: GET /entity/{entityId}get specific objects from synchronous requests - realtimeDependent componentsComponentUsageManagerget Entities in systems" }, { "title": " events processing", "": "", "pageLink": "/pages/tion?pageId=", "content": "ContactsVendorContactMAP/DEG API lanc@This flow processes events from and systems distributed through . Processing is split into three stages. Since each stage is implemented as separate route and separated from other stages by persistent message store (), it is possible to turn each stage on/off separately using Console. subscriptionFirst processing stage is receiving data published by from queues, which is done as shown on diagram gure 5. First processing stageProcess steps description:Data changes in and are captured by and distributed via queues to MAP Channel components using queues with names:eh-out-reltio-gcp-update-eh-out-reltio-gcp-batch-update-eh-out-reltio-grv-update-Events pulled from queue are published to topic as a way of persisting them (allowing reprocessing) and to do event prioritizing and control throughput to Reltio. The following topics are used:-gw-internal-gcp-events-raw-gw-internal-grv-events-rawTo ensure correct ordering of messages in , there is a custom message key generated. It is a concatenation of market code and unique Contact/User id.Once the message is published to , it is confirmed in and deleted from the queue.Enrichment with dataFigure 6. Second processing stageSecond processing stage is focused on getting data from DEG system. The control flow is presented cess steps description:MAPChannel receives events from topic on which they were published in previous filters events based on country activation criteria – events coming from not activated countries are skipped. A list of active countries is controlled by configuration parameter, separately for each source (, GCP);Next, calls DEG REST services (INT2.1 or INT 2.2 depending on whether it is a or GCP event) to get detailed information about changed record. always returns current state of and records.Data from is published to topic (again, as a way of persisting them and separating processing stages). The topics used are:-gw-internal-gcp-events-deg-gw-internal-grv-events-degAgain, custom message key (which is a concatenation of market code and unique Contact/User idCreating entitiesLast processing stage involves mapping data to format and calling to create entities in Reltio. Process overview is shown gure 7. Third processing stageProcess steps description:MAPChannel receives events from topic on which they were published in previous filters events based on country activation criteria, events coming from not activated countries are skipped. 
A list of active countries is controlled by configuration parameter, separately for each source (, ) – this is exactly the same parameter as in previous maps data from / to :EMEA mappingGLOBAL mappingValidation status of mapped is checked – if it matches a configurable list of inactive statuses, then deleteCrosswalk operation is called on . As a result entity data originating from / is deleted from Reltio.Otherwise, calls REST operation /hcp on (INT4.1) to create or replace profile in Reltio. handles complexity of the update process in cessing events from multiple sources and prioritizationAs mentioned in previous sections, there are three different queues that are populated with events by . Each of them is processed by a separate , allowing for some flexibility and prioritizing one queue above others. This can be accomplished by altering consumer configuration found in application.yml file. Relevant section of mentioned file is shown below. Queue eh-out-reltio-gcp-batch-update-dev has 15 consumers (and therefore 15 processing threads), while two remaining queues have only 5 consumers each. This allows faster processing of GCP Batch e same principle applies to further stages of the processing, which use endpoints. Again, there is a configuration section dedicated to each of the internal topic that allows tuning the pace of processing. " }, { "title": "HUB UI User Guide", "": "", "pageLink": "/display/GMDM/HUB+UI+User+Guide", "content": "This page contains the complete user guide related to the HUB ease check the sub-pages to get details about the HUB UI and art with Main Page - HUB Status - main pageA handful of information that may be helpful when you are using HUB UI:UI URL:  (there is no need to know all URLs, click one, and in the top right corner you can easily switch between tenants).How to connect to and gain access to all features - Connect Guide(INTERNAL USAGE only by HUB Admins) role names and standards - Add new role and add users to the UIIf you want to add any new features to the HUB UI please send your suggestions to the HUB Team: " }, { "title": "HUB Admin", "": "", "pageLink": "/display/GMDM/HUB+Admin", "content": "All the subpages contain the user guide - how to use the hub admin gain access to the selected operation please read - UI Connect Guide" }, { "title": "1. Offset", "": "", "pageLink": "/display/GMDM/1.+Kafka+Offset", "content": "DescriptionThis tab is available to a user with the MODIFY_KAFKA_OFFSET management lows you to reset the offset for the selected topic and group. turn off your Consumer before executing this operation, it is not possible to manage the ACTIVE consumer groupRequired parametersGroup ID - the Kafka Consumer group that is connected to the topicTopic - The topic name that the user wants to manageDetailsThe offset parameter can take one of three values:earliest - reset the consumer group to the beginning of kafka topic - use this to read all events one more timelatest - reset the consumer group to the end of kafka topic - use this to skip all events and set consumer group at the end of the ift by - allows to move consumer group by specific ammount to events. negative number (e.g -1000) - shifts the consumer group by 1000 events to the left - means you will get 1000 events more  positive number (e.g. 
1000) - shifts the consumer group by 1000 events to the right - means you will get 1000 events less Use Case - you want to read 1000 rst reset offest to latests - LAG will be 0Then shift by (-1000) - LAG will be 1000 eventsdate - allows to set the consumer group in a specific date, usefull when you want to read events since specific day. View" }, { "title": "10. Jobs Manager", "": "", "pageLink": "/display/GMDM/10.+Jobs+Manager", "content": "DescriptionThis page is available to users that scheduled the you to check the current status of an asynchronous operation Required parametersJob Type  choose a JOB to check the statusDetailsThe page shows the statuses of jobs for each and select the business the table below all the jobs for all users in your group are displayed. You can track the jobs and download the reports the Refresh view button to refresh the pageClick the icon to download the ew" }, { "title": "2. Partials", "": "", "pageLink": "/display/GMDM/2.+Partials", "content": "DescriptionThis tab is available to the user with the role to manage the precallback service. It allows you to download a list of partials - these are events for which the need to change the has been detected and their sending to output topics has been suspended. The operation allows you to specify the limit of returned records and to sort them by the time of their B ADMINUsed only internally by ADMINSRequired parametersN/A - by default, you will get all partial timestamp instead - mark as true to get  date format instead of the duration of partial in minutesReturn epoch millis- mark as true to get EPOCH timestamp instead of date formatLimit - put a number to limit the number of resultsSort - change the sort orderView" }, { "title": "3. HUB Reconciliation", "": "", "pageLink": "/display/GMDM/3.+HUB+Reconciliation", "content": "DescriptionThis tab is available to the user with the reconciliation service management role - RECONCILE and RECONCILE_COMPLEXThe operation accepts a list of identifiers for which it is to be performed. It allows you to trigger a reconciliation task for a selected type of object:relationsentitiespartialsDivided into 2 sections:TOP - Simple JOBS - simple query where input is the entity jobs - complex query that schedules Airflow mple JOBS:Required parametersN/A - by default generate CHANGE events and skip entity when it is in REMOE/INACTIVE/LOST_MERGE state. In that case, we only push CHANGE events.  valueDescriptionforcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.push lost mergefalseReconcile event with statuspush inactivatedfalseReconcile event with INACTIVE statuspush removedfalseReconcile event with REMOVE statusViewComplex JOBS:Required parametersCountries - list countries for which you want to generate CHANGE events. DetailsSimpleParameterDefault valueDescriptionforcefalseSend an event to output topics even when a partial update is detected or the checksum is the untries  CA, MXSourcesN/Acrosswalks names for which you want to generate the events.Object TypeENTITYgenerates events from ENTITY or RELATION objectsEntity on object be for ENTITY: /DCRCan be for RELATION: input test in which you specify the relation e.g.: OtherHCOToHCOBatch limitN/Alimit the number of events - useful for testing purposesComplexParameterDefault valueDescriptionforcefalseSend an event to output topics even when a partial update is detectedEntity QueryN/ the MATCH query to get results and generate events. 
e.g.: { "status": "ACTIVE", "sources": "ONEKEY", "country": "gb" }Entities limitN/Alimit the number of events - useful for testing purposesRelation QueryN/ the MATCH query to get results and generate events. e.g.: { "status": "ACTIVE", "sources": "ONEKEY", "country": "gb" }Relation limitN/Alimit the number of events - useful for testing purposesView" }, { "title": "4. Events", "": "", "pageLink": "/display/GMDM/4.+Kafka+Republish+Events", "content": "DescriptionThis page is available to users with the publisher manager role -RESEND_KAFKA_EVENT and RESEND_KAFKA_EVENT_COMPLEXAllows you to resend events to output topics. It can be used in two modes: simple and e operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager mple modeRequired parametersCountries - list countries for which you want to generate CHANGE events. DetailsIn this mode, the user specifies values defined parameters:ParameterDefault CHANGE eventsnote:when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are , and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are , and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED e difference between these 2 modes is, in one we generate CHANGEs in the second CREATE events (depending if whether this is generation or not)CountriestrueList of countries for which the task will be performedSourcesfalseList of sources for which the task will be performedObject typetrueObject type for which operation will be performed, available values: Entity, RelationReconciliation targettrueOutput namelimittrueLimit of generated eventsmodification time fromfalseEvents with a modification date greater than this will be generatedmodification time tofalseEvents with a modification date less than this will be generatedViewComplex modeRequired parametersEntities query or  Relation queryDetails      In this mode, the user himself defines the query that will be used to generate eventsParameterRequiredDescriptionSelect moderepublish CHANGE eventsnote:when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are , and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are , and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED e difference between these 2 modes is, in one we generate CHANGEs in the second CREATE events (depending if whether this is generation or not)Entities querytrueResend entities queryEntities limitfalseResend entities limitRelation querytrueResend relations queryRelations limittrueResend relations limitReconciliation targettrueOutput nameView" }, { "title": "5. 
Reltio Reindex", "": "", "pageLink": "/display/GMDM/5.+Reltio+Reindex", "content": "DescriptionThis page is available to users with the reltio reindex role - REINDEX_ENTITIESAllows you to schedule Reltio Reindex JOB. It can be used in two modes: query and e operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager quired parametersSpecify Countries in query mode or file with entity uris in file mode.  ParameterDescriptionCountriesList of countries for which the task will be performedSourcesList of sources for which the task will be performedEntity typeObject type for which operation will be performed, available values: /DCRBatch limitAdd if you want to limit the reindex to the specific number - helpful with testing purposesfileInput fileFile format: CSV Encoding: headers: - details:HUB executes Reltio Reindex API with the following default parameters:ParameterAPI Parameter nameDefault detailed detailsEntity typeentityTypeN/AIf provided, the task restricts the reindexing scope to Entities of specified er can specify  the is search and the list will be generated. There is no need to pass this to Reltio API becouse we are using the generated URI listSkip entities countskipEntitiesCount0If provided, sets the number of Entities which are skipped during reindexing.-Entities limitentitiesLimitinfinityIf provided, sets the maximum number of Entities are reindexed-Updated sinceupdatedSinceN/ATimestamp in Unix format. If this parameter is provided, then only entities with greater or equal timestamp are reindexed. This is a good way to limit the reindexing to newer records.-Update entitiesupdateEntitiestrue If set to true, initiates update for Search, Match tables, History. If set to false, then no rematching, no history changes, only ES structures are updated.If set to true (default), in addition to refreshing the index, the task also updates history, match tables, and the analytics layer (RI). This ensures that all indexes and supporting structures are as up-to-date as possible. As explained above, however, triggering all these activities may decrease the overall performance level of the database system for business work, and overwhelm the event streaming channels. If set to false, the task updates data only. It does not perform rematching, or update history or analytics. These other activities can be performed at different times to spread out the performance impact.-Check crosswalk consistencycheckCrosswalksConsistencyfalseIf true, this will start a task to check if all crosswalks are unique before reindexing data. Please note, if entitiesLimit or distributed parameters have any value other than default, this parameter will be unavailableSpecify true to reindex each Entity, whether it has changed or not. This operation ensures that each Entity in the database is processed. Reltio does not recommend this option – it decreases the performance of the reindex task dramatically, and may overload the server, which will interfere with all database listentityUrisgenerated list of from or more entity URIs (separated by a comma) that you would like to process. For example: entities/, entities/.Reltio suggests to use 50-100K uris in one request, this is Reltio limitation. Our process splits to 100 files if required. 
Based on the input file size, one JOB from the HUB end may produce multiple Reltio tasks.UI generates the list of URIs from a mongo query, or we run the reindex with the input filesIgnore streaming eventsforceIgnoreInStreamingfalseIf set to true, no streaming events will be generated until after the reindex job has completed.-DistributeddistributedfalseIf set to true, the task runs in distributed mode, which is a good way to take advantage of a networked or clustered computing environment to spread the performance demands of reindexing over several nodes. -Job parts counttaskPartsCountN/A due to distributed=falseDefault value: 2The number of tasks which are created for distributed reindexing. Each task reindexes its own subset of Entities. Each task may be executed on a different node, so that all tasks can run in parallel. Recommended value: the number of nodes which can execute the tasks. Note: This parameter is used only in distributed mode (distributed=true); otherwise, it's ignored.-More details in docs:" }, { "title": "6. Merge/Unmerge entities", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionThis page is available to users with the merge/unmerge role - MERGE_UNMERGE_ENTITIESAllows you to schedule a JOB. It can be used in two modes: merge or unmerge. The operation will trigger a JOB with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Required parametersfile with profiles to be merged or unmerged in the selected formatDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch merge & unmergeView" }, { "title": "7. Update Identifiers", "": "", "pageLink": "/display/GMDM/7.+Update+Identifiers", "content": "DescriptionThis page is available to users with the update identifiers role - UPDATE_IDENTIFIERSAllows you to schedule an update identifiers JOB. The operation will trigger a JOB with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Required parametersfile with profiles to be updated in the selected formatDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch update identifiersView" }, { "title": "8. Clear Cache", "": "", "pageLink": "/display/GMDM/8.+Clear+Cache", "content": "DescriptionThis page is available to users with the clear cache role - CLEAR_CACHE_BATCHThe cache is related to the Direct Channel and ETL jobs:Docs:  and ETL BatchesAllows you to clear the checksum cache. It can be used in three modes: query, by_source or file. The operation will trigger a JOB with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Query modeRequired parametersBatch name  - specify a batch name for which you want to clear the cacheObject type - ENTITY or RELATION; type - e.g. 
configuration/relationTypes/Employment or configuration/entityTypes/HCPDetailsParameterDescriptionBatch nameSpecify a batch on which the clear cache will be triggeredObject type ENTITY or typeIf object type is ENTITY then e.g:configuration/entityTypes/HCOconfiguration/entityTypes/HCPIf object type is RELATION then e.g.:configuration/relationTypes/ContactAffiliationsconfiguration/relationTypes/EmploymentCountryAdd a country if required to limit the clear cache query  modeRequired parametersBatch name  - specify a batch name for which you want to clear the cacheSource - crosswalk type and valueDetailsSpecify a batch name and click add a source to specify new crosswalks that you want to remove from the modeRequired parametersBatch name  - specify a batch name for which you want to clear cachefile with crosswalks to be cleared in cache in the selected format for specified batchDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch clear data load " }, { "title": "9. Restore Raw Data", "": "", "pageLink": "/display/GMDM/9.+Restore+Raw+Data", "content": "DescriptionThis page is available to users with the restore data role - RESTOREThe raw data contains data send to MDM HUB:Docs: Restore raw dataAllows you to restore raw (source) data on selected environmentThe operation will trigger asynchronous job with selected ore entitiesRequired parametersSource environment - restore data from another environment eg from QA to DEV environment, the default is the currently logged in environmentEntity type  - restore data only for specified entity type: , , parametersCountries - restore data only for specified entity country, eq: , IE, - restore data only for specified entity source, eq: , ONEKEYDate Time - restore data created after specified date timeViewRestore relationsRequired parametersSource environment - restore data from another environment eg from QA to DEV environment, the default is the currently logged in environmentOptional parametersCountries - restore data only for specified entity country, eq: , IE, - restore data only for specified entity source, eq: , types- restore data only for specified relation type, eg: configuration/relationTypes/OtherHCOtoHCOAffiliationsDate - restore data created after specified date timeView" }, { "title": "HUB Status - main page", "": "", "pageLink": "/display//HUB+Status+-+main+page", "content": "DescriptionThe is divided into the following sections: links to Ingestion Services ConfigurationIngestion Services TesterHUB AdminHEADERShows the current tenant name, click to quickly change the tenant to a different ows the logged-in user name. Click to log out. FOOTERLink to User GuideLink to Connect GideLink to the whole HUB documentationLink to the Get Help pageCurrently deployed versionClick to get the details about the CHANGELOGon PROD - released versionon NON-PROD- snapshot version - Changelog contains unreleased changes that will be deployed in the upcoming release to dashboard is divided into the following sections:On this page you can check HUB processing status / kafka topics LAGs / availability / Snowflake DataMart refresh.  (related to the Direct Channel)API Availability  - status related to HUB (all exposed by HUB e.g. 
based on )Reltio READ operations performance and latency - for example, GET Entity operations (every operation that gets data from Reltio)Reltio WRITE operations performance and latency - for example,  operations (every operation that changes data in Reltio)Batches (related to the ETL Batch Channel)Currently running batches and duration of completed batches. Currently running batches may cause data load and impact event processing visible in the dashboards below (inbound and outbound)Event Processing Shows information about events that are being processed:Inbound - all updates made by HUB on profiles in the  based on the: (loading and processing events into HUB from ETL)Direct Channel processing:loading data to  (all updates on profiles on Reltio)Outbound - streaming channel processing (related to the  channel)shows the  based on the:Streaming channel - all events processing starting from the queue, events currently being processed by channel microservices.DataMart (related to the  Mart)The time when the last REGIONAL and  data marts were refreshed.Shows the number of events that are still being processed by HUB microservices and are not yet consumed by . " }, { "title": "Ingestion Services Configuration", "": "", "pageLink": "/display/GMDM/Ingestion+Services+Configuration", "content": "DescriptionThis page shows configuration related to the:Data checksSource Match CategorizationCleansing & FormattingAuto-FillsMinimum Viable Profile Check. Noise listsIdentifier noise listDuplicate identifier list.Choose a filter to switch between different entity types and use input boxes to filter results.Available filters:FilterDescriptionEntity - choose an entity type that you want to review and click to limit the result and review only selected rulesCountryType a country code to limit the number of rules related to the specific countrySourceType a source to limit the number of rules related to the specific sourceQueryOpen text field - helps to limit the number of results when searching for specific attributes. Example case - put "firstname" and click Search to get all rules that modify/use FirstNameEdit fieldComparison typeDateUse a combination of these 3 attributes to find rules created before or after a specific date. Or to get rules modified after a specific date. Click on the:Noise List ConfigID Noise ConfigDuplicate ID Configand get detailed information about current rules for the specific type.NOTE: remember to change entity type and click Search to view rules for different entity types. " }, { "title": "Ingestion Services Tester", "": "", "pageLink": "/display//Ingestion+Services+Tester", "content": "DescriptionThis site allows you to test the quality service. The user can select the input entity using the 'upload' button, paste the content of the entity into the editor or drag it. After clicking the 'test' button, the entity will be sent to the quality service. After processing, the result will appear in the right window. The user can choose two modes of presenting the result - the whole entity or the difference. In the second mode, only changes made by the quality service will be displayed. 
After clicking the 'validation result' button, a dialog box will be displayed with information on which rules were applied during the operation of the service for the selected entity.Quality service tester editorValidation summary Here you can check which rules were "triggered" and check the rule in  using the Rule .Search by text using an attribute or the "triggered" keyword to get all triggered rules. " }, { "title": " batch", "": "", "pageLink": "/display//Incremantal+batch", "content": "The diagram below presents the generic structure of the batch flow. Data sources will have their own instances of the flow configured:The flow consists of the following stages: Flow triggering is done by  based on a schedule suited to the source data delivery time.  The source data files are downloaded from a bucket managed by  and they are preprocessed. The preprocessing is done using standard Unix tools run by  as docker containers, and it is specific to particular source requirements. The goal of this stage is to prepare data for the mapping stage by cleaning and formatting. Source data are mapped to the Reltio data model using  – a custom component that uses flexible mapping rules expressed as metadata configuration. The component produces HCP/HCO/relation update events and publishes them to dedicated topics. Each flow uses its own topic to control access and prevent uncontrolled data modification in Reltio by a source (the topic name is mapped to client privileges in ). The mapper generates update events in an order that reflects Reltio object dependencies. First, main HCO events are generated, then child events, and at the end  events.  receives update events, validates them, calls the respective Reltio  to update profiles in , and sends acknowledgement events (ACK) to a response topic containing the statuses of processed update events. The events are processed in parallel. The number of threads depends on the number of consumers configured in the . The component receives ACKs and sends events for the next Reltio object, or, if all events are processed, it generates a report from the load. At the end of the process, the input files and the load report are copied to an archive location in .   is a component that converts source data into documents in the unified format required by the Reltio API. The component is flexible enough to support incremental batches as well as full snapshots of data. Handling a new type of data source is a matter of (in most cases) creating a new configuration that consists of stage and metadata parts. The first one defines details of so-called "stages", i.e.: , , etc. The latter contains all mapping rules defining how to transform source data into attribute path/value form. Once data are transformed into this form, it is easy to store, merge, or do any other operation (including document creation) in the same way for all types of sources. This simple idea makes for a very powerful tool that can be extended in many ways.  A stage is a logical group of steps that as a whole processes a single type of Reltio document, i.e. an entity.    At the beginning of each stage the component reads source data and generates attribute changes (events) and then stores them in an output file. It is worth noticing that many source data sets can be configured. Once the output file is produced it is sorted. The above logic can be called phase 1 of a stage (a minimal sketch follows below). Up to this point no database has been used. 
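A minimal sketch (plain Java, with hypothetical class and field names) of the phase 1 idea described above: flat source rows are turned into attribute path/value change events keyed by the target document, and the output is sorted by that key so that phase 2 can aggregate all changes belonging to the same Reltio document in one pass. This is only an illustration of the idea under those assumptions, not the actual mapper component.

```java
// Illustrative sketch of "phase 1": map flat rows to attribute change events,
// then sort by document key (stand-in for the external sort of the output file).
import java.util.*;

public class Phase1Sketch {

    // Hypothetical event shape: one attribute path/value change for one document.
    record AttributeChange(String documentKey, String attributePath, String value) {}

    // Maps one flat source row (column -> value) into attribute change events.
    static List<AttributeChange> mapRow(String documentKey, Map<String, String> row) {
        List<AttributeChange> changes = new ArrayList<>();
        row.forEach((column, value) ->
                changes.add(new AttributeChange(documentKey, "attributes/" + column, value)));
        return changes;
    }

    public static void main(String[] args) {
        List<AttributeChange> output = new ArrayList<>();
        output.addAll(mapRow("HCP-002", Map.of("FirstName", "Anna")));
        output.addAll(mapRow("HCP-001", Map.of("FirstName", "Jan", "LastName", "Kowalski")));

        // Sorting by document key groups all changes for the same document together.
        output.sort(Comparator.comparing(AttributeChange::documentKey));
        output.forEach(System.out::println);
    }
}
```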
In phase 2 the sorted file is read and events are aggregated into groups in such a way that each element of a group refers to the same Reltio document. Next, all lookups are resolved against a database, merged with the previous version of the document's attributes, and persisted. Then a Reltio document (JSON) is created and sent to . The stage is finished when all acks from the gateway are collected. Under the hood each stage is a sequence of jobs: a job (i.e.: the one for sorting a file) can be started only if its direct predecessor finished with success. Stages can be configured to run in parallel and to depend on each other. Load reports: at runtime  collects various types of data that give insight into the DAG state and load statistics. The report is written to disk each time the status of any job changes. The report consists of three panels: Summary, Metrics and DAG. The summary panel contains details of all jobs within the DAG that was created for the current execution (load). The panel shows relationships between jobs in the form of a graph. The metrics panel presents details of a load. Each metric key is prefixed by a stage name.  Document processed or Document sent: number of documents processed with success. In the latter case the document was additionally sent to .  Document not sent due to its deleted status: number of documents not processed because their status is marked as deleted (only for initDeletedLoadEnabled set to false, otherwise a document is processed anyway) Document not sent due to lack of delta: number of documents not processed because no change was discovered (only for deltaDetectionEnabled set to true, otherwise a document is processed anyway)  creation error: number of documents not sent due to a problem with building the  object. This may happen if source data are not complete, i.e.: only specializations without root object attributes were delivered Lookup error: number of documents not processed due to problems with finding referenced data in a database.  Record filtered out: number of records filtered out during the attribute change generation step. By default no record is filtered out; this may be changed via the mapping configuration. 
Invalid record error: number of invalid records " }, { "title": " offset modification", "": "", "pageLink": "/display/GMDM/Kafka+offset+modification", "content": "DescriptionThe REST interface exposed through the Manager component is used by clients to modify Kafka offsets.During the update, we will check access to the groupId and the specific topic.Diagram 1 presents the flow and Kafka communication during offset modification.The diagrams below present a sequence of steps in processing client calls.Flow diagramStepsThe client sends an HTTP request to the MDM Manager endpoint.Kong API Gateway receives the request and handles authentication.If the authentication succeeds, the request is forwarded to MDM Manager.MDM Manager checks user permissions to call the Kafka offset modification operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with the offset modification.Offset modification cases:latest: set to the latest offset.earliest: set to the earliest offset.to date: set to the offset based on the specified timestamp (used to retrieve the earliest offset whose timestamp is greater than or equal to the given timestamp in the corresponding partition; timestamp – in milliseconds).If you want to shift the offset by a specific number of messages, you can use the "shift" attribute and specify a positive or negative number of messages to shift by (the offset is calculated in memory based on the "offset + shift" properties).TriggersTrigger actionComponentActionDefault timeREST callManager: POST /kafka/offsetmodify kafka offsetAPI synchronous requests - realtimeRequestResponse{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "latest"}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 2        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "earliest"}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 0        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": ""}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 1        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "latest",    "partition": 4}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 4,            "offset": 2        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "",    "shift": 5}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 6        }    ]}Dependent componentsComponentUsageManagercreate/update Entities in MDM systemsAPI REST and secure access" }, { "title": "LOV read", "": "", "pageLink": "/display//LOV+read", "content": "The flow is triggered by a GET /lookup call. It retrieves LOV data from the HUB store. Process steps description:The client sends an HTTP request to the endpoint.  receives the request and handles authentication.If the authentication succeeds, the request is forwarded to MDM Manager.MDM Manager checks user permissions to call the getEntity operation and the correctness of the request.MDM Manager checks the user profile configuration for the lookup operation to determine whether to return results based on MongoDB state, or call Reltio.Request parameters are used to dynamically generate a query. 
This query is executed in the  method.Query results are returned to the client" }, { "title": "LOV update process (Nucleus)", "": "", "pageLink": "/pages/tion?pageId=", "content": "\nProcess steps description:\n\n\tNucleus Subscriber monitors the location where files are uploaded.\n\tWhen a new file is found, it is downloaded and processed. A single CCV zip file contains multiple *.exp files, which contain different parts of a LOV – header, description, references to values from external systems.\n\tEach *.exp file is processed line by line, with Dictionary change events generated for each line. These events are published to a topic from where the Event Publisher component receives them.\n\tAfter a file is processed completely, it is moved to the archive subtree in the bucket folder structure.\n\tWhen a Dictionary change event is received in the Publisher, the current state of the LOV is first fetched from the database. New data from the event is then merged with that state and the result is saved back in Mongo.\n\n\n\nAdditional remarks:\n\n\tCorrectness is ensured by the fact that the LOV id is used as the partitioning key, guaranteeing that events related to the same LOV are processed sequentially by the same thread.\n\tDictionary change events are considered internal to  – they are not forwarded to client systems subscribing to Entity change events.\n\n" }, { "title": "LOV update processes (Reltio)", "": "", "pageLink": "/pages/tion?pageId=", "content": "\n Figure 18. Updating LOVs from Reltio.LOV update processes are triggered by a timer at regular, configurable intervals. Their purpose is to synchronize dictionary values from Reltio. Below is the diagram outlining the whole process.\n\nProcess steps description:\n\n\tSynchronization processes are triggered at regular intervals.\n\tReltio Subscriber calls lookups to retrieve the first batch of LOV data.\n\tFetched data is inserted into the database. Existing records are updated.\n\n\n\nThe second and third steps are repeated in a loop until there is no more LOV data remaining." }, { "title": "MDM Admin Flows", "": "", "pageLink": "/display//MDM+Admin+Flows", "content": "" }, { "title": "Kafka Offset", "": "", "pageLink": "/display/GMDM/Kafka+Offset", "content": "Swagger: allows offset manipulation for a consumer group-topic pair. Offsets can be set to earliest/latest/timestamp, or adjusted (shifted) by a numeric value. An important point to mention is that in many cases offsets do not map one-to-one to messages - shifting the offset on a topic back by 100 may result in receiving 90 extra messages. This is due to compaction and retention - Kafka may mark an offset as removed, but it still remains for the sake of continuity.Example 1Environment is . User wants to consume the last 100 messages from his topic again. He is using topic "emea-dev-out-full-test-topic-1" and consumer-group "emea-dev-consumergroup-1".User has disabled the consumer - Kafka will not allow offset manipulation if the topic/consumer group is being used.He sent the request below:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "shiftBy": -100\n}\nUpon re-enabling the consumer, the last 100 events were re-consumed.Example 2User wants to consume all available messages from the topic again.User has disabled the consumer and sent the request below:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "offset": "earliest"\n}\nUpon re-enabling the consumer, all events from the topic were available for consumption again." 
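As an illustration of the "offset + shift" semantics described above, the sketch below shows how a consumer group's committed offsets could be shifted with the standard Kafka AdminClient. The bootstrap server is a placeholder, the group and topic names are reused from the examples above, and this is not the Manager's actual implementation - it only mirrors the idea that the new offset is computed in memory from the current offset, and that the consumer group must be inactive while offsets are changed.

```java
// Sketch only: shift a consumer group's committed offsets by a fixed amount.
// Broker address is a placeholder; not the Manager component's actual code.
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetShiftSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        long shift = -100;                       // negative = move back, positive = skip ahead
        String groupId = "emea-dev-consumergroup-1";
        String topic = "emea-dev-out-full-test-topic-1";

        try (AdminClient admin = AdminClient.create(props)) {
            // Current committed offsets per partition for the consumer group.
            Map<TopicPartition, OffsetAndMetadata> current =
                    admin.listConsumerGroupOffsets(groupId)
                         .partitionsToOffsetAndMetadata().get();

            // New offset = old offset + shift (clamped at 0), for the selected topic only.
            Map<TopicPartition, OffsetAndMetadata> shifted = current.entrySet().stream()
                    .filter(e -> e.getValue() != null && e.getKey().topic().equals(topic))
                    .collect(Collectors.toMap(Map.Entry::getKey,
                            e -> new OffsetAndMetadata(Math.max(0, e.getValue().offset() + shift))));

            // Commit the shifted offsets (the consumer must be disabled, as noted above).
            admin.alterConsumerGroupOffsets(groupId, shifted).all().get();
        }
    }
}
```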
}, { "title": "Partial List", "": "", "pageLink": "/display//Partial+List", "content": "Swagger: calls internal and returns a list of events stuck in partial state (more information here). List can be limited and sorted. Partial age can be displayed in one of below formats:HH:mm:ss.fff duration(default)YYYY-MM-DDThh:mm:ss.sss timestampepoch timestamp.ExampleUser has noticed an alert being triggered for GBLUS DEV, informing about events in partial state. To investigate the situation, he sends the following request:\nGET "entities/1sgqoyCR": " "entities/1eUqpXVe": " "entities/2ZlDTE2U": " "entities/2J1YiLW9": " "entities/1KgPnkhY": " "entities/1YpLnUIR": " realized, that it is difficult to quickly tell the age of each partial based on timestamp. He removed the absolute flag from "entities/1sgqoyCR": "27:26:56.228",\n "entities/1eUqpXVe": "218:29:05.406",\n "entities/2ZlDTE2U": "27:28:31.801",\n "entities/2J1YiLW9": "27:27:17.659",\n "entities/1KgPnkhY": "218:29:04.157",\n "entities/1YpLnUIR": "218:28:56.090"\n}\nThree partials have been stuck for . Other three partials - for over ." }, { "title": "Reconciliation", "": "", "pageLink": "/display/GMDM/Reconciliation", "content": "EntitiesSwagger: accepts a list of entity URIs. URIs not beginning with "entities/" are filtered out. For each URI it:Checks entityType () in MongoChecks status (ACTIVE/LOST_MERGE/INACTIVE/REMOVED) in MongoIf entity is , it generates a *_CHANGED event and sends it to the ${env}-internal-reltio-events to be enriched by the Entity EnricherIf entity has status other than ACTIVE:If entity has status LOST_MERGE and pushLostMerge parameter is true, generate a *_LOST_MERGE event.If entity has status INACTIVE and pushInactived parameter is true, generate a *_INACTIVATED event.If entity has status DELETED and pushRemoved parameter is true, generate a *_REMOVED event.*Additional parameter, force, may be used. When set to true, event will proceed to the even if rejected by .ExampleUser wants to reconcile 4 entities, which have different data in than in Reltio:entities/108dNvgB is is LOST_MERGEentities/10bH3nze is INACTIVEentities/1065AHEA is DELETEDrelations/101LIzcm was mistakenly added to the listBelow request is sent (, "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n "entities/1065AHEA": "false - Record with DELETED status in cache",\n "entities/10VLBsCl": "false - Record with status in cache",\n "entities/108dNvgB": "true",\n "relations/101LIzcm": "false"\n}\nOnly one event was generated: HCP_CHANGED for entities/er decided that he also need an HCP_LOST_MERGE event for entities/10VLBsCl. 
He sent the same request with pushLostMerge flag:\nPOST "entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n "entities/1065AHEA": "false - Record with DELETED status in cache",\n "entities/10VLBsCl": "true",\n "entities/108dNvgB": "true",\n "relations/101LIzcm": "false"\n}\nThis time, two events have been generated:HCP_CHANGED for for entities/10VLBsClRelationsSwagger: works the same way as for Entities, but this time URIs not beginning with "relations/" are filtered out.ExampleUser sent the same request as in previous example (", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false",\n "entities/1065AHEA": "false",\n "entities/10VLBsCl": "false",\n "entities/108dNvgB": "false",\n "relations/101LIzcm": "false - Record with DELETED status in cache"\n}\nFirst 4 URIs have been filtered out due to unexpected prefix. Event for relations/101LIzcm has not been generated, because this relation has DELETED status in me request has been sent with pushRemoved , "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false",\n "entities/1065AHEA": "false",\n "entities/10VLBsCl": "false",\n "entities/108dNvgB": "false",\n "relations/101LIzcm": "true"\n}\nA single event has been generated: for relations/rtialsSwagger: Reconciliation API works the same way that Entities Reconciliation does, but it automatically fetches the current list of entities stuck in partial state using also handles push and force flags. Additionally, partials can be filtered by age, using partialAge parameter with one of following values: NONE (default), MINUTE, HOUR, DAY.ExampleUser wants to reload entities stuck in partial state in . Prometheus alert informs him that there are plenty, but he remembers that there is currently an ongoing data load, which may cause many temporary er decides that he should use the partialAge parameter with value DAY, to only reload the entities which have been stuck for a longer while, and not generate unnecessary additional traffic.He sends the following -\nFlow fetches a full list of partials from and filters out the ones stuck for . It then executes the Entities Reconciliation with this list. Response:\n{\n "entities/1yHHKEZ7": "true",\n "entities/2EHamZr3": "true",\n "entities/2EyP0kYM": "true",\n "entities/21QU96KG": "true",\n "entities/2BmHQMCn": "true"\n}\ /HCO_CHANGED events have been generated as a result." }, { "title": "Resend Events", "": "", "pageLink": "/display/GMDM/Resend+Events", "content": " triggers an Airflow DAG. The DAG:Runs a query on MongoDB and generates a list of entity/relation ing Event Publisher's /resendLastEvent , it produces outbound events for received reconciliationTarget (user-sent).Resend - SimpleSwagger: using , user does not actually write the query - they instead fill in the quired parameters are:country filter,objectType (entity, relation)reconciliationTarget - this is configured for each routing rule in Event Publisher and, according to support practices, should be equal to topic name,event limit - number.Optionally, objects can be filtered by:source,modification time.ExampleEnvironment is . User wants to generate 300 entity events (HCP_CHANGED or HCO_CHANGED) for , source . 
His outbound topic is emea-dev-out-full-user-all.He sends the following request:\n{\n "countries": [\n "pl"\n ], "sources": [\n "CRMMI"\n ], "objectType": "ENTITY",\n "limit": 300,\n "reconciliationTarget": "emea-dev-out-full-user-all"\n}\nResponse:\n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:26:22.+00:00",\n "execution_date": "",\n "state": "queued"\n}\nA new Airflow DAG run was started. The dag_run_id field contains this run's unique ID. The request below can be sent to fetch the current status of this DAG run:\nGET \n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:26:22.+00:00",\n "execution_date": "",\n "state": "running"\n}\nAfter the  has finished, 300 HCP_CHANGED/HCO_CHANGED events will have been generated to the emea-dev-out-full-user-all topic.Resend - ComplexSwagger: In the Complex API, the user writes their own query. Required parameters are:either entitiesQuery or relationsQuery - depending on object type and collection to be queried,reconciliationTarget.Optionally, resulting objects can be limited (separate fields for each query).ExampleAs in the previous example, the user wants to generate 300 events for , source . The output topic is the same. This time, he sends the following request:\n{\n "entitiesQuery": "{ 'country': 'pl', 'sources': '' }",\n "relationsQuery": null,\n "reconciliationTarget": "emea-dev-out-full-user-all",\n "limitEntities": 300,\n "limitRelations": null\n}\nResponse:\n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:57:11.+00:00",\n "execution_date": "",\n "state": "queued"\n}\nResend - StatusSwagger: As described in previous examples, this returns the current status of a run. The request url parameter must be equal to dag_run_id. Possible statuses are:queuedsuccessrunningfailed" }, { "title": "Internals", "": "", "pageLink": "/display/GMDM/Internals", "content": "" }, { "title": "Archive", "": "", "pageLink": "/display/GMDM/Archive", "content": "" }, { "title": " performance tests", "": "", "pageLink": "/display//APM+performance+tests", "content": "Performance tests were executed using a tool placed on the CI/CD server.Test scenario:Create HCPSmall entityMedium size entityBig entityGet previously created entityTests were performed by 4 parallel users in a loop for 60 min.Test results:The decrease in component efficiency is not more than 3%. The increase in the load on the nodes is not more than 5% (within the measurement error)" }, { "title": "Client integration specifics", "": "", "pageLink": "/display/GMDM/Client+integration+specifics", "content": "" }, { "title": " integration with IQVIA", "": "", "pageLink": "/display//Saudi+Arabia+integration+with+IQVIA", "content": "The design below was confirmed with  and  during a meeting. 
Concept of such solution was earlier approved by urce: Lucid" }, { "title": "Components providers - , networking, etc...", "": "", "pageLink": "/pages/tion?pageId=", "content": "TenantProviderReltioAWS accounts IDsIAM usersIAM rolesS3 bucketsNetwork (subnets, VPCe)Application IDEMEA NPRODPDCS - Kubernetes in IoDCOMPANYAirflow () - 211782433747Snowflake () - 211782433747Reltio () -  211782433747AWS () - 330470878083Airflow ()- :aws::user/svc_atp_euw1_mdmhub_nprod_rw_s3Snowflake () - arn:aws::user/svc_atp_euw1_mdmhub_nprod_rw_s3Reltio () - arn:aws::user/svc_atp_euw1_mdmhub_nprod_rw_s3Node Instance Role ARN: arn:aws:iam:role/atp-mdmhub-nprod-emea-eks-worker-NodeInstanceRole-1OG6IFX6DO8B9Reltio Export IAM Role: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu--nprod-mdmhub Snowflake - pfe-atp-eu--nprod-mdmhubReltio - pfe-atp-eu--nprod-mdmhubVPCvpc-0c55bf38e97950aa5Subnetssubnet-067425933ced0e77f (●●●●●●●●●●●●●●)subnet-0e485098a41ac03ca (●●●●●●●●●●●●●●)SC3028977EMEA PRODAirflow () - 211782433747Snowflake () - 211782433747Reltio () -  211782433747AWS () - 330470878083S3 backup bucket - 604526422050Airflow () - arn:aws::user/ () - arn:aws::user/ () - arn:aws::user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3Node Instance Role ARN: arn:aws:iam:role/atp-mdmhub-prod-emea-eks-worker-n-NodeInstanceRole-11OT3ADBULAGCReltio Export IAM Role: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu--prod-mdmhubSnowflake - pfe-atp-eu--prod-mdmhubReltio - pfe-atp-eu--prod-mdmhubBackups - pfe-atp-eu--prod-mdmhub-backupemaasp202207120811VPCvpc-0c55bf38e97950aa5Subnetssubnet-067425933ced0e77f (●●●●●●●●●●●●●●)subnet-0e485098a41ac03ca (●●●●●●●●●●●●●●) in IoDCOMPANYAirflow () - 555316523483Snowflake ()-  555316523483Reltio () -  555316523483AWS () - 330470878083Airflow () - arn:aws:iam:user/ () - arn:aws:iam:user/ () - arn:aws:iam:user/ Instance Role ARN: arn:aws:iam:role/atp-mdmhub-nprod-amer-eks-worker-NodeInstanceRole-1X8MZ6QZQD5V7Reltio Export IAM Role: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubnprodamrasp100762Snowflake - gblmdmhubnprodamrasp100762Reltio - gblmdmhubnprodamrasp100762VPCvpc-0aedf14e7c9f0c024Subnetssubnet-0dec853f7c9e507dd (/18)subnet-07743203751be58b9 ( PRODAirflow () - 604526422050Snowflake ()- 604526422050Reltio () -  555316523483AWS () - 330470878083Backup bucket () - 604526422050Airflow () - arn:aws:iam:user/ () - arn:aws:iam:user/ () - arn:aws:iam:user/ Instance Role ARN: arn:aws:iam:role/atp-mdmhub-prod-amer-eks-worker-n-NodeInstanceRole-1KA6LWUDBA3OIReltio Export IAM Role: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubprodamrasp101478Snowflake - gblmdmhubprodamrasp101478Reltio - gblmdmhubprodamrasp101478Backups - pfe-atp-us--prod-mdmhub-backupamrasp202207120808VPCvpc-0aedf14e7c9f0c024Subnetssubnet-0dec853f7c9e507dd (/18)subnet-07743203751be58b9 (/18)SC3211836APAC in IoDCOMPANYAirflow () - 555316523483Snowflake () - 555316523483Reltio () -  555316523483AWS () - rflow - () - arn:aws:iam:user/svc_atp_aps1_mdmetl_nprod_rw_s32. Snowflake () - arn:aws:iam:user/svc_atp_aps1_mdmetl_nprod_rw_s33. 
Reltio () - arn:aws:iam:user/ Instance Role ARN: arn:aws:iam:role/atp-mdmhub-nprod-apac-eks-worker-NodeInstanceRole-1053BVM6D7I2LReltio Export IAM Role: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - globalmdmnprodaspasp202202171347Snowflake - globalmdmnprodaspasp202202171347Reltio - globalmdmnprodaspasp202202171347VPCvpc-0d4b6d3f77ac3a877Subnetssubnet-018f9a3c441b24c2b (●●●●●●●●●●●●●●●)subnet-06e1183e436d67f29 (●●●●●●●●●●●●●●●)SC3028977APAC PRODAirflow () -Snowflake () - Reltio -  555316523483AWS () - 330470878083S3 backup bucket rflow - () -  arn:aws:iam:user/svc_atp_aps1_mdmetl_prod_rw_s32. Snowflake () - arn:aws:iam:user/svc_atp_aps1_mdmetl_prod_rw_s33. Reltio () - arn:aws:iam:user/ Instance Role ARN: arn:aws:iam:role/atp-mdmhub-prod-apac-eks-worker-n-NodeInstanceRole-1NMGPUSYG7H8QReltio Export IAM Role: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - globalmdmprodaspasp202202171415Snowflake - globalmdmprodaspasp202202171415Reltio - globalmdmprodaspasp202202171415Backups - pfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502VPCvpc-0d4b6d3f77ac3a877Subnetssubnet-018f9a3c441b24c2b (●●●●●●●●●●●●●●●)subnet-06e1183e436d67f29 (●●●●●●●●●●●●●●●) in IoDCOMPANYAirflow () - 555316523483Snowflake () - 555316523483Reltio () -  555316523483AWS () - 330470878083Airflow () - arn:aws:iam:user/ () - arn:aws:iam:user/ () - arn:aws:iam:user/: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubnprodamrasp100762Snowflake - gblmdmhubnprodamrasp100762Reltio - gblmdmhubnprodamrasp100762Same as NPRODSC3028977GBLUS PRODAirflow () - 604526422050Snowflake - 604526422050Reltio () -   () - 330470878083S3 backup bucket - 604526422050Airflow () - arn:aws:iam:user/ () - arn:aws:iam:user/ () - arn:aws:iam:user/: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubprodamrasp101478Snowflake - gblmdmhubprodamrasp101478Reltio - gblmdmhubprodamrasp101478Backups - pfe-atp-us--prod-mdmhub-backupamrasp202207120808Same as   PRODSC3211836GBL in () -Snowflake () - 211782433747Reltio () -   () - rflow () - arn:aws::user/svc_atp_euw1_mdmhub_nprod_rw_s32. Snowflake () - arn:aws::user/svc_atp_euw1_mdmhub_nprod_rw_s33. Reltio () - arn:aws::user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3Reltio Export IAM Role: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu--nprod-mdmhubSnowflake - pfe-atp-eu--nprod-mdmhubReltio - pfe-atp-eu--nprod-mdmhubSame as () -Snowflake () - 211782433747Reltio () -   () - 330470878083S3 backup bucket - rflow () - arn:aws::user/svc_mdm_project_rw_s32. Snowflake () - arn:aws::user/svc_mdm_project_rw_s33. 
Reltio () - arn:aws::user/svc_mdm_project_rw_s3 ???Reltio Export IAM Role: arn:aws:iam:role/-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-baiaes-eu--projectSnowflake - pfe-baiaes-eu--projectReltio - pfe-baiaes-eu--projectBackups - pfe-atp-eu--prod-mdmhub-backupemaasp202207120811Same as PRODSC3211836FLEX NPRODCloudBroker - EC2IQVIAAirflow () -Reltio () - Airflow - mdmnprodamrasp22124Reltio - mdmnprodamrasp22124FLEX PRODAirflow () - Reltio () - Airflow - mdmprodamrasp42095Reltio - mdmprodamrasp42095ProxyRapid - EC2N/AAWS EC2 - 432817204314MonitoringCloudBroker - EC2N/AAWS EC2 - 604526422050AWS - 604526422050Thanos () - arn:aws:iam:user/ Role: arn:aws:iam:role/-ATP-MDMHUB-MONITORING-BACKUP-ROLE-01Grafana Backup - -prod-mdmhub-grafanaamrasp20240315101601Thanos - pfe-atp-us--prod-mdmhub-monitoringamrasp20240208135314Jenkins buildFLEX : vpc-12aa056a" }, { "title": "Configuration", "": "", "pageLink": "/display/GMDM/Configuration", "content": "\nAll runtime configuration is stored in repository and changes are monitored using GIT history. Sensitive data is encrypted by Ansible Vault using AES256 algorithm and decrypted only during automatic deployment managed by process in . " }, { "title": "●●●●●●●●●●●●● [", "": "", "pageLink": "/pages/tion?pageId=", "content": "\nConfiguration for all environments is placed in mdm-reltio-handler-env/inventory branch.\nAvailable environments:\n\n\tdev/qa/stage/uat/test\n\t\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\n\t\n\tprod\n\t\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\n\t\n\n\n\nIn order to separate variables for each service, we created the following groups:\n\n\t[gw-services]\n\t[hub-services]\n\t[kong]\n\t[mongo]\n\t[kafka]\n\n" }, { "title": "", "": "", "pageLink": "/display/GMDM/Kafka", "content": "\nKafka deployment – this procedure is created to deploy /zookeeper on environments other than PROD.\n\tinstall_hub_broker_cluster.yml – this procedure is created to deploy /zookeeper on PROD environment.\n\n\n\nKafka variables\nProduction cluster requires the following variables:\n\n\tGlobally:\n\t\n\t\thub_broker_truststore_file/password – kafka server truststore file name and password\n\t\thub_broker_keystore_file/password – kafka keystore file name and password\n\t\thub_broker_admin_user/password – admin user name and password\n\t\thub_broker_jaas_config_file – file with Server auth(kafka) and Client auth(zookeeper)\n\t\tkafka_environment_KAFKA_ZOOKEEPER_CONNECT – list of zookeeper services required by kafka to enable cluster connection.\n\t\tzoo_users – zookeeper is deployed with server auth, this map contains admin user and password.\n\t\tzoo_servers - list of zookeeper servers, each host has to have unique id [1/2/3]\n\t\tkafka_extra_hosts – list of hosts, these lines will be added to /etc/hosts file on each kafka docker container\n\t\n\t\n\tVariables per host – unique – zookeeper server id\n\t\tkafka_environment_KAFKA_BROKER_ID – broker id\n\t\tkafka_environment_KAFKA_ADVERTISED_PORT – advertised port\n\t\tkafka_environment_KAFKA_ADVERTISED_HOST_NAME – host name\n\t\tfirewalld_ports – kafka ports to open in firewalld service.\n\t\n\t\n\tDevelopment instance requires the following variables:\n\t\n\t\thub_broker_truststore_file/password – kafka server truststore file name and password\n\t\thub_broker_keystore_file/password – kafka keystore file name and password\n\t\thub_broker_admin_user/password – admin user name and password\n\t\thub_broker_jaas_config_file – file with Server auth(kafka) and Client 
auth(zookeeper)\n\t\n\t\n\tAdditionally:\n\t\n\t\ttopics.yml – definitions of – definitions of users\n\t\n\t\n\n" }, { "title": "", "": "", "pageLink": "/display/", "content": "\nKong deployment procedures\n\n\tinstall_mdmgw_gateway.yml – this procedure is created to deploy / on all available environments.\n\tupdate_kong_api.yml – this procedure is created to manage kong api. Available components which can be managed are:\n\t\n\t\tconsumers\n\t\tapis\n\t\tcertificates\n\t\n\t\n\n\n\nKong variables\nCassandra memory parameters are controlled by:\n\n\tkong_database_max_heap_size: "512M" – overwrites Xms and parameters.\n\tkong_database_heap_newsize: "400M" – overwrites required variables:\n\n\tinstall_base_dir – docker-compose.yml file deployment directory\n\tkong_cluster_main_host – this parameter defines if and will be deployed in cluster mode. This parameter is declared on PROD environment and contains main CASSANDRA_BROADCAST_ADDRESS. On DEV environment this parameter is not defined.\n\n\n\nTo manage api through deployment procedure these maps are needed:\n\n\tkong_apis – defines apis. It is a list of apis with required parameters:\n\t\n\t\tkong_api_obj_name – api name (e.g. "gw-api")\n\t\tkong_api_obj_upstream_url – api upstream url (e.g. http://mdmgw_mdm-manager_1:8081)\n\t\tkong_api_obj_uris – (eg. /gw-api)\n\t\tkong_api_obj_methods – api methods (e.g. GET/POST/PATH)\n\t\tkong_api_obj_plugins (required plugin is key-auth)\n\t\n\t\n\tkong_consumers – defines consumers. It is a list of consumers with required parameters:\n\t\n\t\tkong_consumer_obj_username – user name\n\t\tkong_consumer_obj_auth_creds – required credentials "key-auth"\n\t\t\n\t\t\tkey – dedicated key for user\n\t\t\n\t\t\n\t\n\t\n\t[optional] kong_certificates - defines kong certificates to enable ssl communication. It is a list of snis with key and cert files:\n\t\n\t\tkong_certificate_obj_snis – list of available snis\n\t\tkong_certificate_obj_cert – certificate file\n\t\tkong_certificate_obj_key – server key file\n\t\n\t\n\n" }, { "title": "Mongo", "": "", "pageLink": "/display/GMDM/Mongo", "content": "\nMongo deployment procedures\n\n\tinstall_hub_db.yml – this procedure is created to deploy mongo on environments other than PROD.\n\tinstall_hub_mongo_cluster.yml – this procedure is created to deploy mongo cluster on PROD environment\n\n\n\nMongo variables\nProduction mongo cluster requires the following variables declared in /inventory/prod/group_vars/ all/all.yml file:\n\n\tmdm_mongo_base_dir – mongo base directory where shards/configs/routers will be deployed.\n\tmongo_first_run [True/False] - switch this variable to True when there is the first deployment of mongo cluster.\n\trecreate_services [True/False] - if True all docker-compose files will be started with "up -d" parameter, which means all mongo services will be recreated. Run with True when there is a need to add new shard instance.\n\tregenerate_firewalld_config [True/False] - if True, all ports defined in "mongo_cluster" map will be added to firewall – describes whole mongo cluster. 
On production environment there are 3 mongo - each instance can define mongo shards/configs/routers with required variables: [id, instance_name, port, host]\n\t\tmongo_server_02\n\t\tmongo_server_03\n\t\n\t\n\n\n\nDevelopment mongo instance requires the following variables declared in /inventory/dev/group_vars/all/all.yml file:\n\n\thub_db_install_dir – mongo base directory\n\thub_db_name – mongo db db name\n\thub_db_user – mongo db user " }, { "title": "Services - hub_gateway", "": "", "pageLink": "/display//Services+-+hub_gateway", "content": "\nServices deployment procedures\nHub deployment procedure: \n\n\tinstall_mdmhub_services.yml\n\n\n\n \nGateway deployment procedure:\n\n\tinstall_mdmgw_services.yml\n\n\n\nServices variables\n[gw-services] - this group contains variables for map channel and manager in the following two maps:\n\n\tmap_channel\n\tmdm_manager\n\n\n\n[hub-services] - this group contains variables for hub api, reltio subscriber and event publisher in the following maps:\n\n\tevent_publisher\n\thub_api\n\treltio_subscriber\n\n\n\nIt is possible to redefine JVM_OPTS or any other environment using these maps:\n\n\tmdm_manager_environments\n\t\n\t\te.g. "JVM_OPTS=-server -Xms128m -Xmx512m nfi g=/opt//config/kafka_nf"\n\t\n\t\n\tmap_channel_environments\n\tconsole_environments\n\n" }, { "title": "Data storage", "": "", "pageLink": "/display//Data+storage", "content": "\nPublishing Hub among other functions serves as data store, caching the latest state of each Entity fetched from . This allows clients to take advantage of increased performance and high availability provided by MongoDB database. " }, { "title": "Data structures", "": "", "pageLink": "/display//Data+structures", "content": "\n Figure 21. Structure of Publishing HUB's databasesThe following diagram shows the structure of DB collections used by Publishing Hub.\n\nDetailed description:\n\n\tentityHistory – collection storing (, ), along with some metadata for easier lookup/processing.\n\t\n\t\t_id – unique id of an Entity. Publishing Hub is reusing attribute "uri" from model (e.g. "entities/ipa1iKq")\n\t\tcountry – two-letter country code, in lowercase (e.g. "de")\n\t\tcreationDate – timestamp of record creation (i.e. insertion to Mongo)\n\t\tentity – the Reltio Entity\n\t\tentityType – type of the entity (e.g. "configuration/entityTypes/HCO")\n\t\tlastModificationDate – timestamp of last update of the record.\n\t\tmergedEntitiesUris – identifiers of child (merged) entities (for entities that "won" merge event in Reltio)\n\t\tparentEntityId – identifier of the parent entity (for entities in "" status)\n\t\tsources – array of source system codes (e.g. "OK", "", "FACE")\n\t\tstatus – current status of the entity (one of: ACTIVE, DELETED, – name of the source MDM system, currently one of "RELTIO", "NUCLEUS"\n\t\n\t\n\tLookupValues – collection storing dictionary data from Reltio.\n\t\n\t\t_id – unique id of the record. This is generated as concatenation of "type" and "code" attributes from Reltio\n\t\tupdatedOn – timestamp of last update of the record in Mongo\n\t\tvalueUpdatedOn – timestamp of last update of LOV in Reltio (values in are updated every 24h, whether or not they are actually changed in , so this value represents the timestamp of actual data change, not timestamp of refresh action)\n\t\ttype – type, as defined by , e.g. "configuration/lookupTypes/ IMS_LKUP_SPECIALTY"\n\t\tcode – code, as defined by , e.g. 
SPEC\n\t\tcountries – list of countries this is valid for\n\t\tmdmSource – name of the source MDM system, currently one of "RELTIO", "NUCLEUS"\n\t\tvalue – (full JSON, in Reltio-defined format – even for  data)\n\t\n\t\n\n\n\nINSERT vs UPSERT\nTo speed up database operations,  takes advantage of the MongoDB "upsert" flag of the collection.update() method. This allows the application to skip the potentially costly query checking if the entity already exists in the database. Instead, the update operation is called right away, ceding the responsibility of checking for entity existence to MongoDB's internal mechanisms." }, { "title": "Indexes", "": "", "pageLink": "/display//Indexes", "content": "\nAll of the fields in database collections are indexed, except complex documents (i.e. "entity" in entityHistory, "value" in ). Queries that do not use indexes (for example querying arbitrarily nested attributes of "entity") might suffer from bad performance. " }, { "title": ", , ", "": "", "pageLink": "/display//DoR%2C+AC%2C+DoD", "content": "" }, { "title": "DoD - template", "": "", "pageLink": "/display//DoD+-+template", "content": "Requirements of a task that need to be met before closing:Ticket deployed to dev and qa environmentsChange is documentedAC are met." }, { "title": "DoR - template", "": "", "pageLink": "/display//DoR+-+template", "content": "Requirements of a task that need to be met before pushing to the :Fields in the ticket are filledFix versionEpic value is known and included in the ticket descriptionIf there is a deadline, it is understood and included in the ticket descriptionAcceptance Criteria are includedA ticket is estimated in Story Points." }, { "title": "Exponential Back Off", "": "", "pageLink": "/display//Exponential+Back+Off", "content": "A mechanism that increases the back off period for each retry attempt. When the interval has reached the max interval, it is no longer increased. Retrying stops once the max elapsed time has been reached.Example: The default initial interval is 2000L ms, the default multiplier is 1.5, and the default max interval is 30000L ms. For 10 attempts the sequence will be as follows (request / back off ms): 1 / 2000, 2 / 3000, 3 / 4500, 4 / 6750, 5 / 10125, 6 / 15187, 7 / 22780, 8 / 30000, 9 / 30000, 10 / 30000.Note that the default max elapsed time is MAX_VALUE. Use setMaxElapsedTime(long) to limit the maximum length of time that an instance should accumulate before returning STOP. Implementation based on . A short sketch of this sequence is shown below." }, { "title": "HUB UI", "": "", "pageLink": "/display/", "content": "DRAFT:TODO:  dashboards through iframe - " }, { "title": "Integration Tests", "": "", "pageLink": "/display/GMDM/Integration+Tests", "content": "Integration tests are divided into different categories. 
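For the Exponential Back Off section above, a minimal plain-Java sketch that reproduces the quoted sequence is shown here; the values (initial interval 2000 ms, multiplier 1.5, max interval 30000 ms) are taken from the text, and this is only an illustration of the formula, not the HUB's actual retry code.

```java
// Illustrative sketch of the exponential back-off sequence described above.
// Not the HUB's actual retry implementation.
public class BackOffSketch {
    public static void main(String[] args) {
        long interval = 2000L;        // default initial interval (ms)
        double multiplier = 1.5;      // default multiplier
        long maxInterval = 30000L;    // default max interval (cap, ms)

        for (int attempt = 1; attempt <= 10; attempt++) {
            System.out.printf("request %d -> back off %d ms%n", attempt, interval);
            // Next interval grows by the multiplier but never exceeds the cap.
            interval = Math.min((long) (interval * multiplier), maxInterval);
        }
    }
}
```

Running this prints exactly the table above: 2000, 3000, 4500, 6750, 10125, 15187, 22780, then 30000 for the remaining attempts once the cap is reached.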
These categories are used for different IT configuration: " }, { "title": "Common Integration Test", "": "", "pageLink": "/display//Common+Integration+Test", "content": "Test classTest caseFlowCommonGetEntityTeststestGetEntityByUriCreate HCPGet HCP by URI and validatetestSearchEntityCreate HCPGet entities using filter (get by country code, first name and last name)Validate if entity existstestGetEntityByCrosswalkCreate HCPGet entity by corsswalk and validate if existstestGetEntitiesByUrisCreate HCPGet entity by uris andvalidate if existstestGetEntityCountryCreate HCPGet entity by country and validate if existstestGetEntityCountryOvCreate HCPAdd new countrySend update requestGet and validateMake ignored = true and ov = false on all countriesSend update requestGet and validateCreateHCPTestcreateHCPTestCreate HCPGet entity and validateCreateRelationTestcreateRelationTestCreate between and and validateDeleteCrosswalkTestdeleteCrosswalkTestCreate HCODelete crosswalk and validate status responseUpdateHCOTestupdateHCPTestCreate HCOGet created nameValidate response statusGet and validate if it is updatedUpdateHCPUsingReltioContributorProviderupdateHCPUsingReltioContributorProviderTrueAndDataProviderFalseCreate HCPGet created and validateUpdate existing corosswalk and set contributorProvider to falseAdd new contributor provider crosswalkUpdate first nameSend update requestValidate if it is updatedPublishingEventTesttest1_hcpCreate HCPWait for HCP_CREATED eventUpdate first nameWait for HCP_CHANGED eventGet entity and validatetest2_hcpCreate HCPWait for HCP_CREATED eventUpdate 's last nameWait for crosswalkWait for HCP_REMOVED eventtest3_hcoCreate HCOWait for HCO_CREATED eventUpdate 's nameWait for HCO_CHANGED eventDelete crosswalkWait for HCO_REMOVED event" }, { "title": "Integration Test For Iqvia Model", "": "", "pageLink": "/display//Integration+Test+For+Iqvia+Model", "content": "Test classTest caseFlowCRUDHCOAsynctestSend to topicWait for created event and validateUpdate 's name and send to topicWait for updated event and validateRemove entitiesCRUDHCOAsyncComplextestCreate Source HCOSend with Source HCO to for created event and validateCreate HCO - set Source HCO as HCOSend with HCOWait for event and validateRemove entitiesCRUDHCPAsynctestSend to topicWait for created event and validateUpdate HCP's Last Name and send to topicWait for updated event and validateRemove entitiesCRUDPostBulkAsynctestHCOSend EntitiesUpdateRequest with multiple entities to topicWait for entities-create event with specific correlactionId headerValidate message payload and check if all entities are createdRemove entitiestestHCPSend EntitiesUpdateRequest with multiple entities to topicWait for entities-create event with specific correlactionId headerValidate message payload and check if all entities are createdRemove entitiestestHCPRejectedSend EntitiesUpdateRequest with multiple incorrect entities to topicWait for event with specific correlactionId headerCheck if all entities have and status is failedCreateRelationAsynctestCreateCreate HCPSend with Relation Activity between and to topicWait for event with specific correlactionId header and validate statustestCreateRelationsCreate and validate responseCreate HCP_3 and validate responseCreate HCP_4 and validate responseCreate between HCP_1 → , , HCP_3 → , HCP_4 → HCOSend event with all relations to topicWait for event with specific correlactionId header and validate statusRemove entitiestestCraeteWithAddressCopyCreate between and HCOSend event to topic with trueWait for 
event with specific correlactionId header and validate status is createdGet and updated - check if address exists and contains attributeRemove between and with PrimaryAffiliationIndicator = trueSend event to topicWait for event with specific correlactionId header and validate status is createdUpdate Relation - set delete date on nowSend event to topicWait for event with specific correlactionId header and validate status is deletedRemove entitiesHCOAsyncErrorsTestCasetestSend to topic - create with incorrect valuesWait for event with specific correlactionId header and validate status is failedHCPAsyncErrorsTestCasetestSend HCPRequest to topic - create without permissionsWait for event with specific correlactionId header and validate status is and validate status createdCreate HCP with affiliatedHCO and validate status createdGet and check if Workplace relation existsGet existing - update attribute and validate if status is and validate if list size is 1Add Country attribute to RelationSend RelationRequest event to topic with updated RelationWait for event with specific correlactionId header and validate status is and check if and Country existAdd attribute to RelationSend RelationRequest event to topic with updated RelationWait for event with specific correlactionId header and validate status is and check if , Country and    existRemove entitiesBundlingTesttestSend multiple to topic - create HCOsFor each request wait for event with status created and collect 's uriCheck if number of requests equals number of recived eventsSend multiple to topic - create HCPsFor each request wait for event with status created and collect 's uriCheck if number of requests equals number of recived eventsSend multiple to topic - create RelationFor each request wait for event with status created and collect 's uriCheck if number of requests equals number of recived eventsSet delete date on now for every HCOSend multiple to topicFor each request wait for event with status deletedSet delete date on now for every HCPSend multiple to topicFor each request wait for event with status deletedDCRResponseTestcreateAndAcceptDCRThenTryToAcceptAgainTestCreate HCOSet as 's with as if is createdAccept and check if response is again and check if response is BAD_REQUESTRemove entitiescreateAndPartialAcceptThenConfirmNoLoopCreate HCOSet as 's with as if is createdPartial accept and check if response is HCP entity and check if attribute is "partialValidated"Check if is not created - confirms that creation does not loopRemove entitiescreateAndRejectDCRThenTryToRejectAgainTestCreate HCOSet as 's with as if is createdReject and check if response is again and check if response is BAD_REQUESTRemove entitiesDeriveHCPAddressesTestCasederivedHCPAddressesTestCreate HCP and validate responseCreate with and validate responseCreate with 2 Addresses and validate responseCreate "Activity" Relation HCP → and validate responseCreate "Has Health Care Role" Relation HCP → and validate responseGet and check if contains 's and validate responseGet and check if contains updated 's and validate responseGet and check if contains 's Addresses (without removed)Remove "Has Health Care Role" Relation HCP → and validate responseGet and check if Addresses are removedRemove entitiesEVRDCRUpdateHCPLUDTestCasetestCreate as 's with as requests and check that was createdUpdate HCPValidationStatus = notvalidatedchange existing crosswalk - set DataProvider = trueadd crosswalk - EVR set ContributorProvider = trueadd another EVR crosswalk set = trueSend update 
request and vadiate responseUpdate (partial update)ValidationStatus = validatedRemove First and Last NameRemove crosswalksSend update request and validate responseGet and validateCheck if the (updateDate/singleAttributeUpdateDate) were refreshedRemove crosswalksExistingDepartmentAndHCPTestCasecreateHCP_HCPNotInPendingStatus_NoDCRCreate with as MainHCOCreate HCP with affiliated () and = validatedGet HCP and validate attributesGet Change requests and check if the list is emptyRemove as MainHCOCreate HCP with affiliated () and = pendingGet HCP and validate attributesGet Change requests and check if there is one NEW_HCP change requestRemove with as MainHCOCreate Department2 with as MainHCOCreate HCP with affiliated (Department1 ) and = pendingGet HCP and validate attributeshas only one (Department1 HCO)Update with affiliated (Department2 ) and = pendingGet HCP and validate attributeshas only one (Department2 HCO)Get Change requests and check if there is one NEW_HCP change requestRemove crosswalksNewHCODCRTestCasescreateHCP_DepartmentDoesNotExist_HCOL1DCRCreate with as MainHCOCreate HCP with affiliated (Department HCO)Get HCP and validate attributesValidate and MainWorkplaceGet Change requests and check if the list is emptyRemove crosswalkscreateHCP_HospitalAndDepartmentDoesNotExist_HCOL1DCRCreate (not created yet) as MainHCOCreate HCP with affiliated () and = pendingGet HCP and validate attributesGet and validate attributesGet Change requests and check if there is one NEW_HCO_L2 change requestRemove crosswalksNewHCPDCRTestCasecreateHCPTestCreate with affiliated (Department HCO)Get HCP and validate Workplace and MainWorkplaceRemove crosswalkscreateHCPPendingTestCreate with affiliated () and = pendingValidate responseValidate if is createdRemove crosswalkscreateHCPNotValidatedTestCreate with affiliated () and = notvalidatedValidate HCP responseValidate if is createdRemove crosswalkscreateHCPNotValidatedMergedIntoNotValidatedTestCreate HCP_1 with = notvalidated ( winner HCP)Create with affiliated () and = notvalidatedValidate HCP responseValidate if is not createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedTestCreate HCP_1 with = notvalidated ( winner HCP)Create with affiliated () and = pendingValidate responseValidate if is createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedWithAnotherGRVNotValidatedTestCreate HCP_1 with = notvalidated ( winner HCP)Create HCP_2 with = notvalidated (Merge loser HCP)Create HCP_3 with affiliated () and = pendingValidate if is createdRemove crosswalkscreateHCPNotValidatedMergedIntoNotValidatedWithAnotherGRVNotValidatedTestCreate HCP_1 with = notvalidated ( winner HCP)Create HCP_2 with = notvalidated (Merge loser HCP)Create HCP_3 with affiliated () and = notvalidatedValidate if is not createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedWithGRVAsUpdateTestCreate HCP_1 with = notvalidated ( winner HCP)Create HCP_2 with = notvalidated (Merge loser HCP)Create HCP_3 with affiliated () and = notvalidatedGet and validate corsswalk count == 3Validate if is not createdUpdate HCP_3 set code = pendingValidate if is createdRemove crosswalksPfDataChangeRequestLiveCycleTesttestCreate with parent HCP with affiliated () and = pendingCheck if existCheck if PfDataChangeRequest existAccpet that == validatedCheck that PfDataChangeRequest is closedRemove crosswalksResponseInfoTestTestCreate with parent HCP_1 with affiliated () and = pendingCreate HCP_2 with affiliated () and = pendingCheck that DCR_1 existCheck that DCR_2 existCheck that 
PfDataChangeRequest existRespond for DCR_1 - update with merged urischange = validatedGet and check if is validatedCheck if PfDataChangeRequest is closed and validate ResponseInfoRespond for DCR_2 - accept and validate messageCheck if PfDataChangeRequest is closed and validate ResponseInfoCheck that DCR_2 does not existRemove crosswalksRevalidateNewHCPDCRTestCasetestCreate and validate responseCreate with and validate responseCreate with affiliated (), = pending and validate responseCheck that existCheck that PfDataChangeRequest existRespond to - acceptCheck that has = validatedSend revalidate event to topicCheck that new was createdChecking that previous PfDataChangeRequest has =acceptCheck that new PfDataChangeRequest existCheck that has = pendingRemove crosswalksStandarNonExistingDepartmentTestCasecreateNewHCPTestCreate HCP with a new affiliated ( as HCP and validate attributes ( and MainWorkplace)UpdateHCPPhonestestCreate HCP and validate responseUpdate Phone and send patchHCP requestValidate response status is crosswalksGetEntityTeststestGetEntityByUriCreate HCP with = validated and affiliatedHcos (HCO_1, by uri and validate attributesRemove crosswalkstestSearchEntityCreate HCP with = validated and affiliatedHcos (HCO_1, entites using filter - HCP by country, first name and last nameValidate if entity existsRemove crosswalkstestSearchEntityWithoutCountryFilterCreate HCP with = validated and affiliatedHcos (HCO_1, by corsswalk HCO_1 and check if existsGet by corsswalk HCO_2 and check if existsGet entites using by country and (HCO_1 name or HCO_2 name)Validate if both existsRemove crosswalkstestGetEntityByCrosswalkCreate HCP with = validated and affiliatedHcos (HCO_1, by crosswalkValidate if HCP existsRemove crosswalkstestGetEntitiesByUrisCreate with = validated and affiliatedHcos (HCO_1, by if existsRemove crosswalkstestGetEntityCountryCreate HCP with = validated and affiliatedHcos (HCO_1, countryValidate reponseRemove crosswalkstestGetEntityCountryOvCreate HCP with = validated, affiliatedHcos (HCO_1, HCO_2) and Country = existing crosswalk - set ContributorProvider = trueadd new crosswalk as ignored = trueupdate Country - set to and validatecheck value == BR-Brazilcheck ov == trueUpdate HCP - make ignored=true, ov=false on all countriesGet and validatelookupCode == BRRemove crosswalksMergeUnmergeHCPTestcreateHCP1andHCP2_checkMerge_checkUnmerge_APICreate HCP_1 and validate responseCreate HCP_2 and validate responseMerge HCP_1 with HCP_2Get HCP_1 after merge and validate attributesGet after merge and validate attributesUnmerge HCP_1 and HCP_2Get HCP_1 after unmerge and validate attributesGet HCP_2 after unmerge and validate attributesUnmerge HCP_1 and - validate if response code is BAD_REQUESTMerge HCP_1 and - validate if response code is NOT_FOUNDRemove crosswalksHCPMatcherTestCasetestPositiveMatchCreate 2 the same objectsCheck that objects matchtestNegativeMatchCreate 2 different objectsCheck that objects do not entities with filter: country = BR and entityType = HCPValidate responseAll entites are least one entity has entities with filter: country = BR and entityType = responseAll entites are HCOGetEntityUSTestcreateHCPTestCreate and validate responseGet and check if existsRemove crosswalks" }, { "title": "Integration Test For COMPANY Model", "": "", "pageLink": "/display//Integration+Test+For+COMPANY+Model", "content": "Test classTest caseFlowAttributeSetterTestTestAttributeSetterCreate HCP with attributeGet entity and validate if has autofilled attributesUpdate TypeCode field: send 
"None" as attribute valueUpdate requestGet entity and validate autofileld attributes by rulesUpdate TypeCode fieldUpdate requestGet entity and validate autofileld attributes by rulesUpdate TypeCode fieldUpdate requestGet entity and validate autofilled crosswalk delete dateUpdate and validate if delete date has been batch instanceCreate batch stageValidate response code: 403 and message: Cannot access the processor which has been protectedGet batch instance with incorrect nameValidate response code: 403 and message: Batch 'testBatchNotAdded' is not allowed. Update batch stage with existing stage nameUpdate batch stage with limited userValidate response code: 403 and message: Stage '' is not allowed.Update batch stage with not authorized stage nameValidate response code: 403 and message: Stage '' passed in Body is not eateBatchInstanceCreate batch instance and validateComplete stage 1 and start stage 2Validate stagesComplete stage 2Start stage 3Validate all 3 stagesComplete stage 3 and finish batchGet batch instance and validateTestBatchBundlingErrorQueueTesttestBatchWorkflowTestCreate batch errors and check if there is no errorsCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend entites to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend relations to stageFinish sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relatons were processedGet batch instance and validate completion statusValidate expected errorsResubmit errorsValidate expected errorsValidate if all errors were resubmitedTestBatchBundlingTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend entites to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend relations to stageFinish sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relatons were processedGet batch instance and validate completion statusGet Relations by crosswalk and validateTestBatchHCOBulkTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validateTestBatchHCOTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created 
statustestBatchWorkflowTest_CheckFAILonLoadJobCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageUpdate batch stage status: FAILEDGet batch instance and validatetestBatchWorkflowTest_SendEntities_Update_and_MD5SkipCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING batch instance and validate completion statusGet entities by crosswalk and validate create statusCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stage (skip 2 entities - MD5 check sum changed)Finish HCO_LOADING batch instance and validate completion statusGet entities by crosswalk and validate update statustestBatchWorkflowTest_SendEntities_Update_and_DeletesProcessingCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedCheck deleting job status - validate if all entities were sendCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete second runCreate batch instanceCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stage (skip 2 entities - delete in post processing)Finish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedCheck deleting job status - validate if all entities were sendCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete third runCreate batch instance for checking activationCreate batch stage: HCO_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedCheck deleting job status - validate if all entities were sendCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete statusTestBatchHCPErrorQueueTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCP_LOADINGGet errors and check if there is no errorsSend entites to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet errors and validate if exists exceptedResubmit errorsGet errors and validate if all were batch instanceCreate batch stage: HCP_LOADINGSend entites to HCP_LOADING stage with update last nameFinish HCP_LOADING stageCheck sender job status - validate if all entities are created in mongoCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validateTestBatchHCPSoftDependentTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCP_LOADINGCheck Sender job status - SOFT DEPENDENT Send entites to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusTestBatchHCPTesttestBatchWorkflowTestCreate batch 
instanceCreate batch stage: HCP_LOADINGSend entites to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusTestBatchMergeTesttestBatchWorkflowTestCreate 4 x HCP and validate respons statusGet entities and validate if are createdCreate batch instanceCreate batch stage: MERGE_ENTITIES_LOADINGSend merge entities objects (Reltio, Onekey)Finish MERGE_ENTITIES_LOADING stageCheck sender job status - validate if all tags are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if tags are visible in Reltio)Create batch instanceCreate batch stage: MERGE_ENTITIES_LOADINGSend unmerge entities objects (Reltio, Onekey)Finish MERGE_ENTITIES_LOADING stageCheck sender job status - validate if all tags are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusTestBatchPatchHCPPartialOverwriteTestCreate batch instanceCreate batch stage: HCP_LOADINGCreate HCP entity with crosswalk's delete date set on nowSend entites to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusCreate batch instanceCreate batch stage: HCP_LOADINGSend entites PATCH to HCP_LOADING stage with empty crosswalk's delete date and missing first and last nameFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate if are updateTestBatchRelationTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entites to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend entites to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend relations to stageFinish sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relatons were processedGet batch instance and validate completion statusTestBatchTAGSTesttestBatchWorkflowTestCreate HCPGet and check if there is no tagsCreate batch instanceCreate batch stage: TAGS_LOADINGSend request: Append entity tags objectsFinish TAGS_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusCreate batch instanceCreate batch stage: request: Delete entity tags objectsCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate update statusGet entity and check if tags are removed from 
ReltioCOMPANYGlobalCustomerIdSearchOnLostMergeEntitiesTesttestCreate first HCP and validate response statusCreate second and validate response statusCreate third HCP and validate response statusMerge HCP2 with HCP3 and validate response statusMerge HCP2 with HCP1 and validate response statusGet entities: filter by COMPANYGlobalCustomerID and HCP1UriValidate if existsGet entities: filter by COMPANYGlobalCustomerID and HCP2UriValidate if existsGet entities: filter by COMPANYGlobalCustomerID and HCP3UriValidate if existsCOMPANYGlobalCustomerIdTesttestCreate HCP_1 with RX_AUDIT crosswalkWait for HCP_CREATED eventCreate HCP_2 with crosswalkWait for HCP_CREATED eventMerge both 's with RX_AUDIT being winnerWait for HCP_MERGE, HCP_LOST_MARGE and HCP_CHANGED eventsGet entities by uri and validate. Check if merge succeeded and resulting profile has winner COMPANYId.Update HCP_1: set delete date on RX_AUDIT crosswalkCheck if entity's has not changed after softDeleting the crosswalkGet HCP_1 and validate COMPANYGlobalCustomerID after soft deleting crosswalkRemove HCP_1 by crosswalkRemove HCP_2 by crosswalktestWithDeleteDateCreate HCP_1 with crosswalk delete dateWait for HCP_CREATED eventCreate HCP_2Wait for HCP_CREATED eventMerge both HCP'sWait for HCP_MERGE, HCP_LOST_MARGE and HCP_CHANGED eventsCheck if merge succeeded and resulting profile has winner move HCP_1 by crosswalkRemove HCP_2 by crosswalkRelationEventChecksumTesttestCreate HCP and validate statusGet and validate if existsCreate and validate statusCreate between and - validate response statusWait for RELATIONSHIP_CREATED event and validateFind Relation by id and keep checksumUpdate title attribute and validate responseWait for RELATIONSHIP_CHANGED eventValidate if checksum has changedDelete crosswalk and validateDelete HCP crosswalk and validateDelete Relation crosswalk and validateCreateChangeRequestTestcreateChangeRequestTestCreate HCPGet and validateUpdate HCP's First Name with dcrId from Change RequestInit Change Request and validate response is not nullDelete Change RequestDelete HCP's crosswalkAttributesEnricherNoCachedTesttestCreateFailedRelationNoCacheCreate with missing attributes - validate response stats is failedSearch Relation in mogno and check if not existsAttributesEnricherTesttestCreateCreate HCP and validateCreate HCP and validateCreate and validateGet HCP and validate if attribute existsUpdate 's Last HCP and validate if attribute existsCheck last Last Name is updatedRemove HCP, and by crosswalkAttributesEnricherWithDeleteDateOnRelationTesttestCreateAndUpdateRelationWithDeleteDateCreate HCP and validateCreate HCP and validateCreate and validateGet HCP and validate if attribute existsUpdate 's Last HCP and validate if attribute existsCheck if Last Name is updatedSet Relation's crosswalk delete date on now and updateUpdate 's Last HCP and validate that attribute does not existCheck last Last Name is updatedSend update request and check status is deletedAttributesEnricherWithMultipleEndObjectstestCreateWithMultipleEndObjectsCreate HCO_1Create HCO_2Create between and HCO_1Create Relation between and HCO_2Get and validate if attribute existsUpdate 's Last HCP and validate that attribute existsRemove all entitiesUpdateEntityAttributeTestshouldUpdateIdentifierCreate and validateUpdate 's attribute: insert idetifier and validateUpdate 's attribute: update idetifier and validateUpdate 's attribute: merge idetifier and validateUpdate 's attribute: replace idetifier and validateUpdate 's attribute: delete idetifier and 
validateRemove all entities by crosswalkCreateEntityTestcreateAndUpdateEntityTestCreate entityGet entity and validateUpdate ID attributeValidate updated entityGet matches entities and validate that response is not nullRemove entityCreateHCPWithoutCOMPANYAddressIdcreateHCPTestCreate HCPGet and validate fieldsGet generatedId from cache collection keyIdRegistryValidate if created 's address has COMPANYAddressIDCheck if equals generatedIdRemove entityGetMatchesTestcreateHCPTestCreate HCP_1Create HCP_2 with similar attributes and valuesGet matches for HCP_1Check if matches size >= 0TranslateLookupsTesttranslateLookupTestSend get translate lookups request: Type=, canonicalCode=A,sourceName= resposne is not nullDelayRankActivationTesttestCreate HCO_ACREATE HCO_B1CREATE HCO_B2CREATE HCO_B3CREATE RELATION → A (type: OtherHCOtoHCOAffiliations, rel type: G, source: ONEKEY)CREATE RELATION → A (type: OtherHCOtoHCOAffiliations, rel type: G, source: ONEKEY)CREATE RELATION → A (type: OtherHCOtoHCOAffiliations, rel type: G, source: ONEKEY)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 3 for PDATE RANK event exists with Rank = 2 for heck PUBLISHED events: - RELATIONSHIP_CREATED event exists with Rank = 1B1 - RELATIONSHIP_CHANGED event exists with Rank = 3B2 - RELATIONSHIP_CHANGED event exists with Rank = 2Check order of events: - RELATIONSHIP_CHANGED and - RELATIONSHIP_CHANGED are after UPDATE eventsCREATE HCO_B4CREATE RELATION → A (type: OtherHCOtoHCOAffiliations, rel type: G, source: GRV)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for heck PUBLISHED events: - RELATIONSHIP_CHANGED event exists with Rank = 4Check order of events: - RELATIONSHIP_CHANGED is after UPDATE eventsCREATE HCO_B5CREATE RELATION → A (type: OtherHCOtoHCOAffiliations, rel type: , source: ONEKEY)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for PDATE RANK event exists with Rank = 3 for PDATE RANK event exists with Rank = 2 for PDATE RANK event exists with Rank = 5 for heck PUBLISHED events: - RELATIONSHIP_CHANGED event exists with Rank = 4B2 - RELATIONSHIP_CHANGED event exists with Rank = 3B3 - RELATIONSHIP_CHANGED event exists with Rank = 2B4 - RELATIONSHIP_CHANGED event exists with Rank = 5B5 - RELATIONSHIP_CREATED event exists with Rank = 1Check order of events:All published RELATIONSHIP_CHANGED are after UPDATE_RANK eventsSet deleteDate on heck UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for heck PUBLISHED events: - RELATIONSHIP_CHANGED event exists with Rank = 4Check order of events:Published RELATIONSHIP_CHANGED is after UPDATE_RANK eventGet .A relation and check Rank = 3Get .A relation and check Rank = 2Get .A relation and check Rank = 4Get .A relation and check Rank = 1Clear dataRawDataTestshouldRestoreHCPCreate HCP entityDelete HCP by crosswalkSearch entity by name - expected not foundRestore entitySearch entity by nameClear datashouldRestoreHCOCreate entityDelete by crosswalkSearch entity by name - expected not foundRestore entitySearch entity by nameClear datashouldRestoreRelationCreate HCP entityCreate entityCreate relation from to HCODelete relation by crosswalkGet relation by crosswalk - expected not foundRestore relationGet relation by crosswalkClear dataTestBatchUpdateAttributesTesttestBatchWorkFlowTestCreate 2 x HCP and validate respons statusGet entities and validate if they are createdTest Insert batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest 
objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if inserted identifiers are visible in Reltio)Test Update batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if updated identifiers are visible in Merge batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if merged identifiers are visible in Replace batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if replaced identifiers are visible in Reltio)Test Delete batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if deleted identifiers are visible in Reltio)Remove all entities by crosswalk and all batch instances by id" }, { "title": "Integration Test For COMPANY Model China", "": "", "pageLink": "/display//Integration+Test+For+COMPANY+Model+China", "content": "Test classTest caseFlowChinaComplexEventCaseshouldCreateHCPAndConnectWithAffiliatedHCOByNameCreate (AffiliatedHCO) and validate responseGet entities with filter by 's Name and entityTypeValidate if existsCreate HCP (V2Complex method)with not existing MainHCOwith affiliatedHCO and existing 's and validateCheck if affiliatedHCO equals created (Workplace)Remove entitiesshouldCreateHCPAndMainHCOCreate HCO (AffiliatedHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO - set uri from previously created HCOwith MainHCO without uriGet and validateCheck if affiliatedHCO equals created (Workplace)Validate attributesRemove entitiesshouldCreateHCPAndAffiliatedHCOCreate (MainHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO without (not existing HCO)with MainHCO - set objectURI from previously created HCOGet HCP and validateCheck if MainHCO Uri equals created (MainWorkplace)Validate attributesRemove entitiesshouldCreateHCPAndConnectWithAffiliationsCreate (MainHCO) and validate 
responseCreate (AffiliatedHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO - set uri from previously created Affiliated HCOwith MainHCO - set objectURI from previously created HCOGet HCP and validateCheck if affiliatedHCO equals created (Workplace)Check if MainHCO Uri equals created (MainWorkplace)Validate and attributesRemove entitiesshouldCreateHCPAndAffiliationsCreate HCP (V2Complex method)without AffialitedHCO uriwithout MainHCO objectURIGet and validateCheck if is created and has correct attributesCheck if is created and has correct attributesValidate and attributesRemove entitiesChinaSimpleEventCaseshouldPublishCreateHCPInIqiviaModelCreate HCP in (V2Simple method)Validate responseGet HCP entity and validate attributesWait for output eventValidate eventValidate attributes and check if event is in IqiviaModelRemove entitiesChinaMergeEntityTestCraete HCP_1 (V2Complex method) and validate responseCraete HCP_2 (V2Complex method) and validate responseMerge entities HCP_1 and HCP_2Get HCP by HCP_1 uri and check if existsWait for event on merge response topicValidate eventRemove HCP (V2Complex method)with 2 affiliatedHCO which do not existwith 1 MainHCO which does not existGet entity and check if existWait for event on response topicValidate eventValidate (1 exists)Validate Workplaces (2 exists)Validate MainHCO (1 exists)Assert MainWorkplace equals MainHCORemove entities" }, { "title": "Integration Test For COMPANY Model DCR2Service", "": "", "pageLink": "/display//Integration+Test+For+COMPANY+Model+DCR2Service", "content": "Test classTest caseFlowDCR2ServiceTestshouldCreateHCPTestCreate and validate responseCreate request ( Change requestGet status and validateValidate created entityRemove entitiesshouldUpdateHCPChangePrimarySpecialtyTestCreate request: update DCR responseApply Change requestGet status and validateGet and validateGet and validateRemove all entitiesshouldCreateHCOTestCreate Request (-create) and validate responseApply Change requestGet status and validateGet and validateGet and validateRemove all entitiesshouldUpdateHCPChangePrimaryAffiliationTestCreate HCO_1 and valdiate responseCreate HCO_2 and validate responseCreate with affiliations and validate reponseGet HCO_1 and save and save COMPANYGlobalCustomerIdGet entities - search by 's COMPANYGlobalCustomerId and check if existsGet entities - search by 's COMPANYGlobalCustomerId and check if existsCreate Request and validate response: update primary affiliationApply Change requestGet status and validateGet and validateGet and validateRemove all entitiesshouldUpdateHCPIgnoreRelationCreate HCO_1 and valdiate responseCreate HCO_2 and validate responseCreate with affiliations and validate reponseGet HCO_1 and save and save COMPANYGlobalCustomerIdGet entities - search by 's COMPANYGlobalCustomerId and check if existsGet entities - search by 's COMPANYGlobalCustomerId and check if existsCreate Request and validate response: ignore affiliationApply Change requestGet status and validateWait for RELATIONSHIP_CHANGED eventWait for RELATIONSHIP_INACIVATED eventGet HCP and validateGet and validateRemove all entitiesshouldUpdateHCPAddPrimaryAffiliationTestCreate and validate responseCreate and validate responseCreate Request: update added new primary affiliationValidate responseApply Change requestGet status and validateGet and validateGet and validateRemove all entitiesshouldUpdateHCOAddAffiliationTestCreate HCO_1 and validateCreate HCO_2 and validateCreate Request: update add other affiliation 
(OtherHCOtoHCOAffiliations)Validate responseApply Change requestGet status and validateGet 's connections (OtherHCOtoHCOAffiliations) and validateGet and validateRemove all entitiesshouldInactivateHCPCreate HCP and validate responseCreate Request: Inactivate HCPValidate DCR responseApply Change requestGet status and validateGet and validateGet and validateRemove all entitiesshouldUpdateHCPAddPrivateAddressCreate HCP and validate responseCreate Request: update - add private addressValidate responseApply Change requestGet status and validateGet and validateGet and validateRemove all entitiesshouldUpdateHCPAddAffiliationToNewHCOCreate and validate responseCreate and validate responseCreate Request: update - add affiliation to new responseApply Change requestGet status and validateGet HCP and validateGet entity by crosswalk and save uriGet and validateRemove all request with unknown entityUriValidate response and check if REQUEST_FAILEDshouldCreateHCPOneKeyCreate HCP and validate responseCreate Request: create responseGet status and validateGet and validateGet and validateRemove all entitiesshouldCreateHCPOneKeySpecialityMappingCreate HCP and validate responseCreate Request: create OneKey HCP with speciality valueValidate responseGet status and validateGet and validateGet and validateRemove all entitiesshouldCreateHCPOneKeyRedirectToReltioCreate HCP and validate responseCreate Request: create OneKey HCP with speciality value "not found key"Validate responseApply Change RequestGet status and validateGet and validateGet and validateRemove all entitiesshouldCreateHCOOneKeyCreate nad validate responseCreate Request: create responseGet status and validateGet and validateGet and validateRemove all entitiesshouldReturnMissingDataExceptionCreate Request with missing dataValidate response: status = REQUEST_REJECTED and response has correct messageshouldReturnForbiddenAccessExceptionCreate Request with forbidden access dataValidate response: status = REQUEST_FAILED and response has correct messageshouldReturnInternalServerErrorCreate Request with internal server error dataValidate response: status = REQUEST_FAILED and response has correct message" }, { "title": "Integration Test For COMPANY Model Region AMER", "": "", "pageLink": "/display/, "content": "Test classTest caseFlowMicroBrickTestshouldCalculateMicroBricksCreate and validate responseWait for event on ChangeLog topic with specified countryGet entity and validate MicroBrickUpdate HCP with new zip codes and valdiate responseWait for event on ChangeLog topic with specified countryGet entity and validate entitiesValidateHCPTestvalidateHCPTestCreate HCP and validate response statusCreate validation request with valid paramsAssert if response is ok and validation status is "Valid"validateHCPTestNotValidCreate and validate response statusCreate validation request with not valid paramsAssert if response is ok and validation status is "NotValid"validateHCPLookupTestCreate HCP with "Speciality" attribute and validate response statusCreate lookup validation request with "Speciality" attributeAssert if response is ok and validation status is "Valid"" }, { "title": "Integration Test For COMPANY Model Region EMEA", "": "", "pageLink": "/display//Integration+Test+For+COMPANY+Model+Region+EMEA", "content": "Test classTest caseFlowAutofillTypeCodeTestshouldProcessNonPrescriberCreate entityValidate type code value is Non-Prescriber on output topicInactivate entityValidate type code value is Non-Prescriber on history inactive topicDelete 
entityshouldProcessPrescriberCreate entityValidate type code value is Prescriber on output topicInactivate entityValidate type code value is Prescriber on history inactive topicDelete entityshouldProcessMergeCreate first entityValidate type code is Prescriber on output topicCreate second entityValidate type code is Non-Prescriber on output topicMerge entitiesValidate type code is Prescriber on output topicInactivate first entityValidate type code is second entity crosswalkValidate entity has end date on output topicValidate type code value is Prescriber on output topicDelete entityshouldNotUpdateTypeCodeCreate HCP entity with correct type code valueValidate there is no type code value provided by HUB technical source on output topicDelete entityshouldProcessLookupErrorsCreate HCP entity with invalid sub type code and speciality valuesValidate type code value is concatenation of sub type code and speciality values on output topicInactivate entityValidate type code value is concatenation of sub type code and speciality values on history inactive topicDelete entity" }, { "title": "Integration Test For COMPANY Model Region US", "": "", "pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+US", "content": "Test classTest caseFlowCRUDMCOAsynctestSend MCORequest to topicWait for created eventValidate created nameSend MCORequest to topicWait for updated eventValidate updated entityDelete all entitiesTestBatchMCOTesttestBatchWorkflowTestCreate batch instance: testBatchCreate MCO_LOADNIG stageSend entities to MCO_LOADNIG stageFinish MCO_LOADNIG stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entties are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdRemove all entitiestestBatchWorkflowTest_SendEntities_Update_and_MD5SkipCreate batch instance: testBatchCreate MCO_LOADNIG stageSend entities to MCO_LOADNIG stageFinish MCO_LOADNIG stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entties are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdCreate batch instance: testBatchCreate MCO_LOADNIG stageSend entities to MCO_LOADNIG stage (skip 2 entities MD5 checksum changed)Finish MCO_LOADNIG stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entties are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdRemove all entitiesMCOBundlingTesttestSend multiple MCORequest to topicWait for created event for every MCORequestCheck if number of recived events equals number of sent requestsSet crosswalk's delete date on now for every requestSend all updated to topicWait for deleted event for every MCORequestEntityEventChecksumTesttestCreate HCPWait for HCP_CREATED eventGet created HCP by uri and check if existsFind by id created HCP in mogno and save "checksum"Update 's attribute and send requestWait for HCP_CHANGED eventFind by id created HCP in mogno and saveCheck if old checksum is different than current checksumRemove HCPWait for HCP_REMOVED eventEntityEventsTesttestCreate MCOWait for ENTITY_CREATED eventUpdate MCOWait for ENTITY_CHANGED eventRemove MCOWait for ENTITY_REMOVED 
eventHCPEventsMergeTesttestCreate HCP_1 and validate responseWait for HCP_CREATED eventGet HCP_1 and validate attributesCreate and validate and validate attributesMerge HCP_1 and HCP_2Wait for HCP_MERGED and validate attributesDelete HCP_1 crosswalkWait for HCP_CHANGED event and validate HCP_URIDelete HCP_1 and crosswalksWait for eventDelete HCP_2 crosswalkHCPEventsNotTrimmedMergeTesttestCreate HCP_1 and validate responseWait for HCP_CREATED eventGet HCP_1 and validate attributesCreate and validate and validate attributesMerge HCP_1 and HCP_2Wait for HCP_MERGED event and validate attributesGet HCP_2 and validate attributesDelete HCP_1 crosswalkWait for HCP_CHANGED event and validate HCP_URIDelete HCP_1 and crosswalksWait for eventDelete crosswalkMCOEventsTesttestCreate and validate reponseWait for MCO_CREATED event and validate urisUpdate 's name and validate responseWait for MCO_CHANGED event and validate urisDelete 's crosswalk and validate response statusWait for MCO_REMOVED event and validate urisRemove entitiesPotentialMatchLinkCleanerTestCreate : Start FLEXGet and validateCreate : End ONEKEYGet and validateGet matches by Start entityIdValidate matchesGet not matches by Start entityIdValidate - not match does not existGet Start from mongo entityMatchesHistory collectionValidate matches from mongoCreate DerivedAffiliation - realtion between and HCOGet matches by Start entityIdCheck if there is no matchesGet not matches by Start entityIdValidate not matches responseRemove all entitiesUpdateMCOTesttest1_createMCOTestCreate and validate responseGet by uri and validateRemove entitiestest2_updateMCOTestCreate and validate responseUpdate 's nameGet by uri and validateRemove entitiestest3_createMCOBatchTestCreate multiple MCOs using postBatchMCOValidate responseRemove entitiesUpdateUsageFlagsTesttest1_updateUsageFlagsCreate and validate responseGet entities using filter (Country & Uri) and validate if existsGet entities using filter () and validate if existsUpdate usage flags and validate responseGet entity and validate updated usage flagstest2_updateUsageFlagsCreate and validate responseGet entities using filter (Country & Uri) and validate if existsGet entities using filter () and validate if existsUpdate usage flags and validate responseGet entity and validate updated usage flagstest3_updateUsageFlagsCreate with 2 addresses (COMPANYAddressId=3001 and 3002) and validate responseGet entities using filter (Country & Uri) and validate if existsGet entities using filter () and validate if existsUpdate usage flags (COMPANYAddressId = 3002, action=set) and validate responseUpdate usage flags (COMPANYAddressId = 3001, action=set) and validate responseGet entity and validate updated usage flagsRemove usage flag and validate responseGet entity and validate updated usage flagsClear usage flag and validate responseget entity and validate updated usage flags " }, { "title": "", "": "", "pageLink": "/display/GMDM/MDM+Factory", "content": "\nMDM Client Factory was implemented in manager to select a specific (Reltio/Nucleus) based on a client selector configuration. Factory allows to register multiple MDM Clients on runtime and choose it based on country. To register Factory the following example configuration needs to be defined:\n\n\tclientDecisionTable\n\n\n\nBased on this configuration a specific request will be processed by Reltio or . Each selector has to define default view for a specific client. 
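The selection logic can be pictured with a short sketch. The class and method names below (MdmClientFactory, MdmClient, registerClient, clientFor) are hypothetical illustrations of the idea only, not the actual manager implementation; the real clientDecisionTable and mdmFactoryConfig structures are defined in configuration as described above.

```java
// Minimal sketch, assuming hypothetical names, of a factory that picks an MDM
// client (Reltio or Nucleus) based on the country of the incoming request.
import java.util.HashMap;
import java.util.Map;

interface MdmClient {
    String process(String request);
}

class MdmClientFactory {
    // clientDecisionTable: country code -> registered client name
    private final Map<String, String> clientDecisionTable = new HashMap<>();
    // mdmFactoryConfig: client name -> configured client instance (URL, username, ...)
    private final Map<String, MdmClient> registeredClients = new HashMap<>();
    private final String defaultClientName;

    MdmClientFactory(String defaultClientName) {
        this.defaultClientName = defaultClientName;
    }

    // Clients can be registered at runtime, as the page describes.
    void registerClient(String name, MdmClient client) {
        registeredClients.put(name, client);
    }

    // Add a routing entry: requests for this country go to the named client.
    void route(String country, String clientName) {
        clientDecisionTable.put(country, clientName);
    }

    // Pick the client for the country in the request, falling back to the default selector.
    MdmClient clientFor(String country) {
        String name = clientDecisionTable.getOrDefault(country, defaultClientName);
        return registeredClients.get(name);
    }
}
```

With such a factory, a request whose country is mapped in the decision table is dispatched to the corresponding registered client, while unmapped countries fall back to the default selector.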
For example, 'ReltioAllSelector' has a definition of a default and a PforceRx view, which correspond to two factory clients with different user names for Reltio.\n\n\n\tmdmFactoryConfig\n\n\n\nThis map contains . Each client has a specific unique name and a configuration with URL, username, ●●●●●●●●●●●● and other specific values defined for a client. This unique name is used in the decision table to choose a factory client based on the country in the request.\n " }, { "title": "Mulesoft integration", "": "", "pageLink": "/display/GMDM/Mulesoft+integration", "content": "DescriptionThe Mulesoft platform is an integration portal used to integrate clients from inside and outside of the COMPANY network with integration.API Endpoints/search/hcp : The operation allows searching for HCPs in a country with multiple filter criteria; the final data for a Profile (Golden Profile) is compiled when the data for it is requested./search/hco : The operation allows searching for HCOs in a country with multiple filter criteria./hcp : The resource allows management of HCPs in MDM (Get, Create, Update)./hco : The resource allows management of HCOs in MDM (Get, Create, Update)./lookups : This operation allows fetching the list of values configured in ./subscriptions/hcp : This operation allows subscribing to multiple profiles in a single request. The subscription is done by allowing a source to create a 'crosswalk' of the source system on the profile. It also allows the source system to insert all the data that it has for the respective profile while subscribing. The request specification is the same as for /hcp, but it expects an array of profiles. The subscription works in conjunction with events that are triggered for any 'subscribed' profiles that are modified by any other source system./entities/{countryType} : This operation allows querying directly for an Entity with custom filter criteria. It allows deciding whether the response needs to be formatted or whether the data is required without formatting - as it is provided by ./hcp : This resource allows management of multiple HCPs in MDM at a time (Create, Update). : This resource allows management of multiple HCOs in MDM at a time (Create, Update)./search/connection : This resource allows viewing the relationships an object (, ) has one level in the selected direction (up, down, both).MuleSoft API Catalog:Requests routing on the MuleSoft sideThe values below can change. 
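Before the routing details, a minimal sketch of how a consumer might call the /search/hcp operation listed above. The host name, the token handling and the query parameter names are assumptions for illustration only; the real contract is published in the MuleSoft API Catalog.

```java
// Minimal sketch, assuming a hypothetical host and parameter names, of calling
// the /search/hcp operation with an OAuth2 bearer token.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SearchHcpExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://mulesoft.example.com/mdm/search/hcp"
                        + "?country=GB&firstName=John&lastName=Smith")) // hypothetical filter criteria
                .header("Authorization", "Bearer " + System.getenv("MDM_API_TOKEN")) // token from the OAuth2 provider
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // golden profile(s) matching the filter criteria
    }
}
```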
Please check in source URL Configuration - AIS Application Integration Solutions Mule - ConfluenceAPI Country MappingTenantDevTest (QA)StageProdUSUSUSUSUSEMEAUK,IE,,,,,,,,,QA,,,,,,ET,ZW,,LB,,,,,,,,,,,,,,MR,,,,,,,,,,,,,,CD,,,,BF,,,,,,,YE,,,IT,,,PM,,,,RE,,,,,,PF,,,,,TR,AT,BE,,,,,,,,,,,,,,PL,RO,,,,AM,,,IS,,,,,,IE,,,,,,,,,QA,,,,,,ET,ZW,,LB,,,,,,,,,,,,,,MR,,,,,,,,,,,,,,CD,,,,BF,,,,,,,YE,,,IT,,,PM,,,,RE,,,,,,PF,,,,,TR,AT,BE,,,,,,,,,,,,,,PL,RO,,,,AM,,,IS,,,,,,IE,,,,,,,,,QA,,,,,,ET,ZW,,LB,,,,,,,,,,,,,,MR,,,,,,,,,,,,,,CD,,,,BF,,,,,,,YE,,,IT,,,PM,,,,RE,,,,,,PF,,,,,TR,AT,BE,,,,,,,,,,,,,,PL,RO,,,,AM,,,IS,,,,,,,IE,,,BF,,,,,CD,,,,,,,,,ET,,,,,,,IQ,,,,,LB,,,,,,MR,,,,,,QA,,,,,,,,,,,,,YE,,,ZW,,,IT,,,,,,,,,,PM,RE,,,,,,TR,AT,BE,,,,,,,,,,,,,,PL,RO,,,BR,AR,,,,,,BO,,BR,AR,,,,,,BO,,BR,AR,,,,,,BO,,BR,AR,,,,IN,,,,ID,MY,PK,PH,,,,,,,,,,,IN,,,,ID,MY,PK,PH,,,, ,,BN,,,,,,IN,,ID,MY,PK,PH,,,, ,,BN,,,,,,IN,,ID,MY,PK,PH,,,, ,, ( elseAPI URLsMuleSoft URLsEnvironmentCloud can be found under below url: documentation referenceSolution Profiles/MDM  URL Configuration for API AuthenticationDescribed how to use OAuth2How to use an how to request access to and how to use itConsumer On-boardingDescribed consumer onboarding process" }, { "title": "Multi view", "": "", "pageLink": "/display/GMDM/Multi+view", "content": "\nDuring getEntity or getRelation operation "ViewAdapterService" is activated. This feature contains two steps:\n\n\tAdapt\n\n\n\nBased on the following map each entity will be checked before return:\n\nThis means that for PforceRx view, only entities with source will be returned. Otherwise getEntity or operations will return "404" EntityNotFound exception. \nWhen entity can be returned with success the next step is started: \n\n\tFilter\n\n\n\nEach entity is filtered based on attribute list provided in list.\nThe process will take each attribute from entity and will check if this attribute exists in restricted for specific source crosswalk attribute list. When this attribute is not on restricted list, then it will be removed from entity. This way we will receive entity for specific view only with attribute restricted for specific source.\nMDM publishing HUB has an additional configuration for multi view process. When an entity with a specific country suits the configuration, operation is invoked with country and view name parameter. Then is activated, and entity is returned from a specific instance and saved in a mongo collection suffixed with a view name.\n \nFor this configuration entities from BR country will be saved in entityHistory and entityHistory_PforceRx mongo collections. In the view collection entities will be adapted and filtered by . " }, { "title": "Playbook", "": "", "pageLink": "/display/GMDM/Playbook", "content": "The document depicts how to request access to different sources. " }, { "title": "Issues list", "": "", "pageLink": "/display//Issues+list", "content": "" }, { "title": "Add a user to a new group.", "": "", "pageLink": "/pages/tion?pageId=", "content": "To create a request you need to use  a link: choose as follow:Than search a group and click request access:As the last step, you need to choose button and submit your request. " }, { "title": "Snowflake new schema/group/role creation", "": "", "pageLink": "/pages/tion?pageId=", "content": "Connect with: button.3. Then click that . And as a next . Now you are on create ticket site. The most important thing is to place a proper queue name in a detailed description place. 
For example a queue name for issues looks like this:  gbl-atp-commercial snowflake domain admin. I recommend to you to place it as a first line. And then the request text is required.6. There is a typical request for a new schema:-commercial snowflake domain adminHello,\nI'd like to ask to create a new schema and new roles on side.\nNew schema name: PTE_SL\nEnvironments: DEV, QA, , PROD, details below:\nDEV\t\nSnowflake instance: \t\nSnowflake DB name:COMM_GBL_MDM_DMART_DEV_DB\nQA\t\nSnowflake instance: \t\nSnowflake DB name: COMM_GBL_MDM_DMART_QA_DB\nSTG\t\nSnowflake instance: \t\nSnowflake DB name:COMM_GBL_MDM_DMART_STG_DB\nPROD\t\nSnowflake instance: \t\nSnowflake DB name: new roles with names (one for each environment): read-only acces on PTE_SL\nand\nadd a roles with full acces to new schema with names (one for each environment) Prod]_DEVOPS_ROLE - like in customer_sl schema7. If you are requesting for a new role too - like in an example above - you need to request to add this role to AD. In this case you need to provide primary and secondary owner details for all groups to be created. You can send a primary a secondary owner data or write that the ownership should be set like in another existing role. 8. Ticket example: " }, { "title": "AWS ELB NLB configuration request", "": "", "pageLink": "/display/GMDM/AWS+ELB+NLB+configuration+request", "content": "To create a ticket use this link: follow this link if you want to know all the specific steps and click: Snowflake new schema/group/role creationRemember to add a proper queue a request please attached full list of general information: incoming traffic fromThen please add a specific NLB information FOR EACH you requested for - even if the information is the same and obvious: No of ELBTypeEnvironmentELB Health CheckTarget Group additional information: e.x: 1 group with 3 servers:portWhere to add a Listener: e.x.: Listener to be added in informationAdditional information: e.x: IP ●●●●●●●●●●●● mdm-event-handler (Prod) should be able to access this ELBTicket example: request text:VPC: Public\nELB Type: Network Load Balancer\nHealth Checks: Passive\nAllowed incoming traffic from:\n●●●●●●●●●●●● mdm-event-handler (Prod)\n\. API\nListener:\:8443\n\nTarget Group:\:8443\:8443\:8443\n\. KAFKA\n\.1\nListener:\:9095\nTG:\:9095\:9095\:9095\n\.2\nListener:\:9095\nTG:\:9095\n\.3\nListener:\:9095\nTG:\:9095\n\.4\nListener:\:9095\nTG:\:9095\n\nGBL-BTI-EXT HOSTING AWS CLOUD" }, { "title": "To open a traffic between hosts", "": "", "pageLink": "/display//To+open+a+traffic+between+hosts", "content": "To create a ticket using this link:  follow this link if you want to know all the specific steps and click: Snowflake new schema/group/role creationRemember to add a proper queue a request please attached the full list of general information:SourceIP rangeIP range....Targets - remember to add each targets instancesTarget1NameCnameAddressPortTarget2........Example ticket: request text:Source:\. IP range: ●●●●●●●●●●●●●\. 
IP range: ●●●●●●●●●●●●●\n\nTarget1:\nLoadBalancer:\ canonical name = .\nName: \nAddress: ●●●●●●●●●●●●●●\nName: \nAddress: ●●●●●●●●●●●●●●\nTarget port: 443\n\nTarget2:\nhosts:\(●●●●●●●●●●●●●●)\(●●●●●●●●●●●●●)\(●●●●●●●●●●●●●●)\ntarget port: " }, { "title": "Support information with queue and names", "": "", "pageLink": "/display//Support+information+with+queue+and+DL+names", "content": "There are a few places when you can send your request: When we are adding a new client to our architecture there is a MUST to get from him a support queuesSystem/component/area nameDedicated queueSupport DLAdditional notesRapid, , GCP etcGBL-EPS-CLOUD OPS FULL SUPPORTEPS-CloudOps@AWS Global, environmentsIOD TeamGBL-BTI-IOD AWS FULL (same as , not a mistake)Rotating keys, AWS GBL US, FULL OS SUPPORT ( CloudFLEX TeamGBL-F&BO-MAST AMM SUPPORTDL-, file transfer issues in Interface Team (FLEX)GBL-SS SAP SALES ORDER regarding input filesSAP Master Date Team (FLEX)Dianna.OConnell@Queries regarding data in TeamGBL-NETWORK DDIAll domain and changesFirewall TeamGBL-NETWORK "Big" firewall changesSnowflakeGBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMINMDM Hub - non-prodGBL-ADL-ATP GLOBAL MDM - HUB DEVOPSDL-ATP_MDMHUB_SUPPORT@MDM Hub - prodGBL-ADL-ATP GLOBAL MDM - HUB DEVOPSDL-ATP_MDMHUB_SUPPORT_PROD@PDKSGBL-BAP-Kubernetes Service L2PDCSOps@PDKS Kubernetes cluster, ie. new NPRODGo to "PDKS Get Help" for details. TeamGBL- provisioning/modification issues with GBLUS Reltio - COMPANYGBL-ADL-ATP GLOBAL MDM - RELTIODL-ADL-ATP-GLOBAL_MDM_RELTIO@Team responsible for and batch /USFLEX Reltio - IQVIAGBL-MDM APP SUPPORTCOMPANY-MDM-Support@DL-Global-MDM-Support@Reltio consultingN/ consulting (NO support)ngh@ngh@It is no support, we can use that contact on technical issues level (API implementation etc) Reltio UI with data accesuse request manager: Customer MDM - GBLPing FederateDL-CIT-PXEDOperations@Ping Federate/OAuth2 supportMAPP NavigatorGBL-FBO-MAPP NAVIGATOR (rarely respond)MAPP Nav issuesHarmony BitbucketGBL-CBT-GBI HARMONY SERVICESDL-GBI-Harmony-Support@Confluence page:, JiraGBL-DA-DEVSECOPS TOOLS SUPPORTDL-SESRM-ATLASSIAN-SUPPORT <>ArtifactoryGBL-SESRM-ARTIFACTORY SUPPORTDL-SESRM-ARTIFACTORY-SUPPORT@Mule integration team supportDL-AIS Mule Integration Support DL-AIS-Mule-Integration-Support@Used to integrate with mule proxy .Koudstaal@POC if did not send an input file for the DCR process for 24 hoursExample: there is a description how to request with a ticket assigned to one of groups above. 
Snowflake new schema/group/role creation" }, { "title": "Global Clients", "": "", "pageLink": "/display/GMDM/Global+Clients", "content": "ClientContactCICRProbably ; <>; lanc@JOShweta.Kulkarni@; lanc@MAPPDL-BTAMS-MAPP-Navigator@; hvaryu@ODSDL--PFORCERX_ODS_Support@;alapati@PFORCEOLChristopher.Fani@;dl-pforcerx-support@ , <>; QianRu., ( - Mumbai)<>, Maanasa ( - Hyderabad) <>NEXUS ;DL-Acc-GBICC-Team@IMPROMPTUPRAWDOPODOBNIE , <>Balan, Sakthi <>, >, <>, >, <>EVENTHUBSNOWFLAKEClientContactC360DL-C360_Support@PT>;  , <>;dl-atp-dq-ops@accentureDL-Acc-GBICC-Team@Big shmukh@Mikhail.Komarov@" }, { "title": "How to login to Service Manager", "": "", "pageLink": "/display//How+to+login+to+Service+Manager", "content": "How to add a user to Service Manager toolChoose link: yourselfClick "Next >>"Choose proper role: Service desk analyst – and click „Needs training”When you have your training succeeded, there is a need to choose groups to which you want to be added :GBL-ADL-ATP GLOBAL MDM - HUB DEVOPSYou do it here:Please remember when you click “Add selected group to cart” there is a second approval step – click: “SUBMIT”.When permissions will be granted you can explore Service Manager possibilities here: " }, { "title": "How to Escalate btondemand Ticket Priority", "": "", "pageLink": "/display//How+to+Escalate+btondemand+Ticket+Priority", "content": "Below is a copy of: → How to Escalate Ticket PriorityHow to Escalate Ticket PriorityTickets will be opened as low priority by default and response time will align to the restoration and resolution times listed in the below. If your request priority needs to be change follow these instructions:Use the Chat function at  (or call the Service Desk at )Select Get SupportSelect "Click here to continue without selecting a ticket option."Select the existing ticket number you already openedAsk that ticket Priority be raised to Medium, High or Critical based on the issue and utilize one of the following key phrases to help set priority:Issue is is being impactedBatch is unable to proceedLife safety or physical security is impactedDevelopment work stopped awaiting resolution" }, { "title": "How to get AWS Account ID", "": "", "pageLink": "/display//How+to+get+AWS+Account+ID", "content": " components are deployed in different Accounts. In a ticket support process, you might be asked about the AWS Account ID of the host, load balancer, or other resources. 
You can get it quickly in at least two ways described ing Console:  (How to access Console) you can find the ID in any resource's Name (ARN).Using curlSSH to a host and run this curl command, same for all accounts:[ec2-user@euw1z2pl116 ~]$ curl http:///latest/dynamic/instance-identity/document{"accountId" : "","architecture" : "x86_64","availabilityZone" : "eu-west-1b","billingProducts" : null,"devpayProductCodes" : null,"marketplaceProductCodes" : null,"imageId" : "ami-05c4f918537788bab","instanceId" : "i-030e29a6e5aa27e38","instanceType" : ".2xlarge","kernelId" : null,"pendingTime" : "" : "","ramdiskId" : null,"region" : "eu-west-1","version" : ""}" }, { "title": "How to push image to ", "": "", "pageLink": "/display//How+to+push+Docker+image+to+", "content": "I am using the image as an example.Login to Log in with COMPANY credentials: Identity Token: COMPANY username and generated Identity Token in "docker login "marek@CF-19CHU8:~$ docker login Authenticating with existing credentials...Login SucceededPull, tag, and pushmarek@CF-19CHU8:~$ docker pull tchiotludo/akhq:.1: Pulling from tchiotludo/akhq...: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9cStatus: Downloaded newer image for tchiotludo/akhq: docker tag tchiotludo/akhq:0.14.1 docker push push refers to repository [ digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9c size: 1577And that's all, you can now use this image from !" }, { "title": "Emergency contact list", "": "", "pageLink": "/display/GMDM/Emergency+contact+list", "content": "In case of emergency please inform the person from the list attached to each environment.EMEA:, <>; , <>; , <>; , <>; , <>; <>; , <>; , Bhavanya <>; , >GBL:TO-DOGBL US:TO-DOEMEA:TO-DOAMER:TO-DO" }, { "title": "How to handle issues reported to ", "": "", "pageLink": "/display//How+to+handle+issues+reported+to+DL", "content": "Create a ticket in JiraName: "DL: {{ email title }}"Epic: BAUFix Version(s): BAUUse below template: all the red placeholders. 
Fill in the table where you can, based on original spond to the email, requesting additional details if any of the table rows could not be filled in.Update the ticket: the filled tableAdjust the priority based on the "Business impact details" row" }, { "title": "Sample estimation for jira tickets", "": "", "pageLink": "/display/GMDM/Sample+estimation+for+jira+tickets", "content": "1(Disable keycloak by default)(Investigate server git hooks in BitBucket)(Lack of changelog when build from master)(pvc-autoresizer deployment on PRODs)(Dashboards adjustments)2 (Move kong-mdm-external-oauth-plugin to mdm-utils repo) (Alert about not ready ScaledObject) (Reduce number of stored metrics and labels) (Old monitoring host decomissioning) (Quality Gateway: deploy publisher changes to PRODs) (Write article to describe upgrade procedure) (Fluentd - improve deployment time and downtime) (Turn on compression in reconciliation service)3 (POC: Create local git hook with secrets verification) (Replace hardcoded rate intervals) (Investigate and plan fix for different version of monitoring CRDs) (Fluentbit: deploy NPRODs) (Move jenkins agents containers definition to inbound-services repo)5 (Implement integration with Grafana) ( - configuration creation and deployment) ( dashboards backup process) (POC: Store transaction logs for 6 months)8 (Implement integration with Kibana) (Prepare upgrade plan to version 3.3.2) (Process analysis) (Implement Reltio mock) (Mongo backup process: implement backup process)" }, { "title": " - Frequently Asked Questions", "": "", "pageLink": "/display/GMDM/FAQ+-+Frequently+Asked+Questions", "content": "" }, { "title": "", "": "", "pageLink": "/display/GMDM/API", "content": "Is there an MDM Hub API Documentation?Of course - it is available for each component:Manager/API Router: Service: Service: is the difference between /api-emea-prod and /api-gw-emea-prod endpoints?Both of these endpoints are leading to different Components:/api-emea-prod is the API Router endpoint/api-gw-emea-prod is the Manager endpointBoth of these ' APIs can be used in similar way. The main difference is: allows routing Requests to the component: /api-emea-prod/dcr endpoint leads to API.API Router allows routing requests to other tenants, based on the search query filter's Country parameter.Example 1: We are trying to find HCPs named "" in the market. We can only use the EMEA HUB API:Sending an HTTP request:GET (type, 'configuration/entityTypes/') and equals(untry, '') and equals(rstName, 'John')returns nothing, because we are using the /api-gw-emea-prod/* endpoint - the Manager. It is connected directly to , which does not contain the nding an HTTP request:GET (type, 'configuration/entityTypes/') and equals(untry, '') and equals(rstName, 'John')routes the search to the GBLUS PROD Reltio, and returns results from there.Example 2: We are trying to find HCPs named "" in the , , IE and markets. We can only use the EMEA HUB API:Sending an HTTP request:GET (type, 'configuration/entityTypes/') and in(untry, ',,IE,AU') and equals(rstName, 'John')searches for , , or HCPs in . 
is available in this tenant, so it returns results, but only limited to this marketSending an HTTP request:GET (type, 'configuration/entityTypes/') and in(untry, ',,IE,AU') and equals(rstName, 'John')splits the search into three separate searches:- search for HCPs in the GBLUS PROD Reltio- search for or HCPs in the EMEA PROD Reltio- search for HCPs in returns aggregated results.What is the difference between /api-emea-prod and /ext-api-emea-prod endpoints use different Authentication methods:when using /api-emea-prod you are using an API Key authentication. Your requests must contain the apikey header with the secret that you received from Team.when using /ext-api-emea-prod you are using an authentication. You must fetch your token from the COMPANY and send it in your request's Authorization: Bearer is recommended that all the API Users use and /ext-api-emea-prod endpoint, leaving for support and debugging purposes.When should I use a GET Entity operation, when should I use a SEARCH Entity operation?There are two main ways of fetching an using HUB API:GET Entity:Sending GET /entities/{Reltio ID}It is the simplest and cheapest operation. Use it when you know the exact Reltio ID of the entity you want to ARCH Entity:Sending GET /entities?filter=equals() allows finding one or more profiles by their attributes' values. Use it when you do not know the exact Reltio ID or do not know how many results you ad more about Search filters here: two requests correspond to each other:GET , 'entities/0TWPf9d')Although both are quick, recommends only using the first one to find and entity by URI:GET Entity gets passed to Reltio as-is and results are returned straight awaySEARCH Entity gets analyzed on the side first. If the search filter does not specify a country (a required parameter!), a full list of allowed countries is fetched from the User's configuration and, as a result, the request may end up being sent to every single tenant.What is the difference between and PATCH /hcp, /hco, /entities operations?The key difference is:If we POST a record (crosswalk + attributes) to , it is created in straight away:if the crosswalk already existed in , it gets overwrittenif the record already existed in , the attributes get completely overwritten:attribute values that did not exist in Reltio before, now are addedattributes that had different values in Reltio before, now are updatedattribute values that were present in Reltio before, but did not exist in the POSTed record, now are removedIf we PATCH a record (crosswalk + attributes) to Hub:we check whether this crosswalk already exists in Reltio. If it does not, we return an HTTP Bad Request error response.If the record already existed in , only the PATCHed subset of attributes is updated:attribute values that did not exist in Reltio before, now are addedattributes that had different values in Reltio before, now are updatedattribute values that were present in Reltio before, but did not exist in the PATCHed record, are left untouchedPOST should be used if we are sending the full JSON - crosswalk + all TCH should be used if we are only sending incremental changes to a pre-existing profile." }, { "title": "Merging Into Existing Entities", "": "", "pageLink": "/display/GMDM/Merging+Into+Existing+Entities", "content": "Can I post a profile and merge it to one already existing in MDM?Yes, there are 3 ways you can do that:Merge-On-The-FlyContributor MergeManual MergeMerge-On-The-Fly - DetailsMerge-on-the-fly is a mechanism using matchGroups configuration. 
contain lists of requirements that two entities must pass in order to be merged. There are two types of matchGroups: "suspect" and "automatic". Suspects merely display as potential matches in , but groups trigger automatic merges of the objects.Example of an automatic matchGroup from 's configuration (EMEA PROD):\n {\n "uri": "configuration/entityTypes//matchGroups/ExctONEKEYID",\n "label": "(iii) Auto Rule - Exact Source Unique Identifier(ReferBack ID)",\n "type": "automatic",\n "useOvOnly": "true",\n "rule": {\n "and": {\n "exact": [\n "configuration/entityTypes//attributes/Identifiers/attributes/ID",\n "configuration/entityTypes//attributes/Country"\n ],\n "in": [\n {\n "values": [\n "OneKey ID"\n ],\n "uri": "configuration/entityTypes//attributes/Identifiers/attributes/Type"\n },\n {\n "values": [\n "ONEKEY"\n ],\n "uri": "configuration/entityTypes//attributes/OriginalSourceName"\n },\n {\n "values": [\n "Yes"\n ],\n "uri": "configuration/entityTypes//attributes/Identifiers/attributes/Trust"\n }\n ]\n }\n },\n "scoreStandalone": 100,\n "scoreIncremental": 0\n \nAbove example merges two entities having same Country attribute and same Identifier of type " ID". Identifier must have the Trusted flag and the OriginalSourceName must be "ONEKEY".When posting a record to , matchGroups are evaluated. If an automatic matchGroup is matched, will perform a Merge-On-The-Fly, adding the posted crosswalk to an existing posting an object to Reltio, we can use its Crosswalk contributorProvider/dataProvider mechanism to bind posted crosswalk to an existing one.If we know that a crosswalk exists in , we can add it to the crosswalks array with contributorProvider=true and dataProvider=false flags. Crosswalk marked like that serves as an indicator of an object to bind e other crosswalk must have the flags set the other way around: contributorProvider=false and dataProvider=true. This is the crosswalk that will de facto provide the attributes and be considered for the Hub's ingestion rules.Example - we are sending data with an crosswalk and binding that crosswalk to the existing crosswalk:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231--24F8-7E72C62C0147",\n "contributorProvider": false,\n "dataProvider": true\n },\n {\n "type": "configuration/sources/ONEKEY",\n "value": "WESR04566503",\n "contributorProvider": true,\n "dataProvider": false\n }\n ]\n }\n}\nEvery MDM record also has a crosswalk of type "Reltio" and value equal to Reltio ID. We can use that to bind our record to the entity:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231--24F8-7E72C62C0147",\n "contributorProvider": false,\n "dataProvider": true\n },\n {\n "type": "configuration/sources/Reltio",\n "value": "00TnuTu",\n "contributorProvider": true,\n "dataProvider": false\n }\n ]\n }\n}\nThis approach has a downside: crosswalks are bound, so they cannot be unmerged later nual Merge - DetailsLast approach is simply creating a record in and straight away merging it with another.Let's use the previous example. 
First, we are simply posting the data:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231--24F8-7E72C62C0147"\n }\n ]\n }\n}\nResponse:\n{\n "uri": "entities/0zu5sHM",\n "status": "created",\n "errorCode": null,\n "errorMessage": null,\n "COMPANYGlobalCustomerID": "",\n "crosswalk": {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231--24F8-7E72C62C0147",\n "updateDate": ,\n "deleteDate": ""\n }\n}\nWe can now use the from response to merge the new record into existing one:\nPOST /entities/0zu5sHM/_merge?uri=00TnuTu\n" }, { "title": "Quality rules", "": "", "pageLink": "/display//Quality+rules", "content": "Quality engine is responsible for preprocessing Entity when a specific precondition is met. This engine is started in the following cases:Rest operation () on /hco endpoint on operation () on /hcp endpoint on a validationOn parameter is set to true the first step in request processing is quality engine validation. MDM Manager Configuration should contain the following quality rules:hcpQualityRulesConfigshcoQualityRulesConfigshcpAffiliatedHCOsQualityRulesConfigsThese properties are able to accept a list of yaml files. Each file has to be added in environment repository in /config_files//mdm_mananger/config/.*quality-rules.yaml. Then each of these files has to be added to these variables in inventory //group_vars/gw-services/mdm_manager.yml. For request processing, files are loaded in the following order:hcpQualityRulesConfigshcpAffiliatedHCOsQualityRulesConfigsFor request processing, files are loaded only from the following configuration:hcoQualityRulesConfigsIt is a good practice to divide files in a common logic and a specific logic for countries. For example, Rules file names should have the following structure:hcp/hcp/affiliatedhco | common/country-* | quality-rules.yamlhcp-common-quality-rules.yamlhcp-country--quality-rules.yamlQuality rules yaml file is a set of rules, which will be applied on Entity. Each rule should have the following yaml structure: preconditionsmatch – the condition is met when the attribute matches the pattern or string value provided in values' list. e.g. source – the condition is met when the crosswalk type ends with the values provided in the list. e.g. default – (Empty)/Default value for precondition is "True" value. The preconditions section in yaml file is not eckmandatory – this type of check evaluates if the attribute is mandatory. When the check is correctly evaluated, then the action will be performed. ndatoryGroup – this check will pass when all attributes provided in the list will not be empty. e.g. mandatoryArray – this check will pass when the array provided in the list will contain at least minimum number of values. e.g. actionWhen the precondition and check are properly evaluated then a specific action can be invoked on entity – this action replaces attribute values which match the specific pattern with the value from replacement parameter. e.g. reject – this action rejects the entity when the precondition is met. e.g. remove- based on the madatoryGroup attributes list, this action removes these attributes from entity. e.g. set – this action sets the value provided in parameter on the specific attribute. e.g. 
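For illustration, a hypothetical rule using the set action could look like the sketch below; the YAML keys, file name and values are assumptions based on the descriptions above, not a verified schema, so adapt it to the real quality-rules format before use:

# Hypothetical quality-rules file written from a shell (structure and values are assumed)
cat > hcp-country-xx-quality-rules.yaml <<'EOF'
rules:
  - precondition:
      match:
        attribute: Country
        values:
          - "XX"             # placeholder market code
    action:
      type: set
      attribute: TypeCode    # attribute to set (illustrative)
      value: "DefaultType"   # value written onto the attribute
EOF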
modify – this action sets the value on the specific attribute based on attributes in entity. To reference entity's attributes, use curly braces {}. This rule adds country prefix for each element in specialties array. e.g. chineseNamesToEnglish – this action translates the attribute from source () to target attribute (English). e.g. addressDigest – this action counts MD5 based on attributes and creates Crosswalk for MD5 digest. e.g. autofillSourceName - this action adds SourceName if it not exists to given attributeaction: type: autofillSourceName attribute: AddressesThe logic of the quality engine rule check is as follows:The precondition is checked (if precondition section is not defined, then the default value is True)Then the check is evaluated on specified Entity (if check section is not defined, then by default the action will be executed without check evaluating)If the check will return attributes to process, then the action is executed.Quality rules DOC: " }, { "title": "Relation replacer", "": "", "pageLink": "/display//Relation+replacer", "content": "After getRelation operation is invoked, "Relation Replacer" feature can be activated on returned relation entity object. When entity is merged, sometimes does not replace objectUri id with new updated value. This process will detect such situation and replace objectUri with correct URI from crosswalk. Relation replacer process operates under the following conditions:Relation replacer will check and StartObject sections.When objectUri is different from each entity id from crosswalks section, then objectURI is replaced with entity id from crosswalks.When crosswalks contain multiple entries in list and there is a situation that crosswalks list contains different entity uri, relation replacer process ends with the following warning: "Object has more than one possible uri to replace" – it is not possible to decide which entity should be pointed as or after merge." }, { "title": " server", "": "", "pageLink": "/display//SMTP+server", "content": "Access to server is granted for each region separately:AMERDestination Host: : 25Authentication: NONEEMEADestination Host: : 25Authentication: NONEAPACDestination Host: : 25Authentication: NONETo request access to server there is need to fill in the relay registration form through portal." }, { "title": "Airflow", "": "", "pageLink": "/display/GMDM/Airflow", "content": "" }, { "title": "Overview", "": "", "pageLink": "/display/GMDM/Overview", "content": "ConfigurationAirflow is deployed on kubernetes cluster using official airflow helm chart: airflow chart adjustments(creting 's, k8s jobs, etc.) are located in components repository.Environment's specific configuration is located in cluster configuration ploymentLocal deploymentAirflow can be easily deployed on local kubernetes cluster for testing purposes. All you have to do is:If deployment is performed on windows machine please make sure that , , and .config files have unix line endings. Otherwise it will cause deployment errors.Edit .config file to enable airflow deployment(and any other component you want. To enable component it needs to have assigned value greater than 0\nenable_airflow=1\nRun ./ file located in main helm directory\n./\nEnvironment deploymentEnvironment deployment should be performed with great care.If deployment is performed on windows machine please make sure that , , and .config files have unix line endings. 
Otherwise it will cause deployment errors.Environment deployment can be performed after connecting the local machine to the remote kubernetes cluster.Prepare the airflow configuration in the cluster env .Adjust the .config file to update airflow (and any other service you want)\nenable_airflow=1\nRun the ./ script to update the kubernetes clusterCheck if all airflow pods are working correctlyHelm chart configurationYou can find the available configuration described in the values.yaml file in the airflow github repository. chart adjustmentsIn addition to the base airflow kubernetes resources, the following are created:Kubernetes job used to create additional usersPersistent volume claim for airflow dags data (for each prod/nonprod tenant)Secrets from cretsWebserver ingressDefinitions: helm templatesDags deploymentDags are deployed using the playbook: install_mdmgw_airflow_services_k8s.ymlThe playbook uses the kubectl command to work with airflow.You can run this playbook locally:To modify the list of dags that should be deployed during the playbook run, you have to adjust the airflow_components list:e.g.\nairflow_components:\n - lookup_values_export_to_s3\nRun the playbook (adjust the environment)e.g.\nansible-playbook install_mdmgw_airflow_services.yml -i inventory/emea_dev/inventory\nOr with a job:" }, { "title": "Airflow DAGs", "": "", "pageLink": "/display//Airflow+DAGs", "content": "" }, { "title": "●●●●●●●●●●●●●●● [", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionDag used to prepare data from the FLEX(US) tenant to be loaded into the  tenant. The kafka connector on the FLEX environment uploads files every day to the bucket as multiple small files. This dag takes those multiple files and concatenates them into one. The team downloads this concatenated file from the bucket and uploads it into the tenant via the batch service.Example" }, { "title": "active_hcp_ids_report", "": "", "pageLink": "/display//active_hcp_ids_report", "content": "Generates a report of active HCPs from the defined countries.ExampleCreate a mongo collection from a query on the entity_history collectionExport the collection to excel formatExport the report to the directory" }, { "title": " reports", "": "", "pageLink": "/display//China+reports", "content": "DescriptionSet of dags that produce reports on the gbl environment that are later sent via email:Single reports are generated by executing the defined queries on mongo, then the extracts are published on . Then the main dags download the exports from and send an email with all of them.Main dag example:Report generating example:Dags listDags executed :china_generate_reports_gbl_prod - main dag that triggers the restchina_affiliation_status_report_gbl_prodchina_dcr_statistics_report_gbl_prodchina_hcp_by_source_report_gbl_prodchina_import_and_gen_dcr_statistics_report_gbl_prodchina_import_and_gen_merge_report_gbl_prodchina_merge_report_gbl_prodDags executed :china_monthly_generate_reports_gbl_prod - main dag that triggers the rest china_monthly_hcp_by_channel_report_gbl_prodchina_monthly_hcp_by_city_type_report_gbl_prodchina_monthly_hcp_by_department_report_gbl_prodchina_monthly_hcp_by_gender_report_gbl_prodchina_monthly_hcp_by_hospital_class_report_gbl_prodchina_monthly_hcp_by_province_report_gbl_prodchina_monthly_hcp_by_source_report_gbl_prodchina_monthly_hcp_by_SubTypeCode_report_gbl_prodchina_total_entities_report_gbl_prod" }, { "title": "clear_batch_service_cache", "": "", "pageLink": "/display/GMDM/clear_batch_service_cache", "content": "DescriptionThis dag is used to clear the batch-service cache (the mongo batchEntityProcessStatus collection). 
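For reference, the cache clearing that this dag performs goes through the batch-service endpoint named below; a minimal curl sketch, where the host, the API key header and the use of POST are assumptions, and the body fields follow the input parameters described next:

curl -X POST "https://<batch-service-host>/batchController/testBatchTAGS/_clearCache" \
  -H "Content-Type: application/json" \
  -H "apikey: <api-key>" \
  -d '{"fileName": "inputFile.csv", "batchName": "testBatchTAGS"}'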
It deletes all records specified in csv file for specified clear cache batch-service batchController/{batch_name}/_clearCache endpoint is used.Dag used by put parameters:batchNamefileName\n{\n "fileName": "inputFile.csv",\n "batchName": "testBatchTAGS"\n}\nMain stepsDownload input file from directorySplits the file so that is has maximum of $partSize recordsExecutes request to batch-service batchController/{batch_name}/_clearCacheMove input file to archive directoryDeletes temporary workspace from report with information how many records have been deleted \n{'removedRecords': 1}\n\nExample" }, { "title": "distribute_nucleus_extract", "": "", "pageLink": "/display//distribute_nucleus_extract", "content": "DEPRECATEDDescriptionDistributes extracts that are sent by nucleus to directory between multiple directories for the respective countries that are later used by inc_batch_* dagsInput and output directories are configured in dags configuration file:Dag:" }, { "title": "export_merges_from_reltio_to_s3", "": "", "pageLink": "/display//export_merges_from_reltio_to_s3", "content": "DescriptionDag used to schedule Reltio merges export, adjust file format and then uload file to snowflake eps:Clearing workspace after previous runCalculating time range for incremental loads. For full exports(eg. export_merges_from_reltio_to_s3_full_emea_prod) this step sets start and end date as None. This way full extract is produced. For incremental loads start and end dates are calculated using last_days_count variableScheduling reltio exportWaiting for reltio export file( sensor).Postprocessing fileUpload file to snowflake directoryExample" }, { "title": "get_rx_audit_files", "": "", "pageLink": "/display//get_rx_audit_files", "content": " files from:SFTP server(external) directory(internal - constant)Files are the uploaded to defined directory that is later used by inc_batch_rx_audit dag.Example linksRX_AUDIT" }, { "title": "historical_inactive", "": "", "pageLink": "/display//historical_inactive", "content": "DescriptionDag used to implement history inactive processSteps:Download csv file with crosswalks of entities to recreateRecreate entities and upload to directory as stored procedureExample History Inactive" }, { "title": "hldcr_reconciliation", "": "", "pageLink": "/display//hldcr_reconciliation", "content": "DescriptionHL flow occasionally blocked some ' statuses from being sent to PforceRx in an outbound file, because has not received an event from , informing about Change Request resolution. The exact event expected is CHANGE_REQUEST_ prevent the above, HLDCR Reconciliation process runs regularly, doing the following steps:Query MongoDB store (Collection DCRRequests) for in CREATED status. Export result as r each VR from the list, generate a CHANGE_REQUEST_CHANGED event and post it to .Further processing is as usual - DCR Service enriches the event with current changeRequest state. If the changeRequest has been resolved, it updates the status in " }, { "title": "HUB Reconciliation process", "": "", "pageLink": "/display/GMDM/HUB+Reconciliation+process", "content": "The reconciliation process was created to synchronize Reltio with HUB. Because Reltio sometimes does not generate events, and therefore these events are not consumed by HUB from the queue and the HUB platform is out of sync with data. External Clients dose not receive the required changes, which cause that multiple systems are not consistent. To solve this problem this process was designed. 
The fully automated reconciliation process generates these missing events. Then these events are sent to the inbound topic, the HUB platform processes these events, updates the mongo collection and routes the events to the external Clients.WorkflowThe following diagram presents the reconciliation process steps:This directed acyclic diagram presents the steps that are taken to compare and HUB and produce the missing events. The diagram is divided into the following sections:Initialization and Reltio Data preparation - in this section the process invokes the Reltio export and uploads the full export to the bucket.clean_dirs_before_init, init_dirs, timestamp – these 3 tasks are responsible for the directory structure preparation required in the further steps and for capturing the timestamp required for the reconciliation process. Reltio and HUB data change over time and the export is made at a specific point in time. We need to ensure that during the comparison only entities that were changed before are compared. This guarantees that only correct events are generated and consistent data is compared. entities_export – the task invokes the Reltio Export API and triggers the export job in Reltio. sensor_s3_reltio_file – this task is an S3 bucket sensor. Because the Reltio export job is an asynchronous task running in the background, the file sensor checks the location ‘hub_reconciliation//RELTIO/inbound/’ and waits for the export. When the success criteria are met, the process exits with success. The timeout for this job is set to , the poke interval is set to . download_reltio_s3_file, unzip_reltio_export, mongo_import_json_array, generate_mongo_indexes – these 4 tasks are invoked after successful export generation. The zip is downloaded and extracted to a JSON file, then this file is uploaded to the mongo collection. The generate_mongo_indexes task is responsible for generating mongo indexes on the newly uploaded collection. The indexes are created to optimize performance. archive_flex_s3_file_name – after a successful mongo import the Reltio export is archived for future reference. HUB validation - Reltio ↔ HUB comparison - the main comparison and events generation logic is invoked in this SUB DAG. The details are described in the section below. Events generation - after the data comparison, the generated events are sent to the selected topics. Then standard events processing begins. The details are described in HUB . Please check the following documents to find more details: Entity change events processing (Reltio)Event filtering and routing rulesProcessing events on client sideHUB validation - Reltio ↔ HUB comparisonThis directed acyclic diagram (SUB DAG) presents the steps that are taken to compare HUB and data in both directions. Because the data is already uploaded and the HUB (“entityHistory”) collection is always available, we can immediately start the comparison process. mongo_find_reltio_hub_differnces - this process compares data to HUB data. The mongo aggregation pipeline matches the entities from the Reltio export to HUB profiles located in the mongo collection by entity URI (ID). All Reltio profiles that are not present in the HUB data are marked as missing. All attributes in are compared to the HUB profile attributes - when a difference is found, it means that the profile is out of sync and a new event should be generated. Based on these changes the HCP_CHANGED or HCO_CHANGED events are generated.When the profile is missing, the HCP_CREATED or HCO_CREATED events are generated. mongo_find_hub_reltio_differnces - this process compares HUB entities to data. 
The process is designed to find only the entities missing in ; based on these differences the HCP_REMOVED or HCO_REMOVED events are generated.The Mongo aggregation pipeline matches the entities from the HUB mongo collection to Reltio profiles by entity URI (ID). All HUB profiles that are not present in the Reltio export data are marked as missing for future reference. mongo_generate_hub_events_differences - this task is related to the automated reconciliation process. The full process is described in this document.Configuration and schedulingThe process can be started on demand. The configuration for this process is stored in the MDM Environment configuration repository. The following section is responsible for the activation of the HUB Reconciliation process on the selected environment:\nactive_dags:\n gbl_dev:\n - hub_\nThe file is available in "inventory/scheduler/group_vars/all/all.yml"To activate the Reconciliation process on a new environment, the new environment should be added to the "active_dags" list. Then the "ansible-playbook install_airflow_dags.yml" playbook needs to be invoked. After this the new process is ready for use in . Reconciliation processTo synchronize Reltio with HUB, and therefore synchronize profiles in with the external Clients, the fully automated process is started after the full HUB<->Reltio comparison. This is the "mongo_generate_hub_events_differences" task. The automated reconciliation process generates events. Then these events are sent to the inbound topic, the HUB platform processes these events, updates the mongo collection and routes the events to flex .The following diagram presents the reconciliation steps:Automated reconciliation process generates events:The following events are generated during this process:HCO_CHANGED / HCP_CHANGED - in this case, has not generated an ENTITY_CHANGED event for the entity.Based on the Reltio to HUB comparison, when the comparison result contains ATTRIBUTE_VALUE_MISSING or for the entity, the event is generated.The events are aggregated based on , so only one change event for the selected entity is generated.HCO_CREATED / HCP_CREATED - in this case, has not generated an ENTITY_CREATED event for the entity.Based on the Reltio to HUB comparison, when the comparison result contains the ENTITY_MISSING difference, the create event is generated. It means that contains the entity and this entity is missing from the HUB mongo collection, so there is a need to generate and send the missing CREATED events.HCO_REMOVED - in this case, has not generated an ENTITY_REMOVED event for the entity.Based on the HUB to Reltio comparison, when the comparison result contains the ENTITY_MISSING difference, the delete event is generated. It means that the HUB cache contains an additional entity that was deactivated/removed from the system, so there is a need to generate and send the missing REMOVED events.HCO_MERGED and HCO_LOST_MERGE - in this case, has not generated an ENTITY_MERGED event for the winner entity and for the loser entity.Based on the Reltio extracted data and the HUB mongo cache these events are generated.Entities from the source data are matched by crosswalk value with data.When the Reltio entity does not match the Mongo Entity URI and does not contain the entity present in and the data that was matched by crosswalk value, it means that this entity was merged in .Then the MERGED and event is generated for these entities.2. Next, Event Publisher receives events from the internal topic and calls to retrieve the latest state of . Entity data in is added to the event to form a full event. 
For REMOVED events, where Entity data is by definition not available in Reltio at the time of the event, Event Publisher fetches the cached Entity data from the database instead.3. Event Publisher extracts the metadata from the Entity (type, country of origin, source system).4. Entity data is stored in the MongoDB database for later use.5. For every Reltio event, there are two events created: one in Simple mode and one in (full) mode. Based on the metadata, and the Routing Rules provided as a part of the application configuration, the list of the target destinations for those events is created. The event is sent to all matched destinations to the target topic (-out-full-) when the event type is full or (-out-simple-) when the event type is simple. " }, { "title": "HUB Reconciliation Process ", "": "", "pageLink": "/display/GMDM/HUB+Reconciliation+Process+", "content": "The Hub reconciliation process starts by downloading a properties file with the following information:reconciliationType - reconciliation type - possible values: FULL_RECONCILIATION or (since last run)eventType - event type - it is used when generating events for kafka - possible values: FULL or CROSSWALK_ONLYreconcileEntities - if set to true, entities will be reconciledreconcileRelations - if set to true, relations will be reconciledreconcileMergeTree - if set to true, the mergeTree will be reconciledThe process then sets the hub reconciliation properties.If reconcileEntities is set to true, the process for reconciling entities is started:The process gets the last timestamp when entities were last exported.The entities export is triggered in Reltio - this step is done by a groovy script.The process checks if the export is finished by verifying whether the file with exists in the folder /us//inboud/hub/hub_reconciliation/entities/inbound/entities_export_ In this step the process sets the timestamp for future reconciliation of entities - it is stored in airflow variables.This step is responsible for checking which entities have been changed and generating events for the changed entities:first we get the export file from the folder /us//inboud/hub/hub_reconciliation/entities/inbound/entities_export_we unzip the file in a bash scriptfor the unzipped file there are two options:if we , the calculateChecksum groovy script is executed, which calculates a checksum for the exported entities and generates the event with only the checksumif we don't, the event is generated with the whole entityin the last step we send those generated events to the specified kafka topics.Events from the topic will be processed by the reconciliation service.The reconciliation service checks, based on checksum or object changes, whether an event should be generated:it compares the checksum, if it exists, from with the one that we have in the entityHistory collectionit compares the entity objects from with the ones that we have in mongo in the entityHistory collection if the checksum is absent - objects on both sides are normalized before the compare processit compares SimpleCrosswalkOnlyEntity objects if the CROSSWALK_ONLY reconciliation event type is chosenFinally, the export folder on is moved from inbound to archive. 
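For orientation, an illustrative version of the properties file described at the top of this page; the keys and allowed values come from that description, while the file name and the exact download location are placeholders:

# Sketch of the reconciliation properties file (name and location assumed)
cat > hub_reconciliation.properties <<'EOF'
# FULL_RECONCILIATION or the incremental mode (since last run)
reconciliationType=FULL_RECONCILIATION
# FULL or CROSSWALK_ONLY
eventType=CROSSWALK_ONLY
reconcileEntities=true
reconcileRelations=true
reconcileMergeTree=false
EOF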
If reconcileRelations is set to true, the process for reconciling relations is started:The process gets the last timestamp when relations were last exported.The relations export is triggered in Reltio - this step is done by a groovy script.The process checks if the export is finished by verifying whether the file with exists in the folder /us//inboud/hub/hub_reconciliation/relations/inbound/relations_export_ In this step the process sets the timestamp for future reconciliation of relations - it is stored in airflow variables.This step is responsible for checking which relations have been changed and generating events for the changed relations:first we get the export file from the folder /us//inboud/hub/hub_reconciliation/relations/inbound/relations_export_we unzip the file in a bash scriptfor the unzipped file there are two options:if we , the calculateChecksum groovy script is executed, which calculates a checksum for the exported relations and generates the event with only the checksumif we don't, the event is generated with the whole relationin the last step we send those generated events to the specified kafka topic.Events from the topic will be processed by the reconciliation service.The reconciliation service checks, based on checksum or object changes, whether an event should be generated:it compares the checksum, if it exists, from with the one that we have in mongo in the entityRelation collectionit compares the relation objects from with the ones that we have in mongo in the entityRelation collection if the checksum is absent - objects on both sides are normalized before the compare processit compares SimpleCrosswalkOnlyRelation objects if the CROSSWALK_ONLY reconciliation event type is chosenFinally, the export folder on is moved from inbound to archive. If reconcileMergeTree is set to true, the process for reconciling the merge tree is started:The process gets the last timestamp when the merge tree was last exported.The merge tree export is triggered in Reltio - this step is done by a groovy script.The process checks if the export is finished by verifying whether the file with exists in the folder /us//inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_ In this step the process sets the timestamp for future reconciliation of the merge tree - it is stored in airflow variables.This step is responsible for checking which merge tree objects have been changed and generating events for the changed merge tree objects:first we get the export file from the folder /us//inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_we unzip the file in a bash scriptfor the unzipped file there are two options:if we , the calculateChecksum groovy script is executed, which creates a ReconciliationMergeEvent with the uri of the main object and the list of losers' urisif we don't, the event is generated with the whole merge tree objectin the last step we send those generated events to the specified kafka topic.Events from the topic will be processed by the reconciliation service.The reconciliation service sends a merge and a lost_merge PublisherEvent for the winner and every loser.Finally, the export folder on is moved from the inbound to the archive folder." }, { "title": "import_merges_from_reltio", "": "", "pageLink": "/display/GMDM/import_merges_from_reltio", "content": "DescriptionSchedules the reltio merges export and imports it into mongo.This is scheduled by china_import_and_gen_merge_report and the data imported into mongo is used by china_merge_report to generate report files.Example" }, { "title": "import_pfdcr_from_reltio", "": "", "pageLink": "/display//import_pfdcr_from_reltio", "content": "DescriptionSchedules the reltio entities export, downloads it from , makes small changes in the export and imports it into mongo.This is scheduled by 
china_import_and_gen_dcr_statistics_report and the data imported into mongo is used by china_dcr_statistics_report to generate report files.Example" }, { "title": "inc_batch", "": "", "pageLink": "/display/", "content": "DescriptionProcess used to load idl files stored on into . This dag is based on the mdmhub inc_batch_channel component.Creates a batch instance in mongo using the batch-service /batchController endpointDownloads idl files from the directoryExtracts compressed archivesPreprocesses files (eg. dos2unix)Runs the inc_batch_channel componentArchives input files and reportsExample" }, { "title": "Initial events generation process", "": "", "pageLink": "/display/GMDM/Initial+events+generation+process", "content": "Newly connected clients do not have knowledge about entities which were created in MDM before they connected. Because of this, the initial event loading process was designed. The process loads events about already existing entities to the client's kafka topic. Thanks to this, the new client is synced with MDM.WorkflowThe process was implemented as an Airflow DAG:Process steps:prepareWorkingDir - prepares the directory structure required for the process,getLastTimestamp - gets the time marker of the last process execution. This marker is used to determine which events have already been sent by a previously running process. If the process is run for the first time, the marker always has the value 0,getTimestamp - gets the current time marker,generatesEvents - generates the events file based on the current state. Data used to prepare event messages is selected based on the condition stModificationDate > lastTimestamp,divEventsByEventKind - divides the events file based on event kind: simple or full,loadFullEvents* - a group of steps that load full events to a specific topic. The number of these steps is based on the number of topics specified in the configuration,loadSimpleEvents* - similar to the above, these steps load simple events to a specific topic. The number of these steps is based on the number of topics specified in the configuration,setLastTimestamp - saves the current time marker. It will be used in the next process execution as the last time marker.Configuration and schedulingThe process can be started on demand.The process configuration is stored in the MDM Environment configuration repository.To enable the process on a specific environment:The process name should follow the template "generate_events_for_[client name]" and be added to the list "airflow_components" which is defined in the "inventory/[env name]/group_vars/gw-airflow-services/all.yml" file,Create a configuration file in "inventory/[env name]/group_vars/gw-airflow-services/generate_events_for_[client name].yml" with content as below:The process configuration\n---\n\ngenerate_events_for_test_name: "generate_events_for_test" #Process name. 
It has to be the same as in "airflow_components" list avaiable in all.yml\ngenerate_events_for_test_base_dir: "{{ install_base_dir }}/{{ generate_events_for_test_name }}"\ngenerate_events_for_test:\n dag: #Airflow's DAG configuration section\n template: "generate_" #do not change\n variables:\n DOCKER_URL: "tcp://:2376" #do not change\n dataDir: "{{ generate_events_for_test_base_dir }}/data" #do not change\n configDir: "{{ generate_events_for_test_base_dir }}/config" #do not change\n logDir: "{{ generate_events_for_test_base_dir }}/log" #do not change\n tmpDir: "{{ generate_events_for_test_base_dir }}/tmp" #do not change\n user:\n id: "7000" #do not change\n name: "mdm" #do not change\n groupId: "" #do not change\n groupName: "docker" #do not change\n mongo: #mongo configuration properties\n host: "localhost"\n port: "27017"\n user: "mdm_gw"\n password: "{{ secret_generate_events_for_ssword }}" #password is taken from the secret.yml file\n authDB: "reltio"\n kafka: #kafka configuration properties\n username: "hub"\n password: "{{ secret_generate_events_for_ssword }}" #password is taken from the secret.yml file\n servers: ":9094"\n properties:\n "tocol": SASL_SSL\n "chanism": PLAIN\n "uststore.location": /opt/kafka_utils/config/kafka_truststore.jks\n "ssword": "{{ secret_generate_events_for_perties.sslTruststorePassword }}" #password is taken from the secret.yml file\n "": ""\n countries: #Events will be generated only for below countries\n - CR\n - BR\n targetTopics: #Target topics list. It is array of pairs topic name and event Kind. Only simple and full event kind are allowed.\n - topic: dev-out-simple-int_test\n eventKind: simple\n - topic: dev-out-full-int_test\n eventKind: full\n\n...\nthen the playbook install_mdmgw_services.yml needs to be invoked to update runtime configuration." }, { "title": "lookup_values_export_to_s3", "": "", "pageLink": "/display/GMDM/lookup_values_export_to_s3", "content": " used to extract lookup values from mongo and upload it to . The file from i then pulled into snowflake.Example" }, { "title": "MAPP IDL Export process", "": "", "pageLink": "/display/GMDM/MAPP+IDL+Export+process", "content": " used to generate excel with entities export. Export is based on two monogo collections: lookupValues and entityHistory. Excel files are then uploaded into directoryExcels are used in process on gbl_prod environment.Example" }, { "title": "mapp_update_idl_export_config", "": "", "pageLink": "/display/GMDM/mapp_update_idl_export_config", "content": " is used to update configuration of mapp_idl_excel_template dags stored in nfiguration is stored in mappExportConfig collection and consists of information about configuration and crosswalks order for each country.Example" }, { "title": "merge_unmerge_entities", "": "", "pageLink": "/display//merge_unmerge_entities", "content": "DescriptionThis dag implements batch merge & unmerge process. It download file from with list of files to merge or unmerge and then process documents. To process documents batch-service is used. After documents are processed report is generated and transferred to directory.FlowBatch service batch creationDownloading source file from s3Input file conversion to unix formatFile processingRecords are sent to batch service using /bulkService ter all entities are sent then stage is closed and statistics are written to stage statisticsWaiting for batch to be completedrecords sent to batch service are then transferred to manager internal topic and then processed by manager which sends requests to Reltio. 
If all events are processed then batch processing stage is closed which causes whole batch to be is generated using batchEntittyProcessStatus mongo collection and saved in temporary report collectionReport is exported and saved in bucket altogether with input fileInput directory is cleared Tmp report mongo collection is dropped " }, { "title": "micro_bricks_reload", "": "", "pageLink": "/display/GMDM/micro_bricks_reload", "content": "DescriptionDag extract data from snowflake table that contains microbricks exceptions. Data is then in git repository from where it will be pulled by consul and loaded into mdmhub components.If microbricks mapping file has changed since run then we'll wait for mapping reload and  copy events from {{ env_name }}-internal-microbricks-changelog-events topic into {{ env_name }}-internal-microbricks-changelog-reload-events"Example" }, { "title": "move_ods_", "": "", "pageLink": "/pages/tion?pageId=", "content": "DescriptionDag copies files from external source buckets and uploads them to our internal bucket to the desired location. This data is later used in inc_batch_* dagsExample" }, { "title": "rdm_errors_report", "": "", "pageLink": "/display//rdm_errors_report", "content": "DEPRECATEDDescriptionThis dags generate report with all rdm errors from collection and publish it to bucket.Example" }, { "title": "reconcile_entities", "": "", "pageLink": "/display//reconcile_entities", "content": "Details:Process allowing export data from mongo based on query and generate  request for each package or generate a flat file from exported entities and push to eps:Pull config from requeste.g. {'entitiesQuery': {'country': {'$in': ['FR']}, 'sources': {'$in': ['ONEKEY']}}}Drop mongo collections used in previous runGenerating list of entities and/or relations to reconcile using provided queryTrigger /reconciliation/entities and/or /reconciliation/relations endpoint for all entities and relations from the list from previous step. This will cause generating Reltio event and sending it to processing.Example" }, { "title": "reconciliation_ptrs", "": "", "pageLink": "/display//reconciliation_ptrs", "content": "DEPRECATEDDetailsProcess allowing to reconcile events for ptrs source.Logic: Reconciliation processSteps:Downloading input file with checksums from directoryDrop mongo collections used in previous runInporting input file into mongo reconciliation_ptrs collection and prepare output collection reconciliationRecords_ptrsTrigger /resendLastEvent publisher endpoint to resend event for each entity from input file that checksum differs. This will cause event to be generated to ptrs output topicExample" }, { "title": "reconciliation_snowflake", "": "", "pageLink": "/display//reconciliation_snowflake", "content": " allowing to reconcile events for snowflake topic.Logic: Reconciliation processSteps:Downloading input file with entities checksums from directoryDrop mongo collections used in previous runInporting input file into mongo reconciliation_snowflake collection and prepare output collection reconciliationRecords_snowflakeTrigger /resendLastEvent publisher endpoint to resend event for each entity from input file that checksum differs. 
This will cause event to be generated to snowflake topic and consumed by snowflake kafka connectorExample" }, { "title": "Kubernetes", "": "", "pageLink": "/display/", "content": "" }, { "title": "Platform Overview", "": "", "pageLink": "/display/GMDM/Platform+Overview", "content": "In the latest physical architecture, services are deployed in clusters managed by (PDKS)There are non-prod and prod cluster for each region: , ,  ArchitectureThe picture below presents the layout of HUB services in cluster managed by   NodesThere are two groups of nodes:Static, stateful nodes that have storage configured dedicated for running backend stateful servicesInstance Type:  .2xlargeNode labels:  nodes - dedicated for stateless services that are dynamically scaledInstance Type:  .2xlargeNode labels:  storage appliance is used to manage persistence volumes required by stateful nfiguration: Default storage Class:  pwx-repl2-scReplication: 2Operators MDM HUB uses operators to manage applications like:Application NameOperator (with operator0.6.2KafkaStrimzi0.27.xElasticSearchElasticsearch operator1.9.0PrometheusPrometheus operator8.7.3MonitoringCluster are monitored by local service integrated with central Prometheus and services For details got to monitoring section.Logging All logs from HUB components are sent to Elastic service and can be discovered by r details got to  dashboard section. Backend componentsNameVersionMongoDB4.2.6Kafka2.8.1ElasticSearch7.13.1Prometheus2.15.2Scaling TO BE ImplementationKubernetes objects are implemented using helm - package manager for . There are several modules that connected together makes the application:operators - delivers a set of operators used to manage backend components of : Mongo operator, operator, operator, operator and operator,consul - delivers consul server instance, user management tools and git2consul - the tool used to synchronize consul key-value registry with a git repository,airflow - deploys an instance of Airflow server,eck - using operator creates EFK stack - Kibana, and , - installs server,kafka-resources - installs topics, connector instances, managed users and , - using operators installs a server,-resources - delivers basic configuration: users, plugins etc,mongo - installs mongo server instance, configures users and their permissions,monitoring - install server and exporters used to monitors resources, components and endpoints,migration - a set of tools supported migration from old ( based environments) to new infrastructure,mdmhub - delivers the components, their configuration and l above modules are stored in application source code as a part of module nfigurationThe runtime configuration is stored in mdm-hub-cluster-env repository. Configuration has following structure:[region]/ - MDMHUB rerion eg: , amer,     -  cluster class. or prod values are possible,        namespaces/ - logical spaces where coponents are deployed            monitoring/ - configuration of prometheus stack                service-monitors/                values.yaml - namespace level variables            [region]-dev/ - specific configuration for dev env eg.: , hub components configuration                config_files/ - MDMHUB components configuration files                    all|mdm-manager|batch-service|.../                values.yaml - variables specific for dev env.                
kafka-topics.yaml - kafka topic configuration            [region]-qa/ - specific configuration for qa env                config_files/                    all|mdm-manager|batch-service|.../            [region]-stage/ - specific configuration for stage env                config_files/                    all|mdm-manager|batch-service|.../                values.yaml                kafka-topics.yaml            [region]-prod/ - specific configuration for prod env                config_files/                    all|mdm-manager|batch-service|.../                values.yaml                kafka-topics.yaml            [region]-backend/ - backend services configuration: EFK stack, , etc.                #eck specific files                values.yaml            kong/ - configuration of proxy                values.yaml            airflow/ - configuration of scheduler                values.yaml        users/ #users configuration            mdm_test_user.yaml            callback_service_user.yaml            ...        values.yaml #cluster level variables        secrets.yaml #cluster level sensitive data    values.yaml #region level variablesvalues.yaml #values common for all environments and #implementation of deployment procedureApplication is deployed by script. The script does this in the following steps:Decrypt sensitive data: passwords, certificates, token, etc,Prepare the order of values and secrets precedence (the last listed variables override all other variables):common values for all environments,region values,cluster variables,users values,namespace helm package,Do some package customization if required,Install helm package to the selected ploymentBuildJob: mdm-hub-inbound-services/feature/kubernatesDeployAll Kubernetes deployment jobsAMER:Deploy backend: , , mongoDB, EFK, Consul, , MDM HUBAdministrationAdministration tasks and standard operating procedures were described here." }, { "title": "Migration guide", "": "", "pageLink": "/display/GMDM/Migration+guide", "content": "Phase 0Validate configuration:validate if all configuration was moved correctly - compare application.yml files, check topic name prefix (on k8s env the prefix has 2 parts), check Reltio confguration etc,Check if reading event from is disabled on k8s - reltio-subscriber,Check if reading evets from MAP is disabled on k8s - map-channel,Check if event-publisher is configured to publish events to old - all client topics (*-out-*) without eck if network traffic is opened:from old servers to new REST api endpoint,from k8s cluster to old ,from k8s cluster to old REST endpoint,Make a mongo dump of data collections from mongo - remember start date and time:find mongo-migration-* pod and run shell on /opt/mongo_utils/datamkdir datacd datanohup &start date is shown in the first line of log file:head nohup.out #example output → [Mon Jul  4 12:09:32 UTC 2022] Dumping all collections without: entityHistory, entityMatchesHistory, entityRelations and from source database mongovalidate the output of dump tool by:cd /opt/mongo_utils/data/data tail -f nohup.outRestore dumped collections in the new mongo instance:cd /opt/mongo_utils/data/datamv nohup.out nohup.out.dumpnohup dump/ &tail -f nohup.out #validate the outputValidate the target database and check if only entityHistory, entityMatchesHistory, entityRelations and coolections were copied from source. 
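As an aside for the consumer-group step a little further below (creating the ${new_env}-event-publisher group on the old broker with its offset set to the start time of the mongo dump), the standard Kafka CLI can be used; a hedged sketch with a placeholder broker address and client config, reusing the example dump start time from the log above:

kafka-consumer-groups.sh --bootstrap-server <old-broker-host>:9094 \
  --command-config <client-config>.properties \
  --group <new_env>-event-publisher \
  --topic <old_env>-internal-reltio-proc-events \
  --reset-offsets --to-datetime 2022-07-04T12:09:32.000 --execute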
If there are more collections than mentioned, you can delete eate a new consumer group ${new_env}-event-publisher for sync-event-publisher component on topic ${old_env}-internal-reltio-proc-events located on old instance. Set offset to start date and time of mongo dump - do this by command line client because has a problem with this action,Configure and run sync-event-publisher - it is responsible for the synchronization of mongo DB with the old environment. The component has to be connected with the old and Manager and the routing rules list has to be empty,Phase 1(External clients are still connected to old endpoints of rest services and kafka):Check if something is waiting for processing on kafka topics and there are active batches in batch service,If there is a data on kafka topics stop subscriber and wait until all data in enricher, callback and publisher will be processed. Check it out by monitoring input topics of these components,Wait unit all data will be processed by the snowflake connector,Disable jobs,Stop outbound (mdmhub) components,Stop inbound (mdmgw) components,Disable all 's DAGs assigned to the migrated environment,Turn off the snowflake connector at the old environment,Turn off sync-event-publisher on k8s environment, to copy mongo databases - copy only collections with caches, data collections were synced before (mongodump + sync-event-publisher). Before start check collections in old mongo instance. You can delete all temporary collections lookup_values_export_to_s3_*, reconciliation_* etc.#dumpingcd /opt/mongo_utils/datamkdir non_datacd non_datanohup &tail -f nohup.out #validate the output#restoringnohup dump/ &tail -f nohup.out #validate the outputEnable reltio subscriber on - check credentials and turn on route,Enable processing events on MAP queues - if map-channel exists on migrated environment,:forward all incoming traffic to the new instance of rules for paths from: \n MR-3140\n -\n Getting issue details...\n STATUS\n Delete all plugins and key-auth plugins might be required to remove routes, when playbook will throw a duplication error Snowflake connector located at k8s cluster, Turn on components (without sync-event-publisher) on k8s environment,Change api url and secret (manager apikey) in snowflake deployment configuration (Ansible)Chnage api key in depenedent stall dashboards,Add mappings to ,Add transaction topics to ase 2 (Environment run in K8s):Run Kibana Migration Tool to copy indexes, - after migration, to copy all data from old output topics to new ase 2 (All external clients confirmed that they switched their applications to new endpoints):Wait until all clients will be switched to new endpoints,Phase 3 (All environments are migrated to kubernetes):Stop old mongo instance,Stop fluentd and kibana, and at old environment,Decommission old environment remember after migrationReview requests on k8s + Resource management for components - doneMongoDB on k8s has only 1 instanceKong delete plugin - add consul-server service to ingress - consul ui already exposes UI redirect doesn't work due to consul being stubborn about using /ui path. 
Decision: skip this, send client new consul address Fix issue with manage and batch-service user being duplicated in mappings - doneVerify if mdm hub components are using external api address and switch to internal k8s service address - checked, confirmed nothing is using external addressesCheck if Portworx requires setting affinity rules to be running only on 3 nodesakhq - disable default k8s token automount - done" }, { "title": " tests", "": "", "pageLink": "/display//PDKS+Cluster+tests", "content": " used in testsAPI: resources3 static EC2 nodesCPU reserved >67%RAM reserved >67%0-4 dynamic EC2 nodes in , scaled based on loadEach app deployed in 1 replica, so no over testsExpected resultsNo downtimes of and all services exposed to enarioOne EKS node downForce node drain with timeout and grace period set to low . ResultsOne EKS node was unavailable for ~1 or ~3 minutes. Unavailability was handled correctly by by sending HTTP 500 responsesStatic nodes resources were reserved in more than 67%, so draining 1 of 3 nodes caused scaling up dynamic nodesEvery time managed to start new pod and heal all servicesThere was no need for manual operational work to fix anythingConclusionsTest was partially successfulFailover workedAPI downtime was shortNo operational work was requiredTo remove risk of services unavailabilityIncrease number of instancesTo reduce time of services unavailabilityTest if reducing time of a Pod to less than 60s could workScale testsExpected resultsEKS node scaling up and down should be automatic based on cluster capacity. ScenariosScale pods up, to overcome capacity of static , then scale sultsScale up and down test was carried out while doing failover tests. When 1 of 3 static nodes became unavailable, scaled up number of dynamic instances. First to 1 and then to 2. After a static node was once again operational, scaled down dynamic nodes to nclusions" }, { "title": "Portworx - storage administration guide", "": "", "pageLink": "/display/GMDM/Portworx+-+storage+administration+guide", "content": " is not longer used in clustersPortworx, what is it?Commercial product, validated storage solution and a standard for clusters. It uses AWS EBS volumes, adds a replication and provides a k8s storage class as a result. It then can be used just as any k8s storage by defining PVC. What problem does it solve?How to:use Portworx storageConfigure Persistent Volume Claim to use one of configured on classes are availablepwx-repl2-sc - storage has 2 replicas - use on non-prodpwx-repl3-sc - storage has 3 replicasextend volumesIn just change PVC requested size and deploy changes to a cluster with a job. No other action should be required. Example change: MR-3124 change persistent volumes claimscheck status, statistics and alertsTBDOne of the tools should provide volume status and statistics: is responsible for what is described in the table below. 
In short: if any change in setup is required, create a support ticket to a queue found on Support information with queues names (if the link doesn't work, go to  and search in the "PDKS Docs" section for "WTST-0299 Platform Standards"). Kubernetes Portworx storage class documentation. Portworx on docs" }, { "title": "Resource management for components", "": "", "pageLink": "/display/GMDM/Resource+management+for+components", "content": " components resources are managed automatically by the Vertical Pod Autoscaler - the table below is no longer applicable. K8s resource requests vs limits. Quotes on how to understand resource limits: "requests is a guarantee, limits is an obligation" - Galo Navarro. "When you create a Pod, the scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a peak in request rate." - How Pods with resource requests are scheduled. Resource configuration per component. IMPORTANT: the table is outdated. The current CPU and memory configuration is in the mdm-hub-cluster-env git repository. CPU [m]Memory [Mi]ComponentRequestLimitRequestLimitmdm-callback-servi16002560mdm-hub-reltio-subscriber2001000400640mdm-hub-event-publisher20020008001280mdm-hub-entity-enricher20020008001280mdm-api-rout8001280mdm-manag10002000mdm-reconciliation-servi16002560mdm-batch-service20020008001280Kafka500400010000 ( agent200400200500Elasticsearch5002000800020000Kibana100200010241536Airflow - scheduler2007005122048Airflow - webserver2007002561024Airflow - postgresql250-256-Airflow - stat56512Consul100500256512git2consul100500256512Kong10020005122048Prometheus200100015363072. Legend: requires tuning, proposal, deployed. Useful links - links helpful when talking about k8s resource management: and Containers, How Pods with resource requests are scheduled, Sizing pods for apps without fearing , cluster configuration git repository" }, { "title": "Standards and rules", "": "", "pageLink": "/display//Standards+and+rules", "content": " definition: Limit size for has to be defined in "m" (milliCPU), RAM in "Mi" (mebibytes) and storage in "Gi" (gibibytes) - see the illustrative snippet below. More details about resource limits can be found on . GB vs GiB: What’s the Difference Between Gigabytes and Gibibytes? At its most basic level, one GB is defined as 1000³ (1,000,000,000) bytes and one GiB as 1024³ (1,073,741,824) bytes. That means one GB equals 0.93 GiB. 
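The illustrative snippet mentioned above: it shows the request-vs-limit distinction and the required "m"/"Mi" units. The numbers loosely mirror the mdm-hub-event-publisher row of the (outdated) table; the live values are managed by the Vertical Pod Autoscaler and kept in the mdm-hub-cluster-env repository, and the image name is a placeholder:
kubectl apply -n amer-backend -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: resource-units-example
spec:
  containers:
    - name: app
      image: private-registry.example/mdm-hub-event-publisher:latest   # placeholder image
      resources:
        requests:
          cpu: 200m        # "m" = milliCPU, the guaranteed share used for scheduling
          memory: 800Mi    # "Mi" = mebibytes
        limits:
          cpu: 2000m       # hard ceiling enforced at runtime
          memory: 1280Mi   # exceeding this gets the container OOM-killed
EOF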
Source: check current resource configuration, check: Resource management for secure our images from changing of remote images which come from remote registries such as before using remote these as a base image in the implementation, you have to publish the remote image in our private registry objects naming standardsKafka topicsName template: <$envName>-$-$Topic Types: in - topics for producing events by external systemsout - topics for consuming events by external systemsinternal - topics used by servicesConsumer template: <$envName>-<$componentName>-[$processName]Standardized environment andardized component namesbatch-servicecallback-servicemdm-managerevent-publisherapi-routerreconciliation-servicereltio-subscriber" }, { "title": "Technical details", "": "", "pageLink": "/display//Technical+details", "content": " nameSubnet maskRegionDetailssubnet-07743203751be58b910.9.64.0/18amersubnet-0dec853f7c9e507dd10.9.0.0/18amersubnet-018f9a3c441b24c2b●●●●●●●●●●●●●●●apacsubnet-06e1183e436d67f2910.116.176.0/20apacsubnet-0e485098a41ac03ca10.90.144.0/20emeasubnet-067425933ced0e77f10.90.128.0/20emea" }, { "title": "SOPs", "": "", "pageLink": "/display//SOPs", "content": "Standard operation procedures are available here." }, { "title": "Downstream system migration guide", "": "", "pageLink": "/display//Downstream+system+migration+guide", "content": "This chapter describes steps that you have to take if you want to switch your application to new channel (Rest services)If you use the direct channel to communicate with the only thing that you should do is changing of endpoint addresses. The authentication mechanism, based on serving by stays unchanged. Please remember that probably network traffic between your services and has to be opened before switching your application to new HUB e following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with has to use new endpoints.EnvironmentOld endpointNew endpointAffected /STAGE/v1 DEV:8443/dev-ext ENGAGE, KOL_ONEVIEW, , , , , APIGBLUS QA:8443/qa-ext ENGAGE, KOL_ONEVIEW, , , , , ENGAGE, KOL_ONEVIEW, , , , , PROD PROD ENGAGE, KOL_ONEVIEW, , , , , MULEManager APIGBLUS PROD APIEMEA DEV/QA/STAGE/v1 DEV:8443/dev-ext , PforceRx, JORouter APIEMEA DEV:8443/dev-ext/gw , PforceRx, JORouter APIEMEA QA:8443/qa-ext/gw QA:8443/qa-batch-ext APIEMEA STAGE:8443/stage-ext , PforceRx, JORouter APIEMEA STAGE:8443/stage-ext/gw APIEMEA STAGE:8443/stage-batch-ext APIEMEA , PforceRxRouter APIEMEA PROD:8443/prod-ext/gw PROD:8443/prod-batch-ext APIGBL DEV:8443/dev-ext , , KOL_ONEVIEW, , , , PTRS, VEEVA_FIELD,Manager APIGBL QA (MAPP):8443/mapp-ext , , KOL_ONEVIEW, , , , PTRS, VEEVA_FIELD,Manager APIGBL STAGE:8443/stage-ext , , KOL_ONEVIEW, , , PTRS, VEEVA_FIELDManager APIGBL PROD , , KOL_ONEVIEW, , , , PTRS, VEEVA_FIELDManager APIGBL PROD APIEXTERNAL GBL DEV:8443/dev-ext , (MAPP):8443/mapp-ext , APIEXTERNAL GBL STAGE:8443/stage-ext , , , MAPPRouter , MAPPRouter , , channel (Kafka)Switching to a new environment requires configuration change on your side:Change the 's broker address,Change JAAS configuration - in the new architecture, we decided to change JAAS authentication mechanisms to SCRAM. To be sure that you are using the right authentication you have to change a few parameters in 's connection:JAAS login config file which path is specified in "nfig" java property. It should look like below:KafkaClient {  ramLoginModule required username="" ●●●●●●●●●●●●●●●●●●●>";};                   b.  
change the value of "chanism" property to "SCRAM-SHA-512"                   c. if you configure login using "nfig" property you have to change its value to " required username="" ●●●●●●●●●●●●●●●●●●●>";"You should receive new credentials (username and password) in the email about changing endpoints. In another case to get the proper username and ●●●●●●●●●●●●●●● contact our support e following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with has to use new endpoints.EnvironmentOld endpointNew endpointAffected /::9094ENGAGE, KOL_ONEVIEW, , , MULEKafkaGBLUS :9094,:9094,::9094ENGAGE, KOL_ONEVIEW, , , /::9094 (external):9094MAP (external), PforceRx, :9094,:9094,::9094,:9094,::9095,:9095,:9095 (external):9094MAP (external), PforceRx, MULEKafkaGBL DEV/QA/::9094 (external):9094MAP (external), , KOL_ONEVIEW, PTRS, , ENGAGE, ,KafkaGBL :9094,:9094,::9094,:9094,:9094 (external):9094MAP (external), , KOL_ONEVIEW, PTRS, ENGAGE, ,KafkaEXTERNAL GBL (Snowflake)There are no changes required if you use to get data." }, { "title": "", "": "", "pageLink": "/display/GMDM/MDM+HUB+Log+Management", "content": "MDM HUB has built in a log management solution that allows to trace data going through the system (incoming and outgoing events).It improves: to trace input and output dataCompliance requirementsSecurityAny user activity is recordedThreat protection and discoveryMonitoringOutages & performance bottlenecks detectionAnalytics Metrics & trends in real-timeAnomalies detectionThe solution is based on EFK stack: - provides storage and indexing and search capabilitiesFluentd - ships, transforms and loads logsKibana - provides for solutions is presented on the picture below: HUB microservices generetes log events and place them on monitoring topics.  processes events from topics and store them in presents data to users.    " }, { "title": "EFK Environments", "": "", "pageLink": "/display/GMDM/EFK+Environments", "content": "" }, { "title": "Elastic Cloud on Kubernetes in MDM HUB", "": "", "pageLink": "/display//Elastic+Cloud+on+Kubernetes+in+MDM+HUB", "content": "OverviewAfter migration on platform from on premise solutions we started to use Elastic Cloud on Kubernetes (ECK). With we can streamline critical operations, such as:Setting up hot-warm-cold viding lifecycle policies for logs and transactions, snapshots of obsolete/older/less utility eating dashboards visualising data of core processes.Logs, transactions and mongo collectionsWe splitted all the data entering the Elastic Stack cluster into different categories listed as follows:1. services logsFor forwarding logs we use where its used as a sidecar/agent container inside the mdmhub service e sidecar/agents send data directly to a backend service on cluster.2. Backend logs and transactionsFor backend logs and transactions forwarding we use as a forwarder and aggregator, lightweight pod instance deployed on case of unavailability, secondary output is defined on storage to not miss any data coming from services.3. 
MongoDB collections: In this scenario we decided to use , a sync daemon written in Go that continuously indexes MongoDB collections into . We use it to mirror data gathered in MongoDB collections in as a backup and a source for 's dashboard visualisations. Data streams: MDM HUB services and backend logs and transactions are managed by the Data streams mechanism. A data stream lets us store append-only time series data (logs/transactions) across multiple indices while giving a single named resource for lifecycle policies and snapshots management. Index templates, index lifecycle policies and snapshots for index management are entirely covered by the built-in . Description of the index lifecycle divided into phases: Index rollover - logs and transactions are stored in hot tiers; Index rollover - logs and transactions are moved to the delete phase; Snapshot - logs and transactions deleted from elasticsearch are snapshotted on a bucket; Snapshot - logs and transactions are deleted from the bucket - the index is no longer available. All snapshotted indices may be restored and recreated on . Maximum sizes and ages for the index rollovers and snapshots are included in the following tables: environments: type | index rollover hot phase | index rollover delete phase | snapshot phase; MDM HUB logs | age: 7d, size: 100gb | age: 30d | age: 180d; Backend logs | age: 7d, size: 100gb | age: 30d | age: 180d; Kafka transactions | age: 7d, size: 25gb | age: 30d | age: 180d. PROD environments: type | index rollover hot phase | index rollover delete phase | snapshot phase; MDM HUB logs | age: 7d, size: 100gb | age: 90d | age: 365d; Backend logs | age: 7d, size: 100gb | age: 90d | age: 365d; Kafka transactions | age: 7d, size: 25gb | age: 180d | age: 365d. Additionally, we execute a full snapshot policy on a basis. It is responsible for incrementally storing all the indexes on buckets as a backup. Snapshots locations: environment | S3 bucket | path: | | /archive/elastic/full; EMEA PROD | pfe-atp-eu--prod-mdmhub-backupemaasp202207120811 | emea/archive/elastic/full; AMER NPROD | gblmdmhubnprodamrasp100762 | amer/archive/elastic/full; AMER PROD | pfe-atp-us--prod-mdmhub-backupamrasp202207120808 | amer/archive/elastic/full; APAC NPROD | globalmdmnprodaspasp202202171347 | apac/archive/elastic/full; APAC PROD | pfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502 | apac/archive/elastic/full. MongoDB collections data are stored on permanently; they are not covered by the index lifecycle. Kibana dashboards: Kibana " }, { "title": "", "": "", "pageLink": "/display/GMDM/Kibana+Dashboards", "content": "" }, { "title": "Tracing areas", "": "", "pageLink": "/display//Tracing+areas", "content": "Log data are generated for the following actions: calls - request timestamp, operation name, request payload, response status; MDM events - timestamp, mdm name, event type, event payload" }, { "title": "MDM HUB Monitoring", "": "", "pageLink": "/display//MDM+HUB+Monitoring", "content": "" }, { "title": "AKHQ", "": "", "pageLink": "/display/GMDM/AKHQ", "content": " () is a tool for browsing, changing and monitoring 's " }, { "title": "", "": "", "pageLink": "/pages/tion?pageId=", "content": "KIBANA: US PROD :5601/app/kibana User: kibana_dashboard_view; US NONPROD :5601/app/kibana User: kibana_dashboard_view; =====GBL PROD GBL  EMEA  =====GBLUS PROD GBLUS  AMER  =====APAC PROD APAC NONPROD. GRAFANA. KeePass - download . The password to the KeePass is sent in a separate email to improve the security level of credentials. To get access, you only need to download the application 2.50 version () and use the password that is sent to log in to . After you do it you will see a screen like: Then just click a title that you are interested in. 
And you get a window like:Here you have a user name, and a proper link and when you click 3 dots = red square you will get the password." }, { "title": "Grafana Dashboard Overview", "": "", "pageLink": "/display//Grafana+Dashboard+Overview", "content": " is deployed on the host and is available under the following URL:All the dashboards are built using 's metrics." }, { "title": "Alerts Monitoring PROD&NON_PROD", "": "", "pageLink": "/pages/tion?pageId=", "content": "PROD:  PROD:  contains firing alerts and last Airflow DAG runs statuses for (left side) and (right ., e. number of alerts firingb., f. turns red when one or more DAG JOBS have failedc., currently firingd., table containing all the DAGs and their run count for each of the statuses" }, { "title": "", "": "", "pageLink": "/display/GMDM/AWS+SQS", "content": "Dashboard:  dashboard is describing the queue used in Reltio→MDM HUB e dashboard is divided into following sections:a. Approximate number of messages - how many messages are currently waiting in the queueb. Approximate number of messages delayed - how many messages are waiting to be added in the queuec. Approximate number of messages invisible - how many messages are not timed out nor deleted" }, { "title": "Docker Monitoring", "": "", "pageLink": "/display//Docker+Monitoring", "content": "Dashboard:  dashboard is describing the containers running on hosts in each environment. Switch currently viewed environment/host using the variables at the top of the dashboard ("env", "host").The dashboard is divided into following sections:a. Running containers - how many containers are currently running on this hostb. Total Memory Usagec. Total CPU Usaged. CPU Usage - over time use per containere. Memory Usage - over time Memory use per containerf. Network Rx - received bytes per container over timeg. Network Tx - transmited bytes per container over time" }, { "title": "Host Statistics", "": "", "pageLink": "/display//Host+Statistics", "content": "\n\n\n\nDashboard:  template source:  dashboard is describing various statistics related to hosts' resource usage. It uses metrics from the node_exporter. You can change the currently viewed environment and host using variables at the top of the dashboard.\n\n\n\n\n\nBasic CPU / Mem / Disk Gaugea. Busyb. Used RAM Memoryc. Used SWAP - hard disk memory used for swappingd. Used Root FSe. CPU System Load (1m avg)f. (5m avg)\n\n\n\n\n\nBasic . . Total RAMc. Total SWAPd. Total RootFSe. System Load (1m avg)f. Uptime - time since last restart\n\n\n\n\n\nBasic CPU / Mem Grapha. CPU Basic - CPU state %b. Memory Basic - memory (SWAP + RAM) use\n\n\n\n\n\nBasic . Network Traffic Basic - network traffic in bytes per interfaceb, Disk Space Used Basic - disk usage per mount\n\n\n\n\n\nCPU Memory Net Diska. - percentage use per status/operationb. Memory Stack - use per status/operationc. Network Traffic - detailed network traffic in bytes per interface. Negative values correspond to transmited bytes, positive to received.d. Disk Space Used - disk usage per mounte. Disk - disk operations per partition. Negative values correspond to write operations, positive - read operations.f. I/O Usage Read / Write - bytes read(positive)/written(negative) per partitiong. 
I/O Usage Times - time of I/O operations in per partition\n\n\n\n\n\ the dashboard template is a publicaly-available project, the panels/graphs are sufficiently described and do not require further explanation.\n\n\n" }, { "title": "HUB Batch Performance", "": "", "pageLink": "/display//HUB+Batch+Performance", "content": "\n\n\n\nDashboard:  Batch loading rateb. Batch loading latencyc. Batch sending rated. Batch sending latencye. Batch processing rate - batch processing in ops/sf. Batch processing latency - batch processing time in secondsg. Batch loading max gauge - max loading time in secondsh. Batch sending max gauge - max sending time in secondsi. Batch processing max gauge - max processing in seconds\n\n\n" }, { "title": "HUB Overview Dashboard", "": "", "pageLink": "/display//HUB+Overview+Dashboard", "content": "\n\n\n\nDashboard:  dashboard contains information about topics/consumer groups in HUB - downstream from Reltio.\n\n\n\n\n\na. Lag by - lag on each consumer groupb. Message consume per minute - messages consumed by each INBOUND consumer groupc. Message in per minute - inbound messages count by each topicd. Lag by - lag on each OUTBOUND consumer groupe. Message consume per minute - messages consumed by each OUTBOUND consumer groupf. Message in per minute - inbound messages count by each OUTBOUND topicg. Lag by - lag on each BATCH consumer grouph. Message consume per minute - messages consumed by each BATCH consumer groupi. Message in per minute - inbound messages count by each BATCH topic\n\n\n" }, { "title": "HUB Performance", "": "", "pageLink": "/display/GMDM/HUB+Performance", "content": "\n\n\n\nDashboard:  . Read Rate - API Read operations in rateb. Read Latency - API Read operations latency in for 50/75/99th percentile of requests. response time, processing time and total timec. Write Rate - API Write operations in rated. Write Latency - API Write operations latency in for 50/75/99th percentile of requests per each operation\n\n\n\n\n\nPublishing Performancea. Event Preprocessing Total Rate - Publisher's preprocessed events rate divided for entity/relation eventsb. Event Preprocessing Total Latency - preprocessing time in for 50/75/99th percentile of events\n\n\n\n\n\nSubscribing . MDM Events Subscribing Rate - Subscriber's events rateb. MDM Events Subscribing Latency - Subscriber's event processing (passing downstream) rate\n\n\n" }, { "title": "JMX Overview", "": "", "pageLink": "/display/GMDM/JMX+Overview", "content": "Dashboard:  dashboard organizes and displays data extracted from each component by a JMX exporter - related to this component's resource usage. You can switch currently viewed environment/component/node using variables on the top of the dashboard.a. Memoryb. Total RAMc. Used SWAPd. Total SWAPe. CPU System Load(1m avg)f. Load(5m avg)g. . Usagei. Memory Heap/NonHeapj. Memory Pool Usedk. Threads usedl. Class loadingm. Open File Descriptorsn. time / 1 min. rate - time rate/mino. count - operations count" }, { "title": "Kafka Overview", "": "", "pageLink": "/display/GMDM/Kafka+Overview", "content": "Dashboard:  dashboard describes 's per node resource usage.a. . . spent in GCd. Messages in Per Topice. Bytes in Per Topicf. Bytes Out Per Topic" }, { "title": "Kafka Overview - Total", "": "", "pageLink": "/display/GMDM/Kafka+Overview+-+Total", "content": "Dashboard:  dashboard describes 's total (all node summary) resource usage per environment.a. . . spent in GCd. Messages ratee. Bytes in Ratef. 
Bytes Out Rate" }, { "title": "Kafka Topics Overview", "": "", "pageLink": "/display/GMDM/Kafka+Topics+Overview", "content": "Dashboard:  dashboard describes topics and consumer groups in each environment.a. Topics purge in time it should take for each consumer group to process all the events on their topicb. Lag by . Message in per minute - per topicd. Message consume per minute - per consumer groupe. Message in per second - per topic" }, { "title": "Kong Dashboard", "": "", "pageLink": "/display//Kong+Dashboard", "content": "Dashboard:  dashboard describes the component statistics.a. Total requests per secondb. DB reachabilityc. Requests per serviced. Requests by HTTP status codee. Total Bandwidthf. Egress per service (All) - traffic exiting the network in bytesg. Ingress per service (All) - traffic entering the network in bytesh. Kong Proxy Latency across all services - divided on 90/95/99 percentilei. Kong Proxy Latency per service (All) - divided on percentilej. Request Time across all services - divided on 90/95/99 percentilek. Request Time per service (All) - divided on percentilel. across all services - divided on percentilem. Upstream Time per service (All) - divided on 90/95/99 percentileo. Nginx connection statep. Total Connectionsq. Handled Connectionsr. Accepted Connections" }, { "title": "MongoDB", "": "", "pageLink": "/display/GMDM/MongoDB", "content": "Dashboard:  . Document Operationsc. Document . Member Healthe. Member . Replica . Uptimeh. Available Connectionsi. Open Connectionsj. . Memoryl. Network I/Om. . Disk I/O Utilizationo. Disk Reads Completedp. Disk Writes Completed" }, { "title": "Snowflake Tasks", "": "", "pageLink": "/display/GMDM/Snowflake+Tasks", "content": "Dashboard:  dashboard describes tasks running on each ease keep in mind that metrics supporting this dashboard are scraped rarely (every 8h on nprod, every 2h on prod), so keep the Time since last scrape gauge in mind when reviewing the results.a. Time since last scrape - time since the metrics were last scraped - it marks dashboard freshnessb. Last Task Runs - table contains:task's name,date&time of last recorded run,visualisation of how long ago was the last run,state of last run,duration of last run (processing time)c. Processing time - visualizes how the processing time of each task was changing over time" }, { "title": "Kibana Dashboard Overview", "": "", "pageLink": "/display/GMDM/Kibana+Dashboard+Overview", "content": "" }, { "title": " Calls Dashboard", "": "", "pageLink": "/display//API+Calls+Dashboard", "content": "The dashboard contains summary of calls in the chosen time e it to:find a certain call by entity/timestamp/username,check which host this request was sent to,check request processing time e dashboard is divided into the following sections:a. Total requests count - how many requests have been logged in this time range (or passed the filter if that's the case)b. Controls - allows user to filter requests based on username and operationc. Requests by operation - how many requests have been sent per each operationd. Average response time - how long the response time was on average per each actione. Request per client - how many requests have been sent per each clientf. Response status - how many requests have resulted with each statusg. Top 10 processing times - summary of 10 requests that have been processed the longest in this time range. Contains transaction ID, related entity URI, operation type and duration in ms.681pxh. 
Logs - summary of all the logged requests" }, { "title": "Batch Loads Dashboard", "": "", "pageLink": "/display/GMDM/Batch+Loads+Dashboard", "content": "The dashboard contains information about files processed by the e this dashboard to:check whether the files were delivered on schedule,check processing time,verify that the files have been processed e dashboard is divided into following sections:a. File by type - summary of how many files of each type were delivered in this time range.b. File load status count - visualisation of how many entities were extracted from each file type and what was the result of their processing.c. File load count - visualisation of loaded files in this time range. Use it to verify that the files have been delivered on schedule.d. File load summary - summary of the processing of each loaded file. e. Response status load summary - summary of processing result for each file type." }, { "title": "HL Dashboard", "": "", "pageLink": "/display/GMDM/HL+DCR+Dashboard", "content": "This dashboard contains information related to the HL flow ( Service).Use it to:track issues related to the HL e dashboard is divided into following sections:a. DCR Status - summary of how many DCRs have each of the statusesb. Reltio DCR Stats - summary of how many DCRs that have been processed and sent to have each of the statusesc. DCRRequestProcessing report - list of reports generated in this time ranged. state - list of DCRs and their current statuses" }, { "title": "HUB Events Dashboard", "": "", "pageLink": "/display/GMDM/HUB+Events+Dashboard", "content": "Dashboard contains information about the Publisher component - events sent to clients or internal components (ex. Callback Service).Use it to:track issues related to Publisher's event processing (filtering/publishing),find information about Publisher's event processing time,find potential issues with events not being published from one topic or being constantly skipped e dashboard is divided into following sections:a. Count - how many events have been processed by the Publisher in this time rangeb. Event count - visualisation of how many events have been processed over timec. Simple events in time - visualisation of how many simple events have been processed (published) over time per each outbound topicd. Skipped events in time - visualisation of how many events have been skipped (filtered) for each reason over timee. Full events in time - visualisation of how many full events have been published over time per each topicf. Processing time - visualisation of how long the processing of entities/relations events tookg. Events by country - summary of how many events were related to each countryh. Event types - summary of how many events were of each typei. Full events by Topics - visualisation of how many full events of each type were published on each of the topicsj. Simple events by Topics - visualisation of how many simple events of each type were published on each of the topicsk. Publisher Logs - list containing all the useful information extracted from the Publisher logs for each event. Use it to track issues related to Publisher's event processing." }, { "title": "HUB Store Dashboard", "": "", "pageLink": "/display//HUB+Store+Dashboard", "content": "Summary of all entities in the in this environment. Contains summary information about entities count, countries and sources. The dashboard is divided into following sections:a. Entities count - how many entities are there currently in MDMb. 
Entities modification count - how many entity modifications (create/update/delete) were there over timec. Status - summary of how many entities have each of the statusesd. Type - summary of how many entities are () or (Health Care Professional)e. MDM - summary of how many MDM entities are in . Entities country - visualisation of country to entity countg. Entities source - visualisation of source to entity counth. Entities by country source type - visualisation of how many entities are there from each country with each sourcei. World Map - visualisation of how many entities are there from each countryj. Source/Country Heat Map - another visualisation of Country-Source distribution" }, { "title": "MDM Events Dashboard", "": "", "pageLink": "/display//MDM+Events+Dashboard", "content": "This dashboard contains information extracted from the Subscriber e it to:confirm that a certain event was received from ,check the consume e dashboard is divided into following sections:a. Total events count - how many events have been received and published to an internal topic in this time rangeb. Event types - visualisation of how many events processed were of each typec. Event count - visualisation of how many events were processed over timed. Event destinations - visualisation of how many events have been passed to each of internal topics over timee. Average consume time - visualisation of how long it took to process/pass received events over timef. Subscriber Logs - list containing all the useful information extracted from the Subscriber logs. Use it to track potential issues" }, { "title": "Profile Updates Dashboard", "": "", "pageLink": "/display/GMDM/Profile+Updates+Dashboard", "content": "This dashboard contains information about / profile updates via e it to:check how many updates have been processed,check processing results (statuses),track an issue related to the te, that the is not only used by the external vendors, but also by 's components (Callback Service).The dashboard is divided into following sections:a. Count - how many profile updates have been logged in this time periodb. Updates by status - how many updates have each of the statusesc. Updates count - visualisation of how many updates were received by the over timed. Updates by country source status - visualisation of how many updates were there for each country, from each source and with each statuse. Updates by source - summary of how many profile updates were there from each sourcef. Updates by country source status - another visualisation of how many updates were there for each country, source, . World Map - visualisation of how many updates were there on profiles from each of the countriesh. Gateway Logs - list containing all the useful information extracted from the components' logs. Use it to track issues related to the MDM Gateway" }, { "title": "Reconciliation metrics Dashboard", "": "", "pageLink": "/display/GMDM/Reconciliation+metrics+Dashboard", "content": "The Reconciliation Metrics Dashboard shows reasons why the object (entity or relation) was e it to:Check how many records were reconciled,Find the reasons for rrently, the dashboard can show the following reasons:ror - new lookup error was added. Caused by changes in RDM anged - lookup code was changed. Caused by changes in RDM anged - entity updateTime changed anged - Any description attribute changed. 
Checks attribute path for .*[Dd]escription.* anged - Addresses, Stateprovince value changed  anged - Workplace changed  anged - /attributes/Rank changed anged - /startObject/label or /endObject/label changed reconciliation.object.missed - Object was removed ded - Object was added  anged - Specialities changed(added/removed/replaced) anged - Specialities label changed(added/removed/replaced) anged - /attributes/MainHCO changed(added/removed/replaced) anged - Any field under Address changed(added/removed/replaced) anged - Any reference entity changed('^/attributes/.*refEntity.+$' - added/removed/replaced) anged - Any reference relationchanged('^/attributes/.*refRelation.+$' - added/removed/replaced) ange - Crosswalk attributes changed(added/removed/replaced) anged - directionalLabel changed(added/removed/replaced) anged - Any attribute changed(added/removed/replaced) ason - Non clasified reason - other cases The dashboard consists of a few diagrams:{ENV NAME} Reconciliation reasons - shows the most often existing reasons for reconciliation,Number by country - general number of reconciliation reasons divided by countries,Number by types - shows the general number of reconciliation reasons grouped by MDM object type,Reason list - reconciliation reasons with the number of their occurrences,{ENV NAME} Reconciliation metrics - detail view that shows data generated by Reconciliation Metrics flow. Data has detailed information about what exactly changed on specific MDM object." }, { "title": "Prometheus Alerts", "": "", "pageLink": "/display//Prometheus+Alerts", "content": "DashboardsThere are 2 dashboards available for problems overview:  DashboardAlertsENVNameAlertCause (Expression)TimeSeverityAction to be takenALLMDMhigh_load> 30 load130mwarningDetect why load is increasing. Decrease number of threads on components or turn off some of LMDMhigh_load> 30 load12hcriticalDetect why load is increasing. 
Decrease number of threads on components or turn off some of LMDMmemory_usage>  90% used1hcriticalDetect the component which is causing high memory usage and restart LMDMdisk_usage< 10% free2mhighRemove or archive old component LMDMdisk_usage<  5% free2mcriticalRemove or archive old component LMDMkong_processor_usage> 120% used by container10mhighCheck the containerALLMDMcpu_usage> 90% used1hcriticalDetect the cause of high use and take appropriate measuresALLMDMsnowflake_task_not_successful_nprodLast task run has state other than "SUCCEEDED"1mhighInvestigate whether the task failed or was skipped, and what caused tric value returned by the alert corresponds to the task state:0 - FAILED1 - SUCCEEDED2 - SCHEDULED3 - SKIPPEDALLMDMsnowflake_task_not_successful_prodLast Snowflake task run has state other than "SUCCEEDED"1mhighInvestigate whether the task failed or was skipped, and what caused tric value returned by the alert corresponds to the task state:0 - FAILED1 - SUCCEEDED2 - SCHEDULED3 - SKIPPEDALLMDMsnowflake_task_not_started_24hSnowflake task has not started in (+ 8h scrape time)1mhighInvestigate why the task was not scheduled/did not LMDMreltio_response_timeReltio response time to entities/get requests is >= 3 for 99th percentile20mhighNotify N PRODMDMservice_downup{env!~".*_prod"} == 020mwarningDetect the not working component and start N PRODMDMkafka_streams_client_statekafka streams client state != 21mhighCheck and restart unreachable20mwarningCheck the DB N PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' N PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck 's port N PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the N PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for N PRODKongkong_http_401_status_rateHTTP 401 > 30%20mwarningCheck logs. Notify the authorities in case of suspected break-in L NON PRODKafkainternal_reltio_events_lag_dev> 500 00030minfoCheck why lag is increasing. Restart the Event NON PRODKafkainternal_reltio_relations_events_lag_dev> 500 00030minfoCheck why lag is increasing. Restart the Event NON PRODKafkainternal_reltio_events_lag_stage> 500 00030minfoCheck why lag is increasing. Restart the Event L NON PRODKafkainternal_reltio_relations_events_lag_stage> 500 00030minfoCheck why lag is increasing. Restart the Event L NON PRODKafkainternal_reltio_events_lag_qa> 500 00030minfoCheck why lag is increasing. Restart the Event NON PRODKafkainternal_reltio_relations_events_lag_qa> 500 00030minfoCheck why lag is increasing. Restart the Event L NON PRODKafkakafka_jvm_heap_memory_increasing> 1000 memory use predicted in 5 hours20mhighCheck if is rebalancing. Check the Event L NON PRODKafkafluentd_dev_kafka_consumer_group_members0 EFK consumergroup members30mhighCheck Fluentd logs. Restart LUS NON PRODKafkainternal_reltio_events_lag_gblus_dev> 500 00040minfoCheck why lag is increasing. Restart the Event LUS NON PRODKafkainternal_reltio_events_lag_gblus_qa> 500 00040minfoCheck why lag is increasing. Restart the Event LUS NON PRODKafkainternal_reltio_events_lag_gblus_stage> 500 00040minfoCheck why lag is increasing. Restart the Event LUS NON PRODKafkakafka_jvm_heap_memory_increasing> 3100 memory use predicted in 5 hours20mhighCheck if is rebalancing. Check the Event LUS NON PRODKafkafluentd_gblus_dev_kafka_consumer_group_members0 EFK consumergroup members30mhighCheck Fluentd logs. 
Restart PRODMDMservice_downcount(up{env=~"gbl_prod"} == 0) by (env,component) == 15mhighDetect the not working component and start PRODMDMservice_downcount(up{env=~"gbl_prod"} == 0) by (env,component) > 15mcriticalDetect the not working component and start L PRODMDMservice_down_kafka_connect0 Kafka Connect Exporters up in the environment5mcriticalCheck and start the Kafka Connect L PRODMDMservice_downOne or more instances down5mcriticalCheck and start he PRODMDMdcr_stuck_on_prepared_statusDCR has been PREPARED for 1h1hhighDCR has not been processed downstream. Notify L PRODMDMdcr_processing_failureDCR processing failed in 24 hoursCheck , L PRODCron Jobsmongo_automated_script_not_startedMongo Cron Job has not started1hhighCheck the L the DB PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck 's port PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for L PRODKongkong_http_401_status_rateHTTP 401 > logs. Notify the authorities in case of suspected break-in PRODKafkainternal_reltio_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. Restart the Event PRODKafkainternal_reltio_relations_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. Restart the Event PRODKafkaprod-out-full-snowflake-all_no_consumersprod-out-full-snowflake-all has lag and has not been consumed for 2 hours1mhighCheck and restart the Kafka Connect Snowflake L PRODKafkainternal_gw_gcp_events_deg_lag_prod> 50 00030minfoCheck the Map Channel PRODKafkainternal_gw_gcp_events_raw_lag_prod> 50 00030minfoCheck the Map Channel L PRODKafkainternal_gw_grv_events_deg_lag_prod> 50 00030minfoCheck the Map Channel L PRODKafkainternal_gw_grv_events_deg_lag_prod> 50 00030minfoCheck the Map Channel L PRODKafkaforwarder_mapp_prod_kafka_consumer_group_membersforwarder_mapp_prod consumer group has 0 members30mcriticalCheck the Events PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group members have decreased (still > 20)15minfoCheck the PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group members have decreased (still > 10)15mhighCheck the PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group has 0 members15mcriticalCheck the PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 100)15minfoCheck the Hub PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 50)15minfoCheck the Hub PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group has 0 members15minfoCheck the Hub PRODKafkakafka_jvm_heap_memory_increasing> 2100 memory use on node 1 predicted in 5 hours20mhighCheck if is rebalancing. Check the Event PRODKafkakafka_jvm_heap_memory_increasing> memory use on nodes 2&3 predicted in 5 hours20mhighCheck if is rebalancing. 
Check the Event L PRODKafkafluentd_prod_kafka_consumer_group_membersFluentd consumergroup has 0 members30mhighCheck and restart is not running5mcriticalStart the Batch ChannelUS PRODMDMservice_down1 component is not running5mhighDetect the not working component and start PRODMDMservice_down>1 component is not running5mcriticalDetect the not working components and start PRODCron Jobsarchiver_not_startedArchiver has not started in 24 hours1hhighCheck the PRODKafkainternal_reltio_events_lag_us_prod> 500 0005mhighCheck why lag is increasing. Restart the Event PRODKafkainternal_reltio_events_lag_us_prod> 1 000 0005mcriticalCheck why lag is increasing. Restart the Event PRODKafkahin_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart PRODKafkaflex_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart PRODKafkasap_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart PRODKafkadea_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart PRODKafkaigate_prod_hco_create_kafka_consumer_group_members>= and lag > 100015minfoCheck why the number of consumers is decreasing. Restart PRODKafkaigate_prod_hco_create_kafka_consumer_group_members>= and lag > 100015mhighCheck why the number of consumers is decreasing. Restart PRODKafkaigate_prod_hco_create_kafka_consumer_group_members== 0 and lag > 100015mcriticalCheck why the number of consumers is decreasing. Restart PRODKafkahub_prod_kafka_consumer_group_members>= and lag > 100015minfoCheck why the number of consumers is decreasing. Restart the Event PRODKafkahub_prod_kafka_consumer_group_members>= 10 < 30 and lag > 100015mhighCheck why the number of consumers is decreasing. Restart the Event PRODKafkahub_prod_kafka_consumer_group_members== 0 and lag > 100015mcriticalCheck why the number of consumers is decreasing. Restart the Event PRODKafkafluentd_prod_kafka_consumer_group_membersEFK consumer group has 0 members30mhighCheck and restart PRODKafkaflex_prod_kafka_consumer_group_membersFLEX has 0 consumers10mcriticalNotify the FLEX TeamGBLUS PRODMDMservice_downcount(up{env=~"gblus_prod"} == 0) by (env,component) == 15mhighDetect the not working component and start LUS PRODMDMservice_downcount(up{env=~"gblus_prod"} == 0) by (env,component) > 15mcriticalDetect the not working component and start LUS PRODKongkong_database_downKong DB unreachable20mwarningCheck the DB LUS PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck 's port PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for LUS PRODKongkong_http_401_status_rateHTTP 401 > logs. Notify the authorities in case of suspected break-in LUS PRODKafkainternal_reltio_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. 
Restart the Event PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group members have decreased (still > 20)15minfoCheck the PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group members have decreased (still > 10)15mhighCheck the PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group has 0 members15mcriticalCheck the PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 20)15minfoCheck the Hub PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 10)15mhighCheck the Hub PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group has 0 members15mcriticalCheck the Hub PRODKafkabatch_service_prod_kafka_consumer_group_membersbatch_service_prod consumer group has 0 members15mcriticalCheck the LUS PRODKafkabatch_service_prod_ack_kafka_consumer_group_membersbatch_service_prod_ack consumer group has 0 members15mcriticalCheck the LUS PRODKafkafluentd_gblus_prod_kafka_consumer_group_membersEFK consumer group has 0 members30mhighCheck Fluentd. Restart if LUS PRODKafkakafka_jvm_heap_memory_increasing> 3100 memory use predicted in 5 hours20mhighCheck if is rebalancing. Check the Event Publisher." }, { "title": "Security", "": "", "pageLink": "/display/GMDM/Security", "content": "\nThere are following aspects supporting security implemented in the solution:\n\n\tAll server nodes are in COMPANY VPN.\n\tExternal endpoints (, KONG API) are exposed to cloud services (MAP, ) through the AWS ELB.\n\tEach endpoint has secured transport established using TLS 1.2 – see section.\n\tOnly authenticated clients can access MDM services.\n\tAccess to resources is controlled by built-in authorization process.\n\tEvery call is logged in access log. It is a standard access log format.\n\n" }, { "title": "Authentication", "": "", "pageLink": "/display/GMDM/Authentication", "content": "\nAPI Authentication\nAPI authentication is provided by . There are two methods supported:\n\n\tOAuth2 internal\n\tOAuth 2 external – (recommended)\n\tAPI key\n\n\n\nOAuth2 method is recommended, especially for cloud services. The gateway uses Client Credentials grant type variant of . The method is supported by plugin. Client secrets are managed by and stored in configuration database.\nAPI key authentication is a deprecated method, its usage should be avoided for new services. Keys are unique, randomly generated with 32 characters length managed by – please see documentation for details." }, { "title": "Authorization", "": "", "pageLink": "/display/GMDM/Authorization", "content": "\nRest to exposed services is controlled with the following algorithm:\n\n\tREST channel component reads user authorization configuration based on X-Consumer-Username header passed by KONG.\n\tAuthorization configuration contains:\n\t\n\t\tList of roles user can access. 
Roles express operation/logic user can execute.\n\t\tList of countries user can read or write.\n\t\tList of source systems (related to crosswalk type) that data can come from.\n\t\n\t\n\tOperation level authorization – system checks if user can execute an operation.\n\tData level authorization – system checks if user can read or modify entities:\n\t\n\t\tDuring read operation by crosswalk – it is checked if country attribute value is on the allowed country list, otherwise system throws access forbidden error.\n\t\tDuring search operation, filter is modified restriction on country attribute are added) to limit countries user has no access to.\n\t\tDuring write operation, system validates if country attribute and crosswalk type are authorized.\n\n\nTable 12. Role definitions\n \n\n\nRole name\nDescription\n\n\nPOST_HCP\nAllows user to create a new HCP entity\n\n\nPATCH_HCP\nAllows user to update entity\n\n\nPOST_HCO\nAllows user to create a new entity\n\n\nPATCH_HCO\nAllows user to update HCO entity\n\n\nGET_ENTITY\nAllows user to get data of single Entity, specified by ID\n\n\nSEARCH_ENTITY\nAllows user to search for Entities by search criteria\n\n\nRESPONSE_DCR\nAllows user to send response to Gateway\n\n\nDELETE_CROSSWALK\nAllows user to delete crosswalk, effectively removing one datasource from Entity\n\n\nGET_LOV\nAllows user to get dictionary data ( authorization configuration for are protected by mechanism, clients are granted permission to read only from topics dedicated to them. Complexity of is hidden behind Ansible – permissions are defined in file, in the following format:\n \nType and description of each parameter is specified in table below.\n\n\nTable 13. Topic configuration parameters\n \n \n\n\nParameter\nType\nDescription \n\n\nname\nString\nTopic name\n\n\npartitions\nInteger\nNumber of partitions to create\n\n\nreplicas\nInteger\nReplication factor for partitions\n\n\nproducers\nList of of usernames that are allowed to publish message to , String\nConsumers that are allowed to consume from this topic. Map entries are in format "username":"consumer_group_id"\n\n\n\n\t\n\t\n" }, { "title": " external plugin", "": "", "pageLink": "/display/GMDM/KONG+external+OAuth2+plugin", "content": "\nTo integrate with token validation process, external plugin was implemented. Source code and instructions for installation and configuration of local environment were published on . \nCheck readme file for more information.\nThe role of plugin: \nValidate access tokens sent by developers using a third-party OAuth 2.0 Authorization Server (RFC 7662). The flow of plugin, request, and response from have to be compatible with RFC 7622 specification. To get more information about this specification check .Plugin assumes that the already has an access token that will be validated against a third-party 2.0 server – . 
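A rough sketch of the RFC 7662 introspection exchange the plugin performs; the endpoint path and the Basic credentials are placeholders, not the real PingFederate configuration:
curl -s -X POST "https://<ping-federate-host>/as/introspect.oauth2" \
  -H "Authorization: Basic <base64(client_id:client_secret)>" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "token=<access_token_sent_by_the_client>"
# an active token returns JSON like {"active": true, "username": "<client>", ...};
# the plugin maps that username through users_map and sets X-Consumer-Username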
\nFlow of the plugin:\n\n\tClient invokes Gateway API providing token generated from PING API\n\tKONG plugin introspects this token\n\t\n\t\tif the token is active, plugin will fill X-Consumer-Username header\n\t\tif the token is not active, the access to the specific uri will be Plugin configuration:\n \n\nTo define a mdm-external-oauth plugin the following parameters have to be defined:\n\n\tintrospection_url – url address to ping federate with access to introspect oauth2 tokens\n\tauthorization_value – username and ●●●●●●●●●●●●●●●● to "Basic " format which is authorized to invoke introspect API.\n\thide_credentials – if true, the token provided in request will be removed from request after validation to obtain more security specifications.\n\tusers_map – this map contains comma separated list of values. The first value is user name defined in the second value separated by colon is the user name defined in mdm-manager application. This map is used to correctly map and validate tokens received in request. Additionally, when introspect token, it returns the username. This username is mapped on existing user in mdm-manager, so there is no need to define additional users in mdm-manager – it is enough to fill users_map configuration with appropriate values.\n\n\n\nKAFKA authentication\nKafka access is protected using SASL framework. Clients are required to specify user and ●●●●●●●●●●● the configuration. Credentials are sent over transport." }, { "title": "Transport", "": "", "pageLink": "/display/GMDM/Transport", "content": "\nCommunication between and external systems is secured by setting up an encrypted connection with the following specifications:\n\n\tCiphersuites: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCMSHA384:ECDHE-ECDSA-CHACHA20-POLY1305:-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\n\tVersions: TLSv1.2\n\tTLS curves: prime256v1, secp384r1, secp521r1\n\tCertificate type: ECDSA\n\tCertificate curve: prime256v1, secp384r1, secp521r1\n\tCertificate signature: sha256WithRSAEncryption, ecdsa-with-SHA256, ecdsa-with-SHA384, ecdsa-with-SHA512\n\tRSA key size: 2048 (if not ecdsa)\n\tDH Parameter size: None (disabled entirely)\n\tECDH Parameter size: 256\n\tHSTS: -age=\n\tCertificate switching: None\n\n\n\n" }, { "title": "User management", "": "", "pageLink": "/display/GMDM/User+management", "content": "\nUser accounts are managed by the respective components of the Gateway and Hub. \nAPI are managed by and stored in database. There are two ways of adding a new user to configuration:\n\n\tUsing configuration repository and Ansible\n\n\n\nAnsible tool, which is used to deploy , has a plugin that supports user management. User configuration is kept in configuration files (passwords being encrypted using built-in encryption). Adding a new user requires adding the following section to the appropriate configuration file:\n \n\n\tDirectly, using REST API\n\n\n\nThis method requires access to COMPANY VPN and to machine that hosts , since REST endpoints are only bound to "localhost", and not exposed to the outside world. URL of the endpoint is:\n It can be accessed via cURL commandline tool. 
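A hedged sketch of what such cURL calls typically look like, assuming Kong's standard Admin API bound to localhost:8001; the consumer name is a placeholder, and the exact commands for this installation are described in the steps that follow:
curl -s http://localhost:8001/consumers                                        # list defined consumers
curl -s -X POST http://localhost:8001/consumers -d username=example-client     # create a consumer
curl -s -X POST http://localhost:8001/consumers/example-client/key-auth        # have Kong generate an API key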
To list all the users that are currently defined use the following command:\n \nTo create a new user:\n To set an API Key for the user:\n A new key will be automatically generated by and returned in response.\nTo create credentials use the following call instead:\n client_id and client_secret are login credentials, redirect_uri should point to HUB endpoint. Please see documentation for details.\n\nKAFKA users\nKafka users are managed by brokers. Authentication method used is Authentication and Authorization Service (JAAS) with module. User configuration is stored inside kafka_server_nf file, that is present in each broker. File has the following structure:\n \nProperties "username" and "password" define credentials to use to secure inter-broker communication. Properties in format "user_" are actual definitions of users. So, adding a new user named "bob" would require addition of the following property to file:\n\n \n\nCAUTION! Since JAAS configuration file is only read on broker startup, adding a new user requires restart of all brokers. In multi-broker environment this can be achieved by restarting one broker at a time, which should be transparent for end users, given fault-tolerance capabilities. This limitation might be overcome in future versions by using external user store or custom login module, instead of e process of adding this entry and distributing file is automated with Ansible: usernames and ●●●●●●●●●●●● kept in configuration file, encrypted using Ansible Vault (with encryption). \nMongoDB users\nMongoDB is used only internally, by modules and is not exposed to external users, therefore there is no need to create accounts for them. For operational purposes there might be some administration/technical accounts created using standard Mongo commandline tools, as described in MongoDB documentation." }, { "title": "SOP HUB", "": "", "pageLink": "/display//SOP+HUB", "content": "" }, { "title": "Hub Configuration", "": "", "pageLink": "/display/GMDM/Hub+Configuration", "content": "" }, { "title": ":", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Setup APM integration in ", "": "", "pageLink": "/display/GMDM/Setup+APM+integration+in+Kibana", "content": "To setup integration in you need to deploy fleet server first. To do so you need to enable it in mdm-hub-cluster-env repository(eg. in /nprod/namespaces/emea-backend/values.yaml)After deploying it open kibana . And got to rify if fleet-server is properly configured:Go to Add the APM IntegrationClick Add Elastic APMChange host to :8200In section 2 choose Existing hosts and choose desired agent-policy(Fleet server on policy)Save changesAfter configuring your service to connect to apm-server it should be visible in ." }, { "title": "Consul:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Updating Dictionary", "": "", "pageLink": "/display/GMDM/Updating+Dictionary", "content": "To update dictionary from excelConvert excel to csv formatChange to Unix Put file in appropriate path in mdm-config-registry repository in config-extCheck Updating ETL Dictionaries in Consul page for appropriate Consul UI URL (You need to have a security token set in section)" }, { "title": "Updating ETL Dictionaries in Consul", "": "", "pageLink": "/display//Updating+ETL+Dictionaries+in+Consul", "content": "Configuration repository has dedicated directories that store dictionaries used by the engine during loading data with batch service. The content of directories is published in Consul. 
The table shows the name and consul's key under which data in posted:Dir nameConsul keyconfig-ext/dev_gblus update Consul values you have to:Make changes in the desired directory and push them to the master git branch,git2consul will synchronize the git repo to Consul Please be advised that proper token is required to access key/value path you desire. Especially important for /GBLUS directories. " }, { "title": "Environment Setup:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Configuration ()", "": "", "pageLink": "/pages/tion?pageId=", "content": "Configuration steps:Configure mongo permissions for users mdm_batch_service, , and mdmgw. Add permissions to database schema related to new environment:---users:  mdm_batch_service:    mongo:      databases:        reltio_amer-dev:          roles:            - "readWrite"        reltio_[tenant-env]:             - "readWrite"2. Add directory with environment configuration files in /nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.3. Change file [tenant-env]/values.yaml:Change the value of "env" property,Change the value of "logging_index" property,Change the address of service - "kong_m_external_rospection_url" property. Use value from below table:Env classoAuth introspection URLDEV Change file [tenant-env]/kafka-topics.yaml by changing the prefix of topic names.5. Add kafka connect instance for newly added environment - add the configuration section to kafkaConnect property located in /nprod/namespaces/amer-backend/values.yaml5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]ssphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]y6. Configure Consul (/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):Add repository to git2consul - property pos,Add policies - property consul_acl.policies,And policy binding - property consul_metl-token.policiesAdd secrets - pos.[tenant-env]ername: and pos.[tenant-env]sswordCreate proper branch in mdm-hub-env-config repo, like in an example: config/dev_amer - Modify components configuration:Change [tenant-env]/config_files/all/config/application.yamlchange "env" property,change "seURL" property,change "mdmConfig.rdmURL" property,change "flow.url" property,Change [tenant-env]/config_files/event-publisher/config/application.yaml:Change "local_env" propertyChange [tenant-env]/config_files/reltio-subscriber/config/application.yaml:Change "" properties according to Reltio configuration,check and confirm if secrets for this component needn't be changed - changing of queue could cause changing of credentials - verify with 's tenant configuration,Change [tenant-env]/config_files/mdm-manager/config/application.yaml:Change "incipalMappings" according the correct topic tenants details for the above properties:8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change ics list.9. 
) Add additional service monitor to /nprod/namespaces/monitoring/service-monitors.yaml configuration file:- namespace: [tenant-env]  name: sm-[tenant-env]-services  selector:    matchLabels:      prometheus: [tenant-env]-services  endpoints:    - port: interval: 30s      scrapeTimeout: 30s    - port: prometheus-fluent-bit      path: "/api//metrics/prometheus"      interval: 30s      scrapeTimeout: 30sb) Add Snowflake database details to /nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:jdbcExporters: : db: url: "jdbc:snowflake://" username: "[ USERNAME ]"Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yamljdbcExporters: : db: password: "[ ●●●●●●●●●●●10. job responsible for deploying backend services - to apply mongo and fluentd changes.11. Connect to mongodb server and create scheme reltio_[tenant-env].11.1 Create collections and indexes in the newly added schemas: eateCollection("entityHistory") eateIndex({country: -1},  {background: true, name:  "idx_country"});eateIndex({sources: -1},  {background: true, name:  "idx_sources"});eateIndex({entityType: -1},  {background: true, name:  "idx_entityType"});eateIndex({status: -1},  {background: true, name:  "idx_status"});eateIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});eateIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});eateIndex({"lue": 1},  {background: true, name:  "idx_crosswalks_v_asc"});eateIndex({"osswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});eateIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});eateIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});eateIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});eateIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});eateIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});eateCollection("entityRelations")eateIndex({country: -1},  {background: true, name:  "idx_country"});eateIndex({sources: -1},  {background: true, name:  "idx_sources"});eateIndex({relationType: -1},  {background: true, name:  "idx_relationType"});eateIndex({status: -1},  {background: true, name:  "idx_status"});eateIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});eateIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});eateIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});eateIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});eateIndex({"lue": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   eateIndex({"osswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   eateIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   eateIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"}); eateCollection("LookupValues")eateIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});eateIndex({countries: 1},  {background: true, name:  "idx_countries"});eateIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});eateIndex({type: 1},  {background: true, name:  "idx_type"});eateIndex({code: 1},  {background: true, name:  "idx_code"});eateIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});eateCollection("ErrorLogs")eateIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});eateIndex({timestamp: 
-1},  {background: true, name:  "idx_timestamp_-1"});eateIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});eateIndex({status: -1},  {background: true, name:  "idx_status_-1"});eateCollection("batchEntityProcessStatus")eateIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});eateIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});eateIndex({batchName: -1, deleted: -1, : -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});eateIndex({batchName: -1, : -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});eateCollection("batchInstance")eateCollection("relationCache")eateIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});eateCollection("DCRRequests")eateIndex({type: -1, : -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});eateIndex({entityURI: -1, : -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});eateIndex({changeRequestURI: -1, : -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});eateCollection("entityMatchesHistory")eateIndex({_id: -1, "tchObjectUri": -1, "tchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});eateCollection("DCRRegistry")eateIndex({"angeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});eateIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});eateIndex({changeRequestURI: -1, : -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});eateIndex({type: -1, : -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});eateCollection("sequenceCounters")sertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) //NOTE!!!! replace text [sequence start count] with value from below start numberemea5000000000amer6000000000apac700000000012. job to deploy kafka resources and mdmhub components for the new environment.13. Create paths on bucket required by and 's DAGs.14. Configure :Add index patterns,Configure retention,Add dashboards.15. Configure basic Airflow DAGs (ansible ,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_snowflake.16. Deploy DAGs (NOTE: check if your kubectl is configured to communicate with the cluster you wanted to change):ansible-playbook -i inventory/[tenant-env]/inventory17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. Verification pointsCheck 's configuration - get reltio tenant configuration:Check if you are able to execute 's operations using credentials of the service user,Check if streaming processing is enable - stinations.enabled = true, reamingEnabled=true, reamingAPIEnabled=true,Check if cassanda export is configured - condaryDsEnabled = eck :Check if you are able to connect to server using command line client running from your local eck :Users mdmgw, and mdm_batch_service - permissions for the newly added database (readWrite),Indexes,Verify if correct start value is set for sequance COMPANYAddressIDSeq - collection sequenceCounters _id = eck MDMHUB API:Check mdm-manager with apikey authentication by executing one of read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/'). 
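A minimal curl sketch of this read check; the manager URL and API key follow the placeholders used elsewhere on this page, and {{ entity_type }} stands in for the entity type value that is not spelled out here:
curl --location --request GET "{{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/{{ entity_type }}')" --header 'apikey: {{ api_key }}'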
The request should execute properly (HTTP status code 200) and returns some objects. The empty response is also possible in the case when there is no data in ,Run the same operation using oAuth2 authentication - remember that the manager url is different,Check mdm-manager with apikey authentication by executing write operation:curl --location --request POST '{{ manager_url }}/hcp' \\--header 'apikey: {{ api_key }}' \\--header 'Content-Type: application/json' \\--data-raw '{  "hcp" : {    "type" : "configuration/entityTypes/",    "attributes" : {      "Country" : [ {        "value" : "{{ country }}"      } ],      "FirstName" : [ {        "value" : "Verification Test MDMHUB"      } ],      "LastName" : [ {        "value" : "Verification Test MDMHUB"      } ]    },    "crosswalks" : [ {      "type" : "configuration/sources/{{ source }}",      "value" : "verification_test_mdmhub"    } ]  }}'Replace all placeholders in the above request using the correct values for the configured environment. The response should return HTTP code 200 and a URI of the created object. After verification deleted created object by running: curl --location --request DELETE '{{ manager_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'Run the same operations using oAuth2 authentication - remember that the mdm manager url is different,Verify api-router with apikey authentication using search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/'). The request should execute properly (HTTP status code 200) and returns some objects. Empty response is also possible in the case when there is no data in ,Check api-router with apikey authentication by executing write operation:curl --location --request POST '{{ api_router_url }}/hcp' \\--header 'apikey: {{ api_key }}' \\--header 'Content-Type: application/json' \\--data-raw '{  "hcp" : {    "type" : "configuration/entityTypes/",    "attributes" : {      "Country" : [ {        "value" : "{{ country }}"      } ],      "FirstName" : [ {        "value" : "Verification Test MDMHUB"      } ],      "LastName" : [ {        "value" : "Verification Test MDMHUB"      } ]    },    "crosswalks" : [ {      "type" : "configuration/sources/{{ source }}",      "value" : "verification_test_mdmhub"    } ]  }}'Replace all placeholders in the above request using the correct values for the configured environment. The response should return HTTP code 200 and a URI of the created object. After verification deleted created object by running: curl --location --request DELETE '2/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'Run the same operations using oAuth2 authentication - remember that the api router url is different,Check batch service with apikey authentication by executing following operation GET {{ batch_service_url }}/batchController//instances/. The request should return 403 HTTP Code and body:{    "code": "403",    "message": "Forbidden: thorizationException: Batch '' is not allowed."}The request doesn't create any the same operation using oAuth2 authentication - remember that the batch service url is different,Verify of component logs: mdm-manager, and batch-service url. Focus on errors and - rebalancing, authorization problems, topic existence warnings streaming services:Check logs of reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. 
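One way to pull those logs, assuming the components run as Kubernetes Deployments in the [tenant-env] namespace (pod/deployment names and namespace are placeholders):
kubectl logs {{ pod name }} --namespace [tenant-env]
or, to follow a single component while re-running the checks:
kubectl logs deployment/reltio-subscriber --namespace [tenant-env] --tail=200 -f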
Verify if there is no errors and kafka warnings related with rebalancing, authorization problems, topic existence warnings etc,Verify if lookup refresh process is working properly - check existance of mongo collection LookupValues. It should have data,Airflow:Check if DAGs are enabled and have a defined schedule,Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.Wait for their finish and validate owflake:Check snowflake connector logs,Check if tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist at LANDING schama and has data,Verify if mdm-hub-snowflake-dm package is deployed,What else?Monitoring:Check grafana dashboards:HUB Performance,,Host Statistics,,,eck index patterns:{{env}}-internal-batch-efk-transactions*,{{env}}-internal-gw-efk-transactions*,{{env}}-internal-publisher-efk-transactions*,{{env}}-internal-subscriber-efk-transactions*,{{env}}-mdmhub,Check dashboards:{{env}} API calls,{{env}} Batch Instances,{{env}} Batch loads,{{env}} Error Logs Overview,{{env}} Error Logs RDM,{{env}} HUB Store{{env}} HUB events,{{env}} MDM Events,{{env}} ,Check alerts - How?" }, { "title": "Configuration ()", "": "", "pageLink": "/pages/tion?pageId=", "content": "Configuration steps:Copy mdm-hub-cluster-env/amer/nprod directory into mdm-hub-cluster-env/amer/nprod place ...CertificatesGenerate private-keys, CSRs and request certificate (/config_files/certs).\nmarek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out .csr\nGenerating a private key\n.....+++++\n.....................................................+++++\nwriting new private key to 'y'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []: \nEmail Address []:\n\nPlease enter the following 'extra' be sent with your certificate request\nA challenge >●●●●●●●●●●●●\nAn optional company name []:\nGenerate private-keys, CSRs and request certificate (-backend/secret.yaml\nmarek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out .csr\nGenerating a private key\n..........................+++++\n.....+++++\nwriting new private key to 'y'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []:\nEmail Address []:\n\nPlease enter the following 'extra' be sent with your certificate request\nA challenge >●●●●●●●●●●●●\nAn optional company name []:\nBELOW IS COPY WE USE AS A REFERENCEConfiguration steps:Configure mongo permissions for users mdm_batch_service, , and mdmgw. Add permissions to database schema related to new environment:---users:  mdm_batch_service:    mongo:      databases:        reltio_amer-dev:          roles:            - "readWrite"        reltio_[tenant-env]:             - "readWrite"2. Add directory with environment configuration files in /nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.3. Change file [tenant-env]/values.yaml:Change the value of "env" property,Change the value of "logging_index" property,Change the address of service - "kong_m_external_rospection_url" property. Use value from below table:Env classoAuth introspection URLDEV Change file [tenant-env]/kafka-topics.yaml by changing the prefix of topic names.5. Add kafka connect instance for newly added environment - add the configuration section to kafkaConnect property located in /nprod/namespaces/amer-backend/values.yaml5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]ssphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]y6. Configure Consul (/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):Add repository to git2consul - property pos,Add policies - property consul_acl.policies,And policy binding - property consul_metl-token.policiesAdd secrets - pos.[tenant-env]ername: and pos.[tenant-env]sswordCreate proper branch in mdm-hub-env-config repo, like in an example: config/dev_amer - Modify components configuration:Change [tenant-env]/config_files/all/config/application.yamlchange "env" property,change "seURL" property,change "mdmConfig.rdmURL" property,change "flow.url" property,Change [tenant-env]/config_files/event-publisher/config/application.yaml:Change "local_env" propertyChange [tenant-env]/config_files/reltio-subscriber/config/application.yaml:Change "" properties according to Reltio configuration,check and confirm if secrets for this component needn't be changed - changing of queue could cause changing of credentials - verify with 's tenant configuration,Change [tenant-env]/config_files/mdm-manager/config/application.yaml:Change "incipalMappings" according the correct topic tenants details for the above properties:8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change ics list.9. ) Add additional service monitor to /nprod/namespaces/monitoring/service-monitors.yaml configuration file:- namespace: [tenant-env]  name: sm-[tenant-env]-services  selector:    matchLabels:      prometheus: [tenant-env]-services  endpoints:    - port: interval: 30s      scrapeTimeout: 30s    - port: prometheus-fluent-bit      path: "/api//metrics/prometheus"      interval: 30s      scrapeTimeout: 30sb) Add Snowflake database details to /nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:jdbcExporters: : db: url: "jdbc:snowflake://" username: "[ USERNAME ]"Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yamljdbcExporters: : db: password: "[ ●●●●●●●●●●●10. job responsible for deploying backend services - to apply mongo and fluentd changes.11. 
Connect to mongodb server and create scheme reltio_[tenant-env].11.1 Create collections and indexes in the newly added schemas: eateCollection("entityHistory") eateIndex({country: -1},  {background: true, name:  "idx_country"});eateIndex({sources: -1},  {background: true, name:  "idx_sources"});eateIndex({entityType: -1},  {background: true, name:  "idx_entityType"});eateIndex({status: -1},  {background: true, name:  "idx_status"});eateIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});eateIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});eateIndex({"lue": 1},  {background: true, name:  "idx_crosswalks_v_asc"});eateIndex({"osswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});eateIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});eateIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});eateIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});eateIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});eateIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});eateCollection("entityRelations")eateIndex({country: -1},  {background: true, name:  "idx_country"});eateIndex({sources: -1},  {background: true, name:  "idx_sources"});eateIndex({relationType: -1},  {background: true, name:  "idx_relationType"});eateIndex({status: -1},  {background: true, name:  "idx_status"});eateIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});eateIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});eateIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});eateIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});eateIndex({"lue": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   eateIndex({"osswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   eateIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   eateIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"}); eateCollection("LookupValues")eateIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});eateIndex({countries: 1},  {background: true, name:  "idx_countries"});eateIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});eateIndex({type: 1},  {background: true, name:  "idx_type"});eateIndex({code: 1},  {background: true, name:  "idx_code"});eateIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});eateCollection("ErrorLogs")eateIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});eateIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});eateIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});eateIndex({status: -1},  {background: true, name:  "idx_status_-1"});eateCollection("batchEntityProcessStatus")eateIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});eateIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});eateIndex({batchName: -1, deleted: -1, : -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});eateIndex({batchName: -1, : -1, updateDateMDM: -1},  {background: true, name:  
"idx_ProcessingResultAll_ProcessingJob"});eateCollection("batchInstance")eateCollection("relationCache")eateIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});eateCollection("DCRRequests")eateIndex({type: -1, : -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});eateIndex({entityURI: -1, : -1},  {background: true, name:  "idx_entityURIStatusNameFind_SubmitVR"});eateIndex({changeRequestURI: -1, : -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});eateCollection("entityMatchesHistory")eateIndex({_id: -1, "tchObjectUri": -1, "tchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});eateCollection("DCRRegistry")eateIndex({"angeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});eateIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});eateIndex({changeRequestURI: -1, : -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});eateIndex({type: -1, : -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});eateCollection("sequenceCounters")sertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) //NOTE!!!! replace text [sequence start count] with value from below start numberemea5000000000amer6000000000apac700000000012. job to deploy kafka resources and mdmhub components for the new environment.13. Create paths on bucket required by and 's DAGs.14. Configure :Add index patterns,Configure retention,Add dashboards.15. Configure basic Airflow DAGs (ansible ,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_snowflake.16. Deploy DAGs (NOTE: check if your kubectl is configured to communicate with the cluster you wanted to change):ansible-playbook -i inventory/[tenant-env]/inventory17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. Verification pointsCheck 's configuration - get reltio tenant configuration:Check if you are able to execute 's operations using credentials of the service user,Check if streaming processing is enable - stinations.enabled = true, reamingEnabled=true, reamingAPIEnabled=true,Check if cassanda export is configured - condaryDsEnabled = eck :Users mdmgw, and mdm_batch_service - permissions for the newly added database (readWrite),Indexes,Verify if correct start value is set for sequance COMPANYAddressIDSeq - collection sequenceCounters _id = eck MDMHUB API:Check mdm-manager with apikey authentication by executing one of read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/'). The request should execute properly (HTTP status code 200) and returns some objects. The empty response is also possible in the case when there is no data in ,Run the same operation using oAuth2 authentication - remember that the manager url is different,Verify api-router with apikey authentication using search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/'). The request should execute properly (HTTP status code 200) and returns some objects. Empty response is also possible in the case when there is no data in ,Run the same operation using oAuth2 authentication - remember that the api router url is different,Check batch service with apikey authentication by executing following operation GET {{ batch_service_url }}/batchController//instances/. 
The request should return 403 HTTP Code and body:{    "code": "403",    "message": "Forbidden: thorizationException: Batch '' is not allowed."}The request doesn't create any the same operation using oAuth2 authentication - remember that the batch service url is different,Verify of component logs: mdm-manager, and batch-service url. Focus on errors and - rebalancing, authorization problems, topic existence warnings streaming services:Check logs of reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. Verify if there is no errors and kafka warnings related with rebalancing, authorization problems, topic existence warnings etc,Verify if lookup refresh process is working properly - check existance of mongo collection LookupValues. It should have data,:Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.Wait for their finish and validate owflake:Check snowflake connector logs,Check if tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist at LANDING schama and has data,Verify if mdm-hub-snowflake-dm package is deployed,What else?Monitoring:Check grafana dashboards:HUB Performance,,Host Statistics,,,eck index patterns:{{env}}-internal-batch-efk-transactions*,{{env}}-internal-gw-efk-transactions*,{{env}}-internal-publisher-efk-transactions*,{{env}}-internal-subscriber-efk-transactions*,{{env}}-mdmhub,Check dashboards:{{env}} API calls,{{env}} Batch Instances,{{env}} Batch loads,{{env}} Error Logs Overview,{{env}} Error Logs RDM,{{env}} HUB Store{{env}} HUB events,{{env}} MDM Events,{{env}} ,Check alerts - How?" }, { "title": "Configuration ( k8s)", "": "", "pageLink": "/pages/tion?pageId=", "content": "Installation of new non-prod cluster basing on non-prod py mdm-hub-cluster-env/amer directory into mdm-hub-cluster-env/ ange dir names from "amer" to "apac".Replace everything in files in directory: "amer"→"apac".CertificatesGenerate private-keys, CSRs and request certificate (/config_files/certs).\nanuskp@CF-$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out .csr\nGenerating a private key\n..................+++++\n.........................+++++\nwriting new private key to 'y'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:\nEmail Address []:\n\nPlease enter the following 'extra' be sent with your certificate request\nA challenge >●●●●●●●●●●●●\nAn optional company name []:: Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=Place private-key and signed certificate in /config_files/certs. 
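The same key and CSR can also be generated non-interactively, which is easier to script. A sketch assuming OpenSSL 1.1.1 or newer; the file names, CN and SAN list are placeholders to be replaced with the environment's hostnames:
openssl req -nodes -newkey rsa:2048 -sha256 -keyout <name>.key -out <name>.csr -subj "/C=US/O=COMPANY Incorporated/CN=<common-name>" -addext "subjectAltName=DNS:<host1>,DNS:<host2>"
The answers given interactively above (organization, CN, SANs) map one-to-one onto -subj and -addext.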
Git-ignore them and encrypt them into .encrypt nerate private-keys, CSRs and request certificate (-backend/secret.yaml)\nanuskp@CF-$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out .csr\nGenerating a private key\n................................................................+++++\n.......................................+++++\nwriting new private key to 'y'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:\nEmail Address []:\n\nPlease enter the following 'extra' be sent with your certificate request\nA challenge >●●●●●●●●●●●●\nAn optional company name []:: Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=After receiving the certificate, encode it with base64 and paste into -backend/secrets.yaml:  -> y  -> t  (*) Since this is a new environment, remove everything under "migration" key in -backend/place all user_passwords in /. for each ●●●●●●●●●●●●●●●●● a new, 32-char one and globally replace it in all configs.Go through /config_files one by one and adjust settings such as: Reltio, etc.(*) Change topics and consumergroup names to fit naming standards. This is a one-time activity and does not need to be repeated if next environments will be built based on config.Export amer-nprod CRDs into yaml file and import it in -nprod:\n$ kubectx kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml\n$ kubectx -apac\n$ kubectl apply -f ~/crd-definitions-amer.yaml\nCreate config dirs for git2consul (mdm-hub-env-config):\n$ git checkout config/dev_amer\n$ git pull\n$ git branch config/dev_apac\n$ git checkout config/dev_apac\n$ git push origin config/dev_apac\nRepeat for qa and stall operators:\n$ ./ -l operators -r -c nprod -e apac-dev -v 3.9.4\nInstall backend:\n$ ./ -l backend -r -c nprod -e apac-dev -v 3.9.4\nLog into mongodb (use port forward if there is no connection to : run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to mongo on ). 
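A minimal sketch of that connection, assuming the mongo shell is installed locally; the admin user and password are placeholders for the environment's own credentials:
kubectl port-forward mongo-0 -n apac-backend 27017:27017
# in a second terminal:
mongo mongodb://<admin-user>:<password>@localhost:27017/admin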
Run below script:\eateCollection("entityHistory") \eateIndex({country: -1}, {background: true, name: "idx_country"});\eateIndex({sources: -1}, {background: true, name: "idx_sources"});\eateIndex({entityType: -1}, {background: true, name: "idx_entityType"});\eateIndex({status: -1}, {background: true, name: "idx_status"});\eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"});\eateIndex({"osswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\eateIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\eateIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\eateIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});\n\eateIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});\n\eateCollection("entityRelations")\eateIndex({country: -1}, {background: true, name: "idx_country"});\eateIndex({sources: -1}, {background: true, name: "idx_sources"});\eateIndex({relationType: -1}, {background: true, name: "idx_relationType"});\eateIndex({status: -1}, {background: true, name: "idx_status"});\eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\eateIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\eateIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \eateIndex({"osswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \eateIndex({mdmSource: -1}, {background: true, name: "eateIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\eateIndex({countries: 1}, {background: true, name: "eateIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\eateIndex({type: 1}, {background: true, name: "idx_type"});\eateIndex({code: 1}, {background: true, name: "idx_code"});\eateIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\eateCollection("ErrorLogs")\eateIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\eateIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\eateIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\eateIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\eateCollection("batchEntityProcessStatus")\eateIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});\eateIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\eateIndex({batchName: -1, deleted: -1, : -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\eateIndex({batchName: , : -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\eateCollection("batchInstance")\n\eateCollection("relationCache")\eateIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\eateCollection("DCRRequests")\eateIndex({type: -1, : -1}, {background: true, name: 
"idx_typeStatusNameFind_TraceVR"});\eateIndex({entityURI: -1, : -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\eateIndex({changeRequestURI: -1, : -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\eateCollection("entityMatchesHistory")\eateIndex({_id: -1, "tchObjectUri": -1, "tchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\eateCollection("DCRRegistry")\eateIndex({"angeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});\eateIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\eateIndex({changeRequestURI: -1, : -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\eateIndex({type: -1, : -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\eateCollection("sequenceCounters")\sertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong)}) // NOTE: is -specific\nLog into . Export dashboards/indices from and import them in ./ -l mdmhub -r -c nprod -e apac-dev -v 3.9.4\nTickets: names ticket:Ticket queue: -NETWORK : Add domains to :Hi add below \\\\\\\\\\n\nas CNAMEs of our ELB:\Also, please add one CNAME for each one of below : \nELB: \n\nCNAME: \nELB: \n\nCNAME: \nELB: \n\nCNAME: \nELB: Best Regards,PiotrMDM whitelistingTicket queue: -NETWORK ECSTitle: Firewall exceptions for new PDKS clusterDescription:Hi open all traffic listed in attached Excel sheet.\nIn case this is not the queue where I should request Firewall changes, kindly point me in the right direction.\n\nBest excel:SourceSource IPDestinationDestination monitoring ()CI/CD server ()/-443MDM Hub monitoring ()CI/CD server ()EMEA NPROD MDM Hub10.90.98.0/24APAC cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●4439094Global Hub10.90.96.0/24APAC cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●443APAC cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Global NPROD MDM Hub10.90.96.0/248443APAC cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EMEA NPROD MDM Hub10.90.98.0/248443Integration tests:In mdm-hub-env-config prepare inventory/kube_dev_apac (copy and adjust variables)run "prepare_int_tests" prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"\nin mdm-hub-inbound-services confirm test resources (citrus properties) for mdm-integration-tests have been replaced and run two tasks:-mdm-gateway/mdm-interation-tests/Tasks/verification/commonIntegrationTests-mdm-gateway/mdm-interation-tests/Tasks/verification/integrationTestsForCOMPANYModel" }, { "title": "Configuration ()", "": "", "pageLink": "/pages/tion?pageId=", "content": "Installation of new prod cluster basing on prod py mdm-hub-cluster-env/amer/prod directory into mdm-hub-cluster-env/ ange dir names from "amer" to "" - apac-backend, -prodReplace everything in files in directory: "amer"→"apac".CertificatesGenerate private-keys, CSRs and request certificate (/config_files/certs).\nanuskp@CF-$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out .csr\nGenerating a private key\n..................+++++\n.........................+++++\nwriting new private key to 'y'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality 
Name (eg, city) []:\nOrganization Name (eg, company) [Internet Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:\nEmail Address []:\n\nPlease enter the following 'extra' be sent with your certificate request\nA challenge >●●●●●●●●●●●●\nAn optional company name []:: Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=Place private-key and signed certificate in /config_files/certs. Git-ignore them and encrypt them into .encrypt nerate private-keys, CSRs and request certificate (-backend/secret.yaml)\nanuskp@CF-$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out .csr\nGenerating a private key\n................................................................+++++\n.......................................+++++\nwriting new private key to 'y'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:\nEmail Address []:\n\nPlease enter the following 'extra' be sent with your certificate request\nA challenge >●●●●●●●●●●●●\nAn optional company name []:: Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=DNS Name=After receiving the certificate, encode it with base64 and paste into -backend/secrets.yaml:  -> y  -> t Raise a ticket via Request Manager (*) Since this is a new environment, remove everything under "migration" key in -backend/place all user_passwords in /prod/secrets.yaml. for each ●●●●●●●●●●●●●●●●● a new, 40-char one and globally replace it in all configs.Go through /config_files one by one and adjust settings such as: Reltio, etc.(*) Change topics and consumergroup names to fit naming standards. This is a one-time activity and does not need to be repeated if next environments will be built based on config.Export amer-prod CRDs into yaml file and import it in kubectx kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml\n$ kubectx kubectl apply -f ~/crd-definitions-amer.yaml\nCreate config dirs for git2consul (mdm-hub-env-config):\n$ git checkout config/dev_amer\n$ git pull\n$ git branch config/dev_apac\n$ git checkout config/dev_apac\n$ git push origin config/dev_apac\nRepeat for qa and stall operators:\n$ ./ -l operators -r -c prod -e apac-dev -v 3.9.4\nInstall backend:\n$ ./ -l backend -r -c prod -e apac-dev -v 3.9.4\ Log into mongodb (use port forward if there is no connection to : run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to mongo on localhost:27017) orretrieve ip address from service and add it to Windows hosts file as name (example. 
●●●●●●●●●●●● ) and connect to mongo on : Run below script:\eateCollection("entityHistory") \eateIndex({country: -1}, {background: true, name: "idx_country"});\eateIndex({sources: -1}, {background: true, name: "idx_sources"});\eateIndex({entityType: -1}, {background: true, name: "idx_entityType"});\eateIndex({status: -1}, {background: true, name: "idx_status"});\eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"});\eateIndex({"osswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\eateIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\eateIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\eateIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});\n\eateIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});\n\eateCollection("entityRelations")\eateIndex({country: -1}, {background: true, name: "idx_country"});\eateIndex({sources: -1}, {background: true, name: "idx_sources"});\eateIndex({relationType: -1}, {background: true, name: "idx_relationType"});\eateIndex({status: -1}, {background: true, name: "idx_status"});\eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\eateIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\eateIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \eateIndex({"osswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \eateIndex({mdmSource: -1}, {background: true, name: "eateIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\eateIndex({countries: 1}, {background: true, name: "eateIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\eateIndex({type: 1}, {background: true, name: "idx_type"});\eateIndex({code: 1}, {background: true, name: "idx_code"});\eateIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\eateCollection("ErrorLogs")\eateIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\eateIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\eateIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\eateIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\eateCollection("batchEntityProcessStatus")\eateIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});\eateIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\eateIndex({batchName: -1, deleted: -1, : -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\eateIndex({batchName: , : -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\eateCollection("batchInstance")\n\eateCollection("relationCache")\eateIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\eateCollection("DCRRequests")\eateIndex({type: 
-1, : -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\eateIndex({entityURI: -1, : -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\eateIndex({changeRequestURI: -1, : -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\eateCollection("entityMatchesHistory")\eateIndex({_id: -1, "tchObjectUri": -1, "tchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\eateCollection("DCRRegistry")\eateIndex({"angeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});\eateIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\eateIndex({changeRequestURI: -1, : -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\eateIndex({type: -1, : -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\eateCollection("sequenceCounters")\sertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong)}) // NOTE: is -specific\nRegionSeq start numberamer6000000000apac7000000000emea5000000000Log into . Export dashboards/indices from and import them in e the following playbook:- change values in  repository:inventory/jenkins/group_vars/all/all.yml → #CHNG- run playbook:  -playbook install_kibana_objects.yml -i inventory/jenkins/inventory --vault-password-file=../vault -vInstall mdmhub:\n$ ./ -l mdmhub -r apac -c prod -e apac-dev -v 3.9.4\nTickets: names ticket:Ticket queue: -NETWORK : Add domains to :Hi Team,Please add below domains:as CNAMEs of our :Also, please add one CNAME for each one of below ELBs:CNAME: ELB: CNAME: ELB: CNAME: ELB: CNAME: ELB: , whitelistingTicket queue: -NETWORK ECSTitle: Firewall exceptions for new PDKS clusterDescription:Hi open all traffic listed in attached Excel sheet.\nIn case this is not the queue where I should request Firewall changes, kindly point me in the right direction.\n\nBest excel:SourceSource IPDestinationDestination monitoring ()CI/CD server ()/-443MDM Hub monitoring ()CI/CD server ()EMEA prod MDM Hub10.90.98.0/24APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●4439094Global prod MDM Hub10.90.96.0/24APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●443APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Global prod MDM Hub10.90.96.0/248443APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EMEA prod MDM Hub10.90.98.0/248443Integration tests:In mdm-hub-env-config prepare inventory/kube_dev_apac (copy and adjust variables)run "prepare_int_tests" prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"\nin mdm-hub-inbound-services confirm test resources (citrus properties) for mdm-integration-tests have been replaced and run two tasks:-mdm-gateway/mdm-interation-tests/Tasks/verification/commonIntegrationTests-mdm-gateway/mdm-interation-tests/Tasks/verification/integrationTestsForCOMPANYModel" }, { "title": "Configuration (emea)", "": "", "pageLink": "/pages/tion?pageId=", "content": "Setup Mongo Indexes and Collections:eateIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});DCR Service 2 Indexes: Indexes\eateIndex({type: -1, : -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\eateIndex({"angeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});\eateIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\eateIndex({changeRequestURI: -1, : -1}, {background: true, name: 
"idx_changeRequestURIStatusNameFind_DSResponse"});\n" }, { "title": "Configuration (gblus prod)", "": "", "pageLink": "/pages/tion?pageId=", "content": "Config file: gblmdm-hub-us-spec_v05.xlsxAWS ResourcesResource NameResource TypeSpecificationAWS RegionAWS Availability ZoneDependen onDescriptionComponentsHUBGWInterfaceGBL Svr1 - amraelp00007844EC2r5.2xlargeus-east-1bEBS APP DATA MDM PROD SVR1EBS DOCKER DATA SVR1- Mongo - data redundancy and high availability   primary, secondary, tertiary needs to be hosted on a separated server and zones - high availability if one zone is offline- :     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750 - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM US APP DATA MDM PROD SVR2EBS DOCKER DATA SVR2- Mongo - data redundancy and high availability   primary, secondary, tertiary needs to be hosted on a separated server and zones - high availability if one zone is offline- :     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750 - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM US HUB Prod Data Svr3 - amraelp00007847EC2r5.2xlargeus-east-1bEBS APP DATA MDM PROD SVR3EBS DOCKER DATA MDM PROD SVR3- Mongo - data redundancy and high availability   primary, secondary, tertiary needs to be hosted on a separated server and zones - high availability if one zone is offline- :     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750 - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM HUB Prod Svc Svr1 - amraelp00007848EC2r5.2xlargeus-east-1bEBS APP SVC MDM PROD SVR1EBS DOCKER SVC MDM PROD SVR1- and zookeeper - Kong and replication factory set to 3 – proxy high availability     Load balancer for :     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 450 - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundGBL Prod Svc Svr2 - amraelp00007849EC2r5.2xlargeus-east-1bEBS APP SVC MDM PROD SVR2EBS DOCKER SVC MDM PROD SVR2- and zookeeper - Kong and replication factory set to 3 – proxy high availability     Load balancer for :     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 450 - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundGBL Svr3 - amraelp00007871EC2r5.2xlargeus-east-1eEBS APP SVC MDM PROD SVR3EBS DOCKER SVR3- and zookeeper - Kong and replication factory set to 3 – proxy high availability     Load balancer for Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 450 - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundEBS APP DATA GB XFSus-east-1bmount to /app on MDM Svr1 - amraelp00007844EBS APP DATA to /app on MDM Svr2 - amraelp00007870EBS APP DATA MDM Prod Svr3EBS750 GB XFSus-east-1bmount to /app on MDM HUB Prod Data Svr3 - amraelp00007847EBS DATA GB XFSus-east-1bmount to on MDM Svr1 - amraelp00007844EBS DATA GB XFSus-east-1emount to docker devicemapper on MDM Svr2 - amraelp00007870EBS DATA Svr3EBS50 GB XFSus-east-1bmount to on MDM HUB Prod Data Svr3 - amraelp00007847EBS APP SVC MDM Prod Svr1EBS450 to /app on MDM HUB Prod Svc Svr1 - amraelp00007848EBS APP SVC MDM Prod Svr2EBS450 GB 
XFSus-east-1bmount to /app on MDM HUB Prod Svc Svr2 - amraelp00007849EBS APP SVC MDM Prod Svr3EBS450 GB XFSus-east-1emount to /app on MDM HUB Prod Svc Svr3 - amraelp00007871EBS DOCKER SVC MDM Prod Svr1EBS50 GB XFSus-east-1bmount to on MDM HUB Prod Svc Svr1 - amraelp00007848EBS DOCKER SVC MDM Prod Svr2EBS50 GB XFSus-east-1bmount to on MDM HUB Prod Svc Svr2 - amraelp00007849EBS DOCKER SVC MDM Prod Svr3EBS50 GB XFSus-east-1emount to docker devicemapper on MDM HUB Prod Svc Svr3 - amraelp00007871GBLMDMHUB Bucketgblmdmhubprodamrasp101478S3us-east-1Load BalancerELBELBGBL MDM HUB Prod Svc Svr1GBL Svr2GBL MDM HUB Prod Svc Svr3MAP 443 - 8443 (only ) - ssl offloading on KONGDomain: NAME:  -CLB-ATP-MDMHUB-US-PROD-001DNS Name : SSL cert for doiman domain CertificateDomain : domain DNS RecordDNSAddress: -> Load BalancerRolesNameTypePrivilegesMember ofDescriptionReqeusts IDProvided accessUNIX-universal-awscbsdev-mdmhub-us-prod-computers- to hosts: MDM Svr1GBL Svr2GBL Svr3GBL Svr1GBL Svr2GBL MDM HUB Prod Svc Svr3Computer role including all servers-UNIX-GBLMDMHUB-US-PROD-ADMINUser Role- dzdo root - access to docker- access to docker-engine (systemctl) – restart, stop, start docker engineUNIX-GBLMDMHUB-US-PROD-U  Admin role to manage all resource on servers-KUCR - 20200519090759337WARECP - 20200519083956229GENDEL - 20200519094636480MORAWM03 - 20200519084328245PIASEM - 20200519095309490UNIX-GBLMDMHUB-US-PROD-HUBROLEUser Role- Read only for logs- dzdo docker - list docker container- dzdo docker logs * - check docker container logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-PROD-U  role without root access, read only for logs and check docker status. It will be used by monitoring-UNIX-GBLMDMHUB-US-PROD-SEROLEUser Role- dzdo docker * UNIX-GBLMDMHUB-US-PROD-U  service role - it will be used to run microservices  from CD pipeline-Service Account - GBL32452299imdmuspr mdmhubuspr - 20200519095543524UNIX-GBLMDMHUB-US-PROD-UUser Role- Read only for logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-PROD-U  -Ports - Security Group PFE-SG-GBLMDMHUB-US-APP-PROD-001 Port ApplicationWhitelisted8443Kong ( proxy)ALL from COMPANY VPN7000Cassandra ( DB)  - inter-node communicationALL from COMPANY VPN7001Cassandra ( DB) - inter-node communicationALL from )  - client portALL from COMPANY VPN9094Kafka - SASL_SSL protocolALL from COMPANY VPN9093Kafka - protocolALL from -broker communication   ALL from COMPANY VPN2181ZookeeperALL from COMPANY VPN2888Zookeeper - intercommunicationALL from COMPANY VPN3888Zookeeper - intercommunicationALL from COMPANY VPN27017MongoALL from COMPANY VPN9999HawtIO - administration consoleALL from COMPANY VPN9200ElasticsearchALL from COMPANY VPN9300Elasticsearch TCP - cluster communication portALL from COMPANY VPN5601KibanaALL from exportersALL from COMPANY VPN9542Kong exporterALL from COMPANY VPN2376Docker encrypted communication with the daemonALL from COMPANY VPNDocumentationService Account ( / server access ) - UNIX - user access to Servers: to add user access to UNIX-GBLMDMHUB-US-PROD-ADMINlog in to   - UNIXuser access to Servers -  to Request Manager -> Request Catalog Search Requests for ntinueFill Formula Add user access details formualAccount , MikolajAD Username-MORAWM03User Domain-EMEARequestID-20200310100151888Request Details BelowRoleName: YesDescription:requestorCommentsList:Hi Team,I created the request to add account () to the role on the following 
servers:amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871Role name: UNIX-GBLMDMHUB-US-PROD-ADMIN-U -> member of: UNIX-universal-awscbsdev-mdmhub-us-prod-computers-U (UNIX-GBLMDMHUB-US-PROD-U)Could you please verify if I provided all required information?Regards,MikolajaccessToSpecificServerList_roleLst_2: : access toGBL Svr1 - amraelp00007844GBL - amraelp00007870GBL HUB Prod Data Svr3 - amraelp00007847GBL Prod Svc Svr1 - amraelp00007848GBL - amraelp00007849GBL HUB Prod Svc Svr3 - amraelp00007871regarding Fletcher projectserverLocationList: Not ApplicablenisDomainOtherList: OtherroleGroupAccount_roleLst_6: Add to Role Group(s)roleGroupNameList: UNIX-GBLMDMHUB-US-PROD-ADMIN-UaccountPrivilegeList_roleLst_7: Add PrivilegesaccountList_roleLst_8: group membershipunixGroupNameList: UNIX-GBLMDMHUB-US-PROD-ADMIN-USubmit requestHow to add/create new Service Account with access to UNIX-GBLMDMHUB-US-PROD-SEROLEService Account NameUNIX group  mdmusprmdmhubusprService Account Name has to contain max 8 Requires Additional Information (GBL32099918i).msglog in to   - UNIXuser access to Servers -  to Request Manager -> Request Catalog Search Requests for -> LegacyYesExistingLegacyamraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871N/AOtherTo manage the account and for the MDM HUBIt will be used to run microservices from CD pipelinePrimary: VARGAA08Secondary: TIRUMS05Service AccountService Account Name: group namePROD:mdmuspr Account Name have to contain 8 charactersMDM access (related to microservices and CD) forGBL MDM US HUB Prod Data Svr1 - amraelp00007844GBL - amraelp00007870GBL HUB Prod Data Svr3 - amraelp00007847GBL Prod Svc Svr1 - amraelp00007848GBL - amraelp00007849GBL HUB Prod Svc Svr3 - amraelp00007871regarding ,I am trying to create the request to create for the following two servers. 
amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871I want to provide the privileges for this Service Account:Role name: UNIX-GBLMDMHUB-US-PROD-SEROLE-U -> member of: UNIX-GBLMDMHUB-US-PROD-U  -> UNIX-universal-awscbsdev-mdmhub-us-prod-computers-U- docker * - folder access read/writeComputer role related: UNIX-universal-awscbsdev-mdmhub-us-prod-computers-UCould you please verify if I provided all the required information and this Request is correct?Regards, DIR: /app/mdmusprHow to open ports / create new - PFE-SG-GBLMDMHUB-US-APP-PROD-001 create a new security group:Create server Security Group and Open Ports on  queue Name: GBL-BTI-IOD AWS FULL SUPPORTlog in to  go to Get Support Search for queue:  FULL SUPPORTSubmit Request to this queue:,Could you please create a new security group and assign it with these MDM Svr1 - GBL Svr3 - GBL HUB Prod Svc Svr1 - GBL MDM HUB Prod Svc Svr3 - Please add the following owners:Primary: VARGAA08Secondary: TIRUMS05(please let me know if approval is group Requested: -APP-PROD-001Please Open the following ports: ( proxy) ALL from COMPANY VPN7000 Cassandra ( DB) - inter-node communication ALL from COMPANY VPN7001 Cassandra ( DB) - inter-node communication ALL from COMPANY VPN9042 Cassandra ( DB) - client port ALL from COMPANY VPN9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN9093 protocol ALL from COMPANY VPN9092 KAFKA - Inter-broker communication ALL from ALL from COMPANY VPN2888 Zookeeper - intercommunication ALL from COMPANY VPN3888 Zookeeper - intercommunication ALL from COMPANY VPN27017 Mongo ALL from COMPANY VPN9999 HawtIO - administration console ALL from Elasticsearch ALL from COMPANY VPN9300 Elasticsearch TCP - cluster communication port ALL from ALL from exporters ALL from COMPANY VPN9542 Kong exporter ALL from VPN2376 encrypted communication with the daemon ALL from COMPANY this group to the following servers:,MikolajThis will create a new Security Group these security groups have to be assigned to servers through the portal by the Servers open new ports:log in to  go to Get Support Search for queue:  FULL SUPPORTSubmit Request to this queue:RequestHi,Could you please modify the below security group and open the following D security group:Security group: -APP-PROD-001Port: 2376(this port is related to for encrypted communication with the daemon)The host related to this:amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871Regards, ConfigurationKafka GO TO:How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias -keyalg -keysize 2048 -keystore ystore.jks -dname "CN=, O=COMPANY, L=mdm_gbl_us_hub, C=US"keytool -certreq -alias -file .csr -keystore ystore.jksSAN:●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● guest_user for - "CN=est_, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US":GO TO: How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_ystore.jks -dname "CN=est_, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US"keytool -certreq -alias guest_user -file est_.csr -keystore guest_ystore.jksKongopenssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out gbl-mdm-hub-us-prod.csrSubject Alternative ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EFKPROD_GBL_USopenssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out Subject Alternative Names ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●esnode1openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out mdm-esnode1-gbl-us-prod.csr - Elasticsearch esnode1Subject Alternative 
Names ●●●●●●●●●●●●●●esnode2openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out mdm-esnode2-gbl-us-prod.csr - Elasticsearch esnode2Subject Alternative Names ●●●●●●●●●●●●●●esnode3openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out mdm-esnode3-gbl-us-prod.csr - Elasticsearch esnode3Subject Alternative Names ●●●●●●●●●●●●●Domain Configuration:Example request: GBL30514754i "Register domains "mdm-log-management*"log in to  can we help you with? - Search for "Network Team Ticket"Select the most relevant topic - " Request"Submit a ticket to this queue.Ticket Details: - GBL32508266iRequestHi,Could you please register the following domains:ADD the below entry:========================              Alias Record to                             [●●●●●●●●●●●●●]Kind regards,MikolajRequest DNSHi,Could you please register the following domains:ADD the below entry for the : PFE-CLB-ATP-MDMHUB-US-PROD-001:========================              Alias Record to                              Name : Referenced creation ticket: GBL32561307iKind regards,MikolajEnvironment InstallationDISC:server1 amraelp00007844    APP DISC: nvme1n1   DOCKER DISC: nvme2n1server2 amraelp00007870   APP DISC: nvme2n1   DOCKER DISC: nvme1n1server3 amraelp00007847   APP DISC: nvme2n1   DOCKER DISC: nvme1n1server4 amraelp00007848   APP1 DISC: : nvme3n1   DOCKER DISC: nvme1n1server5 : nvme3n1    DOCKER DISC: nvme1nserver6    APP1 DISC: : nvme3n1    DOCKER DISC: nvme1n1Pre:umount /var/lib/dockerlvremove /dev/datavg/varlibdockervgreduce datavg /dev/nvme1n1vi /etc/fstabRM - /dev/mapper/datavg-varlibdocker /var/lib/docker ext4 defaults 1 2rmdir /var/lib/ -> dockermkdir /app/dockerln /app/docker /var/lib/dockerStart docker service after prepare_env_airflow_certs playbook run is completedClear content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" to use deamon.json fileAnsible:ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileCN_NAME=SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileCN_NAME=SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileCN_NAME=SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileCN_NAME=SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-fileCN_NAME=SUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit 
server6 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-fileCN_NAME=Docker Version:amraelp00007844:root:[04:57 AM]:/home/morawm03> docker version 1.13.1, build b2f74b2/1.13.1amraelp00007870:root:[04:57 AM]:/home/morawm03> docker version 1.13.1, build b2f74b2/1.13.1amraelp00007847:root:[04:57 AM]:/home/morawm03> docker version 1.13.1, build b2f74b2/1.13.1amraelp00007848:root:[04:57 AM]:/home/morawm03> docker version 1.13.1, build b2f74b2/1.13.1amraelp00007849:root:[04:57 AM]:/home/morawm03> docker version 1.13.1, build b2f74b2/1.13.1amraelp00007871:root:[05:00 AM]:/home/morawm03> docker version 1.13.1, build b2f74b2/1.13.1Configure Registry Login ():ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-fileRegistry (manual config):  Copy certs: /etc/docker/certs.d/ from (mdm-reltio-handler-env\\ssl_certs\\registry)  login  (login on service account too)  user/pass: mdm/**** (check -handler-env\\group_vars\\all\\secret.yml)Playbooks installation order:Install node_exporter (run on user with root access - systemctl node_exprter installation): -playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file -playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file -playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file -playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file -playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file -playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-fileInstall ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall Kafka TOPICS: -playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --limit --vault-password-file=~/vault-password-fileInstall ansible-playbook install_hub_mongo_rs_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileUpdate Config -playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-fileVerification: openssl s_client 
-connect :8443 -servername -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/openssl s_client -connect :8443 -servername -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/openssl s_client -connect :8443 -servername -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/-playbook install_efk_stack.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall services : mongo_exporter: -playbook -i inventory/prod_gblus/inventory --limit mongo3_exporter --vault-password-file=~/vault-password-file : -playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file sqs_exporter: -playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-fileInstall Consul -playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file# After operation get SecretID from consul container. On the container execute the following consul  bootstrapand copy it as mgmt_token to consul secrets.ymlAfter install consul step run update consul playbook with proper mgmt_token (secret.yml) in every execution for each node.Update Consul -playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul2 --vault-password-file=~/vault-password-file -v ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul3 --vault-password-file=~/vault-password-file and Collections:Create Collections and Indexes\nCreate and Indexes:\n entityHistory\n\n eateIndex({country: -1}, {background: true, name: "idx_country"});\n eateIndex({sources: -1}, {background: true, name: "idx_sources"});\n eateIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n eateIndex({status: -1}, {background: true, name: "idx_status"});\n eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n eateIndex({"osswalks.type": 1}, {background: true, name: "eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n eateIndex({mdmSource: -1}, {background: true, name: "eateIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\n eateIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"}); \n \n \n \n\n entityRelations\n eateIndex({country: -1}, {background: true, name: ": -1}, {background: true, name: "idx_sources"});\n eateIndex({relationType: -1}, {background: true, name: "idx_relationType"});\n eateIndex({status: -1}, 
{background: true, name: "idx_status"});\n eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n eateIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n eateIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n eateIndex({"osswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \n eateIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n\n\n\n LookupValues\n eateIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\n eateIndex({countries: 1}, {background: true, name: "idx_countries"});\n eateIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\n eateIndex({type: 1}, {background: true, name: "eateIndex({code: 1}, {background: true, name: "eateIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\n\n ErrorLogs\n eateIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n eateIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n eateIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n eateIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\tbatchEntityProcessStatus\n \eateIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});\n\t eateIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\n\t\eateIndex({batchName: -1, deleted: -1, : -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\n\t\eateIndex({batchName: -1, : -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\n\n batchInstance\n\t\t- create collection\n\n\trelationCache\n\t\eateIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\n eateIndex({type: -1, : -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n eateIndex({entityURI: -1, : -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\n eateIndex({changeRequestURI: -1, : -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n \n entityMatchesHistory \n eateIndex({_id: -1, "tchObjectUri": -1, "tchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\n ENV with Prometheus:Prometheus config\nnode_exporter\n - targets:\n - ":9100"\n - ":9100"\n - ":9100"\n - ":9100"\n - ":9100"\n - ":9100"\n labels:\n env: gblus_prod\n component: node\n \n\nkafka\n - targets:\n - ":9101"\n labels:\n env: gblus_prod\n node: 1\n component: kafka\n - targets:\n - ":9101"\n labels:\n env: gblus_prod\n node: 2\n component: kafka\n - targets:\n - ":9101"\n labels:\n env: gblus_prod\n node: 3\n component: kafka\n \nkafka_exporter\n - targets:\n - ":9102"\n labels:\n trade: gblus\n node: 1\n component: kafka\n env: gblus_prod\n - targets:\n - ":9102"\n labels:\n trade: gblus\n node: 2\n component: kafka\n env: gblus_prod\n - targets:\n - ":9102"\n labels:\n trade: gblus\n node: 3\n component: kafka\n env: gblus_prod \n \n \nComponents:\n jmx_manager\n - targets:\n - ":9104"\n labels:\n env: gblus_prod\n node: 1\n component: manager\n - targets:\n - ":9104"\n labels:\n env: 
gblus_prod\n node: 2\n component: manager\n - targets:\n - ":9104"\n labels:\n env: gblus_prod\n node: 3\n component: manager \n \n jmx_event_publisher\n - targets:\n - ":9106"\n labels:\n env: gblus_prod\n node: 1\n component: publisher\n - targets:\n - ":9106"\n labels:\n env: gblus_prod\n node: 2\n component: publisher\n - targets:\n - ":9106"\n labels:\n env: gblus_prod\n node: 3\n component: publisher\n \n jmx_reltio_subscriber\n - targets:\n - ":9105"\n labels:\n env: gblus_prod\n node: 1\n component: subscriber\n - targets:\n - ":9105"\n labels:\n env: gblus_prod\n node: 2\n component: subscriber\n - targets:\n - ":9105"\n labels:\n env: gblus_prod\n node: 3\n component: subscriber\n \n jmx_batch_service\n - targets:\n - ":9107"\n labels:\n env: gblus_prod\n node: 1\n component: batch_service\n - targets:\n - ":9107"\n labels:\n env: gblus_prod\n node: 2\n component: batch_service\n - targets:\n - ":9107"\n labels:\n env: gblus_prod\n node: 3\n component: batch_service\n \n batch_service_actuator\n - targets:\n - ":9116"\n labels:\n env: gblus_prod\n node: 1\n component: batch_service\n - targets:\n - ":9116"\n labels:\n env: gblus_prod\n node: 2\n component: batch_service\n - targets:\n - ":9116"\n labels:\n env: gblus_prod\n node: 3\n component: batch_service\n \n \nsqs_exporter \n - targets:\n - ":9122"\n labels:\n env: gblus_prod\n component: sqs_exporter\n\n \n \ncadvisor\n \n - targets:\n - ":9103"\n labels:\n env: gblus_prod\n node: 1\n component: cadvisor_exporter\n - targets:\n - ":9103"\n labels:\n env: gblus_prod\n node: 2\n component: cadvisor_exporter \n - targets:\n - ":9103"\n labels:\n env: gblus_prod\n node: 3\n component: cadvisor_exporter \n - targets:\n - ":9103"\n labels:\n env: gblus_prod\n node: 4\n component: cadvisor_exporter \n - targets:\n - ":9103"\n labels:\n env: gblus_prod\n node: 5\n component: cadvisor_exporter \n - targets:\n - ":9103"\n labels:\n env: gblus_prod\n node: 6\n component: cadvisor_exporter \nmongodb_exporter\n \n - targets:\n - ":9120"\n labels:\n env: gblus_prod\n component: mongodb_exporter\n \n \nkong_exporter\n - targets:\n - ":9542"\n labels:\n env: gblus_prod\n node: 1\n component: kong_exporter\n - targets:\n - ":9542"\n labels:\n env: gblus_prod\n node: 2\n component: kong_exporter\n - targets:\n - ":9542"\n labels:\n env: gblus_prod\n node: 3\n component: kong_exporter\n" }, { "title": "Configuration (gblus)", "": "", "pageLink": "/pages/tion?pageId=", "content": "Config file: gblmdm-hub-us-spec_v04.xlsxAWS TypeSpecificationAWS RegionAWS Availability ZoneDependen onDescriptionComponentsHUBGWInterfaceGBL HUB nProd Svr1 amrae4PFE--MULTI-AZ-DEV-us-east-1EC2r5.2xlargeus-east-1bEBS APP DATA DOCKER DATA SVR1- Mongo -  no data redundancy for :     Mount 50G - docker installation directory    Mount 1000GB - /app/ - docker applications local storageOS: Red Hat Enterprise Linux Server release 7.3 (Maipo)mongoEFKHUBoutboundGBL MDM HUB nProd Svr2 amrae5PFE-AWS-MULTI-AZ-DEV-us-east-1EC2r5.2xlargeus-east-1bEBS APP DATA SVR2EBS DOCKER DATA SVR2- and zookeeper - Kong and Cassandra- Disks:     Mount 50G - docker installation directory    Mount 500 - /app/ - docker applications local storageOS: Red Hat Enterprise Linux Server release 7.3 ( APP DATA to /app on amrae4EBS APP DATA to /app on amrae5EBS DATA GB XFSus-east-1bmount to on amrae4EBS DOCKER DATA GB XFSus-east-1bmount to on  cert for doiman domain CertificateDomain : domain DNS RecordDNSAddress: RolesNameTypePrivilegesMember ofDescriptionReqeusts -global-mdmhub-us-nprod-computers- to 
hosts: MDM HUB nProd Svr1GBL HUB nProd Svr2Computer role including all Role- dzdo root - access to docker- access to docker-engine (systemctl) – restart, stop, start docker engineUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UAdmin role to manage all resource on serversNSA-UNIX: 20200303065003900KUCR - GBL32099554iWARECP - GENDEL - G7iMORAWM03 - GBL32097468iUNIX-GBLMDMHUB-US-NPROD-HUBROLE-UUser Role- Read only for logs- dzdo docker - list docker container- dzdo docker logs * - check docker container logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-Urole without root access, read only for logs and check docker status. It will be used by monitoringNSA-UNIX: 20200303065731900UNIX-GBLMDMHUB-US-NPROD-SEROLE-UUser Role- dzdo docker * UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-Uservice role - it will be used to run microservices  from CD pipelineNSA-UNIX: 20200303070216948Service only for logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UNSA-UNIX: 20200303070544951Ports - Security Group PFE-SG-GBLMDMHUB-US-APP-NPROD-001 Port ApplicationWhitelisted8443Kong ( proxy)ALL from COMPANY VPN9094Kafka - SASL_SSL protocolALL from COMPANY VPN9093Kafka - protocolALL from COMPANY VPN2181ZookeeperALL from COMPANY VPN27017MongoALL from COMPANY VPN9999HawtIO - administration consoleALL from COMPANY VPN9200ElasticsearchALL from COMPANY VPN5601KibanaALL from exportersALL from COMPANY VPN9542Kong exporterALL from COMPANY VPN2376Docker encrypted communication with the daemonALL from ports between and to  and  - this is required to open ports between WBS<>IOD blocked traffic ( the requests take some time to finish so request at the beginning) A connection is required from (●●●●●●●●●●●●●)                       to (●●●●●●●●●●●●●) port . This connection is between airflow and docker host to run gblus DAGs.                       to (●●●●●●●●●●●●●) port 22. This connection is between airflow and docker host to run gblus DAGs.      2. A connection is required from the instance (gbinexuscd01 - ●●●●●●●●●●●●●).                       to (●●●●●●●●●●●●●) port 22. 
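Once the security-group and firewall requests above are approved, the listed ports can be checked quickly from the source host before continuing. A minimal sketch, assuming nc is available; amrae4 and amrae5 are the truncated host names used on this page and should be replaced with the full server names, and the port list is an illustrative subset of the ports in the ticket:

#!/usr/bin/env bash
# Reachability check for the ports requested above (host names and ports are placeholders).
HOSTS="amrae4 amrae5"
PORTS="22 2376 8443 9093 9094 27017"
for host in $HOSTS; do
  for port in $PORTS; do
    if nc -z -w 5 "$host" "$port" 2>/dev/null; then
      echo "OPEN    $host:$port"
    else
      echo "BLOCKED $host:$port"
    fi
  done
done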
This connection is between and the target host required for code deployment cumentationService Account ( / server access ) - UNIX - user access to Servers: to add user access to in to   - UNIXuser access to Servers -  to Request Manager -> Request Catalog Search Requests for ntinueFill Formula Add user access details formualAccount , MikolajAD Username-MORAWM03User Domain-EMEARequestID-20200310100151888Request Details BelowRoleName: YesDescription:requestorCommentsList: Hi Team,I created the request to add account (EMEAMORAWM03 to the role on the following servers:amrae4amrae5Role name: UNIX-GBLMDMHUB-US-NPROD-ADMIN-U -> member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U -> NSA-UNIX: 20200303065003900Could you please verify if I provided all required information?Regards,MikolajaccessToSpecificServerList_roleLst_2: : access toGBL HUB nProd Svr1 (amrae4) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM HUB nProd Svr2 () - PFE-AWS-MULTI-AZ-DEV-us-east-1regarding : Not ApplicablenisDomainOtherList: OtherroleGroupAccount_roleLst_6: Add to Role Group(s)roleGroupNameList: UNIX-GBLMDMHUB-US-NPROD-ADMIN-UaccountPrivilegeList_roleLst_7: Add PrivilegesaccountList_roleLst_8: group membershipunixGroupNameList: UNIX-GBLMDMHUB-US-NPROD-ADMIN-USubmit requestHow to add/create new Service Account with access to -GBLMDMHUB-US-NPROD-SEROLE-UService Account NameUNIX group  mdmusnprmdmhubusnprService Account Name has to contain max 8 charactersGBL32099918iRE Requires Additional Information (GBL32099918i).msglog in to   - UNIXuser access to Servers -  to Request Manager -> Request Catalog Search Requests for -> LegacyYesExistingLegacyamrae4amrae5N/ manage the account and for the MDM HUBIt will be used to run microservices from CD pipelinePrimary: VARGAA08Secondary: TIRUMS05Service AccountService Account Name: group nameNPROD:mdmusnpr Account Name have to contain 8 charactersMDM access (related to microservices and CD) forGBL MDM HUB nProd Svr1 (amrae4) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM HUB nProd Svr2 () - PFE-AWS-MULTI-AZ-DEV-us-east-1regarding ,I am trying to create the request to create for the following two servers. 
want to provide the privileges for this Service Account:Role name: UNIX-GBLMDMHUB-US-NPROD-SEROLE-U -> member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U -> NSA-UNIX: - dzdo docker * - folder access read/writeComputer role related: UNIX-IoD-global-mdmhub-us-nprod-computers-UCould you please verify if I provided all required information and this Request is correct?Regards,MikolajHow to open ports / create new - PFE-SG-GBLMDMHUB-US-APP-NPROD-001 create a new security group:Create server Security Group and Open Ports on  queue Name: GBL-BTI-IOD AWS FULL SUPPORTlog in to  go to Get Support Search for queue:  FULL SUPPORTSubmit Request to this queue:,Could you please create a new security group and assign it with two MDM HUB nProd Svr1 (amrae4) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM HUB nProd Svr2 () - PFE-AWS-MULTI-AZ-DEV-us-east-1Please add the following owners:Primary: VARGAA08Secondary: TIRUMS05(please let me know if approval is group Requested: -APP-NPROD-001Please Open the following ports:Port  Application Whitelisted8443 Kong ( proxy) ALL from COMPANY VPN9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN9093 - SASL_SSL protocol ALL from ALL from COMPANY VPN 27017 Mongo ALL from COMPANY VPN9999 HawtIO - administration console ALL from Elasticsearch ALL from ALL from exporters ALL from COMPANY this group to the following servers:amrae4amrae5Regards,MikolajThis will create a new Security Group these security groups have to be assigned to servers through the portal by the Servers open new ports:log in to  go to Get Support Search for queue:  FULL SUPPORTSubmit Request to this queue:RequestHi,Could you please modify the below security group and open the following security group:Security group: -APP-NPROD-001Port: 2376(this port is related to for encrypted communication with the daemon)The host related to this:amrae4amrae5Regards,MikolajCertificates ConfigurationKafka - GBL32139266i  GO TO:How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias -keyalg RSA -keysize 2048 -keystore ystore.jks -dname "CN=, O=COMPANY, L=mdm_gbl_us_hub, C=US"keytool -certreq -alias -file .csr -keystore ystore.jksSAN:●●●●●●●●●●●●●●●●●●●●●●●● guest_user for - "CN=est_, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US":GO TO: How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_ystore.jks -dname "CN=est_, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US"keytool -certreq -alias guest_user -file est_.csr -keystore guest_ystore.jksKong - GBL32144418iopenssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out -us-nprod.csrSubject Alternative ●●●●●●●●●●●●●●●●●●●●●●●●EFK - GBL32139762i  ,  rsa:2048 -sha256 -keyout y -out mdm-log-management-gbl-us-nonprod.csr Subject Alternative Names ●●●●●●●●●●●●●●●●●●●●●●●●openssl req -nodes -newkey rsa:2048 -sha256 -keyout y -out Names ●●●●●●●●●●●●●●●●●●●●●●●●Domain Configuration:Example request: GBL30514754i "Register domains "mdm-log-management*"log in to  can we help you with? 
- Search for "Network Team Ticket"Select the most relevant topic - " Request"Submit a ticket to this queue.Ticket Details:RequestHi,Could you please register the following domains:ADD the below entry:========================              Alias Record to                             [●●●●●●●●●●●●]                                        Alias Record to                             [●●●●●●●●●●●●]Kind regards,MikolajEnvironment InstallationPre:rmdir /var/lib/ -> dockerln /app/docker /var/lib/dockerumount /var/lib/dockerlvremove /dev/datavg/varlibdockervgreduce datavg /dev/nvme1n1Clear content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" to use deamon.json fileAnsible:ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-filecopy daemon_docker_tls_overlay.json. to /etc/docker/daemon.jsonFIX using -  sudo cp /lib/systemd/system/rvice /etc/systemd/system/\n$ sudo sed -i 's/\\ -H\\ fd:\\/\\///g' /etc/systemd/system/rvice\n$ sudo systemctl daemon-reload\n$ sudo service docker restartDocker Version:amrae4:root:[10:10 AM]:/app> docker version 1.13.1, build b2f74b2/1.13.1amrae5:root:[10:04 AM]:/app> docker version 1.13.1, build b2f74b2/1.13.1[root@amraelp00008810 docker]# docker version -ce, build 4484c46Configure Registry Login ():ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●●● root accessansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●● service accountansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-fileRegistry (manual config):  Copy certs: /etc/docker/certs.d/ from (mdm-reltio-handler-env\\ssl_certs\\registry)  docker login  (login on service account too)  user/pass: mdm/**** (check -handler-env\\group_vars\\all\\secret.yml)Playbooks installation order:Install node_exporter:    -playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    -playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file -playbook install_prometheus_node_exporter.yml -i 
inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-fileInstall   ansible-playbook install_hub_broker.yml -i inventory/dev_gblus/inventory --limit broker --vault-password-file=~/vault-password-fileInstall   -playbook install_hub_db.yml -i inventory/dev_gblus/inventory --limit mongo --vault-password-file=~/vault-password-fileInstall   -playbook install_mdmgw_gateway_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-fileUpdate KONG Config (IT NEEDS TO BE UPDATED ON EACH ENV (DEV, QA, STAGE)!!)  -playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file  Verification:    openssl s_client -connect :8443 -servername -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/-playbook install_efk_stack.yml -i inventory/dev_gblus/inventory --limit efk --vault-password-file=~/vault-password-fileInstall (without this docker loggin may not work and docker commands will be blocked)  -playbook install_fluentd_forwarder.yml -i inventory/dev_gblus/inventory --limit docker-services --vault-password-file=~/vault-password-fileInstall services :  mongo_exporter:    -playbook -i inventory/dev_gblus/inventory --limit mongo_exporter1 --vault-password-file=~/vault-password-file  :    -playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    -playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file  sqs_exporter:     -playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    -playbook install_prometheus_stack.yml -i inventory/stage_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    -playbook install_prometheus_stack.yml -i inventory/qa_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-fileInstall Consul -playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file# After operation get SecretID from consul container. 
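The SecretID mentioned here comes from bootstrapping the Consul ACL system inside the Consul container. A minimal sketch, assuming the container is simply named consul; the command referenced in the note below is taken to be the standard consul acl bootstrap:

# Sketch only: the container name and the secrets file location are assumptions.
docker exec consul consul acl bootstrap
# The output contains an AccessorID and a SecretID; copy the SecretID and store it
# as mgmt_token in the Consul secrets.yml used by the update_consul.yml playbook.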
On the container execute the following consul  bootstrapand copy it as mgmt_token to consul secrets.ymlAfter install consul step run update consul playbookUpdate Consul -playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v Setup Mongo Indexes and Collections:Create Collections and Indexes\nCreate and Indexes:\n entityHistory\n\n eateIndex({country: -1}, {background: true, name: "idx_country"});\n eateIndex({sources: -1}, {background: true, name: "idx_sources"});\n eateIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n eateIndex({status: -1}, {background: true, name: "idx_status"});\n eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n eateIndex({"osswalks.type": 1}, {background: true, name: "eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n eateIndex({mdmSource: -1}, {background: true, name: "eateIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\n eateIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"}); \n \n \n \n\n entityRelations\n eateIndex({country: -1}, {background: true, name: ": -1}, {background: true, name: "idx_sources"});\n eateIndex({relationType: -1}, {background: true, name: "idx_relationType"});\n eateIndex({status: -1}, {background: true, name: "idx_status"});\n eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n eateIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n eateIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n eateIndex({"osswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \n eateIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n\n\n\n LookupValues\n eateIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\n eateIndex({countries: 1}, {background: true, name: "idx_countries"});\n eateIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\n eateIndex({type: 1}, {background: true, name: "eateIndex({code: 1}, {background: true, name: "eateIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\n\n ErrorLogs\n eateIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n eateIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n eateIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n eateIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\tbatchEntityProcessStatus\n eateIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});\n eateIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\n eateIndex({batchName: -1, deleted: -1, : -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\n eateIndex({batchName: -1, : -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\n 
batchInstance\n\t\t- create collection\n\n\trelationCache\n\t\eateIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\n eateIndex({type: -1, : -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n eateIndex({entityURI: -1, : -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\n eateIndex({changeRequestURI: -1, : -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n \n entityMatchesHistory \n eateIndex({_id: -1, "tchObjectUri": -1, "tchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\n\n Connect ENV with :Update config -  -playbook install_prometheus_configuration.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-filePrometheus config\nnode_exporter\n - targets:\n - ":9100"\n - ":9100"\n labels:\n env: gblus_dev\n component: node\n\n\nkafka\n - targets:\n - ":9101"\n labels:\n env: gblus_dev\n node: 1 \n component: kafka\n \n \nkafka_exporter\n\n - targets:\n - ":9102"\n labels:\n trade: gblus\n node: 1\n component: kafka\n env: gblus_dev \n\n\nComponents:\n jmx_manager\n - targets:\n - ":9104"\n labels:\n env: gblus_dev\n node: 1\n component: manager\n - targets:\n - ":9108"\n labels:\n env: gblus_qa\n node: 1\n component: manager\n - targets:\n - ":9112"\n labels:\n env: gblus_stage\n node: 1\n component: manager \n jmx_event_publisher\n - targets:\n - ":9106"\n labels:\n env: gblus_dev\n node: 1\n component: publisher \n - targets:\n - ":9110"\n labels:\n env: gblus_qa\n node: 1\n component: publisher - targets:\n - ":9104"\n labels:\n env: gblus_stage\n node: 1\n component: publisher \n jmx_reltio_subscriber\n - targets:\n - ":9105"\n labels:\n env: gblus_dev\n node: 1\n component: subscriber\n - targets:\n - ":9109"\n labels:\n env: gblus_qa\n node: 1\n component: subscriber\n - targets:\n - ":9113"\n labels:\n env: gblus_stage\n node: 1\n component: subscriber\n jmx_batch_service\n - targets:\n - ":9107"\n labels:\n env: gblus_dev\n node: 1\n component: batch_service\n - targets:\n - ":9111"\n labels:\n env: gblus_qa\n node: 1\n component: batch_service\n - targets:\n - ":9115"\n labels:\n env: gblus_stage\n node: 1\n component: batch_service\n\nsqs_exporter \n - targets:\n - ":9122"\n labels:\n env: gblus_dev\n component: sqs_exporter\n - targets:\n - ":9123"\n labels:\n env: gblus_qa\n component: sqs_exporter\n - targets:\n - ":9124"\n labels:\n env: gblus_stage\n component: sqs_exporter\n\n\ncadvisor\n\n - targets:\n - ":9103"\n labels:\n env: gblus_dev\n node: 1\n component: cadvisor_exporter\n - targets:\n - ":9103"\n labels:\n env: gblus_dev\n node: 2\n component: cadvisor_exporter \n\n\n \nmongodb_exporter\n\n - targets:\n - ":9120"\n labels:\n env: gblus_dev\n component: mongodb_exporter\n \n\nkong_exporter\n - targets:\n - ":9542"\n labels:\n env: gblus_dev\n component: kong_exporter\n" }, { "title": "Getting access to and Kubernetes clusters", "": "", "pageLink": "/display/GMDM/Getting+access+to+PDKS+Rancher+and+Kubernetes+clusters", "content": "Go to nsa-unix and select first link (-UNIX)You will see the form for requesting an access which should be fulfilled like on an example below: Do you need to be added to any Role Groups? YESDo you need privileged access to specific Servers in a Role Group? 
provide Other Add to Role Group(s) UNIX-GBLMDMHUB-US-PROD-ADMIN-U or UNIX-GBLMDMHUB-US-NPROD-ADMIN-U (depends on an environment)Please provide information about Account Privileges: Add Privileges  Please choose   group provide Name:  UNIX-GBLMDMHUB-US-PROD-COMPUTERS-U or UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UPlease provide a brief Business Justification:For prod:atp-mdmhub-prod-ameratp-mdmhub-prod-emeaatp-mdmhub-prod-apacPDKS EKS clusters regarding project r nprod:-nprod-ameratp-mdmhub-nprod-emeaatp-mdmhub-nprod-apacPDKS EKS clusters regarding project ments or Special Instructions:  I am creating this request to have an access to prod clusters. " }, { "title": "UI:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Add new role and add users to the ", "": "", "pageLink": "/display/GMDM/Add+new+role+and+add+users+to+the+UI", "content": "MDM HUB UI roles standards:Here is the role standard that has to be used to get access to the by specific users:EnvironmentsNON-PRODPRODDEVQASTAGEPRODGBL****EMEA****AMER****APAC****GBLUS****ALL****Use the 'ALL' keyword with connection to the 'NON-PROD' and 'PROD' - using this approach will produce only 2 roles for the le Schema:______ - COMM - ALL or e.t.c (recommendation is name> - MDMHUB  - UI  - PROD / NON-PROD  or specific based on a table above HUB_ADMIN / PTRS e.t.c Important: name has to be in sync with HUB configuration users in e.g     ROLEexample roles:HUB ADMIN → COMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLE - HUB group for hub-admin users - access to all clusters, and non-prod B ADMIN → COMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE - HUB group for hub-admin users - access to all clusters, and prod RS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod RS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB group for PTRS users - access to all clusters, and prod e system is the user name used in HUB. All users related to the specific system can have access to the specific r example, if someone from the PTRS system wants to have access to the , how to process such request:Add user to existing roleGo to a group:If a role is found in search results you can check current members or request a new memberadd a new user:savego to submit the request.If the role does not exist:First, create a new role:click Create a NEW Security Group -EMEAname - the name of a group primary owner - owner  - Mikołaj MorawskiDescription - e.g. 
HUB group for hub-admin users - access to all clusters, and prod you can add users to this groupSecond, configure roles and access to the user in HUB:Important: name has to be in sync with HUB configuration users in  Users can have access to the following roles and APIs: and ADMIN roles:MODIFY_KAFKA_OFFSET             - "/kafka/offset" allows modifying offset on specific topics related to the systemRESEND_KAFKA_EVENT               - "/jobs/hub/resend_events" - resend events to a specific topicUPDATE_IDENTIFIERS                 -   "/jobs/hub/update_identifiers" - starts update identifiers flowMERGE_UNMERGE_ENTITIES         - "/jobs/hub/merge_unmerge_entities" - starts merge unmerge flow REINDEX_ENTITIES                         - "/jobs/mdm/reindex_entities" - executes Reltio Reindex APICLEAR_CACHE_BATCH                  - "/jobs/hub/clear_batch_cache" - executes clear batch cache operationHUB ADMIN roles:RESEND_KAFKA_EVENT_COMPLEX    - "/jobs/hub/resend_events" - resend events to a specific topic using complex  RECONCILE                - "/jobs/hub/reconciliation_entities" - regenerates events to HUB using simple - starts JOBRECONCILE_COMPLEX        - "/jobs/hub/reconciliation_entities_complex" - regenerates events to HUB using complex - starts the                    - "/precallback/partials") - list or resubmit partials that stuck in the queueAdd roles and topics to the user:.e.g: "kafka" section with specific kafka topics:Add mdm admin section with specific roles and access to topics:e.g.     mdm_admin:      reconciliationTargets:        - emea-dev-out-full-ptrs-eu        - emea-dev-out-full-ptrs-global2        - emea-qa-out-full-ptrs-eu        - emea-qa-out-full-ptrs-global2        - emea-stag-out-full-ptrs-eu        - emea-stag-out-full-ptrs-global2        - gbl-dev-out-full-ptrs        - gbl-dev-out-full-ptrs-eu        - gbl-dev-out-full-ptrs-porind        - gbl-qa-out-full-ptrs-eu        - gbl-stage-out-full-ptrs        - gbl-stage-out-full-ptrs-eu        - gbl-stage-out-full-ptrs-porind      sources:        - ALL      countries:        - ALL      roles: &roles        - MODIFY_KAFKA_OFFSET        - RESEND_KAFKA_EVENT      kafka: *kafkaREMEMBER TO ADD: Add mdm_auth  section  this  will  start  the    access.Without this section the will not show HUB Admin tools! mdm_auth: roles: *rolesThe mdm_auth section and roles there will cause the user will only see 2 pages in - in that case, MODIFY OFFSET and RESET_KAFKA_EVENTSWhen the roles and users are configured on the HUB end go to the first step and add selected users to the selected arting from this time any new e.g. PTRS user can be added to the COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE and will be able to log in to and see the pages and use through ." }, { "title": "Current users and roles", "": "", "pageLink": "/display//Current+users+and+roles", "content": "EnvironmentClientClusterRoleCOMPANY UsersHUB internal userNON-PRODMDMHUBALLCOMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLEALL HUB Team Members +rganin@ivedi@e.g.    ALL HUB Team Members+rganin@. " }, { "title": " and roles", "": "", "pageLink": "/display//SSO+and+roles", "content": "To login to dashboard You have to be in COMPANY network. 
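Before moving on, the role-configuration pieces described in the Add new role and add users to the UI section above can be pulled together into a single user entry. A sketch only; the file name, system name, and topic names are illustrative and do not come from the real HUB configuration:

# Illustrative user entry for a PTRS-style system (all values are placeholders).
cat > ptrs-user-example.yaml <<'EOF'
ptrs_user:
  kafka: &kafka
    topics:
      - gbl-dev-out-full-ptrs          # illustrative topic names
      - gbl-stage-out-full-ptrs
  mdm_admin:
    reconciliationTargets:
      - gbl-dev-out-full-ptrs
      - gbl-stage-out-full-ptrs
    sources:
      - ALL
    countries:
      - ALL
    roles: &roles
      - MODIFY_KAFKA_OFFSET
      - RESEND_KAFKA_EVENT
    kafka: *kafka
  mdm_auth:        # without this section the UI will not show the HUB Admin tools
    roles: *roles
EOF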
sso authorization is made by , using COMPANY th flowSSO loginSAML login roleAfter successful authentication with we are receiving roles from Manager - distribution list)Then we are decoding roles using following regexp:COMM_(?[A-Z]+)_MDMHUB_UI_(?NON-PROD|PROD)_(?.+)_ROLEWhen role is matching environment and tenant we are getting roles by searching system in user ckend AD groupsServiceNPROD GroupPROD GroupDescriptionKibanaCOMM_ALL_MDMHUB_KIBANA_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KIBANA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KIBANA_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_KIBANA_PROD_VIEWER_ROLEGrafanaCOMM_ALL_MDMHUB_GRAFANA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_GRAFANA_PROD_VIEWER_ROLEAkhqCOMM_ALL_MDMHUB_KAFKA_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KAFKA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KAFKA_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_KAFKA_PROD_VIEWER_ROLEMonitoringCOMM_ALL_MDMHUB_ALL_NON-PROD_MON_ROLECOMM_ALL_MDMHUB_ALL_PROD_MON_ROLEThis groups aggregates users that are responsible for monitoring of MDMHUB AirflowCOMM_ALL_MDMHUB_AIRFLOW_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_AIRFLOW_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_AIRFLOW_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_AIRFLOW_PROD_VIEWER_ROLE" }, { "title": "UI Connect Guide", "": "", "pageLink": "/display/GMDM/UI+Connect+Guide", "content": "Log in to and switch log in to please use the following link: in to using your COMPANY credentials:There is no need to know each address, you can easily switch between using the following link (available on the TOP RIGHT corner in near the USERNAME):What pages are available with the default VIEW roleBy default, you are logged in with the default VIEW role, the following pages are available:HUB StatusYou can use the HUB Dashboard main page that contains HUB platform status: Event processing details, refresh time, started batches and to load data to or get Events from gestion Services ConfigurationThis page contains the documentation related to the checks, Source Match Categorization, Cleansing & Formatting, , and Minimum Viable Profile can choose a filter to switch between different entity types and use input boxes to filter can use the 'Category' filter to include the operations that you are interested inYou can use the 'Query' filter and put any text to find what you are looking for (e.g. 'prefix' to find rules with prefix word)You can use the 'Date' filter to find rules created or updated after a specific time - now using this filter you can easily find the rules added after data reload and reload data one more time to reflect changes. This page contains also documentation related to duplicate identifiers and noise can choose a  filter to switch between different entity types and use input boxes to filter resultsIngestion Services TesterThis page contains the tester, input JSON and click the 'Test' button to check the output JSON with all rules appliedClick the 'Difference' to get only changed sectionsClick the 'Validation result' to get the rules that were re details here: HUB UI User GuideWhat operations are available in the UIAs a user, you can request access to the technical operations in HUB. 
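As a side note to the role-name convention quoted in the SSO and roles section above, the decoding can be exercised locally. A sketch; the capture-group names used below (tenant, env, system) are illustrative because the original named groups are not readable in this export:

# Decode a UI role name against the documented pattern (group names are assumptions).
role="COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE"
if [[ $role =~ ^COMM_([A-Z]+)_MDMHUB_UI_(NON-PROD|PROD)_(.+)_ROLE$ ]]; then
  echo "tenant=${BASH_REMATCH[1]} env=${BASH_REMATCH[2]} system=${BASH_REMATCH[3]}"
else
  echo "role name does not match the convention"
fi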
The details on how to access more operations are described in the section below.Here you will get to know the different operations and what can be done using these operations:HUB Admin allows to: operationOn this page user can modify offset on specific consumer groupSystem/User that wants to have access to this page will be allowed to maintain the consumer group offset, change to:latestearliestspecific date timeshift by a specific number of B ReconciliationTechnical operationUsed internally by is operation allows us to mimic Reltio events generation - this operation generates the events to the input HUB topic so that we can reprocess the can use this page and generates events by:provide an input array with entity/relation URIsorprovide the query and select the source/market that you want to reprocess. operationThis operation can be used to generate events for your topicUse case - you are consuming data from HUB and you want to test something on non-prod environments and consume events for a specific market one more time. You want to receive 1000 events for market for your can use this page and generates events for the target topic:Specify and Target Reconciliation topic - as a result, you will receive the ltio ReindexTechnical operationThis operation executes the Reltio Reindexing operationYou can use this page and generates events by:provide the query and select the source/market that you want to reprocess.orprovide the input file with entity/relation URIs, that will be sent to Unmerge EntitiesBusiness operationThis operation consumes the input file and executes the merge/unmerge operations in ReltioMore details about the file and process are described here:  unmergeUpdate operationThis operation consumes the input file and executes the merge/unmerge operations in ReltioMore details about the file and process are described here:  update identifiersClear operationClear Batch CacheMore details about the file and process are described here: Batch clear data load cacheHow to request additional access to new operationsPlease send the following email to the HUB DL: DL-ATP_MDMHUB_SUPPORT@Subject:HUB UI - Access request for Body:Please provide the access / update the existing access for to HUB Admin DetailsComments:1Action neededAdd user to the HUB user in the HUB UI (please provide the existing group name)2TenantGBL, , , , / - more details in EnvironmentsBy default please select ALL , but if you need access only to a specified one please select.3Environments PROD / NON-PROD  or specific: PRODBy default please select environments, but if you need access only to a specified one please select.4Permissions rangeChoose the operation: OffsetHUB ReconciliationKafka Republish EventsReltio ReindexMerge/Unmerge EntitiesUpdate IdentifiersClear Cache5COMPANY TeamETL/COMPANY or or e.t.c8Business access to execute merge unmerge operation in of contactIf you are from the system please provide the email and system details.7Sourcesrequired in Events/Reindex/Reconciliation operations3Countriesrequired in Events/Reindex/Reconciliation operationsThe request will be processed after approval. In the response, you will receive the Group Name. Please use this for future reference.e.g. PTRS system roles used in the PTRS system to manage operations.   PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.   
PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB group for PTRS users - access to all clusters, and prod will use the following SOP to add you to a selected role: Add a new role and add users to the HelpIn case of any questions, the page or full HUB documentation is available here ( page footer):GetHelpWelcome to !" }, { "title": "Users:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Add Direct API User to HUB", "": "", "pageLink": "/display//Add+Direct+API+User+to+HUB", "content": "To add a new user to direct a few steps must be done. That document describes what activities must be fulfilled and who is responsible fot eate PingFederate user - client's responsibility  If the client's authentication method is then there is a need to create add a user you must have a user created: How to Request PingFederate (PXED) External OAuth 2.0 Account Caution: If the authentication method is key auth then generates it and sends it securely way to the nd a request to that contains all necessary data - client's responsibility Send a request to create a new user with direct access to : dl-atp_mdmhub_support@The request must contain as follows:1Action needed2PingFederate username3Countries4Tenant5Environments6Permissions range7Sources8Business justification9Point of contact10GatewayDescriptionAction needed – this is a place where you decide if you want to create a new user or modify the existing one.PingFederate username – you need to create a user on the side. Its username is crucial to authenticate on the HUB side. If you do not have a user please check: - list of countries that access to will be grantedTenant – a tenant or list of tenants where the user will be created. Please notice that if you have a connection from open internet only is possible. If you have a local application split to Reltio Region it is recommended to request a local tenant. If you have a global solution you can call and your requests will be routed by HUB.Environments – list of environment instances – range – do you need to write or read/write? To which entities do you need access? – to which sources do you need to have access?Business justification – please describeWhy do you have a connection with HUB?Why the user must be created/modified?What’s the project name?Who’s the project manager?Point of contact – please add a group name - in case of any issues connected with that userWhich you want to call: , , ,etcPrepare new user on side - HUB Team Responsibility Store clients' request in dedicated confluence space: ClientsIn the COMPANY tenants, there is a need to connect the new user with ange router configuration, and add a new user with:user name or when the user uses key auth add key to secrets.yamlsourcescountriesrolesChange Manager configuration, addsourcescountriesChange service configuration - if applicabledcrServiceConfig-  initTrackingDetailsStatus, initTrackingDetail, dcrTyperoles - CREATE_DCR, GET_DCRYou need to check how the request will be routed. 
If there is a  need to make a routing configuration, follow these steps:change configuration by adding new countries to proper tenantschange Manager configuration in destinated tenant by addingsourcescountries" }, { "title": "Add External User to ", "": "", "pageLink": "/display//Add+External+User+to+MDM+Hub", "content": "Kong configurationFirstly You need to have users logins from for every envGo folder inventory/{{ kong_env }}/group_vars/kong_v1 in repository mdm-hub-env-configFind section PLUGINS in file _{{ env }}.yml and then rule with name mdm-external-oauthin this section find "users_map"add there new entry with following rule:\n- ":"\nchange False to True in create_or_update setting for this rule\ncreate_or_update: True\nRepeat this steps( a-c ) for every environment {{ env }} you want to apply changes to(e.g., dev, qa, stage){{ kong_env }} - environment on which instance is deployed{{ env }} - environment on which instance is , , stageprodproddev_gblusdev_gblus, qa_gblus, stage_gblusprod_gblusprod_gblusdev_usdev_usprod_usprod_usGo to folder inventory/{{ env }}/group_vars/gw-servicesIn file gw_users.yml add section with new user after last added user, specify roles and sources needed for this user. E.g.,User configuration\n- name: ""\n description: ""\n defaultClient: "ReltioAll"\n getEntityUsesMongoCache: : roles:\n - \n countries:\n - US\n sources: \n\t- \nRepeat this step for every environment {{ env }} you want to apply changes to( e.g., dev, qa, stage)After configuration changes You need to update using following commandfor nonprod gblus envsGBLUS update\nansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/cret\nfor prod gblus envGBLUS PROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/cret\nfor nprod gbl envsGBL update\nansible-playbook update_kong_api_v1.yml -i inventory/dev/inventory --vault-password-file=~/cret\nfor prod gbl envGBL PROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/prod/inventory --vault-password-file=~/cret\nfor nprod envUS update\nansible-playbook update_kong_api_v1.yml -i inventory/dev_us/inventory --vault-password-file=~/cret\nfor prod update\nansible-playbook update_kong_api_v1.yml -i inventory/prod_us/inventory --vault-password-file=~/cret\nTroubleshootingIn case when there will be a problem with deploying You need to set create_or_update as True also for route and manager secretTo use this script You need to have cret file created in your home directory or adjust vault-password-file if other option is to change --vault-password-file to --ask-vault and provide ansible vault during the fore commiting changes find all occurrences where You set create_or_update to true and change it again to:\ncreate_or_update: False\nThen commit changesRedeploy gateway services on all modified envs. Before deploying please verify if there is no batch running in progressJenkins job to deploy gateway services:" }, { "title": "Add new Batch to HUB", "": "", "pageLink": "/display//Add+new+Batch+to+HUB", "content": "To add a new batch to   a few steps must be done. That document describes what activities must be fulfilled and who is responsible for eck source and country configurationThe first step is to check if rules and are configured for the new source. 
Repository: mdm-config-registry; Path: \\config-hub\\\\mdm-manager\\quality-service\\quality-rules\\If not you have to immediately send an email to a person that requested a new batch. This condition is usually performed on a separate task as prerequisite to adding the batch configuration."This is a new source. You have to send and requirements for a new source to and . Based on it a new HUB requirement deck will be prepared. When we received it the task can be planned. Until that time the task is blocked." The same exercise has to be made when we get requirements for a new thorization and authenticationClients use mdmetl batch service user to populate data to Reltio. There is no changes nd a request to that contains all necessary data - client's responsibility Send a request to create a new batch to : dl-atp_mdmhub_support@The request must contain as follows:subject arealist of stages sourcecountries listsource /incrementalfrequencybussines justificationsingle point of contact on client sidePrepare new batch on side - HUB Team Responsibility Repository: mdm-hub-cluster-envChanges on manager levelIn mdmetl.yaml configuration must be extended with:Path: \\\\\\users\\mdmetl.yamlNew sourcesNew countriesAdd new batch with stages to batch_service, example:batch_service: defaultClient: "ReltioAll" description: " Informatica IICS User - BATCH loader" batches: "": <- new batch name - "" <- new stage - "HCOLoading" <- new stage - "RelationLoading" <- new stageIn the manager config, if the batch includes stage then add to the refAttributesEnricher configuration relationType: ProviderAffiliationsrelationType: ContactAffiliationsrelationType: ACOAffiliationsNew sourcesNew countriesChanges in batch-service levelBased on stages that are adding there is a need to change a batch-service th: \\\\\\namespaces\\\\config_files\\batch-service\\config\\application.ymlAdd configuration in , example:- batchName: "PFORCERX_ODS" batchDescription: "PFORCERX_ODS - HCO, , entities loading" stages: - stageName: "HCOLoading" - stageName: "HCOSending" softDependentStages: [ "HCOLoading" ] processingJobName: "SendingJob" - stageName: "HCOProcessing" dependentStages: [ "HCOSending" ] processingJobName: "ProcessingJob" # -------------------------------- - stageName: "HCPLoading" - stageName: "HCPSending" softDependentStages: [ "HCPLoading" ] processingJobName: "SendingJob" - stageName: "HCPProcessing" dependentStages: [ "HCPSending" ] processingJobName: "ProcessingJob" # ------------------ - stageName: "RelationLoading" - stageName: "RelationSending" dependentStages: [ "HCOProcessing", "HCPProcessing" ] softDependentStages: [ "RelationLoading" ] processingJobName: "SendingJob" - stageName: "RelationProcessing" dependentStages: [ "RelationSending" ] processingJobName: " batch is full load than two additional stages must be configured, it destination is to allows deletating profiles:- stageName: "EntitiesUnseenDeletion" dependentStages: [ "HCOProcessing" ] processingJobName: "DeletingJob"- stageName: "HCODeletesProcessing" dependentStages: [ "EntitiesUnseenDeletion" ] processingJobName: "ProcessingJob"2. 
Add configuration to bulkConfiguration, example:"PFORCERX_ODS": HCOLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-hco" maxInFlightRequest: 5 HCPLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-hcp" maxInFlightRequest: 5 RelationLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-rel" maxInFlightRequest: 5All new dedicated topic must be configured. There is a need to add configuration in kafka-topics.yml, example:emea-prod-internal-batch-pulse-kam-hco: partitions: 6 replicas: 33. Add configuration in sendingJob, example:PFORCERX_ODS: HCOSending: source: topic: "${env}-internal-batch-pforcerx-ods-hco" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack" : source: topic: "${env}-internal-batch-pforcerx-ods-hcp" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack" RelationSending: source: topic: "${env}-internal-batch-pforcerx-ods-rel" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"4. If a batch is full load then deletingJob must be configured, for example:PULSE_KAM: EntitiesUnseenDeletion: maxDeletesLimit: 10000 queryBatchSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioResponseTopic: "${env}-internal-async-all-mdmetl-user-ack"" }, { "title": "How to Request PingFederate (PXED) External OAuth 2.0 Account", "": "", "pageLink": "/display//How+to+Request+PingFederate+%28PXED%29+External+OAuth+2.0+Account", "content": "This instruction describes the Client steps that should be triggered to create the account. Referring to security requirements HUB should only know the details about the UserName created by . HUB is not requesting external accounts, passwords and all the details are shared only with the Client. The client is sharing the user name to HUB and only after the User name is configured Client will gain the access to HUB resources. Contact Persons:, <> / - All details related to VCAS Reference number, ID ( Solution profile number and other details.  (PXED) - DL-CIT-PXED Operations <>; , >Details required to fulfill the request are in this doc:User Name standard: -MDM_clientSteps:Go to Search For Application type: PXED Pick - Application enablement with enterprise authentication services (, LDAP and/or SSO)Fulfill the request and send.Wait for the user name and passwordAfter confirmation share the Client Id with HUB and wait for the grant of access. Do not share the password. 
EXAMPLE: For the Reference Example request send for user:Request TicketG9iTicket IDNameVarganin, user nameAD UsernameVARGAA08Requested user IdUser DomainAMERRegion (...)Request ID20200717112252425request IDHosting locationExternalHosting location of the Client services: ( or   Reference AS Reference numberData FeedNo, / - requests send to then - API/ServicesApplication access methodsWeb of access for the Client application - (Intranet/Web Browser e.t.c) Application User baseCOMPANY colleaguesContractorsApplication User baseApplication access devicesLaptop/DesktopTablets ( access devicesApplication (External - Internet / Internal - Intranet)Application NameRequested application name that requires new accountCMDB ID (Production Deployment) ID ( profile number.... Solution profile numberNumber of users for the mber of users for the applicationConcurrent ncurrent UsersCommentsApplication-to-Application Integration using NSA (.)  PTRS will use REST APIs to authenticate to and access is application will access (MDM_client) and will need account (KOL-MDM_client) for access to those APIs/Servicesfull description of requested account and integrationApplication ScopeAll UsersApplication ScopeReferenced tickets (only for example / reference purposes):" }, { "title": "Hub Operations", "": "", "pageLink": "/display/GMDM/Hub+Operations", "content": "" }, { "title": "Airflow:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Checking that Process Ends Correctly", "": "", "pageLink": "/display/GMDM/Checking+that+Process+Ends+Correctly", "content": "To check that process ended without any issues you need to login into and check the Alerts Monitoring PROD dashboard. You have to check rows in the GBL PROD Airflow DAG's panel. If you can see red rows (like on blow screenshot) it means that there occured some issues:Details of issues are available in the ." }, { "title": "Common Problems", "": "", "pageLink": "/display//Common+Problems", "content": "Failed task getEarliestUploadedFileDuring reviewing of failed DAG you noticed that the task getEarliestUploadedFile has failed state. In the task's logs you can see the line like this:[ 18:44:07,082] {{docker_:252}} INFO - Unable to find the earliest uploaded file. directory is empty?The issue is because getEarliestUploadedFile was not able to download the export file. In this case you need to check the localtion and verify that the correct export file was uploded to valid location." }, { "title": "Deploy Airflow Components", "": "", "pageLink": "/display//Deploy+Airflow+Components", "content": "Deployment procedure is implemented as playbook. The source code is stored in configuration repository. The runnable file is available under the path:   and can be run by the command: ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventory  Deployment has following steps: Creating directory structure on execution host, Templating configuration files and transferring those to config location, Creating DAG, variable and connections in Apache Airflow, Restarting Airflow instance to apply configuration changes. After successful deployment the dag and configuration changes should be available to trigger in . 
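As a usage sketch of the deployment command above (the environment name emea_dev and the host group airflow_01 are only illustrative assumptions; the inventory layout follows the command already given):
# run from the ansible directory of the configuration repository
ansible-playbook install_mdmgw_airflow_services.yml -i inventory/emea_dev/inventory
# optionally restrict the run to a single host defined in that inventory
ansible-playbook install_mdmgw_airflow_services.yml -i inventory/emea_dev/inventory --limit airflow_01
After the playbook finishes, the new DAG and configuration changes should be visible in the Airflow UI, as noted above.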
" }, { "title": "Deploying DAGs", "": "", "pageLink": "/display/GMDM/Deploying+DAGs", "content": "To deploy newly created DAG or configuration changes you have to run the deployment procedure implemented as playbook install_mdmgw_airflow_services.yml:-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventoryIf you you have access to you can also use ' jobs: Each environment has its own deploy job. Once you choose the right job you have to:1 Click the button "Build Now": 2 After the stage icon "Choose dags to deploy" will be active and will wait for choosing DAG to deploy:3 Choose the DAG you wanted to deploy and approve you ter this job will deploy all changes made by you to 's server." }, { "title": "Error Grabbing Grapes - hub_reconciliation_v2", "": "", "pageLink": "/display//Error+Grabbing+Grapes+-+hub_reconciliation_v2", "content": "In hub_reconciliation_v2 airflow , during stage  entities_generate_hub_reconciliation_events grape error might occur:\ltipleCompilationErrorsException: startup failed:\nGeneral error during conversion: Error grabbing Grapes\n(...)\nCause:That could be caused by connectivity/configuration around:For this dag dependencies are mounted in container. Mounted directory is located in airflow server on path: /app/airflow/{{ env_name }}/hub_reconciliation_v2/tmp/.groovy/grapes/To solve this problem copy libs from working dag. E.g. " }, { "title": "Batches (Batch Service):", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Adding a New Batch", "": "", "pageLink": "/display//Adding+a+New+Batch", "content": "1. Add batch to batch_service.yml in the following sections- add batch info to section batchWorkflows - add basing on some already defined- add bulk configuration- add to sendingJob- add to deletingJob if needed2. Add source and user for batch to batch_service_users.yml- add for user mdmetl_nprod apropriate source and . Add user to:for / GBLUS - /inventory//group_vars/gw-services/gw_users.ymlfor - /config_files/manager/config/users- for appropriate source, country and . Add topic to bundle section in manager/config/application.yml 5. Add kafka topicsWe use manager to add new topics which can be found under directory /inventory//group_vars//manager/topics.ymlFirstly set create_or_update to True after creation of topics change to False7. Create topics and redeploy services by using Redeploy gateway on others envs qa, stage, prod only if there is no batch running - check it in mongo on batchInstance collection using following query: {"status" : "STARTED"}9. Ask if new source should be added to dq rules" }, { "title": "Cache Address ID Clear (Remove Duplicates) Process", "": "", "pageLink": "/display//Cache+Address+ID+Clear+%28Remove+Duplicates%29+Process", "content": "This process is similar to the Cache Address ID Update Process . 
So the user should load the file to mongo and process it with the following steps: Download the files that were indicated by the user and apply on a specific environment (sometimes only STAGE and sometimes all envs)For example - 3 files - /us/prod/inbound/cdw/one-time-feeds/other/Merge these file to one file - Duplicate_Address_Ids_.txtProceed with the based on the Cache Address ID Update load to the removeIdsFromkeyIdRegistry collectionmongoimport --host=localhost:27017 --username=admin --password=zuMMQvMl7vlkZ9XhXGRZWoqM8ux9d08f7BIpoHb --authenticationDatabase=admin --db=reltio_stage --collection=removeIdsFromkeyIdRegistry --type=csv --columnsHaveTypes --fields="_ring(),ring(),ring(),64(),_ring()" --file=EXTRACT_Duplicate_Address_Ids_16042021.txt --mode=insertCLEAR keyIdRegistrydocker exec -it mongo_mongo_1 bashcd /data/configdbNPROD - nohup mongo duplicate_address_ids_clear.js &PROD   - nohup mongo --host mongo_reltio_repl_set/:27017,:27017,:28017 -u mdm_hub -p --authenticationDatabase reltio_prod REFERENCE SCRIPT:\nCLEAR keyIdRegistry\n db = tSiblingDB('reltio_dev')\n th("mdm_hub", "")\n \n db = tSiblingDB('reltio_prod')\n th("mdm_hub", "")\n\n\n\n print("START")\n var start = new Date().getTime();\n\n\n var cursor = tCollection("removeIdsFromkeyIdRegistry").aggregate( \n [\n \n ], \n { \n "allowDiskUse" : false\n }\n )\n \n rEach(function (doc){\n tCollection("keyIdRegistry").remove({"_id": doc._id});\n });\n\n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration + " ms")\n print("END")\n\n\n nohup mongo duplicate_address_ids_clear.js &\n\n nohup mongo --host mongo_reltio_repl_set/:27017,:27017,:28017 -u mdm_hub -p --authenticationDatabase reltio_prod batchEntityProcessStatus checksumsdocker exec -it mongo_mongo_1 bashcd /data/configdbNPROD - nohup mongo unset_checsum_duplicate_address_ids_clear.js &PROD   - nohup mongo --host mongo_reltio_repl_set/:27017,:27017,:28017 -u mdm_hub -p --authenticationDatabase reltio_prod REFERENCE SCRIPT\nCLEAR batchEntityProcessStatus\n\n db = tSiblingDB('reltio_dev')\n th("mdm_hub", "")\n \n db = tSiblingDB('reltio_prod')\n th("mdm_hub", "")\n\n\n print("START")\n var start = new Date().getTime();\n var cursor = tCollection("removeIdsFromkeyIdRegistry").aggregate( \n [\n ], \n { \n "allowDiskUse" : false\n }\n )\n \n rEach(function (doc){\n var key = y var arrVars = key.split("/");\n \n var type = "configuration/sources/"+arrVars[0]\n var value = arrVars[3];\n \n print(type + " " + value)\n \n var result = tCollection("batchEntityProcessStatus").update(\n { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },{ $set: { "checksum": "" } },{ multi: true}\n )\n \n printjson(result);\n \n });\n \n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration + " ms")\n print("END")\n\n nohup mongo unset_checsum_duplicate_address_ids_clear.js &\n \n nohup mongo --host mongo_reltio_repl_set/:27017,:27017,:28017 -u mdm_hub -p --authenticationDatabase reltio_prod nohup outputCheck few rows and verify if these rows do not exist in the KeyIdRegistry collectionCheck few profiles and verify if the checksum was cleared in the BatchEntityProcessStatus collectionISSUE - for the profiles there is a difference between the generated cache and the corresponding SUE - for the profiles there is a difference between the generated cache and the corresponding profile. - check the crosswalks values in COMPANY_ADDRESS_ID_EXTRACT_PAC_files - should be e.g. 
00002b9b--456c-959c-fd5b04ed04b8ISSUE - for the ENGAGE 1.0 profiles there is a difference between the generated cache and the corresponding profile.  check the crosswalks values in COMPANY_ADDRESS_ID_EXTRACT_ENG_ files - should be e.g 00002b9b--456c-959c-fd5b04ed04b8Please check the following example:CUST_SYSTEM,CUST_TYPE,,SRC_CUST_ID,SRC_CUST_ID_TYPE,,PFZ_CUST_ID,SRC_SYS,MDM_SRC_SYS,EXTRACT_DTPROBLEM : HCPM,HCP,,,HCE,,,HCPS,HCPS,2021-04-15OK            : HCPM,,a012K000022cqBoQAI,0012K00001lCEyYQAW,,VVA,2021-04-15For the crosswalk is equal to the 001A000001VgOEVIA3 and it is easy to match with the profile and clear the cache for the generated row is equal to the - COMPANYAddressIDSeq|ONEKEY/HCP/HCE//,ONEKEY/HCP/HCE//,COMPANYAddressIDSeq,,yIdRegistryThe  is not a crosswalk so to remove the checksum from the BatchEntityProcessStatus collection there is a need to find the profile in Reltio - crosswalk si WUSM01113231 - and clear the cache in the BatchEntityProcessStatus my example, there was only one crosswalk. So it was easy to find this profile. For multiple profiles, there is a need to find the solution. ( I think we need to ask to provide the file for with an additional crosswalk column, so we will be able to match the crosswalk with the Key and clear the checksum)    Solution: once we receive KeyIdRegstriy Update file ask to generate crosswalks ids - simple CSV fileThe file received from does not contain crosswalks id, only COMPANYAddressIds - example input -  DT Team and download CSV fileLoad the file to TMP collection in e.g. - AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511Execute the batchEntityProcessStatus based on crosswalks ID list \n\n db = tSiblingDB('reltio_dev')\n th("mdm_hub", "")\n \n db = tSiblingDB('reltio_prod')\n th("mdm_hub", "")\n\n\n print("START")\n var start = new Date().getTime();\n var cursor = tCollection("AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511").aggregate( \n [\n ], \n { \n "allowDiskUse" : false\n }\n )\n \n rEach(function (doc){\n \n var type = "configuration/sources/ONEKEY";\n var value = PANYcustid_individualeid;\n \n print(type + " " + value)\n \n var result = tCollection("batchEntityProcessStatus").update(\n { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },{ $set: { "checksum": "" } },{ multi: true}\n )\n \n printjson(result);\n \n });\n \n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration + " ms")\n print("END")\n" }, { "title": "Changelog of removed duplicates", "": "", "pageLink": "/display//Changelog+of+removed+duplicates", "content": " - DROP keys          Duplicate_Address_Ids.txt         /Duplicate_Address_Ids.txt > - DROP keys STAGE GBLUS          Duplicate_Address_Ids_16042021.txt - 11 380 - 1 , , CENTRIS           inbound/Duplicate_Address_Ids_16042021.txt > EXTRACT_Duplicate_Address_Ids_16042021.txt & - DROP STAGE GBLUS          Duplicate_Address_Ids_17052021.txt - 25121 - 1           inbound/Duplicate_Address_Ids_17052021.txt > EXTRACT_Duplicate_Address_Ids_17052021.txt25.06.2021 - DROP STAGE GBLUS          Duplicate_Address_Ids_17052021.txt - 71509, 2           inbound/Duplicate_Address_Ids_25062021.txt > - DROP PROD GBLUS          Duplicate_Address_Ids_12072021.txt - 4550 Duplicate_Address_Ids_12072021.txt - us/prod/inbound/cdw/one-time-feeds/          inbound/Duplicate_Address_Ids_12072021.txt > EXTRACT_Duplicate_Address_Ids_12072021.txt & " }, { "title": "Cache Address ID Update Process", "": "", "pageLink": 
"/display/GMDM/Cache+Address+ID+Update+Process", "content": "1. Log using browser to production bucket and go to dir /us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ and check last update . Log using mdmusnpr service user to server  using . files from using below commanddocker run -u : -e "AWS_ACCESS_KEY_ID=" -e "AWS_SECRET_ACCESS_KEY=" -e "AWS_DEFAULT_REGION=us-east-1" -v /app/mdmusnpr/AddressID/inbound:/src:z mesosphere/aws-cli sync ://gblmdmhubprodamrasp101478/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ /src4. After syncing check new files with those two commads replacing new_file_name with name of the file which was updated. Check in script file that SRC_SYS and MDM_SRC_SYS exists, if not something is wrong and probably script needs to be updated ask the person who asked for address id updatecut -d',' - | sort | uniqcut -d',' - | sort | . Remove old extracts from /app/mdmusnpr/AddressIDrm EXTRACT_6. Run script which will prepare data for mongonohup ./ inbound/ > EXTRACT_ &Wait until processing in foreground finishes. Check after some time using below command:ps ax | grep scriptIf process is marked as done You can continue with next file or if there is no more files You can proceed to next step.7. Log in using Your user to the server and change to . Go to /app/mongo/config and remove old extractsrm EXTRACT_9. Go to /app/mdmusnpr/AddressID and copy new extracts to mongocp EXTRACT_ /app/mongo/config/10. Run mongo shelldocker exec -it mongo_mongo_1 bashcd /data/configdb11. Execute following command for each non prod env and for every new extract file - reltio_dev, reltio_qa, reltio_stagemongoimport --host=localhost:27017 --username=admin --password= --authenticationDatabase=admin --db= --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_ring(),ring(),ring(),64(),_ring()" --file=EXTRACT_ --mode=upsertWrite into changelog the number of records that were updated - it should be equal on all envs.12. If needed and requested update production using following commandmongoimport --host=mongo_reltio_repl_set/:27017,:27017,:28017 --username=admin --password= --authenticationDatabase=admin --db=reltio_prod --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_ring(),ring(),ring(),64(),_ring()" --file=EXTRACT_ --mode=upsert13. Verify number of entries from input file with updated records number in . Update changelog15. Respond to email that update is done16. Force merge will be generated - there will be mail about this.17. Download force merge delta from using browser and change name to merge__1.csvbucket: gblmdmhubprodamrasp101478path: us/prod/inbound/HcpmForceMerge/ForceMergeDelta18. Upload file merge__1.csv tobucket: gblmdmhubprodamrasp101478path: us/prod/inbound/hub/merge_unmerge_entities/input/19. 
Trigger dag  After is finished login using Browser bucket: gblmdmhubprodamrasp101478path: us/prod/inbound/hub/merge_unmerge_entities/output/_so for date and time 12:11: 39, the file looks like this:          us/prod/inbound/hub/merge_unmerge_entities/output/20210517_121139and download result file, check for failed merge and send it in response to email about force merge" }, { "title": "Changelog of updated", "": "", "pageLink": "/display/GMDM/Changelog+of+updated", "content": " - Loading NEW files: ENGAGE 1.0nohup ./ inbound/COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt &IQVIA_RXnohup ./ inbound/COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt > ./ inbound/COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt > new file: -> .12.2020 - Loading new file: PAC_ENG -> 820 document, CAPP-> document16.12.2020 - Loading MILLIMAN_MCO: 10504 document22.12.2020 - Loading CPMRTE: 15686 document, : 1287, PAC_ENG: 1340, : , : 343, i problem, CENTRIS: 41496, : .12.2020 - Loading PAC_ENG: 1260, : .01.2021 - Loading PAC_ENG: 330, : 33808.01.2021 - Loading HCPS00: .01.2021 - Loading PAC_ENG: 496, : 51218.01.2021 - Loading PAC_ENG: 616, : 79525.01.2021 - Loading PAC_ENG: 1009, : - Loading PAC_ENG: 884, : .02.2021 - Loading PAC_ENG: 576, : 39415.02.2021 - Loading PAC_ENG: 690, : 69617.02.2021 - Loading VVA: .02.2021 - Loading PAC_ENG: 724, : 75701.03.2021 - Loading PAC_ENG: 906, : - Loading PAC_ENG: 738, : 79511.05.2021 - Loading PAC_ENG: 589, : 62617.05.2021 - Loading PAC_ENG: 489, : 61317.05.2021 - Loading - us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txt                     Updated: - customers updated - cleared cache in batchEntityProcessStatus collection for reload                     Updated: ) imported successfully in KeyIdRegistry18.05.2021 - STAGE only      - 43771 document(s) imported successfully      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt - 10076 document(s) imported successfully19.05.3021 -  Load 15 Files to PROD and clear cache. 
Load these files to and STAGE      2972 /COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt >       >       inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt >       inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt &      /COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt >       inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt > 73236 /COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt &      inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt &      60175 inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt > Prod_Sync_FileSet/ 14:59       /COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt &      inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt &      inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt &      May inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt &      inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt & - Loading PAC_ENG: Dev:1283, QA: 1283, Stage: 1509, Prod: 1283                                         CAPP: Dev: , QA: 1392, Stage: , Prod: 18731/ - Loading PAC_ENG: 379, : 4339/ - Loading PAC_ENG: 38, : 4714/ - Loading PAC_ENG: 83, : 10216/ - Loading COMPANY_ACCT: Prod: 236  - Loading PAC_ENG: Dev:182, QA: 182, Stage: 182, Prod: 646, CAPP: Dev: 215, QA: 215, Stage: 215, Prod: 21502.07.2021     Load 11 Files to PROD and clear cache. 
Load these files to and STAGE     /COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt &    /COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt &    inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt &    inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt &    inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt &    inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt > Prod_Sync_FileSet_3/    inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt &    inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt &    inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt &    inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt &    inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt & - Loading PAC_ENG: 39 , :     Load 1 VVA File to PROD and clear cache. Load this file to and STAGE     inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt &     Load 1 VVA File to PROD and clear cache. Load this file to and STAGE     inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt &GBLUS/Fletcher PROD GO-LIVE COMPANYAddressID sequence - PROD (MAX) + = " }, { "title": "Manual Cache Clear", "": "", "pageLink": "/display//Manual+Cache+Clear", "content": "Open Studio 3T and connect to appropriate IntelliShellRun following query for appropriate source - replace with right name\tCollection("batchEntityProcessStatus").updateMany({"sourceId.type":"configuration/sources/"}, {$set: {"checksum" : ""}})" }, { "title": "Data Quality", "": "", "pageLink": "/display//Data+Quality", "content": "" }, { "title": "Quality Rules Deployment Process", "": "", "pageLink": "/display//Quality+Rules+Deployment+Process", "content": "Resource changingThe process regards modifying the resources related to data quality configuration that are stored in Consul and load by mdm-manager, , precallback-service components in runtime. They are present in mdm-config-registry/config-hub location.When modifying data quality rules configuration present at mdm-config-registry/config-hub//mdm-manager/quality-service/quality-rules , the following rules should be applied:Each file should be formatted in accordance with yamllint rules (See Yamllint validation rules)The attributes createdDate/modifiedDate were deleted from the rules configuration files. They will be automatically set for each rule during the deployment process. 
(See Deployment of changes.) Adding more than one rule with the same value of the name attribute is not allowed. Validation: every PR to the mdm-config-registry repository is validated for correctness of syntax (See Yamllint validation rules). Upon PR creation a job is triggered that checks the format of the files using yamllint. The job succeeds only when all the yaml files in the repository pass the yamllint . The PRs that did not pass the validations should not be merged to . Deployment of changes: all changes in mdm-config-registry/config-hub should be deployed to Consul using  JOBS. A separate job exists for deploying the changes done on each environment. E.g., the job deploy_config_amer_nprod_amer-dev is used to deploy all changes done on the DEV environment (all changes under the path mdm-config-registry/config/hub/dev_amer). The jobs allow deploying the configuration from the master branch or  to mdm-config-registry . The deployment job flow can be described by the following diagram:  workspace - wipes the workspace of all the files left from the previous job. Checkout mdm-config-registry - this repository contains the files with the data quality configuration and the yamllint rules. Checkout mdm-hub-cluster-env - this repository contains the script for assigning the createdDate / modifiedDate attributes to quality rules and the job for running this script and uploading the files to . Validate yaml files - runs the yamllint validation for every file at mdm-config-registry/config-hub/ (See Yamllint validation rules). Get previous quality rules registry files - downloads the quality rules registry file produced after the previous successful run of the job. The file is responsible for storing information about the modification dates and checksums of the quality rules. The decision whether modification dates should be updated is made based on the checksum change, . The registry file is a csv with the following headers: ID - the ID for each quality rule in the form of :; CREATED_DATE - stores the createdDate attribute value for each rule; MODIFIED_DATE - stores the modifiedDate attribute value for each rule; CHECKSUM - stores the checksum counted for each rule. Update files - runs the job responsible for: running the Groovy script (responsible for adjusting createdDate / modifiedDate for the quality rules based on checksum changes and creating the new quality rules registry file) and updating the changed quality rules files in the Consul kv . Save quality rules registry file - saves the new registry file in the job . Algorithm of updating modification dates: the following algorithm is implemented in the Groovy script. The main goal is to update createdDate/modifiedDate when a new quality rule has been added or its definition has changed. Yamllint validation rules: TODO" }, { "title": "DCRs:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": ":", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Reject pending transfer to ", "": "", "pageLink": "/display/", "content": "'s a request which was sent to () by HUB, however it hasn't been processed - we didn't receive information whether it should be ACCEPTED or REJECTED. This causes a couple of things: in  we're having the  in status VR Status = OPEN and VR Detailed Status = SENT; in , in collection , we're having the  in status = ; in collection DCRVeevaRequest we're having the  in status = SENT; alerts are raised in Prometheus/Karma since we usually should receive a response within a couple of days. Goal: We want to simulate a REJECT response from , which will make the  return to Reltio for further processing by . 
This may be realized in a couple of ways: Procedure #1 - (minutes to process) Populate event to topic $env-internal-veeva-dcr-change-events-in which skips VeevaAdapter and simulates response from → see diagram for more details Veeva DCR flowsProcedure #2 - (hours to process) Create response ZIP file with specific payload, which needs to be placed to specific location, which is further ingested by VeevaAdapterProcedure #1Step 1 - Adjust below event template(optional) update eventTime to current timestamp in milliseconds → use (optional) update countryCode to the on from Request(requited) update dcrId to the one you want JSON event to populate\n{\n "eventType": "CHANGE_REJECTED",\n "eventTime": ,\n "countryCode": "SG",\n "dcrId": "a51f229331b14800846503600c787083",\n "vrDetails": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "veevaComment": "MDM HUB: Simulated reject response to close DCR.",\n "veevaHCPIds": [],\n "veevaHCOIds": []\n }\n}\nStep 2 - Populate event to topic $env-internal-veeva-dcr-change-events-in (for -STAGE: apac-stage-internal-veeva-dcr-change-events-in). For this purpose use AKHQ (for -STAGE: )Select topic $env-internal-veeva-dcr-change-events-in and use "Produce to Topic" button in bottom rightPaste event details, update Key by providing dcrId and press "Populate"After things should be in effect: in should change its status from SENT_TO_VEEVA to DS Action RequiredMongoDB document in collection DCRRegistry will change its status to DS_ACTION_REQUIREDStep 3 - update MongoDB DCRRegistryVeeva collection Connect to Mongo with Studio 3T, find out document using "_id" in collection DCRRegistryVeeva and update its status to REJECTED and changeDate to current cument update\n{\n $set : {\n : "REJECTED",\n "angeDate" : ""\n }\n}\nStep 4 - check Reltio DCRCheck if status has changed to "DS Action Required" and Tracing details has been updated with simulated Veeva Reject response. " }, { "title": "Close - override any status", "": "", "pageLink": "/display/GMDM/Close+VOD+DCR+-+override+any+status", "content": "This SoP is almost identical to the one in Override VOD Accept to for DCR with small updates:In Step 1, please also update target = to target = Reltio. 
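The documented way to publish the reject event is the AKHQ "Produce to Topic" button; if shell access to a Kafka client is available instead, an equivalent keyed publish could look roughly like the sketch below (the broker address is a placeholder, the topic name follows the $env pattern above, and the JSON payload from Step 1 has to be minified to a single line):
kafka-console-producer.sh \
  --bootstrap-server <broker>:9094 \
  --topic apac-stage-internal-veeva-dcr-change-events-in \
  --property parse.key=true --property key.separator='|'
# then paste one line: <dcrId>|<minified JSON event from Step 1>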
" }, { "title": "Override VOD Accept to for DCR", "": "", "pageLink": "/display/GMDM/Override+VOD+Accept+to+VOD+Reject+for+VOD+DCR", "content": "'s a request which was sent to () and mistakenly ACCEPTED, however business requires such to be Rejected and redirected to for processing via Reltio Inbox.GoalWe want to:remove incorrect entries in Tracking details - usually "Veeva Accepted" and "Waiting for REJECT response from which will make to return to Reltio for further processing by event to topic $env-internal-veeva-dcr-change-events-in which skips VeevaAdapter and simulates response from → see diagram for more details Veeva DCR flowsProcedureStep 0 - Assume that VOD_NOT_FOUNDSet retryCounter to 9999Wait for 12hStep 1 - Adjust document in MongoDB in DCRRegistry collection (Studio3T)Remove incorrect Tracking entries for your (trackingDetails section) - usually nested attribute 3 and 4 in this sectionSet retryCounter to 0Set to "SENT_TO_VEEVA"Step 2 - update MongoDB DCRRegistryVeeva collection with Studio 3T, find out document using "_id" in collection DCRRegistryVeeva and update its status to REJECTED and changeDate to current cument update\n{\n $set : {\n : "REJECTED",\n "angeDate" : ""\n }\n}\nStep 3 - Adjust below event template(optional) update eventTime to current timestamp in milliseconds → use (optional) update countryCode to the on from Request(requited) update dcrId to the one you want JSON event to populate\n{\n "eventType": "CHANGE_REJECTED",\n "eventTime": ,\n "countryCode": "SG",\n "dcrId": "a51f229331b14800846503600c787083",\n "vrDetails": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "veevaComment": "MDM HUB: Simulated reject response to close DCR.",\n "veevaHCPIds": [],\n "veevaHCOIds": []\n }\n}\nStep 4 - Populate event to topic $env-internal-veeva-dcr-change-events-in (for -STAGE: apac-stage-internal-veeva-dcr-change-events-in). For this purpose use AKHQ (for -STAGE: )Select topic $env-internal-veeva-dcr-change-events-in and use "Produce to Topic" button in bottom rightPaste event details, update Key by providing dcrId and press "Populate"After (it depends on the traceVR schedule - it my take up to 6h on PROD) two things should be in effect: in should change its status from SENT_TO_VEEVA to DS Action RequiredMongoDB document in collection DCRRegistry will change its status to DS_ACTION_REQUIREDStep 6 - check Reltio DCRCheck if status has changed to "DS Action Required" and Tracing details has been updated with simulated Veeva Reject response. " }, { "title": " escalation to (VOD)", "": "", "pageLink": "/pages/tion?pageId=", "content": "Integration failIt occasionally happens that response files from are not being delivered to bucket which is used for ingestion by HUB. provides /ZIP files , even though there's no actual payload related to DCRs - files contain only headers. This disruption may be caused by two things:  didn't generate response and didn't place it on their SFTPGMFT's synchronization job responsible for moving file between and stopped working Either way, we need to pin point of the two are causing the oubleshooting It's usually good to check when the last synchronization took issueIf there is more than one file (usually this dir should be empty) in outbound directory /globalmdmprodaspasp202202171415//prod/outbound/vod//DCR_request it means that job does not push files from to SFTP. 
The files which are properly processed by the  job are copied to the Veeva SFTP and additionally moved to /globalmdmprodaspasp202202171415//prod/archive/vod//DCR_. Veeva Open Data issue: Once you are sure it's not a  issue, check the archive directory for the latest response file: /globalmdmprodaspasp202202171415//prod/archive/vod/APAC/DCR_response /globalmdmprodaspasp202202171415//prod/archive/vod/CN/DCR_response If the latest file is older than 24h → there's an issue on the  side. Who to contact? For SFTP, please contact  or directly , and : hendran@. For Veeva Open Data: (important one) create a ticket in smartsheet:  → you may not have access to this file without a prior request to moneem.ahmed@; at the moment  has access to this file. (optional) please contact , ,  (and for escalation and PROD issues CC: ,  and )" }, { "title": " rejects from IQVIA due to missing RDM codes", "": "", "pageLink": "/display//DCR+rejects+from+IQVIA+due+to+missing+RDM+codes", "content": "Description: Sometimes our Clients are presented with the below error message when they are trying to send DCRs to . This request was not accepted by IQVIA due to a missing RDM code mapping and was redirected to the Reltio Inbox. The reason is: 'Target lookup code not found for attribute: , country: CA, source value: SP.ONCM.'. This means that there is no equivalent of this code in the IQVIA code mapping. Please contact MDM Hub , asking to add this code, and click "SendTo3Party" in  after 's confirmation. Why: This is caused when PforceRx tries to send a  with changes on an attribute with Lookup Values. On the HUB end we're trying to remap canonical codes from  to source mapping values which are specific to  and understood by them. Usually we are dealing with a situation where for each canonical code there is a proper source code mapping. Please refer to the screen below ( collection LookupValues). However, when there is no such mapping, like in the case below (no entry in sourceMappings), then we're dealing with the problem described above. For more information about canonical code mapping and the flow to get the target code sent to  or , please refer to → : create method (), section "Mapping Reltio canonical codes → source codes". How: We should contact the people responsible for RDM code mappings ( team) to find out the correct sourceMapping value for this specific canonical code for the specific country - see the LookupValues query sketch below. In the end they will contact  to add it to RDM (usually )." }, { "title": "Defaults", "": "", "pageLink": "/display//Defaults", "content": " defaults map the source codes of the  system to the codes in the  or () system. They occur for specific types of attributes: , , , , , . The values are configured in the Consul system. 
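For the "DCR rejects from IQVIA due to missing RDM codes" SOP above, a minimal mongo-shell sketch of the mapping check; the collection name LookupValues and the sourceMappings field come from that SOP, while the exact query field names and the connection details are assumptions to adapt per environment:
# open a mongo shell against the environment's HUB store (connection details are placeholders)
mongo --host <host>:27017 -u <user> -p --authenticationDatabase <authDb> <dbName>
> // country and source value taken from the error message quoted in the SOP; field names are assumptions
> db.getCollection("LookupValues").find({ "country" : "CA", "canonicalCode" : /ONCM/ }).pretty()
> // a usable mapping should contain a non-empty sourceMappings entry for the target system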
To configure the values:  Sort the source (.xlsx) file: Divide the file into separate sheets for each ve the sheets in separate csv format files - columns separated by ste the contents of the files into the appropriate files in the consul configuration repository - mdm-config-registry:  - each environment has its own folder in the configuration repository  - files must have header- Country;CanonicalCode;DefaultFor more information about canonical code mapping and the flow to get target code sent to or , please refer to → : create method (), section "Mapping Reltio canonical codes → source codes"" }, { "title": "Go-Live Readiness", "": "", "pageLink": "/display/GMDM/Go-Live+Readiness", "content": "Procedure:" }, { "title": "OneKey Crosswalk is Missing and IQVIA Returned Wrong ID in TraceVR Response", "": "", "pageLink": "/display/GMDM/OneKey+Crosswalk+is+Missing+and+IQVIA+Returned+Wrong+ID+in+TraceVR+Response", "content": "This SOP describes how to FIX the case when there is a in OK_NOT_FOUND status and IQVIA change  the individualID from wrong one to correct one (due to human error)Example Case based on EMEA PROD: there is a - 1fced0be830540a89c30f5d374754accstatus is OK_NOT_FOUNDmessage is Received ACCEPTED status from IQVIA, waiting for data load, missing crosswalks: WUKM00110951retrycounter reach 14 (7days)IQVIAshared the following trace VR response at firs and we closed the DCR:{"aceValidationRequestOutputFormatVersion":"1.8","atus":"SUCCESS","sultSize":1,"talNumberOfResults":1,"ccess":true,"sults":[{"codBase":"WUK","cisHostNum":"4606","userEid":"04606","requestType":"Q","responseEntityType":"ENT_ACTIVITY","clientRequestId":"1fced0be830540a89c30f5d374754acc","cegedimRequestEid":"fbf706e175c847cb8f39a1873fc4daaf","customerRequest":null,"trace1ClientRequestDate":"","trace2CegedimOkcProcessDate":"","trace3CegedimOkeTransferDate":"","trace4CegedimOkeIntegrationDate":"","trace5CegedimDboResponseDate":"","trace6CegedimOkcExportDate":null,"requestComment":"FY1 Dr working in the stroke care unit at Livingston","responseComment":"HCP works at involved in this topic:On Reltio side:On IQVIA side: After IQVIA check the TraceVR changed working in the stroke care unit at Livingston","requestEntityType":"ENT_ACTIVITY","requestFirstname":"Beth","requestLastname":"Mulloy","requestOrigin":"WS","requestProcess":"I","requestStatus":"VAS_FOUND","requestType":"Q","requestUsualWkpName":"Care of the Elderly Department","responseComment":"HCP works at ","trace2CegedimOkcProcessDate":"","trace3CegedimOkeTransferDate":"","trace4CegedimOkeIntegrationDate":"","trace5CegedimDboResponseDate":"","trace6CegedimOkcExportDate":null,"lastResponseDate":"","updateDate":"","workplaceEidSource":"WUKH07885517","workplaceEidValidated":"WUKH07885517","userEid":"04606"}}the WUKM00110951 was changed to WUKM00110955This is blocking the DCRThe event that is constantly processing each 12h is in the emea-prod-internal-onekey-dcr-change-events-in The event was already generated so we need to overwrite it to fix the processingSTEPS:Go to the by _id and get the latest event:Change the BodyFROM\n{\n "eventType": "DCR_CHANGED",\n "eventTime": ,\n "eventPublishingTime": ,\n "countryCode": "GB",\n "dcrId": "1fced0be830540a89c30f5d374754acc",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "ACCEPTED",\n "oneKeyComment": " response comment: works at St Johns Hospital\\nONEKEY HCP ID: ID: WUKH07885517",\n "individualEidValidated": "WUKM00110951",\n "workplaceEidValidated": "WUKH07885517",\n "vrTraceRequest": 
"{\\"isoCod2\\":\\"GB\\",\\"ientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\"}",\n "vrTraceResponse": " working in the stroke care unit at Livingston\\",\\"responseComment\\":\\"HCP works at St Johns Hospital\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WUKM00110951\\",\\"workplaceEidSource\\":\\"WUKH07885517\\",\\"workplaceEidValidated\\":\\"WUKH07885517\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WUKM0011095101\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WUK00000092143\\",\\"countryEid\\":\\"GB\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND\\",\\"updateDate\\":\\"\\"}]}"\n }\n}\nTO\n{\n "eventType": "DCR_CHANGED",\n "eventTime": ,\n "eventPublishingTime": ,\n "countryCode": "GB",\n "dcrId": "1fced0be830540a89c30f5d374754acc",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "ACCEPTED",\n "oneKeyComment": " response comment: works at St Johns Hospital\\nONEKEY HCP ID: : WUKH07885517",\n "individualEidValidated": "WUKM00110955",\n "workplaceEidValidated": "WUKH07885517",\n "vrTraceRequest": "{\\"isoCod2\\":\\"GB\\",\\"ientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\"}",\n "vrTraceResponse": " working in the stroke care unit at Livingston\\",\\"responseComment\\":\\"HCP works at }\n}\nThe result is the replace in the individualEidValidated and all the places where ol ID existsPush the new event with new timestamp and same kafka key to the topicNew Case ( responded with ACCEPTED with ID but response contains: "requestStatus": "VAS_FOUND_BUT_INVALID". is checking every 12h if already provided the data to Reltio. We must manually close this eps:In amer-prod-internal-onekey-dcr-change-events-in topic find the latest event for ID ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●.Change from:\n{\n\t"eventType": "DCR_CHANGED",\n\t"eventTime": ,\n\t"eventPublishingTime": ,\n\t"countryCode": "CA",\n\t"dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",\n\t"targetChangeRequest": {\n\t\t"vrStatus": "CLOSED",\n\t\t"vrStatusDetail": "ACCEPTED",\n\t\t"oneKeyComment": " response comment: Already Exists-Data Privacy\\nONEKEY HCP ID: ID: WCAH00052991",\n\t\t"individualEidValidated": "WCAP00028176",\n\t\t"workplaceEidValidated": "WCAH00052991",\n\t\t"vrTraceRequest": "{\\"isoCod2\\":\\"CA\\",\\"ientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\"}",\n\t\t"vrTraceResponse": "{\\"aceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"atus\\":\\"SUCCESS\\",\\"sultSize\\":1,\\"talNumberOfResults\\":1,\\"ccess\\":true,\\"sults\\":[{\\"codBase\\":\\"WCA\\",\\"cisHostNum\\":\\"7853\\",\\"userEid\\":\\"07853\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\",\\"cegedimRequestEid\\":\\"9d02f7547dbc4e659a9d230c91f96279\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"\\",\\"trace2CegedimOkcProcessDate\\":\\"\\",\\"trace3CegedimOkeTransferDate\\":\\"\\",\\"trace4CegedimOkeIntegrationDate\\":\\"\\",\\"trace5CegedimDboResponseDate\\":\\"\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":null,\\"responseComment\\":\\"Already Exists-Data 
Privacy\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WCAP00028176\\",\\"workplaceEidSource\\":\\"WCAH00052991\\",\\"workplaceEidValidated\\":\\"WCAH00052991\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WCAP0002817602\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"W6206\\",\\"countryEid\\":\\"CA\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND_BUT_INVALID\\",\\"updateDate\\":\\"\\"}]}"\n\t}\n}\nTo:\n{\n\t"eventType": "DCR_CHANGED",\n\t"eventTime": ,\n\t"eventPublishingTime": ,\n\t"countryCode": "CA",\n\t"dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",\n\t"targetChangeRequest": {\n\t\t"vrStatus": "CLOSED",\n\t\t"vrStatusDetail": "REJECTED",\n\t\t"oneKeyComment": " response comment: Already Exists-Data Privacy\\nONEKEY HCP ID: ID: WCAH00052991",\n\t\t"individualEidValidated": "WCAP00028176",\n\t\t"workplaceEidValidated": "WCAH00052991",\n\t\t"vrTraceRequest": "{\\"isoCod2\\":\\"CA\\",\\"ientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\"}",\n\t\t"vrTraceResponse": "{\\"aceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"atus\\":\\"SUCCESS\\",\\"sultSize\\":1,\\"talNumberOfResults\\":1,\\"ccess\\":true,\\"sults\\":[{\\"codBase\\":\\"WCA\\",\\"cisHostNum\\":\\"7853\\",\\"userEid\\":\\"07853\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\",\\"cegedimRequestEid\\":\\"9d02f7547dbc4e659a9d230c91f96279\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"\\",\\"trace2CegedimOkcProcessDate\\":\\"\\",\\"trace3CegedimOkeTransferDate\\":\\"\\",\\"trace4CegedimOkeIntegrationDate\\":\\"\\",\\"trace5CegedimDboResponseDate\\":\\"\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":null,\\"responseComment\\":\\"Already Exists-Data Privacy\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WCAP00028176\\",\\"workplaceEidSource\\":\\"WCAH00052991\\",\\"workplaceEidValidated\\":\\"WCAH00052991\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WCAP0002817602\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"W6206\\",\\"countryEid\\":\\"CA\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND_BUT_INVALID\\",\\"updateDate\\":\\"\\"}]}"\n\t}\n}\nand post back to the topic. will be closed in Case ( need to force close/reject a couple of DCRs which cannot closed themselves. There were sent to , but for some reasons OK does not recognize them.  IQVIA have not generated the TraceVR response and we need to simulate it.  To break TRACEVR process for this DCRs we need to manually change the Mongo Status to REJECTED. If we keep SENT we are going to ask IQVIA forever in - TODO - describe this in SOPOpen and update for selected profiles. 
Change status to {  : "REJECTED" } Change details to "HUB manual update due to "Change from:To: Find the latest event for the chosen id and generate the event in the topic "-internal-onekey-dcr-change-events-in" which will change their status\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED", \n\n {\n "eventType": "DCR_CHANGED",\n "eventTime": ,\n "eventPublishingTime": ,\n "countryCode": "",\n "dcrId": "",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "oneKeyComment": "HUB manual update due to MR-",\n "individualEidValidated": null,\n "workplaceEidValidated": null,\n "vrTraceRequest": "{\\"isoCod2\\":\\"\\",\\"ientRequestId\\":\\"\\"}",\n "vrTraceResponse": "{\\"aceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"atus\\":\\"SUCCESS\\",\\"sultSize\\":1,\\"talNumberOfResults\\":1,\\"ccess\\":true,\\"sults\\":[{\\"codBase\\":\\"W\\",\\"cisHostNum\\":\\"4605\\",\\"userEid\\":\\"HUB\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"\\",\\"cegedimRequestEid\\":\\"\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"\\",\\"trace2CegedimOkcProcessDate\\":\\"\\",\\"trace3CegedimOkeTransferDate\\":\\"\\",\\"trace4CegedimOkeIntegrationDate\\":\\"\\",\\"trace5CegedimDboResponseDate\\":\\"\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"\\",\\"responseComment\\":\\"HUB manual update due to MR-\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":null,\\"workplaceEidSource\\":null,\\"workplaceEidValidated\\":null,\\"activityEidSource\\":null,\\"activityEidValidated\\":null,\\"addressEidSource\\":null,\\"addressEidValidated\\":null,\\"countryEid\\":\\"\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_NOT_FOUND\\",\\"updateDate\\":\\"\\"}]}"\n }\n}\n" }, { "title": "CHANGELOG", "": "", "pageLink": "/display/GMDM/CHANGELOG", "content": "List of DCRs:VR- = 163f209d24d94ea99bd7b47d9108366cVR- = dbd44964afba4bab84d50669b1ccbac3VR- = 07c363c5d3364090a2c0f6fdbbbca1ddRe COMPANY RE IM44066249 VR missing g" }, { "title": "Update DCRs with missing comments", "": "", "pageLink": "/display//Update+DCRs+with+missing+comments", "content": "DescriptionDue to temporary problem with our calls to Reltio workflow we had multiple DCRs with missing workflow comments. The symptoms of this error were: no changeRequestComment field in DCRRegistry mongo collection and lack of content in Comment field in while viewing by entityUrl.We have created a solution allowing to find deficient DCRs and update their comments in database and Reltio.GoalWe want to find all deficient DCRs in a given environment and update their comments in is can be accomplished by following the procedure described 1 - Configure the solutionGo to tools/dcr-update-workflow-comments module in mdm-hub-inbound-services epare env configuration. Provide mongo.dbName and manager.url in application.yaml eate a file named application-secrets.yaml. Copy the content from application-secretsExample.yaml file and replace mock values with real ones appropriate to a given epare solution configuration. Provide desired mode (find/repair) and endTime time limits for deficient DCRs search in application.yaml.Here is an example of update-comments lication.yaml\nupdate-comments:\n mode: find\n starting: ending: 2 - Find deficient DCRsRun the application using ApplicationServiceRunner.java in find mode with profile: a result, dcrs.csv file will appear in resources directory. 
It contains a list of DCRs to be updated in the next step. Those are DCRs that ended within the configured time limits, with no changeRequestComment field in DCRRegistry and with a non-empty processInstanceId (that value is needed to retrieve the workflow comments from Reltio). This list can be viewed and altered if there is a need to omit a specific . Step 3 - Repair the DCRs: Change the mode in the configuration to repair. Run the application exactly the same way as in Step 2. As a result, a report.txt file will be created in the resources directory. It will contain a log for every  with its update status. If the update fails, it will contain the reason. In case of failed updates, the application can be run again with the needed adjustments to dcrs.csv." }, { "title": "GBLUS DCRs:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "ICUE VRs manual load from file", "": "", "pageLink": "/display/GMDM/ICUE+VRs+manual+load+from+file", "content": "This SOP describes the manual load of selected ICUE DCRs to the GBLUS . Scope and issue description: On GBLUS PROD, VRs (DCRs) are sent to () for validation using events. The process responsible for this is described on this page (OK flows (GBLUS)).  receives the data based on singleton profiles. The current flow enables only  and ENGAGE.  was disabled from the flow and requires manual work to load its data to IQVIA due to a high number of standalone profiles created by this system on . More details related to the issue are here:_ IQVIA DRC_VR Request for gDCR_Counts_GBLUS_PROD.xlsx Steps to add  in the IQVIA validation process: Check that there are no loads on environment GBLUS PROD: check the reltio-* topics and verify that there is no huge number of events per minute and that there is no LAG on the topics. Pick the input file from the client and, after approval from , proceed with the changes. Example email and input file: First batch_ Leftover ICUE VRs ( March).msg Generate the events for the VR topic: - id: onekey_vr_dcrs_manual destination: "${env}-internal-onekeyvr-in" (reconciliation target ONEKEY_DCRS_MANUAL); use the resendLastEvent operation in the publisher (generate CHANGES events). After all events are pushed to the topic, verify in akhq that the generated events are available on the desired topic. Wait for the events aggregation window closure (24h). Check if the 's are visible in the mongo  collection. createTime should be within :\n{ "entity.uri" : "entities/" }\n" }, { "title": "HL :", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "How do we answer to requests about DCRs?", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "EFK:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "FLEX Environments - Elasticsearch Shard Limit", "": "", "pageLink": "/display/GMDM/FLEX+Environments+-+Elasticsearch+Shard+Limit", "content": ", the below alert gets triggered. This means that  has allocated >80% of the allowed number of shards (default 1000 max). Further, we can check directly on the EFK cluster what the shard count is: Log into  and choose "Dev Tools" from the panel on the left. Use one of the calls below: To fetch the current cluster status and the number of active/unassigned shards (# of active shards + # of unassigned shards = # of allocated shards): GET _cluster/health. To check the current assigned shards limit: GET . Solution: Removing Old Shards/Indices. This is the preferred solution. Old indices can be removed through . Log into  and choose "Management" from the panel on the left. Choose "Index Management". Find and mark indices that can be removed. 
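If the Kibana Index Management UI is not convenient, the same clean-up can be sketched against the Elasticsearch API directly; the host, credentials and index name below are placeholders, and every index should be double-checked before deleting it:
# list indices to spot the old ones (equivalent of Dev Tools: GET _cat/indices?v)
curl -s -u <user> "https://<elasticsearch-host>:9200/_cat/indices?v"
# delete one confirmed-unneeded index at a time (index name is an example placeholder)
curl -s -u <user> -X DELETE "https://<elasticsearch-host>:9200/<old-index-name-2023.01.01>"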
In my case, I searched for indices containing "2023" in their names:Click "Manage Indices" and "Delete Indices". Confirm:Solution: Increasing the LimitThis is not the preferred solution, as it is not advised to go beyond the default limit of 1000 shards per node - it can lead to worse performance/stability of the DO: extend this section when we need to increase the limit somewhere, use this article: " }, { "title": ": How to Restore Data from Snapshots", "": "", "pageLink": "/display//Kibana%3A+How+to+Restore+Data+from+Snapshots", "content": "NOTE: The time of restoring is based on the amount of data you wanted to restore. Before beginning of restoration you have to be sure that the elastic cluster has a sufficient amount of storage to save restoring restore data from the snapshot you have to use "Snapshot and Restore" site from . It is one of sites avaiable in "Stack Management" section:Select the snapshot which contains data you are interested in and click the Restore button:In the presented wizard please set up the following options:Disable the option "All data streams and indices" and provide index patterns that match index or data stream you want to restore:It is important to enable option "Rename data streams and indices" and set "Capture pattern" as "(.+)" and "Replacement pattern" as "$1-restored-", where the idx <1, , , ... , n> - it is required once we restore more than one snapshot from the same datastream. In another case, the restore operation will override current elasticsearch objects and we lost the data:The rest of the options on this page have to be disabled:Click the "Next" button to move to "Index settings" page. Leave all options disabled and go to the next page.On the page "Review restore details" you can see the summary of the restore process settings. Validate them and click the "Restore snapshot" button to start can track the restoration progress in "Restore Status" section:When data is no longer needed, it should be deleted:" }, { "title": "External proxy", "": "", "pageLink": "/display//External+proxy", "content": "" }, { "title": "No downtime restart/upgrade", "": "", "pageLink": "/pages/tion?pageId=", "content": "This SOP describes how to perform "no downtime" restart.  console - ansible playbook  one node instance from target groups ( console)Access console in using COMPANY SSOChoose Account: prod-dlp-wbs-rapid (). Role: WBS-EUW1-GBICC-ALLENV-RO-SSOChange region to to EC2 → Load Balancing → Target GroupsSearch for target group\n-prod-gbl-mdm\nThere should be 4 target groups visible. 1 for and 3 for KafkaRemove first instance (EUW1Z2DL113) from all 4 target rform below steps for all target groupsTo do so, open each target group select desired instance and choose 'deregister'. Now this instance should have 'Health status': 'Draining'. Next do the same operation for other target not remove two instances from consumer group at the same time. It'll cause so make sure to remove the same instance from all target groups.Wait for Instance to be removed from target groupWait for target groups to be adjusted. Deregistered instance should eventually be removed from target groupAdditionally you can check logs directlyFirst instance: \nssh \ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to exteral api\nSecond isntance: \nssh \ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to exteral api\nSome internal requests may be still visible, eg. 
metricsPerform restart of on removed instance ( ansible playbook inside mdm-hub-cluster-env repository inside 'ansible' directoryFor the first instance:\nansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_01\nFor the second instance:\nansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_02\nMake sure that kong_01 is the same instance you've removed from target group(check ansible inventory)Re-add the removed instancePerform this steps for all target groupsSelect target groupChoose 'Register targets'Filter instances to find previously removed instance. Select it and choose 'Include as pending below'. Make sure that correct port is chosenVerify below request and select 'Register pending targets'Instance should be in 'Initial' state in target groupWait for instance to be properly added to target groupWait for all instances to have 'Healthy' status instead of 'Initial'. Make sure everything work as expected (Check logs)Perform steps 1-5 for second instanceSecond instance: Second host(ansible inventory): " }, { "title": "Full Environment Refresh - Reltio Clone", "": "", "pageLink": "/display//Full+Environment+Refresh+-+Reltio+Clone", "content": "" }, { "title": "Full Environment Refresh", "": "", "pageLink": "/display//Full+Environment+Refresh", "content": "IntroductionBelow steps are the record of steps done in due to between GBLUS PROD → STAGE and refresh consists of:disabling componentsfull cleanup of existing STAGE data: and MongoDBidentifying and copying cache collections from PROD to STAGE MongoDBre-enabling componentsrunning the Hub Reconciliation DAGDisabling Services, out the EFK topics in fluentd configuration:\nmdm-hub-cluster-env\\apac\\nprod\\namespaces\\apac-backend\\values.yaml\nDeploy -backend through , to apply the fluentd changes:(fluentd pods in the -backend namespace should recreate)Block the -stage mdmhub deployment job in : the monitoring/support , that the environment is disabled (in case alerts are triggered or users inquire via command line tools to uninstall the mdmhub components and topics:use kubectx/kubectl to switch context to -nprod cluster:use helm to uninstall below two releases from the -nprod cluster (you can confirm release names by using the "$ helm list helm uninstall -stage\n$ helm uninstall -resources-apac-stage -n apac-backend\nconfirm there are no pods in the -stage namespace:list remaining topics (kubernetes kafkatopic resources) with "-stage" prefix:manually remove all the remaining "-stage" prefixed topics. Note that it is expected that some topics remain - some of them have been created by , for ngoDB into through Studio ear all the collections in the -stage database.Exceptions:"batchInstance" collection"quartz-" prefixed collections"shedLock" collectionWait until MongoDB cleans all these collections (could take a few hours):Log into the APAC PROD MongoDB through Studio 3T. 
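If Studio 3T is not at hand, the same copy can be done with mongodump/mongorestore from any host that can reach both clusters. This is only a sketch - hostnames, credentials and database names are placeholders - covering the three cache collections listed in the next step:
for c in keyIdRegistry relationCache sequenceCounters; do
  # dump the collection from PROD
  mongodump --host PROD_MONGO_HOST --port 27017 -u "$MONGO_USER" -p "$MONGO_PASS" --authenticationDatabase admin -d PROD_DB -c "$c" -o ./cache_dump
  # restore it into the (already emptied) STAGE database
  mongorestore --host STAGE_MONGO_HOST --port 27017 -u "$MONGO_USER" -p "$MONGO_PASS" --authenticationDatabase admin -d STAGE_DB -c "$c" ./cache_dump/PROD_DB/"$c".bson
done
Because the STAGE collections were cleared in the previous step, a plain restore gives the same end result as the append/overwrite options used in the Studio 3T flow below.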
You want to have both connections in the same py below collections from (Ctrl+C):keyIdRegistryrelationCachesequenceCountersRight click database "-stage" and choose "Paste Collections"Dialog will appear - use below options for each collection:Collections Copy Mode: Append to existing target collectionDocuments Copy Mode: Overwrite documents with same _idCopy indices from the source collection: uncheckWait until all the collections are owflake CleanupCleanup the base tables:\nTRUNCATE TABLE CUSTOMER.ENTITIES;\nTRUNCATE TABLE LATIONS;\nTRUNCATE TABLE CUSTOMER.LOV_DATA;\nTRUNCATE TABLE TCHES;\nTRUNCATE TABLE RGES;\nTRUNCATE TABLE CUSTOMER.HIST_INACTIVE_ENTITIES;\nRun the full materialization jobs:\nCALL TERIALIZE_FULL_ALL('M', 'CUSTOMER');\nCALL CUSTOMER.HI_MATERIALIZE_FULL_ALL('CUSTOMER');\nCheck for any tables that haven't been cleaned properly:\nSELECT *\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;\nRun the materialization for those tables specifically or you can run the queries prepared from the bellow query:\nSELECT 'TRUNCATE TABLE ' || TABLE_SCHEMA || '.' || TABLE_NAME || ';'\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;\nRe-Enabling HubGet a confirmation that the data cloning process has -enable the mdmhub -stage deployment job and perform a deployment of an adequate version.Uncomment previously commented (look: Disabling The Services, , 1.) EFK transaction topic list, deploy -backend. Fluentd pods in the -backend namespace should recreate.Wait for both deployments to finish (should be performed one after another).Test the MDM Hub API - try sending a couple of GET requests to fetch some entities that exist in Reltio. Confirm that the result is correct and the requests are visible in (dashboard Calls):( : we no longer need to do this - now deploys with minimum 1 pod in every environment) Run below command in your local client environment.\ --bootstrap-server :9094 --group apac-stage-matches-enricher --topic -stage-internal-reltio-matches-events nfig perties\nThis needs to be done to create the consumergroup, so that can scale the deployment in the ning The Hub ReconciliationAfter confirming that is up and working correctly, navigate to : the hub_reconciliation_v2_apac_stage DAG:To minimize the chances of overfilling the storage, set retention of reconciliation metrics topics to : to APAC NPROD AKHQ: below topics and navigate to their "Configs" tabs:-stage-internal-reconciliation-metrics-calculator-in each topic, find the config (do not mistake it with , which is responsible for compaction) and set it to . Apply nitor the DAG, event processing and /Elasticsearch ter the finishes, disable reconciliation jobs (if reconciliations start uncontrollably before the data is fully restored, it will unnecessarily increase the workload):Manually disable the hub_reconciliation_v2_apac_stage DAG: disable the reconciliation_snowflake_apac_stage DAG: all reconciliation events are processed, the environment is ready to use. Compare entity/relation counts between Reltio-MongoDB-Snowflake to confirm that everything went -enable reconciliation jobs from 5." 
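A quick way to do the final Reltio-MongoDB-Snowflake count comparison from the command line - a rough sketch only; the stage database name and the relations table name are assumptions here:
# MongoDB side (run via the mongo pod, as elsewhere in this runbook)
kubectl exec --namespace apac-backend mongo-0 -- mongo STAGE_DB -u "$MONGO_USER" -p "$MONGO_PASS" --authenticationDatabase admin --quiet --eval 'print(db.entityHistory.countDocuments({status: "ACTIVE"})); print(db.entityRelations.countDocuments({status: "ACTIVE"}))'
# Snowflake side
snowsql -q "SELECT COUNT(*) FROM CUSTOMER.ENTITIES"
snowsql -q "SELECT COUNT(*) FROM CUSTOMER.RELATIONS"
Compare both with the totals reported by the Reltio tenant before declaring the refresh complete.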
}, { "title": "Full Environment Refresh - Legacy ()", "": "", "pageLink": "/pages/tion?pageId=", "content": "Steps to take when a Hub environment needs to be cleaned up or eparationAdd line gorithm= to perties in your kafka_client folder.Having done that go to the /bin folder and launch the ./consumer_groups_ --describe --group | sortFor every consumer group in this environment. This will list currently connected consumers.If there are external consumers connected they will prevent deletion of topics they're connected to. Contact people responsible for those consumers to disconnect them.2. Stop GW/Hub components: subscriber, publisher, manager, batch_channel$ docker stop 3. Double-check that consumer groups (internal and external) have been disconnected4. Delete all topics:a) Preparation:$ docker exec -it kafka_kafka_1 bash$ export /kafka_server_nf$ --zookeeper zookeeper:2181 --list | grep b) Deleting the topics:$ --zookeeper zookeeper:2181 --delete --topic || true && \\ --zookeeper zookeeper:2181 --delete --topic \\ --zookeeper zookeeper:2181 --delete --topic   || true &&          (...) continue for all . Check whether topics are deleted on disk and using $ ./ --list 6. Recreate the topics by launching the Ansible playbook with parameter create_or_update: True set for desired topics in topics.yml7. Cleanup MongoDB:Access the collections corresponding to the desired environment and choose option "Clear collections" on the following collections: "entityHistory","gateway_errors", "hub_errors", hub_reconcilliation.8. After confirming everything is ready (in case of environment refresh there has to be a notification from that it's ready) restart and . Check component logs to confirm they started up and connected correctly." }, { "title": "Hub Application:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Batch Channel: Importing 's Extract", "": "", "pageLink": "/display//Batch+Channel%3A+Importing+MAPP%27s+Extract", "content": "To import 's extract you have to:Have original extract (eg. original.csv) which was uploaded to Teams channel,Open it in Excel and save as "CSV (Comma delimited) (*.csv)",Run dos2unix tool on the steps from 2 and 3 on extract file (eg. ) received form 's team,Compare original file to file with changes and select only lines which was changed in the second file: ( head original.csv changes.csv | grep '^>' | sed 's/^> //' ) > result.csvDivide result file into the smaller ones by running script: ./  result.csv. The script will generate set of files where theirs names will end with _{idx}.{extension} eg.: , , result_02.csv etc.Upload the result set of files to location: ://pfe-baiaes-eu--project/mdm/inbound/mapp/. This action will trigger batch-channel component, which will start loading changes to " }, { "title": "Callback Service: How to Find Events Stuck in Partial State", "": "", "pageLink": "/display/GMDM/Callback+Service%3A+How+to+Find+Events+Stuck+in+Partial+State", "content": "What is partial state?When an event gets processed by , if any change is done at the precallback stage, event will not be sent further, to Event Publisher. It is expected that in another event will come, signaling the change done by precallback logic - this one gets passed to Publisher and downstream clients/Snowflake as far as precallback detects no need for a metimes the second event is not coming - this is what we call a partial state. It means, that update event will actually not reach and downstream clients. 
functionality of was implemented to monitor such to identify that an event is stuck in partial state?PartialCounter is counting events which have not been passed down to Event Publisher (identified by Reltio URI) and exporting this count as a Prometheus (Actuator) metric. Prometheus alert "callback_service_partial_stuck_24h" is notifying us that an event has been stuck for to find events stuck in partial state?Use below command to fetch the list of currently stuck events as array (example for emea-dev). You will have to authorize using mdm_test_user or mdm_admin:\n# curl details can be found in Swagger Documentation: to do?Events identified as stuck in partial state should be reconciled." }, { "title": "Integration Test - how to run tests locally from your computer to target environment", "": "", "pageLink": "/display//Integration+Test+-+how+to+run+tests+locally+from+your+computer+to+target+environment", "content": "Steps:First, choose the environment and go to the integration tests directory: on DEV:go to the latest RUN and click Workspace on the leftClick on /home/jenkins workspace linkGo to /code/mdm-integretion-tests/src/test/resources/ Download 3 pertieskafka_nfkafka_truststore.jksEdit pertieschange local to real URLS and local PATH. Leave other variables as is. in that case, use the KeePass that contains all URLs: code that is adjusted to DEVAPI URLs + local PATH to certsThis is just the example from that contains the C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\ path - replace this with your own code localization \nfig=nfig.SpringConfiguration\n\nfig=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_nf\n\nreltio.oauth.url= go to your local code checkout - mdm-hub-inbound-services\\mdm-integretion-testsCopy 3 files to the mdm-integretion-tests/src/test/resourcesSelect the test and click RUNEND - the result: You are running integration tests from your local computer on target DEV environment. Now you can check logs locally and repeat. " }, { "title": "Manager: Reload Entity - Fix COMPANYAddressID Using Reload Action", "": "", "pageLink": "/display//Manager%3A+Reload+Entity+-+Fix+COMPANYAddressID+Using+Reload+Action", "content": "Before starting check what rules have -reload action on the list. 
Now it is SourceMatchCategory and COMPANYAddressIdcheck here - - example dq ruleupdate with -reload operation to reload more rulesGenerate events using the script : scriptorscript - fix without ONEKEYthe script gets all ACTIVE entities with Addressesthat have missing COMPANYAddressIdthat is lower that correct value for each env: emea     7000000000Script generate events: example:entities/lwBrc9K|{"targetEntity":{"entityURI":"entities/lwBrc9K","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/1350l3D6|{"targetEntity":{"entityURI":"entities/1350l3D6","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/1350kZNI|{"targetEntity":{"entityURI":"entities/1350kZNI","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/cPSKBB9|{"targetEntity":{"entityURI":"entities/cPSKBB9","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}Make a fix for that is lower than the correct value for each envGo to the keyIdRegistry collectionfind all entries that have generatedId lower than emea     7000000000increase the generatedId  adding the correct value from correct environments using the script - scriptGet the file and push it to the -internal-async-all-reload-entity topic./start_sasl_ -internal-async-all-reload-entityor using the input file  ./start_sasl_ -internal-async-all-reload-entity < reload_dev_emea_pack_entities.txt (file that contains each json generated by the script, each row in new line)How to Run a script on docker:example emea DEV:go to - svc-mdmnpr@euw1z2dl111docker exec -it mongo_mongo_1 bashcd  /data/configdbcreate script - touch reload_entities_fix_COMPANYaddressid_hub.jsedit header:db = tSiblingDB("")th("mdm_hub", "")RUN: nohup mongo --host mongo_dev_emea_reltio_rs/:27017 -u mdm_hub -p --authenticationDatabase reltio_dev mongo --host mongo_dev_emea_reltio_rs/:27017 -u mdm_hub -p --authenticationDatabase reltio_dev reload_entities_fix_sourcematch_hub_DEV.js > smc_DEV_FIX.out 2>&1 &nohup mongo --host mongo_dev_emea_reltio_rs/:27017 -u mdm_hub -p --authenticationDatabase reltio_qa reload_entities_fix_sourcematch_hub_QA.js > smc_QA_FIX.out 2>&1 &nohup mongo --host mongo_dev_emea_reltio_rs/:27017 -u mdm_hub -p --authenticationDatabase reltio_stage reload_entities_fix_sourcematch_hub_STAGE.js > smc_STAGE_FIX.out 2>&1 &" }, { "title": "Manager: Resubmitting Failed Records", "": "", "pageLink": "/display/GMDM/Manager%3A+Resubmitting+Failed+Records", "content": "There is new in manager for getting/resubmitting/removing failed records from batches.1. Get failed records method - it returns list of errors basing on provided /errorsRequestList of objectsfield - name of the field that is stored in errorqueueoperation - operation that is used to create query, possible options are: Equals, Is, , Lowervalue - the value which we compareii. Example:[        {            "field" : "HubAsyncBatchServiceBatchName",            "operation" : "Equals",            "value" : "testBatchBundle"        }    ]b. Responsei. 
List of Error objectsid - identifier of the error batchName - batch nameobjectType - object typebatchInstanceId - batch instance idkey - keyerrorClass - the name of the error class that happen during record submissionerrorMessage - the message of the error that happen during record submissionresubmitted - true/false - it tells if errror was resubmitted or notdeleted - true/false - it tells if error was deleted or not during remove api callii. Example:[    {        "id": "5fa93377e720a55f0bb68c99",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST::b09b6085-28dc-451d-85b6-fe3ce2079446\\"\\r\\n}",        "errorClass": "ientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93378e720a55f0bb68ca6",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:25bfc672-9ba1-44a5-b3c1-d657de701d76\\"\\r\\n}",        "errorClass": "ientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93377e720a55f0bb68c9a",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:-07a6-4902-b9e8-1bf2acbc8a6e\\"\\r\\n}",        "errorClass": "ientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93377e720a55f0bb68c9b",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST::e8d05d96-7aa3-4059-895e-ce20550d7ead\\"\\r\\n}",        "errorClass": "ientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa96ba300061d51e822854a",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "iN2LB3TiT3+Sd5dYemDGHg",        "key": "{  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:973411ec-33d4-477e-a6ae-aca5a0875abb\\"\\r\\n}",        "errorClass": "ientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    }]2. Resubmit failed records - it takes list of objects and returns list of errors that were resubmitted - if it was correctly resubmitted resubmitted flag is set to truePOST /errors/_resubmita.  Requesti. List of objectsb. Responsei. List of Error objects3. Remove failed records - it takes list of objects that contains criteria for removing error objects and returns list of errors that were deleted - if it was correctly deleted deleted flag is set to /errors/_removea.  Requesti. List of objectsb. Responsei. 
List of Error objects" }, { "title": "Issues diagnosis", "": "", "pageLink": "/display//Issues+diagnosis", "content": "" }, { "title": " issues", "": "", "pageLink": "/display/GMDM/API+issues", "content": "Symptomsat least one of the following alert is active:kong_http_500_status_prod,kong_http_502_status_prod,kong_http_503_status_prod,,,kong3_http_503_status_prod,Clients report problems related to communication with our HTTP confirm if problem with is really occurring, you have to invoke some operation that is shared by HTTP interface. To do this you can use or other tool that can run requests. Below you can find a few examples that describe how to check in components that expose this:mdm-manager:GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/') - The request should execute properly (HTTP status code 200) and returns some HCP objects.api-router:GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/') - The request should execute properly (HTTP status code 200) and returns some :GET {{ batch_service_url }}/batchController//instances/NA - The request should return 403 HTTP Code and body:{    "code": "403",    "message": "Forbidden: thorizationException: Batch '' is not allowed."}dcr-service2: findingBelow diagram presents the request processing flow with engaged components:" }, { "title": ":", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Client Configuration", "": "", "pageLink": "/display/GMDM/Client+Configuration", "content": "      1. InstallationTo install kafka binary version 2.8.1 should be downloaded and installed from      2. The email from the TeamIn the email received from the support team you can find connection parameters like server address, topic name, group name, and the following files: – kafka consumer properties, – JAAS credentials requiered to authenticate with server,kafka_truststore.jks – java truststore required to build certification path of connections.      3. Example command to test client and configurationTo connect with using the command line client save delivered files on your disc and run the following command:export KAFKA_OPTS=nfig={ ●●●●●●●●●●●● } --bootstrap-server { kafka server } --group { group } --topic { topic_name } nfig { consumer config file eg. perties}For example for amer dev:●●●●●●●●●●● in provided file: kafka_client_nfKafka server: :9094Group: dev-muleTopic: dev-out-full-pforcerx-grv-allConsumer config is in provided file: pertiesexport KAFKA_OPTS=nfig=kafka_client_ --bootstrap-server :9094 --group dev-mule --topic dev-out-full-pforcerx--all nfig perties" }, { "title": "Client Configuration in k8s", "": "", "pageLink": "/display//Client+Configuration+in+k8s", "content": "Each of k8s clusters have installed kafka-client pod. To find this pod you have to list all pods deployed in *-backend namespace and select pod which name starts with kafka-client:\nkubectl get pods --namespace emea-backend  | grep kafka-client\nTo run commands on this pod you have to remember its name and use in "kubectl exec" command:Using kubectl exec with kafka client\nkubectl exec --namespace emea-backend 55cjm -- \nAs a you can use all of standard client scripts eg. or one of wrapper scripts which simplify configuration of standard scripts - broker and authentication configuration. They are the following scripts:consumer_ - it's wrapper of kafka-consumer-groups,consumer_groups_ - it's also wrapper of kafka-consumer-groups and can be used only to delete consumer group. 
Has only one input argument - consumer group name,reset_ - it's also wrapper of kafka-consumer-groups and can be used only to reset offsets of consumer group,start_ - it's wrapper of kafka-console-consumer,start_ - it's wrapper of kafka-console-producer, - it's wrapper of kafka-topics.-client pod has other kafka tool named kcat. To use this tool you have to run commands on container kafka-kcat unsing wrapper script :Running on emea-nprod cluster\nkubectl exec --namespace emea-backend 55cjm -c kafka-kcat -- \nNOTE: Remember that all wrapper scripts work with admin permissions.ExamplesDescribe the current offsets of a groupDescribe group dev_grv_pforcerx on emea-nprod cluster\nkubectl exec --namespace emea-backend 55cjm -- consumer_ --describe --group dev_grv_pforcerx\nReset offset of group to earlisetReset offset to earliest for group and topic gbl-dev-internal-gw-efk-transactions on emea-nprod cluster\nkubectl exec --namespace emea-backend 55cjm -- reset_ --group group1 --to-earliest gbl-dev-internal-gw-efk-transactions\nConsumer events from the beginning of topic. It will produce the output where each of lines will have the following format: |Read topic gbl-dev-internal-gw-efk-transactions from beginning on emea-nprod cluster\nkubectl exec --namespace emea-backend 55cjm -- start_ -transactions --from-beginning\nSend messages defined in text file to kafka topics. Each of message in file have to have following format: |Send all messages from file file_with_messages.csv to topic gbl-dev-internal-gw-efk-transactions\nkubectl exec -i --namespace emea-backend -- start_ gbl-dev-internal-gw-efk-transactions < consumer group on topicDelete consumer group test on topic gbl-dev-internal-gw-efk-transactions emea-nprod cluster\nkubectl exec --namespace emea-backend 55cjm -- consumer_ --delete-offsets --group test gbl-dev-internal-gw-efk-transactions\nList topics and their partitions using kcatList topcis into on emea-nprod cluster\nkubectl exec --namespace emea-backend 55cjm -c kafka-kcat -- -L\n" }, { "title": "How to Add a New Consumer Group", "": "", "pageLink": "/display/GMDM/How+to+Add+a+New+Consumer+Group", "content": "These instructions demonstrate how to add an additional consumer group to an existing topic.Open file "topics.yml" located under mdm-reltio-handler-env\\inventory\\\\group_vars\\kafka and find the topic to be updated. In this example new consumer group "flex_dev_prj2" was added to topic "dev-out-full-flex-all".   2. Make sure the parameter "create_or_update" is set to True for the desired topic:   3.  Additionally, double-check that the parameter "install_only_topics" in the "all.yml" file is set to True:    4. Save the files after making the changes. Run ansible to update the configuration using the following command:  -playbook install_hub_broker.yml -i inventory//inventory --limit broker1 --vault-password-file=~/vault-password-file   5. Double-check ansible output to make sure changes have been implemented correctly.   6. Change the "create_or_update" parameter in "topics.yml" back to False.   7. Save the file and upload the new configuration to git. " }, { "title": "How to Generate JKS Keystore and Truststore", "": "", "pageLink": "/display/", "content": "This instruction is based on the current GBL PROD Kafka keystore.jks and trustrore.jks generation. 
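Once the steps below are finished, it is worth sanity-checking the resulting stores before distributing them. A short sketch only - the store file names, store passwords and the broker address are placeholders:
# List both stores; expect the signed key pair plus the intermediate and root CA entries
keytool -list -v -keystore keystore.jks -storepass "$KEYSTORE_PASS" | grep -E "Alias name|Entry type|Owner|Issuer"
keytool -list -keystore truststore.jks -storepass "$TRUSTSTORE_PASS"
# Check the certificate actually presented by a broker
openssl s_client -connect BROKER_HOST:9094 -servername BROKER_HOST </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates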
Create a certificate pair using keytool genkeypair command keytool -genkeypair -alias -keyalg -keysize 2048 -keystore ystore.jks -dname "CN=, O=COMPANY, L=mdm_hub, C="  set the security password, set the same ●●●●●●●●●●●● the key passphraseNow create a certificate signing request ( ) which has to be passed on to our external / third party ).keytool -certreq -alias -file .csr -keystore ystore.jks Send the csr file through the Request Manager:Log in to the BT On DemandGo to Request "Continue"Search for " Digital Certificates"Select the " Digital Certificates" Application and click "Continue"Click "Checkout"Select "COMPANY SSL Certificate - Internal Only" and fill:Copy filefill e.g from the GBL PROD Kafka: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●fill email addressselect "No" for additional SSL Cert request, ContinueSend the reqeust.When you receive the signed certificate verify the certificateCheck the Subject: CN and O should be filled just like in the  eck the SAN: there should be the list of hosts from 3.g.ii.If the certificate is correct CONTINUE:Now we need to import these certificates into ystore.jks keystore. Import the intermediate certificate first --> then the root certificate --> and then the signed ytool -importcert -alias inter -file PBACA-.cer -keystore ystore.jkskeytool -importcert -alias root -file RootCA-.cer -keystore ystore.jkskeytool -importcert -alias -file .cer -keystore ystore.jksAfter importing all three certificates you should see : "Certificate reply was installed in keystore" list the keystore and check if all the certificates are imported -keystore ystore.jksYour keystore contains 3 entriesFor debugging start with "-v" parameterLets create a truststore now. Set the security ●●●●●●●●●● different than the keystorekeytool -import -file PBACA-.cer -alias inter -keystore uststore.jkskeytool -import -file RootCA-.cer -alias root -keystore uststore.jksCOMPANY Certificates:PBACA-.cer RootCA-.cer" }, { "title": "Reset Consumergroup Offset", "": "", "pageLink": "/display//Reset+Consumergroup+Offset", "content": "To reset offset on topic you need to have configured the command line client. The tool that can do this action is . You have to specify a few parameters which determine where you want to reset the offset:--topic - the topic name,--group - the consumer group name,and specify the offset value by proving one of following parameters:1. --shift-byReset offsets shifting current offset by provided number which can be negative or positive: --bootstrap-server { server } --group { group } -–command-config {  perties } --reset-offsets --shift-by {  number from formula } --topic {  topic } --execute2. --to-datetimeSwitch which can be used to rest offset from datetime. Date should be in format ‘YYYY-MM-DDTHH:mm:SS.sss’ --bootstrap-server { server }--group { group } -–command-config {  perties } --reset-offsets --to-datetime --topic {  topic } --execute3. --to-earliestSwitch which can be used to reset the offsets to the earliest (oldest) offset which is available in the --bootstrap-server { server }--group { group } -–command-config {  perties } --reset-offsets -–to-earliest --topic {  topic } --execute4. --to-latestSwitch which can be used to reset the offsets to the latest (the most recent) offset which is available in the --bootstrap-server { server }--group { group } -–command-config {  perties } --reset-offsets -–to-latest --topic {  topic } --executeExampleLet's assume that you want to have 10000 messages to read by your consumer and the topic has 10 partitions. 
The first step is moving the current offset to the latest to make sure that there is no messages to read on the topic: --bootstrap-server { server }--group { group } -–command-config {  perties } --reset-offsets --to-latest --topic {  topic } --executeThen calculate the offset you need to shift to achieve requested lag using following formula:-1 * desired_lag / number_of_partitionsIn our example the result will be: -1 * 10000 / 10 = -1000. Use this value in the below  command: --bootstrap-server { server } --group { group } -–command-config {  perties } --reset-offsets --shift-by -1000 --topic {  topic } --execute" }, { "title": " gateway", "": "", "pageLink": "/display/GMDM/Kong+gateway", "content": "" }, { "title": " gateway migration", "": "", "pageLink": "/display/GMDM/Kong+gateway+migration", "content": "Installation procedureDeploy crds\n# Download package with crds to current directory\ntar -xzf crds_to_deploy.tar.gzcd crds_to_deploy/\nbase=$(pwd)\nBackup olds to proper k8s context\nkubectx atp-mdmhub-nprod-apac\n\n# Get all crds from cluster and saves them into file ${crd_name}_${env}.yaml\n# Args:\n# $1 = env\ncd $base\nmkdir old_apac_nprod\ncd old_apac_nprod\nget_ apac_nprod\n\n\ncreate new crds\ncd $base/new/splitted/\n# create new crds\nfor i in $(ls); do echo $i; kubectl create -f $i ; done\n# apply new crds\nfor i in $(ls); do echo $i; kubectl apply -f $i ; done\n# replace crds that were not properly installed \nfor i in   -crds.yaml01 kic-crds.yaml03 kic-crds.yaml05 kic-crds.yaml07 kic-crds.yaml11; do echo $i ; kubectl replace -f $i; done\nApply new version of gatewayconfigrations \ncd $base/new\nkubectl replace -f gatewayconfiguration-new.yaml\nApply old version of kongingress\ncd $base/old\nkubectl replace tests is advised to check if everything is workingDeploy operators with version that have -gateway-operator(4.32.0 or newer)# Performing tests is advised to check if everything is workingMerge configuration backend (4.33.0-project-boldmove-SNAPSHOT or newer)# Performing tests is advised to check if everything is workingDeploy mdmhub components (4.33.0-project-boldmove-SNAPSHOT or newer)# Performing tests is advised to check if everything is workingTestsChecking all ingresses\n# Change /etc/hosts if 's are not yet changed. 
To obtain all hosts that should be modified in /etc/hosts: to correct k8s context\n# k get ingresses -o custom-columns=host0:les[0].host -A | tail -n +2 | sort | uniq | tr '\\n' ' '\n# To get : \n# k get svc -n kong -l (kubectl get ingress -A -o custom-columns="NAME:,,PATH:les[0]ths[0].path" | tail -n +2 | awk '{print "https://"$2":443"$3}')\nwhile IFS= read -r line; do echo -e "\\n\\n---- $line ----"; curl -k $line; done <<< $endpoints\nChecking plugins \nexport reltio_authorization="yyyyyyyyy"\nexport consul_token="zzzzzzzzzzz"\n\n\nkey-auth:\n curl curl -H "apikey: $apikey"\n curl -H 'apikey: $apikey'\n\nmdm-external-oauth:\n curl --location --request POST --header 'Content-Type: application/x-www-form-urlencoded "Authorization: Basic $reltio_authorization" | jq .access_token\n curl --header 'Authorization: Bearer access_token_from_previous_command'\n\ncorrelation-id:\n curl -v -H "apikey: $apikey" 2>&1 | grep hub-correlation-id \n\nbackend-auth:\n kibana-backend-auth:\n # Web browser \n    # Web browser   # Open debugger console in web browser and check if cookies are set\n\npre-function:\n k logs -n emea-backend -l app=consul -f --tail=0\n k exec -n airflow airflow-scheduler-0 -- curl -k uster.local:80//kv/dev?token=$consul_token\n\nopentelemetry:\n curl -H "apikey: $apikey"\n +\n # Web browser\n -it -75bb85fc4c-2msfv -- /bin/bash\n curl localhost:8100/metrics\n\n\nCheck logsGateway operatorKong pod - proxy and ingress controllerNew kong dataplaneNew kong controlPlaneStatus of objects: DataplaneControlplaneGateway\nk get ,dataplane,controlplane -n kong\nCheck services in old and  Old kong\nservices=$(k exec -n kong mdmhub-kong-kong-f548788cd-27ltl -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .\nNew kong\n exec -n kong dataplane-kong-knkcn-bjrc7-5c9f596ff9-t94lf -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .\nReferenceKong operator configuration gateway operator crd's reference" }, { "title": "MongoDB:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Mongo-SOP-001: Mongo Scripts", "": "", "pageLink": "/display/GMDM/Mongo-SOP-001%3A+Mongo+Scripts", "content": "Create Mongo Indexes\nhub_errors\n b_eateIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n b_eateIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n b_eateIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n b_eateIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_errors\n teway_eateIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n teway_eateIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n teway_eateIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n teway_eateIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_transactions\n teway_eateIndex({transactionTS: -1}, {background: true, name: "idx_transactionTS_-1"});\n teway_eateIndex({status: -1}, {background: true, name: "idx_status_-1"});\n teway_eateIndex({requestId: -1}, {background: true, name: "idx_requestId_-1"});\n teway_eateIndex({username: -1}, {background: true, name: "idx_username_-1"});\n\n\nentityHistory\n eateIndex({country: -1}, {background: true, name: "idx_country"});\n eateIndex({sources: -1}, {background: true, name: "idx_sources"});\n eateIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n eateIndex({status: -1}, 
{background: true, name: "idx_status"});\n eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n eateIndex({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n eateIndex({"osswalks.type": 1}, {background: true, name: ": -1}, {background: true, name: "idx_forceModificationDate"});\n\n\nentityRelations\n eateIndex({country: -1}, {background: true, name: ": -1}, {background: true, name: "idx_sources"});\n eateIndex({entityType: -1}, {background: true, name: "idx_relationType"});\n eateIndex({status: -1}, {background: true, name: "idx_status"});\n eateIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n eateIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n eateIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n eateIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n eateIndex.({"lue": 1}, {background: true, name: "idx_crosswalks_v_asc"}); eateIndex.({"osswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); eateIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n\n\n\n\n\n\nFind ACTIVE relations connected to inactive Entities\nvar start = new Date().getTime();\n\nvar result = Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t "status" : "ACTIVE"\n\t\t\t}\n\t\t},\n\n//\t\t// Stage 2\n//\t\t{\n//\t\t\t$limit: 1000\n//\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$lookup: // Equality Match\n\t\t\t{\n\t\t\t from: "entityHistory",\n\t\t\t localField: "relation.endObject.objectURI",\n\t\t\t foreignField: "_id",\n\t\t\t as: "matched_entity"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t "$or" : [\n\t\t\t {\n\t\t\t "matched_atus" : "INACTIVE"\n\t\t\t }, \n\t\t\t {\n\t\t\t "matched_atus" : "LOST_MERGE"\n\t\t\t },\n\t\t\t {\n\t\t\t "matched_atus" : "DELETED"\n\t\t\t } \n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t\t\t\t _id:"$matched_atus", \n\t\t\t\t\t\t count:{$sum:1}, Created with Studio 3T, the for MongoDB - );\n\n\n \t\nprintjson(result._batch) \t\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\nFix entities with wrong parentEntityId\nprint("START")\nvar start = new Date().getTime();\n\nvar result = // Pipeline\n [\n // Stage 1\n {\n $match: {\n "status" : "LOST_MERGE",\n "$and" : [\n {\n "$or" : [\n {\n "mdmSource" : "RELTIO"\n },\n {\n "mdmSource" : {\n "$exists" : false\n }\n }\n ]\n }\n ]\n }\n },\n\n // Stage 2\n {\n $graphLookup: {\n "from" : "entityHistory",\n "startWith" : "$_id",\n "connectFromField" : "parentEntityId",\n "connectToField" : "_id",\n "as" : "master",\n "maxDepth" : 10.0,\n "depthField" : "depthField"\n }\n },\n\n // Stage 3\n {\n $unwind: {\n "path" : "$master",\n "includeArrayIndex" : "arrayIndex",\n "preserveNullAndEmptyArrays" : false\n }\n },\n\n // Stage 4\n {\n $match: {\n "atus" : {\n "$ne" : "LOST_MERGE"\n }\n }\n },\n\n // Stage 5\n {\n $redact: {"$cond" : {\n "if" : {\n "$ne" : [\n "$master._id",\n "$parentEntityId"\n ]\n }, "then" : "$$KEEP",\n "else" : "$$PRUNE"\n }\n }\n },\n\n ]\n\n // Created with Studio 3T, the for MongoDB - );\n\n\rEach(function(obj) {\n var id = obj._id;\n var masterId = ster._id;\n\n if( masterId !== undefined){\n\n print( id + " " + " " + rentEntityId +" replaced to "+ masterId);\n var currentTime = new Date().getTime();\n\n 
var result = db.entityHistory.update( {"_id":id}, {$set: { "parentEntityId":masterId, "forceModificationDate": NumberLong(currentTime) } });\n printjson(result);\n }\n\n});\n\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\n\n\n\nFind entities based on the FILE with the crosswalks\ndb = tSiblingDB('reltio')\nvar file = cat('crosswalks.txt'); // read the crosswalks file\nvar crosswalk_ids = file.split('\\n'); // create an array of crosswalks\nfor (var i = 0, l = crosswalk_ids.length; i < l; i++){ // for every crosswalk search it in the entityHistory\n print("ID crosswalk: " + crosswalk_ids[i])\n var result = nd({\n status: { $eq: "ACTIVE" },\n "lue": crosswalk_ids[i]\n }).projection({id:1, country:1})\n printjson(Array());\n}\nFind ACTIVE entities with duplicated crosswalk - missing or wrong LOST_MERGE event\tCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/" , : "RELTIO", "lastModificationDate" : {\n\t\t\t "$gte" : NumberLong()\n\t\t\t } }\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$project: { _id: 0, "osswalks": 1,"entity.uri":2, "entity.updatedTime":3 }\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: "$osswalks"\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {_id:"$lue", count:{$sum:1}, entities:{$push: {uri:"$entity.uri", modificationTime:"$entity.updatedTime"}}}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$match: { count: { $gte: 2 } }\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$redact: {\n\t\t\t "$cond" : {\n\t\t\t "if" : {\n\t\t\t "$ne" : [\n\t\t\t "$lue", \n\t\t\t "$lue"\n\t\t\t ]\n\t\t\t }, \n\t\t\t "then" : "$$KEEP", \n\t\t\t "else" : "$$PRUNE"\n\t\t\t }\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the for MongoDB - );\n\n\n\nFix LOST_MEREGE entities with missing entityType attribute\nprint("START")\nvar start = new Date().getTime();\n\nvar result = tCollection("entityHistory").aggregate(\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t "status" : "", \n\t\t\t "entityType" : {\n\t\t\t "$exists" : false\n\t\t\t }, \n\t\t\t "$and" : [\n\t\t\t {\n\t\t\t "$or" : [\n\t\t\t {\n\t\t\t "mdmSource" : , \n\t\t\t {\n\t\t\t "mdmSource" : {\n\t\t\t "$exists" : false\n\t\t\t }\n\t\t\t }\n\t\t\t ]\n\t\t\t }\n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$graphLookup: {\n\t\t\t "from" : "entityHistory", \n\t\t\t "startWith" : "$_id", \n\t\t\t "connectFromField" : "parentEntityId", \n\t\t\t "connectToField" : "_id", \n\t\t\t "as" : "master", \n\t\t\t "maxDepth" : 10.0, \n\t\t\t "depthField" : "depthField"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t "path" : "$master", \n\t\t\t "includeArrayIndex" : "arrayIndex", \n\t\t\t "preserveNullAndEmptyArrays" : false\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t "atus" : {\n\t\t\t "$ne" : "LOST_MERGE"\n\t\t\t }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$redact: {\n\t\t\t "$cond" : {\n\t\t\t "if" : {\n\t\t\t "$eq" : [\n\t\t\t "$master._id", \n\t\t\t "$parentEntityId"\n\t\t\t ]\n\t\t\t }, \n\t\t\t "then" : "$$KEEP", \n\t\t\t "else" : "$$PRUNE"\n\t\t\t }\n\t\t\t}\n\t\t}\n\t]\n\n\t// Created with Studio 3T, the for MongoDB - );\n\n\t\rEach(function(obj) {\n var id = obj._id;\n\n var masterEntityType = ( masterEntityType !== undefined){\n if(obj.entityType == undefined){\n\t print("entityType is " + obj.entityType + " for " + id +", changing to 
"+ masterEntityType);\n\t var currentTime = new result = db.entityHistory.update( {"_id":id}, {$set: { "entityType":masterEntityType, "lastModificationDate": NumberLong(currentTime) } });\n printjson(result);\n }\n\t}\n\n});\n \t\n \t\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\nGenerate report from gateway_transaction ( Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t "$and" : [\n\t\t\t {\n\t\t\t "transactionTS" : {\n\t\t\t "$gte" : NumberLong()\n\t\t\t }, \n\t\t\t "username" : "dea_batch"\n\t\t\t }\n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$group: {\n\t\t\t _id:"$requestId", \n\t\t\t count: { $sum:1 },\n\t\t\t transactions: { $push : "$$ROOT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t path : "$transactions",\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$addFields: {\n\t\t\t \n\t\t\t "statusNumber": { \n\t\t\t $cond: { \n\t\t\t if: { \n\t\t\t $eq: ["$atus", "failed"] \n\t\t\t }, \n\t\t\t then: 0, \n\t\t\t else: 1 \n\t\t\t }\n\t\t\t } \n\t\t\t \n\t\t\t \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$sort: {\n\t\t\t "questId": 1, \n\t\t\t "statusNumber": -1,\n\t\t\t "ansactionTS": -1 \n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$group: {\n\t\t\t _id:"$_id", \n\t\t\t transaction: { "$first": "$$CURRENT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 7\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "unt": "$unt" \n\t\t\t}\n\t\t},\n\n\t\t// Stage 8\n\t\t{\n\t\t\t$replaceRoot: {\n\t\t\t newRoot: "$ansactions"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 9\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "file_raw_line": "$le_raw_line",\n\t\t\t "filename": "$lename"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 10\n\t\t{\n\t\t\t$project: {\n\t\t\t requestId : 1,\n\t\t\t count: 2,\n\t\t\t "filename": 3,\n\t\t\t uri: "$mdmUri",\n\t\t\t country: 5,\n\t\t\t source: 6,\n\t\t\t crosswalkId: 7,\n\t\t\t status: 8,\n\t\t\t timestamp: "$transactionTS",\n\t\t\t //"file_raw_line": 10,\n\t\t\t\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the for MongoDB - );\n\n\n\nExport Config for Studio3T - format: 1 CURRENT_QUERY_RESULT 0 0 CSV 2 MAKE_NULL " false true true _id count country crosswalkId filename requestId source status timestamp uri false false false 0 false false false false Excel _id count country crosswalkId filename requestId source status timestamp uri FILE D:\\docs\\FLEX\\REPORT_transaction_log\\10_10_2018\\load_report.csv trueFind entities and BY country\n gregate([\n {$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/" } }, {$project: { _id: 1, "country":1 } }, {$group : {_id:"$country", count:{$sum:1},}},\n {$match: { count: { $gte: 2 } } },],{ allowDiskUse: true } )\nFind Entities where ALL/ANY of the crosswalks array objects has delete date set\n// find entities where ALL crosswalk array objects has delete date set (not + exists false)\nd({\n entityType: "configuration/entityTypes/HCP",\n country: "br",\n status: "ACTIVE",\n "osswalks": { $not: { $elemMatch: { deleteDate: {$exists:false} } } }\n})\n\n// find entities where ANY OF crosswalk array objecst has delete date set\nd({\n entityType: "configuration/entityTypes/HCP",\n country: "br",\n status: "ACTIVE",\n "osswalks": { $elemMatch: { deleteDate: {$exists:true} } }\n})\nExample of Multiple Update based on the search query\tCollection("entityHistory").update(\n { \n "status" : "", "entity" : {\n "$exists" : true\n }\n },\n { \n $set: { "lastModificationDate": ) }, $unset: 
{entity:""}\n },\n { multi: true }\n)\n\n\n\nGroup RDM exceptions and get details with sample entities that have been excluded from the aggregation pipeline query\n__3tsoftwarelabs_disabled_aggregation_stages = [\n\n\t{\n\t\t// Stage 2 - excluded\n\t\tstage: 2, source: {\n\t\t\t$limit: 1000\n\t\t}\n\t},\n]\n\tCollection("hub_errors").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t "exceptionClass" : "cessing.RDMMissingEventForwardedException",\n\t\t\t "status" : "NEW"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$project: { \n\t\t\t "entityId":"$exchangeInHeaders.kafka[dot]KEY",\n\t\t\t "attributeName": "$tributeName",\n\t\t\t "attributeValue": "$tributeValue", \n\t\t\t "errorCode": "$rorCode"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {\n\t\t\t _id: { entityId:"$entityId", attributeValue: "$attributeValue",attributeName:"$attributeName"}, // can be grouped on multiple properties \n\t\t\t dups: { "$addToSet": "$_id" }, \n\t\t\t count: { "$sum": 1 } \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t //_id: { attributeValue: "$_tributeValue",attributeName:"$_tributeName"}, // can be grouped on multiple properties \n\t\t\t _id: { attributeName:"$_tributeName"}, // can be grouped on multiple properties \n\t\t\t entities: { "$addToSet": "$_id.entityId" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$project: {\n\t\t\t _id: 1,\n\t\t\t sample_entities: { $slice: [ "$entities", 10 ] } \n\t\t\t affected_entities_count: { $size: "$entities" } \n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the for MongoDB - );\n\n\n\nMongo SIMPLE searches/filter/lengs/regexp examples\n// GET\nd({})\n// GET random 20 entities\gregate( \n [ \n { $match : { status : "ACTIVE" } },{ \n $sample: {size: 20} \n }, { $project: {_id:1}\n },\n\n] )\n \n// entity get by /rOATtJD"\n})\n\n\ndb.entityHistory_nd({\n _id: "entities/ exists\nd({\n "tributes.Specialities": { $exists: true\n } size > 4\nd({\n "tributes.Specialities": {\n $exists: true\n },\n $and: [{$where: "tributes.Specialities.length > 6"}, {$where: "urces.length >= 2"},\n ]\n\n})\mit(10)\n// only project ID\jection({id:1})\n\n\n// Address size > 4\nd({\n "dress": {\n $exists: true\n },\n $and: [{$where: "dress.length > 4"}, {$where: "urces.length > 2"},\n ]\n\n})\mit(10)\n// only project ID\n//.projection({id:1})\n\n\n// Address AddressType size 2\nd({\n "dress": {\n $exists: true\n },"atus.lookupCode": {\n $exists: true,\n $eq: "ACTV"\n },\n }, {"atus": 1\n })\n .limit(10)\n\n\n// Address AddressType size 2\nd({\n "dress": {\n $exists: true\n },\n $and: [{$where: "dress.length >= 4"}, {$where: "urces.length >= 4"},\n ]\n\n})\mit(2)\n//.projection({id:1})\n// only "dress": {\n $exists: true\n },"": {\n $exists: true\n }\n})\mit(2)\n// only project ID\n//.projection({id:1})\n\nd({\n "dress": {\n $exists: true\n },"lidationStatus": {\n $exists: true\n },"entityType":"configuration/entityTypes/HCO",\n $and: [{\n $where: "dress.length > 4"\n \n }]\n })\n .limit(1)\n// only project ID\n//.projection({id:1})\n\n\n\n//SOURCE NAME\nd({\n "dress": {\n $exists: true\n },lastModificationDate: {\n $gt: \n }\n })\n .limit(10)\n// only project\n\n\n\nd({\n "dress": {\n $exists: true\n },"fRelation.objectURI": {\n $exists: false\n },\n }).limit(10)\n// only project\n\n\n// Phone exists\nd({\n "one": { $exists: true\n }\n}) .limit(1)\n\n//Specialities exists\nd({\n "tributes.Specialities": {\n $exists: true\n },\n country: "mx"\n}).limit(10)\n \n// 
Speclaity Code\nd({\n "tributes.Specialities": {\n $exists: true\n },\n "lue.Specialty.lookupCode": "country: "\n// tributes. Identifiers License exists\nd({\n "entifiers": {\n $exists: true\n },\n country: "mx"\n}).limit(1)\n \n \n// Name of organization is empty\nd({\n entityType: "configuration/entityTypes/HCO",\n : {\n $exists: false\n },\n // "parentEntityId": {\n // $exists: false\n // },\n country: "mx"\n}).limit(10)\n\n\n\n\n// entity get by ID startObjectID\nd({\n startObjectId: "entities/14tDdkhy"\n})\n\nd({\n endObjectId: "entities/14tDdkhy"\n})\n\n\nd({\n _id: "relations/RJx9ZkM"\n})\n\nd({\n "": {\n $exists: true\n }\n}).limit(1)\n\n\n\n// Address size > 4\nd({\n "one": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/HasAddress",\n //$and: [\n// {$where: "dress.length > 3"}, //{$where: "urces.length >= 2"},\n //]\n\n})\mit(10)\n// only project ID\n//.projection({id:1})\n\n\n\n\n// \nd({\n "osswalks": {\n $exists: true\n },\n "leteDate": {\n $exists: true\n }\n\n})\mit(10)\n// only project ID\n//.projection({id:1})\n\n\nd({\n "artObject": {\n $exists: true\n },\n "artObject.objectURI": {\n $exists: false\n }\n\n})\mit(1)\n\n\n\n// merge finder\nd({\n "artObject": {\n $exists: true\n },\n "relation.endObject": {\n $exists: true\n },\n $and: [{$where: "osswalks.length > 2"}, {$where: "urces.length >= 1"},\n ]\n\n})\mit(10)\n// only project ID\n//.projection({id:1})\n\n\n// merge finder\nd({\n "artObject": {\n $exists: true\n },"relation.endObject": {\n $exists: true\n },//"osswalks.0.uri": artsWith("artObject.objectURI")\n "osswalks.0.uri": /^artObject.objectURI.*$/i\n})\mit(2)\n\n\n\n\n\n// Phone - HasAddress\nd({\n "one": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/"": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/Activity",\n})\n\n\n// Identifiers - HasAddress\nd({\n "entifiers": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/"": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/Activity",\n})\n\n\n\n\nd({\n "dress": {\n $exists: true\n }\n })\n// only project\n\n\nd({\n "dress": {\n $exists: true\n },"fRelation.uri": {\n $exists: false\n },"fRelation.objectURI": {\n $exists: true\n },\n })\n// only project\n\n\nd({\n "dress": {\n $exists: true\n },"fRelation.uri": {\n $exists: true\n },"fRelation.objectURI": {\n $exists: false\n }\n })\n// only project\n\nd({\n "dress": {\n $exists: true\n },"fRelation.uri": {\n $exists: true\n },"fRelation.objectURI": {\n $exists: true\n },\n })\n\nd({\n "dress": {\n $exists: true\n },lastModificationDate: {\n $gt: \n }\n })\n .limit(10)\n// only project\n\nd({})\n// GET random 20 entities\n\n \n// entity get by ID\nd({\n _id: "entities/Nzn07bq"\n})\n\n\n// Address AddressType size 2\nd({\n "dress": {\n $exists: true\n },\n $and: [{$where: "dress.length >= 4"}, {$where: "urces.length >= 4"},\n ]\n\n})\mit(2)\n\n\n\n\nGet the and the Crosswalks Size - ifNull return 0 elements\tCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { mdmSource: "RELTIO" \n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$limit: 1000\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "crosswalksSize": { $size: { "$ifNull": [ "$osswalks", [] ] } }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$project: {\n\t\t\t _id: 1,\n\t\t\t crosswalksSize:1 \n\t\t\t \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the for MongoDB - );\n\n\nTMP Copy\n// COPY THIS SECTION \n" }, { "title": ": Running 
mongo scripts remotely on k8s cluster", "": "", "pageLink": "/display/GMDM/Mongo-SOP-002%3A+Running+mongo+scripts+remotely+on+k8s+cluster", "content": "Get the tool:Go to file in inbound-services the file to your e tool requires kubenetes installed and (tested on WSL2) for working age guide:Available commands:./run_mongo_ --helpShows general help message for the script tool:./run_mongo_ exec Execute to run script remotely on pod agent on k8s script. Script will be copied from the given path on local machine to pod and then run on pod. To get details about accepted arguments run ./run_mongo_ exec --help./run_mongo_ get Execute to download script results from pod agent and save in given path on your local machine. To get details about accepted arguments run ./run_mongo_ get --helpExample flow:Save mongo script you want to run in file example_script.js (Script file has to have .js or .mongo extension for tool to run  ./run_mongo_ exec example_script.js emea_dev to run your script on emea_dev environmentUpon complection the path where the script results were saved on pod agent will be returned (eg. /pod/path/result.txt)Run ./run_mongo_ get /pod/path/result.txt local/machine/path/example_script_result.txt emea_dev to save script results on your local ol editionThe tool was written using bashly - a bash framework for developing e tool source is available HERE. Edit files and generate singular output script based on guides available on bashly NOT EDIT run_mongo_ file MANUALLY (it may result in script not working correctly)." }, { "title": "Notifications:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Sending notification", "": "", "pageLink": "/display/GMDM/Sending+notification", "content": "We send notifications to our clients in the case of the following events:Unplanned outage - MDMHUB is not available for our clients - REST , or doesn't work properly and clients are not able to connect. Currently, you have to send notification in the case of the following events:kong_http_500_status_prodkong_http_502_status_prodkong_http_503_status_prodkong3_http_500_status_prodkong3_http_502_status_prodkong3_http_503_status_prodkafka_missing_all_brokers_prodPlanned outage - it is maintenance window when we have to do some maintenance tasks that will cause temporary problems with accessing to endpoints,Update configuration - some of endpoints are changed i.e.: rest URL address, address etc.We always sends notification in the case of unplanned outage to inform our clients about and let them know that somebody from us is working on issue. Planned outage and update configuration are always planned activity that are confirmed with release management and scheduled to specific time tification send notifications using your 's email CC always set our DLs: , our clients as  according to table mentioned below:Click here to expand Recepients list (XLS above is easier to filter){"name":"MDM_Hub_notification_recipients.xlsx","type":"xlsx","pageID":""}Loading On the above screen we can see a few placeholders,Notification type - must be one of: UNPLANNED OUTAGE, PLANNED OUTAGE or UPDATE CONFIGURATION,Environments - a list of MDMHUB environments that related to notification. It is very important to provide region and specific environment type eg. /STAGE, AMER NPRODs etc. It is good to provide a links to documentation that describe listed environments. Environment documentation can be found here,When - the date when situation that notification describes start occurring. 
In the case of unplanned outage you have to provide the date when we noticed the failure. For rest of situations it should be time range to determine when activity will start and finish,Description - details that describe situation, possible impacts and expected time of resolution (if it is possible to determine). Some of the notification templates have placeholder "" that should be fill up using labels endpoint and endpoint_ext value from alert triggered in karma. Thanks this, customers will be able to recognize that outage impacting on theirs tification templatesBelow you can find notification templates that you can get, fill and send to our clients:Generic template: gKafka issues: gAPI issues: g" }, { "title": "COMPANYGlobalCustomerID:", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Fix \"\" or null IDs - Fix Duplicates", "": "", "pageLink": "/pages/tion?pageId=", "content": "The following SOP describes how to fix "" or null COMPANYGlobalCustomerIDs values in and regenerate events in e SOP also contains the step to fix duplicated values and regenerate eps: Check empty or null: \n\t db = tSiblingDB("reltio_amer-prod");\n\t\tCollection("entityHistory").find(\n\t\t\t{\n\t\t\t\t"$or" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : ""\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : {\n\t\t\t\t\t\t\t"$exists" : false\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t"status" : {\n\t\t\t\t\t"$ne" : "DELETED"\n\t\t\t\t}\n\t\t\t}\n\t\t);\nMark all ids for further event regeneration. Run the on Studio3t or on :log in to correct cluster on backend namespace copy script - kubectl cp  ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.jsrun - nohup mongo --host mongo/localhost:27017 -u admin -p --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &download result - kubectl cp /out/reload_DEV.out ./reload_DEV.outUsing output find all "TODO" lines and regenerate correct eventsCheck {\n\t\t\t\t\t_id: {COMPANYID: "$COMPANYID"},\n\t\t\t\t\tuniqueIds: {$addToSet: "$_id"},\n\t\t\t\t\tcount: {$sum: 1}\n\t\t\t\t\t}\n\t\t\t\t},\n\n\t\t\t\t// Stage 2\n\t\t\t\t{\n\t\t\t\t\t$match: { \n\t\t\t\t\tcount: {"$gt": 1}\n\t\t\t\t\t}\n\t\t\t\t}, \n\t\t\t],\n\n\t\t\t// Options\n\t\t\t{\n\t\t\t\tallowDiskUse: true\n\t\t\t}\n\n\t\t\t// Created with Studio 3T, the for MongoDB - there are duplicates run run the on Studio3t or on :log in to correct cluster on backend namespace copy script - kubectl cp  ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.jsrun - nohup mongo --host mongo/localhost:27017 -u admin -p --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &download result - kubectl cp /out/reload_DEV.out ./reload_DEV.outUsing output find all "TODO" lines and regenerate correct eventsReload events    Events RUNYou can use the following 2 scripts:\n#!/bin/bash\n\nfile=$1\nevent_type=$2\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}" | jq --arg eventTimeArg `date +%s%3N` --arg eventType ${event_type} -r '.[] | . 
+"|{\\"eventType\\": \\"\\($eventType)\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", ": [\\"" + (.|tostring) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n\nThis script input is the file with entityid separated by new lineExmaple:entities/xVIK0nhentities/uP4eLwsentities/iiKryQOentities/ZYjRCFNentities/13n4v93AExample execution:./ dev_reload_empty_ids.csv HCP_CHANGED >> EMEA_DEV_events.txtOR\n#!/bin/bash\n\nfile=$1\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}" | jq --arg eventTimeArg `date +%s%3N` -r '.[] | (. | tostring | split(",") | .[0] | tostring ) +"|{\\"eventType\\": \\""+ ( . | tostring | split(",") | if .[1] == "" then "HCP_LOST_MERGE" else "HCP_CHANGED" end ) + "\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", ": [\\"" + (. | tostring | split(",") | .[0] | tostring ) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n\nThis script input is the file with entityId,status separate by new lineExample:entities/10BBdiHR,LOST_MERGEentities/10BBdv4D,LOST_MERGEentities/10BBe7qz,LOST_MERGEentities/10BBgKFF,, execution:./script_2_ dev_reload_lost_merges.csv >> EMEA_DEV_events.txtPush the generate file to topic using producer:./start_sasl_ prod-internal-reltio-events < EMEA_PROD_events.txtSnowflake Check\n-- COMPANY COMPANY_GLOBAL_CUSTOMER_ID checks - null/empty\nSELECT count(*) FROM ENTITIES WHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID = '' \nSELECT * FROM ENTITIES WHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID = '' \n\n-- duplicates\nSELECT COMPANY_GLOBAL_CUSTOMER_ID \nFROM ENTITIES \nWHERE COMPANY_GLOBAL_CUSTOMER_ID IS NOT NULL OR COMPANY_GLOBAL_CUSTOMER_ID != '' \nGROUP BY COMPANY_GLOBAL_CUSTOMER_ID HAVING COUNT(*) >1\n\n\n" }, { "title": "Initialization Process", "": "", "pageLink": "/display/GMDM/Initialization+Process", "content": "The process will sync COMPANYGlobalCustomerID attributes to the MongoDB (EntityHistory and COMPANYIDRegistry) and then refresh the snowflake with this e process is divided into the following steps:Create an index in eateIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});Configure entity-enricher so it has the ov:false option for nOvAttributesToInclude:- COMPANYCustID- COMPANYGlobalCustomerIDDeploy the hub components with callback enabled (3.9.1 version)RUN hub_reconciliation_v2 - first run the HUB Reconciliation -> this will enrich all data with COMPANYGlobaCustomerID with ov:true and ov:false valuesbased on this is here - :8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev&root=doc - HUB Reconciliation Process V2check if the configuration contains the following - nonOvAttrToInclude: "COMPANYCustID, directory structure and perties file in emea//inbound/hub/hub_reconciliation/ :8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_qa:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_stageRUN hub_COMPANYglobacustomerid_initial_sync_ DAGIt contains 2 steps:COMPANYglobacustomerid_active_inactive_reconciliation the groovy script that - check the HUB entityHistory ACTIVE/INACTIVE/DELETED entities - for all these entities get ov:true COMPANYGlobalCustomerId and enrich and CacheCOMPANYglobacustomerid_lost_merge_reconciliation  the groovy script that - this step checks entities. Do the merge_tree full export from . 
Snowflake check:\n-- COMPANY_GLOBAL_CUSTOMER_ID checks - null/empty\nSELECT count(*) FROM ENTITIES WHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID = '';\nSELECT * FROM ENTITIES WHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID = '';\n\n-- duplicates\nSELECT COMPANY_GLOBAL_CUSTOMER_ID \nFROM ENTITIES \nWHERE COMPANY_GLOBAL_CUSTOMER_ID IS NOT NULL AND COMPANY_GLOBAL_CUSTOMER_ID != '' \nGROUP BY COMPANY_GLOBAL_CUSTOMER_ID HAVING COUNT(*) > 1;\n\n\n" }, { "title": "Initialization Process", "": "", "pageLink": "/display/GMDM/Initialization+Process", "content": "The process syncs COMPANYGlobalCustomerID attributes to MongoDB (EntityHistory and COMPANYIDRegistry) and then refreshes Snowflake with this data. The process is divided into the following steps: Create an index (see the sketch after this page): createIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"}); Configure entity-enricher so it has the ov:false option for nonOvAttributesToInclude: - COMPANYCustID - COMPANYGlobalCustomerID. Deploy the hub components with callback enabled (version 3.9.1). RUN hub_reconciliation_v2 - first run the HUB Reconciliation -> this will enrich all data with COMPANYGlobalCustomerID ov:true and ov:false values. The DAG is here - :8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev&root=doc - HUB Reconciliation Process V2. Check if the configuration contains the following - nonOvAttrToInclude: "COMPANYCustID, and check the directory structure and properties file in emea//inbound/hub/hub_reconciliation/. :8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev :8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_qa :8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_stage RUN the hub_COMPANYglobacustomerid_initial_sync_ DAG. It contains 2 steps: COMPANYglobacustomerid_active_inactive_reconciliation - the groovy script that checks the HUB entityHistory ACTIVE/INACTIVE/DELETED entities and, for all of them, gets the ov:true COMPANYGlobalCustomerId and enriches entityHistory and the Cache. COMPANYglobacustomerid_lost_merge_reconciliation - the groovy script that checks entities: do the merge_tree full export from , and based on the merge_tree add the missing ids. RUN snowflake_reconciliation - full snowflake reconciliation by generating the full file with empty checksums." },
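Step 1 above only shows the createIndex arguments; a hedged sketch of the full call, assuming it targets the entityHistory collection (the page names EntityHistory and COMPANYIDRegistry as the sync targets), packaged as a script runnable with the run_mongo_ tool:

```bash
# Hedged sketch of step 1 (create the COMPANYGlobalCustomerID index); collection name is an assumption.
cat > create_company_id_index.js <<'EOF'
db = db.getSiblingDB("reltio_amer-prod");          // adjust to the target environment's database
db.getCollection("entityHistory").createIndex(
    { COMPANYGlobalCustomerID: -1 },
    { background: true, name: "idx_COMPANYGlobalCustomerID" }
);
EOF
# run remotely, e.g.: ./run_mongo_ exec create_company_id_index.js emea_dev
```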
{ "title": "Remove Duplicates and Regenerate Events", "": "", "pageLink": "/display/", "content": "This SOP describes the workaround to fix duplicated COMPANYGlobalCustomerID values. Case: there are 2 entities with the same COMPANYGlobalCustomerID. Example:    1Qbu0jBQ - Jun 14, @ 18:10:44.963    ID-mdmhub-reltio-subscriber-dynamic-866b588c7-w9crm--0-    ENTITY_CREATED    entities/1Qbu0jBQ    RELTIO    success    entities/1Qbu0jBQ        3Ot2Cfw - Aug 11, @ 18:53:31.433    ID-mdmhub-reltio-subscriber-dynamic-79cd788b59-gtzm6--0-    ENTITY_CREATED    entities/3Ot2Cfw    RELTIO    success    entities/3Ot2Cfw. One of the two entities is the WINNER and the other is the LOSER. Rule: if there are duplicates, always pick the LOST_MERGED entity and update the loser only with the different value. Do not change an active entity. Steps: Go to Reltio, open the winner and check the other (OV:FALSE) COMPANYGlobalCustomerIDs. Pick a new value from the list. Check that there are no duplicates in , and search for the new value by the COMPANY id in the cache; if it already exists, pick a different one. Regenerate the event: if the loser entity is now active in but not active in , regenerate a CREATED event: entities/1Qbu0jBQ|{  "eventType" : "HCP_CREATED",  "eventTime" : "",  "entityModificationTime" : "",  "entitiesURIs" : [ "entities/1Qbu0jBQ" ],  "mdmSource" : "RELTIO",  "viewName" : "default" } If the loser entity is not present in because it is a loser, regenerate a LOST_MERGE event: entities/1Q7XLreu|{"eventType":"HCO_LOST_MERGE","eventTime":,"entityModificationTime":,"entitiesURIs":["entities/1Q7XLreu"],"mdmSource":"RELTIO","viewName":"default"} Example PUSH to PROD: Check , an updated entity should change its COMPANYGlobalCustomerID. Check ." }, { "title": " ():", "": "", "pageLink": "/pages/tion?pageId=", "content": "" }, { "title": "Batch Loads - Client-Sourced", "": "", "pageLink": "/display/GMDM/Batch+Loads+-+Client-Sourced", "content": "Log in to PROD Kibana :5601/app/kibana using the dedicated "kibana_gbiccs_user". Go to the Dashboards tab - "PROD Batch loads". Change the time range and choose the source to check if the new file was loaded for it. The dashboard is divided into the following sections: File by type - this visualization presents how many files of a specific type were loaded during a specific time range. File load count - this visualization presents when a specific file was loaded. File load summary - in this table you can verify detailed information about a file load. Check if files are loaded with the following agenda: SAP - incremental loads - max 4 files per day, min 2 files per day. Agenda: 1. 01:20 CET time 2. time 3. time 4. time; Saturday 1. . - incremental loads - 2 files per day, WKCE.*.txt and WKHH.*.txt. Agenda (when | hours): Tuesday-Saturday 1. estimates: . DEA load - 1 file per week, FF_DEA_IN_.*.txt. Agenda (when | hours): Tuesday 1. estimates: CET time. 340B - incremental load - 4 files per month, 340B_FLEX_TO_RELTIO_*.txt. Agenda: files uploaded on the 3rd, 10th, 24th and , at ~12:30 PM CET time. If the upload date falls on , the file will be loaded on . Check if the file limit was not exceeded: check the "Suspended Entities" attribute. If this parameter is greater than 0, it means that post-processing was not invoked. The current post-processing limit is 22 000. To increase the limit - send the notification (7.d); after agreement, do (8.). Take action if the input files are not delivered on schedule: SAP - To: ;;;DL-GMFT-EDI-PRD-SUPPORT@ CC: ;;;;;;irumurthy@ HIN - To: ;;;DL-GMFT-EDI-PRD-SUPPORT@ CC: ;;;;;;;; irumurthy@ DEA - To: ;;;DL-GMFT-EDI-PRD-SUPPORT@ CC: ;;;;;;;; - limit notification - To: ;;;irumurthy@ CC: ;rawski@ Take action if the limit was exceeded: log in to each PROD host, go to "cd /app/mdmgw/batch_channel/config/" and edit "application.yml" on each host: change leteDateLimit: 22 000 to the new value. Restart components: execute :8443/job/mdm_manage_playbooks/job/Microservices/job/manage_microservices__prod_us/ with component: mdmgw_batch-channel_1, node: all_nodes, command: restart. Load the latest file (the MD5 checksum skips all entities, so only the post-processing step will be executed). Change and commit the new limit to GIT: . Example emails: limit exceeded - load check: Hi Team, We just received the file; the current post-processing is set to a 22 000 limit. The load resulted in xxxx profiles to be updated in post-processing. Should I change the limit and re-process the profiles? Regards, . HIN file missing - HIN PROD file missing: Hi, we expected to receive new HIN files. I checked that HIN files are missing on the bucket. We received files at