Case Type A and Case Type B capture information about multiple line items. Each line item is an instance of the same Line Item data type. Separate work pool classes are used for Case Types A and B.
What is the optimal data model design to meet these requirements?
Embedded List with Declare Index:
Both Case Types A and B possess an embedded list of line items. Using a Declare Index against each embedded list ensures efficient data retrieval and indexing.
Changing the direct inheritance class for each Declare Index class to the Line Item data type allows for proper data management and querying.
Reference:
Pega best practices for data modeling and indexing recommend using Declare Index for embedded lists to enhance performance and data organization.
Therefore, the correct answer is:
D . Case Types A and B both possess an embedded list of line items. Define a Declare Index against each embedded list. Change the direct inheritance class for each Declare Index class to the Line Item data type.
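Conceptually, a Declare Index copies each embedded line item into its own index record, keyed back to the parent case, so the items become queryable outside the case BLOB. A minimal sketch of that flattening idea in plain Python (not Pega rules; the dictionary keys are illustrative, not actual Pega property names):

```python
# Hypothetical sketch of what a Declare Index does conceptually:
# each embedded line item becomes a separate, queryable index record
# that carries a reference to its parent case.
def build_index(cases):
    index = []
    for case in cases:
        for i, item in enumerate(case["line_items"], start=1):
            # One index record per embedded list element.
            index.append({"case_id": case["id"], "item_no": i, **item})
    return index
```

Because each index record inherits from the Line Item data type, both case types' line items land in the same structure and can be reported on together.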
A queue processor is configured with a Max attempts value of 4, an Initial delay of 2 minutes, and a Delay factor of 2 for processing retries.
What is the delay between the second and third attempt, assuming each previous attempt fails?
Max Attempts: The maximum number of retry attempts is 4.
Initial Delay: The delay before the first retry attempt is 2 minutes.
Delay Factor: The delay factor is 2, so the delay doubles with each subsequent retry attempt.
Calculating Delays:
The original processing fails; delay before the first retry attempt = Initial delay = 2 minutes.
First attempt fails; delay before the second attempt = 2 minutes * 2 = 4 minutes.
Second attempt fails; delay before the third attempt = 4 minutes * 2 = 8 minutes.
Delay Between Attempts: The delay between the second and third attempt is therefore 8 minutes.
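The retry schedule is a simple geometric progression. A minimal sketch of the arithmetic in plain Python (not Pega configuration):

```python
# Illustration of queue-processor retry backoff:
# the delay before retry attempt n is initial_delay * delay_factor^(n - 1).
def retry_delay(attempt, initial_delay=2, delay_factor=2):
    """Delay in minutes before the given retry attempt (1-based)."""
    return initial_delay * delay_factor ** (attempt - 1)
```

With Initial delay = 2 and Delay factor = 2, the delays before attempts 1 through 4 are 2, 4, 8, and 16 minutes; the gap between the second and third attempt is 8 minutes.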
Reference:
Pega documentation on queue processor configuration and Pega Platform 8.x help files on processing retries and delay factors.
MyHealth Corporation wants to use the age of the claim to increase the urgency of the assignment so that persons processing the claims work on the most urgent claims first. The claim assignment urgency increases by 1 each day the claim remains in an Unresolved status. At any time, MyHealth has up to 10,000 claims that are in process. Claims in the PendingProcessing workbasket are subject to this calculation. The application updates the claim urgency daily before the work day begins. All claims are processed within 30 days.
Which approach satisfies the claim urgency requirement and provides the best experience for the user who processes the claims?
Job Scheduler:
Using a job scheduler on a dedicated node is a robust solution for incrementing the urgency of claims. The scheduler can run daily to update the value of pyUrgencyAssignAdjust by 1 for every assignment in the PendingProcessing workbasket.
Reference:
Pega documentation on job schedulers and background processing highlights the use of job schedulers for periodic updates and batch processing.
Therefore, the correct answer is:
C . Use a job scheduler on a dedicated node to increase the value of pyUrgencyAssignAdjust by 1 for every assignment that matches the selection criteria.
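The scheduler's daily pass can be sketched in plain Python (not Pega rules; the field and workbasket names below mirror the ones in the question, but the data structures are illustrative):

```python
# Hypothetical sketch of the daily job-scheduler logic: add 1 to
# pyUrgencyAssignAdjust for every unresolved claim assignment in the
# PendingProcessing workbasket.
def increase_urgency(assignments):
    for a in assignments:
        if a["workbasket"] == "PendingProcessing" and a["status"] == "Unresolved":
            a["pyUrgencyAssignAdjust"] += 1
    return assignments
```

Running this once per day before the work day begins means a claim that has been pending for n days carries an urgency adjustment of n, so the oldest claims surface first in the worklist.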
In a telecom customer service application, birthday wishes must be emailed to customers every day at midnight, according to KYC records. The business requirement states that the system must record any email delivery failures.
There is an infrastructure limitation that prevents using an external Stream node; the only other available node types are WebUser, BackgroundProcessing, and BIX. Which two options can form an alternative interim solution? (Choose Two)
Job Scheduler with SendEmailNotification Utility:
Running a job scheduler to pull customer records matching the current date and using the SendEmailNotification utility ensures that emails are sent out at midnight. Recording failures in a data type captures the necessary error information.
Job Scheduler with CorrNew Utility:
Using the CorrNew utility to send emails and record any exceptions in a data type also satisfies the requirement for recording email delivery failures.
Reference:
Pega documentation on job schedulers and email utilities provides guidelines for implementing scheduled tasks and handling email delivery.
Therefore, the correct answers are:
B . Run a job scheduler, and pull the customers that match the current date. For each record, invoke the SendEmailNotification utility. If any exception is received in email delivery, mark the failure in a data type.
D . Run a job scheduler, and pull the customers that match the current date. For each record, invoke the CorrNew utility. If any exception is received in email delivery, mark the failure in a data type.
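The shared shape of options B and D can be sketched in plain Python (not Pega rules; `send_email` stands in for the SendEmailNotification or CorrNew utility, and the failure records stand in for instances of the data type):

```python
import datetime

# Hypothetical sketch of the job-scheduler logic: select customers whose
# birthday matches today, attempt email delivery, and record any failure.
def send_birthday_wishes(customers, send_email, today=None):
    """Return failure records for customers whose email delivery raised."""
    today = today or datetime.date.today()
    failures = []
    for c in customers:
        if (c["birthday"].month, c["birthday"].day) == (today.month, today.day):
            try:
                send_email(c["email"])  # stand-in for SendEmailNotification / CorrNew
            except Exception as exc:
                failures.append({"customer": c["id"], "error": str(exc)})
    return failures
```

Scheduling this at midnight on a BackgroundProcessing node satisfies both halves of the requirement: the wishes go out daily, and every delivery failure is persisted for follow-up.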
An external application calls a Pega REST service, which takes a significant amount of time to respond. Pega Platform returns a unique identifier instantly and runs the service without the application waiting.
Which configuration implements this functionality?
Service Request Processor for Asynchronous Processing:
Configuring a REST service to run asynchronously using a service request processor ensures that the service runs without making the external application wait. The system returns a unique identifier instantly and processes the request in the background.
Reference:
Pega's best practices for REST services and asynchronous processing recommend using service request processors for handling long-running processes.
Therefore, the correct answer is:
B . A REST service that runs asynchronously by using a service request processor.
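The asynchronous pattern itself is straightforward: hand back an identifier immediately and do the work in the background. A minimal sketch in plain Python (not Pega configuration; in Pega the queued execution and result storage are handled by the service request processor):

```python
import threading
import uuid

# Hypothetical sketch of asynchronous service execution: the caller gets
# a unique request ID instantly, while the long-running work proceeds on
# a background thread and stores its result for later retrieval.
results = {}

def handle_request(long_running_task):
    request_id = str(uuid.uuid4())  # returned to the caller immediately
    def worker():
        results[request_id] = long_running_task()
    threading.Thread(target=worker).start()
    return request_id
```

The caller later polls with the ID to fetch the result, which mirrors how a client checks the status of a queued service request.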