Esri EADP19-001 Exam - Topic 10 Question 47 Discussion

Actual exam question for Esri's EADP19-001 exam
Question #: 47
Topic #: 10

An ArcGIS user in a county office receives a large volume of data in shapefile, coverage, and table formats. The user is responsible for converting the data to a standard feature class format and writing it into a central enterprise geodatabase. The data must be simultaneously available for editing and spatial analysis tasks.

How should the Windows Task Scheduler be used to achieve these goals?

A) schedule to run a geoprocessing service with the tasks and maintain a log of script validation errors
B) schedule to run a geoprocessing service with the tasks and allocate a specific time to run each task in the service
C) patch the tasks in a script and schedule to run the script at a scheduled time at non-peak hours
D) patch the tasks in a model and schedule to run the model at a scheduled time at non-peak hours

Suggested Answer: A, D

Contribute your Thoughts:

Marge
3 months ago
Wait, can we really run models at non-peak hours? That’s new to me!
upvoted 0 times
...
Oliva
3 months ago
A is too complicated, I’d stick with B or C.
upvoted 0 times
...
Svetlana
3 months ago
D seems like a good choice, but I wonder if it’s efficient enough.
upvoted 0 times
...
Aliza
4 months ago
I think C is better for avoiding peak hours.
upvoted 0 times
...
Jettie
4 months ago
Option B sounds solid for task allocation!
upvoted 0 times
...
Ty
4 months ago
I’m a bit confused about whether to patch tasks in a script or a model. I remember both options were mentioned, but I can't decide which is more efficient for this scenario.
upvoted 0 times
...
Maybelle
4 months ago
I practiced a similar question where we had to automate data conversion, and I feel like running a model at non-peak hours could help with performance issues.
upvoted 0 times
...
Bernardine
4 months ago
I think scheduling a geoprocessing service with specific tasks might be the way to go, but I can't recall if that was covered in detail.
upvoted 0 times
...
Stephaine
5 months ago
I remember we talked about using the Task Scheduler for automating processes, but I'm not sure if it's better to run a script or a model for this situation.
upvoted 0 times
...
Roslyn
5 months ago
I'm a bit confused by the question. Is it asking specifically about using the Task Scheduler, or are there other potential approaches we could consider? I want to make sure I fully understand the requirements before deciding on a solution.
upvoted 0 times
...
Jules
5 months ago
Okay, let me think this through. I believe the key is to package the data conversion and loading tasks into a script or model, and then schedule that to run at a specific time. That way it can be automated and run during non-peak hours.
upvoted 0 times
...
Thaddeus
5 months ago
Hmm, I'm a little unsure about the best approach here. I know the Task Scheduler can be used to run scripts and models, but I'm not sure which option would be most appropriate for this scenario.
upvoted 0 times
...
Romana
5 months ago
This seems like a straightforward question about using the Windows Task Scheduler to automate data conversion and loading tasks. I think I have a good handle on the key concepts here.
upvoted 0 times
...
Sunshine
5 months ago
Based on the details provided, I think option D is the best approach. Packaging the tasks into a model and scheduling that to run during non-peak hours seems like the most efficient way to handle the large volume of data and ensure it's available for editing and analysis.
upvoted 0 times
...
Olen
10 months ago
Option A seems a bit too basic. I mean, a simple log of script validation errors? Where's the excitement, the drama? I'm going with D for the win!
upvoted 0 times
Maybelle
9 months ago
Definitely. Plus, scheduling it at non-peak hours ensures minimal disruption to other tasks.
upvoted 0 times
...
Maira
9 months ago
Yeah, I agree. It's a more advanced approach compared to just logging script validation errors.
upvoted 0 times
...
Ruby
10 months ago
I think D is the way to go. Running a model at non-peak hours sounds efficient.
upvoted 0 times
...
...
Roxanne
10 months ago
Haha, patching tasks in a script? That's like trying to fix a leaky faucet with duct tape. Option D all the way, folks!
upvoted 0 times
Harrison
9 months ago
D) patch the tasks in a model and schedule to run the model at a scheduled time at non-peak hours
upvoted 0 times
...
Desire
9 months ago
C) patch the tasks in a script and schedule to run the script at a scheduled time at non-peak hours
upvoted 0 times
...
Leota
10 months ago
B) schedule to run a geoprocessing service with the tasks and allocate a specific time to run each task in the service
upvoted 0 times
...
Adelina
10 months ago
A) schedule to run a geoprocessing service with the tasks and maintain log of script validation errors
upvoted 0 times
...
...
Dewitt
11 months ago
I'm torn between B and D. Running a geoprocessing service or a model, both seem viable options. Guess I'll have to weigh the pros and cons of each approach.
upvoted 0 times
Lavonna
10 months ago
D) patch the tasks in a model and schedule to run the model at a scheduled time at non-peak hours
upvoted 0 times
...
Adria
10 months ago
B) schedule to run a geoprocessing service with the tasks and allocate a specific time to run each task in the service
upvoted 0 times
...
...
Clarence
11 months ago
That's a good point, Phillip. Running a model can streamline the process and reduce errors.
upvoted 0 times
...
Kattie
11 months ago
Option D sounds like the way to go. Scheduling a model to run during non-peak hours ensures the data is available for editing and analysis without disrupting the workflow.
upvoted 0 times
Glennis
10 months ago
Scheduling tasks during non-peak hours is a smart way to manage data processing in a county office.
upvoted 0 times
...
Mauricio
10 months ago
It's important to make sure the data is accessible for editing and analysis when needed.
upvoted 0 times
...
Marcelle
10 months ago
I agree, running a model at a scheduled time is efficient and won't interfere with other tasks.
upvoted 0 times
...
...
Phillip
11 months ago
I disagree, I believe option D is more efficient. Running a model at non-peak hours is the way to go.
upvoted 0 times
...
Clarence
11 months ago
I think option B is the best choice. It allows for specific timing for each task.
upvoted 0 times
...
