The latest Amazon DAS-C01 dumps from Lead4Pass help you pass the DAS-C01 exam on your first attempt! Lead4Pass has updated its Amazon DAS-C01 VCE dumps and DAS-C01 PDF dumps; the DAS-C01 exam questions have been updated and the answers corrected! Get the latest Lead4Pass DAS-C01 dumps with VCE and PDF: https://www.leads4pass.com/das-c01.html (Q&As: 77 dumps)

[Free DAS-C01 PDF] Latest Amazon DAS-C01 dumps PDF collected by Lead4Pass and shared via Google Drive:
https://drive.google.com/file/d/1SBtAYu3S8dc7a3iiAGAF36RHhwH02Rsx/

[Lead4Pass DAS-C01 YouTube] Amazon DAS-C01 dumps can be viewed on YouTube, shared by Lead4Pass

https://youtube.com/watch?v=yDbPebQ57Q8

Latest Amazon DAS-C01 Exam Practice Questions and Answers

QUESTION 1
A retail company's data analytics team recently created multiple product sales analysis dashboards for the average
selling price per product using Amazon QuickSight. The dashboards were created from .csv files uploaded to Amazon
S3. The team is now planning to share the dashboards with the respective external product owners by creating
individual users in Amazon QuickSight. For compliance and governance reasons, restricting access is a key
requirement. The product owners should view only their respective product analysis in the dashboard reports.
Which approach should the data analytics team take to allow product owners to view only their products in the
dashboard?
A. Separate the data by product and use S3 bucket policies for authorization.
B. Separate the data by product and use IAM policies for authorization.
C. Create a manifest file with row-level security.
D. Create dataset rules with row-level security.
Correct Answer: D
Amazon QuickSight enforces row-level security through dataset rules, so each product owner sees only the rows for their own product.
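For context, row-level security in QuickSight is driven by a permissions (rules) dataset that maps users to the field values they may see. Below is a minimal sketch of creating and uploading such a rules file with Python/boto3; the bucket name, user names, and product values are hypothetical, and the file would then be registered in QuickSight and attached to the sales dataset as its row-level permission dataset.

```python
import boto3

# Hypothetical rules file: each QuickSight user may only see rows
# whose "product" column matches the value listed for that user.
rules_csv = (
    "UserName,product\n"
    "owner-sneakers,Sneakers\n"
    "owner-sandals,Sandals\n"
)

s3 = boto3.client("s3")

# Upload the rules file to S3 so it can be imported into QuickSight
# as the permissions dataset for row-level security.
s3.put_object(
    Bucket="example-quicksight-rls-bucket",  # placeholder bucket name
    Key="rls/product-owner-rules.csv",
    Body=rules_csv.encode("utf-8"),
)
```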

 

QUESTION 2
A large company has a central data lake to run analytics across different departments. Each department uses a
separate AWS account and stores its data in an Amazon S3 bucket in that account. Each AWS account uses the AWS
Glue Data Catalog as its data catalog. There are different data lake access requirements based on roles. Associate
analysts should only have read access to their departmental data. Senior data analysts can have access to multiple
departments including theirs, but for a subset of columns only.
Which solution achieves these required access patterns to minimize costs and administrative tasks?
A. Consolidate all AWS accounts into one account. Create different S3 buckets for each department and move all the
data from every account to the central data lake account. Migrate the individual data catalogs into a central data catalog
and apply fine-grained permissions to give to each user the required access to tables and databases in AWS Glue and
Amazon S3.
B. Keep the account structure and the individual AWS Glue catalogs on each account. Add a central data lake account
and use AWS Glue to catalog data from various accounts. Configure cross-account access for AWS Glue crawlers to
scan the data in each departmental S3 bucket to identify the schema and populate the catalog. Add the senior data
analysts into the central account and apply highly detailed access controls in the Data Catalog and Amazon S3.
C. Set up an individual AWS account for the central data lake. Use AWS Lake Formation to catalog the cross-account
locations. On each individual S3 bucket, modify the bucket policy to grant S3 permissions to the Lake Formation service-linked role. Use Lake Formation permissions to add fine-grained access controls to allow senior analysts to view specific
tables and columns.
D. Set up an individual AWS account for the central data lake and configure a central S3 bucket. Use an AWS Lake
Formation blueprint to move the data from the various buckets into the central S3 bucket. On each individual bucket,
modify the bucket policy to grant S3 permissions to the Lake Formation service-linked role. Use Lake Formation
permissions to add fine-grained access controls for both associate and senior analysts to view specific tables and
columns.
Correct Answer: C
AWS Lake Formation catalogs the data in place across accounts and applies fine-grained, column-level permissions centrally, which minimizes both data movement costs and administrative overhead.
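To illustrate the fine-grained controls in option C, the sketch below grants a senior-analyst role SELECT on only a subset of columns through Lake Formation. The account ID, role, database, table, and column names are placeholders.

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Grant column-level SELECT on a departmental table to a senior analyst role.
# All identifiers below are hypothetical placeholders.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/senior-data-analyst"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_dept_db",
            "Name": "transactions",
            "ColumnNames": ["order_id", "order_date", "total_amount"],
        }
    },
    Permissions=["SELECT"],
)
```

Associate analysts would receive a similar grant, but scoped to whole tables in their own department's database only.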


QUESTION 3
Once a month, a company receives a 100 MB .csv file compressed with gzip. The file contains 50,000 property listing
records and is stored in Amazon S3 Glacier. The company needs its data analyst to query a subset of the data for a
specific vendor.
What is the most cost-effective solution?
A. Load the data into Amazon S3 and query it with Amazon S3 Select.
B. Query the data from Amazon S3 Glacier directly with Amazon Glacier Select.
C. Load the data to Amazon S3 and query it with Amazon Athena.
D. Load the data to Amazon S3 and query it with Amazon Redshift Spectrum.
Correct Answer: B
Amazon S3 Glacier Select runs the query against the archive in place, avoiding a restore to Amazon S3 and extra storage, which makes it the most cost-effective choice for an occasional query over a subset of the data.
Reference: https://aws.amazon.com/athena/faqs/
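A rough sketch of a Glacier Select job (option B) using boto3 follows. The vault name, archive ID, output bucket, and the assumption that the archive can be read as a CSV are all placeholders; the exact serialization settings should be checked against the S3 Glacier documentation.

```python
import boto3

glacier = boto3.client("glacier")

# Initiate a 'select' job that runs SQL against the archived CSV in place
# and writes matching rows to S3. All identifiers are hypothetical.
glacier.initiate_job(
    accountId="-",  # "-" means the account that owns the credentials
    vaultName="property-listings-vault",
    jobParameters={
        "Type": "select",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",
        "SelectParameters": {
            "ExpressionType": "SQL",
            "Expression": "SELECT * FROM archive WHERE vendor_id = 'VENDOR123'",
            "InputSerialization": {"csv": {"FileHeaderInfo": "USE"}},
            "OutputSerialization": {"csv": {}},
        },
        "OutputLocation": {
            "S3": {"BucketName": "example-select-results", "Prefix": "vendor123/"}
        },
    },
)
```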

 

QUESTION 4
A manufacturing company has been collecting IoT sensor data from devices on its factory floor for a year and is storing
the data in Amazon Redshift for daily analysis. A data analyst has determined that, at an expected ingestion rate of
about 2 TB per day, the cluster will be undersized in less than 4 months. A long-term solution is needed. The data
analyst has indicated that most queries only reference the most recent 13 months of data, yet there are also quarterly
reports that need to query all the data generated from the past 7 years. The chief technology officer (CTO) is concerned
about the costs, administrative effort, and performance of a long-term solution.
Which solution should the data analyst use to meet these requirements?
A. Create a daily job in AWS Glue to UNLOAD records older than 13 months to Amazon S3 and delete those records
from Amazon Redshift. Create an external table in Amazon Redshift to point to the S3 location. Use Amazon Redshift
Spectrum to join data that is older than 13 months.
B. Take a snapshot of the Amazon Redshift cluster. Restore the cluster to a new cluster using dense storage nodes with
additional storage capacity.
C. Execute a CREATE TABLE AS SELECT (CTAS) statement to move records that are older than 13 months to
quarterly partitioned data in Amazon Redshift Spectrum backed by Amazon S3.
D. Unload all the tables in Amazon Redshift to an Amazon S3 bucket using S3 Intelligent-Tiering. Use AWS Glue to
crawl the S3 bucket location to create external tables in an AWS Glue Data Catalog. Create an Amazon EMR cluster
using Auto Scaling for any daily analytics needs, and use Amazon Athena for the quarterly reports, with both using the
same AWS Glue Data Catalog.
Correct Answer: A
Offloading records older than 13 months to Amazon S3 and querying them through Redshift Spectrum keeps the cluster sized for recent data, while the quarterly reports can still join the full 7 years of history.
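To make option A concrete, here is a hedged sketch using the Redshift Data API; the cluster, database, user, IAM role, table names, and S3 locations are assumptions, and the cutoff logic is simplified.

```python
import boto3

redshift_data = boto3.client("redshift-data")

def run_sql(sql: str) -> None:
    # Submit a single statement through the Redshift Data API (identifiers are placeholders).
    redshift_data.execute_statement(
        ClusterIdentifier="sensor-warehouse",
        Database="analytics",
        DbUser="etl_user",
        Sql=sql,
    )

# Daily job: export records older than 13 months to S3, then remove them from the cluster.
run_sql("""
    UNLOAD ('SELECT * FROM sensor_readings WHERE reading_date < DATEADD(month, -13, CURRENT_DATE)')
    TO 's3://example-sensor-archive/readings/'
    IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-spectrum-role'
    FORMAT AS PARQUET
""")
run_sql("DELETE FROM sensor_readings WHERE reading_date < DATEADD(month, -13, CURRENT_DATE)")

# An external (Spectrum) table defined over the same S3 prefix lets queries
# join the hot 13 months in Redshift with the archived history when needed.
```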

 

QUESTION 5
A financial company uses Apache Hive on Amazon EMR for ad-hoc queries. Users are complaining of sluggish
performance.
A data analyst notes the following:
Approximately 90% of the queries are submitted 1 hour after the market opens. Hadoop Distributed File System (HDFS)
utilization never exceeds 10%.
Which solution would help address the performance issues?
A. Create instance fleet configurations for core and task nodes. Create an automatic scaling policy to scale out the
instance groups based on the Amazon CloudWatch CapacityRemainingGB metric. Create an automatic scaling policy to
scale in the instance fleet based on the CloudWatch CapacityRemainingGB metric.
B. Create instance fleet configurations for core and task nodes. Create an automatic scaling policy to scale out the
instance groups based on the Amazon CloudWatch YARNMemoryAvailablePercentage metric. Create an automatic
scaling policy to scale in the instance fleet based on the CloudWatch YARNMemoryAvailablePercentage metric.
C. Create instance group configurations for core and task nodes. Create an automatic scaling policy to scale out the
instance groups based on the Amazon CloudWatch CapacityRemainingGB metric. Create an automatic scaling policy to
scale in the instance groups based on the CloudWatch CapacityRemainingGB metric.
D. Create instance group configurations for core and task nodes. Create an automatic scaling policy to scale out the
instance groups based on the Amazon CloudWatch YARNMemoryAvailablePercentage metric. Create an automatic
scaling policy to scale in the instance groups based on the CloudWatch YARNMemoryAvailablePercentage metric.
Correct Answer: D
HDFS utilization never exceeds 10%, so storage is not the bottleneck; automatic scaling policies attach to instance groups and should key off YARNMemoryAvailablePercentage to absorb the query spike after the market opens.
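As a sketch of option D, the snippet below attaches an automatic scaling policy to an existing task instance group that scales out when YARNMemoryAvailablePercentage drops. The cluster ID, instance group ID, capacities, and thresholds are hypothetical; a matching scale-in rule would be added the same way.

```python
import boto3

emr = boto3.client("emr")

# Attach an automatic scaling policy to an instance group (placeholders throughout).
emr.put_auto_scaling_policy(
    ClusterId="j-EXAMPLECLUSTER",
    InstanceGroupId="ig-EXAMPLETASKGROUP",
    AutoScalingPolicy={
        "Constraints": {"MinCapacity": 2, "MaxCapacity": 20},
        "Rules": [
            {
                "Name": "ScaleOutOnLowYarnMemory",
                "Action": {
                    "SimpleScalingPolicyConfiguration": {
                        "AdjustmentType": "CHANGE_IN_CAPACITY",
                        "ScalingAdjustment": 2,
                        "CoolDown": 300,
                    }
                },
                "Trigger": {
                    "CloudWatchAlarmDefinition": {
                        "ComparisonOperator": "LESS_THAN",
                        "EvaluationPeriods": 1,
                        "MetricName": "YARNMemoryAvailablePercentage",
                        "Namespace": "AWS/ElasticMapReduce",
                        "Period": 300,
                        "Statistic": "AVERAGE",
                        "Threshold": 15.0,
                        "Unit": "PERCENT",
                    }
                },
            }
        ],
    },
)
```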

 

QUESTION 6
A US-based sneaker retail company launched its global website. All the transaction data is stored in Amazon RDS and
curated historic transaction data is stored in Amazon Redshift in the us-east-1 Region. The business intelligence (BI) team wants to enhance the user experience by providing a dashboard for sneaker trends.
The BI team decides to use Amazon QuickSight to render the website dashboards. During development, a team in
Japan provisioned Amazon QuickSight in ap-northeast-1. The team is having difficulty connecting Amazon QuickSight
from ap-northeast-1 to Amazon Redshift in us-east-1.
Which solution will solve this issue and meet the requirements?
A. In the Amazon Redshift console, choose to configure cross-Region snapshots and set the destination Region as ap-northeast-1. Restore the Amazon Redshift cluster from the snapshot and connect to Amazon QuickSight launched in ap-northeast-1.
B. Create a VPC endpoint from the Amazon QuickSight VPC to the Amazon Redshift VPC so Amazon QuickSight can
access data from Amazon Redshift.
C. Create an Amazon Redshift endpoint connection string with Region information in the string and use this connection
string in Amazon QuickSight to connect to Amazon Redshift.
D. Create a new security group for Amazon Redshift in us-east-1 with an inbound rule authorizing access from the
appropriate IP address range for the Amazon QuickSight servers in ap-northeast-1.
Correct Answer: D
Amazon QuickSight in ap-northeast-1 can reach the Redshift cluster in us-east-1 once the cluster's security group allows inbound traffic from the QuickSight IP address range for ap-northeast-1.
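A minimal sketch of option D with boto3 is below: authorize the QuickSight IP range for ap-northeast-1 on the Redshift security group in us-east-1. The security group ID is a placeholder and the CIDR shown is illustrative only; the current QuickSight IP ranges per Region are listed in the QuickSight documentation.

```python
import boto3

# The Redshift cluster lives in us-east-1, so modify its security group in that Region.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder Redshift security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5439,  # default Redshift port
            "ToPort": 5439,
            "IpRanges": [
                {
                    # Illustrative CIDR only -- substitute the published
                    # QuickSight IP range for ap-northeast-1.
                    "CidrIp": "192.0.2.0/27",
                    "Description": "Amazon QuickSight (ap-northeast-1)",
                }
            ],
        }
    ],
)
```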


QUESTION 7
A company\\’s marketing team has asked for help in identifying a high performing long-term storage service for their
data based on the following requirements:
The data size is approximately 32 TB uncompressed.
There is a low volume of single-row inserts each day.
There is a high volume of aggregation queries each day.
Multiple complex joins are performed.
The queries typically involve a small subset of the columns in a table.
Which storage service will provide the MOST performant solution?
A. Amazon Aurora MySQL
B. Amazon Redshift
C. Amazon Neptune
D. Amazon Elasticsearch
Correct Answer: B

 

QUESTION 8
A large financial company is running its ETL process. Part of this process is to move data from Amazon S3 into an
Amazon Redshift cluster. The company wants to use the most cost-efficient method to load the dataset into Amazon
Redshift.
Which combination of steps would meet these requirements? (Choose two.)
A. Use the COPY command with the manifest file to load data into Amazon Redshift.
B. Use S3DistCp to load files into Amazon Redshift.
C. Use temporary staging tables during the loading process.
D. Use the UNLOAD command to upload data into Amazon Redshift.
E. Use Amazon Redshift Spectrum to query files from Amazon S3.
Correct Answer: AC
The COPY command with a manifest file loads the listed files in parallel directly into Amazon Redshift, and temporary staging tables make the subsequent merge/upsert efficient; UNLOAD exports data out of Redshift, and Redshift Spectrum queries data in S3 without loading it.
Reference: https://aws.amazon.com/blogs/big-data/top-8-best-practices-for-high-performance-etl-processing-using-amazon-redshift/
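To illustrate options A and C together, here is a hedged sketch using the Redshift Data API: create a temporary staging table, then COPY from a manifest file. The cluster, database, bucket, table, and IAM role names are assumptions.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# The manifest file (already in S3) lists the exact data files to load, e.g.:
# {"entries": [{"url": "s3://example-etl-bucket/sales/part-0000.gz", "mandatory": true}]}

redshift_data.batch_execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sqls=[
        # Stage the incoming files in a temporary table first.
        "CREATE TEMP TABLE stage_sales (LIKE sales)",
        """COPY stage_sales
           FROM 's3://example-etl-bucket/manifests/sales.manifest'
           IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-copy-role'
           GZIP MANIFEST""",
        # A merge/upsert from stage_sales into sales would follow here.
    ],
)
```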

 


QUESTION 9
A company stores its sales and marketing data that includes personally identifiable information (PII) in Amazon S3. The
company allows its analysts to launch their own Amazon EMR cluster and run analytics reports with the data. To meet
compliance requirements, the company must ensure the data is not publicly accessible throughout this process. A data
engineer has secured Amazon S3 but must ensure the individual EMR clusters created by the analysts are not exposed
to the public internet.
Which solution should the data engineer use to meet this compliance requirement with the LEAST amount of effort?
A. Create an EMR security configuration and ensure the security configuration is associated with the EMR clusters
when they are created.
B. Check the security group of the EMR clusters regularly to ensure it does not allow inbound traffic from IPv4 0.0.0.0/0
or IPv6 ::/0.
C. Enable the block public access setting for Amazon EMR at the account level before any EMR cluster is created.
D. Use AWS WAF to block public internet access to the EMR clusters across the board.
Correct Answer: C
Enabling the EMR block public access setting once at the account level prevents clusters from launching with security groups that allow unrestricted public inbound access, which requires the least ongoing effort.
Reference: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-security-groups.html
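Option C is a one-time, per-account, per-Region setting; a minimal boto3 sketch is below. The Region and the SSH exception are only examples of how the configuration might look.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Block security groups with public inbound rules for all future EMR clusters
# in this account and Region; optionally still permit SSH (port 22) if required.
emr.put_block_public_access_configuration(
    BlockPublicAccessConfiguration={
        "BlockPublicSecurityGroupRules": True,
        "PermittedPublicSecurityGroupRuleRanges": [
            {"MinRange": 22, "MaxRange": 22}
        ],
    }
)
```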

 

QUESTION 10
A company that produces network devices has millions of users. Data is collected from the devices on an hourly basis
and stored in an Amazon S3 data lake.
The company runs analyses on the last 24 hours of data flow logs for abnormality detection and to troubleshoot and
resolve user issues. The company also analyzes historical logs dating back 2 years to discover patterns and look for
improvement opportunities.
The data flow logs contain many metrics, such as date, timestamp, source IP, and target IP. There are about 10 billion
events every day.
How should this data be stored for optimal performance?
A. In Apache ORC partitioned by date and sorted by source IP
B. In compressed .csv partitioned by date and sorted by source IP
C. In Apache Parquet partitioned by source IP and sorted by date
D. In compressed nested JSON partitioned by source IP and sorted by date
Correct Answer: A
A columnar format such as Apache ORC partitioned by date serves both the 24-hour and 2-year scans efficiently, and sorting by source IP speeds filtering on that field; partitioning 10 billion daily events by source IP would create an unmanageable number of partitions.
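As a sketch of how such a table might be declared for Athena and the Glue Data Catalog (option A), with hypothetical bucket, database, and column names:

```python
import boto3

athena = boto3.client("athena")

# External table over ORC flow logs partitioned by date.
# Writers would sort records by source_ip within each daily partition.
create_table_sql = """
CREATE EXTERNAL TABLE IF NOT EXISTS flow_logs (
    event_time timestamp,
    source_ip string,
    target_ip string,
    bytes_sent bigint
)
PARTITIONED BY (event_date date)
STORED AS ORC
LOCATION 's3://example-flow-logs/orc/'
"""

athena.start_query_execution(
    QueryString=create_table_sql,
    QueryExecutionContext={"Database": "network_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```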

 

QUESTION 11
A banking company is currently using an Amazon Redshift cluster with dense storage (DS) nodes to store sensitive
data. An audit found that the cluster is unencrypted. Compliance requirements state that a database with sensitive data
must be encrypted through a hardware security module (HSM) with automated key rotation.
Which combination of steps is required to achieve compliance? (Choose two.)
A. Set up a trusted connection with HSM using a client and server certificate with automatic key rotation.
B. Modify the cluster with an HSM encryption option and automatic key rotation.
C. Create a new HSM-encrypted Amazon Redshift cluster and migrate the data to the new cluster.
D. Enable HSM with key rotation through the AWS CLI.
E. Enable Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) encryption in the HSM.
Correct Answer: AC
Using an HSM requires a trusted connection established with client and server certificates, and an existing unencrypted cluster cannot simply be modified to use HSM encryption; a new HSM-encrypted cluster must be created and the data migrated to it.
Reference: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html
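As a sketch of the two steps, the snippet below registers an HSM client certificate and configuration (the trusted connection) and then launches a new HSM-encrypted cluster that the data would be migrated into; every identifier, address, and credential is a placeholder.

```python
import boto3

redshift = boto3.client("redshift")

# Step 1: a client certificate plus an HSM configuration establish the
# trusted connection between Amazon Redshift and the HSM (placeholders throughout).
redshift.create_hsm_client_certificate(
    HsmClientCertificateIdentifier="example-hsm-client-cert"
)
redshift.create_hsm_configuration(
    HsmConfigurationIdentifier="example-hsm-config",
    Description="HSM for sensitive banking data",
    HsmIpAddress="10.0.0.50",
    HsmPartitionName="example-partition",
    HsmPartitionPassword="example-password",
    HsmServerPublicCertificate="-----BEGIN CERTIFICATE-----...",
)

# Step 2: launch a new HSM-encrypted cluster, then migrate the data into it.
redshift.create_cluster(
    ClusterIdentifier="banking-dw-hsm",
    NodeType="ra3.4xlarge",
    MasterUsername="admin",
    MasterUserPassword="ExamplePassw0rd!",
    NumberOfNodes=4,
    Encrypted=True,
    HsmClientCertificateIdentifier="example-hsm-client-cert",
    HsmConfigurationIdentifier="example-hsm-config",
)
```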

 

QUESTION 12
A company has a data warehouse in Amazon Redshift that is approximately 500 TB in size. New data is imported every
few hours and read-only queries are run throughout the day and evening. There is a particularly heavy load with no
writes for several hours each morning on business days. During those hours, some queries are queued and take a long
time to execute. The company needs to optimize query execution and avoid any downtime.
What is the MOST cost-effective solution?
A. Enable concurrency scaling in the workload management (WLM) queue.
B. Add more nodes using the AWS Management Console during peak hours. Set the distribution style to ALL.
C. Use elastic resize to quickly add nodes during peak times. Remove the nodes when they are not needed.
D. Use a snapshot, restore, and resize operation. Switch to the new target cluster.
Correct Answer: A
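Concurrency scaling is enabled per WLM queue. A hedged sketch that sets a queue's concurrency_scaling mode to auto through a custom cluster parameter group follows; the parameter group name and queue layout are assumptions and should be validated against the Redshift WLM documentation.

```python
import json
import boto3

redshift = boto3.client("redshift")

# WLM configuration with concurrency scaling enabled for the queue
# handling the read-only morning workload (layout is illustrative).
wlm_config = [
    {
        "query_group": [],
        "user_group": [],
        "query_concurrency": 5,
        "concurrency_scaling": "auto",
    }
]

redshift.modify_cluster_parameter_group(
    ParameterGroupName="example-custom-parameter-group",
    Parameters=[
        {
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
        }
    ],
)
```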

 

QUESTION 13
A company is building a data lake and needs to ingest data from a relational database that has time-series data. The
company wants to use managed services to accomplish this. The process needs to be scheduled daily and bring
incremental data only from the source into Amazon S3.
What is the MOST cost-effective approach to meet these requirements?
A. Use AWS Glue to connect to the data source using JDBC Drivers. Ingest incremental records only using job
bookmarks.
B. Use AWS Glue to connect to the data source using JDBC Drivers. Store the last updated key in an Amazon
DynamoDB table and ingest the data using the updated key as a filter.
C. Use AWS Glue to connect to the data source using JDBC Drivers and ingest the entire dataset. Use appropriate
Apache Spark libraries to compare the dataset, and find the delta.
D. Use AWS Glue to connect to the data source using JDBC Drivers and ingest the full data. Use AWS DataSync to
ensure the delta only is written into Amazon S3.
Correct Answer: A
AWS Glue job bookmarks track previously processed data automatically, so a scheduled Glue job over the JDBC connection ingests only incremental records without maintaining state in DynamoDB.
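A sketch of option A: a scheduled Glue job with bookmarks enabled so each daily run picks up only new rows from the JDBC source. The job name, role, script location, Glue version, and schedule are placeholders.

```python
import boto3

glue = boto3.client("glue")

# Create a Glue ETL job with job bookmarks enabled (placeholders throughout).
glue.create_job(
    Name="daily-incremental-ingest",
    Role="arn:aws:iam::111122223333:role/glue-etl-role",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://example-glue-scripts/incremental_ingest.py",
    },
    DefaultArguments={"--job-bookmark-option": "job-bookmark-enable"},
    GlueVersion="3.0",
)

# Schedule the job to run once per day.
glue.create_trigger(
    Name="daily-incremental-trigger",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",
    Actions=[{"JobName": "daily-incremental-ingest"}],
    StartOnCreation=True,
)
```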


The latest updated Amazon DAS-C01 exam questions come from the Lead4Pass DAS-C01 dumps! 100% pass the DAS-C01 exam! Download Lead4Pass DAS-C01 VCE and PDF dumps: https://www.leads4pass.com/das-c01.html (Q&As: 77 dumps)

Get free Amazon DAS-C01 dumps PDF online: https://drive.google.com/file/d/1SBtAYu3S8dc7a3iiAGAF36RHhwH02Rsx/
