An organization's leadership team gathered managers and key team members from each division to help create a disaster recovery plan. However, they realized they lacked a complete understanding of the infrastructure and software needed to formulate the plan. Which action should they take to correct this issue?
Without a clear understanding of infrastructure and software, the leadership team must first conduct an inventory of assets. An asset inventory provides a comprehensive list of hardware, software, and services that support business operations.
Creating checklists, defining criteria, and assigning roles are important, but they rely on knowing what assets exist. Without an inventory, the disaster recovery plan would miss critical dependencies, making recovery incomplete or impossible.
Performing an inventory supports business impact analysis, risk assessments, and recovery prioritization. It ensures that all critical systems are accounted for and appropriate recovery strategies can be designed. Asset inventories are a foundational best practice for disaster recovery and continuity planning.
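The idea of an inventory feeding recovery prioritization can be sketched in a few lines. This is a minimal illustration, not a real inventory tool; the asset names, categories, and criticality labels below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str          # hostname or application name
    category: str      # "hardware", "software", or "service"
    owner: str         # responsible team or division
    criticality: str   # "critical", "important", or "deferrable"

# Hypothetical inventory entries for illustration only
inventory = [
    Asset("erp-db-01", "hardware", "Finance", "critical"),
    Asset("payroll-app", "software", "HR", "critical"),
    Asset("team-wiki", "service", "IT", "deferrable"),
]

# Recovery planning starts from the critical subset of the inventory
critical_assets = [a for a in inventory if a.criticality == "critical"]
for a in critical_assets:
    print(f"{a.name} ({a.category}) owned by {a.owner}")
```

Once assets are enumerated and classified this way, business impact analysis can rank them and assign recovery time objectives to each.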
Which type of data sanitization should be used to destroy data on a USB thumb drive while keeping the drive intact?
The correct approach for sanitizing a USB thumb drive while preserving its usability is overwriting. Overwriting involves replacing the existing data on the device with random data or specific patterns to ensure that the original information cannot be recovered. This process leaves the physical device intact, allowing it to be reused securely.
Physical destruction, such as shredding, renders the device unusable. Degaussing only works on magnetic media like hard disks or tapes, not on solid-state or flash-based USB drives. Key revocation applies to cryptographic keys and not to physical devices.
By using overwriting, organizations comply with data sanitization standards while balancing operational efficiency. Many tools perform single- or multi-pass overwrites aligned with guidance such as NIST SP 800-88. This ensures that sensitive data is removed while allowing the device to remain in circulation for continued use.
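The overwriting concept can be illustrated with a simplified, file-level sketch. Note the assumptions: real sanitization tools write to the raw block device and verify each pass, and wear leveling on flash media means a file-level overwrite may miss remapped cells, so this is a conceptual demonstration only.

```python
import os

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents in place with random bytes.

    Simplified illustration of overwriting: replaces existing data
    with random data so the original cannot be read back, while the
    file (like the thumb drive) remains usable afterward.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # random pattern pass
            f.flush()
            os.fsync(f.fileno())        # force the pass to disk

# Demo on a throwaway file
with open("secret.bin", "wb") as f:
    f.write(b"confidential data")
overwrite_file("secret.bin")
```

After the call, the file still exists at its original size, but its contents are random bytes rather than the original data.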
In most redundant array of independent disks (RAID) configurations, data is stored across different disks. Which method of storing data is described?
The method described is striping, which is a technique used in RAID configurations to improve performance and distribute risk. Striping involves splitting data into smaller segments and writing those segments across multiple disks simultaneously. For example, if a file is divided into four parts, each part is written to a separate disk in the RAID array.
This parallelism enhances input/output (I/O) performance because multiple drives can be accessed at once. It also provides resilience depending on the RAID level. While striping by itself (RAID 0) increases performance but not redundancy, when combined with mirroring or parity (e.g., RAID 5 or RAID 10), it offers both speed and fault tolerance.
The purpose of striping in the data management context is to optimize how data is stored, accessed, and protected. It is fundamentally different from archiving, mapping, or crypto-shredding, as those serve different objectives (long-term storage, logical placement, or secure deletion). Striping is central to high-performance storage systems and supports availability in mission-critical environments.
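Striping is easy to see in code. The sketch below splits a byte stream into fixed-size stripes, distributes them round-robin across simulated disks (plain byte buffers here, not real devices), and reassembles the original data; the 4-byte stripe size is an arbitrary choice for readability.

```python
def stripe(data: bytes, num_disks: int, stripe_size: int = 4):
    """Distribute data round-robin across disks in fixed-size stripes."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), stripe_size):
        disks[(i // stripe_size) % num_disks] += data[i:i + stripe_size]
    return disks

def unstripe(disks, stripe_size: int = 4) -> bytes:
    """Reassemble the original stream by reading stripes in order."""
    out = bytearray()
    offsets = [0] * len(disks)
    disk = 0
    while any(offsets[d] < len(disks[d]) for d in range(len(disks))):
        out += disks[disk][offsets[disk]:offsets[disk] + stripe_size]
        offsets[disk] += stripe_size
        disk = (disk + 1) % len(disks)
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"      # 16 bytes in 4-byte stripes
disks = stripe(data, num_disks=4)
# disks[0] holds b"ABCD", disks[1] b"EFGH", and so on
assert unstripe(disks) == data
```

Because each stripe lands on a different disk, a real array can read or write all four segments in parallel, which is the performance benefit described above; RAID 0 does exactly this, with no parity or mirroring added.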
When should a cloud service provider delete customer data?
The correct time for data deletion is after the specified retention period defined by contractual agreements, regulatory frameworks, or internal policies. Retention policies ensure that data is kept for as long as necessary for business, legal, or compliance reasons but not longer than required.
Oversubscription, inactivity, or review cycles are not valid triggers because they may conflict with compliance mandates such as GDPR, HIPAA, or PCI DSS. Deleting data prematurely could result in legal penalties or business risks, while keeping it longer than necessary could increase exposure.
By deleting data only after the retention period, providers demonstrate adherence to data governance principles and protect customer rights while minimizing storage costs and liability.
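A retention check reduces to a date comparison. The policy table below is hypothetical (the data classes and day counts are invented for illustration); real retention periods come from the contracts and regulations named above.

```python
from datetime import date, timedelta

# Hypothetical retention policy: days each data class must be kept
RETENTION_DAYS = {"billing": 7 * 365, "logs": 90, "backups": 365}

def eligible_for_deletion(data_class: str, created: date, today: date) -> bool:
    """Data may be deleted only once its retention period has elapsed."""
    return today >= created + timedelta(days=RETENTION_DAYS[data_class])

today = date(2024, 6, 1)
# 90-day log retention elapsed, so deletion is permitted
assert eligible_for_deletion("logs", date(2024, 1, 1), today)
# 365-day backup retention has not elapsed, so deletion must wait
assert not eligible_for_deletion("backups", date(2024, 1, 1), today)
```

Deleting only when this check passes captures both halves of the principle: data is neither destroyed prematurely nor retained past its required period.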
Which release management term describes the process from code implementation to code review and approval to automated testing and then to production deployment?
A pipeline refers to the structured process of moving code from development to production, encompassing implementation, review, automated testing, and deployment. In DevOps, this is known as a CI/CD pipeline (Continuous Integration/Continuous Deployment).
An iteration refers to a development cycle, a baseline represents a stable reference configuration, and a framework provides structure but not a deployment sequence. Only pipeline accurately captures the sequential, automated flow of code into production.
Pipelines enhance efficiency, consistency, and quality assurance by automating repetitive tasks, reducing human error, and ensuring that code changes are validated before reaching production. They are essential for modern cloud-native applications where rapid deployment is expected.
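The sequential, gated nature of a pipeline can be modeled abstractly. This sketch is not any particular CI/CD tool; the stage names mirror the flow in the question, and each stage is a placeholder gate that a real system would replace with builds, reviews, and test runners.

```python
from typing import Callable

Stage = tuple[str, Callable[[], bool]]  # (name, gate that passes or fails)

def run_pipeline(stages: list[Stage]) -> bool:
    """Run stages in order; any failure halts the flow before production."""
    for name, gate in stages:
        if not gate():
            print(f"pipeline failed at stage: {name}")
            return False
    print("deployed to production")
    return True

# Hypothetical stages standing in for real CI/CD jobs
pipeline = [
    ("implement", lambda: True),   # code committed
    ("review",    lambda: True),   # approval granted
    ("test",      lambda: True),   # automated tests pass
]
run_pipeline(pipeline)
```

The key property this captures is ordering with gating: code reaches production only after every earlier stage succeeds, which is what distinguishes a pipeline from an iteration, baseline, or framework.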