
Abstract

In the burgeoning epoch of big data, the imperative for secure computer network data storage is confronted with formidable challenges, including the perils of data breaches and a paucity of robust security measures. An enhanced storage paradigm, predicated upon a refined Hash algorithm—termed H-AONT—is herein delineated. This methodology augments data storage security through the formulation of an apposite system model, the amalgamation of the merits inherent in conventional encryption algorithms, and the deployment of the H-AONT dual encryption algorithm in data processing. Empirical evidence substantiates that, vis-à-vis alternative approaches, the proposed method significantly bolsters data storage security, furnishes an elevated stratum of protection for computer network data repositories, ensures data storage reliability, and proffers a pertinent benchmark for the safeguarding of computer network data within the vast landscape of big data.

Keywords: Big data, cloud storage, secure data storage, H-AONT

1. Introduction

The swift advancement of information technology has ostensibly augmented the convenience of daily life. Concomitant with this advancement, however, is an escalated jeopardy of privacy data breaches. In the big data milieu, the voluminous repositories of data and information are besieged by grave threats; notably, the data storage process is beleaguered by security vulnerabilities, and the precision of data information screening is markedly deficient. Consequently, the exploration of computer network data security storage technology emerges as a pivotal endeavor for amplifying the caliber of data security storage. Traditional computer encryption, predominantly plaintext, is readily intercepted by malefactors, rendering the data storage security tenuous and the regional data transmission security feeble [1]. To ameliorate data storage security and guarantee storage accuracy, a novel schema predicated on attribute encryption has been posited. This schema encompasses an index calculation algorithm in a single-server outsourcing mode, thereby relegating the data encryption task to a cloud service provider and enabling the corroboration of encryption computation outcomes [2]. Furthermore, research has been conducted on a revocable and traceable KP-ABE scheme, predicated on cloud storage technology, which not only ensures attribute revocability but also facilitates the real-time traceability of user identities [3]. Additionally, a hierarchical remote data retrieval scheme has been scrutinized, which, to some extent, enhances the reliability of data storage. However, this scheme’s authentication subject is predominantly user-centric, engendering substantial computational overhead [4]. To mitigate this overhead, studies have delegated data verification to a third-party auditor (TPA), albeit this incurs an elevated risk of data leakage due to the TPA’s curiosity when users upload data [5]. To curtail the peril of data pilferage by the TPA, an identity-based public auditing scheme has been proposed, bolstering the trustworthiness of third-party services by increasing the computational overhead on the user’s end [6]. While this scheme efficaciously diminishes the risk of third-party data theft, it does not alleviate the computational burden. Typically, users remunerate service fees to compel the third party to earnestly adhere to the service agreement, thereby diminishing the risk of data leakage. It is thus discerned that the third party’s credibility is instrumental in determining the resilience of data information against supply chain vulnerabilities.

In the contemporary era of big data, this treatise advances a secure data storage protocol. Initially, it delineates the quandaries encountered in safeguarding computer network data, subsequently, it articulates the system and objective designs predicated on these quandaries. Thereafter, it advocates for a storage strategy that enhances the Hash algorithm (H-AONT), amalgamating it with conventional encryption techniques to elevate data security storage levels. Ultimately, the practicability of this computer data security storage blueprint is corroborated through the construction of an analogous system model, thereby furnishing a benchmark for the enhancement of data security storage within computer networks.

2. Challenges to the secure storage of computer network data in the context of the big data era

2.1 Security risks in data information

Amidst the swift proliferation of information technology, mobile terminals have become ubiquitous in daily life, engendering a dependency that, while facilitating routine activities, concurrently amplifies the susceptibility to computer data breaches. The vast expanse of big data, with its diverse data types, furnishes malefactors with clandestine avenues for data exfiltration, imperiling the financial integrity of individuals and corporations alike. Despite the fact that information technology’s widespread adoption has catalyzed corporate growth, it has concomitantly escalated the security risks associated with data storage. Instances abound where cybercriminals, leveraging malware, compromise corporate systems, adulterating the stored data and undermining its sanctity.

2.2 Insufficient precision in screening data information

In the big data epoch, the velocity of computer network data storage is accelerating, intensifying the volume of data and information stored and necessitating more stringent screening protocols. Traditional data screening methods, lacking in precision, fail to satisfy user requirements, complicating the accurate retrieval of information and leaving it prone to contamination by spurious data, which severely disrupts the maintenance of computer data by operational personnel.

2.3 Wide spread of spam data information

Furthermore, the big data era is characterized by an inundation of spam data, a byproduct of information technology’s advancement that facilitates access to information but also intersperses copious amounts of unsolicited content, such as advertisements and emails, within legitimate data streams. This deluge of spam significantly hampers user data acquisition efforts. Although current computer technologies can filter out spam to some degree, eradication is elusive. Notably, as of 2019, spam constituted 57% of global email traffic, with significant contributions from various countries. These spam communications often harbor viruses, planted by attackers, which, upon interaction, compromise user data privacy and present a formidable threat to the security of data storage for both individuals and enterprises.

3. System modeling and design objectives

3.1 Modeling

In the present study, a network data security storage model is delineated, encompassing two principal facets: data security and data veracity. The model is underpinned by a tripartite framework comprising end-users, a third-party auditor (TPA), and a cloud service provider (CSP). Within this schema, end-users are tasked with the conversion and encryption of data prior to its transmission to the cloud repository. Concurrently, the TPA’s role is pivotal in authenticating the keys and data block tags generated by users, thereby ascertaining the integrity and completeness of the stored data. The CSP, in turn, is instrumental in furnishing requisite storage solutions to the users [4].

3.2 Threat modeling

The role of the secure storage program designed in this research is to address the problem of usable information leakage from outsourced data. Assuming that the TPA faithfully fulfils the public audit protocol, the threats modeled at this point include the following five aspects:

(1) Intruder attack: attackers obtain users' cloud information through different channels;

(2) CSP attack: the CSP attempts to maintain possession validation even though data on the server has been attacked;

(3) Replacement attack: the CSP passes the audit by substituting a complete data block for a corrupted one;

(4) Forgery attack: after data corruption, the attacker deceives the TPA by forging proof information;

(5) Replay attack: the CSP reuses previously verified proof information for any intact data block to claim that a corrupted or lost data block is held intact [5].

3.3 Design objectives

To ensure that data storage is secure, efficient, and resistant to the threat models described above, the storage solution must be designed to meet the following objectives:

(1) Conversion encryption: when the user encrypts the data, the original data file is first converted using the H-AONT conversion mechanism, and a secure encryption algorithm then generates the corresponding ciphertext data;

(2) Lightweight architecture: after the ciphertext data is uploaded to the cloud, a trusted third party is authorized to perform data auditing, determine whether the data is complete, and generate the corresponding data labels;

(3) Storage security: during data storage, it must be ensured that neither the TPA nor an attacker can obtain plaintext information from the outsourced data blocks;

(4) Low communication: during proof of possession, communication takes place only between the TPA and the CSP.

4. Computer network data security storage program

The secure storage scheme proposed in this research consists of seven algorithms: Lock(·), SliceGen(·), TagGen(·), ChallGen(·), Proof-Gen(·), Proof-Veri(·), and Unlock(·). It is mainly divided into the following stages:
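For orientation, the stubs below lay out the seven algorithms as a minimal Python interface, annotated with the party assumed to run each step (user, TPA, or CSP). The signatures and parameter names are illustrative assumptions; the paper does not prescribe an API.

```python
# Illustrative interface for the seven algorithms (names from the paper,
# signatures assumed). Bodies are filled in by the sketches that follow.

def lock(file_bytes):          # user: H-AONT transform + symmetric encryption
    """Lock(.): F -> (beta uploaded to the cloud, alpha and K kept locally)."""

def slice_gen(uf, s, t):       # TPA: arrange UF into an s x t slice matrix
    """SliceGen(.): UF -> UF_{s x t}."""

def tag_gen(matrix, k_ver):    # TPA: labels binding block value and position
    """TagGen(.): UF_{s x t} -> label set."""

def chall_gen(lam):            # TPA: pick the set of challenged blocks
    """ChallGen(lambda) -> chall."""

def proof_gen(chall, blocks, tags):   # CSP: assemble possession evidence
    """Proof-Gen(.): chall -> proof."""

def proof_veri(proof, k_ver):  # TPA: accept or reject the evidence
    """Proof-Veri(.): proof -> True/False."""

def unlock(beta, alpha, k):    # user: recombine alpha and beta, then decrypt
    """Unlock(.): (alpha, beta, K) -> F."""
```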

4.1 File encryption

In the file encryption process, the algorithm is Lock(·): when the user encrypts the data with the scheme proposed in this paper, the pseudo-message data block is sent to the TPA. In this process, the symmetric encryption algorithm achieves effective protection of the outsourced data [6].

Assuming the original file is <math display="inline">F</math>, the user can encrypt the original data file via <math display="inline">Lock(F)\to UF</math>. The steps are as follows:

Step 1: Divide the original file <math display="inline">F</math> into blocks, i.e.: <math display="inline">F=\{f_1,f_2,f_3,\cdots,f_n\}</math>;

Step 2: After the division is completed, randomly select the file block and the conversion key <math display="inline">K_{tr}</math>, with <math display="inline">t</math> indicating the replacement rule;

Step 3: With the help of H-AONT, <math display="inline">F</math> is converted and the strongly indivisible pseudo-message data block is obtained, i.e., <math display="inline">F'=\{f'_1,f'_2,f'_3,\cdots,f'_{n'}\}</math>, where <math display="inline">n'=n+1</math>.

Step 4: In generating the pseudo-message data block, the private key <math display="inline">K</math> is randomly selected and encryption is realized by the symmetric encryption function <math display="inline">E(\cdot )</math>, i.e.: <math display="inline">E_K (F')\to C</math> and the ciphertext set <math>C=\{C_1,C_2,C_3,\cdots,C_{n^{\prime}}\}</math>;

Step 5: After obtaining the set of ciphertexts, divide it into two sets of very different lengths, with the short set being <math display="inline">\alpha</math> and the long set being <math display="inline">\beta</math>; then <math display="inline">C=\alpha + \beta</math>, where <math display="inline">\vert \alpha \vert \ll \vert \beta \vert</math>;

Step 6: Use <math display="inline">\beta</math> as a user file and upload it to a third party; <math display="inline">\alpha</math> and <math display="inline">K</math> are saved on the user side.
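As a concrete illustration of Steps 1 to 6, the following is a minimal Python sketch of Lock(·). It assumes SHA-256 as a stand-in pseudo-random function and stream cipher and omits the replacement rule t; these are readability assumptions, not the paper's construction, which would use a production symmetric cipher such as AES.

```python
import hashlib
import secrets

BLOCK = 16  # 128-bit blocks, matching the parameters used in Section 5

def _prf(key: bytes, i: int) -> bytes:
    # Stand-in PRF: SHA-256 of key || counter, truncated to one block.
    return hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:BLOCK]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def lock(data: bytes):
    """Simplified Lock(.): returns (beta, alpha, K)."""
    # Step 1: divide F into n 128-bit blocks (zero-padded for simplicity).
    data = data + b"\x00" * (-len(data) % BLOCK)
    f = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    # Step 2: randomly select the conversion key K_tr.
    k_tr = secrets.token_bytes(BLOCK)
    # Step 3: H-AONT-style transform into n' = n + 1 pseudo-message blocks;
    # the last block hides K_tr under a hash of ALL other blocks, so no
    # block can be spared when inverting the transform.
    fp = [_xor(b, _prf(k_tr, i)) for i, b in enumerate(f)]
    digest = hashlib.sha256(b"".join(fp)).digest()[:BLOCK]
    fp.append(_xor(k_tr, digest))
    # Step 4: symmetric encryption E_K(F') -> C under a random private key K.
    k = secrets.token_bytes(BLOCK)
    c = [_xor(b, _prf(k, i)) for i, b in enumerate(fp)]
    # Steps 5-6: alpha (one short block) stays with the user, beta is
    # outsourced to the third party.
    alpha, beta = c[:1], c[1:]
    return beta, alpha, k
```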

4.2 Pre-processing

In the preprocessing stage, the main algorithms used are SliceGen(·) and TagGen(·). After the user transmits the ciphertext to the TPA, the data is sliced and the corresponding labels are generated; random numbers and location information are added to them, which ensures the uniqueness of the data and guards against replay and replacement attacks by the server. The specific process is:

4.2.1 The data is processed in slices, i.e.: <math display="inline">SliceGen(UF)\to UF_{s\times t}</math>.

① The TPA divides UF into s data blocks, which gives: <math display="inline">UF=\{uf_1,uf_2,\cdots,uf_s\}</math>;

② Randomly divide each data block into t slices and construct the <math display="inline">s\times t</math> data slice matrix <math display="inline">UF_{s\times t}</math>, then:

<math>UF_{s\times t}=\begin{pmatrix} uf_{11} & uf_{12} & \cdots & uf_{1t}\\ uf_{21} & uf_{22} & \cdots & uf_{2t}\\ \vdots & \vdots & & \vdots\\ uf_{s1} & uf_{s2} & \cdots & uf_{st}\end{pmatrix}</math>
(1)

4.2.2 Generate labels, i.e. <math display="inline">TagGen(UF_{s\times t})\to \phi</math>.

① The third party randomly selects the corresponding authentication key <math display="inline">K_{ver}</math>;

② Calculate the document identification;

③ Generate the labels, i.e.:

(2)

In Eq. (2), <math display="inline">h(\cdot )</math> denotes the homomorphic Hash function; <math display="inline">r_i</math> denotes the random number generated by the random function <math display="inline">w(\cdot )</math> via <math display="inline">w(i)\to r_i</math>, where <math display="inline">i</math> denotes the position information of the data block in the matrix.

④ Finally, the data blocks and the labels are uploaded to the CSP and stored in the cloud server, respectively [7].
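The sketch below gives minimal versions of the two preprocessing algorithms under stated assumptions. Because the homomorphic Hash of Eq. (2) is not reproduced above, HMAC-SHA-256 is substituted as an ordinary (non-homomorphic) tag; it still binds each slice to its value, its position, and a position-derived random number r_i = w(i), but it supports only per-slice checks, not the aggregated verification of Eq. (5).

```python
import hashlib
import hmac

def slice_gen(uf: bytes, s: int, t: int):
    """Simplified SliceGen(.): arrange UF into an s x t slice matrix.
    Assumes len(uf) is divisible by s*t for readability."""
    piece = len(uf) // (s * t)
    return [[uf[(i * t + j) * piece:(i * t + j + 1) * piece]
             for j in range(t)]
            for i in range(s)]

def tag_gen(matrix, k_ver: bytes):
    """Simplified TagGen(.): one tag per slice, keyed by K_ver."""
    tags = {}
    for i, row in enumerate(matrix):
        for j, sl in enumerate(row):
            pos = f"{i},{j}".encode()                   # location information
            r_i = hashlib.sha256(k_ver + pos).digest()  # stand-in for w(i) -> r_i
            tags[(i, j)] = hmac.new(k_ver, sl + pos + r_i,
                                    hashlib.sha256).digest()
    return tags
```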

4.3 Proof of data holdings

The algorithms ChallGen(·), Proof-Gen(·) and Proof-Veri(·) are mainly used in this phase. After the third party sends a challenge to the CSP, the information returned by the CSP is verified for correctness.

4.3.1 Send a challenge, i.e. <math display="inline">ChallGen(\lambda )\to chall</math>; the third party generates a challenge message

① Randomly select the challenge parameter <math display="inline">\lambda</math> and obtain the index key <math display="inline">k_{ind}\leftarrow f(\lambda )</math>;

② The third party determines the number of challenged blocks, recorded as Z;

③ Generate challenge messages.

4.3.2 Generation of proof information via <math display="inline">Proof\text{-}Gen(\cdot )</math>

① After the CSP obtains the challenge information, it calculates the set of Z challenged data blocks;

② Return the proof information, i.e.:

(3)

4.3.3 Integrity validation of challenge data blocks

Calculate the challenge index <math display="inline">i</math> and the random number <math display="inline">r_i</math>, i.e.:

(4)

The integrity verification equation can be expressed as:

(5)

If Eq. (5) holds, the verification passes, and the challenged block is proven to be correctly held; otherwise, the challenged block has been lost or tampered with [8].
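A matching challenge-response round, continuing the preprocessing sketch of Section 4.2 (same assumptions). Returning the challenged slices verbatim is a deliberate simplification: the paper aggregates blocks and tags homomorphically in Eqs. (3) to (5) precisely so that full blocks never travel back, which is what keeps the communication low.

```python
import hashlib
import hmac
import secrets

def chall_gen(matrix, z: int):
    """Simplified ChallGen(.): the TPA samples Z random slice positions."""
    positions = [(i, j) for i in range(len(matrix))
                 for j in range(len(matrix[0]))]
    return secrets.SystemRandom().sample(positions, z)

def proof_gen(chall, matrix, tags):
    """Simplified Proof-Gen(.): the CSP returns slices and tags verbatim."""
    return [(pos, matrix[pos[0]][pos[1]], tags[pos]) for pos in chall]

def proof_veri(proof, k_ver: bytes) -> bool:
    """Simplified Proof-Veri(.): the TPA recomputes each tag and compares."""
    for (i, j), sl, tag in proof:
        pos = f"{i},{j}".encode()
        r_i = hashlib.sha256(k_ver + pos).digest()
        expected = hmac.new(k_ver, sl + pos + r_i, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False  # challenged slice lost or tampered with
    return True
```

Because the position and r_i are baked into every tag, evidence substituted from another slice or replayed from an earlier round fails the comparison, mirroring the replacement- and replay-resistance arguments of Section 4.5.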

4.4 File decryption

This phase mainly uses the Unlock(·) algorithm: after the user downloads the corresponding outsourced data from the cloud server, it is combined with the local data blocks and decrypted, thus recovering the original file.

The user side performs the unlocking, i.e. <math display="inline">Unlock(UF)\to F</math>. The main steps are as follows:

① Download the complete outsourced data <math display="inline">\beta</math> from the cloud;

② Integrate the α and β ciphertexts to obtain the pseudo-message ciphertext data block;

③ Decryption through the H-AONT mechanism.
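Continuing the Lock(·) sketch of Section 4.1 (and reusing its _prf, _xor, and BLOCK helpers), a minimal Unlock(·) under the same assumptions might look as follows:

```python
import hashlib

def unlock(beta, alpha, k: bytes):
    """Simplified Unlock(.): inverse of the lock() sketch in Section 4.1."""
    # Step 1: recombine alpha and beta into the pseudo-message ciphertext.
    c = alpha + beta
    # Step 2: undo the symmetric encryption with the private key K.
    fp = [_xor(b, _prf(k, i)) for i, b in enumerate(c)]
    # Step 3: invert the H-AONT transform: the hash over all but the last
    # block releases K_tr, then each original block is recovered.
    digest = hashlib.sha256(b"".join(fp[:-1])).digest()[:BLOCK]
    k_tr = _xor(fp[-1], digest)
    f = [_xor(b, _prf(k_tr, i)) for i, b in enumerate(fp[:-1])]
    return b"".join(f).rstrip(b"\x00")  # strip the illustrative zero padding
```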

4.5 Security analysis

To analyze the security of the scheme under the above threat model, this paper evaluates its effectiveness against intruder attacks in terms of strong indivisibility and the verifiability of possession of outsourced data [9].

4.5.1 Strong indivisibility analysis

This subsection shows that the H-AONT mechanism resists intruder attacks, i.e., that the corresponding data information cannot be obtained from the outsourced information alone.

Suppose the locally stored pseudo-message data block is <math display="inline">f'_1</math> and the outsourced data blocks stolen by the attacker through different channels are <math display="inline">\{f'_2,\cdots,f'_{n'}\}</math>. The steps for the attacker to restore the original data from the outsourced data are:

① Sort the acquired data according to the user's replacement rule <math display="inline">t</math>;

② Obtain the conversion key <math display="inline">K_{tr}</math>;

③ Based on the conversion key, compute the original data block.

An attacker can decrypt the original data only if all the above three conditions are met simultaneously.

The security analysis of the outsourced data is as follows:

① Assume that the replacement rule <math display="inline">t</math> is compromised and that the attacker has stolen <math display="inline">n</math> outsourced data blocks; then:

(6)

Because <math display="inline">f'_1</math> is missing from the attacker's reconstruction, <math display="inline">H(M')\neq H(M)</math>, so the corresponding conversion key <math display="inline">K_{tr}</math> cannot be obtained, making it difficult for the attacker to recover the information [10].

② Suppose <math display="inline">f'_1</math> is leaked, so that the attacker has acquired all <math display="inline">n+1</math> pseudo-message blocks. The attacker can then attempt to reorganize the sequence by forging the alignment rule, i.e.:

(7)

In Eq. (7), <math display="inline">k'</math> denotes the alignment rule forged by the attacker. Since <math display="inline">H(M')\neq H(M)</math>, the attacker still fails to steal the data message. Hence, as long as the user's replacement rule <math display="inline">t</math> is not compromised, it is difficult for an attacker holding all <math display="inline">n+1</math> pseudo-message blocks to recover the correct message sequence [11].

The comprehensive analysis shows that the security of network data storage improves as the number of data blocks converted through the H-AONT mechanism increases.
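The indivisibility argument can be exercised directly against the lock()/unlock() sketches of Sections 4.1 and 4.4: even if the private key K leaks, a guessed substitute for the missing local block α perturbs the hash folded into the final pseudo-message block, so the conversion key, and with it the plaintext, is not recovered.

```python
# Assumes lock(), unlock(), and BLOCK from the earlier sketches.
msg = b"confidential network data"
beta, alpha, k = lock(msg)

assert unlock(beta, alpha, k) == msg          # legitimate user succeeds

forged_alpha = [bytes(BLOCK)]                 # attacker guesses the block alpha
assert unlock(beta, forged_alpha, k) != msg   # recovery fails without alpha
```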

4.5.2 Correctness analysis

If the third party faithfully fulfils the agreement with the service provider and the data blocks in the cloud are complete, the proof of possession can be used to determine whether the data blocks stored in the CSP are complete [12].

Assuming that a block of data in the cloud is corrupted or lost, the CSP can at this point perform a proof of possession on the basis of the data held by the TPA, i.e.:

(8)

In the case of data-block loss, it is difficult for the CSP to pass the third party's possession validation [13].

To show that the proposed scheme resists the CSP replacement attack: when <math display="inline">uf_i</math> is lost, the CSP attempts to pass the third party's possession verification using an intact block <math display="inline">uf_t</math> and its label <math display="inline">\phi_t</math>, at which point the evidence returned by the CSP contains:

(9)

This shows that the scheme proposed in this paper resists the CSP replacement attack.

Similarly, for the forgery attack: when <math display="inline">uf_i</math> is lost, the CSP attempts to deceive the third party by forging <math display="inline">uf_t</math>. Based on the collision resistance of the Hash function, <math display="inline">h(uf_i)=h(uf_t)</math> can hold only when <math display="inline">uf_i=uf_t</math>, so a forged block cannot pass the proof of possession [14,15].

Finally, for the replay attack: in order not to disclose the loss of data blocks to the third party, the CSP computes evidence from previously verified messages, the data blocks that are not lost, and the corresponding labels, and returns this evidence to the third party; then:

(10)

At this time the verification equation does not hold. Because the data location is embedded in the data label, it is difficult for the CSP to pass verification with replayed evidence, which in turn effectively improves the security of data storage.

5 System Test

5.1 Test environment

In this paper, we further verify the performance of the proposed scheme by building the corresponding system model. The system model mainly consists of a computer with an Intel Core i7 CPU and 16 GB RAM, and an AliCloud server [16].

5.2 Test indicators

System storage overhead, communication overhead, and computation overhead are selected as test metrics.

5.3 Analysis of results

(1) Comparative analysis of communication overhead

In the communication overhead comparison, the challenge-response interaction is used as the benchmark. In the scheme of this paper, the index key, the number of challenged blocks, and similar quantities constitute the corresponding index set, while the data block set and label set returned by the CSP are used as evidence, so the communication load is relatively low.

(2) Comparative analysis of storage overhead

In the scheme of this paper, after the user transmits the ciphertext to the cloud server, the locally reserved copy can be deleted, and only the private key and the short ciphertext data block collection are retained. Here, the size of the private key is 128 bit; the size of the short ciphertext data block collection <math display="inline">\alpha</math> can be calculated by the following formula:

(11)

In Eq. (11), <math display="inline">\alpha</math> and <math display="inline">q</math> represent the number and length of the retained ciphertext data blocks, respectively. In practice, <math display="inline">q</math> is set to 128 bit. To ensure the security of data storage, the user usually retains only one ciphertext data block, so the size of <math display="inline">\alpha</math> is 128 bit. The storage overhead of the proposed scheme is therefore relatively low [17].
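Under these parameters, a worked instance of the user-side storage cost (assuming, as the text implies, that it consists of the one retained ciphertext block plus the 128-bit private key K; Eq. (11) itself is not reproduced above):

```latex
\underbrace{1 \times 128\,\text{bit}}_{\text{retained block } \alpha}
  + \underbrace{128\,\text{bit}}_{\text{private key } K}
  = 256\,\text{bit} = 32\,\text{B}
```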

(3) Comparative analysis of computational overhead

In the scheme proposed in this paper, the plaintext is divided accordingly and then converted and encrypted by the H-AONT algorithm, with each data block 128 bit in size. Compared with the traditional AONT algorithm, H-AONT replaces the <math display="inline">2s-1</math> XOR operations with a Hash computation; since the Hash function is efficient to compute, the calculation process is more convenient. To ensure the accuracy of the test, this paper uses pseudo-random functions of the same structure for AONT, AES, and H-AONT + AES in place of the encryption algorithms to analyze the computational overhead; the results are shown in Figure 1.


Fig. 1 Comparison of computational overhead between this paper's algorithm and traditional encryption algorithm

Upon scrutiny of Fig. 1, it becomes evident that the algorithm delineated herein incurs a computational overhead that is comparatively diminutive for an equivalent file magnitude. Postulation of a file dimensionality at 20M permits the inference that subsequent to the partial data transmission by the user to the cloud server, the TPA engenders the pertinent data labels. Thereafter, the data’s veracity undergoes validation. Concomitantly, Fig. 2 elucidates the computational overhead juxtaposition at the user’s terminus vis-à-vis the TPA’s locus.

Fig. 2 Computational overhead of user side and TPA in data preprocessing stage

Fig. 2 elucidates that the encryption’s epicenter resides with the user, while label generation for data is executed by the TPA. It is observed that when encrypted data spans 20M, the user’s computational burden remains invariant despite an escalation in the count of authenticated data blocks, while the third party’s computational load progressively increases. Notably, when fewer than 30 data blocks are verified for a 20M file, the user’s overhead is the more pronounced, attributable to the third party’s role being confined to data block-based label generation, where fewer labels equate to diminished computational demands on the TPA.

The analysis substantiates that the algorithm enhances data storage security post H-AONT key transformation without imposing significant computational overhead. Moreover, post-encryption and cloud upload by the user, the TPA’s verification of cloud data holdings mitigates the user’s computational load, thereby bolstering data storage security.

5.4 Security Comparison

The schemes of references [3] and [4] are respectively compared with the scheme designed in this paper to analyze its security; the comparison results are detailed in Table 1.

Table 1 Comparison of data storage security of different schemes

Security property | This paper's scheme | Scheme of [3] | Scheme of [4]
Resistant to replacement attacks | Yes | Yes | Yes
Resistant to forgery attacks | Yes | Yes | Yes
Resistant to replay attacks | Yes | Yes | Yes
Resistant to key attacks | Yes | No | No
Enhanced credibility of third-party audit services | Yes | No | Yes


Table 1’s analysis reveals that the triad of schemes proffer robust security capabilities, effectively thwarting substitution, forgery, and replay assaults. However, the methodologies delineated in references [3] and [4] exhibit a pronounced reliance on third-party audit services. The encryption algorithms employed are of a conventional ilk, inheriting intrinsic limitations that predispose data to potential compromise upon key exposure. Concurrently, while the strategy in reference [4] marginally bolsters third-party audit service trustworthiness, it fails to impose requisite constraints on the auditor, thereby attenuating its reliability. In contrast, the approach advocated in this study fortifies data storage security, adeptly circumventing key leakage risks and augmenting audit service viability, thereby significantly contributing to the secure storage of computer network data.

6 Suggestions for the Development of Computerized Data Storage in the Era of Big Data

6.1 Sound computer data information security prevention system

In the era of burgeoning big data, the escalation of computer network data necessitates stringent security measures. To foster the salubrious evolution of data storage technologies, an enhancement of the computer security apparatus is imperative to safeguard information integrity.

Firstly, the refinement of the information security management system is paramount. This involves a methodical standardization of the technical protocols associated with secure data storage and a bolstered scrutiny of potential security hazards. Implementing preventative measures facilitates the prompt identification of vulnerabilities within the data storage continuum.

Secondly, the professional acumen of IT personnel is crucial. Given their pivotal role in data storage operations, it is essential to augment their proficiency in information security and elevate their training in professional competencies. This will enhance their cognizance of security protocols, thereby advancing the state of computer data storage technology.

Lastly, the persistent advancement and application of computer information security technologies are vital. In an epoch characterized by rapid technological progression, attention to the development of data confidentiality, obfuscation, and authentication technologies is critical. Confidentiality technologies aim to shield user privacy effectively, while obfuscation techniques, such as covert communication and content verification, enhance the stability of data security storage. Authentication technologies serve as bulwarks against cyber threats, mitigating the risks of system infiltration and data manipulation.

6.2 Strengthening computerized data storage and screening efforts

In the era of big data, the intricate interconnectivity of networked data has exacerbated the challenges associated with data storage and filtration. The proliferation of superfluous data—manifested as spam and junk advertisements—compounds these difficulties. Such data, upon integration into computer systems, opens avenues for malicious entities to launch cyber-attacks or disseminate viruses, thereby imperiling data integrity and precipitating potential system collapse. To circumvent these threats, it is imperative to implement precise data curation and storage strategies. Leveraging the multifaceted nature of big data and aligning with user requisites, it is essential to amalgamate data, employing sophisticated, intelligent methodologies to excise non-essential data, thereby economizing storage capacity while fortifying computer system security. This approach not only ensures a seamless network milieu but also augments the efficacy of data conveyance and conservation. Furthermore, the deployment of data mining and cloud computing technologies is crucial for the meticulous excavation and sifting of voluminous datasets. Given the comprehensive and bespoke attributes of data mining technology, its application in data processing—tailored to data idiosyncrasies and mining extant correlations—can be seamlessly integrated with computer storage technology. This integration is pivotal in enhancing the efficiency and security of data storage, concurrently satisfying the concrete data storage needs of users.

6.3 Emphasize the construction of computer hardware performance and quality

Amidst the swift advancement of information technology, the robustness and quality of computer hardware are recognized as pivotal determinants in the evolution of data storage technologies. Failures in computer hardware are typically categorized into three distinct phases: pre-failure, failure, and post-failure, with causative factors attributed to external, internal, and human elements. These factors collectively pose a formidable risk to the secure storage of networked computer data. Consequently, to substantially elevate the security level of data storage within computer networks, it is incumbent upon IT professionals to incessantly fortify the architecture of computer hardware. This necessitates a two-pronged approach:

Firstly, the optimization of the computer network is essential. In comparison to international benchmarks, China’s development in computer hardware performance is perceived as lagging, thereby impeding the nation’s technological progression. Addressing this, it is imperative to harness cutting-edge technologies, such as cloud computing, to effectuate a comprehensive enhancement of the computer network. This strategy promises not only to bolster hardware performance but also to refine the overall quality of hardware development.

Secondly, the design and development of computer hardware must be integrally aligned with the evolving landscape of the computer industry. This entails a collaborative, multi-faceted design paradigm that leverages collective strengths and mitigates weaknesses. Such an approach, deeply embedded in the research and development of computer technology, aims to amplify hardware performance. Concurrently, it seeks to augment the construction quality of the hardware, thereby fostering an environment conducive to the secure storage of computer network data.

7 Conclusion

In an epoch characterized by accelerated technological progression, the safeguarding of data within computer networks has emerged as both a novel developmental frontier and a domain susceptible to data breaches. This manuscript delineates an enhanced secure storage protocol predicated on an augmented All-Or-Nothing Transform (AONT), utilizing the Hybrid-AONT (H-AONT) dual encryption algorithm. This schema not only fortifies the security of network data repositories but also, to a measurable degree, bolsters the trustworthiness of third-party audit services. Concurrently, it facilitates the transference of computational burdens from the user to the third party, thereby alleviating the data processing load on the user end. Moreover, the third party is equipped to generate requisite duplicate files tailored to user specifications, which not only solidifies system dependability but also incrementally fortifies the system’s resilience against risks. To ensure the unimpeded advancement of secure data storage technologies, it is incumbent to perpetually refine the technical acumen of personnel tasked with data operation and maintenance management. This encompasses continual enhancements in computer hardware infrastructure, rigorous data vetting, and the reinforcement of computer network data security storage levels, all while amplifying data storage precision. Given the current climate of burgeoning information technology and the escalation of malevolent cyber activities, future research must integrate user-centric data storage requisites with ongoing refinements in network data storage methodologies to effectively mitigate data leakage risks.

References:

[1] Lei, L., Wang, Y., & Meng, F. (2012). Computer data security storage technology and application. Network Security Technology and Application, 4, 33-36.

[2] Chanhyuk, L., Jisoo, K., Heedong, K., et al. (2024). Addressing IoT storage constraints: a hybrid architecture for decentralized data storage and centralized management. Internet of Things, 25, 101014-101020.

[3] Fu, A.-m., Li, Y.-h., Yu, Y., et al. (2018). DIPOR: An IDA-based dynamic proof of retrievability scheme for cloud storage systems. Journal of Network and Computer Applications, 104, 97-106.

[4] Xue, J.-t., Xu, C.-x., Zhao, J.-n., et al. (2019). Identity-based public auditing for cloud storage systems against malicious auditors via blockchain. Science China Information Sciences, 62(3), 45-60.

[5] Ghulam, D., Sobia, N., Aamir, R., et al. (2024). Atomically engineered, high-speed non-volatile flash memory device exhibiting multi-bit data storage operations. Nano Energy, 119.

[6] Cervantée, E.K.W., Ngauru, T.R., Ken, T., et al. (2023). In safe hands: child health data storage, linkage and consent for use. Health Promotion International, 38(6), 1-10.

[7] Verdesoto, I., Navajas, F.Á., Roca, B.J.P., et al. (2023). Preventive Conservation of a Short Theatre Skit (Valencian "Sainete") with Cloud Data Storage and Internet of Things. Sensors, 23(24), 134-141.

[8] Famutimi, R.F., Oyelami, M.O., et al. (2023). An empirical comparison of the performances of single structure columnar in-memory and disk-resident data storage techniques using healthcare big data. Journal of Big Data, 10(1), 1123-1129.

[10] Sivakumaran, S. (2023). Easing the data sharing burden: perspectives and principles to successfully leverage AD data repositories, infrastructures and functionality. Alzheimer's & Dementia, 19(S21).

[11] Wen, X., & Liu, Q. (2023). The use of artificial intelligence technology in computer network data security storage. Digital Communication World, 11, 148-150.

[12] Liu, F., Wang, B., Jiang, F., et al. (2023). A secure storage system for hydropower station data based on cloud computing. Automation Technology and Application, 42(3), 97-100.

[13] Chou, T. B. (2023). Research on data security storage and permission verification scheme based on blockchain technology. Software, 44(7), 86-88.

[14] Han, S.-P. (2023). Data security storage method of electronic labor contract based on blockchain technology. Information Technology and Informatization, 5, 169-172.

[15] Jinshan. (2023). Design and experimental analysis of network data security storage retrieval system for medical big data. Science and Technology Innovation, 8, 96-99.

[16] Wang, L., Wang, Z., & Wang, L. (2021). Exploration of data security storage strategy based on cloud computing. Network Security Technology and Application, 6, 68-70.

[17] Liu, Y., Wang, H., Zhang, M., et al. (2018). Collaborative model for secure data storage in cloud computing environment. Computer Application Research, 35(10), 3091-3095.


Document information

Published on 17/05/24
Accepted on 05/05/24
Submitted on 23/04/24

Volume 40, Issue 2, 2024
DOI: 10.23967/j.rimni.2024.05.001
Licence: CC BY-NC-SA license
