How a file gets stored in HDFS

2/18/2018

  • Imagine a big text file.
  • The file is broken up into several blocks of data (chunks).
  • Each block is stored on a different node in the cluster.
  • Advantages of doing this:
    • Every block is the same size, which lets HDFS handle bigger files the same way as smaller ones.
    • It keeps storage simple.
    • Only copies of individual blocks, not the whole file, need to be replicated to different nodes.
    • Processes always deal with the same amount of data, which keeps processing time roughly equal across nodes.
  • The default block size is 128 MB (a worked example follows this list).
  • The NameNode holds the mapping of blocks to DataNodes (see the API sketch below).
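
To make the block math concrete, here is a minimal sketch (plain Java, no Hadoop needed) of how a hypothetical 300 MB file would be carved into 128 MB blocks. The class name and file size are invented for illustration.

public class BlockSplit {
    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;     // 128 MB, the HDFS default
        long fileSize  = 300L * 1024 * 1024;     // hypothetical 300 MB file

        long fullBlocks = fileSize / blockSize;  // blocks that are completely full
        long lastBlock  = fileSize % blockSize;  // bytes left over for the final block

        System.out.printf("%d full 128 MB blocks%n", fullBlocks);
        if (lastBlock > 0) {
            System.out.printf("1 final block of %d MB%n", lastBlock / (1024 * 1024));
        }
    }
}

This prints 2 full 128 MB blocks and 1 final block of 44 MB. Note that the final block only occupies its actual size on disk, not a padded 128 MB.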
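
To see the NameNode's block-to-DataNode mapping for a real file, the Hadoop Java client exposes FileSystem.getFileBlockLocations. Below is a minimal sketch; the path /data/bigfile.txt is hypothetical, and the Configuration is assumed to pick up your cluster settings from core-site.xml / hdfs-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlocks {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // reads cluster config from the classpath
        FileSystem fs = FileSystem.get(conf);

        FileStatus status = fs.getFileStatus(new Path("/data/bigfile.txt")); // hypothetical path
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

        // One entry per block: its byte range in the file and the DataNodes holding a replica
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
        fs.close();
    }
}

The same information is available from the command line with hdfs fsck <path> -files -blocks -locations.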
