<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>linux Archives - ITEC4B</title>
	<atom:link href="https://itec4b.com/tag/linux/feed/" rel="self" type="application/rss+xml" />
	<link>https://itec4b.com/tag/linux/</link>
	<description>Information Technology Expert Consulting</description>
	<lastBuildDate>Thu, 02 Mar 2023 18:56:27 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.3</generator>
	<item>
		<title>rsync: Remote Synchronization</title>
		<link>https://itec4b.com/rsync-remote-synchronization/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Thu, 02 Mar 2023 12:44:17 +0000</pubDate>
				<category><![CDATA[Application]]></category>
		<category><![CDATA[Data Transfer]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[rsync]]></category>
		<category><![CDATA[data transfer]]></category>
		<category><![CDATA[linux]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=989</guid>

					<description><![CDATA[rsync is a complete and powerful open source utility that provides fast incremental file transfer. It efficiently transfers and synchronizes files/directories between storage drive(s) and across networked hosts. It was created in 1996 by Andrew Tridgell and Paul Mackerras. It is currently maintained by Wayne Davison. rsync is freely available under the GNU General Public License. rsync source &#8230; <p class="link-more"><a href="https://itec4b.com/rsync-remote-synchronization/" class="more-link">Read more<span class="screen-reader-text"> "rsync: Remote Synchronization"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p><span style="text-decoration: underline;"><strong><a href="https://rsync.samba.org">rsync</a> is a complete and powerful open source utility that provides fast incremental file transfer</strong></span>.<br>It efficiently transfers and synchronizes files/directories between storage drive(s) and across networked hosts.</p>



<p>It was created in 1996 by Andrew Tridgell and Paul Mackerras.<br>It is currently maintained by Wayne Davison.<br><br>rsync is freely available under the GNU General Public License.<br><a href="https://git.samba.org/?p=rsync.git;a=tree">The rsync source code is here</a>.<br><br>The rsync algorithm is a type of delta encoding, used to minimize network usage.<br><span style="text-decoration: underline;"><strong>It efficiently identifies which parts (blocks produced by splitting the file) of a source file match some part of an existing destination file; those parts do not need to be sent across the communication link. This minimizes the amount of data to transfer, because only the portions of files that have changed are sent</strong></span>.<br><br>For further speed improvements, the data sent to the receiver can be compressed using any of the supported algorithms.<br><br><strong>ssh has been the default remote shell for rsync since <a href="https://download.samba.org/pub/rsync/NEWS#2.6.0">version 2.6.0 (January 1st, 2004)</a></strong></p>



<pre class="wp-block-code"><code>Install rsync (Debian)
# apt install rsync

rsync version
$ rsync -V</code></pre>



<h2>Usage</h2>



<p><strong><span style="text-decoration: underline;">Local SRC &gt; Local DST</span></strong><br><code>rsync [OPTIONS] SRC [DST]</code></p>



<p><strong><span style="text-decoration: underline;">Push (Local SRC > Remote DST)</span></strong> <code>rsync [OPTIONS] SRC [USER@]HOST:DST</code><br><strong><span style="text-decoration: underline;">Pull (Local DST &lt; Remote SRC)</span></strong> <code>rsync [OPTIONS] [USER@]HOST:SRC [DST]</code><br><br>Usage with just one SRC arg and no DST arg lists the source files instead of copying them:<br><code><strong>&lt;type>&lt;perms_rwx> &lt;size_bytes> &lt;mtime YYYY/MM/DD> &lt;mtime hh:mm:ss> &lt;relative_path></strong></code><br><br><strong><span style="text-decoration: underline;">IMPORTANT: rsync must be installed on both the source and destination machines</span></strong><br><br>If you see this error:<br><code>rsync: command not found<br>rsync: connection unexpectedly closed (0 bytes received so far) [sender]<br>rsync error: error in rsync protocol data stream ...</code><br><br>it means the local rsync cannot find the remote rsync executable.<br>In this case you need to know the path of the remote host&#8217;s rsync binary and pass it on the command line with <code><strong>--rsync-path=/path/to/remote/rsync</strong></code></p>
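<p>The push and pull forms can be sketched as follows (hostnames and paths below are placeholders, not real systems):</p>

```shell
# Push: copy a local directory to a remote host (ssh is the default transport)
rsync -av /path/to/local/SRC_DIR user@remote.example:/path/to/DST_DIR

# Pull: fetch a remote directory into a local destination
rsync -av user@remote.example:/path/to/SRC_DIR /path/to/local/DST_DIR

# If the remote rsync binary is not in the remote PATH:
rsync -av --rsync-path=/opt/bin/rsync /path/to/local/SRC_DIR user@remote.example:/path/to/DST_DIR
```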



<pre class="wp-block-code"><code>$ which rsync
/usr/bin/rsync  (Debian)</code></pre>



<h2>Options</h2>



<p><span style="text-decoration: underline;"><strong>If the <code>--delete</code> option is specified, rsync will identify the files NOT present on the sender and delete them on the receiver</strong></span>. This option can be dangerous if used incorrectly! It is recommended to do a simulation run beforehand, using the <code><strong>--dry-run</strong></code> option (<code><strong>-n</strong></code>), to find out which files would be deleted.<br><br>Each file from the list generated by rsync will be checked to see if it can be skipped.<br><span style="text-decoration: underline;"><strong>In the most common mode of operation, files are not skipped if the modification time or size differs</strong></span>.</p>



<p>rsync performs a slower but comprehensive check if invoked with the <code><strong>--checksum</strong></code> option.<br>This forces a full checksum comparison on every file present on both systems.<br><br><code><span style="text-decoration: underline;"><strong>--checksum, -c</strong></span></code><br><strong><span style="text-decoration: underline;">Skip files based on checksum, not mtime AND size</span></strong>.<br>This changes the way rsync checks if the files have been changed and are in need of a transfer.<br>Without this option, rsync uses a &#8220;quick check&#8221; that (by default) checks if each file&#8217;s size and time of last modification match between the sender and receiver.<br>This option changes this to compare a 128-bit checksum for each file that has a matching size.<br>Generating the checksums means that both sides will expend a lot of disk I/O reading all the data in the files in the transfer, so this can slow things down significantly (and this is prior to any reading that will be done to transfer changed files)</p>



<p><code><strong><span style="text-decoration: underline;">--human-readable, -h</span></strong></code><br>Output numbers in a more human-readable format.<br>Unit letters: K (Kilo), M (Mega), G (Giga), T (Tera), or P (Peta).</p>



<p><strong><span style="text-decoration: underline;"><code>--dry-run, -n</code></span></strong><br>Simulation run (no changes made)</p>



<p><code><strong><span style="text-decoration: underline;">--verbose, -v</span></strong></code><br>Increases the amount of information you are given during the transfer.<br>By default, rsync works silently.<br>A single -v will give you information about what files are being transferred and a brief summary at the end.<br>Two -v options will give you information on what files are being skipped and slightly more information at the end.<br>More than two -v options should only be used if you are debugging rsync.</p>



<p><code><strong><span style="text-decoration: underline;">--quiet, -q</span></strong></code><br>Decreases the amount of information you are given during the transfer, notably suppressing information messages from the remote server. This option is useful when invoking rsync from cron.</p>



<p><strong><span style="text-decoration: underline;"><code>--info=FLAGS</code></span></strong><br>Choose the information output<br>An individual flag name may be followed by a level number, with 0 meaning to silence that output, 1 being the default output level, and higher numbers increasing the output of that flag (for those that support higher levels).<br><code>$ rsync --info=help</code><br><code>$ <strong>rsync -av --info=progress2 SRC/ DST/</strong></code></p>



<p><span style="text-decoration: underline;"><code><strong>--progress</strong></code></span><br>Print information showing the progress of the transfer.<br>This is the same as specifying <code>'<strong>--info=flist2,name,progress</strong>'</code> but any user-supplied settings for those info flags takes precedence (e.g. <code>--info=flist0 --progress</code>).</p>



<p>While rsync is transferring a regular file, it updates a progress line that looks like this:<br><code><strong>&lt;reconstructed_bytes&gt; &lt;%_current_file&gt; &lt;throughput/sec&gt; &lt;remaining_time&gt;</strong></code></p>



<p>When the file transfer is done, rsync replaces the progress line with a summary line that looks like this:<br><code><strong>&lt;filesize_bytes&gt; 100% &lt;throughput/sec&gt; &lt;elapsed_time&gt; (xfr#?, to-chk=???/N)</strong></code><br>where ? is the nth transfer and ??? is the number of files remaining for the receiver to check (to see if they are up to date or not)<br><br>In an incremental recursion scan (<code><strong>--recursive</strong></code>), rsync doesn&#8217;t know the total number of files in the file list until it reaches the end of the scan. Since it starts transferring files during the scan, it displays a line with the text &#8220;ir-chk&#8221; (for incremental recursion check) instead of &#8220;to-chk&#8221; until it knows the full size of the list, at which point it switches to &#8220;to-chk&#8221;. &#8220;ir-chk&#8221; lets you know that the number of files in the file list is still going to increase.</p>



<p><strong><span style="text-decoration: underline;"><code>--archive, -a</code></span></strong><br><strong>It is equivalent to <code>-rlptgoD</code></strong><br>This is a quick way of saying you want recursion and want to preserve almost everything.<br><span style="text-decoration: underline;"><strong>Be aware that it does not include preserving ACLs (<code>-A</code>), xattrs (<code>-X</code>), atimes (<code>-U</code>), crtimes (<code>-N</code>), nor the finding and preserving of hardlinks (<code>-H</code>)</strong></span>.<br>The only exception to the above equivalence is when <code>--files-from</code> is specified, in which case <code>-r</code> is not implied.</p>



<p><strong><span style="text-decoration: underline;"><code>--recursive, -r</code></span></strong><br>This tells rsync to copy directories recursively. See also <code>--dirs</code> (<code>-d</code>).<br>Beginning with rsync 3.0.0, the recursive algorithm used is an incremental scan that uses much less memory than before and <span style="text-decoration: underline;"><strong>begins the transfer after the scanning of the first few directories has been completed</strong></span>.<br>Incremental recursion is only possible when both ends of the transfer are at least version 3.0.0.<br><br><span style="text-decoration: underline;">Some options require rsync to know the full file list, so these options disable the incremental recursion mode.<br>These include: <code>--delete-before</code>, <code>--delete-after</code>, <code>--prune-empty-dirs</code>, and <code>--delay-updates</code></span>.</p>



<p>Because of this, <strong><span style="text-decoration: underline;">the default delete mode when you specify <code>--delete</code> is now <code>--delete-during</code> when both ends of the connection are at least 3.0.0</span></strong> (use <code>--del</code> or <code>--delete-during</code> to request this improved deletion mode explicitly).<br>See also the <code>--delete-delay</code> option, which is a better choice than using <code>--delete-after</code>.</p>



<p>Incremental recursion can be disabled using the <code>--no-inc-recursive</code> option or its shorter <code>--no-i-r</code> alias.</p>



<p><strong><code><span style="text-decoration: underline;">--delete-during, --del</span></code></strong><br>Request that the file deletions on the receiving side be done incrementally as the transfer happens.<br>The per-directory delete scan is done right before each directory is checked for updates, so it behaves like a more efficient <code>--delete-before</code>. This option was first added in rsync version 2.6.4. See <code>--delete</code> (which is implied) for more details on file deletion.</p>



<p><strong><code><span style="text-decoration: underline;">--delete-before</span></code></strong><br>Request that the file deletions on the receiving side be done before the transfer starts.<br>It does imply a delay before the start of the transfer, and this delay might cause the transfer to timeout (if <code>--timeout</code> was specified). It also forces rsync to use the old, non-incremental recursion algorithm that requires rsync to scan all the files in the transfer into memory at once (see <code>--recursive</code>).</p>



<p><strong><span style="text-decoration: underline;"><code>--delete-after</code></span></strong><br>Request that the file deletions on the receiving side be done after the transfer has completed.<br><span style="text-decoration: underline;">Important: this option forces rsync to use the old, non-incremental recursion algorithm that requires rsync to scan all the files in the transfer into memory at once (see <code>--recursive</code>)</span>. Use <code>--delete-delay</code> instead.</p>



<p><code><strong><span style="text-decoration: underline;">--delete-delay</span></strong></code><br>Request that the file deletions on the receiving side be computed during the transfer (like <code>--delete-during</code>), but removed after the transfer completes. <span style="text-decoration: underline;">This is more efficient than using <code>--delete-after</code></span>.<br>If the number of removed files overflows an internal buffer, a temporary file will be created on the receiving side to hold the names. If the creation of the temporary file fails, rsync will try to fall back to using <code>--delete-after</code> (which it cannot do if <code>--recursive</code> is doing an incremental scan).</p>



<p><strong><span style="text-decoration: underline;"><code>--links, -l</code></span></strong><br>By default, symbolic links are not transferred at all.<br>A message <code>"skipping non-regular file"</code> is emitted for any symlinks that exist.<br>If <code>--links</code> is specified, then symlinks are recreated with the same target on the destination.<br>Note that <code>--archive</code> implies <code>--links</code>.</p>



<p><strong><span style="text-decoration: underline;"><code>--perms, -p</code></span></strong><br>Preserve permissions<br>This option causes the receiving rsync to <strong><span style="text-decoration: underline;">set the destination permissions to be the same as the source permissions</span></strong>.<br>(See also the <code>--chmod</code> option for a way to modify what rsync considers to be the source permissions)<br><br><span style="text-decoration: underline;">When this option is off, permissions are set as follows</span>:<br><br>&#8211; Existing files (including updated files) retain their existing permissions, though the <code>--executability</code> option might change just the execute permission for the file.<br><br>&#8211; New files get their &#8220;normal&#8221; permission bits set to the source file&#8217;s permissions masked with the receiving directory&#8217;s default permissions (either the receiving umask, or the permissions specified via the destination directory&#8217;s default ACL), AND their special permission bits disabled except in the case where a new directory inherits a setgid bit from its parent directory.</p>



<p>Thus, when<code> --perms</code> and <code>--executability</code> are both disabled, rsync&#8217;s behavior is the same as that of other file copy utilities, such as cp(1) and tar(1).</p>



<p>In summary:<br><span style="text-decoration: underline;"><strong>To give destination files (both existing and new) the source permissions, use <code>--perms</code></strong></span>.<br>To give new files the destination default permissions (while leaving existing files unchanged), make sure that the <code>--perms</code> option is off and use <code>--chmod=ugo=rwX</code> (which ensures that all non-masked bits get enabled).</p>



<p>The preservation of the destination&#8217;s setgid bit on newly-created directories when <code>--perms</code> is off was added in rsync 2.6.7.</p>



<p><strong><span style="text-decoration: underline;"><code>--times, -t</code></span></strong><br><strong><span style="text-decoration: underline;">Preserve modification times</span></strong><br>This tells rsync to transfer modification times along with the files and update them on the remote system.<br>Note that if this option is not used, the optimization that excludes files that have not been modified cannot be effective.<br>In other words, a missing <code>-t</code> or <code>-a</code> will cause the transfer to behave as if it used <code>--ignore-times</code>, causing all files to be updated (though rsync&#8217;s delta-transfer algorithm will make the update fairly efficient if the files haven&#8217;t actually changed, you&#8217;re much better off using <code>-t</code>).</p>



<p><strong><span style="text-decoration: underline;"><code>--ignore-times, -I</code></span></strong><br>Normally rsync will skip any files that are already the same size and have the same modification timestamp.<br>This option turns off this &#8220;quick check&#8221; behavior, causing all files to be updated.</p>



<p><strong><span style="text-decoration: underline;"><code>--atimes, -U</code></span></strong><br><strong><span style="text-decoration: underline;">Preserve access times</span></strong><br>This tells rsync to <strong><span style="text-decoration: underline;">set the access (use) times of the destination files to the same value as the source files</span></strong>.<br><strong><span style="text-decoration: underline;">Nanoseconds are not preserved (they are set to .000000000), whereas the command <code>cp -a</code> does preserve them</span></strong>.</p>



<p><strong><span style="text-decoration: underline;">IMPORTANT</span>:<br>There is no option to preserve ctime, the &#8220;status change time&#8221;</strong><br><strong>(the timestamp recording when the inode last changed; it is specific to a filesystem)</strong><br>An inode changes if any of its attributes are updated:<br>&#8211; at creation time (new file)<br>&#8211; file name<br>&#8211; mode/permissions<br>&#8211; owner/group<br>&#8211; hard link count<br>etc.<br><br>The creation of a file is one of the conditions listed above (creation of the inode/file).<br>ctime cannot be preserved when files are brought into a new filesystem.</p>



<p><strong><span style="text-decoration: underline;"><code>--open-noatime</code></span></strong><br><strong><span style="text-decoration: underline;">Avoid changing the atime on opened file</span></strong><br>This tells rsync to open files with the <code>O_NOATIME</code> flag (on systems that support it) to avoid changing the access time of the files that are being transferred. If your OS does not support the <code>O_NOATIME</code> flag then rsync will silently ignore this option. Note also that some filesystems are mounted to avoid updating the atime on read access even without the <code>O_NOATIME</code> flag being set.</p>



<p><strong><span style="text-decoration: underline;"><code>--crtimes, -N</code></span></strong><br><strong>MAY NOT BE SUPPORTED, DEPENDS ON THE FILESYSTEM</strong>.<br>This tells rsync to set the create times (newness) of the destination files to the same value as the source files.</p>



<p><strong><span style="text-decoration: underline;"><code>--group, -g</code></span></strong><br><strong>Preserve group</strong><br>This option causes rsync to <strong><span style="text-decoration: underline;">set the group of the destination file to be the same as the source file</span></strong>.<br><span style="text-decoration: underline;">If the receiving program is not running as the super-user (or if <code>--no-super</code> was specified), only groups that the invoking user on the receiving side is a member of will be preserved</span>. Without this option, the group is set to the default group of the invoking user on the receiving side.</p>



<p><strong><span style="text-decoration: underline;"><code>--owner, -o</code></span></strong><br>This option causes rsync to <strong><span style="text-decoration: underline;">set the owner of the destination file to be the same as the source file, but only if the receiving rsync is being run as the super-user</span></strong> (see also the <code>--super</code> and <code>--fake-super</code> options).<br>Without this option, the owner of new and/or transferred files are set to the invoking user on the receiving side.</p>



<p><strong><span style="text-decoration: underline;"><code>--acls, -A</code></span></strong><br>This option causes rsync to <strong><span style="text-decoration: underline;">update the destination ACLs to be the same as the source ACLs</span></strong>.<br>The option also implies <code>--perms</code>.<br>The source and destination systems must have compatible ACL entries for this option to work properly.<br>See the <code>--fake-super</code> option for a way to backup and restore ACLs that are not compatible.</p>



<p><strong><span style="text-decoration: underline;"><code>--xattrs, -X</code></span></strong><br>This option causes rsync to update the destination extended attributes to be the same as the source ones.</p>



<p><strong><span style="text-decoration: underline;"><code>--hard-links, -H</code></span></strong><br>This tells rsync to look for hard-linked files in the source and link together the corresponding files on the destination. Without this option, hard-linked files in the source are treated as though they were separate files.<br><br>This option does NOT necessarily ensure that the pattern of hard links on the destination exactly matches that on the source.</p>



<h2>Usual Usage</h2>



<p><strong><span style="text-decoration: underline;">Local SRC_DIR > Local DST_DIR</span></strong><br>NOTE: By default, if Local DST_DIR does not exist it is created</p>



<pre class="wp-block-code"><code><span style="text-decoration: underline;">Copy SRC_DIR inside /path/to/local/DST_DIR/ : /path/to/local/DST_DIR/SRC_DIR</span>
$ rsync -av --info=progress2 /path/to/local/SRC_DIR /path/to/local/DST_DIR

<span style="text-decoration: underline;">Copy &lt;src_path>'s content inside &lt;dst_path>/</span>
$ rsync -av --info=progress2 &lt;src_path><strong>/</strong> &lt;dst_path>/
$ rsync -av --info=progress2 &lt;src_path><strong>/*</strong> &lt;dst_path>/</code></pre>



<p><strong><span style="text-decoration: underline;">Local SRC_FILE > Local DST</span></strong></p>



<pre class="wp-block-code"><code>$ rsync -av --info=progress2 /path/to/local/SRC_FILE /path/to/local/DST</code></pre>



<p><span style="text-decoration: underline;">If DST is a directory</span>, SRC_FILE is copied inside DST<br><br><span style="text-decoration: underline;">If DST is a file</span>, its content is replaced with the content of SRC_FILE<br>(with the -a option ONLY mtime is preserved; atime and ctime differ. You may add the -U and -N options)<br><br><span style="text-decoration: underline;">If DST does not exist</span>:<br>&#8211; if there is a trailing slash &#8216;/&#8217;, rsync creates the directory DST (only a single new directory level; it does not behave like &#8220;mkdir -p&#8221;) AND copies SRC_FILE inside DST<br><br>&#8211; otherwise it creates the file DST (a copy of SRC_FILE)</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>qpdf: PDF Transformation Software</title>
		<link>https://itec4b.com/qpdf-pdf-transformation-software/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Sat, 25 Feb 2023 17:16:05 +0000</pubDate>
				<category><![CDATA[Application]]></category>
		<category><![CDATA[File Manipulation]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[PDF]]></category>
		<category><![CDATA[qpdf]]></category>
		<category><![CDATA[file manipulation]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[pdf]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=1573</guid>

					<description><![CDATA[qpdf is both a free command-line program and a C++ library (open source PDF manipulation library) for structural, content-preserving transformations on PDF files. qpdf has been designed with very few external dependencies and is intentionally very lightweight. It was created in 2005 by Jay Berkenbilt. One of the main features is the capability to merge and &#8230; <p class="link-more"><a href="https://itec4b.com/qpdf-pdf-transformation-software/" class="more-link">Read more<span class="screen-reader-text"> "qpdf: PDF Transformation Software"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p><a href="http://qpdf.sourceforge.net">qpdf</a> is both a free command-line program and a C++ library (open source PDF manipulation library) for structural, content-preserving transformations on PDF files.<br>qpdf has been designed with very few external dependencies and is intentionally very lightweight.<br><br>It was created in 2005 by Jay Berkenbilt.<br><br><strong><span style="text-decoration: underline;">One of the main features is the capability to merge and split PDF files by selecting pages from one or more input files</span></strong>.<br><span style="text-decoration: underline;"><strong>It is also capable of performing a variety of transformations such as linearization (known as web optimization or fast web viewing), encryption, and decryption of PDF files</strong></span>.<br><br><a href="https://qpdf.readthedocs.io/en/stable/cli.html">qpdf Online Documentation</a><br><br><span style="text-decoration: underline;">qpdf Local Documentation</span>: /usr/share/doc/qpdf/qpdf-manual.html</p>



<h2>Portable Document Format</h2>



<p><a href="https://www.adobe.com/acrobat/about-adobe-pdf.html">PDF was created at Adobe in 1992, under Dr. John Warnock</a>, offering an easy, reliable way to present and exchange documents regardless of the software, hardware, or operating systems being used.<br>Today, it is one of the most trusted file formats around the world, and it can be easily viewed on any operating system.<br><br><span style="text-decoration: underline;">PDF was standardized as ISO 32000 in 2008 as an open standard</span>.<br>The PDF format is now maintained by the International Organization for Standardization (ISO).<br><span style="text-decoration: underline;">The ISO 32000-2:2020 edition was published in December 2020; it does not include any proprietary technologies</span>.</p>



<p>The PDF specification also provides for encryption (in which case a password is needed to view or edit the contents), digital signatures (to provide secure authentication), file attachments, and metadata.<br>PDF 2.0 defines 256-bit AES encryption as standard for PDF 2.0 files.<br><br>The standard security provided by PDF consists of two different passwords:<br><br>&#8211; user password, which encrypts the file and prevents opening<br><br>&#8211; owner password, which specifies operations that should be restricted even when the document is decrypted, which can include modifying, printing, or copying text and graphics out of the document, or adding or modifying text notes.</p>



<p>The user password encrypts the file; the owner password does not, instead relying on client software to respect content restrictions.<br>An owner password can easily be removed by software.<br>Thus, the use restrictions that an author places on a PDF document are not secure, and cannot be assured once the file is distributed.</p>
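<p>As an illustrative sketch of the two passwords (filenames and passwords below are placeholders; qpdf must be installed):</p>

```shell
# Encrypt: set a user password and an owner password, with a 256-bit AES key
qpdf --encrypt userpw ownerpw 256 -- plain.pdf encrypted.pdf

# Decrypt: remove the encryption entirely, supplying a valid password
qpdf --decrypt --password=ownerpw encrypted.pdf decrypted.pdf
```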



<p>Metadata includes information about the document and its content, such as the author’s name, document title, description, creation/modification dates, application used to create the file, keywords, copyright information, etc.</p>



<h2>Install qpdf (Debian)</h2>



<pre class="wp-block-code"><code># apt install qpdf</code></pre>



<h2>Usage</h2>



<p><code>--linearize</code><br>Create linearized (web-optimized) output file.<br>Linearized files are formatted in a way that allows compliant readers to begin displaying a PDF file before it is fully downloaded.<br>Ordinarily, the entire file must be present before it can be rendered because important cross-reference information typically appears at the end of the file.</p>



<pre class="wp-block-code"><code>$ qpdf --linearize infile.pdf  outfile.pdf</code></pre>



<h2>Merge PDF files with pages selection</h2>



<p>qpdf allows you to use the <code>--pages</code> option to select pages from one or more input files.</p>



<pre class="wp-block-code"><code>$ qpdf primary_input_file.pdf --pages . &#91;--password=password] &#91;page-range] &#91; ... ] -- outputfile.pdf

Within &#91; ... ] you may repeat the following:  inputfile_N.pdf &#91;--password=password] &#91;page-range]</code></pre>



<p>The special input file <code>'.'</code> can be used as an alias for the primary input file.<br>Multiple input files may be specified, and you can select specific pages from each of them.<br>For each input file that pages should be extracted from, specify the filename, a password (if needed) to open the file, and a page range.<br>Note that <code>'--'</code> terminates parsing of page selection flags.<br><br><code>--password=password</code> specifies a password for accessing encrypted files<br>The password option is only needed for password-protected files<br><br>The page range may be omitted. In this case, all pages are included.<br><br>Document-level information (metadata, outline, etc.) is taken from the primary input file (in the above example, <code>primary_input_file.pdf</code>) and is preserved in <code>outputfile.pdf</code><br><strong><span style="text-decoration: underline;">You can use <code>--empty</code> in place of the primary input file to start from an empty file (without any metadata, outline, etc.) and just merge selected pages from the input files</span></strong>.<br><br><strong><span style="text-decoration: underline;">In most cases you will use the following syntax</span></strong></p>



<pre class="wp-block-code"><code>$ qpdf --empty --pages inputfile_1.pdf &#91;page-range] inputfile_2.pdf &#91;page-range] inputfile_3.pdf &#91;page-range] &#91; ... ] -- outputfile.pdf</code></pre>



<p>The page-range is a set of numbers separated by commas, ranges of numbers separated by dashes, or combinations of those.<br>The character <code>'z'</code> represents the last page.<br>A number preceded by an <code>'r'</code> indicates counting from the end, so <code>r3-r1</code> would be the last three pages of the document.<br>Pages can be specified in any order (selection of any pages).<br>Ranges can be specified in any order (ascending or descending): a high number followed by a low number causes the pages to appear in reverse.<br>Numbers may be repeated in a page range.<br>A page range may optionally be appended with <code>:even</code> or <code>:odd</code> to indicate only the even or odd pages in the given range.<br>Note that even and odd refer to the positions within the specified range, not to whether the original page number is even or odd.<br><br><span style="text-decoration: underline;">Example page ranges</span>:<br><br>1,3,5-9,15-12<br>Pages 1, 3, 5, 6, 7, 8, 9, 15, 14, 13, and 12 in that order</p>



<p>z-1<br>All pages in the document in reverse</p>



<p>r3-r1<br>The last three pages of the document</p>



<p>r1-r3<br>The last three pages of the document in reverse order</p>



<p>1-20:even<br>Even pages from 2 to 20</p>



<p>5,7-9,12:odd<br>Pages 5, 8 and 12, which are the pages in odd positions from among the original range (pages 5, 7, 8, 9, and 12)</p>



<pre class="wp-block-code"><code>Example: extract pages 1 through 5 from infile.pdf while preserving all metadata associated with that file in outfile.pdf
$ qpdf infile.pdf --pages . 1-5 -- outfile.pdf

If you want pages 1 through 5 from infile.pdf without any metadata, use
$ qpdf --empty --pages infile.pdf 1-5 -- outfile.pdf

Merge all .pdf files
$ qpdf --empty  --pages *.pdf -- outfile.pdf</code></pre>



<h2>Split a PDF into separate PDF files</h2>



<p><code>--split-pages[=n]</code><br>Write each group of n pages to a separate output file.<br>If n is not specified, create single pages.<br><br>Output file names are generated as follows:<br>If the string %d appears in the output file name, it is replaced with a range of zero-padded page numbers starting from 1.<br>Otherwise, if the output file name ends in .pdf (case insensitive), a zero-padded page range, preceded by a dash, is inserted before the file extension.<br>Otherwise, the file name is appended with a zero-padded page range preceded by a dash.<br><br>Zero padding is added to all page numbers in file names so that all the numbers are the same length, which causes the output filenames to sort lexically in numerical order.<br><br>Page ranges are a single number in the case of single-page groups or two numbers separated by a dash otherwise.<br><br>Here are some examples. In these examples, infile.pdf has 20 pages</p>



<pre class="wp-block-code"><code>Output files are 01-outfile through 20-outfile with no extension
$ qpdf --split-pages infile.pdf %d-outfile

Output files are outfile-01.pdf through outfile-20.pdf
$ qpdf --split-pages infile.pdf outfile.pdf

Output files are outfile-01-04.pdf, outfile-05-08.pdf, outfile-09-12.pdf, outfile-13-16.pdf, outfile-17-20.pdf
$ qpdf --split-pages=4 infile.pdf outfile.pdf

Output files are outfile.notpdf-01 through outfile.notpdf-20
The extension .notpdf is not treated in any special way regarding the placement of the number
$ qpdf --split-pages infile.pdf outfile.notpdf</code></pre>



<p>Note that metadata, outline, etc., and other document-level features of the original PDF file are not preserved.<br>For each page of output, this option creates an empty PDF and copies a single page from the input into it.<br>If you require the document-level data, you will have to run qpdf with the <code>--pages</code> option once for each page.<br>Using <code>--split-pages</code> is much faster if you don’t require the document-level data.<br><br><span style="text-decoration: underline;">If you don’t want to split out every page, use page ranges to select only the pages you want to extract</span>.<br>The page range specifies the pages or ranges you want, <span style="text-decoration: underline;">but each extracted page is still written to its own single-page PDF</span>.</p>



<pre class="wp-block-code"><code>$ qpdf --split-pages infile.pdf outfile.pdf --pages infile.pdf 4-5,8,9-13 --</code></pre>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Linux: ASCII random string generator</title>
		<link>https://itec4b.com/linux-ascii-random-string-generator/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Thu, 23 Feb 2023 16:20:49 +0000</pubDate>
				<category><![CDATA[Debian]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Shell Scripting]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[password]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=1567</guid>

					<description><![CDATA[https://github.com/ITEC4B/ASCII-random-string-generator]]></description>
										<content:encoded><![CDATA[
<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">Generate a random ASCII string (40 printable characters without empty spaces)</span></strong>

$ cat /dev/urandom | tr -dc '&#91;:graph:]' | head -c 40
OR
$ cat /dev/urandom | tr -dc '&#91;:alnum:]&#91;:punct:]' | head -c 40

&#91;:lower:]   All lower case letters     abcdefghijklmnopqrstuvwxyz
&#91;:upper:]   All upper case letters     ABCDEFGHIJKLMNOPQRSTUVWXYZ
&#91;:alpha:]   All letters
&#91;:digit:]   All digits                 0123456789
&#91;:alnum:]   All letters and digits
&#91;:punct:]   All punctuation characters !"#$%&amp;'()*+,-./:;&lt;=>?@&#91;\]^_`{|}~
&#91;:graph:]   All printable characters, not including space
&#91;:print:]   All printable characters, including space</code></pre>



<p><a href="https://github.com/ITEC4B/ASCII-random-string-generator">https://github.com/ITEC4B/ASCII-random-string-generator</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Linux Symbolic/Hard Links</title>
		<link>https://itec4b.com/linux-symbolic-hard-links/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Sun, 12 Feb 2023 20:03:14 +0000</pubDate>
				<category><![CDATA[File System]]></category>
		<category><![CDATA[filesystem]]></category>
		<category><![CDATA[inode]]></category>
		<category><![CDATA[linux]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=1309</guid>

					<description><![CDATA[To understand links in a file system, you first have to understand what an inode is. In linux there are two types of links :&#8211; Soft/Symbolic Links&#8211; Hard Links Hard Links Every file on the Linux filesystem starts with a single hard link.The link is between the filename and the actual data stored on the &#8230; <p class="link-more"><a href="https://itec4b.com/linux-symbolic-hard-links/" class="more-link">Read more<span class="screen-reader-text"> "Linux Symbolic/Hard Links"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p><span style="text-decoration: underline;">To understand links in a file system, you first have to understand <a href="https://itec4b.com/linux-filesystem-directory-entries-inodes-datablocks">what an inode is</a></span>.<br><br><span style="text-decoration: underline;"><strong>In linux there are two types of links</strong></span> :<br>&#8211; <strong>Soft/Symbolic Links</strong><br>&#8211; <strong>Hard Links</strong></p>



<h2>Hard Links</h2>



<p><span style="text-decoration: underline;">Every file on the Linux filesystem starts with a single hard link</span>.<br>The link is between the filename and the actual data stored on the filesystem (directory entry &gt; inode &gt; data blocks).<br><br><strong><span style="text-decoration: underline;">When you create a hard link you create a file that gets the same inode as the target file</span></strong>.<br><span style="text-decoration: underline;"><strong>You have different file names/paths for a unique physical file on a partition (pointing to the same inode)</strong></span> </p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><a href="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-hardlink-inodes-datablocks.png"><img decoding="async" src="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-hardlink-inodes-datablocks.png" alt="" class="wp-image-1321" width="697" height="393" srcset="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-hardlink-inodes-datablocks.png 929w, https://itec4b.com/wp-content/uploads/2023/02/linux-schema-hardlink-inodes-datablocks-300x169.png 300w, https://itec4b.com/wp-content/uploads/2023/02/linux-schema-hardlink-inodes-datablocks-768x433.png 768w" sizes="(max-width: 697px) 100vw, 697px" /></a></figure></div>


<p class="has-vivid-red-color has-text-color"><strong><span style="text-decoration: underline;">Hard links can only be created for regular files (not directories or special files) and ONLY within the same filesystem.<br>A hard link cannot span multiple filesystems</span></strong>.</p>



<p class="has-black-color has-text-color">If you delete the <code>"</code>original file<code>"</code>, you can still access it via any remaining hard link having the same inode.<br>Apart from the filename/filepath, you cannot tell which one is the hard link since they share the same inode.</p>



<pre class="wp-block-code"><code>$ ln /path/to/target_file /path/to/hardlink</code></pre>
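

<p>As a quick sanity check (a sketch using throwaway file names in a temporary directory), you can verify that a hard link shares the target&#8217;s inode and keeps the data reachable after the original name is removed:</p>



<pre class="wp-block-code"><code>$ cd "$(mktemp -d)"                              # throwaway working directory
$ echo 'hello' > original.txt
$ ln original.txt hardlink.txt                   # create the hard link
$ stat -c '%i %h %n' original.txt hardlink.txt   # same inode, link count 2 for both
$ rm original.txt
$ cat hardlink.txt                               # data still reachable: hello</code></pre>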



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List files with their inodes from &lt;dir>, Recursive, No Sort, date (mtime)</span></strong>
<strong>CMD_LS_RECURSIVE_INODE_MTIME_NOSORT</strong>
<strong><span style="text-decoration: underline;">OUTPUT</span>:</strong>
<strong>&lt;inode> &lt;mode?rwx> &lt;links> &lt;uname> &lt;gname> &lt;size_bytes> &lt;date YYYY-MM-DD> &lt;time hh:mm:ss> &lt;filepath></strong>

$ LC_ALL=C ls -ilR --time-style='+%F %T' &lt;dir> 2>/dev/null | sed -e '/:$/,/^total &#91;0-9]\{1,\}/d' -n -e '/^&#91;0-9]\{1,\} -/p' | tr -s '\n'


<strong><span style="text-decoration: underline;">List inodes with hard link(s) from &lt;dir>, Recursive, Natural Sort (inode first), date (mtime)</span></strong>
<strong><span style="text-decoration: underline;">OUTPUT</span>:</strong>
<strong>&lt;inode> &lt;mode?rwx> &lt;links> &lt;uname> &lt;gname> &lt;size_bytes> &lt;date YYYY-MM-DD> &lt;time hh:mm:ss> &lt;filepath></strong>

$ &lt;CMD_LS_RECURSIVE_INODE_MTIME_NOSORT> | awk '$3 > 1 {print $0}' | sort


<span style="text-decoration: underline;"><strong>Find inodes with hard link(s)</strong></span>
<strong><span style="text-decoration: underline;">OUTPUT</span>:</strong> <strong>&lt;inode></strong>
$ &lt;CMD_LS_RECURSIVE_INODE_MTIME_NOSORT> | awk '$3 > 1 {print $1}' | sort -u</code></pre>



<h2>Soft/Symbolic Links</h2>



<p><strong><span style="text-decoration: underline;">When you create a soft link, you create a new file with a new inode, which points to the target path</span></strong>.<br><strong><span style="text-decoration: underline;">It doesn&#8217;t reference the target inode</span></strong>. <strong><span style="text-decoration: underline;">If the target&#8217;s path/name changes or is deleted, the reference breaks (it points to a nonexistent file path)</span></strong>.<br><br>Symbolic links can link together non-regular and regular files.<br><span style="text-decoration: underline;"><strong>They can also span multiple filesystems</strong></span>.<br><br><strong><span style="text-decoration: underline;">A symbolic link is identified by the mode lrwxrwxrwx, which cannot be changed</span></strong>, so symbolic links are easy to identify.<br>The symbolic link&#8217;s size is the length of the target&#8217;s path.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><a href="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-softlink-inodes-datablocks.png"><img decoding="async" loading="lazy" src="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-softlink-inodes-datablocks.png" alt="" class="wp-image-1322" width="721" height="393" srcset="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-softlink-inodes-datablocks.png 961w, https://itec4b.com/wp-content/uploads/2023/02/linux-schema-softlink-inodes-datablocks-300x164.png 300w, https://itec4b.com/wp-content/uploads/2023/02/linux-schema-softlink-inodes-datablocks-768x419.png 768w" sizes="(max-width: 721px) 100vw, 721px" /></a></figure></div>


<p>Changing the owner, group, or permissions of a symbolic link only affects the target file; in that case the target file&#8217;s ctime is updated.<br><br>A change to a symlink&#8217;s name updates its access time (atime) and status change time (ctime).<br>That is the only thing you can change on the symbolic link itself.</p>



<pre class="wp-block-code"><code>$ ln -s /path/to/target_file /path/to/symlink</code></pre>
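

<p>A short demonstration (a sketch with hypothetical file names in a temporary directory) of the symlink properties described above: its own inode, a size equal to the target path&#8217;s length, and a dangling reference once the target is renamed:</p>



<pre class="wp-block-code"><code>$ cd "$(mktemp -d)"
$ echo 'data' > target.txt
$ ln -s target.txt symlink.txt
$ ls -l symlink.txt                        # mode lrwxrwxrwx, size 10 (length of "target.txt")
$ stat -c '%i %n' target.txt symlink.txt   # two different inodes
$ mv target.txt renamed.txt                # the symlink now dangles
$ cat symlink.txt                          # fails: No such file or directory</code></pre>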
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Linux Filesystem: Directory Entries, Inodes, Data Blocks</title>
		<link>https://itec4b.com/linux-filesystem-directory-entries-inodes-datablocks/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Sat, 11 Feb 2023 17:00:39 +0000</pubDate>
				<category><![CDATA[File System]]></category>
		<category><![CDATA[filesystem]]></category>
		<category><![CDATA[inode]]></category>
		<category><![CDATA[linux]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=1126</guid>

					<description><![CDATA[An inode is a unique number assigned to each Linux file and directory in a filesystem (except for hard links), it is used as an index (Index Node). Inodes store metadata (attributes) about the files they refer to (it is like the "file&#8217;s identity card" without the name)ANDBecause the data of a file is actually &#8230; <p class="link-more"><a href="https://itec4b.com/linux-filesystem-directory-entries-inodes-datablocks/" class="more-link">Read more<span class="screen-reader-text"> "Linux Filesystem: Directory Entries, Inodes, Data Blocks"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p><span style="text-decoration: underline;"><strong>An inode is a unique number assigned to each Linux file and directory in a filesystem (except for <a href="https://itec4b.com/linux-symbolic-hard-links">hard links</a>); it is used as an index (Index Node)</strong></span>.<br><br><strong><span style="text-decoration: underline;">Inodes store metadata (attributes) about the files they refer to (like the <code>"</code>file&#8217;s identity card<code>"</code>, minus the name)</span></strong><br><strong>AND</strong><br>because the data of a file is actually stored in data blocks on a physical drive, <strong><span style="text-decoration: underline;">inodes also serve as references to the disk block locations of the data they point to (via data block pointers)</span></strong>.<br><span style="text-decoration: underline;">Note that this information is not directly accessible to the user</span>.<br><br>Thus, an inode is a data structure in a Unix-style filesystem that describes a filesystem object such as a file or a directory.<br><br><span style="text-decoration: underline;">A <strong>block device</strong> is a storage device from which you can read/write data blocks</span>.<br>You create partitions on it and then format each partition with a <strong>filesystem</strong> that dictates how files are organized/managed.<br><span style="text-decoration: underline;">Every filesystem needs to split up a partition into data blocks to store files and file parts</span>.<br><br>A <strong>data block</strong> is the basic unit of data storage in a filesystem.<br>It is the smallest unit of data that can be read or written in a single operation.<br>In most filesystems, each data block has a fixed size, typically between 512 and 4096 bytes.<br><span style="text-decoration: underline;"><strong>Today the default is usually 4096 bytes for storage I/O and filesystems</strong></span>.<br><br>With a default filesystem block size of 4096 bytes, a data file of 3 bytes (logical size) will take away 1 block (4096 bytes: physical size on the storage device) from your disk&#8217;s capacity, since that is the smallest allocation unit of the filesystem.<br>A data file of 4097 bytes will take 2 blocks.</p>
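

<p>You can observe the logical vs physical size difference yourself (a sketch using a throwaway 3-byte file; the exact block count reported depends on your filesystem):</p>



<pre class="wp-block-code"><code>$ cd "$(mktemp -d)"
$ printf 'abc' > tiny.txt                          # 3-byte logical size
$ stat -c 'size=%s blocks=%b blocksize=%B' tiny.txt
# typically: size=3 blocks=8 blocksize=512 (8*512 = 4096 bytes physically allocated)</code></pre>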



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">NOTE</span></strong>:
The 'stat' command provides 'Size:' and 'Blocks:' information
'Size:' is the data file's size in bytes (logical size)
'Blocks:' is the real disk usage in blocks of 512 bytes (physical size)

<strong><span style="text-decoration: underline;">List size files from DIR</span></strong>
<strong>OUTPUT: &lt;logical_size_bytes> &lt;physical_size_bytes> &lt;filepath></strong>

$ LC_ALL=C find DIR -type f -exec stat -c '%s %b %B %n' {} + 2>/dev/null | awk '{ fname=""; for (i=4; i &lt;= NF; i++) fname=fname $i " "; print $1" "($2*$3)" "fname }'</code></pre>



<p>The Linux ext filesystems use a default block size of 4096 bytes because that&#8217;s the default page size of CPUs, so there&#8217;s an easy mapping between memory-mapped files and disk blocks.<br>The hardware (specifically, the Memory Management Unit, which is part of the CPU) determines what page sizes are possible. A page is the smallest unit of data for memory management in a virtual memory operating system. Almost all architectures support a 4kB page size. Modern architectures support larger pages (and a few also support smaller pages), but <strong><span style="text-decoration: underline;">4kB is a very widespread default</span></strong>.</p>



<pre id="block-5bda9865-bf3c-48ec-b7c1-7ad5af414fbc" class="wp-block-code"><code><span style="text-decoration: underline;">Get the filesystem block size in bytes</span>
(size used internally by kernel, it may be modified by filesystem driver on mount)
# blockdev --getbsz /dev/&lt;device&gt;

<span style="text-decoration: underline;">Get the system's page size</span>
(number of bytes in a memory page, where "page" is a fixed-length block, the unit for memory allocation and file mapping)
$ getconf PAGE_SIZE
$ getconf PAGESIZE</code></pre>



<pre class="wp-block-code"><code><span style="text-decoration: underline;">Print inode's metadata for a specific file/dir using stat command</span>
$ LC_ALL=C stat /path/to/file_or_dir
$ LC_ALL=C stat -c '%i %y %A %U %s %N' /path/to/file_or_dir | sed -e 's;&#91;.]&#91;0-9]\{9\} +&#91;0-9]\{4\};;g'

<span style="text-decoration: underline;">Get inode number(s) with ls -i</span>
$ ls -i1 /path/to/file_or_dir</code></pre>



<pre class="wp-block-code"><code>Get the number of blocks a file uses on disk, so you can calculate the disk space actually used per file (physical file size).
<span style="text-decoration: underline;"><strong>IMPORTANT</strong></span>:
<strong><span style="text-decoration: underline;">By default 'ls', 'du' and 'df' commands use 1block=1024bytes which may differ from the filesystem unit</span>. <span style="text-decoration: underline;">You can use --block-size option or <a href="https://www.gnu.org/software/coreutils/manual/html_node/Block-size.html">set environment variables</a></span></strong>:
Display values are in units of the first available SIZE from --block-size, DF_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables. Otherwise, units default to 1024 bytes (or 512 if POSIXLY_CORRECT is set).

$ du --block-size=4096 /path/to/file
$ ls -s --block-size=4096 /path/to/file

<span style="text-decoration: underline;">ls -l prints the data size in bytes (logical file size), which is less than the actual used space on disk</span>.</code></pre>



<h2><strong>Inodes Metadata</strong></h2>



<pre class="wp-block-code"><code>$ man inode</code></pre>



<p>&#8211; <span style="text-decoration: underline;"><strong>Inode number</strong></span><br>Each file in a filesystem has a unique inode number (except for hard links).<br>Inode numbers are guaranteed to be unique only within a filesystem (i.e. <span style="text-decoration: underline;"><strong>the same inode numbers may be used by different filesystems, which is the reason that hard links may not cross filesystem boundaries</strong></span>).</p>



<p>&#8211; <strong><span style="text-decoration: underline;">Device where inode resides</span></strong><br>Each inode (as well as the associated file) resides in a filesystem that is hosted on a device.<br>That device is identified by the combination of its major ID (which identifies the general class of device) and minor ID (which identifies a specific instance in the general class).<br><br>&#8211; <strong><span style="text-decoration: underline;">Device represented by this inode</span></strong><br>If the current file (inode) represents a device, then the inode records the major and minor ID of that device.<br><br>&#8211; <strong><span style="text-decoration: underline;">Links count</span></strong> (number of hard links to the file)<br><br>&#8211; <strong><span style="text-decoration: underline;">User ID</span></strong> (of the owner of the file)<br><br>&#8211; <strong><span style="text-decoration: underline;">Group ID</span></strong> (of the file)<br><br>&#8211; <strong><span style="text-decoration: underline;">Mode</span></strong>: <strong><span style="text-decoration: underline;">File Type</span></strong> + <strong><span style="text-decoration: underline;">Permissions</span></strong> (read, write and execute permissions of the file for the owner, group and others)<br>The standard Unix file types are regular, directory, symbolic link, FIFO (named pipe), block device, character device, and socket as defined by POSIX.<br><br>&#8211; <span style="text-decoration: underline;"><strong>File size (in bytes)</strong></span><br>This field gives the size of the file (if it is a regular file) in bytes.<br>The size of a symbolic link is the length of the pathname it contains, without a terminating null byte.<br>Default size for a directory is usually one block size (4096 bytes on most ext4 filesystems).<br><br>&#8211; <strong><span style="text-decoration: underline;">Preferred block size for I/O operations (in bytes)</span></strong><br>This field gives the 
<code>"</code>preferred<code>"</code> block size for efficient filesystem I/O operations.<br>(Writing to a file in smaller chunks may cause an inefficient read-modify-rewrite)<br><br>&#8211; <strong><span style="text-decoration: underline;">Number of blocks allocated to the file</span></strong><br>This field indicates the <strong>number of blocks allocated to the file in 512-byte units</strong><br><br>&#8211; <strong><span style="text-decoration: underline;">File creation (birth) timestamp (btime)</span></strong><br>This is set on file creation and not changed subsequently.<br>The btime timestamp was not historically present on UNIX systems and is not currently supported by most Linux filesystems.<br><br>&#8211; <strong><span style="text-decoration: underline;">Last modification timestamp (mtime)</span></strong><br>This is the file&#8217;s last modification timestamp. It is changed by file modifications (file&#8217;s content: data).<br>Moreover, the mtime timestamp of a directory is changed by the creation or deletion of files in that directory.<br><span style="text-decoration: underline;">The mtime timestamp is not changed for changes in file&#8217;s name, owner, group, hard link count, or mode.</span><br><br>&#8211; <strong><span style="text-decoration: underline;">Last access timestamp (atime)</span></strong><br>It is changed by file accesses.<br><br>&#8211; <strong><span style="text-decoration: underline;">Last status change timestamp (ctime)</span></strong><br>It is changed by modifying file&#8217;s metadata information (i.e. file&#8217;s name, owner, group, link count, mode, etc.).</p>



<p><span style="text-decoration: underline;"><strong>According to The POSIX standard an inode is a <code>"</code>file serial number<code>"</code>, defined as a per-filesystem unique identifier for a file.<br>Combined together with the device ID of the device containing the file, they uniquely identify the file within the whole system</strong></span>.</p>



<p>Two different files can have the same inode number only if they reside on different partitions (hard links to the same file share one inode by design).<br><strong><span style="text-decoration: underline;">Inodes are only unique at the partition level, not across the whole system</span></strong>.</p>
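

<p>Per the POSIX definition above, the pair (device ID, inode number) is the system-wide identity of a file. A minimal check (a sketch using throwaway files created in one directory, hence on one filesystem):</p>



<pre class="wp-block-code"><code>$ cd "$(mktemp -d)"
$ touch file_a file_b
$ stat -c 'dev=%d inode=%i %n' file_a file_b   # same device ID, different inode numbers</code></pre>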



<h2>Directory Entry</h2>



<p>You may have noticed that inodes do not contain the file&#8217;s name.<br><strong><span style="text-decoration: underline;">The file’s name is not stored in the inode metadata but in its directory structure</span></strong>.<br><strong><span style="text-decoration: underline;">UNIX systems use a directory stream mapping system: directory entries contain the filenames and their inode numbers</span></strong>.<br><br><strong>From a user perspective a directory contains files; technically, a directory is a structure used to locate other files/directories.<br>In most Unix filesystems, a directory is a mapping from filenames to inode numbers</strong>.<br><strong>There&#8217;s a separate table mapping inode numbers to inode data</strong>.<br><br><span style="text-decoration: underline;">The header file <code><strong>dirent.h</strong></code> describes the format of a directory entry</span>.<br><br>Format of a Directory Entry<br><a href="https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/dirent.h.html">https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/dirent.h.html</a><br><a href="https://www.gnu.org/software/libc/manual/html_node/Directory-Entries.html">https://www.gnu.org/software/libc/manual/html_node/Directory-Entries.html</a><br><br><span style="text-decoration: underline;">In the glibc implementation, the <strong>dirent structure</strong> is defined as follows</span>:</p>



<pre class="wp-block-code"><code>struct dirent {
   ino_t   d_ino;              /* Inode number */
   off_t   d_off;              /* Not an offset */
   unsigned short   d_reclen;  /* Length of this record */
   unsigned char   d_type;     /* Type of file; not supported by all filesystem types */
   char d_name&#91;256];           /* Null-terminated filename */
};</code></pre>



<p><strong><span style="text-decoration: underline;">The only fields in the <code>dirent</code> structure that are mandated by POSIX.1 are <code>d_name</code> and <code>d_ino</code></span></strong>.<br>The other fields are unstandardized, and not present on all systems.</p>



<pre class="wp-block-code"><code>/* This is the data type of directory stream objects. */
typedef struct __dirstream DIR;

The DIR data type represents a directory stream.
You shouldn’t ever allocate objects of the struct dirent or DIR data types, since the directory access functions do that for you. Instead, you refer to these objects using the pointers returned by the <a href="https://www.gnu.org/software/libc/manual/html_node/Opening-a-Directory.html">functions</a>.
Directory streams are a high-level interface.</code></pre>
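

<p>You can see a directory&#8217;s name-to-inode mapping directly from the shell (a sketch using a throwaway directory): <code>ls -ia</code> prints each directory entry&#8217;s inode number next to its name, including the <code>.</code> and <code>..</code> entries:</p>



<pre class="wp-block-code"><code>$ cd "$(mktemp -d)"
$ touch a.txt b.txt
$ ls -ia1 .        # one "&lt;inode> &lt;name>" line per directory entry (., .., a.txt, b.txt)</code></pre>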



<p><span style="text-decoration: underline;">The design of data block pointers is actually more complex than the schema below illustrates</span>, and it also depends on the filesystem. The ext filesystems use an inode pointer structure to list the addresses of a file&#8217;s data blocks (15 pointers: 12 direct plus single, double, and triple indirect block pointers).</p>



<div style="height:18px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><a href="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-dir-inodes-datablocks.jpg"><img decoding="async" loading="lazy" src="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-dir-inodes-datablocks.jpg" alt="" class="wp-image-1160" width="697" height="361" srcset="https://itec4b.com/wp-content/uploads/2023/02/linux-schema-dir-inodes-datablocks.jpg 929w, https://itec4b.com/wp-content/uploads/2023/02/linux-schema-dir-inodes-datablocks-300x155.jpg 300w, https://itec4b.com/wp-content/uploads/2023/02/linux-schema-dir-inodes-datablocks-768x398.jpg 768w" sizes="(max-width: 697px) 100vw, 697px" /></a></figure></div>


<h2>Filesystem</h2>



<p><strong><span style="text-decoration: underline;">Linux uses filesystems to manage data stored on storage devices</span></strong>.<br>The filesystem maintains a map (the inode table) to locate each file placed on the storage device.<br>The filesystem divides the partition into blocks: small contiguous areas.<br>The size of these blocks is defined when the filesystem is created.<br><br><span style="text-decoration: underline;"><strong>Before you can mount a drive partition, you must format it using a filesystem</strong></span>.<br><br><span style="text-decoration: underline;"><strong>The default filesystem used by most Linux distributions is ext4</strong></span>.<br>The ext4 filesystem provides journaling: a method of tracking data not yet written to the drive in a log file, called the journal. If the system fails before the data can be written to the drive, the journal can be replayed on the next system boot to recover the data.<br><br>After creating a partition, you need to create a filesystem on it (the mkfs program is dedicated to exactly that)</p>



<pre class="wp-block-code"><code>#  LC_ALL=C mkfs -t ext4 /dev/&lt;partition_id&gt;</code></pre>



<p><strong><span style="text-decoration: underline;">Some filesystems (ext4 included) allocate a limited number of inodes when they are created</span></strong>.<br><strong><span style="text-decoration: underline;">If the filesystem runs out of inode entries in the table, you cannot create any more files, even if there is still space available on the drive; that can happen with a multitude of very small files</span></strong>.<br>When a file is created on the partition or volume, a new entry is added to the inode table.<br>Running out of inodes while space remains is uncommon but possible, so it&#8217;s worth keeping an eye on it: the <code>-i</code> option of the <code>df</code> command shows the percentage of inodes used.<br><br><span style="text-decoration: underline;"><strong>Report file system disk space usage</strong></span></p>



<pre class="wp-block-code"><code><span style="text-decoration: underline;">By blocks (most important)</span>
$ LC_ALL=C df -Th --block-size=4096 -x tmpfs -x devtmpfs -x squashfs 2&gt;/dev/null

<span style="text-decoration: underline;">By inodes</span>
$ LC_ALL=C df -Ti -x tmpfs -x devtmpfs -x squashfs 2&gt;/dev/null</code></pre>



<p>Linux uses the <code><strong>e2fsprogs</strong></code> package to provide utilities for working with ext filesystems</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Linux: MAX Path Length</title>
		<link>https://itec4b.com/linux-max-path-length/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Fri, 27 Jan 2023 13:37:59 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[filesystem]]></category>
		<category><![CDATA[linux]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=435</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<pre class="wp-block-code"><code>#bytes for a path: 4096

<strong><span style="text-decoration: underline;">IMPORTANT</span></strong>:
The maximum number of characters for a path varies since non-ASCII characters occupy several bytes.

See also <a href="https://itec4b.com/linux-max-file-dir-name-length">linux-max-file-dir-name-length</a>

$ getconf -a | awk '/^PATH_MAX/ {print $2}'</code></pre>
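

<p>A quick illustration of why the character limit varies: in UTF-8, non-ASCII characters occupy more than one byte, and the limit above counts bytes:</p>



<pre class="wp-block-code"><code>$ printf 'a' | wc -c    # 1 byte
$ printf 'é' | wc -c    # 2 bytes (UTF-8), but a single character</code></pre>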
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Linux: MAX File/Dir Name Length</title>
		<link>https://itec4b.com/linux-max-file-dir-name-length/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Thu, 26 Jan 2023 22:28:32 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[filesystem]]></category>
		<category><![CDATA[linux]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=395</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<pre class="wp-block-code"><code>#bytes for a file/dir name: 255

<strong><span style="text-decoration: underline;">IMPORTANT</span></strong>:
The maximum number of characters for a file/dir name varies since non-ASCII characters occupy several bytes.

$ getconf -a | awk '/^NAME_MAX/ {print $2}'</code></pre>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Debian Shell: List Files</title>
		<link>https://itec4b.com/debian-shell-list-files/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Thu, 26 Jan 2023 20:29:25 +0000</pubDate>
				<category><![CDATA[Shell Scripting]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[shell script]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=365</guid>

					<description><![CDATA[IMPORTANT: You should NOT use empty spaces within files or directories names You can use a program to rename directories and files without empty spaces Recursive List/Delete Empty Directories List/Delete Empty Files List Top 100 biggest files from DIR Get directory size (content) List (sub)directories size + mtime from DIR, sort by size List (sub)directories &#8230; <p class="link-more"><a href="https://itec4b.com/debian-shell-list-files/" class="more-link">Read more<span class="screen-reader-text"> "Debian Shell: List Files"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p class="has-vivid-red-color has-text-color"><strong><span style="text-decoration: underline;">IMPORTANT</span>: You should NOT use spaces within file or directory names</strong></p>



<p>You can use a <a href="https://github.com/ITEC4B/rename-dir-files">program to rename directories and files</a> so that their names contain no spaces</p>
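<p>A minimal shell sketch of the same idea (illustrative only; the linked program is more robust): replace spaces with underscores in the names directly under a directory.</p>

```shell
# Demo in a throwaway directory; 'tr' maps every space in the path to '_'.
# Safe here because the mktemp path itself contains no spaces.
dir=$(mktemp -d)
touch "$dir/a file.txt" "$dir/b  c.txt"
for f in "$dir"/*' '*; do
  [ -e "$f" ] || continue                        # skip if the glob matched nothing
  mv -- "$f" "$(printf '%s' "$f" | tr ' ' '_')"
done
ls "$dir"   # a_file.txt  b__c.txt
```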



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List size files from DIR</span></strong>
<strong>OUTPUT: &lt;logical_size_bytes&gt; &lt;physical_size_bytes&gt; &lt;filepath&gt;</strong>

$ LC_ALL=C find DIR -type f -exec stat -c '%s %b %B %n' {} + 2&gt;/dev/null | awk '{ fname=""; for (i=4; i &lt;= NF; i++) fname=fname $i " "; print $1" "($2*$3)" "fname }'


<strong><span style="text-decoration: underline;">List files from DIR with physical size &gt;= nMB</span></strong>
<strong>OUTPUT: &lt;logical_size_bytes&gt; &lt;physical_size_bytes&gt; &lt;filepath&gt;</strong>

$ LC_ALL=C find DIR -type f -exec stat -c '%s %b %B %n' {} + 2&gt;/dev/null | awk '{ fname=""; for (i=4; i &lt;= NF; i++) fname=fname $i " "; print $1" "($2*$3)" "fname }' | awk '$2&gt;=nMB*(1024^2) {print $0}'</code></pre>



<pre class="wp-block-code has-black-color has-text-color"><code><strong><span style="text-decoration: underline;">List ALL files from DIR, No Sort, date (mtime)</span></strong>
<strong>STD_CMD_LS_ALL</strong>
<strong>OUTPUT:</strong>
<strong>&lt;mode?rwx&gt; &lt;links&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;filename&gt;</strong>
$ LC_ALL=C ls -al --time-style="+%F %T" DIR 2&gt;/dev/null | sed '/^total /d' | awk '($8 != "." &amp;&amp; $8 != "..") {print $0}'


<strong><span style="text-decoration: underline;">List ALL files from DIR, No Sort, date (mtime)</span>
SPE_CMD_LS
OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>
$ STD_CMD_LS_ALL | awk '{ fname=""; for (i=8; i &lt;= NF; i++) fname=fname $i " "; print $6" "$7" "$1" "$3" "$4" "$5" "fname }'


<strong><span style="text-decoration: underline;">List ALL files from DIR, No Sort, date (mtime)</span>
SPE_CMD_STAT
OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filepath&gt;</strong>
$ LC_ALL=C stat -c '%y %A %U %G %s %n' DIR/{*,.*} 2&gt;/dev/null | sed -e 's;&#91;.]&#91;0-9]\{9\} &#91;+-]&#91;0-9]\{4\};;g'</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, No Sort, date (mtime)</span>
STD_CMD_LS_FDL
OUTPUT: &lt;mode?rwx&gt; &lt;links&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;filepath&gt;</strong>
$ STD_CMD_LS_ALL | grep '^-\|^d\|^l'


<strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, No Sort, date (mtime)</span>
SPE_CMD_LS_FDL
OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>
$ STD_CMD_LS_FDL | awk '{ fname=""; for (i=8; i &lt;= NF; i++) fname=fname $i " "; print $6" "$7" "$1" "$3" "$4" "$5" "fname }'</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Sort: ascending date (mtime)</span></strong>
<strong>CMD_LS_FDL_MTIME_SORT_ASC</strong>
<strong><span style="text-decoration: underline;">OUTPUT</span>: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>

$ SPE_CMD_LS_FDL | sort</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Sort: descending date (mtime)</span></strong>
<strong>CMD_LS_FDL_MTIME_SORT_DESC</strong>
<strong><span style="text-decoration: underline;">OUTPUT</span>: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>

$ SPE_CMD_LS_FDL | sort -r</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Latest and Oldest (mtime)</span></strong>
<strong><span style="text-decoration: underline;">OUTPUT</span>: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>

$ CMD_LS_FDL_MTIME_SORT_DESC | sed -n '1p;$p'</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Sort: ascending filename</span></strong>
<strong><span style="text-decoration: underline;">OUTPUT</span>: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>

$ SPE_CMD_LS_FDL | sort -k7</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Sort: descending size</span></strong>
<strong><span style="text-decoration: underline;">OUTPUT</span>: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>

$ SPE_CMD_LS_FDL | sort -k6,6nr</code></pre>



<h2>Recursive</h2>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List ALL (not hidden) files from DIR, Recursive, No Sort, date (mtime)</span></strong>
<strong>Standard command ls -R without headers</strong>
<strong>OUTPUT: &lt;mode?rwx&gt; &lt;links&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;filepath&gt;
</strong>
$ LC_ALL=C ls -lR --time-style='+%F %T' DIR 2&gt;/dev/null | sed '/:$/,/^total &#91;0-9]\{1,\}/d' | tr -s '\n'


<strong><span style="text-decoration: underline;">List ALL (hidden) files from DIR, Recursive, No Sort, date (mtime)</span></strong>
<strong>Standard command ls -R without headers</strong>
<strong>OUTPUT: &lt;mode?rwx&gt; &lt;links&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;filepath&gt;
</strong>
$ LC_ALL=C ls -alR --time-style='+%F %T' DIR 2&gt;/dev/null | sed '/:$/,/^total &#91;0-9]\{1,\}/d' | tr -s '\n' | awk '($8 != "." &amp;&amp; $8 != "..") {print $0}'</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Recursive, No Sort, date (mtime)</span></strong>

<strong>CMD_FIND_FDL_RECURSIVE_STAT_MTIME</strong>
<strong><span style="text-decoration: underline;">OUTPUT</span>: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filepath&gt;</strong>

$ LC_ALL=C find DIR \( -type f -o -type d -o -type l \) -exec stat -c '%y %A %U %G %s %n' {} + 2&gt;/dev/null | sed 's;&#91;.]&#91;0-9]\{9\} &#91;+-]&#91;0-9]\{4\};;'


<strong>CMD_FIND_FDL_RECURSIVE_LS_MTIME</strong>
<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filepath&gt;</strong>

$ LC_ALL=C find DIR \( -type f -o -type d -o -type l \) -exec ls -ld --time-style="+%F %T" {} + 2&gt;/dev/null | awk '{ fname=""; for (i=8; i &lt;= NF; i++) fname=fname $i " "; print $6" "$7" "$1" "$3" "$4" "$5" "fname }'


<strong>CMD_LS_FDL_RECURSIVE_MTIME</strong>
<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>

$ LC_ALL=C ls -alR --time-style="+%F %T" DIR 2&gt;/dev/null | grep '^-\|^d\|^l'  | awk '($8 != "." &amp;&amp; $8 != "..") {print $0}' | awk '{ fname=""; for (i=8; i &lt;= NF; i++) fname=fname $i " "; print $6" "$7" "$1" "$3" "$4" "$5" "fname }'
</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Recursive, Sort: descending date (mtime)</span></strong>

<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filepath&gt;</strong>
<strong>CMD_FIND_FDL_RECURSIVE_STAT_MTIME_SORT_DESC</strong>

$ CMD_FIND_FDL_RECURSIVE_STAT_MTIME | sort -r


<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filepath&gt;</strong>
<strong>CMD_FIND_FDL_RECURSIVE_LS_MTIME_SORT_DESC</strong>

$ CMD_FIND_FDL_RECURSIVE_LS_MTIME | sort -r


<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>
<strong>CMD_LS_FDL_RECURSIVE_MTIME_SORT_DESC</strong>

$ CMD_LS_FDL_RECURSIVE_MTIME | sort -r</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Recursive, Latest and Oldest (mtime)</span></strong>

<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filepath&gt;</strong>

$ CMD_FIND_FDL_RECURSIVE_STAT_MTIME_SORT_DESC | sed -n '1p;$p'
$ CMD_FIND_FDL_RECURSIVE_LS_MTIME_SORT_DESC | sed -n '1p;$p'


<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>

$ CMD_LS_FDL_RECURSIVE_MTIME_SORT_DESC | sed -n '1p;$p'</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Recursive, Sort: ascending filepath</span></strong>

<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filepath&gt;</strong>

$ CMD_FIND_FDL_RECURSIVE_STAT_MTIME | sort -k7
$ CMD_FIND_FDL_RECURSIVE_LS_MTIME | sort -k7


<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;
</strong>
$ CMD_LS_FDL_RECURSIVE_MTIME | sort -k7</code></pre>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List regular files/dirs/symlinks from DIR, Recursive, Sort: descending size</span></strong>

<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filepath&gt;</strong>

$ CMD_FIND_FDL_RECURSIVE_STAT_MTIME | sort -k6,6nr
$ CMD_FIND_FDL_RECURSIVE_LS_MTIME | sort -k6,6nr


<strong>OUTPUT: &lt;mdate YYYY-MM-DD&gt; &lt;mtime hh:mm:ss&gt; &lt;mode?rwx&gt; &lt;uname&gt; &lt;gname&gt; &lt;size_bytes&gt; &lt;filename&gt;</strong>
$ CMD_LS_FDL_RECURSIVE_MTIME | sort -k6,6nr</code></pre>



<h2>List/Delete Empty Directories</h2>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List Empty Directories</span></strong>
$ find DIR -type d -empty 2&gt;/dev/null

<strong><span style="text-decoration: underline;">Delete Empty Directories (verbose)</span></strong>
$ find DIR -type d -empty -exec rmdir -v {} +</code></pre>



<h2>List/Delete Empty Files</h2>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">List Empty Files</span></strong>
$ find DIR -type f -empty 2&gt;/dev/null

<strong><span style="text-decoration: underline;">Delete Empty Files (verbose)</span></strong>
$ find DIR -type f -empty -exec rm -v {} +</code></pre>



<h2>List Top 100 biggest files from DIR</h2>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">By default, the 'du' command prints the real disk usage in blocks of 1024 bytes (number of used blocks) rather than the logical file size (raw data in bytes). To show the logical size, use '-b, --bytes', equivalent to the '--apparent-size --block-size=1' options</span></strong>.

$ man du
Display  values  are  in  units of the first available SIZE from --block-size, and the DU_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables. <strong><span style="text-decoration: underline;">Otherwise, units default to 1024 bytes (or 512 if POSIXLY_CORRECT is set)</span></strong>.


<strong><span style="text-decoration: underline;">Real disk usage in blocks (default: 1block=1024bytes)</span></strong>
$ LC_ALL=C find DIR -type f -exec du {} + 2&gt;/dev/null | sort -nr | head -n100

<span style="text-decoration: underline;"><strong>Logical file's size in bytes</strong></span>
$ LC_ALL=C find DIR -type f -exec du -b {} + 2&gt;/dev/null | sort -nr | head -n100
$ LC_ALL=C find DIR -type f -exec du -b -h {} + 2&gt;/dev/null | sort -hr | head -n100

<strong><span style="text-decoration: underline;">List Top 100 biggest files in blocks</span></strong>
$ LC_ALL=C find / -type f -exec du {} + 2&gt;/dev/null | sort -nr | head -n100</code></pre>



<h2>Get directory size (content)</h2>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">Real disk usage in blocks (default: 1block=1024bytes), hidden subdirectories excluded</span></strong>
$ du -s --exclude=.* DIR 2&gt;/dev/null

<strong><span style="text-decoration: underline;">Logical directory's size in bytes (approximation), hidden subdirectories excluded</span></strong>
$ du -sb --exclude=.* DIR 2&gt;/dev/null</code></pre>



<h2>List (sub)directories size + mtime from DIR, sort by size</h2>



<pre class="wp-block-code"><code><span style="text-decoration: underline;"><strong>Limiting Directory Tree Depth: 2</strong></span>

<strong><span style="text-decoration: underline;">Real disk usage in blocks (default: 1block=1024bytes), hidden subdirectories excluded</span></strong>
$ du -d2 --time --time-style='+%F %T' --exclude=.* DIR 2&gt;/dev/null | sort -nr

<strong><span style="text-decoration: underline;">Logical file's size in bytes (approximation), hidden subdirectories excluded</span></strong>
$ du  -d2 -b   --time --time-style='+%F %T' --exclude=.* DIR 2&gt;/dev/null | sort -nr
$ du  -d2 -bh --time --time-style='+%F %T' --exclude=.* DIR 2&gt;/dev/null | sort -hr</code></pre>



<h2>List (sub)directories size + mtime from DIR, sort by name</h2>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">Limiting Directory Tree Depth: 2</span></strong>

<strong><span style="text-decoration: underline;">Real disk usage in blocks (default: 1block=1024bytes), hidden subdirectories excluded</span></strong>
$ du -d2 --time --time-style='+%F %T' --exclude=.* DIR 2&gt;/dev/null | sort -k4

<strong><span style="text-decoration: underline;">Logical file's size in bytes (approximation), hidden subdirectories excluded</span></strong>
$ du  -d2 -b   --time --time-style='+%F %T' --exclude=.* DIR 2&gt;/dev/null | sort -k4
$ du  -d2 -bh --time --time-style='+%F %T' --exclude=.* DIR 2&gt;/dev/null | sort -k4</code></pre>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>curl: Transfer Data From/To a Server</title>
		<link>https://itec4b.com/curl-transfer-data-from-to-a-server/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Tue, 24 Jan 2023 17:44:16 +0000</pubDate>
				<category><![CDATA[Application]]></category>
		<category><![CDATA[curl]]></category>
		<category><![CDATA[Data Transfer]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[data transfer]]></category>
		<category><![CDATA[linux]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=336</guid>

					<description><![CDATA[curl is entirely free and open source software, it is a complete and powerful tool to transfer data from/to a server.Daniel Stenberg is the founder and lead developer of cURL and libcurl since 1996. curl is used daily by virtually every Internet-using human on the globe, it is used everywhere ! cURL is an Open &#8230; <p class="link-more"><a href="https://itec4b.com/curl-transfer-data-from-to-a-server/" class="more-link">Read more<span class="screen-reader-text"> "curl: Transfer Data From/To a Server"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p><a href="https://curl.se">curl</a> is entirely free and open source software: a <strong>complete and powerful tool to transfer data from/to a server</strong>.<br>Daniel Stenberg has been the founder and lead developer of <strong>cURL</strong> and <strong>libcurl</strong> <a href="https://curl.haxx.se/docs/history.html">since 1996</a>.<br><br><span style="text-decoration: underline;"><strong>curl is used daily by virtually every Internet-using human on the globe; it is used everywhere!</strong></span></p>



<p>cURL is an Open Source project consisting of voluntary members from all over the world.<br>The cURL project is completely independent and free.<br><br>It is a <span style="text-decoration: underline;">client-side program</span> (the &#8216;c&#8217;), a URL (Uniform Resource Locator) client, and shows the data (by default).<br>So &#8216;c&#8217; for Client and URL: cURL</p>



<p><a href="https://everything.curl.dev">Everything curl</a> is a detailed and free book that explains basically everything there is to know about curl, libcurl and the associated project.</p>



<p>cURL is a project whose primary purpose and focus is to make two products:<br>&#8211; curl, the command-line tool<br>&#8211; libcurl, the transfer library with a C API<br><br>Both the tool and the library do Internet transfers for resources specified as URLs using Internet protocols.<br><span style="text-decoration: underline;"><strong>Everything and anything that is related to Internet protocol transfers can be considered curl&#8217;s business</strong></span>.</p>



<p>The protocol describes exactly how to ask the server for data, or to tell the server that there is data coming.<br>Protocols are typically defined by the IETF (<a href="https://www.ietf.org">Internet Engineering Task Force</a> ),<br>which hosts RFC documents that describe exactly how each protocol works: how clients and servers are supposed to act and what to send and so on.</p>



<p>curl and libcurl are distributed under an Open Source license known as a MIT license derivative.<br>A key thing to remember is that libcurl is the library, and that this library is the biggest component of the curl command-line tool.</p>



<p><strong>Where&#8217;s the code ?</strong><br>The curl git tree can be browsed with a web browser at <a href="https://github.com/curl/curl">https://github.com/curl/curl</a><br>To check out the curl source code from git, you can clone it like this:</p>



<pre class="wp-block-code"><code>$ git clone https://github.com/curl/curl.git</code></pre>



<p>curl started out as a command-line tool and it has been invoked from shell prompts and from within scripts by an uncountable number of users over the years.<br><span style="text-decoration: underline;">curl has established itself as one of those trusty tools that is there for you to help you get your work done</span>.</p>



<p><a href="https://curl.se/docs/comparison-table.html">Here</a> you can find <strong>why curl is your first choice</strong>.</p>



<h2><a href="https://curl.se">curl</a> vs. <a href="https://www.gnu.org/software/wget">wget</a></h2>



<p><a href="https://daniel.haxx.se/docs/curl-vs-wget.html">https://daniel.haxx.se/docs/curl-vs-wget.html</a><br><br><span style="text-decoration: underline;">wget has (recursive) downloading powers that curl does not feature, and it also handles download retries over unreliable connections possibly slightly more effectively</span>.<br><strong>For just about everything else, curl is probably the more suitable tool</strong>.</p>



<p>curl operates on URLs. URI (Uniform Resource Identifier) is actually the correct name for them.<br>The syntax is defined in RFC 3986 (2005): <a href="https://datatracker.ietf.org/doc/html/rfc3986">https://datatracker.ietf.org/doc/html/rfc3986</a></p>



<h2>Install curl (Debian)</h2>



<pre class="wp-block-code"><code># apt install curl</code></pre>



<h2>curl options basics</h2>



<p><strong><span style="text-decoration: underline;"><code>-V, --version</code></span></strong><br>Displays information about curl and the libcurl version it uses.<br>The output from that command line is typically four lines, out of which some will be rather long and might wrap in your terminal window.<br><br><span style="text-decoration: underline;">Line 1: curl</span><br>The first line includes the full version of curl, libcurl and other 3rd party libraries linked with the executable.<br>The first line starts with &#8216;curl&#8217; and first shows the main version number of the tool.<br>Then follow the &#8220;platform&#8221; the tool was built for, within parentheses, and the libcurl version.<br>Those three fields are common for all curl builds.<br><br>If the curl version number has -DEV appended to it, it means the version is built straight from in-development source code and is not an officially released and &#8220;blessed&#8221; version.<br>The rest of this line contains the names of third party components this build of curl uses, often with their individual version numbers next to them with a slash separator.<br><br><span style="text-decoration: underline;">Line 2: Release-Date</span><br>This line shows the date this curl version was released by the curl project,<br>and it can also show a secondary &#8220;Patch date&#8221; if it has been updated somehow after it was originally released.</p>



<p><span style="text-decoration: underline;">Line 3: Protocols</span><br>The third line (starts with &#8220;Protocols:&#8221;) is a list of all transfer protocols (URL schemes really) in alphabetical order that this curl build supports.<br>All names are shown in lowercase letters.<br><br><span style="text-decoration: underline;">Line 4: Features</span><br>The fourth line (starts with &#8220;Features:&#8221;) is the list of features this build of curl supports.<br>If the name is present in the list, that feature is enabled. If the name is not present, that feature is not enabled.</p>



<p><span style="text-decoration: underline;"><strong><code>-v, --verbose</code></strong></span><br>Makes curl verbose during the operation.<br>Useful for debugging and seeing what&#8217;s going on &#8220;under the hood&#8221;.<br>When verbose mode is enabled, curl gets more talkative and will explain and show a lot more of its doings.<br>It will add informational text and prefix it with &#8216;*&#8217;.</p>



<p>A line starting with &#8216;&gt;&#8217; means &#8220;header data&#8221; sent by curl,<br>A line starting with &#8216;&lt;&#8216; means &#8220;header data&#8221; received by curl that is hidden in normal cases,<br>A line starting with &#8216;*&#8217; means additional info provided by curl.</p>



<p>If you only want HTTP headers in the output, <code>-i, --include</code> might be the option you&#8217;re looking for.<br>If you think this option still doesn&#8217;t give you enough details, consider using <code>--trace</code> or <code>--trace-ascii</code> instead.</p>



<p>See also <code>-i, --include</code>. This option overrides <code>--trace</code> and <code>--trace-ascii</code>.</p>



<p><span style="text-decoration: underline;"><strong><code>-s, --silent</code></strong></span><br>Silent or quiet mode.<br>Do not show the progress meter or error messages. Makes curl mute.<br>It will still output the downloaded data you ask for, potentially even to the terminal/stdout unless you redirect it.<br>Use <code>-S, --show-error</code> in addition to this option to disable only the progress meter but still show error messages.</p>



<p><span style="text-decoration: underline;"><strong><code>-S, --show-error</code></strong></span><br>When used with <code>-s, --silent</code>, it makes curl show an error message if it fails.</p>
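<p>A minimal sketch of <code>-s</code> vs. <code>-sS</code> using local <code>file://</code> URLs so no network is needed (the temp-file paths are illustrative):</p>

```shell
# -s alone: no progress meter and no error messages at all.
# -sS: still no progress meter, but failures are reported on stderr.
f=$(mktemp)
printf 'hello\n' > "$f"
curl -s "file://$f"                      # prints: hello
curl -sS "file:///no/such/file" || true  # silent transfer, but the error prints
```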



<p><code><span style="text-decoration: underline;"><strong>-I, --head</strong></span></code><br>(HTTP, FTP, FILE) Fetch the headers only.<br>HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document.<br>When used on an FTP or FILE file, curl displays the file size and last modification time only.</p>
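<p>A quick sketch of <code>-I</code> against a local <code>file://</code> URL, matching the FILE behavior described above (the temp file is illustrative):</p>

```shell
# On a FILE URL, -I reports only metadata such as size and modification time.
f=$(mktemp)
printf 'hello\n' > "$f"
curl -sI "file://$f"   # shows Content-Length: 6 and Last-Modified
```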



<p><span style="text-decoration: underline;"><strong><code>--ssl</code></strong></span><br>Try to use SSL/TLS for the connection.<br>Reverts to a non-secure connection if the server doesn&#8217;t support SSL/TLS.<br><br><code><strong><span style="text-decoration: underline;">--ssl-reqd</span></strong></code><br>Require SSL/TLS for the connection. Terminates the connection if the server doesn&#8217;t support SSL/TLS.<br><br><code><span style="text-decoration: underline;"><strong>-L, --location</strong></span></code> (<strong>HTTP 3NN Redirection</strong>)<br>(HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3NN response code), <strong><span style="text-decoration: underline;">this option will make curl redo the request automatically on the new place</span></strong>.</p>



<p>If used together with <code>-i, --include</code> or <code>-I, --head</code>, headers from all requested pages will be shown.<br><br><span style="text-decoration: underline;"><strong><code>--stderr</code></strong></span><br>Redirect all writes to stderr to the specified file instead.<br>If the file name is a plain &#8216;-&#8216;, it is instead written to stdout.<br>If this option is used several times, the last one will be used.</p>
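<p>A small sketch of <code>--stderr</code> using a failing local <code>file://</code> URL (the log path is illustrative):</p>

```shell
# Error messages (and the progress meter) that would normally go to the
# terminal's stderr land in the log file instead.
log=$(mktemp)
curl --stderr "$log" "file:///no/such/file" || true
cat "$log"   # contains curl's error message, e.g. curl: (37) ...
```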



<p><strong><span style="text-decoration: underline;">OUTPUT</span></strong><br><span style="text-decoration: underline;">If not told otherwise, curl writes the received data to stdout</span>.<br>It can be instructed to instead save that data into a local file, using the <code>-o, --output </code>OR<code> -O, --remote-name</code> options.<br>If curl is given multiple URLs to transfer on the command line, it similarly needs multiple options for where to save them.<br>curl does not parse or otherwise &#8220;understand&#8221; the content it gets or writes as output.<br>It does no encoding or decoding, unless explicitly asked to with dedicated command line options.<br><br><code><strong><span style="text-decoration: underline;">-o, --output &lt;path></span></strong></code><br>Write output to &lt;path> instead of stdout.<br><br><code><strong><span style="text-decoration: underline;">-O, --remote-name</span></strong></code><br>Write output to a local file named like the remote file you get.<br>(Only the file part of the remote file is used, the path is cut off)<br><br><span style="text-decoration: underline;">The file will be saved in the current working directory</span>.<br>If you want the file saved in a different directory, make sure you change the current working directory before invoking curl with this option.</p>



<p>The remote file name to use for saving is extracted from the given URL, nothing else, and if it already exists it will be overwritten.<br>There is no URL decoding done on the file name. If it has %20 or other URL-encoded parts in the name, they will end up as-is in the file name.</p>



<p>You may use this option as many times as the number of URLs you have.<br><br><code><strong><span style="text-decoration: underline;">-C, --continue-at</span></strong></code><br>Continue/Resume a previous file transfer at the given offset.<br>The given offset is the exact number of bytes that will be skipped, counting from the beginning of the source file before it is transferred to the destination.<br><br><strong><span style="text-decoration: underline;">Use <code>'-C -'</code> to tell curl to automatically find out where/how to resume the transfer</span></strong>.<br>It then uses the given output/input files to figure that out.<br>If this option is used several times, the last one will be used.</p>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">Download a file</span></strong>
$ curl URL -o /path/to/dst_filename

<strong><span style="text-decoration: underline;">Download a file to the current directory</span></strong>
$ curl -O URL

<strong><span style="text-decoration: underline;">Resume an interrupted download</span></strong>
$ curl -O -C - URL</code></pre>



<p><strong><span style="text-decoration: underline;">PROGRESS METER</span></strong><br>curl normally displays a progress meter during operations, indicating the amount of transferred data, transfer speeds and estimated time left, etc.<br>The progress meter displays number of bytes and the speeds are in bytes per second.<br>The suffixes (K, M, G, T, P) are 1024 based. For example 1K is 1024 bytes. 1M is 1048576 bytes (1024^2).</p>



<p>curl displays this data to the terminal by default, so if you invoke curl to do an operation and it is about to write data to the terminal, it disables the progress meter as otherwise it would mess up the output mixing progress meter and response data.</p>



<p>If you want a progress meter for HTTP POST or PUT requests, you need to redirect the response output to a file, using shell redirect (&gt;), <code>-o, --output</code> or similar.</p>



<p>It is not the same case for FTP upload as that operation does not spit out any response data to the terminal.</p>



<p>If you prefer a progress &#8220;bar&#8221; instead of the regular meter, <code>-#, --progress-bar</code> is your friend.<br>You can also disable the progress meter completely with the <code>-s, --silent</code> option.</p>



<p><span style="text-decoration: underline;"><strong><code>-#, --progress-bar</code></strong></span><br>Make curl display transfer progress as a simple progress bar instead of the standard, more informational, meter.<br>This progress bar draws a single line of &#8216;#&#8217; characters across the screen and shows a percentage if the transfer size is known.<br>For transfers without a known size, there will be a space ship (-=o=-) that moves back and forth, but only while data is being transferred, with a set of flying hash sign symbols on top.</p>



<p><span style="text-decoration: underline;"><strong><code>--no-progress-meter</code></strong></span><br>Switch off the progress meter output without muting or otherwise affecting warning and informational messages, unlike <code>-s, --silent</code>.<br>Note that the negated form of the option is the one documented.<br>You can thus use <code>--progress-meter</code> to enable the progress meter again.<br>Added in version 7.67.0.</p>



<h2>Connection Test: Get Server Information</h2>



<pre class="wp-block-code"><code>$ curl -I -v --silent --ssl  &#91;protocol://]ip_or_hostname | grep '^&#91;&gt;&lt;*]'</code></pre>



<p>If you specify a URL without a protocol:// prefix, curl will attempt to guess what protocol you might want.<br>It will default to HTTP, but try other protocols based on often-used host name prefixes.<br>For example, for host names starting with &#8220;ftp.&#8221; curl will assume you want to speak FTP.</p>



<p>curl will do its best to use what you pass to it as a URL.<br><br><a href="https://everything.curl.dev/usingcurl/tls/enable">https://everything.curl.dev/usingcurl/tls/enable</a><br><br>Using <code>--ssl</code> means that curl will attempt to upgrade the connection to TLS but if that fails, it will still continue with the transfer using the plain-text version of the protocol.<br>To make the <code>--ssl</code> option require TLS to continue, there is instead the <code>--ssl-reqd</code> option which will make the transfer fail if curl cannot successfully negotiate TLS.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Linux Disk Space: ext filesystem reserved blocks</title>
		<link>https://itec4b.com/linux-disk-space-ext-filesystem-reserved-blocks/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Wed, 18 Jan 2023 13:59:46 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[disk]]></category>
		<category><![CDATA[ext]]></category>
		<category><![CDATA[filesystem]]></category>
		<category><![CDATA[linux]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=163</guid>

					<description><![CDATA[With an ext filesystem 5% of disk space is by default reserved for privileged processes/root user. This allows the system to keep functioning even if non-privileged users fill up all the space available to them.Important tasks and system processes will still be able to work and write to the drive. NEVER set it to 0 &#8230; <p class="link-more"><a href="https://itec4b.com/linux-disk-space-ext-filesystem-reserved-blocks/" class="more-link">Read more<span class="screen-reader-text"> "Linux Disk Space: ext filesystem reserved blocks"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p>With an ext filesystem 5% of disk space is by default reserved for privileged processes/root user.</p>



<p>This allows the system to keep functioning even if non-privileged users fill up all the space available to them.<br>Important tasks and system processes will still be able to work and write to the drive.</p>



<p>NEVER set it to 0 for a system partition.<br>tune2fs reservation changes take effect immediately.</p>



<pre class="wp-block-code"><code># tune2fs -m &lt;reserved-blocks-percentage> /dev/&lt;device-name></code></pre>



<p>Set the percentage of the filesystem which may only be allocated by privileged processes.<br>Reserving a number of filesystem blocks for use by privileged processes avoids filesystem fragmentation and allows system daemons (root-owned daemons), such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem.<br><br>The default percentage of reserved blocks is 5%.<br><br>See also: mkfs.ext4 with option -m (reserved-blocks-percentage) to create an ext4 filesystem with a given reservation.</p>



<p>For a large ext filesystem used as a system partition, you may reduce it to 1% of disk space:</p>



<pre class="wp-block-code"><code># tune2fs -m 1 /dev/&lt;device-name></code></pre>



<p>For an ext filesystem that only acts as storage you may disable it:</p>



<pre class="wp-block-code"><code># tune2fs -m 0 /dev/&lt;device-name>
# tune2fs -r 0 /dev/&lt;device-name></code></pre>



<p>Set the number of reserved filesystem blocks directly:</p>



<pre class="wp-block-code"><code># tune2fs -r &lt;reserved-blocks-count></code></pre>



<p>Example for 3GB of reserved filesystem space with 4K blocks:<br>1GB = 1024^3 bytes = 1073741824 bytes<br><br>3GB = 3 x 1073741824 bytes = 3221225472 bytes<br>3221225472 bytes / 4096 bytes per block = 786432 blocks</p>



<pre class="wp-block-code"><code># tune2fs -r 786432 /dev/&lt;device-name></code></pre>
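<p>The block count above can be reproduced with shell arithmetic. The 4096-byte block size is the one assumed in the example; check the actual block size of your filesystem with <code>tune2fs -l</code>:</p>

```shell
# 3 GB expressed in bytes, then in 4K filesystem blocks
bytes=$((3 * 1024 * 1024 * 1024))   # 3221225472
blocks=$((bytes / 4096))
echo "$blocks"                      # 786432
```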
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
