May the source be with you, but remember the KISS principle ;-)

Squid as privacy enhancing tool


If you would like to limit the amount of information you expose to snoopy sites like Facebook, you can run Squid on your home network. You can configure Squid to filter some HTTP header fields. After this, most web servers will assume that you are requesting content directly, not via a proxy.

You can also block the most obnoxious data collector sites such as Facebook.
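As a sketch, blocking such a site takes two lines in /etc/squid/squid.conf (the acl name is arbitrary, and the domain list is just an example):

```
# deny requests to and any of its subdomains
acl snoop_sites dstdomain
http_access deny snoop_sites
```

Place the deny line before your final http_access rules so it matches first.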

On RHEL, CentOS and similar systems, installation of Squid is just a single command:

yum install squid

There are four additional steps necessary to configure Squid:

  1. Make Squid proxy service start on system boot automatically and start it
    chkconfig squid on
    service squid start
  2. Change the port (optional). Open the Squid configuration file, which on Linux should be in the following location: /etc/squid/squid.conf

    By default Squid listens on port 3128. If you want Squid to listen on a different port, for example port 8080, locate the http_port directive and set it to 8080, as follows:

    http_port 8080

    Note: If you're running a local firewall (eg. iptables), make sure your firewall allows incoming connections on this port!

  3. Set the name of the server. You generally should set the name of your proxy server in the conf file using the visible_hostname directive. For example:
    visible_hostname proxyhost
  4. Define the network for which Squid will accept requests and allow requests from it. For example:
    acl our_networks src
    http_access allow our_networks
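Putting steps 2-4 together, a minimal sketch of the relevant /etc/squid/squid.conf lines might look like this (the port, hostname, and network range are examples):

```
http_port 8080
visible_hostname proxyhost
acl our_networks src
http_access allow our_networks
http_access deny all
```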

By default Squid passes through the IP address of the client machine in the HTTP headers. To turn this feature off and increase privacy, you need to set the following directive in the conf file:

forwarded_for off
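forwarded_for alone does not remove every identifying header. A sketch of the header-filtering directives for Squid 3.x (Squid 2.x spelled these header_access, and some headers can only be filtered if your build allows HTTP violations; adjust to your version):

```
forwarded_for off
request_header_access Via deny all
request_header_access X-Forwarded-For deny all
request_header_access Referer deny all
```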

Making changes in the config file active

If Squid is already running, you can reload the configuration file by running the following command as the root user:

squid -k reconfigure

If squid is not already running, start it by running the following command as the root user:

/etc/init.d/squid start



Old News ;-)

[Mar 23, 2015] Flexible Access Control with Squid Proxy By Mike Diehl

Blocking a boy from the internet is sure to increase his interest in discovering how to bypass those restrictions ;-)
Linux Journal

My code will tell the proxy server how to handle each request as it comes in. The proxy either will complete the request for the user or send the user a Web page indicating that the site the user is trying to access has been blocked. This is how the proxy will implement whatever policy we choose.

I've decided that I want to be able to give my family members one of four levels of Internet access. At the two extremes, family members with "open" access can go just about anywhere they want, whereas family members with "blocked" access can't go anywhere on the Internet. My wife and I will have open access, for example. If one of the boys is grounded from the Internet, we'll simply set him as blocked.

However, it might be nice to be able to allow our kids to go to only a predetermined list of sites, say for educational purposes. In this case, we need a "whitelist-only" access level. Finally, I'm planning on a "filtered" access level where we can be a bit more granular and block things like music download, Flash games and Java applets. This is the access level the boys generally will have. We then can say "no more games" and have the proxy enforce that policy.
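The four levels the article describes can be sketched with plain squid.conf ACLs (the article implements its policy in an external program, so all addresses, file names and patterns below are hypothetical illustrations):

```
acl blocked_kid  src
acl listonly_kid src
acl filtered_kid src
acl home         src
acl good_sites   dstdomain "/etc/squid/whitelist.txt"
acl fun_stuff    urlpath_regex -i \.(swf|mp3)$

http_access deny  blocked_kid                # "blocked": no internet at all
http_access allow listonly_kid good_sites    # "whitelist-only": approved sites only
http_access deny  listonly_kid
http_access deny  filtered_kid fun_stuff     # "filtered": no music or Flash games
http_access allow home                       # "open" access for the rest of the LAN
http_access deny  all
```

Because rules are evaluated top to bottom, moving a boy between levels is just a matter of which acl his IP address appears in.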

[Aug 06, 2013] How To Increase Squid's Cache Directory Swap Size

September 15, 2008
Increase Squid Cache Directory Swap Size

With the default Squid rpm installation, the cache directory swap size is set to 100MB. If you have large disk storage, it is worth giving Squid a larger directory swap size to use.

Simply edit /etc/squid/squid.conf and find the cache_dir directive:

cache_dir ufs /var/spool/squid 100 16 256

/var/spool/squid is the directory where Squid stores (swaps) cached web files.

100, the first number, is the amount of disk space in MB to be used by Squid for the cache directory.

16 is the number of first-level subdirectories which will be created under the cache directory.

256 is the number of second-level subdirectories which will be created under each first-level directory.

To verify your disk and partition sizes, simply run:

# df -ah

which gives you lines similar to these:

/dev/sda3 49G 4.5G 42G 10% /var
/dev/sda2 387G 18G 350G 5% /home
/dev/sda1 99M 12M 83M 13% /boot

With the default squid installation, the cache directory lives under /var.

Now, considering a server with plenty of disk space, say a 40GB free disk partition, you could reconfigure the squid cache_dir to a value of

cache_dir ufs /var/spool/squid 2000 32 512

That is a 2GB cache directory for Squid to use, with 32 first-level directories each containing 512 second-level directories.
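A hedged shell sketch of how one might pick that first number: take a percentage of the free space on the cache partition (the helper name and the 60% figure are my own choices, not from the article):

```shell
# suggest_cache_mb: hypothetical helper.
# Suggest a cache_dir size in MB as 60% of the free space on a mount point.
suggest_cache_mb() {
    free_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')   # available KB on that filesystem
    echo $(( free_kb / 1024 * 60 / 100 ))             # convert to MB, take 60%
}

suggest_cache_mb /var    # prints a size in MB suitable for the cache_dir line
```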

Save, exit and create the cache directory.

Stop Squid first

# service squid stop

Recreate Squid Cache Directory

# squid -z

Start Squid Service

# service squid start

This setup nicely suits an infrastructure with a large number of regular internet users.

All done.

SQUID Proxy On RHEL5/CentOS - Everything That You Should Know About [Part 1]

The main duty of a proxy server is to act as a gateway that receives HTTP requests from clients, forwards them to the destination, and relays the answer back to the requestor.

Squid is the most popular open-source software for this. It also has some excellent additional features, such as web access control, bandwidth control, restriction policies, and content caching and filtering. People generally install Squid to pursue two goals: first, to reduce bandwidth charges through content caching, and second, to restrict access to particular content.

The following guide explains the advantages of using Squid and will show you how to install, configure, control, and maintain the Squid Proxy Server on RHEL5 and CentOS Linux.

*** Notice: This guide, or tutorial, or whatever you like to call it, is based on my personal experience, and I guarantee 100% that it works like a charm for me. So if you install this software and for any reason have technical difficulties, just post a comment and I'll help you solve them. ***

Just something that you should know:
A '#' hash sign before a rule line in the config file disables the rule.
If you need to use your proxy server, you need to modify the browser settings on your client computer, for example: IE > Internet Options > LAN Settings > enable the proxy server checkbox > set the IP address of your Squid proxy server and the port (default is 3128).

Before anything: if you are not sure whether Squid is installed, type the following command:

# rpm -q squid

squid-2.6.STABLE6-5.el5_1.3 //this means you have squid installed on your box and do not need to install it, so prepare yourself for the configuration.

To install on RHEL5/CentOS type this command:

# yum install squid

And if you cannot use yum then try this way:

First download the latest version of the Squid rpm from the official Squid website and move it to /tmp:

# cd /tmp
# rpm -ivh squid-2.6.STABLE.rpm

This will install squid configurations and binaries in their directories. After that use this command to run the program automatically when the system boots:

# chkconfig --level 35 squid on // runlevel 3 is text mode, 5 is the X environment

Ok, now it's time to start the service so:

# service squid start

For the configuration you need to open the config file depending on your version of Linux, for RHEL5/CentOS do like this:

# vi /etc/squid/squid.conf

That's it; you can define most parameters in here. Remember, on start or restart of the service, or when viewing the log files, you may see this error:

WARNING: Could not determine this machines public hostname. Please configure one or set 'visible_hostname'.

It means the hostname isn't correctly defined, and you need to change the visible_hostname in the config file. It is needed to identify the cache server when troubleshooting or viewing the logs. So change it before anything else, like this:

visible_hostname HowtoForge

As you can see from the http_port 3128 directive, Squid listens for requests from HTTP clients on this port.

Access Control Lists (ACL)

ACLs are used to restrict usage and limit the web access of hosts. Each ACL line defines a particular type of activity, such as an access time or source network; after that, we link the ACL to an http_access statement that tells Squid whether to allow or deny traffic that matches the ACL.

When you install Squid for the first time, you need to add some acls to allow your network to use the internet because squid by default denies web access.

The syntax of an ACL is like this:

acl aclname acltype value

aclname = the rule name (any name you like, such as mynetwork)
acltype = the type of acl, such as src or dst (src: source IP, dst: destination IP)
value = an IP address, network, URL, ...

This example will allow localhost to access the internet:

acl localhost src
http_access allow localhost

We are allowing computers that match the IP address range contained in the localhost ACL to access the internet. There are other ACLs and ACL-operators available for Squid, but this is good for practice.

So with this syntax, you can now tell squid how to work. Suppose you want to allow your 192.168.1 network range to access the internet. To do this, first open the config file and find these lines:

http_access allow localhost
http_access deny all

Replace them with:

acl mynetwork src
http_access allow localhost
http_access allow mynetwork
http_access deny all

Note: Specify the rules before the line http_access deny all. After that change, save your file and restart the squid service.

(If you use vi editor use this to save and quit > 1-press ESC key 2-type ':x' without quotation and hit enter.)

# service squid restart

Remember, you may see an error after restarting the squid service if your Squid version complains about the "/24" notation in your config. If so, don't panic: you can easily change /24 to / and restart the squid service again. After restarting, your entire network which uses those IP addresses will have access to the internet.

You may ask yourself how to allow internet access to everyone except particular IP addresses; actually it's a good exercise and brings some fun :). To do this, open the config file and add something like this:

acl bad_employee src
http_access deny bad_employee
acl mynetwork src

http_access allow mynetwork

In the above example the entire network will be allowed to use the internet except the blocked person (bad_employee). Remember, Squid interprets the rules from top to bottom, so you need to be careful.

You can create a rule restricting access times for your company and assign it to your mynetwork acl like this:

acl mynetwork src
acl business_hours time M T W H F 9:00-17:00
acl bad_employee src
http_access deny bad_employee
http_access allow mynetwork business_hours

Day abbreviations: S - Sunday, M - Monday, T - Tuesday, W - Wednesday, H - Thursday, F - Friday, A - Saturday

You can also block a particular URL like this:

acl block_site dstdomain
http_access deny block_site will be filtered, but is still open, because we blocked only the exact host name. So if you want to block a single URL and its subdomains, we do it like this:

acl block_domain dstdomain
http_access deny block_domain

And you can do more than blocking one URL: if you want to block more than a single domain, we need to create a file to hold the particular URLs and give this file read permissions, like this:

# touch /etc/squid/block_list.txt
# chmod 444 /etc/squid/block_list.txt
# vi /etc/squid/block_list.txt

Enter some URLs to block, one per line, for example:

And then save and quit, it's time to create rules. Open the config file and put these parameters in it:

acl block_list url_regex "/etc/squid/block_list.txt"
http_access deny block_list

You can block URLs that contain unwanted words like this:

acl blockword url_regex sxx
http_access deny blockword

(To make the word match case-insensitive, use: acl blockword url_regex -i sxx)

You can block downloads of .exe files like this:

acl block_exe url_regex .*\.exe$
http_access deny block_exe

If you want to block more extensions from being downloaded, you can specify them all in a file as described before (exactly like the URL-blocking section).
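The file-based variant might be sketched like this (the path and patterns are examples):

```
# /etc/squid/block_ext.txt contains one regex per line, e.g.:
#   \.exe$
#   \.msi$
#   \.scr$
acl block_ext urlpath_regex -i "/etc/squid/block_ext.txt"
http_access deny block_ext
```

(urlpath_regex matches only the path part of the URL; the url_regex type used above, which matches the whole URL, works here too.)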

You can block TLDs (.br .eu) like this:

acl block_tld dstdom_regex \.br$
http_access deny block_tld

You can configure Squid to prompt users for a username and password with ncsa_auth, which reads an NCSA-compliant encrypted password file, so:

# htpasswd -c /etc/squid/squid_passwd your_username

enter pass : your_password

# chmod o+r /etc/squid/squid_passwd

Open the config file and put these lines in it and change to your own configuration:

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
acl ncsa_user proxy_auth REQUIRED
http_access allow ncsa_user

If you don't want to modify the browser to use a proxy, there is a method called a "Transparent Proxy".

This guide was part 1; in part 2 we will learn about "Content Caching", "Load Balancing", "Bandwidth Management", "Squid Logs", "Nmap", "Monitoring [Visited URLs by Users]" and more ...

Speeding up the internet with squid

Squid Caching Proxy Server

Now it's time to install the Squid caching proxy server; this is even easier!

What is a caching Proxy Server?

Now that pdnsd knows the IP address of Google, we are saving some time every time we wish to visit Google (by not having to look up the IP address on an upstream DNS server).

But we are still downloading logo.gif (the Google logo) from the site, and even at only 8KB, it is using up some of our bandwidth.

What our caching proxy server does is keep a copy of that logo (and a whole host of other HTTP objects) and, according to a complex set of rules, doesn't bother going to the originating site the next time the object is requested: it delivers it from its local cache.

This can dramatically reduce bandwidth.

Now, Google is a well-designed site with a minimum of logos and graphics to download; not all sites are as austere as Google. Over a period of time, a store of HTTP objects (graphics/webpages etc.) will be built up in the cache, and these will be served to any machines on the network that are accessing the cache.

Squid installation

1). Start off by installing Squid. On my Debian system all I had to do was type apt-get install squid.

2). Now we need to stop squid - /etc/init.d/squid stop and press Enter

3). We need to edit /etc/squid/squid.conf

This is my squid.conf

# Access Control Lists
acl all src all
acl manager proto cache_object
acl localhost src
acl to_localhost dst
acl localnet src # RFC1918 possible internal network
acl localnet src # RFC1918 possible internal network
acl localnet src # RFC1918 possible internal network
acl SSL_ports port 443 # https
acl SSL_ports port 563 # snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl apache rep_header Server ^Apache
acl mydomain src # the local LAN (example value)
acl purge method PURGE

# Refresh patterns
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
refresh_pattern . 0 50% 40320
refresh_pattern -i \.jpg$ 3600 90% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.jpeg$ 3600 90% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.gif$ 3600 90% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.html 300 50% 10 ignore-reload
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow mydomain
http_access deny all

# Denying unnecessary access
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all

# Stopping stuff we don't need
log_fqdn off
log_icp_queries off
buffered_logs on
emulate_httpd_log off
client_db off
cache_store_log none
memory_pools off
forwarded_for off

#General config stuff
http_port 3128
visible_hostname brahms-squid
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid
access_log /var/log/squid/access.log squid

# Caching stuff
cache_mem 64 MB
cache_dir diskd /var/spool/squid 4000 16 256
maximum_object_size_in_memory 32 KB
maximum_object_size 128 MB
request_body_max_size 8 MB

# DNS Stuff
hosts_file /etc/hosts

4). Now we only need to start squid by typing /etc/init.d/squid start and then force it to re-read the config file by typing squid -k reconfigure

5). You will need to set up your browser to use the local proxy server.

To do this, just find the configuration screen (this will vary from one browser to another) and set the browser to access the internet via a proxy server. Use the IP address of the machine running Squid and a port of 3128. Now tick the box that says Use this proxy server for all protocols and click on the OK button.


It's difficult to tell how much this has sped up my internet access, as I was using a separate caching proxy server before.

I know that it's very rare for me to wait for a website to be found (a function of the DNS server), and once I have visited a site, the next time the graphics just snap into place (no waiting at all!)

Overall, this has proven to be an extremely simple way to improve my internet access without resorting to additional hardware.

My pdnsd.conf and squid.conf files are available at and

To download them you just need to use

wget and


All the best

[Jan 15, 2012] Squid Configuration Basics
Simple Access Control

In many cases only the most basic level of access control is needed. If you have a small network, and do not wish to use things like user/password authentication or blocking by destination domain, you may find that this small section is sufficient for all your access control setup. If not, you should read chapter 7, where access control is discussed in detail.

The simplest way of restricting access is to only allow IPs that are on your network. If you wish to implement different access control, it's suggested that you put this in place later, after Squid is running. In the meantime, set it up, but only allow access from your PC's IP address.

Example access control entries are included in the default squid.conf. The included entries should help you avoid some of the more obscure problems, such as bandwidth-chewing loops, cache tunneling with SSL CONNECTs and other strange access problems. In chapter 7 we work through the config file's default config options, since some of them are pretty complex.

Access control is done on a per-protocol basis: when Squid accepts an HTTP request, the list of HTTP controls is checked. Similarly, when an ICP request is accepted, the ICP list is checked before a reply is sent.

Assume that you have a list of IP addresses that are to have access to your cache. If you want them to be able to access your cache with both HTTP and ICP, you would have to enter the list of IP addresses twice: you would have lines something like this:

acl localnet src
http_access allow  localnet 
icp_access  allow  localnet

Rule sets like the above are great for small organisations: they are straightforward. Note that as http_access and icp_access rules are processed in the order they appear in the file, you will need to place the http_access and icp_access entries as is appropriate.

For large organizations, though, things are more convenient if you can create classes of users. You can then allow or deny classes of users in more complex relationships. Let's look at an example like this, where we duplicate the above example with classes of users:

Sure, it's more complex for this example. The benefits only become apparent if you have large access lists, or when you want to integrate refresh-times (which control how long objects are kept) and the sources of incoming requests. I am getting quite far ahead of myself, though, so let's skip back.

We need some terminology to discuss access control lists, otherwise this could become a rather long chapter. So: lines beginning with acl are (appropriately, I believe) acl lines. The lines that use these acls (such as http_access and icp_access in the above example) are called acl-operators. An acl-operator can either allow or deny a request.

So, to recap: acls are used to define classes. When Squid accepts a request it checks the list of acl-operators specific to the type of request: an HTTP request causes the http_access lines to be checked; an ICP request checks the icp_access lists.

Acl-operators are checked in the order that they occur in the file (ie from top to bottom). The first acl-operator line that matches causes Squid to drop out of the acl list. Squid will not check through all acl-operators if the first denies the request.

In the previous example, we used a src acl: this checks that the source of the request is within the given IP range. The src acl-type accepts IP address lists in many formats, though we used the subnet/netmask in the earlier example. CIDR (Classless Internet Domain Routing) notation can also be used here. Here is an example of the same address range in either notation:

Subnet/Netmask (dot notation):
acl localnet src

acl localnet src

Access control lists inherit permissions when there is no matching acl. If all acl-operators in the file are checked and no match is found, the last acl-operator checked determines whether the request is allowed or denied. This can be confusing, so it's normally a good idea to place a final catch-all acl-operator at the end of the list. The simplest way to create such an operator is to create an acl that matches any IP address. This is done with a src acl with a netmask of all 0's. When the netmask arithmetic is done, Squid will find that any IP matches this acl.
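Such a catch-all can be sketched in either spelling (both lines below are standard squid.conf idiom; pick the one your Squid version documents):

```
# netmask spelling: every IP matches a /0 mask
acl all src
http_access deny all

# modern spelling: the built-in "all" acl
# acl all src all
# http_access deny all
```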

Your cache server may well be on the network placed in the relevant allow lists on your cache, and if you were thus to run the client on the cache machine (as opposed to another machine somewhere on your network) the above acl and http_access rules would allow you to test the cache. In many cases, however, a program running on the cache server will end up connecting to (and from) the address '' (also known as localhost). Your cache should thus allow requests to come from the address In the below example we don't allow icp requests from the localhost address, since there is no reason to run two caches on the same machine.

The squid.conf file that comes with Squid includes acls that deny all HTTP requests. To use your cache, you need to explicitly allow incoming requests from the appropriate range. The squid.conf file includes text that reads:


To allow your client machines access, you need to add rules similar to the below in this space. The default access-control rules stop people exploiting your cache; it's best to leave them in.


[Jul 27, 2011] SAWstats
SAWstats is an improved version of AWstats with the following additional features:

AWstats is a great tool for parsing and analyzing web logs with a nice web interface, but after all these years it still lacks Squid support, so I decided to fork it. The name "SAWstats" stands for "Severely Advanced Web Statistics".

Planned features:

Similar projects:

CNstats is a flexible and versatile system for accumulation and analysis of site attendance statistics.

Squid - Proxy Server

Squid is configured by editing the directives contained within the /etc/squid/squid.conf configuration file. The following examples illustrate some of the directives which may be modified to affect the behavior of the Squid server. For more in-depth configuration of Squid, see the References section.

Prior to editing the configuration file, you should make a copy of the original file and protect it from writing so you will have the original settings as a reference, and to re-use as necessary.

Copy the /etc/squid/squid.conf file and protect it from writing with the following commands entered at a terminal prompt:

sudo cp /etc/squid/squid.conf /etc/squid/squid.conf.original
sudo chmod a-w /etc/squid/squid.conf.original

After making changes to the /etc/squid/squid.conf file, save the file and restart the squid server application to effect the changes using the following command entered at a terminal prompt:


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.


Copyright 1996-2016 by Dr. Nikolai Bezroukov. The site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops, like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case is down you can use the at


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: September, 12, 2017