I know that süße means sweet in German, but the name is just an abbreviation. And the sweet days of Suse are probably over after Novell was acquired by a partnership of buy-out sharks... Now this distribution faces huge problems with pressure on its labor force and low market share. And no matter how sweet it is, it has one serious problem, and that problem is not Red Hat ;-). It is overcomplexity. Among the problems with SLES we can mention:
There are also some advantages of using Suse:
As for the real origin of the name, Wikipedia says: "The name 'S.u.S.E' was originally a German acronym for 'Software und System-Entwicklung', meaning 'Software and systems development'. However, the full name has never been used and the company has always been known as 'S.u.S.E', shortened to 'SuSE' in October 1998 and more recently 'SUSE'." On September 2, 2012, SUSE marked 20 years. The company was set up in Nuremberg by three university students - Hubert Mantel, Roland Dyroff, and Burchard Steinbild - and a software engineer, Thomas Fehr. In 1994, the first S.u.S.E Linux emerged, a German version of Slackware. A couple of years later the company built its own distro, based on the now extinct Jurix. The founder of Jurix, Florian La Roche, joined the company and wrote YaST, the SUSE installer and administrative GUI tool similar to the Windows Control Panel. Later, when the distribution became more Red Hat-like and adopted RPM as its package format, YaST became the major distinguishing factor between Suse and RHEL. The fact that annual support costs less for SLES was another important factor that propelled the distribution. But Oracle, with its own clone of Red Hat, eliminated this advantage... Here is a quote from IBM (Comparison of SLES (SUSE) and RHEL (Red Hat) on IBM System p):
SUSE was founded in 1992 as a UNIX Consulting Group. The first real Linux distribution was released in 1996. SLES is based on SUSE's Linux and was first released in October 2000. Interestingly enough, it was first released as a version for the IBM mainframe. The x86 version of SLES was released in April 2001. Novell acquired SUSE in January 2004. SLES V9 was released in August 2004, and SLES 10 was released in February 2006.
Novell bought it in 2004 and divested it back to Nuremberg as a separate unit after Attachmate Corporation bought Novell and took the company private in 2011. After Novell's demise, Suse became a "mostly European" distribution again.
While the quality of SLES 11 SP1 is OK, the quality of SLES 11 SP2, the first version produced in Nuremberg, was low to very low. That is probably at least partially due to transfer pains. SLES enterprise customers in the USA continue to gradually migrate to Red Hat RHEL 6.x and Oracle Linux. The only bright spots for SUSE are that it is used by SAP, Suse on the mainframe, and its role as a hedge against Red Hat dominance (Does SUSE Linux Have A Future - ReadWrite, January 25, 2013):
...in a conversation with a former SUSE employee who is familiar with SUSE's past and current performance, revenue from SUSE's hardware partners like HP and IBM has been constant but stagnant over the past few years. As he puts it, these longtime SUSE partners want a hedge against Red Hat, but they know that their businesses largely depend upon Red Hat. So they give SUSE just enough business to keep it alive.
...It remains relatively strong in Europe, as Paulo Frazao highlights, and its role as a hedge against Red Hat puts it in a good position with VMware, in particular, as Ian Waring suggests.
But even as a Red Hat hedge it plays second fiddle to CentOS, of which I was reminded by Kevin Schroeder. No, not with the server vendors, who generally avoid CentOS in an attempt to placate Red Hat. But over the past few years I've seen very large enterprises shift applications, including mission-critical applications, to CentOS as a way to cut costs. And in terms of general interest in the two platforms, well, a chart says a thousand words:
Still, SLES is one of the two major enterprise versions of Linux.
Visually SLES is very appealing. But appearance is deceptive: inside, it is a pretty complicated and capricious monster, with a lot of non-server components present by default on the server (games, audio, etc.) and several Novell products used instead of simpler alternatives (for example, Red Carpet is used instead of yum). Mono is also preinstalled and cannot be deleted.
The same is generally true of SLES 11, which tries to make itself more compatible with the dominant distribution (Red Hat) and introduced support for SELinux in the kernel, making the distribution even more complex and eliminating an important competitive advantage over RHEL (SELinux is universally hated by Unix sysadmins, while the Suse security system AppArmor is a much more elegant and reasonable solution to the same problem).
One distinct advantage of Suse is that SLES includes Xen with an unlimited number of virtual instances for the price of a single support contract. That can cut support costs, as you do not need to buy a separate support contract for each instance. SLES is also a preferred solution for VMware and Microsoft hypervisors.
Suse 10 supports only Xen and does not support KVM, which is a good thing ;-). But as with SELinux, never say never. See:
The VMware-tuned 32-bit version (only 32-bit) of the SLES kernel (included in SUSE Linux Enterprise 11 SP1 and SUSE Linux Enterprise Server 11 SP1 for VMware) is considerably faster, close in speed to para-virtualization solutions like Xen. I think Novell provides a para-virtualized 32-bit kernel for VMware with the enterprise distribution too, but I am not sure. In any case, Xen+Suse has an important advantage over VMware -- you do not need to license SLES for guests running under Xen. And the cost of VMware is such that all your savings simply go to VMware instead of staying with your company. SUSE Linux Enterprise Server 11 SP1 for VMware is the result of the recent Novell-VMware partnership (see also SUSE Linux Enterprise Server (SLES) for VMware FAQs). The problem is that VMware sells Suse Enterprise at prices higher than Novell's :-).
SUSE Linux Enterprise Server 11 SP1 for VMware is intended for use by customers running VMware vSphere and its subscriptions and support are sold only by VMware and its partners.
For more information visit http://www.vmware.com/go/slesforvmware.
If you have obtained a subscription for SUSE Linux Enterprise Server from Novell, that subscription does not provide the right to use SUSE Linux Enterprise Server for VMware, nor does it provide access to patches, maintenance, or support for SLES for VMware.
If you are a Novell customer, please download the standard SUSE Linux Enterprise Server 11 SP1 from http://download.novell.com.
Here are the terms (from VMware and Novell Expand Strategic Partnership to Deliver and Support SUSE® Linux Enterprise Server for VMware vSphere™ Environments) :
- Customers will receive SLES with one (1) entitlement for a subscription to patches and updates per qualified VMware vSphere SKU. For example, if a customer were to buy 100 licenses of a qualified vSphere Enterprise Plus SKU, that customer would receive SLES with one hundred (100) entitlements for subscription to patches and updates.
Suse 11 SP2 compatibility with Oracle 11g is OK. SLES 11 SP1 and SLES 10 SP3 and SP4 are also compatible (and certified by Oracle). Please do not install the Suse package orarun, which supposedly helps to run Oracle on Suse. It is horrible junk and only complicates things. See Installation of Oracle 11g on SLES 11.
Some information about version compatibility can be found here:
Officially SLES has a relatively short life span for an enterprise OS: five years. In reality, for Suse 9 it was closer to ten. Novell tends to extend it as the end date draws close. Also, in 2011 they declared that they will provide patches for five years from the latest service pack (SP4 in the case of Suse 10). See SLES life cycle. For example:
For an enterprise-class OS the registration process is too complex and unreliable/capricious, especially when an HTTP proxy is used within the organization. This can be somewhat alleviated by purchasing three-year or longer contracts. With a one-year contract it is a mess: you need to re-register servers at the end of the year (a new license in Novell-speak means a new registration code). In this respect it would not be an exaggeration to say that it is worse than any other proprietary product. You can also ease the pain by using batch re-registration. See Batch SLES Registration.
This is a unique proposition from Novell that makes Suse much more attractive. See the press releases Novell Announces SUSE Appliance Program and LimeJeos - openSUSE.
Novell is one of the few Linux vendors that provides more or less decent manuals which can help users beyond installing the product. See SLES Documentation. There is a special document devoted to Automated Installation using AutoYaST.
Suse log files are slightly different from Red Hat's. Like Red Hat, Suse uses logrotate for log rotation. The following is a list of the most common Suse logs:
|/var/log/boot.msg||Messages from the kernel during the boot process.|
|/var/log/mail.*||Messages from the mail system.|
|/var/log/messages||Ongoing messages from the kernel and system log daemon when running.|
|/var/log/NetworkManager||Log file from NetworkManager, used to diagnose problems with network connectivity.|
|/var/log/SaX.log||Hardware messages from the SaX display and KVM system.|
|~/.xsession-errors||Messages from the desktop applications currently running. Replace ~ with the actual user's home directory.|
|/var/log/warn||All messages from the kernel and system log daemon assigned WARNING level or higher.|
|/var/log/wtmp||Binary file containing user login records for the current machine session. View it with last.|
|/var/log/Xorg.0.log||Various start-up and runtime logs from the X Window System. Useful for debugging failed X start-ups.|
|/var/log/YaST2/||Directory containing YaST's actions and their results.|
|/var/log/samba/||Directory containing Samba server and client log messages.|
Apart from log files, your machine also supplies you with information about the running system. See System Information.
|/proc/cpuinfo||Displays processor information, including its type, make, model, and performance.|
|/proc/dma||Shows which DMA channels are currently being used.|
|/proc/interrupts||Shows which interrupts are in use, and how many of each have been in use.|
|/proc/iomem||Displays the status of I/O (input/output) memory.|
|/proc/ioports||Shows which I/O ports are in use at the moment.|
|/proc/meminfo||Displays memory status.|
|/proc/modules||Displays the individual kernel modules.|
|/proc/mounts||Displays devices currently mounted.|
|/proc/partitions||Shows the partitioning of all hard disks.|
|/proc/version||Displays the current version of Linux.|
For more information see Linux Troubleshooting
|df||Reports total, used, and available disk space across all mounted filesystems|
|du||Estimates disk space usage by directories|
|free||Displays total, used, and free memory statistics; also reports information on memory buffers and swap space|
|hwinfo||Reports detailed information on known hardware|
|iostat||Reports input/output statistics for block devices|
|lsof||Lists currently open files|
|ltrace||Traces library calls made by a process|
|netstat||Reports network statistics and route information|
|supportconfig||Comprehensive reporting tool used to generate a report documenting the entire running environment|
|strace||Traces system calls and signals made by a process|
|tcpdump||Used to capture network traffic for later review using a utility such as Ethereal|
|top||Displays running process and various statistics regarding each process (CPU utilization, memory, and so on)|
|vmstat||Reports virtual memory statistics|
|xosview||Graphical utility used to report system statistics such as CPU usage, load average, memory usage, and several other parameters|
SUSE Linux Enterprise patching is difficult to configure, but after configuration it works more or less OK. Compatibility is good unless you use your own compiled versions of daemons such as Sendmail and BIND. In the latter case the update tries to overwrite them, so you need to choose a location different from the standard one.
Package management is not as polished as Smart and is also much slower, but we have what we have. OpenSuse users actually can use Smart without any problems.
SUSE Linux Enterprise can work with Microsoft Active Directory but I did not check that.
SUSE Linux Enterprise includes Novell AppArmor application-level security, which is vastly superior to the Red Hat solution based on SELinux. AppArmor works by enforcing a set of per-application file permissions (which is a pretty elegant idea). Because each application can have a different set of permissions, such a system tremendously helps to protect against typical attacks, as the attacker's assumptions about system file permissions become invalid. That alone makes it much more difficult to exploit application flaws as well as packaging flaws.
Many network application security solutions never meet the purposes for which they were designed because they are too complex or require too much maintenance. AppArmor, on the other hand, is designed to get you started quickly with minimal investment in time and resources. Its name-based access-control method does not require relabeling of the file system as other methods do, and applications don't have to be modified to benefit from AppArmor protection. In addition, the default configuration of AppArmor includes a number of predefined profiles for common Linux programs like Web, e-mail and remote-login servers that can be deployed immediately. Security profiles for custom or third-party applications can be developed using the included wizard-based tools, which also make policy updates simple as your environment changes.
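To make the name-based approach concrete, here is a minimal sketch of what such a profile looks like; the daemon path and file names below are invented for illustration and are not a profile shipped with SLES:

```
# Hypothetical profile for an imaginary daemon /usr/sbin/exampled
#include <tunables/global>

/usr/sbin/exampled {
  #include <abstractions/base>

  /etc/exampled.conf r,      # its configuration is read-only
  /var/log/exampled.log w,   # only its own log file is writable
}
```

Any path not listed is simply inaccessible to the confined program, which is exactly why an attacker's usual assumptions about reachable files stop holding.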
YaST2 is dual-purpose software: it is simultaneously a GUI installer and a configuration tool. As a configuration tool it looks superficially similar to the Microsoft Control Panel: a single application that supposedly can help to configure almost any aspect of the system, including such things as software installation, services configuration, sharing files, or configuring external devices.
When it works, it's OK (and hardware detection is actually really good -- no complaints here -- and this really matters). The interface (not to be confused with functionality) is another story, and it could be better. But when it does not work, the quality of diagnostics is sometimes problematic, and in a few cases it can be misleading. My recommendation would be to try the command-line version of YaST, which sometimes provides more helpful diagnostics.
There is also an X Window configuration tool, SaX2. It gives the ability to choose the graphics card, set the resolution, color depth, etc. Please note that Suse sometimes sets the refresh rate for monitors too high.
AutoYaST is broken and started working only in SLES 11 SP3.
For more information see Disabling Unnecessary Services in SLES
January 27, 2006 | www.cyberciti.biz
... ... ...
Linux / UNIX will not allow you to unmount a device that is busy. There are many reasons for this (such as a program accessing the partition or an open file), but the most important one is to prevent data loss. Try the following command to find out what processes have activities on the device/partition. If your device name is /dev/sda1, enter the following command as the root user:
# lsof | grep '/dev/sda1'
Output:
vi 4453 vivek 3u BLK 8,1 8167 /dev/sda1
The above output tells us that user vivek has a vi process running that is using /dev/sda1. All you have to do is stop that vi process and run umount again. As soon as that program terminates its task, the device will no longer be busy and you can unmount it with the following command:
# umount /dev/sda1
How do I list the users on the file-system /nas01/?
Type the following command:
# fuser -u /nas01/
Sample outputs:
# fuser -u /var/www/
/var/www: 3781rc(root) 3782rc(nginx) 3783rc(nginx) 3784rc(nginx) 3785rc(nginx) 3786rc(nginx) 3787rc(nginx) 3788rc(nginx) 3789rc(nginx) 3790rc(nginx) 3791rc(nginx) 3792rc(nginx) 3793rc(nginx) 3794rc(nginx) 3795rc(nginx) 3796rc(nginx) 3797rc(nginx) 3798rc(nginx) 3800rc(nginx) 3801rc(nginx) 3802rc(nginx) 3803rc(nginx) 3804rc(nginx) 3805rc(nginx) 3807rc(nginx) 3808rc(nginx) 3809rc(nginx) 3810rc(nginx) 3811rc(nginx) 3812rc(nginx) 3813rc(nginx) 3815rc(nginx) 3816rc(nginx) 3817rc(nginx)
The following discussion shows how to unmount a device or partition forcefully using the umount or fuser Linux commands.
Linux fuser command to forcefully unmount a disk partition
Suppose you have /dev/sda1 mounted on the /mnt directory; then you can use the fuser command as follows. WARNING! These examples may result in data loss if not executed properly (see "Understanding the device busy error" for more information).
Type the command to unmount /mnt forcefully:
# fuser -km /mnt
Where,
- -k : Kill processes accessing the file.
- -m : Specifies a file on a mounted file system, or a block device that is mounted. In the above example you are using /mnt.
Linux umount command to unmount a disk partition.
You can also try the umount command with the -l option on a Linux based system:
# umount -l /mnt
Where,
- -l : Also known as lazy unmount. Detach the filesystem from the filesystem hierarchy now, and clean up all references to the filesystem as soon as it is not busy anymore. This option works with kernel version 2.4.11 and above only.
If you would like to unmount an NFS mount point, then try the following command:
# umount -f /mnt
Where,
- -f: Force unmount in case of an unreachable NFS system
Please note that using these commands or options can cause data loss for open files; programs which access files after the file system has been unmounted will get an error.
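The escalation described above (try a clean unmount, see what is holding the mount, fall back to a lazy unmount, kill holders only as a last resort) can be sketched as a small shell function. This is a sketch under those assumptions, not a definitive tool; run it as root for real mount points:

```shell
#!/bin/sh
# try_umount: attempt a clean unmount; on failure, report the processes
# holding the mount point, then fall back to a lazy unmount.
try_umount() {
    mp="$1"

    # 1. Try a normal unmount first.
    if umount "$mp" 2>/dev/null; then
        echo "unmounted $mp cleanly"
        return 0
    fi

    # 2. Show which processes are holding the mount point.
    echo "busy: processes holding $mp:"
    fuser -vm "$mp" 2>&1

    # 3. Fall back to a lazy unmount (detach now, clean up when idle).
    if umount -l "$mp" 2>/dev/null; then
        echo "lazy-unmounted $mp"
        return 0
    fi

    echo "could not unmount $mp"
    return 1
}
```

Killing the holders with fuser -km remains the last resort, since it terminates processes that still have files open.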
www.howtoforge.com
ZFS is a combined filesystem and logical volume manager. The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the filesystem and volume management concept, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.
ZFS was originally implemented as open-source software, licensed under the Common Development and Distribution License (CDDL).
When we talk about the ZFS filesystem, we can highlight the following key concepts:
- Data integrity.
- Simple storage administration with only two commands: zfs and zpool.
- Everything can be done while the filesystem is online.
For a full overview and description of all available features, see this detailed Wikipedia article.
In this tutorial, I will guide you step by step through the installation of the ZFS filesystem on Debian 8.1 (Jessie). I will show you how to create and configure pools using raid0 (stripe), raid1 (mirror) and RAID-Z (raid with parity), and explain how to configure a file system with ZFS.
Based on the information from the website www.zfsonlinux.org, ZFS is only supported on the AMD64 and Intel 64-bit architecture (amd64). Let's get started with the setup. ... ... ...
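The three pool layouts the tutorial mentions (stripe, mirror, RAID-Z) differ only in the vdev specification passed to zpool create. A sketch, assuming four spare disks /dev/sdb through /dev/sde (the device and pool names are placeholders for the example):

```shell
# Striped pool (raid0 equivalent): full capacity, no redundancy.
zpool create tank0 /dev/sdb /dev/sdc

# Mirrored pool (raid1 equivalent): survives the loss of one disk.
zpool create tank1 mirror /dev/sdb /dev/sdc

# RAID-Z pool: single parity spread across three or more disks.
zpool create tank2 raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Create a filesystem inside a pool and enable compression on it.
zfs create tank1/data
zfs set compression=lz4 tank1/data

# Check pool health.
zpool status tank1
```

Everything above can be done while the pool is online, which is one of the key points listed earlier.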
The ZFS file system is a revolutionary new file system that fundamentally changes the way file systems are administered on Unix-like operating systems. ZFS provides features and benefits not found in any other file system available today. ZFS is robust, scalable, and easy to administer.
October 26, 2014 | LinuxBSDos.com
Win2NIX, on October 26, 2014 at 11:04 pm
Ummm..., on September 4, 2014 at 7:55 pm
I migrated fully to NIX after 10-15 years as a Win admin and got tired of having control "hidden". Worked with ESX and used the console and loved the freedom. The trend I am noticing with the systemd debate is VERY similar to what has happened with M$. Keep It Simple Stupid is something Nix should be doing; having things modular and not depending on something else makes life easier. If one thing breaks it's not taking everything else with it. Further, if this is all done in binary and not easily read, THIS IS NOT GOOD. I hated M$ making me download other crap to diagnose their BSODs; if you like having your system flipping out and not saving your data then I guess systemd would be for you, given its direction. This is also akin to making your browser part of your OS and having it intertwine with it. (Bad Voodoo)
I'm using Mint and looking for a possible way to decouple from systemd. I just don't see this as a good thing and it reminds me too much of M$ tactics. Now is the time to deviate from systemd and keep a more modular approach, then watch and see if systemd starts to be an issue, which at this point, if it keeps taking over more management, is only a matter of time. I also wonder if the M$ embracing of open source has anything to do with this; it certainly smells of large-corporation thinking, or lack thereof. I like improving things, but this does not appear to be an improvement, rather a bomb waiting to go off. On these points this is a bad idea: binary is not an easy way to gain insight and correct issues, and it keeps adding more processes to control. I was able to patch heartbleed within 15 minutes after finding out about it. In the M$/corp world, good luck; hope it's this month.
AC, on September 4, 2014 at 2:16 pm
I will admit right off, that I am not a linux designer or maintainer. I got started with linux about 20 years ago. People state that the old init system was fragile. Maybe it was, again…not building linux from scratch I wouldn't know. I don't recall ever having any issues though.
Whether right or wrong, from my (very) limited understanding, the systemd process is driven by binary files, which are not really meant to be edited or looked at by hand. So if something catastrophic happens (which granted hasn't happened yet)…how would I fix it or know what to fix? Go to my distro's forum and hope someone can fix it/release a patch soon?
Anyway, if one of the earlier commenters is correct, and there is no specific plan for systemd (which frankly is a scary thought)…how much more of the system will it continue to take over? And at what point does too much become too much?
I'm all for progress, but I think the Keep It Simple Stupid approach, which may not be "exciting" stuff to develop, is still the best approach.
"why did the people responsible for the development of the major Linux distributions accept it as a replacement for old init system?"
I can't speak for the initial decision, but at this point, I would suspect that inertia is keeping it in place. I highly doubt that any of the major linux desktop systems that most current users depend on would even function without systemd…at least not without a lot of major programming changes to make it happen. If someone did take that route, then all of those custom changes would need to be maintained.
(Simplistically thinking) Why can't things be more pluggable/portable? Distro X uses a systemd plugin for their init, and distro Y chooses to build against something else? Granted systemd is most likely now too big for that, but one can dream I suppose.
xx, on September 4, 2014 at 1:17 pm
Yes. Systemd is a trojan.
Dimitri Minaev, on September 4, 2014 at 11:59 am
Systemd is a perfect system for rootkits, and NSA backdoors.
Once it will be complete it will hide necessary processes even from root, it will filter unnecessary events from log, and it will do much much more.
But it seems that only a minority care about that.
T Davis, on September 4, 2014 at 11:20 am
IMHO, the downside of systemd as a project is that its parts lack a defined stable interface. This means that you cannot replace one part with a different one, creating your own stack of tools. When you configure your desktop system, you can combine any display manager with any window manager with any panel or file manager. Can you replace networkd with another tool transparently? If yes, can you be sure that your tool will keep working after the next systemd upgrade?
Ericg, on September 3, 2014 at 7:12 pm
The reason Debian (and therefore Ubuntu) adopted SystemD is that the appointed Debian tech team is now divided equally between Ubuntu devs (who were Debian devs before Ubuntu came along) and Redhat employees. Look at the voting emails and 3 months of arguments.
The biggest issue is really not one of SystemD infiltration, but more of Redhat taking over every aspect of the Linux development process. Time and again, I have seen Canonical steer in their own direction, not because they want to go rogue, but because the upstreams for the main projects (Gnome, Wayland, Pulse Audio, now SystemD and possibly OpenStack, and even the kernel to some extent) are almost exclusively owned by Redhat, and only wish to make forward progress at their own pace (wayland has had almost twice the development time and resources as mir, for example).
The REAL issue here is: who has the Linux community in their best interests? Do some real investigation and write a story on that.
Except you, the author, have fallen into the same trap everyone else does… Confusing Systemd (the project) with systemd (the binary). Systemd, the project, is like Apache; it's an umbrella term for a lot of other things: systemd, logind, networkd, and other utilities.
Peter, on September 4, 2014 at 4:42 am
Systemd, the binary, handles service management in pid1, that includes socket and explicit activation. Other tasks it passes off to non-pid 1 processes. For example: session management isn't handled through systemd pid 1, its handled through logind.
Readahead is handled through a service file for systemd, just like other daemons.
syslog functionality isn't handled in pid1, its handled in journald which is a separate process.
hostname, locale, and time registration are all handled through explicit utilities: hostnamectl, localectl, and timedatectl, which are done as separate processes.
Network configuration got added in networkd. What is networkd? The most minimal network userland you can have. Its for people who don't want to write by-hand config files, but for whom NetworkManager is way overkill. Is it pid 1? Nope.
Yes, systemd started off as "just an init replacement." It grew into more things. But don't assume that "systemd" (the binary) is the same as "systemd" (the project). Most things that are added to systemd in recent times AREN'T pid 1 like boycottsystemd claims, they're just small utilities that got added under the systemd umbrella project.
Ericg, that's the problem
systemd has become a whole integrated stack
init.d, while not easy to use for starters, was at least built around the idea of simple units which can be mixed and matched to get the results the user wants – note: user wants, not developer wants
a Linux user, on September 4, 2014 at 5:23 am
"hostname, locale, and time registration are all handled through explicit utilities: hostnamectl, localectl, and timedatectl, which are done as separate processes."
J. Orejarena, on September 4, 2014 at 9:38 am
Missing the point.
People talk as though prior to systemd such tasks were beyond Linux, didn't work, always crashed, were a nightmare to use or manage and that is not the case.
The only difference I see between my Linux machine now and my Linux machine of a few years ago is that it now boots faster. And that's it. And whilst that's nice, it's so meaningless as to be painful to behold the enthusiasm that some display, as though all they did all day long was sit and reboot their machines with a stop watch in one hand.
The main problem with systemd is this – if there are ulterior motives at work here (and by definition they will be hidden at present) then by the time we find that out it will be too late.
And the other problem is that it takes a special kind of arrogance to sneer at 20+ years of development by some seriously smart people and claim that you, as a mere child, can do better. I do wonder how far systemd would have got had it not had Red Hat's weight behind it. I do realise that improvement sometimes means kicking out old 'tried and trusted' methods. But it's the way it's happening with systemd that rings alarm bells – too many sneering, nasty bullies trashing anyone who disagrees (just like anyone who thinks Corporations should pay proper taxes is sneered at, or anyone who thinks Putin is not as bad as he is made out to be gets sneered at – sneering is the new way of silencing genuine debate, so when I come across it in Linuxland, alarm bells begin to ring).
Linux is about granular power and control, not convenience.
"The main problem with systemd is this – if there are ulterior motives at work here (and by definition they will be hidden at present) then by the time we find that out it will be too late."
Just read http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html to find the ulterior motives.
Parallel SSH to control large numbers of machines simultaneously
pssh provides parallel versions of the OpenSSH tools that are useful for controlling large numbers of machines simultaneously. It includes parallel versions of ssh, scp, and rsync, as well as a parallel kill command.
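A short sketch of how these tools are typically invoked; the host file contents, the user name, and the process name below are assumptions made up for the example:

```shell
# hosts.txt lists one target host per line, for example:
#   server1.example.com
#   server2.example.com

# Run a command on every host in parallel, printing each host's
# output inline (-i) as the user given with -l.
pssh -h hosts.txt -l root -i 'uptime'

# Copy a file to all hosts in parallel with the parallel scp variant.
pscp -h hosts.txt -l root /etc/ntp.conf /etc/ntp.conf

# Kill a runaway process on every host with the parallel kill command.
pnuke -h hosts.txt -l root rogue_daemon
```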
An anonymous reader writes
The man who in every sense sits at the nerve centre of SUSE Linux has no airs about him. At 38, Vojtěch Pavlík is disarmingly frank and often seems a bit embarrassed to talk about his achievements, which are many and varied. He is every bit a nerd, but can be candid, though precise. As director of SUSE Labs, it would be no exaggeration to call him the company's kernel guru. Both recent innovations that have come from SUSE — patching a live kernel, technology called kGraft, and creating a means for booting openSUSE on machines locked down with secure boot — have been his babies.
by Bengie (1121981) on Thursday November 20, 2014 @03:46PM (#48429167)
Re:Will it ever be the year of Linux on the Deskto (Score:4, Interesting)
Desktop users started taking over Linux and now we have SystemD. Be careful what you wish for.
Anonymous Coward writes:
Do you think systemd sucks? (Score:0)
Systemd, for or against? This is pretty important to the community here and probably the first question that needs to be asked before any others.
Re:Do you think systemd sucks? (Score:2)
1) There are numerous technical reasons against systemd, and not one good reason for it.
2) Admins are way more than users with a special password.
3) It is way more than a few people who oppose systemd.
Maxwell (13985) on Thursday November 20, 2014 @04:43PM (#48429689) Homepage
Re:patching a live kernel? (Score:3)
Because x86 doesn't have the system management chip that Unix boxes have (full disclosure: old AIX admin, last used 5.1L). x86 has the crappy BIOS and UEFI, neither of which can manage the system. This is also what allows hot CPU and hot RAM upgrades, etc. The AIX system chip is an OS unto itself; it will boot with no RAM or CPU on board.
red_dragon (1761) on Thursday November 20, 2014 @09:06PM (#48431133) Homepage
Re:patching a live kernel? (Score:2)
What do you think ILO/ILOM, DRAC, RSA, etc. do on x86 servers? Those have their own CPU/storage/OS/network to manage the server remotely even if the main CPU gives out the magic smoke. A sysadmin can use it to wipe out and reinstall the server's OS and perform firmware upgrades without even walking into the server room.
Anonymous Coward on Thursday November 20, 2014 @11:34PM (#48431641)
sick of openSUSE ignoring bug reports (Score:0)
I've submitted several bug reports, with some including patches to fix the problem. They all get ignored for a very long time, if not forever. In the few cases they've actually been looked at, it's been 6+ months later.
Given the proximity of the SUSE Linux Enterprise 12 release to the publication of the “shellshock” series of vulnerabilities in the GNU Bourne Again Shell (bash), we want to provide customers with information on the fix status of the bash version shipped in the SLE 12 GA release:
- CVE-2014-6271 (original shellshock)
- CVE-2014-7169 (taviso bug)
- CVE-2014-7186 (redir_stack bug)
- CVE-2014-7187 and
- non-exploitable CVE-2014-6277
- non-exploitable CVE-2014-6278
Up-to-date information is available online: https://www.suse.com/support/shellshock/
By default, systemd cleans the tmp directories daily and does not honor sysconfig settings in /etc/sysconfig/cron such as TMP_DIRS_TO_CLEAR. The sysconfig settings therefore need to be transformed into systemd configuration to avoid potential data loss or unwanted behavior.
When updating to SLE 12, the variables in /etc/sysconfig/cron will be automatically migrated into an appropriate systemd configuration. The following variables are affected: MAX_DAYS_IN_TMP, MAX_DAYS_IN_LONG_TMP, TMP_DIRS_TO_CLEAR, LONG_TMP_DIRS_TO_CLEAR, CLEAR_TMP_DIRS_AT_BOOTUP, OWNER_TO_KEEP_IN_TMP.
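On the systemd side the equivalent knob is a tmpfiles.d drop-in. A minimal sketch, assuming illustrative ages of 10 days for /tmp and 30 days for /var/tmp (the migrated values will depend on your old sysconfig settings):

```
# /etc/tmpfiles.d/tmp.conf -- overrides the packaged tmp.conf
d /tmp     1777 root root 10d
d /var/tmp 1777 root root 30d
```

Running systemd-tmpfiles --clean applies the ageing immediately instead of waiting for the daily timer.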
/run/media/<user_name> is now used as the top directory for removable media mount points. It replaces /media, which is no longer available.
It looks like on some SLES 11 and SLES 10 systems the permissions for crontab are:

-rwsr-x--- 1 root trusted 40432 May 8 2012 /usr/bin/crontab

It might be because on this particular server file permissions (in Security & Hardening) are set to the "secure" template instead of the default ("easy"). With this setting in place, no ordinary user can use crontab. In their infinite wisdom, SLES developers introduced the group trusted, as if the allow and deny files were not enough.

I would recommend resetting the permissions to 4755:

-rwsr-xr-x 1 root trusted 40432 May 8 2010 /usr/bin/crontab

In addition to changing the permissions of crontab, you also have to put a line in /etc/permissions.local to keep updates from changing it back to 4750:

/usr/bin/crontab root:trusted 4755
I tested this solution and it looks like it works.
By default the group trusted has no members; interestingly, adding a user (say, oracle) to this group did not solve the problem for me.
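The fix above can be scripted. A sketch using the paths from the text; the real target is /usr/bin/crontab and requires root, so the mode change is demonstrated on a scratch file first:

```shell
# Demonstrate the setuid + world-executable mode on a scratch file
f=$(mktemp)
chmod 4755 "$f"          # -rwsr-xr-x: setuid plus world execute
stat -c '%a' "$f"        # prints 4755
rm -f "$f"
# On the real system (as root):
#   chmod 4755 /usr/bin/crontab
#   echo '/usr/bin/crontab root:trusted 4755' >> /etc/permissions.local
```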
14.4.2 ext4: Runtime Switch for Write Support #
The SLE 11 SP3 kernel contains a fully supported ext4 file system module, which provides read-only access to the file system. A separate package is not required.
Read-write access to an ext4 file system can be enabled by using the rw=1 module parameter. The parameter can be passed while loading the ext4 module manually, added for automatic use by creating /etc/modprobe.conf.d/ext4.conf with the contents options ext4 rw=1, or set after loading the module by writing to /sys/module/ext4/parameters/rw. Note that read-write ext4 file systems are still officially unsupported by SUSE Technical Services.
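A sketch of the persistent variant: create a modprobe option file carrying the rw=1 parameter. Demonstrated in a temp directory so it can be tried anywhere; on the real (SLES 11 SP3) system the file is /etc/modprobe.conf.d/ext4.conf, and the parameter only exists in SUSE's patched ext4 module:

```shell
# Build the option file in a scratch directory for demonstration
conf_dir=$(mktemp -d)
echo 'options ext4 rw=1' > "$conf_dir/ext4.conf"
cat "$conf_dir/ext4.conf"    # prints: options ext4 rw=1
rm -rf "$conf_dir"
# As root on the target system, the other two variants are:
#   modprobe ext4 rw=1                         # at module load time
#   echo 1 > /sys/module/ext4/parameters/rw    # on a loaded module
```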
ext4 is not supported for the installation of the SUSE Linux Enterprise operating system.
Since SUSE Linux Enterprise 11 SP2 we support offline migration from ext4 to the supported btrfs file system.
The ext4-writeable package is still available for compatibility with systems with kernels from both the SLE11 SP2 and SLE11 SP3 releases installed.
Most server hardware clocks use UTC. UTC stands for Universal Time, Coordinated, and is closely related to Greenwich Mean Time (GMT). Other time zones are determined by adding to or subtracting from UTC. A server typically displays local time, which is subject to DST correction twice a year.
Wikipedia defines DST as follows:
Daylight saving time (DST), also known as summer time in British English, is the convention of advancing clocks so that evenings have more daylight and mornings have less. Typically clocks are adjusted forward one hour in late winter or early spring and are adjusted backward in autumn.
A DST patch is only required in a few countries, such as the USA. Please see this Wikipedia article.
Linux will change to and from DST when the HWCLOCK setting is set to `-u', i.e. when the hardware clock is set to UTC (which is closely related to GMT), regardless of whether Linux was running at the time DST is entered or left.
When HWCLOCK in /etc/sysconfig/clock is set to `--localtime', Linux will not adjust the time, operating under the assumption that your system may be a dual-boot system and that the other OS takes care of the DST switch. If that is not the case, the DST change needs to be made manually.
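For reference, the relevant lines in /etc/sysconfig/clock look like this (a sketch; the TIMEZONE value is illustrative):

```
HWCLOCK="-u"            # hardware clock kept in UTC
TIMEZONE="US/Eastern"   # illustrative value; use "--localtime" in HWCLOCK
                        # instead of "-u" for a hardware clock in local time
```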
EST is defined as being GMT -5 all year round. US/Eastern, on the other hand, means GMT-5 or GMT-4 depending on whether Daylight Savings Time (DST) is in effect or not.
The tzdata package contains data files with rules for the various timezones around the world. When this package is updated, it incorporates all previous timezone fixes.
The local time as seen by regular applications under Linux is based on two things:
- The system time
- The TZ (for timezone) environment variable

To list the valid values for TZ, execute:

$ cd /usr/share/zoneinfo ; find | grep EST
./EST
./right/EST
./right/EST5EDT
./posix/EST
./posix/EST5EDT
./EST5EDT
These zoneinfo files, part of the timezone package, are not human-readable. To check the data in them, use the zdump command. For example:

$ zdump EST
EST Thu Oct 24 18:09:54 2013 EST

The output of zdump -v lists all clock jumps for this timezone, including the offset from Greenwich Mean Time and whether DST applies (it does when isdst=1).

$ zdump -v EST
EST -9223372036854775808 = NULL
EST -9223372036854689408 = NULL
EST 9223372036854689407 = NULL
EST 9223372036854775807 = NULL
Oct. 10, 2013 | OStatic
Last summer Lukas Ocilka mentioned the completion of the basic conversion of YaST from YCP to Ruby. At the time it was said the change was needed to encourage contributions from a wider set of developers, and Ruby is said to be simpler and more flexible. Well, today Jos Poortvliet posted an interview with two YaST developers explaining the move in more detail.
In a discussion with Josef Reidinger and David Majda, Poortvliet discovered the reason for the move was because all the original YCP developers had moved on to other things and everyone else felt YCP slowed them down. "It didn’t support many useful concepts like OOP or exception handling, code written in it was hard to test, there were some annoying features (like a tendency to be “robust”, which really means hiding errors)."
Ruby was chosen because it is a well known language over at the openSUSE camp and was already being used on other SUSE projects (such as WebYaST). "The internal knowledge and standardization was the decisive factor." The translation went smoothly according to developers because they "automated the whole process and did testing builds months in advance. We even did our custom builds of openSUSE 13.1 Milestones 2 and 3 with pre-release versions of YaST in Ruby."
For now, performance of the Ruby code is comparable to the YCP version because developers concentrated on getting it working well during these first phases, and users will notice few if any visual changes to the YaST interface. No more major changes are planned for this development cycle, but the new YaST will be used in 13.1, due out November 19.
See the full interview for lots more detail.
Support for the btrfs File System #
Btrfs is a copy-on-write (CoW) general-purpose file system. Based on the CoW functionality, btrfs provides snapshotting. Beyond that, data and metadata checksums improve the reliability of the file system. btrfs is highly scalable and also supports online shrinking to adapt to real-life environments. On appropriate storage devices btrfs also supports the TRIM command.
With SUSE Linux Enterprise 11 SP2, the btrfs file system joins ext3, reiserfs, xfs and ocfs2 as commercially supported file systems. Each file system offers distinct advantages. While the installation default is ext3, we recommend xfs when maximizing data performance is desired, and btrfs as a root file system when snapshotting and rollback capabilities are required. Btrfs is supported as a root file system (i.e. the file system for the operating system) across all architectures of SUSE Linux Enterprise 11 SP2. Customers are advised to use the YaST partitioner (or AutoYaST) to build their systems: YaST will prepare the btrfs file system for use with subvolumes and snapshots. Snapshots will be automatically enabled for the root file system using SUSE's snapper infrastructure. For more information about snapper, its integration into ZYpp and YaST, and the YaST snapper module, see the SUSE Linux Enterprise documentation.
Migration from "ext" File Systems to btrfs
Migration from existing "ext" file systems (ext2, ext3, ext4) is supported "offline" and "in place". Calling "btrfs-convert [device]" converts the file system. This is an offline process which needs at least 15% free space on the device, but is applied in place. To roll back, call "btrfs-convert -r [device]". Caveat: when rolling back, all data added after the conversion to btrfs will be lost; in other words, the roll back is complete, not partial.
Btrfs is supported on top of MD (multiple devices) and DM (device mapper) configurations. Please use the YaST partitioner to achieve a proper setup. Multivolume/RAID with btrfs is not supported yet and will be enabled with a future maintenance update.
- We are planning to announce support for btrfs' built-in multi volume handling and RAID in a later version of SUSE Linux Enterprise.
- Starting with SUSE Linux Enterprise 12, we are planning to implement bootloader support for /boot on btrfs.
- Transparent compression is implemented and mature. We are planning to support this functionality in the YaST partitioner in a future release.
- We are committed to actively working on the btrfs file system with the community, and we will keep customers and partners informed about progress and experience in terms of scalability and performance. This may also apply to cloud and cloud storage infrastructures.
Online Check and Repair Functionality
Check and repair functionality ("scrub") is available as part of the btrfs command line tools. "Scrub" is aimed at verifying data and metadata, assuming the tree structures are fine. "Scrub" can (and should) be run periodically on a mounted file system: it runs as a background process during normal operation.
The tool "fsck.btrfs" tool will soon be available in the SUSE Linux Enterprise update repositories.
If you are planning to use btrfs with its snapshot capability, it is advisable to reserve twice as much disk space as the standard storage proposal. This is automatically done by the YaST2 partitioner for the root file system.
Hard Link Limitation
In order to provide a more robust file system, btrfs incorporates back references for all file names, eliminating the classic "lost+found" directory added during recovery. A temporary limitation of this approach affects the number of hard links in a single directory that link to the same file. The limitation is dynamic based on the length of the file names used. A realistic average is approximately 150 hard links. When using 255 character file names, the limit is 14 links. We intend to raise the limitation to a more usable limit of 65535 links in a future maintenance update.
Read-Only Root File System #
It is possible to run SUSE Linux Enterprise Server 11 on a shared read-only root file system. A read-only root setup consists of the read-only root file system, a scratch file system, and a state file system. The /etc/rwtab file defines which files and directories on the read-only root file system are replaced by which files on the state and scratch file systems for each system instance. The readonlyroot kernel command line option enables read-only root mode; the state= and scratch= kernel command line options determine the devices on which the state and scratch file systems are located.

In order to set up a system with a read-only root file system: set up a scratch file system, set up a file system to use for storing persistent per-instance state, adjust /etc/rwtab as needed, add the appropriate kernel command line options to your boot loader configuration, replace /etc/mtab with a symlink to /proc/mounts as described below, and (re)boot the system.

To replace /etc/mtab with the appropriate symlink, call:

ln -sf /proc/mounts /etc/mtab
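An /etc/rwtab sketch, assuming the empty/files/dirs keywords of the common readonly-root convention (the paths are illustrative; check the file shipped with SLES for the authoritative syntax):

```
empty /tmp               # recreated empty on the scratch file system
files /etc/resolv.conf   # file copied to writable storage at boot
dirs  /var/run           # directory tree duplicated on writable storage
```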
New Intel Platform and CPU Support #
This SP adds support for the following new Intel CPUs:
- 4th Generation Intel® Core™ Processor
- Intel® Xeon® processor E5-2600 v2 product family
- Intel® Xeon® processor E5-1600 v2 product family
- Intel® Xeon® processor E5-2400 v2 product family
- Intel® Xeon® processor E5-4600 v2 product family
- Next generation Intel® Xeon® processor E7-8800/4800/2800 v2 product families (codenamed ‘Ivy Bridge-EX’)
This covers new support for the following platforms:
- Next generation
March 3, 2011 The Register
The quickest way to build a commercial Linux business is to clone whatever Red Hat does. That's what Oracle and CentOS do with their Enterprise Linux redistributions and accompanying paid-for support offerings, and it is now what Novell is doing with a "new" product called SUSE Manager.
With SUSE Manager, announced today, Novell is trying to not only provide a better tool for managing its SUSE Linux Enterprise server than its existing Yast and ZENworks products, but is also trying to branch out into managing Red Hat Enterprise Linux as well as its own distro for servers.
The company could have spent a lot of time and money creating a tool that allowed for the management of RHEL and SLES, the two most popular Linuxes (in terms of support revenue, not necessarily installations). Or it could grab the Red Hat Network Satellite code that its Linux rival created to provision, patch, and manage RHEL and make it play nice with SLES.
The code behind Red Hat Network Satellite, the version of Red Hat's management system that you run behind your own firewall (as distinct from plain old Red Hat Network, which Red Hat runs on its systems to manage yours from outside your firewall) was open sourced in June 2008. The RHN Satellite code was opened up as Project Spacewalk, which now exists upstream from the code base that eventually becomes RHN Satellite.
Doug Jarvis, product marketing manager for enterprise Linux at Novell, tells El Reg that Novell decided to create its own SUSE-compatible version of the Spacewalk code base about nine months ago. To make the Spacewalk code work with SUSE Linux required some changes, but nothing monumental, according to Jarvis. Essentially, you have to make RHN Satellite speak Zypp and AutoYast for SUSE Linux configuration and package management, both of which are used by SUSE Linux.
Red Hat's Enterprise Linux distro uses Yum for package management and Kickstart for network installs. Novell has added support for Zypp and AutoYast (their equivalents) to Spacewalk 1.2 and contributed this SLES-friendly code back to the project. So if it wants to, Red Hat can turn around and add support for SLES to RHN and RHN Satellite. Nothing has been removed from SUSE Manager that would break compatibility with RHEL, so you can switch from RHN Satellite to SUSE Manager.
SUSE Manager is similar, in concept, to the functionality in ZENworks Linux Management 7.3 and ZENworks Configuration Management 11, two existing products sold by Novell. But Jarvis says that these ZENworks products were designed at first to control Windows-based desktops and use terminology and logic familiar to Windows admins, not to Linux nerds.
But the problem is larger than that. Linux people want open source tools, Windows people could care less. Linux admins are still mad at Red Hat because its KVM hypervisor management tools run only on Windows, and ditto for VMware and its vCenter console. Linux and Windows platforms are like Hatfields and McCoys in the data center. They may share the same kind of iron most of the time, but even with virtualization on the rise and presumably making them share virtualized platforms under a single management framework, they tend to be siloed and they take shots at each other - mostly verbally - over the racks from opposite sides of the data center.
Novell tried to ignore this fact, as have Red Hat and VMware. The existing ZENworks tools were extended to support SUSE Linux Enterprise Server a few years back, but hard-core SLESheads didn't want to use a Winders tool to manage their Linux servers. They probably won't mind using a clone of RHN Satellite painted green with chameleons crawling all over it, though. And now, the RHEL people and the SLES people are allies inside of a company and in supporting the Spacewalk project, which Novell has been contributing to as it developed SUSE Manager. (If Novell wanted to be politically neutral, it could have called it Novell Linux Manager and not used the SUSE brand at all, of course.)
SUSE Manager is designed to run inside of a SLES 11 appliance atop a KVM or Xen hypervisor, but you can run it on bare metal iron running SLES 11 if you want to. The program can manage RHEL 4, 5, and 6 as well as SLES 11. The plan is to support the earlier SLES 10 release by the end of the year, but Novell does not have plans to go all the way back to SLES 9.
SUSE Manager is available today and is open source, so you don't have to pay to use it. But if you want the supported version, which gets patch feeds from Novell and Red Hat for their respective Linuxes, then you have to pay some cash.
The product has five different components. The first is the SUSE Manager Server, which runs on Linux; this costs $13,500 and it is the thing you put inside your firewall from which you control and patch your SLES and RHEL instances. You can manage virtualized versions of those Linuxes running locally on your own iron as well as out on public clouds; SUSE Manager can also be used, just like RHN Satellite, to patch and provision those Linuxes on bare physical iron in a non-virtualized manner. The second component is the SUSE Manager Proxy Server, which is used to do the patching and management work on your iron; this costs $3,500 per year, and larger installations with lots of servers might need a couple of these proxies to handle their machines.
Then, on top of this, you have to buy management, provisioning, and monitoring modules to provide the functionality that you want. Each of these modules has its own price on top of the cost of SUSE Manager Server and SUSE Manager Proxy Server, and the prices are the same for each module.
If you are running Linux on bare-metal servers using x64 processors, these modules cost $96 per server per year for support. If you are running in a virtualized environment, it costs $192 per physical server with an unlimited number of virtual machines. SUSE Manager can also be used to manage SLES and RHEL instances in IBM's logical partitions on its System z mainframes. In this case, support for the add-on modules costs $1,000 per year per mainframe engine.
While Novell is playing friendly with Spacewalk in helping the project work with SLES, the move is not just about co-existing with RHEL but also trying to replace it. The management tools that RHEL customers use are probably as important to system admins as the Linux distro itself, so trying to convince companies to drop RHEL because SLES is a lot cheaper is problematic. But if the management tools are essentially the same, now Novell at least stands a chance of winning some RHEL takeout deals. ®
NOVELL Downloads (Media)
3073_Sample Manage Virtualization with Xen
Just released, this prescriptive guide shows IT Pros how to use Microsoft Windows Server 2003 Active Directory for both authentication and identity storage within heterogeneous Microsoft Windows and UNIX environments.
REDBOOK Sg245863[PDF] SuSE Linux Integration Guide for IBM for xSeries and Netfinity
Download.openSUSE.org - downloads from this site using HTTPD are unreliable.
FileMirrors for DVD
LQ ISOs SUSE openSUSE 10.2
Getting Started (format, size, last update):
- openSUSE 10.2 Start-Up: html, 2 MB, 12/07/2006
- KDE Quick Start: html, 2 MB, 12/07/2006
- GNOME Quick Start: html, 2 MB, 12/07/2006

User Guides:
- KDE User Guide: html, .1 MB, 12/07/2006
- GNOME User Guide: html, 10 MB, 12/07/2006

Administration:
- openSUSE 10.2 Reference Guide: html, 7 MB, 12/07/2006
- AppArmor 2.0.1 Administration Guide: html, 2 MB, 12/07/2006
- AppArmor 2.0.1 Quick Start: html, 1 MB, 12/07/2006

Additional Information:
- Release Notes: html, 12/07/2006
Security and bug fixes http://download.suse.com/update/10.2/
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.
Copyright © 1996-2015 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.
Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: October 11, 2015