Command history

An important tip for users and system administrators concerns the use of the bash history.

Many crackers disable the bash history so that their attacks leave fewer traces on the system.

To disable the bash history, the following command is used:

# unset HISTFILE
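
For completeness, two related commands that are often seen alongside this one (my addition, not from the original post): clearing the history already held in memory, and pointing HISTFILE at /dev/null so nothing further is written out.

# history -c
# export HISTFILE=/dev/null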

See you next time.


Printer configuration on Debian

The traditional method is lpr/lpd. There is a newer system, CUPS (Common UNIX Printing System). PDQ is another option. See the Linux Printing HOWTO for more information.

lpr/lpd

For lpr/lpd-type spoolers (lpr, lprng, and gnulpr), configure /etc/printcap as follows, provided you are connected to a PostScript or text-only printer (the basic case):

lp|alias:\
:sd=/var/spool/lpd/lp:\
:mx#0:\
:sh:\
:lp=/dev/lp0:

Meaning of the lines above:

* Header line: lp is the spool name, alias = alias
* mx#0: the maximum file size is unlimited
* sh: suppresses printing of the banner (header) page
* lp=/dev/lp0: local printer device, or port@host if remote
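
The last item mentions the port@host form for a remote printer. Purely as an illustration (the host name printserver and remote queue name are hypothetical, and classic lpd setups often use rm=/rp= entries instead), such an entry could look like:

rlp|remote:\
:sd=/var/spool/lpd/rlp:\
:mx#0:\
:sh:\
:lp=lp@printserver: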

This is a good configuration if you are connected to a PostScript printer. It is also a good configuration for any printer supported by Windows when printing from a Windows machine through Samba (no bidirectional communication is supported). You have to select the corresponding printer configuration on the Windows machine.

If you do not have a PostScript printer, you need to set up a filtering system using gs. There are several auto-configuration tools for setting up /etc/printcap. Any of these combinations is an option:

* gnulpr (lpr-ppd) and printtool (this is what I use)
* lpr and apsfilter
* lpr and magicfilter
* lprng and lprngtool
* lprng and apsfilter
* lprng and magicfilter

To run a GUI configuration tool such as printtool, see Getting root in X, Section 9.4.12, for how to obtain root privileges. Printer spools created with printtool use gs and behave as PostScript printers, so when accessing them, use PostScript printer drivers. From the Windows side, "Apple LaserWriter" is the standard choice.

CUPS

Install the Common UNIX Printing System (CUPS):

# apt-get install cupsys foomatic-bin foomatic-db
# apt-get install cupsys-bsd cupsys-driver-gimpprint

Then configure the system using any web browser:

$ mybrowser http://localhost:631

For example, to add a printer on some port to the list of accessible printers:

* click "Printers" on the main page, then "Add Printer",

* type "root" as the user name, along with its password,

* add the printer by following the prompts,

* go back to the "Printers" page and click "Configure Printer", and

* set the paper size, resolution, and other parameters.
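
Once the queue is set up, a quick command-line test is possible with the BSD-style tools from the cupsys-bsd package installed above (the queue name myprinter is an assumption; use whatever name you gave the printer):

$ lpr -P myprinter /etc/hostname
$ lpq -P myprinter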

More information at http://localhost:631/documentation.html and http://www.cups.org/cups-help.html.

For a 2.4 kernel, see also Parallel port support, Section 7.2.6.

Other host installation tips
Installing a few more packages after the initial installation

Provided you have followed the steps above, you now have a small but functional Debian system. This is a good time to install some larger packages.

* Run tasksel. See Task installation with tasksel or aptitude, Section 6.2.1.

You can choose these, if needed:
o End-user - X Window System
o Development - C and C++
o Development - Python
o Development - Tcl/Tk
o Miscellaneous - TeX/LaTeX environment
o For the others, I prefer to use tasksel as a guide, looking at the components listed under each task and installing them selectively through dselect.

* Run dselect.

Here, the first thing you may want to do is select your favourite editor and any programs you need. You can install several variants of Emacs at the same time. See dselect, Section 6.2.3, and Popular editors, Section 11.1.

You can also replace some of the default packages with more feature-rich ones.
o lynx-ssl (instead of lynx)
o …

* …

Normally, I edit /etc/inittab so that I can shut the machine down easily.


# What to do when CTRL-ALT-DEL is pressed.
ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -h now


apt-get for Slackware

Slackware's apt-get

When choosing a Unix/Linux distribution to run on servers, several points must be weighed: stability, availability, the configuration tools available, update tools, how quickly the developers fix bugs, and so on.
A big draw for administrators these days are the online package installation/update tools, such as Debian's "apt-get" (also ported to Conectiva as of version 6.0), RedHat's "up2date", Mandrake's "urpmi", and BSD's "ports". They really are a great help when it comes to installing and updating packages.
However, some important distributions in the Linux world, such as Slackware, have no utility for online package installation and updating!! Hold on, who said that?? ; )
It is true that the official Slackware distribution does not ship any apt-get-style utility, but with the thousands of slackusers spread around the world it would be impossible for nobody to have written a script for the job.
Searching the internet I found several good scripts that can handle package installation/updating on Slackware; there is also a system similar to BSD Ports, which downloads sources instead of precompiled packages.

In this article we will talk about slackpkg, a program very much in the apt-get style that works with precompiled packages; if you are interested in deploying the Ports-like system, visit that project's official site: http://slackports.sourceforge.net (slackports will be covered in the next article).

slackpkg was one of the scripts I found most interesting, which is why I decided to write about it.
Download slackpkg from the project site: http://slackpkg.sourceforge.net

After downloading the program, install it with the command:
# installpkg slackpkg-0.92-i386-1.tgz

Next, you will need to configure the mirror list:
# mcedit /etc/slackpkg/mirrors

This is the file containing the mirror list; just uncomment the line for the mirror you want to use. Only one mirror may be used at a time, so uncommenting all of the lines will not work 🙂
Go to http://www.slackware.com/getslack to get the most recent list of Slackware mirrors, so you can keep your mirrors file up to date.
HTTP and FTP servers are both supported by the program.
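
Purely as an illustration (this exact mirror URL and release directory are assumptions, not taken from the article), an uncommented line in /etc/slackpkg/mirrors looks roughly like this:

ftp://ftp.slackware.com/pub/slackware/slackware-9.1/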

Now we must download the package list; to do that, run the command:
# slackpkg update

Now the program is ready to use!
Some of slackpkg's interesting features are: installing, reinstalling, removing, and upgrading packages, searching for packages on the FTP mirrors, and installing already patched (fixed) programs.

Now for the program's list of commands:

Update the package list:
# slackpkg update

Package search; this command does the hard work of searching MANIFEST.gz, and can be used with any file that is part of Slackware Linux:
# slackpkg search package_name

Install a package:
# slackpkg install package_name

Remove a package:
# slackpkg remove package_name

Upgrade an already installed package:
# slackpkg upgrade package_name

Reinstall a package:
# slackpkg reinstall package_name

Install security patches:
# slackpkg upgrade patches

If you want, it is possible to upgrade the entire distribution. Point your mirrors file at the current version (slackware-current) and run the commands:
# slackpkg update
# slackpkg upgrade slackware
# slackpkg install slackware

That is it for this article!! Any questions, suggestions, criticism or praise, just send an e-mail to drusian@tdkom.com.br, or post in the feedback section of the Forum.
I hope this has helped the SlackUsers : )
A big hug to the whole LinuxBSD community. Until next time!!!!


Some Gnome 2.6 shortcuts

Hi folks, I have been poking around in Gnome 2.6 and discovered some interesting shortcuts:

1) When you open a new folder in Nautilus and want the previous window to close automatically, just use:

Shift + double-click on the folder

2) When you are inside a folder and want to go up one level:

Ctrl + Shift + Up arrow.


Creating a system image

In this article, in English, we look at how to create an image of our system on the GNU/Linux platform:

Preparing to change inodes directly

My advice? Don’t do it this way. I really don’t think it’s wise to play with a file system at a low enough level for this to work. This method also has problems in that you can only reliably recover the first 12 blocks of each file. So if you have any long files to recover, you’ll normally have to use the other method anyway. (Although see section Will this get easier in future? for additional information.)

If you feel you must do it this way, my advice is to copy the raw partition data to an image on a different partition, and then mount this using loopback:

# cp /dev/hda5 /root/working
# mount -t ext2 -o loop /root/working /mnt

(Note that obsolete versions of mount may have problems with this. If your mount doesn’t work, I strongly suggest you get the latest version, or at least version 2.7, as some very old versions have severe security bugs.)

Using loopback means that if and when you completely destroy the file system, all you have to do is copy the raw partition back and start over.

Preparing to write data elsewhere

If you chose to go this route, you need to make sure you have a rescue partition somewhere — a place to write out new copies of the files you recover. Hopefully, your system has several partitions on it: perhaps a root, a /usr, and a /home. With all these to choose from, you should have no problem: just create a new directory on one of these.

If you have only a root partition, and store everything on that, things are slightly more awkward. Perhaps you have an MS-DOS or Windows partition you could use? Or you have the ramdisk driver in your kernel, maybe as a module? To use the ramdisk (assuming a kernel more recent than 1.3.48), say the following:

# dd if=/dev/zero of=/dev/ram0 bs=1k count=2048
# mke2fs -v -m 0 /dev/ram0 2048
# mount -t ext2 /dev/ram0 /mnt

This creates a 2MB ramdisk volume, and mounts it on /mnt.

A short word of warning: if you use kerneld (or its replacement kmod in 2.2.x and later 2.1.x kernels) to automatically load and unload kernel modules, then don’t unmount the ramdisk until you’ve copied any files from it onto non-volatile storage. Once you unmount it, kerneld assumes it can unload the module (after the usual waiting period), and once this happens, the memory gets re-used by other parts of the kernel, losing all the painstaking hours you just spent recovering your data.

If you have a Zip, Jaz, or LS-120 drive, or something similar, it would probably be a good choice for a rescue partition location. Otherwise, you’ll just have to stick with floppies.

The other thing you’re likely to need is a program which can read the necessary data from the middle of the partition device. At a pinch, dd will do the job, but to read from, say, 600 MB into an 800 MB partition, dd insists on reading but ignoring the first 600 MB. This takes a not inconsiderable amount of time, even on fast disks. My way round this was to write a program which will seek to the middle of the partition. It’s called fsgrab; you can find the source package on my website or on Metalab (and mirrors). If you want to use this method, the rest of this mini-Howto assumes that you have fsgrab.

If none of the files you are trying to recover were more than 12 blocks long (where a block is usually one kilobyte), then you won’t need fsgrab.

If you need to use fsgrab but don’t want to download and build it, it is fairly straightforward to translate an fsgrab command-line to one for dd. If we have

fsgrab -c count -s skip device

then the corresponding (but typically much slower) dd command is

dd bs=1k if=device count=count skip=skip
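
As a concrete illustration (reusing the block number 594810 that appears in the stat output later in this article; the output file name is arbitrary), the two equivalent invocations would be:

# fsgrab -c 1 -s 594810 /dev/hda5 > block-594810
# dd bs=1k if=/dev/hda5 count=1 skip=594810 > block-594810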

I must warn you that, although fsgrab functioned perfectly for me, I can take no responsibility for how it performs. It was really a very quick and dirty kludge just to get things to work. For more details on the lack of warranty, see the `No Warranty’ section in the COPYING file included with it (the GNU General Public Licence).

Finding the deleted inodes

The next step is to ask the file system which inodes have recently been freed. This is a task you can accomplish with debugfs. Start debugfs with the name of the device on which the file system is stored:

# debugfs /dev/hda5

If you want to modify the inodes directly, add a -w option to enable writing to the file system:

# debugfs -w /dev/hda5

The debugfs command to find the deleted inodes is lsdel. So, type the command at the prompt:

debugfs: lsdel

After much wailing and grinding of disk mechanisms, a long list is piped into your favourite pager (the value of $PAGER). Now you’ll want to save a copy of this somewhere else. If you have less, you can type -o followed by the name of an output file. Otherwise, you’ll have to arrange to send the output elsewhere. Try this:

debugfs: quit
# echo lsdel | debugfs /dev/hda5 > lsdel.out

Now, based only on the deletion time, the size, the type, and the numerical permissions and owner, you must work out which of these deleted inodes are the ones you want. With luck, you’ll be able to spot them because they’re the big bunch you deleted about five minutes ago. Otherwise, trawl through that list carefully.

I suggest that if possible, you print out the list of the inodes you want to recover. It will make life a lot easier.

Obtaining the details of the inodes

debugfs has a stat command which prints details about an inode. Issue the command for each inode in your recovery list. For example, if you’re interested in inode number 148003, try this:

debugfs: stat <148003>
Inode: 148003 Type: regular Mode: 0644 Flags: 0x0 Version: 1
User: 503 Group: 100 Size: 6065
File ACL: 0 Directory ACL: 0
Links: 0 Blockcount: 12
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x31a9a574 -- Mon May 27 13:52:04 1996
atime: 0x31a21dd1 -- Tue May 21 20:47:29 1996
mtime: 0x313bf4d7 -- Tue Mar 5 08:01:27 1996
dtime: 0x31a9a574 -- Mon May 27 13:52:04 1996
BLOCKS:
594810 594811 594814 594815 594816 594817
TOTAL: 6

If you have a lot of files to recover, you’ll want to automate this. Assuming that your lsdel list of inodes to recover in is in lsdel.out, try this:

# cut -c1-6 lsdel.out | grep "[0-9]" | tr -d " " > inodes

This new file inodes contains just the numbers of the inodes to recover, one per line. We save it because it will very likely come in handy later on. Then you just say:

# sed 's/^.*$/stat <\0>/' inodes | debugfs /dev/hda5 > stats

and stats contains the output of all the stat commands.


Port forwarding with DNAT

One of iptables' features is the DNAT target in the nat table, which can be used to create port redirections.

In the example below we redirect port 5900, used by VNC, to a machine inside our LAN.

# Prerouting

/sbin/iptables -t nat -A PREROUTING -p tcp -d 200.2.2.0 --dport 5900 -j DNAT --to 192.168.0.1

# Postrouting

/sbin/iptables -t nat -A POSTROUTING -p tcp -s 192.168.0.1 --sport 5900 -j SNAT --to 200.2.2.0

To redirect other ports, follow the same example.
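
Two reminders the original post leaves implicit (both are my additions, not from the article): the gateway must have IP forwarding enabled for the redirected packets to pass at all, and the same pattern extends to any other service. The port 3389 line below is purely illustrative.

echo 1 > /proc/sys/net/ipv4/ip_forward

/sbin/iptables -t nat -A PREROUTING -p tcp -d 200.2.2.0 --dport 3389 -j DNAT --to 192.168.0.1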


Upgrading the GlibC library

3.2. Special things you need to do

Since you are going to substitute the basic library many programs rely on, you can imagine the problems that may occur.

For me, it so happened that everything went fine until I typed in make install. At about halfway through the installation process I got an error telling me that rm was not able to run, and I found out that even all the common commands like cp, ls, mv, ln, tar, etc., did not work; all told me that they were not able to find parts of the library they needed.

But there is help available. You can force the compilation of programs with the libraries compiled into them, so the programs do not need to look them up from the library.

For that reason, in this chapter, we will compile all the utilities we need for the install into a static version.
3.2.1. Things you will definitely need
3.2.1.1. The GNU-Binutils

1. Get the newest version from ftp.gnu.org/gnu/binutils; at the time of writing, this was version 2.14.

2. Open the package:

tar xIvf binutils-2.14.tar.bz2

3. Change to the directory:

cd binutils-2.14

4. Configure the Makefiles:

./configure

5. Compile the sources:

make

6. Install them with:

make install

If you run into trouble with the compilation of the binutils, referring to problems with gettext (indicated by errors like: “undeclared reference to lib_intl” or similar) please install the newest version, available from ftp.gnu.org/gnu/gettext.

If this does not help, try disabling the native-language support by using:

./configure --disable-nls

You don’t need to build a static version of the binutils, though it would not hurt, but I encountered many systems running with very old versions and ran into errors almost every time, so I think it is a good idea to mention them here.
3.2.1.2. GNU make

The make command is responsible for the compiling of the sources, calling gcc and all the other programs needed for a compile. Since you may need to compile something if a problem occurs with the new glibc, it is a good idea to have it static, otherwise it might not work after an error appears.

1. Download the source from ftp.gnu.org/gnu/make/; at the time of writing the current version was 3.80.

2. Unpack the source, e.g.:

tar xIvf make-3.80.tar.bz2

3. Change to the created directory:

cd make-3.80

4. Take care that the binaries are built static:

export CFLAGS="-static -O2 -g"

5. Run the configure script:

./configure

6. Compile the stuff:

make

7. Install the binaries:

make install

8. Make a check:

make -v

You should now see the new version installed. If not, check for older binary files and replace them with symlinks to the new version.

Congratulations! You have compiled another static-linked program.
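
As a quick sanity check of my own (not part of the original steps), you can confirm that a freshly installed binary really is static by asking ldd or file about it, assuming the default /usr/local prefix:

ldd /usr/local/bin/make
file /usr/local/bin/make

ldd should answer "not a dynamic executable" and file should mention "statically linked".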
3.2.1.3. the GNU core-utils

The core-utils are commands like: cp, rm, ln, mv, etc. In case of an error in the installation, these are an absolute requirement to help bring your system up again, so static binaries are really necessary here.

1. Again, download the source tarball from ftp.gnu.org/gnu/coreutils/; at the time of writing, version 5.0 was current.

2. Unpack it:

tar xIvf coreutils-5.0.tar.bz2

3. Change to the directory:

cd coreutils-5.0

4. Take care that the binaries are built static:

export CFLAGS="-static -O2 -g"

5. Configure the package:

./configure

6. Compile the binaries:

make

7. And install them:

make install

8. Verify that the right core-utils are used:

cp --version

You should see the correct version; otherwise remove any old binaries and replace them with symlinks to the new version.

Now that the binaries of these very elementary tools are static, you can be sure they will work every time you need them.
3.2.1.4. GNU tar

You have already used GNU tar to unpack all the programs compiled and installed so far. But maybe you need to compile another program which is needed by glibc after you had a crash, and in this situation (I experienced this myself!) it is very useful to have a working tar ready to unpack the missing programs. With tar, we also need to take care of the bz2 compression algorithm, which is not included in the normal source distribution of tar.

1. Get the source of GNU tar from ftp.gnu.org/gnu/tar; at the time of writing, version 1.13 was up-to-date.

2. As many source tarballs are compressed with bzip2, we would like to have the support built in, rather than working with pipes, so get the patch from: ftp://infogroep.be/pub/linux/lfs/lfs-packages/4.1/tar-1.13.patch.

3. Unpack the source by invoking:

tar xzvf tar-1.13.tar.gz

4. Copy the patch to the source directory of tar:

cp tar-1.13.patch tar-1.13/

5. Apply the patch (run this from inside the tar-1.13 directory):

patch -Np1 -i tar-1.13.patch

6. Set the compiler flags to make a static binary:

export CFLAGS="-static -O2 -g"

7. Now we are ready to configure:

./configure

8. Compile with:

make

9. And as the next step, install the package:

make install

10. Do a quick check to ensure the new version is being used from now on:

tar --version

The version you just installed should be displayed; otherwise check for old binaries and replace them with symlinks to the new location.

If you experience problems with the execution of make, try to turn off native-language support (nls). You may achieve this by invoking configure with the option:

--disable-nls

Note: In this new version of tar, you must use the -j switch to decompress .bzip2 files, so instead of

tar xIvf anyfile.tar.bz2

you now have to use

tar xjvf anyfile.tar.bz2

I do not know why this was changed, but it works fine.
3.2.1.5. The Bash shell

I prefer Bash as my shell; if you use a different one, please be sure you have installed a static version of it before you install glibc.

1. Get Bash from ftp.gnu.org/gnu/bash/. Download the newest version you can find; at the time of writing this was version 2.05b.

2. Unpack the source tree:

tar xzvf bash-2.05b.tar.gz

which will create a directory called bash-2.05b with all the unpacked sources in it.

3. Go to the directory:

cd bash-2.05b

4. Set everything up for building a static version:

export CFLAGS="-static -O2 -g"

5. Configure the makefiles:

./configure

If you would like something special in your Bash, see

./configure --help

for a list of options.

6. Compile everything:

make

7. Install the compiled binaries:

make install

This will install the binaries to /usr/local/bin/.

8. Make sure there is not another version lying around (like /bin/ on my SuSE Linux), by copying the file:

cp /usr/local/bin/bash /bin/

We don’t use a symlink here because both at boot-time and when starting Bash there might be trouble with symlinks.

You now have installed a static version of Bash. For that reason, the binary is much bigger than usual, but it will run under all circumstances.

If you prefer to use another shell, you are free to do so, but make sure it is a statically-linked version. Feel free to email me a method to build the shell of your choice in a static version, and chances are good that it will be implemented in the next revision of this document.
3.2.2. Software that may come in handy
3.2.2.1. Midnight Commander

Midnight Commander is a very useful file manager, supporting many nice features like transparent decompression of packed files, built-in copy, move and other common commands, as well as an integrated editor.

To compile this piece of software, you will need to have glib installed; in some distributions this is already the case. If you get an error in the make command saying that ld could not find glib, you will need to install this library first. You can get the sources from: ftp.gnome.org/pub/gnome/sources/glib/2.2/, and the installation is straight-forward.
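
If you do have to build glib yourself, the steps mirror the other packages in this chapter; a sketch, assuming a 2.2.x tarball such as glib-2.2.3 (adjust the file name to whatever you downloaded):

tar xzvf glib-2.2.3.tar.gz
cd glib-2.2.3
./configure
make
make install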

Here are the steps to build Midnight Commander:

1. Get the source from http://www.ibiblio.org/pub/Linux/utils/file/managers/mc/; at the time of writing, the newest version was 4.6.0.

2. Unpack the sources:

tar xzvf mc-4.6.0.tar.gz

3. Change to the directory you just created:

cd mc-4.6.0

4. Set up the configuration files:

./configure

5. Start compiling:

make

6. Install everything:

make install


Using and configuring phpMyAdmin

In this tutorial we will show how to configure the phpMyAdmin application.
phpMyAdmin is a tool for managing the databases on MySQL servers.

Its source code can be obtained from:
http://prdownloads.sourceforge.net/phpmyadmin/phpMyAdmin-2.5.7-pl1.tar.gz?use_mirror=belnet

The project site can be found at:
http://phpmyadmin.sourceforge.net

1) Once the file has been downloaded, unpack it using the tool appropriate for the type of archive you obtained.

In our example we assume we are using the Debian GNU/Linux Sarge operating system, Apache 1.3, PHP 4.3, MySQL 4.0.21,

and phpMyAdmin version 2.5.7-pl1.

The public directory of our Apache server is /var/www/.

2) Having mapped out the tools being used, let's get to work:
Move or copy the phpMyAdmin-2.5.7-pl1 folder to /var/www, e.g.:
# mv phpMyAdmin-2.5.7-pl1 /var/www/phpmyadmin/

Set the permissions of the phpMyAdmin directory to 775, e.g.:
# chmod -R 775 /var/www/phpmyadmin/

3) With phpMyAdmin in its proper place, on to the configuration:

If you are using phpMyAdmin version 2.5.7-pl1, open config.inc.php in your text editor and check line 82:

// ---
$cfg['Servers'][$i]['auth_type'] = 'config'; // Authentication method (config, http or cookie based)?
$cfg['Servers'][$i]['user'] = 'root'; // MySQL user
$cfg['Servers'][$i]['password'] = 'yourpassword'; // MySQL password (only needed with 'config' auth_type)
$cfg['Servers'][$i]['only_db'] = ''; // If set to a db-name, only
// ---

In the first line, auth_type is the authentication method that will be used to access phpMyAdmin; in the second line, the database user; in the third line, the password for the user specified above; and in the fourth line, if you want to grant access restricted to a single database, just put its name in only_db. Of course, to strengthen the security of phpMyAdmin, and of your MySQL server as well, you should not expose access to phpMyAdmin's configuration directory.
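
A side note of my own, not part of the original tutorial: phpMyAdmin also accepts 'cookie' as the auth_type, which prompts for credentials in the browser and avoids leaving the MySQL root password in the configuration file. A minimal sketch (in 2.5.x the blowfish_secret entry is required for cookie authentication; the passphrase below is a placeholder):

$cfg['blowfish_secret'] = 'pick-a-random-passphrase'; // required for cookie auth
$cfg['Servers'][$i]['auth_type'] = 'cookie'; // ask for user/password in the browser
$cfg['Servers'][$i]['user'] = ''; // no fixed account stored on disk
$cfg['Servers'][$i]['password'] = ''; // no password stored on disk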

If you are an ISP, I advise using FTP in a chrooted environment and also keeping phpMyAdmin outside any folder that can be reached via FTP.

4) If you start phpMyAdmin and it shows an error at the bottom of the right-hand frame, edit config.inc.php again and look for line 47, e.g.:

$cfg['PmaAbsoluteUri_DisableWarning'] = TRUE;

Line 47 will probably be set to FALSE; change it to TRUE, which disables the warning about whether PmaAbsoluteUri is set.

Then save the file and test your configuration by pointing your browser at Apache's public folder.

For any questions or further information, get in touch.

Thank you.


Configuring XFCE 4.2 on Debian GNU/Linux

The article below, from the os-cillation.com site, describes how to install and configure Xfce 4.2 on Debian GNU/Linux (documentation in English).

We have created Debian packages with recent snapshots of the upcoming Xfce 4.2 desktop environment for i386 machines. We've built the packages on a Debian testing (sarge) machine, so you need at least Debian testing to install them. We haven't tested these packages on Debian unstable (sid); they are likely to work with unstable as well, but your mileage may vary. In any case, we don't provide support for installing the packages on a Debian unstable machine.

Please use the Forum for questions, problems, wishes and further discussion.
In addition we would like to ask you to give us some short feedback so we have a little bit of data for our statistics. None of the data will be passed to third parties; we'll keep it safe, and of course it is voluntary…

If you have currently installed Xfce 4.0.x on your Debian machine, it is highly suggested that you uninstall it before you continue with the instructions below. To uninstall perform the command

# apt-get remove --purge libxfce4util-1

where the --purge parameter is optional, but suggested. Please see the Debian manuals for more information on APT. In addition, it is recommended to also remove or rename any existing ~/.xfce4 directory, but at least to remove any customized menu.xml file, since the file format has changed in Xfce 4.2. For example

$ mv ~/.xfce4 ~/dot.xfce4-4.0

will back up your old .xfce4 directory. The startup process of Xfce has also changed. If you used xfce4_setup to set up Xfce 4.0 as your desktop, you'll probably have to remove the files .xsession and .xinitrc in your home directory.

Now, to start the installation of the Debian packages, add the following lines to the file /etc/apt/sources.list on your system:

deb http://www.os-cillation.de/debian binary/
deb-src http://www.os-cillation.de/debian source/

Afterwards you’ll have to update your package cache using the command

# apt-get update

This may take some time depending on your bandwidth.

If you want to install only the Xfce 4.2 core desktop components, use the command

# apt-get install -t binary xfce4

if you removed Xfce 4.0 as mentioned above or if Xfce wasn't installed previously. Otherwise, the above command might fail, but you might succeed using the command

# apt-get upgrade

in that case, though this way of installing is neither suggested nor supported.

If you plan to install the complete base desktop environment that we use for the Xfld distribution, use the command

# apt-get install -t binary xfld-desktop

instead. This will install several panel plugins, a recent snapshot of the ROX file manager and the network card configuration tool we use with Xfld. If you've used one of the available ROX Debian packages, you may need to uninstall it first, as with the Xfce 4.0 packages; see the notes above.

If the installation succeeds, you can start using Xfce now. If you are using a display manager like GDM or KDM, Xfce will automagically appear in the menu. Otherwise, if you are using XDM, you'll have to create a file .xsession in your home directory with the following content

#!/bin/sh
exec /usr/bin/startxfce4

and mark the file executable (chmod +x ~/.xsession). If you use startx to log in to your X desktop, create a file .xinitrc in your home directory with the content

#!/bin/sh
exec /usr/bin/startxfce4

and mark the file executable (chmod +x ~/.xinitrc).

Now you are finally done and can start to enjoy your trip with Xfce 4.2. If you have any problems, have a look at the Forum.


VPN tips with FreeS/WAN

In the kernel's Networking Options menu, enable:
IP: security protocol (FreeS/WAN IPsec)
IPsec: IP-in-IP encapsulation (tunnel mode)
IPsec: authentication header
HMAC-MD5 authentication algorithm
HMAC-SHA1 authentication algorithm
IPsec: encapsulating security payload
3DES encryption algorithm
IPsec: IP compression
IPsec: debugging option

You will need the RSA keys of both machines, (left) and (right):
[root@left:/]# ipsec showhostkey --left
[root@right:/]# ipsec showhostkey --right
ipsec.conf will look roughly like this:
conn vpn
leftrsasigkey=
rightrsasigkey=
left=200.207.x.x
leftsubnet=192.168.0.0/24
leftnexthop=
right=200.207.Z.Z
rightsubnet=10.10.192.0/24
rightnexthop=
spi=0x300
esp=3des-md5-96
espenckey=0x0a5b47ab_fec52b0c_6200e505_28ebcbee_d79c3726_7d02a827
espauthkey=0x7767e921_3debaeef_66bc49ee_0ca71cb7
type=tunnel
auto=add

Then save it and run on both machines:
/etc/init.d/ipsec stop
/etc/init.d/ipsec start

Set the following files to 0:
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/ipsec0/rp_filter

Check that iptables is not filtering port 500 TCP and UDP. If it is, open it up.
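
A hedged sketch of what opening that up might look like on both gateways (these exact rules are my illustration, not from the original tip; besides UDP 500 for IKE, the ESP protocol itself usually has to be allowed as well):

iptables -A INPUT -p udp --dport 500 -j ACCEPT   # IKE negotiation
iptables -A INPUT -p tcp --dport 500 -j ACCEPT   # TCP 500, as mentioned in the tip
iptables -A INPUT -p 50 -j ACCEPT                # ESP traffic (IP protocol 50)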

Choose one of the machines to bring up the connection:
ipsec auto --up vpn

That should work. If anything goes wrong, see http://www.dextra.com.br/opensource/howto.htm


Configuring Maildrop with Postfix on the Debian GNU/Linux distro

Maildrop is an important tool for filtering incoming e-mail.
It acts as the filtering/delivery agent on the system for all received messages.

Here we will walk through configuring it alongside the Postfix SMTP server.

In main.cf of the Postfix server we will change the following lines:

NOTE: If the parameters do not exist, create them.

mailbox_command
---------------
The value should be:

mailbox_command = /usr/bin/maildrop -d ${USER}

home_mailbox
------------
The value should be:

home_mailbox = Maildir/

Well, that should take care of it.
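
One step the original note leaves implicit: after editing main.cf, Postfix must re-read its configuration for the change to take effect. On Debian, something along these lines should do it:

# /etc/init.d/postfix reload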

Maildrop

To install Maildrop, run the following command:

apt-get install maildrop

With Maildrop, the only thing I had to do was
uncomment a line in the /etc/maildroprc file, so that
it reads the Maildirs inside the users' HOME directories.

The line is:

#DEFAULT="$HOME/Maildir"

And it should end up like this:

DEFAULT="$HOME/Maildir"

*Remember that to create the users' mailboxes we must run the command
maildirmake Maildir inside each user's home directory, as sketched below.
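
A hedged example for a single account (the user name joao is hypothetical; running the command as that user keeps the ownership of the Maildir correct):

# su - joao -c "maildirmake Maildir"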


AT&T plans to replace Windows with GNU/Linux

AT&T, the largest long-distance carrier in the United States, is testing GNU/Linux with the intention of replacing the Windows operating system, which today runs on 70,000 of its employees' PCs. It is estimated that this would save around 50% of its costs. AT&T's potential decision to abandon Windows would be Microsoft's biggest loss to GNU/Linux since the latter appeared.

The news from AT&T indicates that GNU/Linux is doing well in the server software business, where it is growing faster than Microsoft, and is thus seriously threatening to break the Redmond company's PC monopoly.
"Like other people responsible for corporate IT in this country, I am concerned about viability, security, productivity and cost reduction," stated Hossein Eslambolchi, AT&T's head of IT. Eslambolchi explains that his company could save between 50% and 60% in costs. The executive also complains that AT&T has suffered more virus attacks on its personal computers in the last six months than in the previous 10 years.

The multinational will make its decision at the end of 2005.

Source: vivalinux.com.ar
