Bug 100666 - possible memory leak in slocate
Summary: possible memory leak in slocate
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 2
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Arjan van de Ven
QA Contact:
URL: http://203.38.2.136/mem.html
Whiteboard:
Duplicates: 119758 (view as bug list)
Depends On:
Blocks:
 
Reported: 2003-07-24 05:01 UTC by Paul Schubert
Modified: 2007-11-30 22:10 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2004-06-19 12:27:06 UTC
Type: ---
Embargoed:


Attachments
top, /proc/meminfo, ps -Af in runlevel 3 (deleted), 2004-05-27 09:21 UTC, Andy Green
slabtop (deleted), 2004-06-03 21:00 UTC, Andy Green

Description Paul Schubert 2003-07-24 05:01:49 UTC
Description of problem:
slocate runs daily in /etc/cron.daily/slocate.cron and causes free memory 
(RAM) to drop from 400 MB to 200 MB on a cleanly booted system.

Version-Release number of selected component (if applicable):
slocate-2.6-8

How reproducible:
every time

Steps to Reproduce:
1. Run free -o to show free memory
2. Run /etc/cron.daily/slocate.cron
3. Run free -o again to show free memory
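For convenience, a minimal sketch (assuming the stock script path above; this helper is not part of the package) that captures the before/after numbers and the delta in one run:

#!/bin/sh
# Snapshot the "free" column, run the slocate cron job, then print
# how far free memory dropped, in kB.
before=$(free -o | awk '/^Mem:/ {print $4}')
/etc/cron.daily/slocate.cron
after=$(free -o | awk '/^Mem:/ {print $4}')
echo "free memory dropped by $((before - after)) kB"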
    
Actual results:
[root@bluebox cron.daily]# ls
00-logwatch  0anacron   makewhatis.cron  slocate.cron  tmpwatch
00webalizer  logrotate  rpm              tetex.cron
[root@bluebox cron.daily]# free -o
             total       used       free     shared    buffers     cached
Mem:        505712      73992     431720          0       6880      31988
Swap:      1044216          0    1044216
[root@bluebox cron.daily]# ./slocate.cron
[root@bluebox cron.daily]# free -o
             total       used       free     shared    buffers     cached
Mem:        505712     256640     249072          0      61752      35428
Swap:      1044216          0    1044216

Expected results:
I would expect free memory to return to roughly 400 MB after the script has executed.

Additional info:
Intel Celeron 1.8 GHz
512 MB RAM
Linux version 2.4.20-19.9 (bhcompile.redhat.com) (gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)) #1 Tue Jul 15 17:18:13 EDT 2003

Comment 1 Bill Nottingham 2003-07-24 14:22:10 UTC
It looks like most of this (60MB) is just stuff in the buffer cache, which is to
be expected. However, if it persists after slocate exits, that indicates more of
a kernel issue. What sort of filesystems are you using?
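(For reference: without -o, free also prints a "-/+ buffers/cache" row, which shows usage with reclaimable cache excluded; that is the number that matters when judging whether memory is really gone.)

free        # the "-/+ buffers/cache" row excludes cache from "used"
# Or read the same figures straight from the kernel:
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo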

Comment 2 Paul Schubert 2003-07-24 23:03:52 UTC
[paul@bluebox proc]$ cat filesystems
nodev   rootfs
nodev   bdev
nodev   proc
nodev   sockfs
nodev   tmpfs
nodev   shm
nodev   pipefs
        ext2
nodev   ramfs
        iso9660
nodev   devpts
        ext3
nodev   autofs
[paul@bluebox proc]$ mount
/dev/hda2 on / type ext3 (rw)
none on /proc type proc (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /dev/shm type tmpfs (rw)
[paul@bluebox proc]$

Comment 3 Paul Schubert 2003-07-25 06:39:00 UTC
Some stuff that may be of use:
[paul@bluebox vm]$ cat bdflush
50      500     0       0       500     3000    80      50      0

[paul@bluebox vm]$ ps -ef | grep bdflush
root         9     1  0 Jul24 ?        00:00:00 [bdflush]
paul      5070  5010  0 16:38 pts/0    00:00:00 grep bdflush

(bdflush daemon is running)

Comment 4 jason tigg 2003-11-05 08:57:28 UTC
I have exactly the same problem. When I cat /proc/filesystems I get:

nodev   rootfs
nodev   bdev
nodev   proc
nodev   sockfs
nodev   tmpfs
nodev   shm
nodev   pipefs
        ext2
nodev   ramfs
        iso9660
nodev   devpts
        ext3
nodev   usbdevfs
nodev   usbfs
nodev   autofs

I am running cacti on this box and noticed the memory drop at 4 in 
the morning when cron.daily ran. I have had to turn off 
slocate.cron. 
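(One way to turn the job off without deleting the script: run-parts only executes files with the execute bit set, so clearing it disables the job.)

# Disable the daily slocate run without removing the script:
chmod -x /etc/cron.daily/slocate.cron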

Comment 5 Andy Green 2004-04-05 10:00:54 UTC
"Me too", this is running the current FC2 Test 2 + devel stuff and
slocate slocate-2.7-8.  I lose several hundred MB after running
slocate from cron, and the loss of memory persists.  It is NOT cache
memory either, it is reported as real "application" memory by ksysguard.


Comment 6 Andy Green 2004-05-27 08:28:54 UTC
slocate.cron was added back into /etc/cron.daily on my machine by an 
yum update.  Here is the situation after a couple of days.  Note 
this is NOT buffer or filecache usage, nothing that top can see is 
using the bulk of this memory!!!  Seems to be a 620MB commit with 
just KDE, konq, firefox and kate up. 
 
top - 09:24:28 up 22:11,  8 users,  load average: 0.13, 0.25, 0.12 
Tasks: 100 total,   1 running,  99 sleeping,   0 stopped,   0 zombie 
Cpu(s):  6.0% us,  2.0% sy,  0.0% ni, 91.4% id,  0.0% wa,  0.7% hi,  0.0% si
Mem:   1037128k total,  1018196k used,    18932k free,   155356k buffers
Swap:  1044216k total,        0k used,  1044216k free,   239012k cached
 
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND 
 2192 agreen    16   0 72568  56m  46m S  0.0  5.5   0:52.97 kmail 
31720 agreen    15   0  113m  48m  35m S  2.3  4.8  10:33.39 firefox-bin
 3546 root      15   0  122m  38m  90m S  2.3  3.8  26:00.26 X 
 5032 agreen    15   0 47704  32m  34m S  0.0  3.2   0:48.07 kdeinit 
24884 agreen    16   0 37888  22m  31m S  0.0  2.3   0:00.83 kdeinit 
 3747 agreen    15   0 32520  19m  26m S  0.3  2.0   0:58.78 kdeinit 
 3777 agreen    15   0 32316  17m  26m S  1.0  1.8   0:15.93 kdeinit 
 3709 agreen    15   0 32180  17m  27m S  0.0  1.7   0:18.14 kdeinit 
 3744 agreen    16   0 29044  16m  25m S  0.0  1.6   0:03.82 kdeinit 
 3740 agreen    16   0 29380  15m  24m S  0.0  1.5   0:20.56 kdeinit 
 3776 agreen    16   0 28484  15m  24m S  0.0  1.5   0:01.15 kget 
 3735 agreen    16   0 31600  14m  28m S  0.0  1.5   0:00.64 kdeinit 
 4030 agreen    16   0 28284  14m  24m S  0.0  1.4   0:01.93 kdeinit 
 3772 agreen    16   0 26092  14m  22m S  0.0  1.4   0:00.51 kwalletmanager
 3763 agreen    16   0 26308  12m  23m S  0.0  1.3   0:00.29 kdeinit 
 3762 agreen    16   0 26268  12m  22m S  0.0  1.2   0:02.26 kdeinit 
 4057 agreen    16   0 25796  12m  23m S  0.0  1.2   0:00.32 kalarmd 
 3739 agreen    16   0 26144  12m  22m S  0.0  1.2   0:00.37 kdeinit 
 3731 agreen    16   0 25884  11m  22m S  0.0  1.2   0:00.38 kdeinit 
 2210 agreen    15   0 26268  11m  23m S  0.0  1.1   0:12.16 kdeinit 
 3705 agreen    16   0 26836  10m  24m S  0.0  1.1   0:00.19 kdeinit 
11563 agreen    16   0 24312 9.9m  22m S  0.0  1.0   0:00.00 kdeinit 
 3703 agreen    15   0 24420 9752  22m S  0.0  0.9   0:02.23 kdeinit 
 3699 agreen    17   0 22524 9452  20m S  0.0  0.9   0:00.16 kdeinit 
 3827 agreen    16   0 19984 7364  16m S  0.0  0.7   0:01.03 eggcups 
 3023 ntp       16   0  5248 5248 3620 S  0.0  0.5   0:00.12 ntpd 
 3248 xfs       16   0  5792 4004 2388 S  0.0  0.4   0:14.48 xfs 
15617 root      16   0 11852 3628 8848 S  0.0  0.3   0:00.00 smbd 
 3260 root      17   0 11324 3140 8812 S  0.0  0.3   0:00.00 smbd 
 2758 root      16   0  9528 2792 5960 S  0.0  0.3   0:00.97 cupsd 
 4049 agreen    16   0  5936 2112 4944 S  0.0  0.2   0:00.06 gconfd-2
 3265 root      16   0  8424 2044 6952 S  0.0  0.2   0:00.34 nmbd 
26039 root      16   0  8796 1932 6672 S  0.0  0.2   0:00.00 mount.smb
 6293 root      16   0  8760 1932 6672 S  0.0  0.2   0:00.00 mount.smb
 3551 root      16   0  4724 1772 3348 S  0.0  0.2   0:00.12 kdm 
 4044 agreen    15   0  4060 1600 3328 S  0.0  0.2   0:00.14 ssh 
 3183 root      16   0  6364 1572 4732 S  0.0  0.2   0:00.05 master 
 3855 agreen    15   0  5396 1560 3952 S  0.0  0.2   0:00.05 bash 
 3874 agreen    15   0  5992 1552 3952 S  0.0  0.1   0:00.06 bash 
 3834 agreen    16   0  5952 1544 3952 S  0.0  0.1   0:00.04 bash 
 3845 agreen    16   0  5236 1544 3952 S  0.0  0.1   0:00.01 bash 
 3861 agreen    16   0  4948 1536 3952 S  0.0  0.1   0:00.04 bash 
 3886 agreen    16   0  5560 1536 3952 S  0.0  0.1   0:00.04 bash 
 3859 agreen    16   0  5196 1532 3952 S  0.0  0.1   0:00.04 bash 
 2979 root      16   0  5276 1524 3460 S  0.0  0.1   0:00.13 sshd 
 3198 postfix   16   0  6148 1524 4804 S  0.0  0.1   0:00.03 nqmgr 
16696 postfix   17   0  6128 1424 4772 S  0.0  0.1   0:00.00 pickup 
 

Comment 7 Andy Green 2004-05-27 09:21:38 UTC
Created attachment 100621 [details]
top, /proc/meminfo, ps-Af in runlevel 3

This is a short log of top in runlevel 5, and then top, /proc/meminfo and
ps -Af in runlevel 3, i.e., with no X or X apps.  You can see there is
still a huge oversized commit in there, not showing up in any process.  I
only see this behaviour after allowing cron to run slocate.  The behaviour
persists and accumulates nightly until the machine is unusable.
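A rough cross-check that the missing memory is not attributable to any process (a sketch assuming procps; shared pages are counted once per process, so the sum overestimates, which only strengthens the point):

# Sum the resident set size of every process and compare with MemTotal.
ps -A -o rss= | awk '{ sum += $1 } END { print sum " kB total RSS" }'
grep MemTotal /proc/meminfo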

Comment 8 Need Real Name 2004-06-03 18:37:11 UTC
Here's the output of the steps executed on a P3 600 MHz machine with 384 MB
RAM (Linux backup.semandex.net 2.4.22-1.2115.nptl #1 Wed Oct 29
15:42:51 EST 2003 i686 i686 i386 GNU/Linux):

[root@backup root]# free -o
             total       used       free     shared    buffers     cached
Mem:        383764      87416     296348          0       8644      44200
Swap:      1052152          0    1052152
[root@backup root]# /etc/cron.monthly/slocate.cron
[root@backup root]# free -o
             total       used       free     shared    buffers     cached
Mem:        383764     231152     152612          0      56184      45580
Swap:      1052152          0    1052152


Comment 9 Andy Green 2004-06-03 18:55:55 UTC
Here is the same deal from me: 
 
[root@fastcat root]# uname -r 
2.6.6-1.406 
[root@fastcat root]# free -o
             total       used       free     shared    buffers     cached
Mem:       1037152     454184     582968          0      27960     234188
Swap:      1044216          0    1044216
[root@fastcat root]# /etc/cron.daily/slocate.cron
[root@fastcat root]# free -o
             total       used       free     shared    buffers     cached
Mem:       1037152    1030912       6240          0     226328     131456
Swap:      1044216          0    1044216
 
 
[root@fastcat root]# mount 
/dev/hdc2 on / type ext3 (rw,noatime) 
none on /proc type proc (rw) 
none on /dev/pts type devpts (rw,gid=5,mode=620) 
usbdevfs on /proc/bus/usb type usbdevfs (rw) 
/dev/hdc1 on /boot type ext3 (rw) 
none on /dev/shm type tmpfs (rw) 
none on /var/lib/jack/tmp type tmpfs (rw) 
automount(pid2912) on /misc type autofs (rw,fd=4,pgrp=2912,minproto=2,maxproto=4)
/dev/sda2 on /mnt/hard type ext3 (rw) 
 

Comment 10 Dave Jones 2004-06-03 20:38:50 UTC
What does slabtop say?

Comment 11 Andy Green 2004-06-03 21:00:53 UTC
Created attachment 100851 [details]
slabtop

Not sure what a slab is but it looks interesting!
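(A slab cache is a pool of preallocated kernel objects of one type; slabtop is just a formatted, sorted view of /proc/slabinfo. The raw file can be inspected without it, e.g.:)

# Show the largest slab caches by active object count.  On 2.6 kernels
# the first two lines of /proc/slabinfo are headers, and the first two
# numeric fields per row are active_objs and num_objs.
sed '1,2d' /proc/slabinfo | sort -rn -k2 | head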

Comment 12 Dave Jones 2004-06-03 21:05:53 UTC
670663 667493  99%    0.50K  95809        7    383236K ext3_inode_cache
658008 604830  91%    0.15K  25308       26    101232K dentry_cache

Shitola, they're through the roof. Looks like these aren't getting
pruned aggressively enough.

Comment 13 Arjan van de Ven 2004-06-03 21:08:15 UTC
Time to put back the FC2 GA hack?
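(The thread does not say what that hack was. One generic 2.6 knob with the same aim is /proc/sys/vm/vfs_cache_pressure, which biases reclaim toward the dentry and inode caches; this is offered for illustration only and is not necessarily the hack in question.)

# Values above the default of 100 make the kernel reclaim
# dentry/inode cache more aggressively relative to page cache.
cat /proc/sys/vm/vfs_cache_pressure
echo 1000 > /proc/sys/vm/vfs_cache_pressure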

Comment 14 Arjan van de Ven 2004-06-04 20:39:58 UTC
Can you try the 422 kernel from http://people.redhat.com/arjanv/2.6/ ?

Comment 15 Andy Green 2004-06-04 21:31:59 UTC
Well - it's definitely improved and different. 
 
 Active / Total Objects (% used)    : 593462 / 623875 (95.1%) 
 Active / Total Slabs (% used)      : 17185 / 17189 (100.0%) 
 Active / Total Caches (% used)     : 76 / 117 (65.0%) 
 Active / Total Size (% used)       : 58980.51K / 63945.79K (92.2%) 
 Minimum / Average / Maximum Object : 0.01K / 0.10K / 128.00K 
 
  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME 
315112 314812  99%    0.03K   2648      119     10592K size-32 
105300 105252  99%    0.05K   1404       75      5616K buffer_head 
 66246  52266  78%    0.06K   1086       61      4344K size-64 
 52934  47668  90%    0.50K   7562        7     30248K ext3_inode_cache
 39702  34788  87%    0.15K   1527       26      6108K dentry_cache
 18270  18267  99%    0.27K   1305       14      5220K radix_tree_node
 
[root@fastcat root]# free -o
             total       used       free     shared    buffers     cached
Mem:       1037136     371464     665672          0      23636     175124
Swap:      1044216          0    1044216
[root@fastcat root]# /etc/cron.daily/slocate.cron
[root@fastcat root]# free -o
             total       used       free     shared    buffers     cached
Mem:       1037136     708856     328280          0     393716     118960
Swap:      1044216          0    1044216
 
 
What I am effectively seeing now is that instead of the memory
disappearing from what is visible to userspace into the ext3_inode
and dentry caches, about 360MB of my 1GB of physical memory is still
allocated to "buffers" after slocate completes.

I guess that is a better situation than before: since the memory is
at least accounted for by the userspace free app, presumably it can
be clawed back when needed.  I don't understand what role a "buffer"
fulfils, but is there any reason for it to remain allocated after the
file handle, and even the process that used it, is closed?
 

Comment 16 Dave Jones 2004-06-15 00:13:36 UTC
*** Bug 119758 has been marked as a duplicate of this bug. ***

Comment 17 Arjan van de Ven 2004-06-19 12:27:06 UTC
buffer/buffers is basically part of the disk cache... removing
diskcache to just gain back memory is not a good idea, it's better to
remove it on demand, which is what should be happening.
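(On kernels newer than those discussed here, 2.6.16 and later, this on-demand reclaim can be demonstrated by forcing it by hand:)

sync                                # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries and inodes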

Comment 18 Andy Green 2004-06-21 08:43:42 UTC
Thanks for that advice Arjan, I have noticed that the buffers
allocation is indeed given up over time harmlessly.

The real commit now with a fairly busy desktop and slocate having been
run is smaller than ever, so it is a very good fix.  Thanks.

Comment 19 Nikolay Ognyanov 2004-12-07 06:01:44 UTC
The problem I encounter with all this is that part 
of memory "eaten up" by updatedb does not show as
buffers or cached in output of free (without -o option). 
What I observe is that after execution of slocate there
is increase in the quantity of (used - (buffers + cached))
and corresponding decrease of (free + buffers +cached).
This may or may not be real kernel leak but it is real
problem on a server where not knowing the size of actually 
available memory is not too much better than not haveing 
available memory at all:((
Does anybody have a clue to this?
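(The quantity described above can be read straight out of free; on these kernels its growth after updatedb corresponds to the slab usage discussed earlier, which free does not break out as a separate column.)

# "used" minus buffers and cached: memory that is neither free nor
# attributed to the page/buffer caches (includes slab).
free | awk '/^Mem:/ { print $3 - $6 - $7 " kB used excluding buffers/cache" }'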

