Bug 119758 - ext3_inode_cache hogs memory
Summary: ext3_inode_cache hogs memory
Keywords:
Status: CLOSED DUPLICATE of bug 100666
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Arjan van de Ven
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks: FC2Target
 
Reported: 2004-04-02 00:40 UTC by Arun Sharma
Modified: 2007-11-30 22:10 UTC
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2006-02-21 19:02:20 UTC
Type: ---
Embargoed:



Description Arun Sharma 2004-04-02 00:40:57 UTC
Description of problem:

When a cron job kicks off "updatedb", it seems to allocate a lot of
inodes that never get freed. This results in the system becoming very
slow and unresponsive.

Version-Release number of selected component (if applicable):

2.6.3-2.1.253.2.1
2.6.4-1.300

How reproducible:

Run updatedb.
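
A minimal way to watch the leak while reproducing (a sketch; updatedb's
path is assumed and may differ per package):

# /usr/bin/updatedb &
# while true; do grep '^ext3_inode_cache' /proc/slabinfo; sleep 10; done

On the affected kernels the object counts in the first two columns climb
steadily and, per the behavior described above, do not come back down
after updatedb finishes.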

 
Actual results:

On a 512MB machine:

$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  1   1772  11368   1180  54572    0    0    15     9   29   273  8  1 60 31
 0  1   1772  11552    980  54548    0    0     4     0 1044   324  1  0  0 99
 0  1   1772  11552    980  54544    0    0     0     0 1039   409  0  0  0 100

# cat /proc/slabinfo | sed 's/:.*//' | awk '{print $0, $3 * $4}' | sort +6rn | head -10
ext3_inode_cache  159575 159642   1128    7    2  180076176
dentry_cache      137420 138924    408    9    1  56680992
size-256          162326 162386    280   14    1  45468080
size-64           265608 265869     88   43    1  23396472
pte_chain          10841  12600    128   30    1  1612800
size-4096            305    305   4096    1    1  1249280
radix_tree_node      897   2177    544    7    1  1184288
inode_cache         1350   1350    848    9    2  1144800
biovec-BIO_MAX_PAGES    256    256   4096    1    1  1048576
vm_area_struct      6459   6475    152   25    1  984200
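
For anyone reading the numbers: the sed strips the tunables/slabdata
statistics after the colon, and the awk appends num_objs * objsize,
i.e. the total bytes each slab cache occupies. So ext3_inode_cache
alone holds 159642 * 1128 = 180,076,176 bytes, about 172 MB of a 512MB
box. The `+6rn` key syntax is obsolete; on newer coreutils the
equivalent pipeline would be:

# cat /proc/slabinfo | sed 's/:.*//' | awk '{print $0, $3 * $4}' | sort -rn -k7 | head -10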

Alt+SysRq+T shows:

Apr  1 12:18:49 arun-desktop kernel: updatedb      D 00000100175339f4     0  5839   5813                     (NOTLB)
Apr  1 12:18:49 arun-desktop kernel: 0000010017533a50 0000000000000006 000000501c378920 0000010017533a34
Apr  1 12:18:49 arun-desktop kernel:        0000010017d25240 000000000001d9a1 00004c4d31079f84 ffffffff803d9860
Apr  1 12:18:49 arun-desktop kernel:        0000010017533b90 0000000000000246
Apr  1 12:18:49 arun-desktop kernel: Call Trace:<ffffffff80143e13>{schedule_timeout+216} <ffffffff80143d36>{process_timeout+0}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff801362c3>{io_schedule_timeout+15} <ffffffff80256877>{blk_congestion_wait+125}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80136e57>{autoremove_wake_function+0} <ffffffff80136e57>{autoremove_wake_function+0}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff8016312f>{__alloc_pages+724} <ffffffff80163197>{__get_free_pages+31}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80167835>{cache_grow+479} <ffffffff8016802c>{cache_alloc_refill+1101}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff801685b7>{kmem_cache_alloc+75} <ffffffffa0056b71>{:ext3:ext3_alloc_inode+19}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff801a7571>{alloc_inode+21} <ffffffff801a89d3>{get_new_inode_fast+21}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffffa0053e95>{:ext3:ext3_lookup+90} <ffffffff801978f8>{real_lookup+111}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80197d11>{do_lookup+84} <ffffffff80198aae>{link_path_walk+3429}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80197237>{getname+31} <ffffffff80199024>{path_lookup+359}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff801991ae>{__user_walk+47} <ffffffff8019297e>{vfs_lstat+21}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80125a87>{sys32_lstat64+17} <ffffffff80125317>{sysenter_do_call+27}

Additional info:

It looks like the ext3_inode_cache object size is much bigger than in
the base kernels. Regardless of that, the number of objects seems to
be monotonically increasing.
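
Not a fix, but a workaround sketch: assuming this kernel has the 2.6
vm.vfs_cache_pressure tunable, values above the default of 100 bias
reclaim toward the dentry and inode slabs:

# cat /proc/sys/vm/vfs_cache_pressure
100
# echo 200 > /proc/sys/vm/vfs_cache_pressure

This only relieves the pressure; it does not explain why the objects
are not being reclaimed in the first place.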

Comment 1 Arjan van de Ven 2004-04-02 07:49:11 UTC
Are there any oopses that happened?

Comment 2 Arun Sharma 2004-04-02 18:09:10 UTC
No oopses. Only slow behavior due to a large amount of memory locked
up in the ext3_inode_cache slab.

Comment 3 Arun Sharma 2004-04-08 06:01:45 UTC
BTW, I was using the 32-bit updatedb. It's possible that the leak is
in the 32-bit syscall layer, but I'm not sure.
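
The compat path is easy to confirm: the sys32_lstat64 and
sysenter_do_call frames in the trace above are the 32-bit syscall
entry points, and something like

$ file /usr/bin/updatedb

should report an ELF 32-bit executable if that is the binary in use
(path assumed; it may differ per package).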

Comment 4 Dave Jones 2004-05-25 16:28:14 UTC
This should be fixed in the final FC2 kernel.


Comment 5 Dave Jones 2004-06-15 00:13:33 UTC

*** This bug has been marked as a duplicate of 100666 ***

Comment 6 Red Hat Bugzilla 2006-02-21 19:02:20 UTC
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.

