Chapter 7: Data Cleaning and Preparation

This chapter covers tools for handling missing data, duplicate data, string manipulation, and other analytical data transformations. The next chapter focuses on combining and reshaping datasets in a variety of ways.

7.1 Handling Missing Data

All of the descriptive statistics on pandas objects exclude missing data by default.

pandas uses the floating-point value NaN (Not a Number) to represent missing data. We call it a sentinel value: it marks data that is not available (NA).

Detecting the sentinel value is easy:

import pandas as pd
import numpy as np


string_data = pd.Series(['aardvark', 'artichoke', np.nan, 'avocado',None])
print(string_data)
# 0 aardvark
# 1 artichoke
# 2 NaN
# 3 avocado
# 4 None
# dtype: object

print(string_data.isnull())
# 0 False
# 1 False
# 2 True
# 3 False
# 4 True
# dtype: bool

NA handling methods

The main tools, each covered below, are isnull/notnull for detecting missing values, dropna for filtering them out, and fillna for filling them in.

Filtering Out Missing Data

On a Series, dropna returns a Series containing only the non-null data and index values:

import pandas as pd
import numpy as np

from numpy import nan as NA

data = pd.Series([1, NA, 3.5, NA, 7])
print(data)
# 0 1.0
# 1 NaN
# 2 3.5
# 3 NaN
# 4 7.0
# dtype: float64


print(data.dropna())
# 0 1.0
# 2 3.5
# 4 7.0
# dtype: float64


# Equivalent to:
print(data[data.notnull()])
# 0 1.0
# 2 3.5
# 4 7.0
# dtype: float64

On a DataFrame, dropna by default drops any row containing a missing value (axis=0 by default):

import pandas as pd
import numpy as np

from numpy import nan as NA

data = pd.DataFrame([[1., 6.5, 3.], [1., NA, NA],
                     [NA, NA, NA], [NA, 6.5, 3.]])

print(data)
# 0 1 2
# 0 1.0 6.5 3.0
# 1 1.0 NaN NaN
# 2 NaN NaN NaN
# 3 NaN 6.5 3.0

# axis=0 is the default: drop rows
print(data.dropna())
# 0 1 2
# 0 1.0 6.5 3.0


# Passing how='all' drops only the rows that are entirely NA:
print(data.dropna(how='all'))
# 0 1 2
# 0 1.0 6.5 3.0
# 1 1.0 NaN NaN
# 3 NaN 6.5 3.0


# axis=1 drops columns that contain NA
print(data.dropna(axis=1))
# Empty DataFrame
# Columns: []
# Index: [0, 1, 2, 3]

# axis=0 (the default) drops rows that contain NA
print(data.dropna(axis=0))
# 0 1 2
# 0 1.0 6.5 3.0


df = pd.DataFrame(np.random.randn(7, 3))
df.iloc[:4, 1] = NA
df.iloc[:2, 2] = NA

print(df)
# 0 1 2
# 0 -1.601317 NaN NaN
# 1 -1.626248 NaN NaN
# 2 -1.142053 NaN 0.679764
# 3 0.246375 NaN 0.441402
# 4 -0.004399 1.075954 -1.366072
# 5 0.038879 0.077374 -0.557103
# 6 1.207704 0.092570 0.587832

print(df.dropna())
# 0 1 2
# 4 -0.004399 1.075954 -1.366072
# 5 0.038879 0.077374 -0.557103
# 6 1.207704 0.092570 0.587832

# thresh=2: keep a row only if it has at least 2 non-NA values
# In plain terms: rows with fewer than 2 non-NA entries are dropped
print(df.dropna(thresh=2))
# 0 1 2
# 2 -1.142053 NaN 0.679764
# 3 0.246375 NaN 0.441402
# 4 -0.004399 1.075954 -1.366072
# 5 0.038879 0.077374 -0.557103
# 6 1.207704 0.092570 0.587832

Filling In Missing Data

The fillna method:

  • Calling fillna with a constant replaces the missing values with that constant
  • Calling fillna with a dict fills each column with a different value
    (note: passing a dict makes axis=1 ineffective, a design quirk of the function,
    so filling by row requires a double transpose, as the example below shows)

import pandas as pd
import numpy as np

from numpy import nan as NA

adict = {'col1': [1., 1., NA, NA],
         'col2': [6.5, NA, NA, 6.5],
         'col3': [3.0, NA, NA, 3.0]}

df = pd.DataFrame(adict,index=['a','b','c','d'])

print(df)
# col1 col2 col3
# a 1.0 6.5 3.0
# b 1.0 NaN NaN
# c NaN NaN NaN
# d NaN 6.5 3.0

print(df.fillna(0))
# col1 col2 col3
# a 1.0 6.5 3.0
# b 1.0 0.0 0.0
# c 0.0 0.0 0.0
# d 0.0 6.5 3.0

print(df.fillna('hyl',axis=1))
# col1 col2 col3
# a 1 6.5 3
# b 1 hyl hyl
# c hyl hyl hyl
# d hyl 6.5 3

print(df.fillna({'col1': 0.5, 'col3': 0}))
# col1 col2 col3
# a 1.0 6.5 3.0
# b 1.0 NaN 0.0
# c 0.5 NaN 0.0
# d 0.5 6.5 3.0

# A dict cannot target fill values by row
# print(df.fillna({'b':'uu'},axis=1))
# NotImplementedError: Currently only can fill with dict/Series column by column

# To fill values for a specific row, use a double transpose
print(df.T.fillna({'b':'uu'}).T)
# col1 col2 col3
# a 1 6.5 3
# b 1 uu uu
# c NaN NaN NaN
# d NaN 6.5 3

Other fillna parameters:

  • inplace:
    fillna returns a new object by default, but can also modify the existing object in place
  • method:
    the fill method, e.g. 'ffill' (forward) or 'bfill' (backward)
  • limit:
    the maximum number of consecutive NaN values to fill forward/backward

import pandas as pd
import numpy as np

from numpy import nan as NA

adict = {'col1': [1., 1., NA, NA],
         'col2': [6.5, NA, NA, 6.5],
         'col3': [3.0, NA, NA, 3.0]}


df = pd.DataFrame(adict,index=['a','b','c','d'])

print(df)
# col1 col2 col3
# a 1.0 6.5 3.0
# b 1.0 NaN NaN
# c NaN NaN NaN
# d NaN 6.5 3.0

a = df.fillna(0, inplace=True)

print(a)
# None

print(df)
# col1 col2 col3
# a 1.0 6.5 3.0
# b 1.0 0.0 0.0
# c 0.0 0.0 0.0
# d 0.0 6.5 3.0


df = pd.DataFrame(np.random.randn(6, 3))
df.iloc[2:, 1] = NA
df.iloc[4:, 2] = NA

print(df)
# 0 1 2
# 0 -0.221341 -0.456265 -0.349181
# 1 0.004801 -1.274752 0.509877
# 2 -1.511773 NaN 2.708623
# 3 -0.499189 NaN 1.929494
# 4 1.425541 NaN NaN
# 5 0.235956 NaN NaN

print(df.fillna(method='ffill'))
# 0 1 2
# 0 -0.221341 -0.456265 -0.349181
# 1 0.004801 -1.274752 0.509877
# 2 -1.511773 -1.274752 2.708623
# 3 -0.499189 -1.274752 1.929494
# 4 1.425541 -1.274752 1.929494
# 5 0.235956 -1.274752 1.929494

# limit: the maximum number of consecutive NaNs to forward/backward fill
# In plain terms: at most two consecutive NaNs get filled
print(df.fillna(method='ffill', limit=2))
# 0 1 2
# 0 -0.221341 -0.456265 -0.349181
# 1 0.004801 -1.274752 0.509877
# 2 -1.511773 -1.274752 2.708623
# 3 -0.499189 -1.274752 1.929494
# 4 1.425541 NaN 1.929494
# 5 0.235956 NaN 1.929494


print(df.fillna('hyl',limit=2))
# 0 1 2
# 0 -0.221341 -0.456265 -0.349181
# 1 0.004801 -1.274752 0.509877
# 2 -1.511773 hyl 2.708623
# 3 -0.499189 hyl 1.929494
# 4 1.425541 NaN hyl
# 5 0.235956 NaN hyl

Note: the value argument does not have to be a constant; you can pass in other things, such as a statistic computed from the data:

# Fill with the Series' mean (or median)
data = pd.Series([1., NA, 3.5, NA, 7])
print(data.fillna(data.mean()))
# 0 1.000000
# 1 3.833333
# 2 3.500000
# 3 3.833333
# 4 7.000000
# dtype: float64

7.2 Data Transformation

This section covers filtering, cleaning, and other transformations.

Removing duplicates:

  • duplicated:
    whether each row is a duplicate (has appeared in an earlier row)
  • drop_duplicates:
    drop the duplicate rows
  • df.drop_duplicates(['k1']):
    filter duplicates based only on the k1 column
  • keep:
    by default the first occurrence is kept; keep='last' keeps the last one

import pandas as pd
import numpy as np


data = pd.DataFrame({'k1': ['one', 'two'] * 3 + ['two'],
                     'k2': [1, 1, 2, 3, 3, 4, 4]})
print(data)
# k1 k2
# 0 one 1
# 1 two 1
# 2 one 2
# 3 two 3
# 4 one 3
# 5 two 4
# 6 two 4

# duplicated: whether each row is a duplicate (seen in an earlier row)
print(data.duplicated())
# 0 False
# 1 False
# 2 False
# 3 False
# 4 False
# 5 False
# 6 True
# dtype: bool

# drop_duplicates: drop the duplicate rows
print(data.drop_duplicates())
# k1 k2
# 0 one 1
# 1 two 1
# 2 one 2
# 3 two 3
# 4 one 3
# 5 two 4


data['v1'] = range(7)
print(data)
# k1 k2 v1
# 0 one 1 0
# 1 two 1 1
# 2 one 2 2
# 3 two 3 3
# 4 one 3 4
# 5 two 4 5
# 6 two 4 6

# Filter duplicates based only on the k1 column (drop a row whenever its k1 value has been seen)
print(data.drop_duplicates(['k1']))
# k1 k2 v1
# 0 one 1 0
# 1 two 1 1

# keep: the first occurrence is kept by default; keep='last' keeps the last one
print(data.drop_duplicates(['k1', 'k2'], keep='last'))
# k1 k2 v1
# 0 one 1 0
# 1 two 1 1
# 2 one 2 2
# 3 two 3 3
# 4 one 3 4
# 6 two 4 6

Transforming Data Using a Function or Mapping

map: a convenient way to perform element-wise transformations and other data cleaning operations.

import pandas as pd
import numpy as np


data = pd.DataFrame({'food': ['bacon', 'pulled pork', 'bacon',
                              'Pastrami', 'corned beef', 'Bacon',
                              'pastrami', 'honey ham', 'nova lox'],
                     'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
print(data)
# food ounces
# 0 bacon 4.0
# 1 pulled pork 3.0
# 2 bacon 12.0
# 3 Pastrami 6.0
# 4 corned beef 7.5
# 5 Bacon 8.0
# 6 pastrami 3.0
# 7 honey ham 5.0
# 8 nova lox 6.0


meat_to_animal = {
    'bacon':       'pig',
    'pulled pork': 'pig',
    'pastrami':    'cow',
    'corned beef': 'cow',
    'honey ham':   'pig',
    'nova lox':    'salmon'
}


lowercased = data['food'].str.lower()
print(lowercased)
# 0 bacon
# 1 pulled pork
# 2 bacon
# 3 pastrami
# 4 corned beef
# 5 bacon
# 6 pastrami
# 7 honey ham
# 8 nova lox
# Name: food, dtype: object

# The map method accepts a function or a dict-like object containing a mapping
# Using a dict mapping:
data['animal'] = lowercased.map(meat_to_animal)
print(data)
# food ounces animal
# 0 bacon 4.0 pig
# 1 pulled pork 3.0 pig
# 2 bacon 12.0 pig
# 3 Pastrami 6.0 cow
# 4 corned beef 7.5 cow
# 5 Bacon 8.0 pig
# 6 pastrami 3.0 cow
# 7 honey ham 5.0 pig
# 8 nova lox 6.0 salmon

# Or use an anonymous function:
result = data['food'].map(lambda x: meat_to_animal[x.lower()])
print(result)
# 0 pig
# 1 pig
# 2 pig
# 3 cow
# 4 cow
# 5 pig
# 6 cow
# 7 pig
# 8 salmon
# Name: food, dtype: object

Differences among map, applymap, and apply (see the sketch after this list):

  • map:
    element-wise function mapping on a Series
  • applymap:
    element-wise function mapping on a DataFrame
  • apply:
    axis-wise function mapping on a DataFrame

Replacing Values

import pandas as pd
import numpy as np

data = pd.Series([1., -999., 2., -999., -1000., 3.])
print(data)
# 0 1.0
# 1 -999.0
# 2 2.0
# 3 -999.0
# 4 -1000.0
# 5 3.0
# dtype: float64

# replace: substitute the target value
print(data.replace(-999, np.nan))
# 0 1.0
# 1 NaN
# 2 2.0
# 3 NaN
# 4 -1000.0
# 5 3.0
# dtype: float64

# Replace a list of target values at once
print(data.replace([-999, -1000], np.nan))
# 0 1.0
# 1 NaN
# 2 2.0
# 3 NaN
# 4 NaN
# 5 3.0
# dtype: float64

# List form: replace different values with different substitutes
print(data.replace([-999, -1000], [np.nan, 0]))
# 0 1.0
# 1 NaN
# 2 2.0
# 3 NaN
# 4 0.0
# 5 3.0
# dtype: float64

# Dict form: replace different values with different substitutes
print(data.replace({-999: np.nan, -1000: 0}))
# 0 1.0
# 1 NaN
# 2 2.0
# 3 NaN
# 4 0.0
# 5 3.0
# dtype: float64

Renaming Axis Indexes

  • using map
  • using rename
    1. you can pass a function mapping
    2. you can pass a dict to update a subset of the labels

import pandas as pd
import numpy as np

data = pd.DataFrame(np.arange(12).reshape((3, 4)),
                    index=['Ohio', 'Colorado', 'New York'],
                    columns=['one', 'two', 'three', 'four'])

print(data)
# one two three four
# Ohio 0 1 2 3
# Colorado 4 5 6 7
# New York 8 9 10 11


transform = lambda x: x[:4].upper()
# map: element-wise function mapping on a Series (here, on the Index)
print(data.index.map(transform))
# Index(['OHIO', 'COLO', 'NEW '], dtype='object')


# Assign the modified Index back to the DataFrame
data.index = data.index.map(transform)
print(data)
# one two three four
# OHIO 0 1 2 3
# COLO 4 5 6 7
# NEW 8 9 10 11

# rename: if you just want to change the DataFrame's row/column labels, use rename directly
print(data.rename(index=str.title, columns=str.upper))
# ONE TWO THREE FOUR
# Ohio 0 1 2 3
# Colo 4 5 6 7
# New 8 9 10 11

# rename can also take a dict to update a subset of the labels
result = data.rename(index={'OHIO': 'INDIANA'},
                     columns={'three': 'peekaboo'})
print(result)
# one two peekaboo four
# INDIANA 0 1 2 3
# COLO 4 5 6 7
# NEW 8 9 10 11

# rename also has an inplace option
x = data.rename(index={'OHIO': 'INDIANA'}, inplace=True)
print(x)
# None

Discretization and Binning

For analysis, continuous data is often discretized, i.e. split into bins.
For example: given data on a group of people's ages, divide them into bins such as "18 to 25", "26 to 35", "36 to 60", and "61 and older".

Use the pandas.cut function: cats = pd.cut(ages, bins)

  • cats:
    a Categorical object

  • cats.codes:
    the category code for each value

  • cats.categories:
    the categories themselves

  • right:
    whether the right edge of each bin is closed (right=True by default)

  • labels:
    names for the bins

  • pd.value_counts(cats):
    the bin counts

  • precision:
    limit the bin edges to at most n decimal digits

import pandas as pd
import numpy as np

ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18, 25, 35, 60, 100]


cats = pd.cut(ages, bins)

# cut returns a Categorical object
print(type(cats))
# <class 'pandas.core.arrays.categorical.Categorical'>

print(cats)
# [(18, 25], (18, 25], (18, 25], (25, 35], (18, 25], ..., (25, 35], (60, 100], (35, 60], (35, 60], (25, 35]]
# Length: 12
# Categories (4, interval[int64]): [(18, 25] < (25, 35] < (35, 60] < (60, 100]]


# The category code for each value
print(cats.codes)
# [0 0 0 1 0 0 2 1 3 2 2 1]


# The categories themselves (note: by default bins are open on the left, closed on the right)
print(cats.categories)
# IntervalIndex([(18, 25], (25, 35], (35, 60], (60, 100]]
# closed='right',
# dtype='interval[int64]')


# Count the bins in the pandas.cut result
print(pd.value_counts(cats))
# (18, 25] 5
# (35, 60] 3
# (25, 35] 3
# (60, 100] 1
# dtype: int64

# right=False: make the bins closed on the left, open on the right
cats = pd.cut(ages, [18, 26, 36, 61, 100], right=False)
print(cats)
# [[18, 26), [18, 26), [18, 26), [26, 36), [18, 26), ..., [26, 36), [61, 100), [36, 61), [36, 61), [26, 36)]
# Length: 12
# Categories (4, interval[int64]): [[18, 26) < [26, 36) < [36, 61) < [61, 100)]


ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18, 25, 35, 60, 100]
group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']

# labels: set the bin names
cats = pd.cut(ages, bins, labels=group_names)
print(cats)
# [Youth, Youth, Youth, YoungAdult, Youth, ..., YoungAdult, Senior, MiddleAged, MiddleAged, YoungAdult]
# Length: 12
# Categories (4, object): [Youth < YoungAdult < MiddleAged < Senior]




data = np.random.rand(20)
print(data)
# [0.61227823 0.26643464 0.98705774 0.21116076 0.01736529 0.5511922
# 0.15892424 0.50131 0.23453 0.57254727 0.84205302 0.80831397
# 0.81056495 0.8453584 0.42360283 0.50968521 0.97950919 0.22434297
# 0.07420821 0.66474358]

# If you pass a number of bins instead of explicit bin edges, cut computes
# equal-length bins from the minimum and maximum of the data.
# Here, some uniformly distributed data is split into four groups.

# precision=2: limit the bin edges to at most 2 decimal digits
print(pd.cut(data, 4, precision=2))
# [(0.5, 0.74], (0.26, 0.5], (0.74, 0.99], (0.016, 0.26], (0.016, 0.26], ..., (0.5, 0.74], (0.74, 0.99], (0.016, 0.26], (0.016, 0.26], (0.5, 0.74]]
# Length: 20
# Categories (4, interval[float64]): [(0.016, 0.26] < (0.26, 0.5] < (0.5, 0.74] < (0.74, 0.99]]

qcut is a function very similar to cut, but it bins the data based on sample quantiles. You therefore get bins of roughly equal size (each bin contains roughly the same number of data points):

import pandas as pd
import numpy as np


data = np.random.randn(1000)

# Bin the data by sample quantiles, which yields roughly equal-size bins:
cats = pd.qcut(data, 4)
print(cats)
# [(-0.66, 0.013], (-0.66, 0.013], (-2.8979999999999997, -0.66], (0.674, 2.814], (-0.66, 0.013], ..., (0.013, 0.674], (0.013, 0.674], (-0.66, 0.013], (-2.8979999999999997, -0.66], (0.674, 2.814]]
# Length: 1000
# Categories (4, interval[float64]): [(-2.8979999999999997, -0.66] < (-0.66, 0.013] < (0.013, 0.674] < (0.674, 2.814]]

# The bins are roughly equal in size
print(pd.value_counts(cats))
# (0.674, 2.814] 250
# (0.013, 0.674] 250
# (-0.66, 0.013] 250
# (-2.8979999999999997, -0.66] 250
# dtype: int64

# Pass custom quantiles (numbers between 0 and 1, endpoints inclusive):
ca = pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.])

print(ca)
# [(-1.295, 0.013], (-1.295, 0.013], (-1.295, 0.013], (1.253, 2.814], (-1.295, 0.013], ..., (0.013, 1.253], (0.013, 1.253], (-1.295, 0.013], (-1.295, 0.013], (0.013, 1.253]]
# Length: 1000
# Categories (4, interval[float64]): [(-2.8979999999999997, -1.295] < (-1.295, 0.013] < (0.013, 1.253] <
# (1.253, 2.814]]

print(pd.value_counts(ca))
# (0.013, 1.253] 400
# (-1.295, 0.013] 400
# (1.253, 2.814] 100
# (-2.8979999999999997, -1.295] 100
# dtype: int64

Detecting and Filtering Outliers

np.sign(df): returns 1 or -1 depending on whether each value is positive or negative

import pandas as pd
import numpy as np

# A DataFrame of normally distributed data
data = pd.DataFrame(np.random.randn(1000, 4))

print(data.describe())
# 0 1 2 3
# count 1000.000000 1000.000000 1000.000000 1000.000000
# mean 0.001186 -0.011577 -0.022340 -0.037435
# std 1.030316 1.013564 0.995029 1.076454
# min -3.041240 -2.801578 -3.434478 -3.507477
# 25% -0.682765 -0.642027 -0.659873 -0.779336
# 50% 0.038580 -0.007997 -0.042658 -0.060064
# 75% 0.643663 0.680967 0.662104 0.673498
# max 4.063368 3.352299 3.377122 3.341305


# Values in one column whose absolute value exceeds 3:
col = data[2]
print(col[np.abs(col) > 3])
# 10 -3.074688
# 41 -3.434478
# 201 -3.066475
# 796 3.010803
# 896 3.377122
# Name: 2, dtype: float64


# Select all rows containing a value exceeding 3 or -3
print(data[(np.abs(data) > 3).any(axis=1)])
# 0 1 2 3
# 10 0.399631 -1.872873 -3.074688 -0.830713
# 41 1.252504 2.392584 -3.434478 2.768397
# ...
# 862 3.015624 -0.078902 -0.388504 0.220393
# 896 -1.152745 0.992461 3.377122 -0.394303


# sign(): produces 1 or -1 depending on whether each value is positive or negative
# Cap the values to [-3, 3]; anything outside is clamped to -3/3
data[np.abs(data) > 3] = np.sign(data) * 3
print(data.describe())
# 0 1 2 3
# count 1000.000000 1000.000000 1000.000000 1000.000000
# mean -0.000274 -0.011929 -0.022152 -0.037598
# std 1.025224 1.012455 0.991949 1.072968
# min -3.000000 -2.801578 -3.000000 -3.000000
# 25% -0.682765 -0.642027 -0.659873 -0.779336
# 50% 0.038580 -0.007997 -0.042658 -0.060064
# 75% 0.643663 0.680967 0.662104 0.673498
# max 3.000000 3.000000 3.000000 3.000000

Permutation and Random Sampling:

  • numpy.random.permutation():
    produce an array of integers in random order
  • df.take():
    select rows/columns by position
  • df.sample(n=3):
    randomly sample 3 rows (columns with axis=1)

import pandas as pd
import numpy as np

df = pd.DataFrame(np.arange(5 * 4).reshape((5, 4)))
print(df)
# 0 1 2 3
# 0 0 1 2 3
# 1 4 5 6 7
# 2 8 9 10 11
# 3 12 13 14 15
# 4 16 17 18 19

# permutation(): an array of integers in random order
sampler = np.random.permutation(5)
print(sampler)
# [3 2 1 0 4]

print(df.take([1],axis=1))
# 1
# 0 1
# 1 5
# 2 9
# 3 13
# 4 17

# Use the permutation to select rows
print(df.take(sampler))
# 0 1 2 3
# 3 12 13 14 15
# 2 8 9 10 11
# 1 4 5 6 7
# 0 0 1 2 3
# 4 16 17 18 19

# Randomly sample 3 rows (pass axis=1 for columns)
print(df.sample(n=3))
# 0 1 2 3
# 4 16 17 18 19
# 2 8 9 10 11
# 3 12 13 14 15

Computing Indicator/Dummy Variables

Some background first:

  1. When building a regression model, if a predictor X is continuous, its coefficient β is interpreted as the average change in the response Y for a one-unit change in X, holding the other predictors constant. If X is binary, e.g. drinking (1 = yes, 0 = no), β is the average difference in Y between drinkers (X=1) and non-drinkers (X=0), other predictors held constant.

     When X is a multi-level categorical variable, however (occupation, education, blood type, disease severity, and so on), a single coefficient cannot adequately describe the relationships among the levels or their effect on the response.

     In that case we usually convert the categorical variable into dummy variables. Each dummy captures the difference between two (or a few) levels, and each gets its own estimated coefficient, making the regression results easier to interpret and more meaningful.

  2. Dummy variables are also known as indicator variables (Dummy Variables).

  3. Categorical variables:
     e.g. sex, class, subject.
     For a categorical variable like sex, we represent the categories numerically, usually with 0 and 1, say male = 1 and female = 0.
     "Quantifying" sex this way lets a statistical model incorporate its effect and improves the model's precision. Broadly speaking, this is an application of dummy variables.

  4. A dummy variable:
     replaces a variable that cannot enter the statistical analysis directly with artificial numeric values (0 or 1). For a two-level variable like sex, 0 and 1 suffice; for, say, the pathological type of gastric cancer (4 categories), we need a set of values, structured like this:

                                   D1  D2  D3
     adenocarcinoma                 0   0   0
     mucinous adenocarcinoma        1   0   0
     signet ring cell carcinoma     0   1   0
     special-type carcinoma         0   0   1

     That is, we introduce 3 variables, D1, D2, and D3, to express the 4 pathological types.

     Why not introduce 4 variables? Wooldridge writes in Introductory Econometrics that if a qualitative variable has m mutually exclusive categories, the model may include only m-1 dummy variables; otherwise you fall into the dummy variable trap and get perfect collinearity. Look it up if you're curious; otherwise, just remember that a variable with m mutually exclusive categories becomes m-1 dummies.

  5. ==Dummy variables exist to quantify categorical variables== so they can enter the analysis model. They broaden the range of predictors a regression can accept: categorical variables can now be included too. (The m-1 convention is shown in the sketch after this list.)

  6. Reference:
     回归模型中的哑变量是个啥?何时需要设置哑变量?
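
pandas can produce the m-1 encoding directly: get_dummies accepts a drop_first parameter that drops the first category level, avoiding the dummy variable trap. A minimal sketch (the category values here are illustrative):

import pandas as pd

s = pd.Series(['adeno', 'mucinous', 'signet', 'special', 'adeno'])

# Full one-hot encoding: m columns for m categories
print(pd.get_dummies(s))

# drop_first=True keeps m-1 columns, mirroring the D1/D2/D3 scheme above
print(pd.get_dummies(s, drop_first=True))
#    mucinous  signet  special
# 0         0       0        0
# 1         1       0        0
# 2         0       1        0
# 3         0       0        1
# 4         0       0        0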

Converting a categorical variable into a dummy/indicator matrix

  • the get_dummies function:
    ==if a column of a DataFrame contains k distinct values, it derives a k-column matrix or DataFrame whose values are all 1s and 0s==.

# get_dummies(): create dummy variables
import pandas as pd
import numpy as np


se = pd.Series({'key': ['hyl','dsz','czj','gzr','hyl','gzr']})
print(se)
# key [hyl, dsz, czj, gzr, hyl, gzr]
# dtype: object

print(pd.get_dummies(se['key']))
# czj dsz gzr hyl
# 0 0 0 0 1
# 1 0 1 0 0
# 2 1 0 0 0
# 3 0 0 1 0
# 4 0 0 0 1
# 5 0 0 1 0


df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b']})
print(df)
# key
# 0 b
# 1 b
# 2 a
# 3 c
# 4 a
# 5 b

print(pd.get_dummies(df['key']))
# a b c
# 0 0 1 0
# 1 0 1 0
# 2 1 0 0
# 3 0 0 1
# 4 1 0 0
# 5 0 1 0


df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
                   'data1': range(6)})
print(df)
# key data1
# 0 b 0
# 1 b 1
# 2 a 2
# 3 c 3
# 4 a 4
# 5 b 5

print(pd.get_dummies(df['key']))
# a b c
# 0 0 1 0
# 1 0 1 0
# 2 1 0 0
# 3 0 0 1
# 4 1 0 0
# 5 0 1 0

Note (a short sketch of the Index methods follows this list):

  • df[['data1']]: a DataFrame
  • df['data1']: a Series
  • an Index object supports dict-like lookups:
    1. get_loc(value): the position of a single label
    2. get_indexer(values): the positions of a batch of labels; missing labels map to -1
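
A quick sketch of those Index methods (the labels are illustrative):

import pandas as pd

idx = pd.Index(['a', 'b', 'c'])

# get_loc: position of one label
print(idx.get_loc('b'))
# 1

# get_indexer: positions of several labels; unknown labels come back as -1
print(idx.get_indexer(['c', 'a', 'x']))
# [ 2  0 -1]
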
import pandas as pd
import numpy as np


df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
                   'data1': range(6)})
print(df)
# key data1
# 0 b 0
# 1 b 1
# 2 a 2
# 3 c 3
# 4 a 4
# 5 b 5

x = pd.get_dummies(df['key'])

# Adding dummy variables with get_dummies still yields a DataFrame
print(type(x))
# <class 'pandas.core.frame.DataFrame'>

print(x)
# a b c
# 0 0 1 0
# 1 0 1 0
# 2 1 0 0
# 3 0 0 1
# 4 1 0 0
# 5 0 1 0


# prefix: prepend a prefix to the column names
dummies = pd.get_dummies(df['key'], prefix='hyl')
print(dummies)
# hyl_a hyl_b hyl_c
# 0 0 1 0
# 1 0 1 0
# 2 1 0 0
# 3 0 0 1
# 4 1 0 0
# 5 0 1 0

# df[['data1']]: this is a DataFrame
df_with_dummy = df[['data1']].join(dummies)
print(df_with_dummy)
# data1 hyl_a hyl_b hyl_c
# 0 0 0 1 0
# 1 1 0 1 0
# 2 2 1 0 0
# 3 3 0 0 1
# 4 4 1 0 0
# 5 5 0 1 0

print(type(df[['data1']]))
# <class 'pandas.core.frame.DataFrame'>

print(type(df['data1']))
# <class 'pandas.core.series.Series'>

Things get more complicated when a row of the DataFrame belongs to multiple categories.

The contents of movies.dat:

1::Toy Story (1995)::Animation|Children's|Comedy
2::Jumanji (1995)::Adventure|Children's|Fantasy
3::Grumpier Old Men (1995)::Comedy|Romance
4::Waiting to Exhale (1995)::Comedy|Drama

There are three columns: movie id, title, and genres. Notice that a movie can have many genres.

Now we want to find all of the distinct genres and build indicators (which genres each movie does and does not have):

import pandas as pd
import numpy as np


mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('datasets/movielens/movies.dat', sep='::',
                       header=None, names=mnames)
print(movies.head())
# movie_id ... genres
# 0 1 ... Animation|Children's|Comedy
# 1 2 ... Adventure|Children's|Fantasy
# 2 3 ... Comedy|Romance
# 3 4 ... Comedy|Drama
# 4 5 ... Comedy
# [5 rows x 3 columns]


all_genres = []
for x in movies.genres:
    all_genres.extend(x.split('|'))
genres = pd.unique(all_genres)

print(type(genres))
# <class 'numpy.ndarray'>

# All the distinct genres
print(genres)
# ['Animation' "Children's" 'Comedy' 'Adventure' 'Fantasy' 'Romance' 'Drama'
# 'Action' 'Crime' 'Thriller' 'Horror' 'Sci-Fi' 'Documentary' 'War'
# 'Musical' 'Mystery' 'Film-Noir' 'Western']


print('-------------------')


# Build an all-zeros DataFrame whose column labels are all the genres
zero_matrix = np.zeros((len(movies), len(genres)))
dummies = pd.DataFrame(zero_matrix, columns=genres)

# # Get the genres of the first row
# gen = movies.genres[0]
# print(gen) # Animation|Children's|Comedy


# # Try get_indexer to fetch the positions of a batch of labels
# idx = dummies.columns.get_indexer(gen.split('|'))
# print(idx) # [0 1 2]


# Walk the genres of every row
for i, gen in enumerate(movies.genres):
    # positions of this movie's genres within dummies' genre columns
    indices = dummies.columns.get_indexer(gen.split('|'))
    # set those entries to 1
    dummies.iloc[i, indices] = 1

# Add a prefix to the dummies columns, then join
movies_windic = movies.join(dummies.add_prefix('Genre_'))

# The genres the first movie does and does not have
print(movies_windic.iloc[0])
# movie_id 1
# title Toy Story (1995)
# genres Animation|Children's|Comedy
# Genre_Animation 1
# Genre_Children's 1
# Genre_Comedy 1
# Genre_Adventure 0
# Genre_Fantasy 0
# Genre_Romance 0
# Genre_Drama 0
# Genre_Action 0
# Genre_Crime 0
# Genre_Thriller 0
# Genre_Horror 0
# Genre_Sci-Fi 0
# Genre_Documentary 0
# Genre_War 0
# Genre_Musical 0
# Genre_Mystery 0
# Genre_Film-Noir 0
# Genre_Western 0
# Name: 0, dtype: object

A useful recipe for statistical applications:
combine get_dummies with a discretization function such as cut

import pandas as pd
import numpy as np


np.random.seed(12345)
values = np.random.rand(10)

print(values)
# [0.92961609 0.31637555 0.18391881 0.20456028 0.56772503 0.5955447
# 0.96451452 0.6531771 0.74890664 0.65356987]

bins = [0, 0.2, 0.4, 0.6, 0.8, 1]

# Cut the values into bins
categories = pd.cut(values, bins)
print(categories)
# [(0.8, 1.0], (0.2, 0.4], (0.0, 0.2], (0.2, 0.4], (0.4, 0.6], (0.4, 0.6], (0.8, 1.0], (0.6, 0.8], (0.6, 0.8], (0.6, 0.8]]
# Categories (5, interval[float64]): [(0.0, 0.2] < (0.2, 0.4] < (0.4, 0.6] < (0.6, 0.8] < (0.8, 1.0]]

# Pass the bins to get_dummies to make dummy variables
print(pd.get_dummies(categories))
# (0.0, 0.2] (0.2, 0.4] (0.4, 0.6] (0.6, 0.8] (0.8, 1.0]
# 0 0 0 0 0 1
# 1 0 1 0 0 0
# 2 1 0 0 0 0
# 3 0 1 0 0 0
# 4 0 0 1 0 0
# 5 0 0 1 0 0
# 6 0 0 0 0 1
# 7 0 0 0 1 0
# 8 0 0 0 1 0
# 9 0 0 0 1 0

7.3 String Manipulation

String Object Methods
These are the familiar built-in methods, so just a quick tour:

val = 'a,b,  guido'
print(val.split(','))
# ['a', 'b', ' guido']

# split is often combined with strip to trim whitespace
pieces = [x.strip() for x in val.split(',')]
print(pieces)
# ['a', 'b', 'guido']

val = '::'.join(pieces)
print(val)
# a::b::guido

print(val.find('guido'))
# 6

print(val.count(':'))
# 4

print(val.replace(':',''))
# abguido

Regular expressions:
note the re.split method

import re

text = 'hyl is \t sb'
print(re.split(r'\s', text))
# ['hyl', 'is', '', '', 'sb']

Vectorized String Functions in pandas

As mentioned earlier, series.map can apply a function to the data, but it raises an error when NA (null) values are present (a small sketch follows).

  • pandas exposes Python's native string methods through a str accessor, somewhat like Scrapy
  • ==regular expression methods are likewise called through str==

In other words, pandas string methods and regular expressions share the same str accessor.
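
For example, calling a string method through map blows up on the NA entry, while the str accessor skips it (a sketch; the data mirrors the example below):

import numpy as np
import pandas as pd

s = pd.Series(['dave@google.com', np.nan])

# map applies the function to every element, NaN included:
# s.map(lambda x: x.upper())
# AttributeError: 'float' object has no attribute 'upper'

# The str accessor skips NA and propagates it instead:
print(s.str.upper())
# 0    DAVE@GOOGLE.COM
# 1                NaN
# dtype: object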

import re

import pandas as pd
import numpy as np



data = {'Dave': 'asdasd dave@google.com', 'Steve': 'asdasd steve@gmail.com',
        'Rob': 'asdasd rob@gmail.com', 'Wes': np.nan}
data = pd.Series(data)
print(data)
# Dave asdasd dave@google.com
# Steve asdasd steve@gmail.com
# Rob asdasd rob@gmail.com
# Wes NaN
# dtype: object

# Python's native string methods are called through the str accessor, somewhat like Scrapy
print(data.str.contains('gmail'))
# Dave False
# Steve True
# Rob True
# Wes NaN
# dtype: object


pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}'
result = data.str.findall(pattern, flags=re.IGNORECASE)
print(result)
# Dave [dave@google.com]
# Steve [steve@gmail.com]
# Rob [rob@gmail.com]
# Wes NaN
# dtype: object


matches = data.str.match(pattern, flags=re.IGNORECASE)
print(matches)
# Dave False
# Steve False
# Rob False
# Wes NaN
# dtype: object


print(matches.str.get(1))
# Dave NaN
# Steve NaN
# Rob NaN
# Wes NaN
# dtype: float64


print(matches.str[0])
# Dave NaN
# Steve NaN
# Rob NaN
# Wes NaN
# dtype: float64


print(data.str[:5])
# Dave asdas
# Steve asdas
# Rob asdas
# Wes NaN
# dtype: object

Partial listing of vectorized string methods

Among them: cat, contains, count, endswith, startswith, extract, findall, get, join, len, lower, upper, match, pad, slice, split, strip, and replace.