Protecting against SQL injection using ActiveRecord
Following on from the question "how can I use a LIKE query in Ruby with Sinatra?", I have the following problem securing my SQL against injection. Here is my method for building a query clause for a string-typed field: it receives a v(alue) to search for and a k(ey) (= field) to look in. Afterwards the various selections are joined by
selection.join(' and ')
def string_selector(k, v)
  case
  when v[/\|/]
    v.scan(/([^\|]+)(\|)([^\|]+)/).map { |p|
      "lower(#{k}) LIKE '%#{p.first.downcase}%' or lower(#{k}) LIKE '%#{p.last.downcase}%'"
    }
  when v[/[<>=]/]
    v.scan(/(<=?|>=?|=)([^<>=]+)/).map { |part|
      p part
      "#{k} #{part.first} '#{part.last.strip}'"
    }
  else
    # "lower(#{k}) LIKE '%#{v.downcase}%'"      # (works)
    ("lower(#{k}) LIKE ?", '%#{v.downcase}%')   # doesn't work
  end
end
But I get the error:
selectors.rb:38: syntax error, unexpected keyword_end, expecting $end from C:/../1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require'
What could I be doing wrong?
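The failing else branch is a parse error because a bare ("sql", value) pair is not a valid Ruby expression; that is what produces the "unexpected keyword_end" message. One way the method could be rewritten, sketched here under the assumption that the caller hands the result to ActiveRecord's where(sql, *binds) rather than joining raw strings, is to have each branch return an array of [sql-with-placeholders, bind values]. The method and variable names follow the question; the usage at the bottom is illustrative.

```ruby
# A sketch of a parameterized string_selector: each branch returns
# [sql_fragment, *bind_values] so user input travels as bind values,
# never interpolated into the SQL text.
# NOTE: k is still interpolated into the SQL (column names cannot be
# bound), so it must come from a whitelist of known columns, not from
# user input.
def string_selector(k, v)
  case
  when v.include?('|')
    # "foo|bar" means: match either term.
    left, right = v.split('|', 2)
    ["lower(#{k}) LIKE ? OR lower(#{k}) LIKE ?",
     "%#{left.downcase}%", "%#{right.downcase}%"]
  when v =~ /[<>=]/
    # ">=21" means: numeric comparison; keep the operator, bind the operand.
    op, operand = v.scan(/(<=?|>=?|=)([^<>=]+)/).first
    ["#{k} #{op} ?", operand.strip]
  else
    # The branch that raised the syntax error in the question: return an
    # array instead of a bare parenthesized pair.
    ["lower(#{k}) LIKE ?", "%#{v.downcase}%"]
  end
end

sql, *binds = string_selector('name', 'foo')
# sql   => "lower(name) LIKE ?"
# binds => ["%foo%"]
```

With ActiveRecord this would be used roughly as Model.where(sql, *binds); combining several selectors then means joining the SQL fragments with ' and ' while concatenating all the bind arrays, instead of joining fully interpolated strings.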
Accepted answer
The problem was with my version of numexpr (2.4.4); updating to numexpr 2.4.6 fixes the problem. GitHub issue: https://github.com/pydata/pandas/issues/12167