Converting text to format string [duplicate]
This question already has an answer here:
- INT conversion does not work [duplicate] (1 answer)
Is there a Pythonic way to convert a text (from a file, for example) into a format string?
I mean, for a text file containing: this is a {format} string.
Load it in Python and have it become like the triple-quoted format string:
var = """this is a {format} string."""
I know how to just read the file and replace the curly braces, but I was wondering if there is already something that does this.
Thanks
Edit: This is the code I've tried:
with open('file.txt', 'r') as rs:
    text = rs.read()
print(text)
# str.format returns a new string (strings are immutable),
# so the result must be assigned back.
text = text.format(format='something_else')
print(text)
It just prints the original text file. I'd like to know if there is a more Pythonic way than my having to write a class that does this. Thanks
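One standard-library option close to what the question asks for is str.format_map, which formats a template loaded at runtime. The Default class below is a common recipe (not from the original post) that additionally leaves unknown placeholders intact instead of raising KeyError:

```python
# A minimal sketch: format a template string read at runtime.
# The literal here stands in for the contents of a text file.
template = "this is a {format} string."

class Default(dict):
    """Leave unknown placeholders untouched instead of raising KeyError."""
    def __missing__(self, key):
        return "{" + key + "}"

result = template.format_map(Default(format="something_else"))
print(result)  # this is a something_else string.
```

With a plain dict (or keyword arguments to str.format), a placeholder missing from the mapping raises KeyError; the `__missing__` hook is what makes partial substitution safe.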
Source: https://stackoverflow.com/questions/39942507
Accepted answer
What you're looking for is either the ! method:

!($"name".isin(data:_*))

or the not function:

import org.apache.spark.sql.functions._
not($"name".isin(data:_*))
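For comparison only (not part of the original Spark answer): the same not-in filtering pattern in pandas uses ~ to negate an isin mask. The data and column name here are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"name": ["alice", "bob", "carol"]})
data = ["alice", "bob"]

# ~ flips the boolean mask, keeping rows whose name is NOT in data.
excluded = df[~df["name"].isin(data)]
print(excluded)  # only the "carol" row remains
```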
Related Q&A
- def StratifiedKFold(nSamples: Int, k: Int, labels: List[Int], shuffle: Boolean = false): (Map[Int,List[List[Int]]],Int) = {
    var idxs = (0 until nSamples).toArray
    val unqLabels = labels.distinct
    val noOfLabels = unqLabels.length
    val idxsbyl ...
-
Is there a way to use Dataframe's isin() without using a list? [2022-04-14]
Subquery:
spark.createDataFrame([(1, ), (2, ), (3, ), (4, )], ["x"]).createTempView("df1")
spark.createDataFrame([(1, ), (3, )], ["x"]).createTempView("df2")
spark.sql("SELECT * FROM df1 WHERE x IN (SELECT x FROM df2)").show()
+---+ ... -
To understand it, let's take an example:
scala> def echo(args: String*) = for (arg <- args) println(arg)
echo: (args: String*)Unit
scala> val arr = Array("What's", "up", "doc?")
arr: Array[String] = Array(What's, up, doc?)
scala> echo(arr)
<console>:14: error: type mismatch; foun ... -
You can convert the Array[Byte] to a Java String, and then, given your whitelist List, match it with isin(whitelist:_*).
According to the documentation, the isin method accepts a java.lang.Object or a Seq(java.lang.Object): https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/Column.html#isin(scala.collection.Seq) -
One approach is to lowercase (or uppercase) both the Series and the items in the list:
df[df['column'].str.lower().isin([x.lower() for x in mylist])]
The advantage is that we don't save changes to the original df or list, making the operation more efficient. Consider this dummy DF:
   Color  Val
0  Green    1
1  Green    1
2    Red    2
3    Red    2
4   Blue    3
5   Blue    3
For the list l:
l = ['green', 'BLUE' ...
-
Try to use df.columns as your header instead:
df[df[df.columns[1]].isin(list)]
-
Use the stream method as follows:
df.filter(col("something").isin(selected.stream().toArray(String[]::new)))
-
Pandas index isin method [2023-05-18]
Use str.contains and pass a regex pattern:
In[5]: df.index.str.contains('01|02')
Out[5]: array([ True, True, False], dtype=bool)
isin looks for exact matches, which is why you get back an array of all False. -
You can use Scala's syntax for converting a collection into a "repeated parameter" (AKA "varargs" in Java-speak); see section 4.6.2 of the Scala Language Specification:
val list = List("type1","type2")
df.where(col("pType").isin(list: _*))
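The case-insensitive isin item above can be sketched end to end as follows, using the dummy DF shown there:

```python
import pandas as pd

df = pd.DataFrame({"Color": ["Green", "Green", "Red", "Red", "Blue", "Blue"],
                   "Val": [1, 1, 2, 2, 3, 3]})
l = ["green", "BLUE"]

# Lowercase both sides so the membership test ignores case,
# without mutating the original DataFrame or the list.
matches = df[df["Color"].str.lower().isin([x.lower() for x in l])]
print(matches)  # the Green and Blue rows
```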