Puppet kubernetes module
I installed the Puppet kubernetes module (https://github.com/garethr/garethr-kubernetes/blob/master/README.md) to manage the pods of my Kubernetes cluster.
I am not able to get any pod information back when I run
puppet resource kubernetes_pod
It just returns an empty line.
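Two checks that can narrow this down (assuming a standard Puppet AIO install; the module builds its types on the kubeclient gem, which needs to be installed into Puppet's bundled Ruby rather than the system Ruby):

/opt/puppetlabs/puppet/bin/gem list kubeclient
puppet resource kubernetes_pod --debug

The second command is the same query with --debug enabled, which may surface a connection or certificate error that otherwise shows up only as the empty output above.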
I am using a minikube k8s cluster to test the puppet module against.
cat /etc/puppetlabs/puppet/kubernetes.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://<ip address>:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/apiserver.crt
    client-key: /root/.minikube/apiserver.key
I am able to use curl with the certs to talk to the K8s REST API:

curl --cacert /root/.minikube/ca.crt --cert /root/.minikube/apiserver.crt --key /root/.minikube/apiserver.key https://<minikube ip>:8443/api/v1/pods/
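For reference, this is the shape of pod declaration the module is meant to manage once the connection works — a minimal sketch modeled on the module's README, where 'sample-pod', 'container-name', and the nginx image are illustrative placeholders:

kubernetes_pod { 'sample-pod':
  ensure   => present,
  metadata => {
    namespace => 'default',
  },
  spec     => {
    containers => [{
      name  => 'container-name',
      image => 'nginx',
    }],
  },
}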
Source: https://stackoverflow.com/questions/49787683
Best answer
Can you please try this?
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.0",
  "org.apache.spark" %% "spark-sql" % "2.0.0",
  "cc.mallet" % "mallet" % "2.0.7",
  "com.amazonaws" % "aws-java-sdk" % "1.11.229",
  "com.datastax.spark" % "spark-cassandra-connector_2.11" % "2.0.0" exclude("joda-time", "joda-time"),
  "joda-time" % "joda-time" % "2.3"
)
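After updating build.sbt, sbt's built-in eviction report is a quick way to confirm that only the pinned joda-time 2.3 ends up on the classpath, since the exclude above keeps the connector's transitive joda-time out (requires sbt 0.13.6 or later):

sbt evicted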