PySpark configuration issue
Environment:
CentOS 7
Spark 2.4.0
Hadoop 2.9.2
Scala 2.12.8
Python 3.6.6



When I start PySpark with /opt/spark/bin/pyspark, it fails with the following error:



Python 3.6.6 (default, Jan 29 2019, 20:02:39)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux
Type "help", "copyright", "credits" or "license" for more information.
2019-01-30 19:47:26 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
/opt/spark/python/pyspark/shell.py:45: UserWarning: Failed to initialize Spark session.
  warnings.warn("Failed to initialize Spark session.")
Traceback (most recent call last):
  File "/opt/spark/python/pyspark/shell.py", line 41, in <module>
    spark = SparkSession._create_shell_session()
  File "/opt/spark/python/pyspark/sql/session.py", line 573, in _create_shell_session
    return SparkSession.builder
  File "/opt/spark/python/pyspark/sql/session.py", line 173, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "/opt/spark/python/pyspark/context.py", line 349, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "/opt/spark/python/pyspark/context.py", line 118, in __init__
    conf, jsc, profiler_cls)
  File "/opt/spark/python/pyspark/context.py", line 187, in _do_init
    self._accumulatorServer = accumulators._start_update_server(auth_token)
  File "/opt/spark/python/pyspark/accumulators.py", line 291, in _start_update_server
    server = AccumulatorServer(("localhost", 0), _UpdateRequestHandler, auth_token)
  File "/opt/spark/python/pyspark/accumulators.py", line 274, in __init__
    SocketServer.TCPServer.__init__(self, server_address, RequestHandlerClass)
  File "/usr/local/python3/lib/python3.6/socketserver.py", line 453, in __init__
    self.server_bind()
  File "/usr/local/python3/lib/python3.6/socketserver.py", line 467, in server_bind
    self.socket.bind(self.server_address)
socket.gaierror: [Errno -2] Name or service not known
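The traceback ends in PySpark's accumulator server binding a TCP socket to `("localhost", 0)`. The failing call can be reproduced in isolation with plain Python; this is only a diagnostic sketch, not PySpark code, but if it raises the same `socket.gaierror`, the machine cannot resolve the name `localhost` (commonly a missing `127.0.0.1 localhost` entry in /etc/hosts):

```python
import socket

# PySpark's accumulator server does the equivalent of this bind.
# Port 0 asks the OS for any free port; the name "localhost" must
# resolve for the bind to succeed, otherwise socket.gaierror is raised.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("localhost", 0))
    print("localhost resolves; bound to port", s.getsockname()[1])
finally:
    s.close()
```

If this snippet fails on the same machine where spark-shell works, that is consistent with the symptom: the JVM resolves hostnames through its own path, while the Python accumulator server depends on the system resolver.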


By contrast, spark-shell starts without any problem:



2019-01-30 19:58:49 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://datanode1:4040
Spark context available as 'sc' (master = local[*], app id = local-1548896336336).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.

scala>


Any help with this problem would be appreciated.
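For reference, the check I would run first (an assumption about the cause, not a confirmed fix): verify that `localhost` is resolvable on this host, since that lookup is exactly what the failing bind needs.

```shell
# Ask the system resolver (NSS) what "localhost" maps to; empty output
# means the name does not resolve, matching "[Errno -2] Name or service not known".
getent hosts localhost

# A healthy /etc/hosts on CentOS 7 normally contains lines like:
#   127.0.0.1   localhost localhost.localdomain
#   ::1         localhost localhost.localdomain
grep -i localhost /etc/hosts
```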
Tags: linux, command-line, bash, python
asked Jan 31 at 1:00 by Dexuan28