Remove all references to stacks, including .po files, translations. (#1239)

Signed-off-by: michael vincerra <michael.vincerra@intel.com>
This commit is contained in:
michael vincerra
2022-02-04 14:26:45 -08:00
committed by GitHub
parent ec0b823a91
commit b188657398
6 changed files with 1 additions and 1573 deletions


@@ -1,192 +0,0 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2019, many
# This file is distributed under the same license as the Clear Linux*
# Project Docs package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2019.
#
msgid ""
msgstr ""
"Project-Id-Version: Clear Linux* Project Docs latest\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2019-08-09 14:33-0700\n"
"PO-Revision-Date: 2019-09-04 16:21-0008\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh-Hans\n"
"Language-Team: zh-Hans\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Intel® International Developer Studio Version 4.1.273.0\n"
#: ../../guides/stacks/dars.rst:4
msgid "Data Analytics Reference Stack"
msgstr "数据分析参考堆栈"
#: ../../guides/stacks/dars.rst:6
msgid ""
"This guide explains how to use the :abbr:`DARS (Data Analytics Reference "
"Stack)`, and to optionally build your own DARS container image."
msgstr "本指南说明了如何使用 :abbr:`DARS (Data Analytics Reference Stack)`,以及如何选择性地构建您自己的 DARS 容器映像。"
#: ../../guides/stacks/dars.rst:9
msgid ""
"Any system that supports Docker\\* containers can be used with DARS. The"
" steps in this guide use |CL-ATTR| as the host system."
msgstr "任何支持 Docker\\* 容器的系统都可与 DARS 一起使用。本指南中的这些步骤使用 |CL-ATTR| 作为主机系统。"
#: ../../guides/stacks/dars.rst:17
msgid "The Data Analytics Reference Stack release"
msgstr "数据分析参考堆栈版本"
#: ../../guides/stacks/dars.rst:19
msgid ""
"The Data Analytics Reference Stack (DARS) provides developers and "
"enterprises a straightforward, highly optimized software stack for "
"storing and processing large amounts of data. More detail is available "
"on the `DARS architecture and performance benchmarks`_."
msgstr "数据分析参考堆栈 (DARS) 为开发人员和企业提供了一个简单、高度优化的软件堆栈来存储和处理大量数据。更多详细信息请参阅 `DARS architecture and performance benchmarks`_。"
#: ../../guides/stacks/dars.rst:23
msgid ""
"The Data Analytics Reference Stack provides two pre-built Docker images, "
"available on `Docker Hub`_:"
msgstr "数据分析参考堆栈提供了两个预构建的 Docker 映像,可在 `Docker Hub`_ 获得:"
#: ../../guides/stacks/dars.rst:26
msgid "A |CL|-derived `DARS with OpenBlas`_ stack optimized for `OpenBLAS`_"
msgstr "一个从 |CL| 派生且针对 `OpenBLAS`_ 优化的 `DARS with OpenBlas`_ 堆栈"
#: ../../guides/stacks/dars.rst:27
msgid "A |CL|-derived `DARS with Intel® MKL`_ stack optimized for `MKL`_"
msgstr "一个从 |CL| 派生且针对 `MKL`_ 优化的 `DARS with Intel® MKL`_ 堆栈"
#: ../../guides/stacks/dars.rst:29
msgid ""
"We recommend you view the latest component versions for each image in the"
" :file:`README` found in the `Data Analytics Reference Stack`_ GitHub\\* "
"repository. Because |CL| is a rolling distribution, the package version "
"numbers in the |CL|-based containers may not be the latest released by "
"|CL|."
msgstr "我们建议您在 `Data Analytics Reference Stack`_ GitHub\\* 存储库的 :file:`README` 中查看每个映像的最新组件版本。由于 |CL| 是滚动发行版,基于 |CL| 的容器中的软件包版本号可能不是 |CL| 最新发布的版本号。"
#: ../../guides/stacks/dars.rst:36
msgid ""
"The Data Analytics Reference Stack is a collective work, and each piece "
"of software within the work has its own license. Please see the `DARS "
"Terms of Use`_ for more details about licensing and usage of the Data "
"Analytics Reference Stack."
msgstr "数据分析参考堆栈是一项集体成果,成果中的每一个软件都有自己的许可证。有关数据分析参考堆栈的许可和使用的更多详细信息,请参阅 `DARS Terms of Use`_。"
#: ../../guides/stacks/dars.rst:42
msgid "Using the Docker images"
msgstr "使用 Docker 映像"
#: ../../guides/stacks/dars.rst:44
msgid ""
"To immediately start using the latest stable DARS images, pull an image "
"directly from `Docker Hub`_. This example uses the `DARS with Intel® "
"MKL`_ Docker image."
msgstr "要立即开始使用最新的稳定版 DARS 映像,请直接从 `Docker Hub`_ 提取映像。本示例使用 `DARS with Intel® MKL`_ Docker 映像。"
#: ../../guides/stacks/dars.rst:48
msgid "Once you have downloaded the image, you can run it with"
msgstr "下载完映像后,您可以使用以下命令运行它:"
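The pull/run commands were elided from this extracted view; they can be sketched in shell. This is a hedged sketch: the image name `clearlinux/stacks-dars-mkl` and the `--ulimit` value are assumptions drawn from the surrounding text, not confirmed by this diff, so verify them against the DARS README on Docker Hub.

```shell
# Hedged sketch: pull and run the DARS with Intel MKL image.
# DARS_IMAGE and the ulimit value are assumptions; verify against the DARS README.
DARS_IMAGE="clearlinux/stacks-dars-mkl"
PULL_CMD="docker pull ${DARS_IMAGE}"
# --ulimit nofile raises the open-file limit that the Spark engine needs.
RUN_CMD="docker run --ulimit nofile=1000000:1000000 -it ${DARS_IMAGE} /bin/bash"
# Commands are printed rather than executed so the sketch is safe to run anywhere;
# paste the printed lines into a host that has Docker installed.
echo "${PULL_CMD}"
echo "${RUN_CMD}"
```

Running the second command drops you into a bash shell inside the container, as the guide describes.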
#: ../../guides/stacks/dars.rst:54
msgid ""
"This will launch the image and drop you into a bash shell inside the "
"container. You will see output similar to the following:"
msgstr "此命令将启动映像,并进入容器内的 bash shell 中。您将看到类似以下内容的输出:"
#: ../../guides/stacks/dars.rst:75
msgid ""
"The :command:`--ulimit nofile` parameter is currently required to raise "
"the limit on the number of files the Spark engine opens at certain "
"points."
msgstr ":command:`--ulimit nofile` 参数是当前必需的,用于提高 Spark 引擎在某些时刻打开的文件数量上限。"
#: ../../guides/stacks/dars.rst:80
msgid "Building DARS images"
msgstr "构建 DARS 映像"
#: ../../guides/stacks/dars.rst:82
msgid ""
"If you choose to build your own DARS container images, you can customize "
"them as needed. Use the provided Dockerfile as a baseline."
msgstr "如果选择构建您自己的 DARS 容器映像,您可以根据需要对它们进行自定义。将提供的 Dockerfile 用作基准。"
#: ../../guides/stacks/dars.rst:85
msgid ""
"To construct images with |CL|, start with a |CL| development platform "
"that has the :command:`containers-basic-dev` bundle installed. Learn more"
" about bundles and installing them by using :ref:`swupd-guide`."
msgstr "要使用 |CL| 构建映像,请从安装了 :command:`containers-basic-dev` 捆绑包的 |CL| 开发平台开始。使用 :ref:`swupd-guide` 了解有关捆绑包和安装捆绑包的更多信息。"
#: ../../guides/stacks/dars.rst:89
msgid "Clone the `Data Analytics Reference Stack`_ GitHub\\* repository."
msgstr "克隆 `Data Analytics Reference Stack`_ GitHub\\* 存储库。"
#: ../../guides/stacks/dars.rst:95
msgid ""
"Inside the DARS directory, run :command:`make` to build OpenBLAS and MKL "
"images."
msgstr "在 DARS 目录中,运行 :command:`make` 来构建 OpenBLAS 和 MKL 映像。"
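The build flow above (clone, `make`, `make baseline`, inspect) can be sketched as below. The repository URL is a hypothetical placeholder — this diff does not show the real clone address — so substitute the URL from the Data Analytics Reference Stack GitHub page.

```shell
# Hedged sketch of the DARS build flow described in the guide.
# REPO_URL is a hypothetical placeholder -- replace it with the real
# Data Analytics Reference Stack repository address before use.
REPO_URL="https://github.com/example/dars.git"
echo "git clone ${REPO_URL}"
echo "cd dars && make"       # builds the OpenBLAS and MKL images
echo "make baseline"         # builds the baseline CentOS image (can take a while)
echo "docker images"         # inspect the resulting images when done
```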
#: ../../guides/stacks/dars.rst:101
msgid ""
"Run :command:`make baseline` to build the baseline CentOS image. "
"Depending on the system, it may take a while to finish building."
msgstr "运行 :command:`make baseline` 构建基准 CentOS 映像。根据系统的不同,可能需要一段时间才能完成构建。"
#: ../../guides/stacks/dars.rst:108
msgid "Once completed, check the resulting images with :command:`Docker`"
msgstr "完成后,使用 :command:`Docker` 检查生成的映像"
#: ../../guides/stacks/dars.rst:114
msgid ""
"You can use any of the resulting images to launch fully functional "
"containers. If you need to customize the containers, you can edit the "
"provided :file:`Dockerfile`."
msgstr "您可以使用任何一个生成的映像来启动功能齐全的容器。如果需要自定义容器,您可以编辑所提供的 :file:`Dockerfile`。"
#~ msgid ""
#~ "This tutorial shows you how to use"
#~ " the Data Analytics Reference Stack "
#~ "(DARS), and to optionally build your "
#~ "own images with the baseline Dockerfiles"
#~ " provided in the `DARS repository`_. "
#~ "Our assumption is that |CL-ATTR| "
#~ "is the host. However, any system "
#~ "that supports Docker\\* containers can "
#~ "be used to follow these steps."
#~ msgstr ""
#~ "本教程介绍如何使用数据分析参考堆栈 (DARS),以及如何使用 `DARS repository`_"
#~ " 中提供的基准 Dockerfiles 来选择构建您自己的映像。我们假设 |CL-"
#~ "ATTR| 是主机。但是,任何支持 Docker\\* 容器的系统都可以用来执行这些步骤。"
#~ msgid ""
#~ "If you choose to build your own"
#~ " DARS container images, you can "
#~ "customize them as needed. Use the "
#~ "provided Dockerfile as a baseline. To"
#~ " construct images with |CL|, start "
#~ "with a |CL| development platform that"
#~ " has the :command:`containers-basic-dev`"
#~ " bundle installed. Learn more about "
#~ "bundles and installing them by using "
#~ ":ref:`swupd-guide`."
#~ msgstr ""
#~ "如果选择构建您自己的 DARS 容器映像,您可以根据需要对它们进行自定义。将提供的 Dockerfile"
#~ " 用作基准。要使用 |CL| 构建映像,请从安装了 :command"
#~ ":`containers-basic-dev` 捆绑包的 |CL| 开发平台开始。使用"
#~ " :ref:`swupd-guide` 了解有关捆绑包和安装捆绑包的更多信息。"
#~ msgid "First, clone the `DARS repository`_ from GitHub."
#~ msgstr "首先,从 GitHub 中克隆 `DARS repository`_。"


@@ -1,656 +0,0 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2019, many
# This file is distributed under the same license as the Clear Linux*
# Project Docs package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2019.
#
msgid ""
msgstr ""
"Project-Id-Version: Clear Linux* Project Docs latest\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2019-08-09 14:33-0700\n"
"PO-Revision-Date: 2019-09-04 16:21-0008\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh-Hans\n"
"Language-Team: zh-Hans\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Intel® International Developer Studio Version 4.1.273.0\n"
#: ../../guides/stacks/dlrs/dlrs.rst:4
msgid "Deep Learning Reference Stack"
msgstr "深度学习参考堆栈"
#: ../../guides/stacks/dlrs/dlrs.rst:6
msgid ""
"This guide describes how to run benchmarking workloads for TensorFlow\\*,"
" PyTorch\\*, and Kubeflow in |CL-ATTR| using the Deep Learning Reference "
"Stack."
msgstr "本教程介绍如何在 |CL-ATTR| 中使用深度学习参考堆栈运行 TensorFlow\\*、PyTorch\\* 和 Kubeflow 基准工作负载。"
#: ../../guides/stacks/dlrs/dlrs.rst:14
msgid "Overview"
msgstr "概述"
#: ../../guides/stacks/dlrs/dlrs.rst:16
msgid ""
"We created the Deep Learning Reference Stack to help AI developers "
"deliver the best experience on Intel® Architecture. This stack reduces "
"complexity common with deep learning software components, provides "
"flexibility for customized solutions, and enables you to quickly "
"prototype and deploy Deep Learning workloads. Use this guide to run "
"benchmarking workloads on your solution."
msgstr "我们打造了深度学习参考堆栈来帮助 AI 开发人员在英特尔架构上获得最佳开发体验。此堆栈降低了深度学习软件组件常见的复杂性,为自定义解决方案提供了灵活性,并使您能够快速构建原型并部署深度学习工作负载。使用本教程可在您的解决方案上运行基准工作负载。"
#: ../../guides/stacks/dlrs/dlrs.rst:23
msgid "The Deep Learning Reference Stack is available in the following versions:"
msgstr "深度学习参考堆栈有以下版本:"
#: ../../guides/stacks/dlrs/dlrs.rst:25
msgid ""
"`Intel MKL-DNN-VNNI`_, which is optimized using Intel® Math Kernel "
"Library for Deep Neural Networks (Intel® MKL-DNN) primitives and "
"introduces support for Intel® AVX-512 Vector Neural Network Instructions "
"(VNNI)."
msgstr "`Intel MKL-DNN-VNNI`_,它使用面向深度神经网络的英特尔®数学内核库(英特尔® MKL-DNN)原语进行优化,并引入了对英特尔® AVX-512 矢量神经网络指令 (VNNI) 的支持。"
#: ../../guides/stacks/dlrs/dlrs.rst:28
msgid ""
"`Intel MKL-DNN`_, which includes the TensorFlow framework optimized using"
" Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) "
"primitives."
msgstr "`Intel MKL-DNN`_,它包括使用面向深度神经网络的英特尔®数学内核库(英特尔® MKL-DNN)原语优化的 TensorFlow 框架。"
#: ../../guides/stacks/dlrs/dlrs.rst:31
msgid "`Eigen`_, which includes `TensorFlow`_ optimized for Intel® architecture."
msgstr "`Eigen`_,它包括针对英特尔®架构优化的 `TensorFlow`_。"
#: ../../guides/stacks/dlrs/dlrs.rst:32
msgid "`PyTorch with OpenBLAS`_, which includes PyTorch with OpenBlas."
msgstr "`PyTorch with OpenBLAS`_,它包括 PyTorch with OpenBLAS。"
#: ../../guides/stacks/dlrs/dlrs.rst:33
msgid ""
"`PyTorch with Intel MKL-DNN`_, which includes PyTorch optimized using "
"Intel® Math Kernel Library (Intel® MKL) and Intel MKL-DNN."
msgstr "`PyTorch with Intel MKL-DNN`_,它包括使用英特尔®数学内核库(英特尔® MKL)和英特尔 MKL-DNN 优化的 PyTorch。"
#: ../../guides/stacks/dlrs/dlrs.rst:38
msgid ""
"To take advantage of the Intel® AVX-512 and VNNI functionality with the "
"Deep Learning Reference Stack, you must use the following hardware:"
msgstr "要通过深度学习参考堆栈利用英特尔® AVX-512 和 VNNI 功能,您必须使用以下硬件:"
#: ../../guides/stacks/dlrs/dlrs.rst:41
msgid "Intel® AVX-512 images require an Intel® Xeon® Scalable Platform"
msgstr "英特尔® AVX-512 映像需要使用英特尔®至强®可扩展平台"
#: ../../guides/stacks/dlrs/dlrs.rst:42
msgid "VNNI requires a 2nd generation Intel® Xeon® Scalable Platform"
msgstr "VNNI 需要使用第二代英特尔®至强®可扩展平台"
#: ../../guides/stacks/dlrs/dlrs.rst:45
msgid "Stack features"
msgstr "堆栈功能和特性"
#: ../../guides/stacks/dlrs/dlrs.rst:47
msgid "`DLRS V3.0`_ release announcement."
msgstr "`DLRS V3.0`_ 发布公告。"
#: ../../guides/stacks/dlrs/dlrs.rst:48
msgid "Deep Learning Reference Stack v2.0 including current `PyTorch benchmark`_."
msgstr "深度学习参考堆栈 v2.0,包括最新的 `PyTorch benchmark`_。"
#: ../../guides/stacks/dlrs/dlrs.rst:50
msgid ""
"Deep Learning Reference Stack v1.0 including current `TensorFlow "
"benchmark`_ results."
msgstr "深度学习参考堆栈 v1.0,包括最新的 `TensorFlow benchmark`_ 结果。"
#: ../../guides/stacks/dlrs/dlrs.rst:52
msgid ""
"`DLRS Release notes`_ on Github\\* for the latest release of Deep "
"Learning Reference Stack."
msgstr "请参阅 Github\\* 上的 `DLRS Release notes`_,了解深度学习参考堆栈的最新版本。"
#: ../../guides/stacks/dlrs/dlrs.rst:57
msgid ""
"The Deep Learning Reference Stack is a collective work, and each piece of"
" software within the work has its own license. Please see the `DLRS "
"Terms of Use`_ for more details about licensing and usage of the Deep "
"Learning Reference Stack."
msgstr "深度学习参考堆栈是一项集体成果,成果中的每一个软件都有自己的许可证。有关深度学习参考堆栈的许可和使用的更多详细信息,请参阅 `DLRS Terms of Use`_。"
#: ../../guides/stacks/dlrs/dlrs.rst:62
msgid "Prerequisites"
msgstr "必备条件"
#: ../../guides/stacks/dlrs/dlrs.rst:64
msgid ":ref:`Install <bare-metal-install-desktop>` |CL| on your host system"
msgstr "在主机系统上 :ref:`Install <bare-metal-install-desktop>` |CL|"
#: ../../guides/stacks/dlrs/dlrs.rst:65
msgid ":command:`containers-basic` bundle"
msgstr ":command:`containers-basic` 捆绑包"
#: ../../guides/stacks/dlrs/dlrs.rst:66
msgid ":command:`cloud-native-basic` bundle"
msgstr ":command:`cloud-native-basic` 捆绑包"
#: ../../guides/stacks/dlrs/dlrs.rst:68
msgid ""
"In |CL|, :command:`containers-basic` includes Docker\\*, which is "
"required for TensorFlow and PyTorch benchmarking. Use the "
":command:`swupd` utility to check if :command:`containers-basic` and "
":command:`cloud-native-basic` are present:"
msgstr "在 |CL| 中,:command:`containers-basic` 包括 TensorFlow 和 PyTorch 基准测试所必需的 Docker\\*。使用 :command:`swupd` 实用程序检查 :command:`containers-basic` 和 :command:`cloud-native-basic` 是否存在:"
#: ../../guides/stacks/dlrs/dlrs.rst:77
msgid ""
"To install the :command:`containers-basic` or :command:`cloud-native-"
"basic` bundles, enter:"
msgstr "要安装 :command:`containers-basic` 或 :command:`cloud-native-basic` 捆绑包,请输入:"
#: ../../guides/stacks/dlrs/dlrs.rst:84
msgid ""
"Docker is not started upon installation of the :command:`containers-"
"basic` bundle. To start Docker, enter:"
msgstr "安装 :command:`containers-basic` 捆绑包后 Docker 不会启动。要启动 Docker,请输入:"
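The bundle check, install, and service start-up described above can be sketched as follows. `swupd bundle-list` and `swupd bundle-add` are the standard Clear Linux bundle commands; the sketch prints the commands rather than executing them, since they need root on a Clear Linux host.

```shell
# Hedged sketch: check for the required bundles, install them if missing,
# then start the Docker service. Run the printed commands on a Clear Linux host.
CHECK_CMD="sudo swupd bundle-list | grep -E 'containers-basic|cloud-native-basic'"
ADD_CMD="sudo swupd bundle-add containers-basic cloud-native-basic"
START_CMD="sudo systemctl start docker"
echo "${CHECK_CMD}"
echo "${ADD_CMD}"
echo "${START_CMD}"
```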
#: ../../guides/stacks/dlrs/dlrs.rst:91
msgid ""
"To ensure that Kubernetes is correctly installed and configured, follow "
"the instructions in :ref:`kubernetes`."
msgstr "要确保正确安装和配置 Kubernetes,请遵循 :ref:`kubernetes` 中的说明。"
#: ../../guides/stacks/dlrs/dlrs.rst:95
msgid "Version compatibility"
msgstr "版本兼容性"
#: ../../guides/stacks/dlrs/dlrs.rst:97
msgid "We validated these steps against the following software package versions:"
msgstr "我们根据以下软件包版本验证了这些步骤:"
#: ../../guides/stacks/dlrs/dlrs.rst:99
msgid "|CL| 26240 (Minimum supported version)"
msgstr "|CL| 26240(支持的最低版本)"
#: ../../guides/stacks/dlrs/dlrs.rst:100
msgid "Docker 18.06.1"
msgstr "Docker 18.06.1"
#: ../../guides/stacks/dlrs/dlrs.rst:101
msgid "Kubernetes 1.11.3"
msgstr "Kubernetes 1.11.3"
#: ../../guides/stacks/dlrs/dlrs.rst:102
msgid "Go 1.11.12"
msgstr "Go 1.11.12"
#: ../../guides/stacks/dlrs/dlrs.rst:107
msgid ""
"The Deep Learning Reference Stack was developed to provide the best user "
"experience when executed on a |CL| host. However, as the stack runs in a"
" container environment, you should be able to complete the following "
"sections of this guide on other Linux* distributions, provided they "
"comply with the Docker*, Kubernetes* and Go* package versions listed "
"above. Look for your distribution documentation on how to update packages"
" and manage Docker services."
msgstr "深度学习参考堆栈是为了在 |CL| 主机上执行时提供最佳用户体验而开发的。但是,由于该堆栈在容器环境中运行,只要其他 Linux* 发行版满足上面列出的 Docker*、Kubernetes* 和 Go* 软件包版本,您就应该能够在这些发行版上完成本教程的以下部分。请查阅您的发行版文档,了解如何更新软件包和管理 Docker 服务。"
#: ../../guides/stacks/dlrs/dlrs.rst:112
msgid "TensorFlow single and multi-node benchmarks"
msgstr "TensorFlow 单节点和多节点基准测试"
#: ../../guides/stacks/dlrs/dlrs.rst:114
msgid ""
"This section describes running the `TensorFlow Benchmarks`_ in single "
"node. For multi-node testing, replicate these steps for each node. These "
"steps provide a template to run other benchmarks, provided that they can "
"invoke TensorFlow."
msgstr "本部分介绍在单节点中运行 `TensorFlow Benchmarks`_。对于多节点测试,请为每个节点重复这些步骤。这些步骤提供了运行其他基准测试的模板,前提是它们可以调用 TensorFlow。"
#: ../../guides/stacks/dlrs/dlrs.rst:121
msgid ""
"Performance test results for the Deep Learning Reference Stack and for "
"this guide were obtained using `runc` as the runtime."
msgstr "深度学习参考堆栈和本教程的性能测试结果是使用 `runc` 作为运行时获得的。"
#: ../../guides/stacks/dlrs/dlrs.rst:124
msgid ""
"Download either the `Eigen`_ or the `Intel MKL-DNN`_ Docker image from "
"`Docker Hub`_."
msgstr "从 `Docker Hub`_ 下载 `Eigen`_ 或 `Intel MKL-DNN`_ Docker 映像。"
#: ../../guides/stacks/dlrs/dlrs.rst:127 ../../guides/stacks/dlrs/dlrs.rst:169
msgid "Run the image with Docker:"
msgstr "使用 Docker 运行映像:"
#: ../../guides/stacks/dlrs/dlrs.rst:136 ../../guides/stacks/dlrs/dlrs.rst:177
msgid ""
"Launching the Docker image with the :command:`-i` argument starts "
"interactive mode within the container. Enter the following commands in "
"the running container."
msgstr "使用 :command:`-i` 参数启动 Docker 映像,从而在容器内启动交互模式。在正在运行的容器中输入以下命令。"
#: ../../guides/stacks/dlrs/dlrs.rst:140
msgid "Clone the benchmark repository in the container:"
msgstr "克隆容器中的基准测试存储库:"
#: ../../guides/stacks/dlrs/dlrs.rst:146 ../../guides/stacks/dlrs/dlrs.rst:187
msgid "Execute the benchmark script:"
msgstr "执行基准测试脚本:"
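The clone-and-run steps above can be sketched as below. The repository is the public TensorFlow benchmarks repo named by the guide; the script path and flags are illustrative and vary across versions of that repository, so treat them as assumptions.

```shell
# Hedged sketch: inside the running DLRS container, clone the TensorFlow
# benchmarks and run one model. Script path and flags are illustrative.
CLONE_CMD="git clone https://github.com/tensorflow/benchmarks.git"
BENCH_CMD="python benchmarks/scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py --model=resnet50 --batch_size=32"
echo "${CLONE_CMD}"
echo "${BENCH_CMD}"
```

As the guide notes, `--model` can be replaced with any model the TensorFlow benchmarks support.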
#: ../../guides/stacks/dlrs/dlrs.rst:154
msgid ""
"You can replace the model with one of your choice supported by the "
"TensorFlow benchmarks."
msgstr "您可以将该模型更换为 TensorFlow 支持的其他模型。"
#: ../../guides/stacks/dlrs/dlrs.rst:157
msgid ""
"If you are using an FP32 based model, it can be converted to an int8 "
"model using `Intel® quantization tools`_."
msgstr "如果使用基于 FP32 的模型,可以使用 `Intel® quantization tools`_ 将其转换为 int8 模型。"
#: ../../guides/stacks/dlrs/dlrs.rst:161
msgid "PyTorch single and multi-node benchmarks"
msgstr "PyTorch 单节点和多节点基准测试"
#: ../../guides/stacks/dlrs/dlrs.rst:163
msgid ""
"This section describes running the `PyTorch benchmarks`_ for Caffe2 in "
"single node."
msgstr "本部分介绍在单节点中运行针对 Caffe2 的 `PyTorch benchmarks`_。"
#: ../../guides/stacks/dlrs/dlrs.rst:166
msgid ""
"Download either the `PyTorch with OpenBLAS`_ or the `PyTorch with Intel "
"MKL-DNN`_ Docker image from `Docker Hub`_."
msgstr "从 `Docker Hub`_ 下载 `PyTorch with OpenBLAS`_ 或 `PyTorch with Intel MKL-DNN`_ Docker 映像。"
#: ../../guides/stacks/dlrs/dlrs.rst:181
msgid "Clone the benchmark repository:"
msgstr "克隆基准测试存储库:"
#: ../../guides/stacks/dlrs/dlrs.rst:197
msgid "Kubeflow multi-node benchmarks"
msgstr "Kubeflow 多节点基准测试"
#: ../../guides/stacks/dlrs/dlrs.rst:199
msgid ""
"The benchmark workload runs in a Kubernetes cluster. The guide uses "
"`Kubeflow`_ for the Machine Learning workload deployment on three nodes."
msgstr "基准测试工作负载在 Kubernetes 集群中运行。本教程使用 `Kubeflow`_ 在三个节点上部署机器学习工作负载。"
#: ../../guides/stacks/dlrs/dlrs.rst:204
msgid ""
"If you choose the Intel® MKL-DNN or Intel® MKL-DNN-VNNI image, your "
"platform must support the Intel® AVX-512 instruction set. Otherwise, an "
"*illegal instruction* error may appear, and you won't be able to complete"
" this guide."
msgstr "如果选择英特尔® MKL-DNN 或英特尔® MKL-DNN-VNNI 映像,您的平台必须支持英特尔® AVX-512 指令集。否则,可能会出现非法指令错误,导致无法完成本教程。"
#: ../../guides/stacks/dlrs/dlrs.rst:210
msgid "Kubernetes setup"
msgstr "Kubernetes 设置"
#: ../../guides/stacks/dlrs/dlrs.rst:212
msgid ""
"Follow the instructions in the :ref:`kubernetes` tutorial to get set up "
"on |CL|. The Kubernetes community also has instructions for creating a "
"cluster, described in `Creating a single control-plane cluster with "
"kubeadm`_."
msgstr "按照 :ref:`kubernetes` 教程中的说明在 |CL| 上进行设置。Kubernetes 社区也提供了创建集群的说明,参见 `Creating a single control-plane cluster with kubeadm`_。"
#: ../../guides/stacks/dlrs/dlrs.rst:217
msgid "Kubernetes networking"
msgstr "Kubernetes 网络连接"
#: ../../guides/stacks/dlrs/dlrs.rst:219
msgid ""
"We used `flannel`_ as the network provider for these tests. If you prefer"
" a different network layer, refer to the Kubernetes network documentation"
" described in `Creating a single control-plane cluster with kubeadm`_ for"
" setup."
msgstr "在这些测试中,我们使用 `flannel`_ 作为网络提供程序。如果您更青睐其他网络层,请参阅 `Creating a single control-plane cluster with kubeadm`_ 中介绍的 Kubernetes 网络文档进行设置。"
#: ../../guides/stacks/dlrs/dlrs.rst:224
msgid "Kubectl"
msgstr "Kubectl"
#: ../../guides/stacks/dlrs/dlrs.rst:226
msgid ""
"You can use kubectl to run commands against your Kubernetes cluster. "
"Refer to the `Overview of kubectl`_ for details on syntax and operations."
" Once you have a working cluster on Kubernetes, use the following YAML "
"script to start a pod with a simple shell script, and keep the pod open."
msgstr "您可以使用 kubectl 对您的 Kubernetes 集群运行命令。有关语法和操作的详细信息,请参阅 `Overview of kubectl`_。建立一个 Kubernetes 工作集群后,请使用下面的 YAML 脚本启动一个含有简单 shell 脚本的 Pod并保持该 Pod 处于打开状态。"
#: ../../guides/stacks/dlrs/dlrs.rst:231
msgid "Copy this example.yaml script to your system:"
msgstr "将 example.yaml 脚本复制到您的系统中:"
#: ../../guides/stacks/dlrs/dlrs.rst:248
msgid "Execute the script with kubectl:"
msgstr "使用 kubectl 执行该脚本:"
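The pod script described above can be sketched as a minimal spec that keeps the pod open, followed by the `kubectl` command to apply it. The image name is a placeholder (the real DLRS image is not shown in this diff); substitute the image you pulled.

```shell
# Hedged sketch: write a minimal pod spec that stays open, then print the
# kubectl command to apply it. The image name is a placeholder.
cat > example.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dlrs-test
spec:
  containers:
  - name: dlrs
    image: example/dlrs-image   # placeholder; substitute your DLRS image
    command: ["/bin/sh", "-c", "sleep infinity"]
EOF
echo "kubectl apply -f example.yaml"
```

As the guide notes, this opens a single pod; a more robust setup would use a Deployment or inject a larger script into the container.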
#: ../../guides/stacks/dlrs/dlrs.rst:254
msgid ""
"This script opens a single pod. More robust solutions would create a "
"deployment or inject a python script or larger shell script into the "
"container."
msgstr "该脚本打开一个 Pod。更稳健的解决方案是创建部署或者将 python 脚本或更大的 shell 脚本注入容器。"
#: ../../guides/stacks/dlrs/dlrs.rst:258
msgid "Images"
msgstr "映像"
#: ../../guides/stacks/dlrs/dlrs.rst:260
msgid ""
"You must add `launcher.py`_ to the Docker image to include the Deep "
"Learning Reference Stack and put the benchmarks repo in the correct "
"location. Note that this guide uses Kubeflow v0.4.0, and cannot guarantee"
" results if you use a different version."
msgstr "您必须将 `launcher.py`_ 添加到 Docker 映像中,以包含深度学习参考堆栈,并将基准测试存储库放在正确的位置。请注意,本教程使用 Kubeflow v0.4.0。如果使用不同的版本,则不能保证结果。"
#: ../../guides/stacks/dlrs/dlrs.rst:264
msgid "From the Docker image, run the following:"
msgstr "从 Docker 映像中,运行以下命令:"
#: ../../guides/stacks/dlrs/dlrs.rst:273
msgid "Your entry point becomes: :file:`/opt/launcher.py`."
msgstr "您的入口点变成 :file:`/opt/launcher.py`。"
#: ../../guides/stacks/dlrs/dlrs.rst:275
msgid "This builds an image that can be consumed directly by TFJob from Kubeflow."
msgstr "这会构建一个可供 TFJob 从 Kubeflow 直接使用的映像。"
#: ../../guides/stacks/dlrs/dlrs.rst:278
msgid "ksonnet\\*"
msgstr "ksonnet\\*"
#: ../../guides/stacks/dlrs/dlrs.rst:280
msgid ""
"Kubeflow uses ksonnet\\* to manage deployments, so you must install it "
"before setting up Kubeflow."
msgstr "Kubeflow 使用 ksonnet\\* 来管理部署,因此您必须在设置 Kubeflow 之前安装它。"
#: ../../guides/stacks/dlrs/dlrs.rst:283
msgid ""
"ksonnet was added to the :command:`cloud-native-basic` bundle in |CL| "
"version 27550. If you are using an older |CL| version (not recommended), "
"you must manually install ksonnet as described below."
msgstr "ksonnet 已添加到 |CL| 版本 27550 中的 :command:`cloud-native-basic` 捆绑包中。如果使用的是较旧的 |CL| 版本(不推荐),您必须如下所述手动安装 ksonnet。"
#: ../../guides/stacks/dlrs/dlrs.rst:287
msgid "On |CL|, follow these steps:"
msgstr "在 |CL| 上,请按照下列步骤操作:"
#: ../../guides/stacks/dlrs/dlrs.rst:298
msgid ""
"After the ksonnet installation is complete, ensure that binary `ks` is "
"accessible across the environment."
msgstr "ksonnet 安装完成后,确保可在整个环境中访问 `ks` 二进制文件。"
#: ../../guides/stacks/dlrs/dlrs.rst:302
msgid "Kubeflow"
msgstr "Kubeflow"
#: ../../guides/stacks/dlrs/dlrs.rst:304
msgid ""
"Once you have Kubernetes running on your nodes, set up `Kubeflow`_ by "
"following these instructions from the `Getting Started with Kubeflow`_ "
"guide."
msgstr "Kubernetes 在节点上运行后,请按照 `Getting Started with Kubeflow`_ 中的说明设置 `Kubeflow`_。"
#: ../../guides/stacks/dlrs/dlrs.rst:322
msgid "Next, deploy the primary package for our purposes: tf-job-operator."
msgstr "接下来,部署我们所需的主要软件包:tf-job-operator。"
#: ../../guides/stacks/dlrs/dlrs.rst:332
msgid ""
"This creates the CustomResourceDefinition (CRD) endpoint to launch a "
"TFJob."
msgstr "这将创建 CustomResourceDefinition (CRD) 端点来启动 TFJob。"
#: ../../guides/stacks/dlrs/dlrs.rst:335
msgid "Run a TFJob"
msgstr "运行 TFJob"
#: ../../guides/stacks/dlrs/dlrs.rst:337
msgid "Get the ksonnet registries for deploying TFJobs from `dlrs-tfjob`_."
msgstr "从 `dlrs-tfjob`_ 获取用于部署 TFJobs 的 ksonnet 注册表。"
#: ../../guides/stacks/dlrs/dlrs.rst:339
msgid "Install the TFJob components as follows:"
msgstr "按照以下步骤安装 TFJob 组件:"
#: ../../guides/stacks/dlrs/dlrs.rst:347
msgid "Export the image name to use for the deployment:"
msgstr "导出用于部署的映像名称:"
#: ../../guides/stacks/dlrs/dlrs.rst:355
msgid "Replace <docker_name> with the image name you specified in previous steps."
msgstr "将 <docker_name> 替换为前述步骤中指定的映像名称。"
#: ../../guides/stacks/dlrs/dlrs.rst:357
msgid ""
"Generate Kubernetes manifests for the workloads and apply them using "
"these commands:"
msgstr "为工作负载生成 Kubernetes 清单,并使用以下命令应用这些清单:"
#: ../../guides/stacks/dlrs/dlrs.rst:367
msgid "This replicates and deploys three test setups in your Kubernetes cluster."
msgstr "这会在 Kubernetes 集群中复制和部署三个测试设置。"
#: ../../guides/stacks/dlrs/dlrs.rst:370
msgid "Results of running this guide"
msgstr "运行本教程的结果"
#: ../../guides/stacks/dlrs/dlrs.rst:372
msgid ""
"You must parse the logs of the Kubernetes pod to retrieve performance "
"data. The pods will still exist post-completion and will be in "
"Completed state. You can get the logs from any of the pods to inspect "
"the benchmark results. More information about Kubernetes logging is "
"available in the Kubernetes `Logging Architecture`_ documentation."
msgstr "您必须解析 Kubernetes Pod 的日志来检索性能数据。完成后,Pod 仍会存在,并将处于“已完成”状态。您可以从任何一个 Pod 中获取日志来检查基准测试结果。有关 Kubernetes 日志记录的更多信息,请参见 Kubernetes `Logging Architecture`_ 文档。"
#: ../../guides/stacks/dlrs/dlrs.rst:379
msgid "Use Jupyter Notebook"
msgstr "使用 Jupyter Notebook"
#: ../../guides/stacks/dlrs/dlrs.rst:381
msgid ""
"This example uses the `PyTorch with OpenBLAS`_ container image. After it "
"is downloaded, run the Docker image with :command:`-p` to specify the "
"shared port between the container and the host. This example uses port "
"8888."
msgstr "本示例使用 `PyTorch with OpenBLAS`_ 容器映像。下载后,使用 :command:`-p` 运行 Docker 映像,以指定容器和主机之间的共享端口。本示例使用端口 8888。"
#: ../../guides/stacks/dlrs/dlrs.rst:389
msgid ""
"After you start the container, launch the Jupyter Notebook. This command "
"is executed inside the container image."
msgstr "启动容器后,启动 Jupyter Notebook。该命令在容器映像内执行。"
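The port-sharing run and the notebook launch described above can be sketched as follows. The image name is an assumption (the guide's later example refers to the container as "oss"), and the Jupyter flags are the common ones for containerized use; verify both before relying on them.

```shell
# Hedged sketch: run the container sharing port 8888 with the host, then
# start Jupyter inside it. Image name and flags are assumptions.
PORT=8888
RUN_CMD="docker run -p ${PORT}:${PORT} -it clearlinux/stacks-pytorch-oss /bin/bash"
NB_CMD="jupyter notebook --ip=0.0.0.0 --port=${PORT} --allow-root"
echo "${RUN_CMD}"    # run on the host
echo "${NB_CMD}"     # run inside the container
```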
#: ../../guides/stacks/dlrs/dlrs.rst:396
msgid ""
"After the notebook has loaded, you will see output similar to the "
"following:"
msgstr "加载笔记本后,您将看到类似以下内容的输出:"
#: ../../guides/stacks/dlrs/dlrs.rst:404
msgid ""
"From your host system, or any system that can access the host's IP "
"address, start a web browser with the following. If you are not running "
"the browser on the host system, replace :command:`127.0.0.1` with the IP "
"address of the host."
msgstr "从您的主机系统或任何可以访问主机 IP 地址的系统,使用以下命令启动 Web 浏览器。如果没有在主机系统上运行浏览器,请将 :command:`127.0.0.1` 更换为主机的 IP 地址。"
#: ../../guides/stacks/dlrs/dlrs.rst:412
msgid "Your browser displays the following:"
msgstr "您的浏览器会显示以下内容:"
#: ../../guides/stacks/dlrs/dlrs.rst:418
msgid "Figure 1: :guilabel:`Jupyter Notebook`"
msgstr "图 1 :guilabel:`Jupyter Notebook`"
#: ../../guides/stacks/dlrs/dlrs.rst:421
msgid ""
"To create a new notebook, click :guilabel:`New` and select "
":guilabel:`Python 3`."
msgstr "要创建新笔记本,请点击 :guilabel:`New`,然后选择 :guilabel:`Python 3`。"
#: ../../guides/stacks/dlrs/dlrs.rst:427
msgid "Figure 2: Create a new notebook"
msgstr "图 2创建一个新笔记本"
#: ../../guides/stacks/dlrs/dlrs.rst:429
msgid "A new, blank notebook is displayed, with a cell ready for input."
msgstr "此时将显示一个新的空白笔记本,其中有一个单元格可供输入内容。"
#: ../../guides/stacks/dlrs/dlrs.rst:436
msgid ""
"To verify that PyTorch is working, copy the following snippet into the "
"blank cell, and run the cell."
msgstr "要验证 PyTorch 是否正在工作,请将以下片段复制到空白单元格中,并运行该单元格。"
#: ../../guides/stacks/dlrs/dlrs.rst:450
msgid "When you run the cell, your output will look something like this:"
msgstr "运行单元格时,您的输出将如下所示:"
#: ../../guides/stacks/dlrs/dlrs.rst:456
msgid ""
"You can continue working in this notebook, or you can download existing "
"notebooks to take advantage of the Deep Learning Reference Stack's "
"optimized deep learning frameworks. Refer to `Jupyter Notebook`_ for "
"details."
msgstr "您可以继续在此笔记本中工作,也可以下载现有笔记本来利用深度学习参考堆栈的优化深度学习框架。详情请参阅 `Jupyter Notebook`_。"
#: ../../guides/stacks/dlrs/dlrs.rst:461
msgid "Uninstallation"
msgstr "卸载"
#: ../../guides/stacks/dlrs/dlrs.rst:463
msgid ""
"To uninstall the Deep Learning Reference Stack, you can choose to stop "
"the container so that it is not using system resources, or you can stop "
"the container and delete it to free storage space."
msgstr "要卸载深度学习参考堆栈,您可以选择停止容器以使其不使用系统资源,或者可以停止容器并将其删除以释放存储空间。"
#: ../../guides/stacks/dlrs/dlrs.rst:467
msgid "To stop the container, execute the following from your host system:"
msgstr "要停止容器,请从主机系统执行以下操作:"
#: ../../guides/stacks/dlrs/dlrs.rst:469
msgid "Find the container's ID"
msgstr "找到容器的 ID"
#: ../../guides/stacks/dlrs/dlrs.rst:475
msgid "This will result in output similar to the following:"
msgstr "这将产生类似于以下内容的输出:"
#: ../../guides/stacks/dlrs/dlrs.rst:482
msgid ""
"You can then use the ID or container name to stop the container. This "
"example uses the name \"oss\":"
msgstr "然后,您可以使用 ID 或容器名称来停止容器。本示例使用名称 \"oss\":"
#: ../../guides/stacks/dlrs/dlrs.rst:490
msgid "Verify that the container is not running"
msgstr "验证容器未在运行"
#: ../../guides/stacks/dlrs/dlrs.rst:497
msgid "To delete the container from your system you need to know the Image ID:"
msgstr "要从系统中删除容器,您需要知道映像 ID"
#: ../../guides/stacks/dlrs/dlrs.rst:503
msgid "This command results in output similar to the following:"
msgstr "该命令会产生类似于以下内容的输出:"
#: ../../guides/stacks/dlrs/dlrs.rst:511
msgid "To remove an image use the image ID:"
msgstr "要移除映像,请使用映像 ID"
#: ../../guides/stacks/dlrs/dlrs.rst:527
msgid ""
"Note that you can execute the :command:`docker rmi` command using only "
"the first few characters of the image ID, provided they are unique on the"
" system."
msgstr "请注意,您可以只使用映像 ID 的前几个字符来执行 :command:`docker rmi` 命令,前提是它们在系统上是唯一的。"
#: ../../guides/stacks/dlrs/dlrs.rst:529
msgid "Once you have removed the image, you can verify it has been deleted with:"
msgstr "移除映像后,您可以通过以下方式验证它是否已被移除:"
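The uninstall flow above (stop, delete, remove the image, verify) can be sketched as below. "oss" is the container name from the guide's own example; `<image-id>` stays a placeholder since the real ID is not shown here.

```shell
# Hedged sketch of the uninstall flow: stop the container, delete it,
# then remove the image. "<image-id>" is a deliberate placeholder.
PS_CMD="docker ps -a"            # find the container ID or name
STOP_CMD="docker stop oss"       # stop by name, per the guide's example
RM_CMD="docker rm oss"           # delete the stopped container
IMG_CMD="docker images"          # find the image ID
RMI_CMD="docker rmi <image-id>"  # the first few unique characters of the ID suffice
echo "${PS_CMD}"; echo "${STOP_CMD}"; echo "${RM_CMD}"
echo "${IMG_CMD}"; echo "${RMI_CMD}"
```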
#: ../../guides/stacks/dlrs/dlrs.rst:537
msgid "Related topics"
msgstr "相关主题"
#: ../../guides/stacks/dlrs/dlrs.rst:539
msgid "`DLRS V3.0`_ release announcement"
msgstr "`DLRS V3.0`_ 发布公告"
#: ../../guides/stacks/dlrs/dlrs.rst:540
msgid "`TensorFlow Benchmarks`_"
msgstr "`TensorFlow Benchmarks`_"
#: ../../guides/stacks/dlrs/dlrs.rst:541
msgid "`PyTorch benchmarks`_"
msgstr "`PyTorch benchmarks`_"
#: ../../guides/stacks/dlrs/dlrs.rst:542
msgid "`Kubeflow`_"
msgstr "`Kubeflow`_"
#: ../../guides/stacks/dlrs/dlrs.rst:543
msgid ":ref:`kubernetes` tutorial"
msgstr ":ref:`kubernetes` 教程"
#: ../../guides/stacks/dlrs/dlrs.rst:544
msgid "`Jupyter Notebook`_"
msgstr "`Jupyter Notebook`_"
#~ msgid "Deep Learning Reference Stack `V3.0 release announcement`_."
#~ msgstr "深度学习参考堆栈 `V3.0 release announcement`_。"
#~ msgid ""
#~ "You must parse the logs of the "
#~ "Kubernetes pod to retrieve performance "
#~ "data. The pods will still exist "
#~ "post-completion and will be in "
#~ "Completed state. You can get the "
#~ "logs from any of the pods to "
#~ "inspect the benchmark results. More "
#~ "information about `Kubernetes logging`_ is "
#~ "available from the Kubernetes community."
#~ msgstr ""
#~ "您必须解析 Kubernetes Pod 的日志来检索性能数据。完成后Pod "
#~ "仍会存在,并将处于“已完成”状态。您可以从任何一个 Pod 中获取日志来检查基准测试结果。有关 "
#~ "`Kubernetes logging`_ 的更多信息可从 Kubernetes 社区获取。"
#~ msgid "Deep Learning Reference Stack `V3.0 release announcement`_"
#~ msgstr "深度学习参考堆栈 `V3.0 release announcement`_"


@@ -1,711 +0,0 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2019, many
# This file is distributed under the same license as the Clear Linux*
# Project Docs package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2019.
#
msgid ""
msgstr ""
"Project-Id-Version: Clear Linux* Project Docs latest\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2019-08-09 14:33-0700\n"
"PO-Revision-Date: 2019-09-04 16:21-0008\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh-Hans\n"
"Language-Team: zh-Hans\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Intel® International Developer Studio Version 4.1.273.0\n"
#: ../../guides/stacks/greengrass.rst:4
msgid "Enable AWS Greengrass\\* and OpenVINO™ toolkit"
msgstr "启用 AWS Greengrass\\* 和 OpenVINO™ 工具包"
#: ../../guides/stacks/greengrass.rst:6
msgid ""
"This guide explains how to enable AWS Greengrass\\* and OpenVINO™ "
"toolkit. Specifically, the guide demonstrates how to:"
msgstr "本指南说明了如何启用 AWS Greengrass\\* 和 OpenVINO™ 工具包。具体而言,该指南演示了如何:"
#: ../../guides/stacks/greengrass.rst:9
msgid "Set up the Intel® edge device with |CL-ATTR|"
msgstr "使用 |CL-ATTR| 设置英特尔®边缘设备"
#: ../../guides/stacks/greengrass.rst:10
msgid ""
"Install the OpenVINO™ toolkit and Amazon Web Services\\* (AWS\\*) "
"Greengrass\\* software stacks"
msgstr "安装 OpenVINO™ 工具包和 Amazon Web Services\\* (AWS\\*) Greengrass\\* 软件堆栈"
#: ../../guides/stacks/greengrass.rst:12
msgid ""
"Use AWS Greengrass\\* and AWS Lambda\\* to deploy the FaaS samples from "
"the cloud"
msgstr "使用 AWS Greengrass\\* 和 AWS Lambda\\* 从云中部署 FaaS 示例"
#: ../../guides/stacks/greengrass.rst:20
msgid "Overview"
msgstr "概述"
#: ../../guides/stacks/greengrass.rst:22
msgid ""
"Hardware accelerated Function-as-a-Service (FaaS) enables cloud "
"developers to deploy inference functionalities [1] on Intel® IoT edge "
"devices with accelerators (CPU, Integrated GPU, Intel® FPGA, and Intel® "
"Movidius™ technology). These functions provide a great developer "
"experience and seamless migration of visual analytics from cloud to edge "
"in a secure manner using a containerized environment. Hardware-"
"accelerated FaaS provides the best-in-class performance by accessing "
"optimized deep learning libraries on Intel® IoT edge devices with "
"accelerators."
msgstr "硬件加速的功能即服务 (FaaS) 有助于云开发人员在搭载加速器的英特尔® IoT 边缘设备CPU、集成 GPU、英特尔® FPGA 和英特尔® Movidius™ 技术)上部署推理功能 [1]。这些功能使用容器化环境,为开发人员提供了出色的体验,有助于开发人员将可视化分析从云安全地迁移到边缘。硬件加速的 FaaS 支持在搭载加速器的英特尔® IoT 边缘设备上访问经过优化的深度学习库,实现业界最佳性能。"
#: ../../guides/stacks/greengrass.rst:32
msgid "Supported platforms"
msgstr "支持的平台"
#: ../../guides/stacks/greengrass.rst:34
msgid "Operating System: |CL| latest release"
msgstr "操作系统:|CL| 最新版本"
#: ../../guides/stacks/greengrass.rst:35
msgid "Hardware: Intel® core platforms (that support inference on CPU only)"
msgstr "硬件:英特尔®酷睿™平台(仅支持 CPU 推理)"
#: ../../guides/stacks/greengrass.rst:38
msgid "Sample description"
msgstr "示例说明"
#: ../../guides/stacks/greengrass.rst:40
msgid ""
"The AWS Greengrass samples are located at `Edge-Analytics-FaaS`_. This "
"guide uses the 1.0 version of the source code."
msgstr "AWS Greengrass 示例位于 `Edge-Analytics-FaaS`_ 中。本教程使用 1.0 版本的源代码。"
#: ../../guides/stacks/greengrass.rst:43
msgid "|CL| provides the following AWS Greengrass samples:"
msgstr "|CL| 提供以下 AWS Greengrass 示例:"
#: ../../guides/stacks/greengrass.rst:45
msgid "`greengrass_classification_sample.py`_"
msgstr "`greengrass_classification_sample.py`_"
#: ../../guides/stacks/greengrass.rst:47
msgid ""
"This AWS Greengrass sample classifies a video stream using classification"
" networks such as AlexNet and GoogLeNet and publishes top-10 results on "
"AWS\\* IoT Cloud every second."
msgstr "此 AWS Greengrass 示例使用 AlexNet 和 GoogLeNet 等分类网络对视频流进行分类,并每秒在 AWS\\* IoT 云上发布前十名结果。"
#: ../../guides/stacks/greengrass.rst:51
msgid "`greengrass_object_detection_sample_ssd.py`_"
msgstr "`greengrass_object_detection_sample_ssd.py`_"
#: ../../guides/stacks/greengrass.rst:53
msgid ""
"This AWS Greengrass sample detects objects in a video stream and "
"classifies them using single-shot multi-box detection (SSD) networks such"
" as SSD Squeezenet, SSD Mobilenet, and SSD300. This sample publishes "
"detection outputs such as class label, class confidence, and bounding box"
" coordinates on AWS IoT Cloud every second."
msgstr "此 AWS Greengrass 示例会检测视频流中的对象,并使用单步多框检测 (SSD) 网络(例如 SSD Squeezenet、SSD Mobilenet 和 SSD300对它们进行分类。此示例每秒在 AWS IoT 云上发布检测输出,如类标签、类置信度和边界框坐标。"
#: ../../guides/stacks/greengrass.rst:61
msgid "Install the OS on the edge device"
msgstr "在边缘设备上安装操作系统"
#: ../../guides/stacks/greengrass.rst:63
msgid ""
"Start with a clean installation of |CL| on a new system, using the :ref"
":`bare-metal-install-desktop`, found in :ref:`get-started`."
msgstr "使用 :ref:`get-started` 中的 :ref:`bare-metal-install-desktop`,在新系统上安装干净的 |CL|。"
#: ../../guides/stacks/greengrass.rst:67
msgid "Create user accounts"
msgstr "创建用户帐户"
#: ../../guides/stacks/greengrass.rst:69
msgid ""
"After |CL| is installed, create two user accounts. Create an "
"administrative user in |CL| and create a user account for the Greengrass "
"services to use ( see Greengrass user below)."
msgstr "安装 |CL| 后,创建两个用户帐户。在 |CL| 中创建一个管理用户,并为要使用的 Greengrass 服务创建一个用户帐户(请参阅下面的 Greengrass 用户)。"
#: ../../guides/stacks/greengrass.rst:73
msgid ""
"Create a new user and set a password for that user. Enter the following "
"commands as ``root``:"
msgstr "创建新用户并为该用户设置密码。以 ``root`` 用户身份输入以下命令:"
#: ../../guides/stacks/greengrass.rst:81
msgid ""
"Next, enable the :command:`sudo` command for your new <userid>. Add "
"<userid> to the `wheel` group:"
msgstr "接下来,为新的 <userid> 启用 :command:`sudo` 命令。将 <userid> 添加到 `wheel` 组:"
#: ../../guides/stacks/greengrass.rst:88
msgid "Create a :file:`/etc/fstab` file."
msgstr "创建一个 :file:`/etc/fstab` 文件。"
#: ../../guides/stacks/greengrass.rst:96
msgid ""
"By default, |CL| does not create an :file:`/etc/fstab` file. You must "
"create this file before the Greengrass service runs."
msgstr "默认情况下,|CL| 不会创建 :file:`/etc/fstab` 文件。您必须在 Greengrass 服务运行之前创建此文件。"
#: ../../guides/stacks/greengrass.rst:100
msgid "Add required bundles"
msgstr "添加所需的捆绑包"
#: ../../guides/stacks/greengrass.rst:102
msgid ""
"Use the :command:`swupd` software updater utility to add the prerequisite"
" bundles for the OpenVINO software stack:"
msgstr "使用 :command:`swupd` 软件更新程序实用程序添加 OpenVINO 软件堆栈必备的软件包:"
#: ../../guides/stacks/greengrass.rst:111
msgid "Learn more about how to :ref:`swupd-guide`."
msgstr "详细了解如何 :ref:`swupd-guide`。"
#: ../../guides/stacks/greengrass.rst:113
msgid ""
"The :command:`computer-vision-basic` bundle installs the OpenVINO™ "
"toolkit, and the sample models optimized for Intel® edge platforms."
msgstr ":command:`computer-vision-basic` 捆绑包会安装 OpenVINO™ 工具包以及针对英特尔®边缘平台优化的示例模型。"
#: ../../guides/stacks/greengrass.rst:117
msgid "Convert deep learning models"
msgstr "转换深度学习模型"
#: ../../guides/stacks/greengrass.rst:120
msgid "Locate sample models"
msgstr "找到示例模型"
#: ../../guides/stacks/greengrass.rst:122
msgid ""
"There are two types of provided models that can be used in conjunction "
"with AWS Greengrass for this guide: classification or object detection."
msgstr "本教程中提供了两种可以与 AWS Greengrass 配合使用的模型:分类和对象检测。"
#: ../../guides/stacks/greengrass.rst:125
msgid ""
"To complete this guide using an image classification model, download the "
"BVLC AlexNet model files `bvlc_alexnet.caffemodel`_ and "
"`deploy.prototxt`_ to the default model_location at "
":file:`/usr/share/openvino/models`. Any custom pre-trained classification"
" models can be used with the classification sample."
msgstr "要使用图像分类模型完成本教程,请将 BVLC AlexNet 模型文件 `bvlc_alexnet.caffemodel`_ 和 `deploy.prototxt`_ 下载到默认 model_location :file:`/usr/share/openvino/models`。任何自定义的预训练分类模型都可与分类示例配合使用。"
#: ../../guides/stacks/greengrass.rst:131
msgid ""
"For object detection, the sample models optimized for Intel® edge "
"platforms are included with the computer-vision-basic bundle installation"
" at :file:`/usr/share/openvino/models`. These models are provided as an "
"example; you may also use a custom SSD model with the Greengrass object "
"detection sample."
msgstr "对于对象检测,安装 computer-vision-basic 捆绑包时会在 :file:`/usr/share/openvino/models` 处附带针对英特尔®边缘平台优化的示例模型。这些模型作为示例提供;但是,您也可以将自定义 SSD 模型与 Greengrass 对象检测示例结合使用。"
#: ../../guides/stacks/greengrass.rst:137
msgid "Run model optimizer"
msgstr "运行模型优化器"
#: ../../guides/stacks/greengrass.rst:139
msgid ""
"Follow the instructions in the `Model Optimizer Developer Guide`_ for "
"converting deep learning models to Intermediate Representation using "
"Model Optimizer. To optimize either of the sample models described above,"
" run one of the following commands."
msgstr "遵循 `Model Optimizer Developer Guide`_ 中的说明,使用 Model Optimizer 将深度学习模型转换为 Intermediate Representation。要优化上述任一示例模型请运行以下命令之一。"
#: ../../guides/stacks/greengrass.rst:143
msgid "For classification using BVLC AlexNet model:"
msgstr "对于使用 BVLC AlexNet 模型的分类:"
#: ../../guides/stacks/greengrass.rst:152
msgid "For object detection using SqueezeNetSSD-5Class model:"
msgstr "对于使用 SqueezeNetSSD-5Class 模型的对象检测:"
#: ../../guides/stacks/greengrass.rst:161
msgid "In these examples:"
msgstr "在这些示例中:"
#: ../../guides/stacks/greengrass.rst:163
msgid "`<model_location>` is :file:`/usr/share/openvino/models`."
msgstr "`<model_location>` 是 :file:`/usr/share/openvino/models`。"
#: ../../guides/stacks/greengrass.rst:165
msgid "`<data_type>` is FP32 or FP16, depending on target device."
msgstr "`<data_type>` 是 FP32 或 FP16具体取决于目标设备。"
#: ../../guides/stacks/greengrass.rst:167
msgid ""
"`<output_dir>` is the directory where the Intermediate Representation "
"(IR) is stored. IR contains .xml format corresponding to the network "
"structure and .bin format corresponding to weights. This .xml file should"
" be passed to :command:`<PARAM_MODEL_XML>`."
msgstr "`<output_dir>` 是存储中间表示 (IR) 的目录。IR 包含与网络结构对应的 .xml 格式以及与权重对应的 .bin 格式。此 .xml 文件应传递给 :command:`<PARAM_MODEL_XML>`。"
#: ../../guides/stacks/greengrass.rst:172
msgid ""
"In the BVLC AlexNet model, the prototxt defines the input shape with "
"batch size 10 by default. In order to use any other batch size, the "
"entire input shape must be provided as an argument to the model "
"optimizer. For example, to use batch size 1, you must provide: "
"`--input_shape [1,3,227,227]`"
msgstr "在 BVLC AlexNet 模型中默认情况下prototxt 会定义批处理大小为 10 的输入形状。要使用任何其他批处理大小,必须将整个输入形状作为参数提供给模型优化器。例如,要使用批处理大小 1您必须提供 `--input_shape [1,3,227,227]`"
#: ../../guides/stacks/greengrass.rst:180
msgid "Configure AWS Greengrass group"
msgstr "配置 AWS Greengrass 组"
#: ../../guides/stacks/greengrass.rst:182
msgid ""
"For each Intel® edge platform, you must create a new AWS Greengrass group"
" and install AWS Greengrass core software to establish the connection "
"between cloud and edge."
msgstr "对于每个英特尔®边缘平台,您必须创建一个新的 AWS Greengrass 组,并安装 AWS Greengrass 核心软件,以在云和边缘之间建立连接。"
#: ../../guides/stacks/greengrass.rst:186
msgid ""
"To create an AWS Greengrass group, follow the instructions in `Configure "
"AWS IoT Greengrass on AWS IoT`_."
msgstr "要创建 AWS Greengrass 组,请按照 `Configure AWS IoT Greengrass on AWS IoT`_ 中的说明执行操作。"
#: ../../guides/stacks/greengrass.rst:189
msgid ""
"To install and configure AWS Greengrass core on edge platform, follow the"
" instructions in `Start AWS Greengrass on the Core Device`_. In step "
"8(b), download the x86_64 Ubuntu\\* configuration of the AWS Greengrass "
"core software."
msgstr "要在边缘平台上安装和配置 AWS Greengrass 核心,请按照 `Start AWS Greengrass on the Core Device`_ 中的说明执行操作。在步骤 8(b) 中,下载 AWS Greengrass 核心软件的 x86_64 Ubuntu\\* 配置。"
#: ../../guides/stacks/greengrass.rst:196
msgid ""
"You do not need to run the :file:`cgroupfs-mount.sh` script in step #6 of"
" Module 1 of the `AWS Greengrass Developer Guide`_ because this is "
"enabled already in |CL|."
msgstr "您不需要在 `AWS Greengrass Developer Guide`_ 模块 1 的步骤 6 中运行 :file:`cgroupfs-mount.sh` 脚本,因为它已经在 |CL| 中启用。"
#: ../../guides/stacks/greengrass.rst:200
msgid ""
"Be sure to download both the security resources and the AWS Greengrass "
"core software."
msgstr "请务必下载安全资源和 AWS Greengrass 核心软件。"
#: ../../guides/stacks/greengrass.rst:205
msgid "Security certificates are linked to your AWS account."
msgstr "安全证书会链接到您的 AWS 帐户。"
#: ../../guides/stacks/greengrass.rst:209
msgid "Create and package Lambda function"
msgstr "创建并打包 Lambda 函数"
#: ../../guides/stacks/greengrass.rst:211
msgid ""
"Complete steps 1-4 of the AWS Greengrass guide at `Create and Package a "
"Lambda Function`_."
msgstr "在 `Create and Package a Lambda Function`_ 中完成 AWS Greengrass 教程的步骤 1-4。"
#: ../../guides/stacks/greengrass.rst:216
msgid ""
"This creates the tarball needed to create the AWS Greengrass environment "
"on the edge device."
msgstr "这会创建必要的 tarball以便在边缘设备上创建 AWS Greengrass 环境。"
#: ../../guides/stacks/greengrass.rst:220
msgid ""
"In step 5, replace :file:`greengrassHelloWorld.py` with the "
"classification or object detection Greengrass sample from `Edge-"
"Analytics-Faas`_:"
msgstr "在步骤 5 中,将 :file:`greengrassHelloWorld.py` 替换为 `Edge-Analytics-Faas`_ 中的分类或对象检测 Greengrass 示例:"
#: ../../guides/stacks/greengrass.rst:223
msgid "Classification: `greengrass_classification_sample.py`_"
msgstr "分类:`greengrass_classification_sample.py`_"
#: ../../guides/stacks/greengrass.rst:225
msgid "Object Detection: `greengrass_object_detection_sample_ssd.py`_"
msgstr "对象检测:`greengrass_object_detection_sample_ssd.py`_"
#: ../../guides/stacks/greengrass.rst:227
msgid ""
"Zip the selected Greengrass sample with the extracted Greengrass SDK "
"folders from the previous step into "
":file:`greengrass_sample_python_lambda.zip`."
msgstr "将所选的 Greengrass 示例以及从上一步提取的 Greengrass SDK 文件夹压缩到 :file:`greengrass_sample_python_lambda.zip`。"
#: ../../guides/stacks/greengrass.rst:230
msgid "The zip should contain:"
msgstr "压缩包应包含:"
#: ../../guides/stacks/greengrass.rst:232
msgid "greengrasssdk"
msgstr "greengrasssdk"
#: ../../guides/stacks/greengrass.rst:234
msgid "greengrass classification or object detection sample"
msgstr "greengrass 分类或对象检测示例"
#: ../../guides/stacks/greengrass.rst:236
msgid "For example:"
msgstr "例如:"
#: ../../guides/stacks/greengrass.rst:243
msgid ""
"Return to the AWS documentation section called `Create and Package a "
"Lambda Function`_ and complete the procedure."
msgstr "返回名为 `Create and Package a Lambda Function`_ 的 AWS 文档部分,并完成步骤。"
#: ../../guides/stacks/greengrass.rst:248
msgid ""
"In step 9(a) of the AWS documentation, while uploading the zip file, make"
" sure to name the handler to one of the following, depending on the AWS "
"Greengrass sample you are using:"
msgstr "在 AWS 文档的步骤 9(a) 中上传 zip 文件时,请确保根据所使用的 AWS Greengrass 示例将处理程序命名为以下名称之一:"
#: ../../guides/stacks/greengrass.rst:252
msgid "greengrass_object_detection_sample_ssd.function_handler"
msgstr "greengrass_object_detection_sample_ssd.function_handler"
#: ../../guides/stacks/greengrass.rst:253
msgid "greengrass_classification_sample.function_handler"
msgstr "greengrass_classification_sample.function_handler"
#: ../../guides/stacks/greengrass.rst:257
msgid "Configure Lambda function"
msgstr "配置 Lambda 函数"
#: ../../guides/stacks/greengrass.rst:259
msgid ""
"After creating the Greengrass group and the Lambda function, start "
"configuring the Lambda function for AWS Greengrass."
msgstr "创建 Greengrass 组和 Lambda 函数后,开始为 AWS Greengrass 配置 Lambda 函数。"
#: ../../guides/stacks/greengrass.rst:262
msgid ""
"Follow steps 1-8 in `Configure the Lambda Function for AWS IoT "
"Greengrass`_ in the AWS documentation."
msgstr "按照 AWS 文档中 `Configure the Lambda Function for AWS IoT Greengrass`_ 中的步骤 1-8 执行操作。"
#: ../../guides/stacks/greengrass.rst:265
msgid ""
"In addition to the details mentioned in step 8, change the Memory limit "
"to 2048 MB to accommodate large input video streams."
msgstr "除了步骤 8 中提到的细节之外,将内存限制更改为 2048 MB以容纳较大的输入视频流。"
#: ../../guides/stacks/greengrass.rst:268
msgid ""
"Add the following environment variables as key-value pairs when editing "
"the Lambda configuration and click on update:"
msgstr "编辑 Lambda 配置时,添加以下环境变量作为键值对,然后点击更新:"
#: ../../guides/stacks/greengrass.rst:271
msgid "**Table 1. Environment variables: Lambda configuration**"
msgstr "**表 1. 环境变量Lambda 配置**"
#: ../../guides/stacks/greengrass.rst:275
msgid "Key"
msgstr "键"
#: ../../guides/stacks/greengrass.rst:276
msgid "Value"
msgstr "值"
#: ../../guides/stacks/greengrass.rst:277
msgid "PARAM_MODEL_XML"
msgstr "PARAM_MODEL_XML"
#: ../../guides/stacks/greengrass.rst:278
msgid ""
"<MODEL_DIR>/<IR.xml>, where <MODEL_DIR> is user specified and contains "
"IR.xml, the Intermediate Representation file from Intel® Model Optimizer."
" For this guide, <MODEL_DIR> should be set to "
"'/usr/share/openvino/models' or one of its subdirectories."
msgstr "<MODEL_DIR>/<IR.xml>,其中 <MODEL_DIR> 是用户指定的,包含来自英特尔®模型优化器的中间表示文件 IR.xml。在本教程中<MODEL_DIR> 应设置为 '/usr/share/openvino/models' 或其某个子目录。"
#: ../../guides/stacks/greengrass.rst:282
msgid "PARAM_INPUT_SOURCE"
msgstr "PARAM_INPUT_SOURCE"
#: ../../guides/stacks/greengrass.rst:283
msgid "<DATA_DIR>/input.webm to be specified by user. Holds both input and"
msgstr "<DATA_DIR>/input.webm由用户指定。保存输入和"
#: ../../guides/stacks/greengrass.rst:284
msgid "output data. For webcam, set PARAM_INPUT_SOURCE to /dev/video0"
msgstr "输出数据。对于网络摄像头,请将 PARAM_INPUT_SOURCE 设置为 /dev/video0"
#: ../../guides/stacks/greengrass.rst:285
msgid "PARAM_DEVICE"
msgstr "PARAM_DEVICE"
#: ../../guides/stacks/greengrass.rst:286
msgid "\"CPU\""
msgstr "\"CPU\""
#: ../../guides/stacks/greengrass.rst:287
msgid "PARAM_CPU_EXTENSION_PATH"
msgstr "PARAM_CPU_EXTENSION_PATH"
#: ../../guides/stacks/greengrass.rst:288
msgid "/usr/lib64/libcpu_extension.so"
msgstr "/usr/lib64/libcpu_extension.so"
#: ../../guides/stacks/greengrass.rst:289
msgid "PARAM_OUTPUT_DIRECTORY"
msgstr "PARAM_OUTPUT_DIRECTORY"
#: ../../guides/stacks/greengrass.rst:290
msgid "<DATA_DIR> to be specified by user. Holds both input and output data"
msgstr "<DATA_DIR> 由用户指定。保存输入和输出数据"
#: ../../guides/stacks/greengrass.rst:292
msgid "PARAM_NUM_TOP_RESULTS"
msgstr "PARAM_NUM_TOP_RESULTS"
#: ../../guides/stacks/greengrass.rst:293
msgid ""
"User specified for classification sample. (e.g. 1 for top-1 result, 5 for"
" top-5 results)"
msgstr "由用户为分类示例指定。例如1 表示前 1 名结果5 表示前 5 名结果)"
#: ../../guides/stacks/greengrass.rst:296
msgid ""
"Add subscription to subscribe, or publish messages from AWS Greengrass "
"Lambda function by completing the procedure in `Configure the Lambda "
"Function for AWS IoT Greengrass`_."
msgstr "完成 `Configure the Lambda Function for AWS IoT Greengrass`_ 中的步骤,添加订阅,以订阅或发布来自 AWS Greengrass Lambda 函数的消息。"
#: ../../guides/stacks/greengrass.rst:301
msgid ""
"The optional topic filter field is the topic mentioned inside the Lambda "
"function. In this guide, sample topics include the following: "
":command:`openvino/ssd` or :command:`openvino/classification`"
msgstr "可选主题过滤器字段是 Lambda 函数中提到的主题。在本教程中,示例主题包括 :command:`openvino/ssd` 或 :command:`openvino/classification`。"
#: ../../guides/stacks/greengrass.rst:305
msgid "Add local resources"
msgstr "添加本地资源"
#: ../../guides/stacks/greengrass.rst:307
msgid ""
"Refer to the AWS documentation `Access Local Resources with Lambda "
"Functions and Connectors`_ for details about local resources and access "
"privileges."
msgstr "有关本地资源和访问权限的详细信息,请参阅 AWS 文档 `Access Local Resources with Lambda Functions and Connectors`_。"
#: ../../guides/stacks/greengrass.rst:310
msgid "The following table describes the local resources needed for the CPU:"
msgstr "下表列出了 CPU 所需的本地资源:"
#: ../../guides/stacks/greengrass.rst:312
msgid "**Local resources**"
msgstr "**本地资源**"
#: ../../guides/stacks/greengrass.rst:316
msgid "Name"
msgstr "名称"
#: ../../guides/stacks/greengrass.rst:317
msgid "Resource type"
msgstr "资源类型"
#: ../../guides/stacks/greengrass.rst:318
msgid "Local path"
msgstr "本地路径"
#: ../../guides/stacks/greengrass.rst:319
msgid "Access"
msgstr "访问"
#: ../../guides/stacks/greengrass.rst:321
msgid "ModelDir"
msgstr "ModelDir"
#: ../../guides/stacks/greengrass.rst:322
#: ../../guides/stacks/greengrass.rst:332
msgid "Volume"
msgstr "卷"
#: ../../guides/stacks/greengrass.rst:323
msgid "<MODEL_DIR> to be specified by user"
msgstr "<MODEL_DIR> 由用户指定"
#: ../../guides/stacks/greengrass.rst:324
#: ../../guides/stacks/greengrass.rst:329
msgid "Read-Only"
msgstr "只读"
#: ../../guides/stacks/greengrass.rst:326
msgid "Webcam"
msgstr "网络摄像头"
#: ../../guides/stacks/greengrass.rst:327
msgid "Device"
msgstr "设备"
#: ../../guides/stacks/greengrass.rst:328
msgid "/dev/video0"
msgstr "/dev/video0"
#: ../../guides/stacks/greengrass.rst:331
msgid "DataDir"
msgstr "DataDir"
#: ../../guides/stacks/greengrass.rst:333
msgid "<DATA_DIR> to be specified by user. Holds both input and output data."
msgstr "<DATA_DIR> 由用户指定。保存输入和输出数据。"
#: ../../guides/stacks/greengrass.rst:335
msgid "Read and Write"
msgstr "读取和写入"
#: ../../guides/stacks/greengrass.rst:338
msgid "Deploy Lambda function"
msgstr "部署 Lambda 函数"
#: ../../guides/stacks/greengrass.rst:340
msgid ""
"Refer to the AWS documentation `Deploy Cloud Configurations to an AWS IoT"
" Greengrass Core Device`_ for instructions on how to deploy the lambda "
"function to AWS Greengrass core device. Select *Deployments* on the group"
" page and follow the instructions."
msgstr "有关如何将 Lambda 函数部署到 AWS Greengrass 核心设备的说明,请参阅 AWS 文档 `Deploy Cloud Configurations to an AWS IoT Greengrass Core Device`_。在组页面上选择 *Deployments*,并按照说明执行操作。"
#: ../../guides/stacks/greengrass.rst:344
msgid "Output consumption"
msgstr "输出的使用"
#: ../../guides/stacks/greengrass.rst:346
msgid ""
"There are four options available for output consumption. These options "
"are used to report, stream, upload, or store inference output at an "
"interval defined by the variable :command:`reporting_interval` in the AWS"
" Greengrass samples."
msgstr "使用输出时有四种可用选项。这些选项用于按 AWS Greengrass 示例中 :command:`reporting_interval` 变量定义的间隔,报告、流式传输、上传或存储推理输出。"
#: ../../guides/stacks/greengrass.rst:350
msgid "IoT cloud output:"
msgstr "IoT 云输出:"
#: ../../guides/stacks/greengrass.rst:352
msgid ""
"This option is enabled by default in the AWS Greengrass samples using the"
" :command:`enable_iot_cloud_output` variable. You can use it to verify "
"the lambda running on the edge device. It enables publishing messages to "
"IoT cloud using the subscription topic specified in the lambda. (For "
"example, topics may include :command:`openvino/classification` for "
"classification and :command:`openvino/ssd` for object detection samples.)"
" For classification, top-1 result with class label are published to IoT "
"cloud. For SSD object detection, detection results such as bounding box "
"coordinates of objects, class label, and class confidence are published."
msgstr "在 AWS Greengrass 示例中,默认情况下使用 :command:`enable_iot_cloud_output` 变量启用此选项。您可以使用它来验证在边缘设备上运行的 lambda。它支持使用 lambda 中指定的订阅主题向 IoT 云发布消息。(例如,主题可能包括用于分类示例的 :command:`openvino/classification` 以及用于对象检测示例的 :command:`openvino/ssd`。) 对于分类,具有类标签的前 1 名结果会发布到 IoT 云。对于 SSD 对象检测,则发布对象的边界框坐标、类标签和类置信度等检测结果。"
#: ../../guides/stacks/greengrass.rst:362
msgid ""
"Refer to the AWS documentation `Verify the Lambda Function Is Running on "
"the Device`_ for instructions on how to view the output on IoT cloud."
msgstr "有关如何在 IoT 云上查看输出的说明,请参考 AWS 文档 `Verify the Lambda Function Is Running on the Device`_。"
#: ../../guides/stacks/greengrass.rst:366
msgid "Kinesis streaming:"
msgstr "Kinesis 流式传输:"
#: ../../guides/stacks/greengrass.rst:368
msgid ""
"This option enables inference output to be streamed from the edge device "
"to cloud using Kinesis [3] streams when :command:`enable_kinesis_output` "
"is set to True. The edge devices act as data producers and continually "
"push processed data to the cloud. You must set up and specify Kinesis "
"stream name, Kinesis shard, and AWS region in the AWS Greengrass samples."
msgstr ":command:`enable_kinesis_output` 设置为 True 时,此选项支持使用 Kinesis [3] 流将推理输出从边缘设备流式传输到云。边缘设备充当数据生产者,并将处理后的数据不断推送到云中。您必须在 AWS Greengrass 示例中设置和指定 Kinesis 流名称、Kinesis shard 和 AWS 区域。"
#: ../../guides/stacks/greengrass.rst:375
msgid "Cloud storage using AWS S3 bucket:"
msgstr "使用 AWS S3 存储桶的云存储:"
#: ../../guides/stacks/greengrass.rst:377
msgid ""
"When the :command:`enable_s3_jpeg_output` variable is set to True, it "
"enables uploading and storing processed frames (in jpeg format) in an AWS"
" S3 bucket. You must set up and specify the S3 bucket name in the AWS "
"Greengrass samples to store the JPEG images. The images are named using "
"the timestamp and uploaded to S3."
msgstr "将 :command:`enable_s3_jpeg_output` 变量设置为 True 时,它允许在 AWS S3 存储桶中上传和存储已处理的帧JPEG 格式)。您必须在 AWS Greengrass 示例中设置和指定用来存储 JPEG 图像的 S3 存储桶名称。这些图像使用时间戳命名,并上传到 S3。"
#: ../../guides/stacks/greengrass.rst:383
msgid "Local storage:"
msgstr "本地存储:"
#: ../../guides/stacks/greengrass.rst:385
msgid ""
"When the :command:`enable_s3_jpeg_output` variable is set to True, it "
"enables storing processed frames (in jpeg format) on the edge device. The"
" images are named using the timestamp and stored in a directory specified"
" by :command:`PARAM_OUTPUT_DIRECTORY`."
msgstr "将 :command:`enable_s3_jpeg_output` 变量设置为 True 时它允许在边缘设备上存储已处理的帧JPEG 格式)。这些图像使用时间戳命名,并存储在由 :command:`PARAM_OUTPUT_DIRECTORY` 指定的目录中。"
#: ../../guides/stacks/greengrass.rst:391
msgid "References"
msgstr "参考"
#: ../../guides/stacks/greengrass.rst:393
msgid "AWS Greengrass: https://aws.amazon.com/greengrass/"
msgstr "AWS Greengrasshttps://aws.amazon.com/greengrass/"
#: ../../guides/stacks/greengrass.rst:394
msgid "AWS Lambda: https://aws.amazon.com/lambda/"
msgstr "AWS Lambdahttps://aws.amazon.com/lambda/"
#: ../../guides/stacks/greengrass.rst:395
msgid "AWS Kinesis: https://aws.amazon.com/kinesis/"
msgstr "AWS Kinesishttps://aws.amazon.com/kinesis/"
#~ msgid "This tutorial demonstrates how to:"
#~ msgstr "本教程演示了如何:"
#~ msgid "Refer to the following topics:"
#~ msgstr "请参阅以下主题:"
#~ msgid ""
#~ "Follow these instructions for `converting "
#~ "deep learning models to Intermediate "
#~ "Representation using Model Optimizer`_. To "
#~ "optimize either of the sample models "
#~ "described above, run one of the "
#~ "following commands."
#~ msgstr ""
#~ "按照 `converting deep learning models to"
#~ " Intermediate Representation using Model "
#~ "Optimizer`_ 中的说明执行操作。要优化上述任一示例模型,请运行以下命令之一。"
#~ msgid "Follow the instructions here to `view the output on IoT cloud`_."
#~ msgstr "按照这里的说明`view the output on IoT cloud`_。"


@@ -64,7 +64,7 @@ msgstr ":ref:`guides`"
#: ../../index.rst:31
msgid ""
"Guides cover a range of topics from |CL| features and tooling, to system "
"maintenance, network, and stacks."
"maintenance, and network."
msgstr "指南页面涵盖了从 |CL| 功能和工具到系统维护和网络的一系列主题。"
#: ../../index.rst:34


@@ -47,11 +47,3 @@ Kernel
kernel/*
Stacks
=======
.. toctree::
:maxdepth: 1
:glob:
stacks/*


@@ -15,11 +15,6 @@
**autospec** is a tool to assist in the automated creation and
maintenance of RPM packaging in Clear Linux OS.
:ref:`dlrs`
This tutorial shows you how to run benchmarking workloads in Clear
Linux OS using TensorFlow\* or PyTorch\* with the Deep Learning
Reference Stack.
:ref:`docker`
Clear Linux OS supports multiple containerization platforms,
including a Docker solution.