AJ ONeal 2019-07-01 02:44:54 -06:00
parent 84f1dacbac
commit 870d149797
547 changed files with 239341 additions and 0 deletions

17
vendor/git.rootprojects.org/root/go-gitver/.gitignore generated vendored Normal file

@@ -0,0 +1,17 @@
xversion.go
zversion.go
# ---> Go
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out

312
vendor/git.rootprojects.org/root/go-gitver/LICENSE generated vendored Normal file

@@ -0,0 +1,312 @@
Mozilla Public License Version 2.0
1. Definitions
1.1. "Contributor" means each individual or legal entity that creates, contributes
to the creation of, or owns Covered Software.
1.2. "Contributor Version" means the combination of the Contributions of others
(if any) used by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution" means Covered Software of a particular Contributor.
1.4. "Covered Software" means Source Code Form to which the initial Contributor
has attached the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses" means
(a) that the initial Contributor has attached the notice described in Exhibit
B to the Covered Software; or
(b) that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a Secondary
License.
1.6. "Executable Form" means any form of the work other than Source Code Form.
1.7. "Larger Work" means a work that combines Covered Software with other
material, in a separate file or files, that is not Covered Software.
1.8. "License" means this document.
1.9. "Licensable" means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications" means any of the following:
(a) any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
(b) any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor means any patent claim(s), including
without limitation, method, process, and apparatus claims, in any patent Licensable
by such Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import, or
transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License" means either the GNU General Public License, Version
2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form" means the form of the work preferred for making modifications.
1.14. "You" (or "Your") means an individual or a legal entity exercising rights
under this License. For legal entities, "You" includes any entity that controls,
is controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or otherwise,
or (b) ownership of more than fifty percent (50%) of the outstanding shares
or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive
license:
(a) under intellectual property rights (other than patent or trademark) Licensable
by such Contributor to use, reproduce, make available, modify, display, perform,
distribute, and otherwise exploit its Contributions, either on an unmodified
basis, with Modifications, or as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions or
its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
(a) for any code that a Contributor has removed from Covered Software; or
(b) for infringements caused by: (i) Your and any other third party's modifications
of Covered Software, or (ii) the combination of its Contributions with other
software (except as part of its Contributor Version); or
(c) under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the notice
requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to distribute
the Covered Software under a subsequent version of this License (see Section
10.2) or under the terms of a Secondary License (if permitted under the terms
of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the rights
to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any Modifications
that You create or to which You contribute, must be under the terms of this
License. You must inform recipients that the Source Code Form of the Covered
Software is governed by the terms of this License, and how they can obtain
a copy of this License. You may not attempt to alter or restrict the recipients'
rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the Executable
Form how they can obtain a copy of such Source Code Form by reasonable means
in a timely manner, at a charge no more than the cost of distribution to the
recipient; and
(b) You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for the
Executable Form does not attempt to limit or alter the recipients' rights
in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice, provided
that You also comply with the requirements of this License for the Covered
Software. If the Larger Work is a combination of Covered Software with a work
governed by one or more Secondary Licenses, and the Covered Software is not
Incompatible With Secondary Licenses, this License permits You to additionally
distribute such Covered Software under the terms of such Secondary License(s),
so that the recipient of the Larger Work may, at their option, further distribute
the Covered Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered Software,
except that You may alter any license notices to the extent required to remedy
known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support, indemnity
or liability obligations to one or more recipients of Covered Software. However,
You may do so only on Your own behalf, and not on behalf of any Contributor.
You must make it absolutely clear that any such warranty, support, indemnity,
or liability obligation is offered by You alone, and You hereby agree to indemnify
every Contributor for any liability incurred by such Contributor as a result
of warranty, support, indemnity or liability terms You offer. You may include
additional disclaimers of warranty and limitations of liability specific to
any jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with
all distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be sufficiently
detailed for a recipient of ordinary skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if
You fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor are
reinstated (a) provisionally, unless and until such Contributor explicitly
and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor
fails to notify You of the non-compliance by some reasonable means prior to
60 days after You have come back into compliance. Moreover, Your grants from
a particular Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the first
time You have received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent infringement
claim (excluding declaratory judgment actions, counter-claims, and cross-claims)
alleging that a Contributor Version directly or indirectly infringes any patent,
then the rights granted to You by any and all Contributors for the Covered
Software under Section 2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end
user license agreements (excluding distributors and resellers) which have
been validly granted by You or Your distributors under this License prior
to termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire risk
as to the quality and performance of the Covered Software is with You. Should
any Covered Software prove defective in any respect, You (not any Contributor)
assume the cost of any necessary servicing, repair, or correction. This disclaimer
of warranty constitutes an essential part of this License. No use of any Covered
Software is authorized under this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any character
including, without limitation, damages for lost profits, loss of goodwill,
work stoppage, computer failure or malfunction, or any and all other commercial
damages or losses, even if such party shall have been informed of the possibility
of such damages. This limitation of liability shall not apply to liability
for death or personal injury resulting from such party's negligence to the
extent applicable law prohibits such limitation. Some jurisdictions do not
allow the exclusion or limitation of incidental or consequential damages,
so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it enforceable.
Any law or regulation which provides that the language of a contract shall
be construed against the drafter shall not be used to construe this License
against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section 10.3,
no one other than the license steward has the right to modify or publish new
versions of this License. Each version will be given a distinguishing version
number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or under
the terms of any subsequent version published by the license steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to create
a new license for such software, you may create and use a modified version
of this License if you rename the license and remove any references to the
name of the license steward (except to note that such modified license differs
from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With Secondary
Licenses under the terms of this version of the License, the notice described
in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the terms of the Mozilla Public License,
v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain
one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible With Secondary Licenses", as defined
by the Mozilla Public License, v. 2.0.

195
vendor/git.rootprojects.org/root/go-gitver/README.md generated vendored Normal file

@@ -0,0 +1,195 @@
# git-version.go
Use git tags to add semver to your go package.
```txt
Goal: Either use an exact version like v1.0.0
or translate the git version like v1.0.0-4-g0000000
to a semver like v1.0.1-pre4+g0000000
Fail gracefully when git repo isn't available.
```
# Demo
Generate an `xversion.go` file:
```bash
go run git.rootprojects.org/root/go-gitver
cat xversion.go
```
<small>**Note**: The file is named `xversion.go` by default so that the
generated file's `init()` will come later, and thus take priority, over
most other files.</small>
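The generated file is tiny. Based on the template embedded in `gitver.go`, a `package main` build comes out roughly like this (the revision, version, and timestamp shown here are placeholders):
```go
// Code generated by go generate; DO NOT EDIT.
package main

func init() {
	GitRev = "0000000"
	GitVersion = "v1.0.0-pre0+0000000"
	GitTimestamp = "0000-00-00T00:00:00+0000"
}
```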
See `go-gitver`'s self-generated version:
```bash
go run git.rootprojects.org/root/go-gitver version
```
# QuickStart
Add this to the top of your main file:
```go
//go:generate go run -mod=vendor git.rootprojects.org/root/go-gitver
```
Add a file that imports go-gitver (for versioning):
```go
// +build tools

package example

import _ "git.rootprojects.org/root/go-gitver"
```
Change your build instructions to something like this:
```bash
go mod vendor
go generate -mod=vendor ./...
go build -mod=vendor -o example cmd/example/*.go
```
You don't have to use `mod vendor`, but I highly recommend it.
# Options
```txt
version print version and exit
--fail exit with non-zero status code on failure
--package <name> will set the package name
--outfile <name> will write to the given file path instead of `xversion.go`
```
ENVs
```bash
# Alias for --fail
GITVER_FAIL=true
```
For example:
```go
//go:generate go run -mod=vendor git.rootprojects.org/root/go-gitver --fail
```
```bash
go run -mod=vendor git.rootprojects.org/root/go-gitver version
```
# Usage
See `examples/basic`
1. Create a `tools` package in your project
2. Guard it against regular builds with `// +build tools`
3. Include `_ "git.rootprojects.org/root/go-gitver"` in the imports
4. Declare `var GitRev, GitVersion, GitTimestamp string` in your `package main`
5. Include `//go:generate go run -mod=vendor git.rootprojects.org/root/go-gitver` as well
`tools/tools.go`:
```go
// +build tools

// This is a dummy package for build tooling
package tools

import (
	_ "git.rootprojects.org/root/go-gitver"
)
```
`main.go`:
```go
//go:generate go run git.rootprojects.org/root/go-gitver --fail
package main
import "fmt"
var (
GitRev = "0000000"
GitVersion = "v0.0.0-pre0+0000000"
GitTimestamp = "0000-00-00T00:00:00+0000"
)
func main() {
fmt.Println(GitRev)
fmt.Println(GitVersion)
fmt.Println(GitTimestamp)
}
```
If you're using `go mod vendor` (which I highly recommend that you do),
you'd modify the `go:generate` ever so slightly:
```go
//go:generate go run -mod=vendor git.rootprojects.org/root/go-gitver --fail
```
The only reason I didn't do that in the example is that I'd be including
the repository in itself, and that would be... weird.
# Why a tools package?
> import "git.rootprojects.org/root/go-gitver" is a program, not an importable package
Having a tools package guarded by a build tag that you never actually build is a nice way
to add the exact version of a command package used for tooling to your `go.mod`
(via `go mod tidy`) without getting the error above.
# git: behind the curtain
These are the commands that are used under the hood to produce the versions.
Shows the git tag + description. Assumes that you're using the semver format `v1.0.0` for your base tags.
```bash
git describe --tags --dirty --always
# v1.0.0
# v1.0.0-1-g0000000
# v1.0.0-dirty
```
Shows the commit date (when the commit made it into the current tree).
Internally we use the current date when the working tree is dirty.
```bash
git show v1.0.0-1-g0000000 --format=%cd --date=format:%Y-%m-%dT%H:%M:%SZ%z --no-patch
# 2010-01-01T20:30:00Z-0600
# fatal: ambiguous argument 'v1.0.0-1-g0000000-dirty': unknown revision or path not in the working tree.
```
Shows the most recent commit.
```bash
git rev-parse HEAD
# 0000000000000000000000000000000000000000
```
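The CLI does this parsing via the `gitver` subpackage; a minimal sketch of calling it directly from Go:
```go
package main

import (
	"fmt"
	"log"

	"git.rootprojects.org/root/go-gitver/gitver"
)

func main() {
	// ExecAndParse runs the git commands shown above and returns
	// the parsed revision, semver-style version, and commit timestamp.
	v, err := gitver.ExecAndParse()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.Rev, v.Version, v.Timestamp)
}
```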
# Errors
### cannot find package "."
```txt
package git.rootprojects.org/root/go-gitver: cannot find package "." in:
/Users/me/go-example/vendor/git.rootprojects.org/root/go-gitver
cmd/example/example.go:1: running "go": exit status 1
```
You forgot to update deps and re-vendor:
```bash
go mod tidy
go mod vendor
```

98
vendor/git.rootprojects.org/root/go-gitver/gitver.go generated vendored Normal file

@@ -0,0 +1,98 @@
//go:generate go run -mod=vendor git.rootprojects.org/root/go-gitver
package main
import (
"bytes"
"fmt"
"go/format"
"log"
"os"
"text/template"
"time"
"git.rootprojects.org/root/go-gitver/gitver"
)
var GitRev, GitVersion, GitTimestamp string
var exitCode int
var verFile = "xversion.go"
func main() {
pkg := "main"
args := os.Args[1:]
for i := range args {
arg := args[i]
if "-f" == arg || "--fail" == arg {
exitCode = 1
} else if ("--outfile" == arg || "-o" == arg) && len(args) > i+1 {
verFile = args[i+1]
args[i+1] = ""
} else if "--package" == arg && len(args) > i+1 {
pkg = args[i+1]
args[i+1] = ""
} else if "-V" == arg || "version" == arg || "-version" == arg || "--version" == arg {
fmt.Println(GitRev)
fmt.Println(GitVersion)
fmt.Println(GitTimestamp)
os.Exit(0)
}
}
if "" != os.Getenv("GITVER_FAIL") && "false" != os.Getenv("GITVER_FAIL") {
exitCode = 1
}
v, err := gitver.ExecAndParse()
if nil != err {
log.Fatalf("Failed to get git version: %s\n", err)
os.Exit(exitCode)
}
// Create or overwrite the go file from template
var buf bytes.Buffer
if err := versionTpl.Execute(&buf, struct {
Package string
Timestamp string
Version string
GitRev string
}{
Package: pkg,
Timestamp: v.Timestamp.Format(time.RFC3339),
Version: v.Version,
GitRev: v.Rev,
}); nil != err {
panic(err)
}
// Format
src, err := format.Source(buf.Bytes())
if nil != err {
panic(err)
}
// Write to disk (in the Current Working Directory)
f, err := os.Create(verFile)
if nil != err {
panic(err)
}
if _, err := f.Write(src); nil != err {
panic(err)
}
if err := f.Close(); nil != err {
panic(err)
}
}
var versionTpl = template.Must(template.New("").Parse(`// Code generated by go generate; DO NOT EDIT.
package {{ .Package }}
func init() {
GitRev = "{{ .GitRev }}"
{{ if .Version -}}
GitVersion = "{{ .Version }}"
{{ end -}}
GitTimestamp = "{{ .Timestamp }}"
}
`))


@@ -0,0 +1,143 @@
package gitver
import (
"fmt"
"os/exec"
"regexp"
"strconv"
"strings"
"time"
)
var exactVer *regexp.Regexp
var gitVer *regexp.Regexp
func init() {
// exactly vX.Y.Z (go-compatible semver)
exactVer = regexp.MustCompile(`^v\d+\.\d+\.\d+$`)
// vX.Y.Z-n-g0000000 git post-release, semver prerelease
// vX.Y.Z-dirty git post-release, semver prerelease
gitVer = regexp.MustCompile(`^(v\d+\.\d+)\.(\d+)(-(\d+))?(-(g[0-9a-f]+))?(-(dirty))?`)
}
type Versions struct {
Timestamp time.Time
Version string
Rev string
}
func ExecAndParse() (*Versions, error) {
desc, err := gitDesc()
if nil != err {
return nil, err
}
rev, err := gitRev()
if nil != err {
return nil, err
}
ver, err := semVer(desc)
if nil != err {
return nil, err
}
ts, err := gitTimestamp(desc)
if nil != err {
ts = time.Now()
}
return &Versions{
Timestamp: ts,
Version: ver,
Rev: rev,
}, nil
}
func gitDesc() (string, error) {
args := strings.Split("git describe --tags --dirty --always", " ")
cmd := exec.Command(args[0], args[1:]...)
out, err := cmd.CombinedOutput()
if nil != err {
// Don't panic, just carry on
//out = []byte("v0.0.0-0-g0000000")
return "", err
}
return strings.TrimSpace(string(out)), nil
}
func gitRev() (string, error) {
args := strings.Split("git rev-parse HEAD", " ")
cmd := exec.Command(args[0], args[1:]...)
out, err := cmd.CombinedOutput()
if nil != err {
return "", fmt.Errorf("\nUnexpected Error\n\n"+
"Please open an issue at https://git.rootprojects.org/root/go-gitver/issues/new \n"+
"Please include the following:\n\n"+
"Command: %s\n"+
"Output: %s\n"+
"Error: %s\n"+
"\nPlease and Thank You.\n\n", strings.Join(args, " "), out, err)
}
return strings.TrimSpace(string(out)), nil
}
func semVer(desc string) (string, error) {
if exactVer.MatchString(desc) {
// v1.0.0
return desc, nil
}
if !gitVer.MatchString(desc) {
return "", nil
}
// (v1.0).(0)(-(1))(-(g0000000))(-(dirty))
vers := gitVer.FindStringSubmatch(desc)
patch, err := strconv.Atoi(vers[2])
if nil != err {
return "", fmt.Errorf("\nUnexpected Error\n\n"+
"Please open an issue at https://git.rootprojects.org/root/go-gitver/issues/new \n"+
"Please include the following:\n\n"+
"git description: %s\n"+
"RegExp: %#v\n"+
"Error: %s\n"+
"\nPlease and Thank You.\n\n", desc, gitVer, err)
}
// v1.0.1-pre1
// v1.0.1-pre1+g0000000
// v1.0.1-pre0+dirty
// v1.0.1-pre0+g0000000-dirty
if "" == vers[4] {
vers[4] = "0"
}
ver := fmt.Sprintf("%s.%d-pre%s", vers[1], patch+1, vers[4])
if "" != vers[6] || "dirty" == vers[8] {
ver += "+"
if "" != vers[6] {
ver += vers[6]
if "" != vers[8] {
ver += "-"
}
}
ver += vers[8]
}
return ver, nil
}
func gitTimestamp(desc string) (time.Time, error) {
args := []string{
"git",
"show", desc,
"--format=%cd",
"--date=format:%Y-%m-%dT%H:%M:%SZ%z",
"--no-patch",
}
cmd := exec.Command(args[0], args[1:]...)
out, err := cmd.CombinedOutput()
if nil != err {
// a dirty desc was probably used
return time.Time{}, err
}
return time.Parse(time.RFC3339, strings.TrimSpace(string(out)))
}

3
vendor/git.rootprojects.org/root/go-gitver/go.mod generated vendored Normal file

@@ -0,0 +1,3 @@
module git.rootprojects.org/root/go-gitver
go 1.12


@@ -0,0 +1,9 @@
package main
// use recently generated version info as a fallback
// for when git isn't present (i.e. go run <url>)
func init() {
GitRev = "0b8c2d86df4bfe32ff4534eec84cd56909c398e9"
GitVersion = "v1.1.1"
GitTimestamp = "2019-06-21T00:18:13-06:00"
}

5
vendor/github.com/BurntSushi/toml/.gitignore generated vendored Normal file

@@ -0,0 +1,5 @@
TAGS
tags
.*.swp
tomlcheck/tomlcheck
toml.test

15
vendor/github.com/BurntSushi/toml/.travis.yml generated vendored Normal file

@@ -0,0 +1,15 @@
language: go
go:
- 1.1
- 1.2
- 1.3
- 1.4
- 1.5
- 1.6
- tip
install:
- go install ./...
- go get github.com/BurntSushi/toml-test
script:
- export PATH="$PATH:$HOME/gopath/bin"
- make test

3
vendor/github.com/BurntSushi/toml/COMPATIBLE generated vendored Normal file

@@ -0,0 +1,3 @@
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)

21
vendor/github.com/BurntSushi/toml/COPYING generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013 TOML authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

19
vendor/github.com/BurntSushi/toml/Makefile generated vendored Normal file

@@ -0,0 +1,19 @@
install:
go install ./...
test: install
go test -v
toml-test toml-test-decoder
toml-test -encoder toml-test-encoder
fmt:
gofmt -w *.go */*.go
colcheck *.go */*.go
tags:
find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS
push:
git push origin master
git push github master

218
vendor/github.com/BurntSushi/toml/README.md generated vendored Normal file

@@ -0,0 +1,218 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/toml-lang/toml
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.
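And a minimal sketch of the case insensitive fallback mentioned above (the struct and key here are made up for illustration):
```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type pet struct {
	// The TOML key "name" doesn't match exactly, but it does case insensitively.
	Name string
}

func main() {
	var p pet
	if _, err := toml.Decode(`name = "Cauchy"`, &p); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(p.Name) // Cauchy
}
```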

509
vendor/github.com/BurntSushi/toml/decode.go generated vendored Normal file

@@ -0,0 +1,509 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}

121
vendor/github.com/BurntSushi/toml/decode_meta.go generated vendored Normal file

@@ -0,0 +1,121 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchially. e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}

27
vendor/github.com/BurntSushi/toml/doc.go generated vendored Normal file

@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/toml-lang/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
*/
package toml

568
vendor/github.com/BurntSushi/toml/encode.go generated vendored Normal file

@@ -0,0 +1,568 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we used that.
// Basically, this prevents the encoder for handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal with at least one
// number on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
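// For example, for a map {"a": 1, "b": {...}}, "a = 1" must be written before
// the table header for "b"; otherwise "a" would be parsed as belonging to the
// "b" sub-table.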
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeOfGo returns the TOML type of a Go value. The type may be `nil`,
// which means no concrete TOML type could be found. It is used to determine
// whether the types of array elements are mixed (which is forbidden).
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}

19
vendor/github.com/BurntSushi/toml/encoding_types.go generated vendored Normal file
View File

@ -0,0 +1,19 @@
// +build go1.2

package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler

View File

@ -0,0 +1,18 @@
// +build !go1.2

package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

953
vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file
View File

@ -0,0 +1,953 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
itemInlineTableStart
itemInlineTableEnd
)
const (
eof = 0
comma = ','
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
inlineTableStart = '{'
inlineTableEnd = '}'
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
// Allow for backing up up to three runes.
// This is necessary because TOML contains 3-rune tokens (""" and ''').
prevWidths [3]int
nprev int // how many of prevWidths are in use
// If we emit an eof, we can still back up, but it is not OK to call
// next again.
atEOF bool
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
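// nextItem advances the lexer's state machine until an item is emitted and
// returns that item.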
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input,
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.atEOF {
panic("next called after EOF")
}
if lx.pos >= len(lx.input) {
lx.atEOF = true
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
lx.prevWidths[2] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[0]
if lx.nprev < 3 {
lx.nprev++
}
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.prevWidths[0] = w
lx.pos += w
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called at most three times between calls to next.
func (lx *lexer) backup() {
if lx.atEOF {
lx.atEOF = false
return
}
if lx.nprev < 1 {
panic("backed up too far")
}
w := lx.prevWidths[0]
lx.prevWidths[0] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[2]
lx.nprev--
lx.pos -= w
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Note that any value that is a character is escaped if it's a special
// character (newlines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("unexpected EOF")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a newline. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a newline for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.emit(itemEOF)
return nil
}
return lx.errorf("expected a top-level item to end with a newline, "+
"comment, or EOF, but got %q instead", r)
}
// lexTableStart lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("expected end of table array name delimiter %q, "+
"but got %q instead", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("unexpected end of table name " +
"(table names cannot be empty)")
case r == tableSep:
return lx.errorf("unexpected table separator " +
"(table names cannot be empty)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("expected '.' or ']' to end table name, "+
"but got %q instead", r)
}
}
// lexKeyStart consumes a key name up until the first non-whitespace character.
// lexKeyStart will ignore whitespace.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("unexpected key separator %q", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("bare keys cannot contain %q", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("expected key separator %q, but got %q instead",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT newlines.
// In array syntax, the array states are responsible for ignoring newlines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case inlineTableStart:
lx.ignore()
lx.emit(itemInlineTableStart)
return lexInlineTableValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore '''
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("floats must start with a digit, not '.'")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("expected value but found %q instead", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and newlines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == arrayEnd:
// NOTE(caleb): The spec isn't clear about whether you can have
// a trailing comma or not, so we'll allow it.
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes everything between the end of an array value and
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf(
"expected a comma or array terminator %q, but got %q instead",
arrayEnd, r,
)
}
// lexArrayEnd finishes the lexing of an array.
// It assumes that a ']' has just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexInlineTableValue consumes one key/value pair in an inline table.
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
func lexInlineTableValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == inlineTableEnd:
return lexInlineTableEnd
}
lx.backup()
lx.push(lexInlineTableValueEnd)
return lexKeyStart
}
// lexInlineTableValueEnd consumes everything between the end of an inline table
// key/value pair and the next pair (or the end of the table):
// it ignores whitespace and expects either a ',' or a '}'.
func lexInlineTableValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexInlineTableValue
case r == inlineTableEnd:
return lexInlineTableEnd
}
return lx.errorf("expected a comma or an inline table terminator %q, "+
"but got %q instead", inlineTableEnd, r)
}
// lexInlineTableEnd finishes the lexing of an inline table.
// It assumes that a '}' has just been consumed.
func lexInlineTableEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemInlineTableEnd)
return lx.pop()
}
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case '\\':
return lexMultilineStringEscape
case stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b':
fallthrough
case 't':
fallthrough
case 'n':
fallthrough
case 'f':
fallthrough
case 'r':
fallthrough
case '"':
fallthrough
case '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("invalid escape character %q; only the following "+
"escape characters are allowed: "+
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected four hexadecimal digits after '\u', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected eight hexadecimal digits after '\U', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z', '+':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if !unicode.IsLetter(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("expected value but found %q instead", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first newline character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
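// isBareKeyChar reports whether r may appear in a bare (unquoted) key or
// table name: ASCII letters, digits, '_' and '-'.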
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}

592
vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file
View File

@ -0,0 +1,592 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
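// parse lexes and parses the given TOML data. Any parseError raised during
// parsing is recovered and returned as err; other panics are re-raised.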
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
case itemInlineTableStart:
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
p.currentKey = ""
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ != itemKeyStart {
p.bug("Expected key start but instead found %q, around line %d",
it.val, p.approxLine)
}
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
// retrieve key
k := p.next()
p.approxLine = k.line
kname := p.keyString(k)
// retrieve value
p.currentKey = kname
val, typ := p.value(p.next())
// make sure we keep metadata up to date
p.setType(kname, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[kname] = val
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
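// For example, "1_000" is accepted, while "_1000", "1000_" and "1__000" are
// not.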
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
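// For example, "3.14" is accepted, while "123." and "1.e2" are not.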
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
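// For example, the table header [a.b.c] implicitly creates the hashes 'a'
// and 'a.b' (if they do not already exist) before making 'a.b.c' the current
// context.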
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, accounting for
// implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is a table of hashes. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as an implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
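// stripEscapedWhitespace removes the newline and leading whitespace that
// follow a line-ending backslash in a multiline basic string, so that
// "a \" at the end of one line followed by an indented "b" becomes "a b".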
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}

1
vendor/github.com/BurntSushi/toml/session.vim generated vendored Normal file
View File

@ -0,0 +1 @@
au BufWritePost *.go silent!make tags > /dev/null 2>&1

91
vendor/github.com/BurntSushi/toml/type_check.go generated vendored Normal file
View File

@ -0,0 +1,91 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be militating
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}

242
vendor/github.com/BurntSushi/toml/type_fields.go generated vendored Normal file
View File

@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}

3
vendor/github.com/UnnoTed/fileb0x/.gitignore generated vendored Normal file
View File

@ -0,0 +1,3 @@
_example/simple/static/
_example/echo/myEmbeddedFiles/
fileb0x

30
vendor/github.com/UnnoTed/fileb0x/CHANGELOG.md generated vendored Normal file
View File

@ -0,0 +1,30 @@
# Changelog
All notable changes to this project will be documented in this file.
To update simply run:
```bash
go get -u github.com/UnnoTed/fileb0x
```
## 2018-04-17
### Changed
- Improved file processing's speed
- Improved walk speed with [godirwalk](https://github.com/karrick/godirwalk)
- Fixed updater's progressbar
## 2018-03-17
### Added
- Added condition to files' template to avoid creating error variable when not required.
## 2018-03-14
### Removed
- [go-dry](https://github.com/ungerik/go-dry) dependency.
## 2018-02-22
### Added
- Avoid rewriting the main b0x file by checking an MD5 hash of the (file's modification time + cfg).
- Avoid rewriting unchanged files by comparing the Timestamp of the b0x's file and the file's modification time.
- Config option `lcf` which when enabled along with `spread` **l**ogs the list of **c**hanged **f**iles to the console.
- Message to inform that no file or cfg changes have been detecTed (not an error).
### Changed
- Config option `clean` to only remove unused b0x files instead of everything.

21
vendor/github.com/UnnoTed/fileb0x/LICENSE generated vendored Normal file
View File

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2016 UnnoTed (UnnoTedx@gmail.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

627
vendor/github.com/UnnoTed/fileb0x/README.md generated vendored Normal file
View File

@ -0,0 +1,627 @@
fileb0x [![Circle CI](https://circleci.com/gh/UnnoTed/fileb0x.svg?style=svg)](https://circleci.com/gh/UnnoTed/fileb0x) [![GoDoc](https://godoc.org/github.com/UnnoTed/fileb0x?status.svg)](https://godoc.org/github.com/UnnoTed/fileb0x) [![GoReportCard](https://goreportcard.com/badge/unnoted/fileb0x)](https://goreportcard.com/report/unnoted/fileb0x)
-------
### What is fileb0x?
A better, customizable tool to embed files in Go.
It is an alternative to `go-bindata` that has better features and a more organized configuration.
###### TL;DR
a better `go-bindata`
-------
### How does it compare to `go-bindata`?
Feature | fileb0x | go-bindata
--------------------- | ------------- | ------------------
gofmt | yes (optional) | no
golint | safe | unsafe
gzip compression | yes | yes
gzip decompression | yes (optional: runtime) | yes (on read)
gzip compression levels | yes | no
separated prefix / base for each file | yes | no (all files only)
different build tags for each file | yes | no
exclude / ignore files | yes (glob) | yes (regex)
spread files | yes | no (single file only)
unexported vars/funcs | yes (optional) | no
virtual memory file system | yes | no
http file system / handler | yes | no
replace text in files | yes | no
glob support | yes | no (walk folders only)
regex support | no | yes (ignore files only)
config file | yes (config file only) | no (cmd args only)
update files remotely | yes | no
-------
### What are the benefits of using a Virtual Memory File System?
By using a virtual memory file system you can access files as if they were stored on a hard drive: instead of a `map[string][]byte` you get real IO readers and writers.
This means you can `read`, `write`, `remove`, `stat` and `rename` files, as well as `make`, `remove` and `stat` directories.
###### TL;DR
The virtual memory file system behaves much like files stored on a hard drive.
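As a minimal sketch of what that looks like in practice (assuming a b0x generated into a package named `static`; the import path `example.com/yourapp/static` is just a placeholder), embedded files are handled much like regular ones:
```go
package main

import (
	"log"

	// placeholder import path for your generated package
	"example.com/yourapp/static"
)

func main() {
	// read a file straight from the in-memory file system
	b, err := static.ReadFile("public/index.html")
	if err != nil {
		log.Fatal(err)
	}

	// write it back under a new name (in memory only)
	if err := static.WriteFile("public/copy.html", b, 0644); err != nil {
		log.Fatal(err)
	}

	// rename/move it, just like on a real disk
	if err := static.FS.Rename(static.CTX, "public/copy.html", "public/renamed.html"); err != nil {
		log.Fatal(err)
	}
}
```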
### Features
- [x] golint safe code output
- [x] optional: gzip compression (with optional run-time decompression)
- [x] optional: formatted code (gofmt)
- [x] optional: spread files
- [x] optional: unexported variables, functions and types
- [x] optional: include multiple files and folders
- [x] optional: exclude files or/and folders
- [x] optional: replace text in files
- [x] optional: custom base and prefix path
- [x] Virtual Memory FileSystem - [webdav](https://godoc.org/golang.org/x/net/webdav)
- [x] HTTP FileSystem and Handler
- [x] glob support - [doublestar](https://github.com/bmatcuk/doublestar)
- [x] json / yaml / toml support
- [x] optional: Update files remotely
- [x] optional: Build tags for each file
### License
MIT
### Get Started
###### TL;DR QuickStart&trade;
Here's the get-you-going in 30 seconds or less:
```bash
git clone https://github.com/UnnoTed/fileb0x.git
cd fileb0x
cd _example/simple
go generate
go build
./simple
```
* `go.mod` defines the module as `example.com/foo/simple`
* `b0x.yaml` defines the sub-package `static` from the folder `public`
* `main.go` includes the comment `//go:generate go run github.com/UnnoTed/fileb0x b0x.yaml`
* `main.go` also includes the import `example.com/foo/simple/static`
* `go generate` locally installs `fileb0x`, which generates `./static` according to `b0x.yaml`
* `go build` creates the binary `simple` from `package main` in the current folder
* `./simple` runs the self-contained standalone webserver with built-in files from `public`
<details>
<summary>How to use it?</summary>
##### 1. Download
```bash
go get -u github.com/UnnoTed/fileb0x
```
##### 2. Create a config file
First you need to create a config file; it can be `*.json`, `*.yaml` or `*.toml` (`*` means any file name).
Then write the configuration you want into the file; you can use the example files below as a starting point.
json config file example [b0x.json](https://raw.githubusercontent.com/UnnoTed/fileb0x/master/_example/simple/b0x.json)
yaml config file example [b0x.yaml](https://github.com/UnnoTed/fileb0x/blob/master/_example/simple/b0x.yaml)
toml config file example [b0x.toml](https://github.com/UnnoTed/fileb0x/blob/master/_example/simple/b0x.toml)
##### 3. Run
If you prefer to run it from the `cmd or terminal`, edit and run the command below.
```bash
fileb0x YOUR_CONFIG_FILE.yaml
```
Or, if you wish to generate the embedded files through `go generate`, just add and edit the line below in your `main.go`.
```go
//go:generate fileb0x YOUR_CONFIG_FILE.yaml
```
</details>
<details>
<summary>What functions and variables fileb0x let me access and what are they for?</summary>
#### HTTP
```go
var HTTP http.FileSystem
```
##### Type
[`http.FileSystem`](https://golang.org/pkg/net/http/#FileSystem)
##### What is it?
An in-memory HTTP file system.
##### What does it do?
Serves files through an HTTP FileServer.
##### How to use it?
```go
// http.ListenAndServe will create a server at the port 8080
// it will take http.FileServer() as a param
//
// http.FileServer() will use HTTP as a file system so all your files
// can be available through port 8080
http.ListenAndServe(":8080", http.FileServer(myEmbeddedFiles.HTTP))
```
</details>
<details>
<summary>How to use it with `echo`?</summary>
```go
package main
import (
"github.com/labstack/echo"
"github.com/labstack/echo/engine/standard"
// your embedded files import here ...
"github.com/UnnoTed/fileb0x/_example/echo/myEmbeddedFiles"
)
func main() {
e := echo.New()
// enable any filename to be loaded from in-memory file system
e.GET("/*", echo.WrapHandler(myEmbeddedFiles.Handler))
// http://localhost:1337/public/README.md
e.Start(":1337")
}
```
##### How to serve a single file through `echo`?
```go
package main
import (
"log"
"net/http"
"github.com/labstack/echo"
// your embedded files import here ...
"github.com/UnnoTed/fileb0x/_example/echo/myEmbeddedFiles"
)
func main() {
e := echo.New()
// read ufo.html from in-memory file system
htmlb, err := myEmbeddedFiles.ReadFile("ufo.html")
if err != nil {
log.Fatal(err)
}
// convert to string
html := string(htmlb)
// serve ufo.html through "/"
e.GET("/", func(c echo.Context) error {
// serve as html
return c.HTML(http.StatusOK, html)
})
e.Start(":1337")
}
```
</details>
<details>
<summary>Examples</summary>
[simple example](https://github.com/UnnoTed/fileb0x/tree/master/_example/simple) -
[main.go](https://github.com/UnnoTed/fileb0x/blob/master/_example/simple/main.go)
[echo example](https://github.com/UnnoTed/fileb0x/tree/master/_example/echo) -
[main.go](https://github.com/UnnoTed/fileb0x/blob/master/_example/echo/main.go)
```go
package main
import (
"log"
"net/http"
// your generated package
"github.com/UnnoTed/fileb0x/_example/simple/static"
)
func main() {
files, err := static.WalkDirs("", false)
if err != nil {
log.Fatal(err)
}
log.Println("ALL FILES", files)
// here we'll read the file from the virtual file system
b, err := static.ReadFile("public/README.md")
if err != nil {
log.Fatal(err)
}
// byte to str
s := string(b)
s += "#hello"
// write file back into the virtual file system
err = static.WriteFile("public/README.md", []byte(s), 0644)
if err != nil {
log.Fatal(err)
}
log.Println(string(b))
// true = handler
// false = file system
as := false
// try it -> http://localhost:1337/public/secrets.txt
if as {
// as Handler
panic(http.ListenAndServe(":1337", static.Handler))
} else {
// as File System
panic(http.ListenAndServe(":1337", http.FileServer(static.HTTP)))
}
}
```
</details>
<details>
<summary>Update files remotely</summary>
Having to upload an entire binary just to update some files in a b0x and restart a server isn't something that I like to do...
##### How does it work?
When the updater option is enabled, the next b0x you generate will include an HTTP server. This server uses HTTP basic auth and exposes a single endpoint `/` that accepts two methods: `GET` and `POST`.
The `GET` method responds with a list of file names and the sha256 hash of each file.
The `POST` method is used to upload files: it creates the directory tree for a new file and then creates the file, or it updates an existing file in the virtual memory file system. It responds with the string `ok` when the upload is successful.
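For illustration only, here is a rough sketch of what the `GET` side of that protocol looks like from a client's point of view (it assumes the updater server is listening on `http://localhost:8041` with the credentials `user`/`pass`; the `fileb0x -update=...` command already does all of this for you):
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// mirrors the ResponseInit struct returned by the generated updater server
type hashList struct {
	Success bool
	Hashes  map[string]string // file path -> sha256 hex digest
}

func main() {
	req, err := http.NewRequest("GET", "http://localhost:8041/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// same credentials as in the b0x config (or the fileb0x_* env vars)
	req.SetBasicAuth("user", "pass")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var list hashList
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		log.Fatal(err)
	}
	for name, hash := range list.Hashes {
		fmt.Println(name, hash)
	}
}
```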
##### How to update files remotely?
1. First enable the updater option in your config file:
```yaml
##################
## yaml example ##
##################
# updater allows you to update a b0x in a running server
# without having to restart it
updater:
# disabled by default
enabled: false
# empty mode creates an empty b0x file with just the
# server and the filesystem, then you'll have to upload
# the files later using the cmd:
# fileb0x -update=http://server.com:port b0x.yaml
#
# it avoids long compile time
empty: false
# amount of uploads at the same time
workers: 3
# to get a username and password from a env variable
# leave username and password blank (username: "")
# then set your username and password in the env vars
# (no caps) -> fileb0x_username and fileb0x_password
#
# when using env vars, set it before generating a b0x
# so it can be applied to the updater server.
username: "user" # username: ""
password: "pass" # password: ""
port: 8041
```
2. Generate a b0x with the updater option enabled; don't forget to set the username and password for authentication.
3. When your files update, just run `fileb0x -update=http://yourServer.com:8041 b0x.toml` to update the files in the running server.
</details>
<details>
<summary>Build Tags</summary>
To use build tags for a b0x package, just add the tags to the `tags` property in the main object of your config file:
```yaml
# default: main
pkg: static
# destination
dest: "./static/"
# build tags for the main b0x.go file
tags: "!linux"
```
You can also have different build tags for a list of files: enable the `spread` property in the main object of your config file, then in the `custom` list choose the set of files that should get a different build tag:
```yaml
# default: main
pkg: static
# destination
dest: "./static/"
# build tags for the main b0x.go file
tags: "windows darwin"
# [spread] means it will make a file to hold all fileb0x data
# and put each file into a separate .go file
#
# example:
# there are 2 files in the folder assets: hello.json and world.txt
# when spread is activated, fileb0x will make a file:
# b0x.go or [output]'s data, assets_hello.json.go and assets_world.txt.go
#
#
# type: bool
# default: false
spread: true
# type: array of objects
custom:
# type: array of strings
- files:
- "start_space_ship.exe"
# build tags for this set of files
# it will only work if spread mode is enabled
tags: "windows"
# type: array of strings
- files:
- "ufo.dmg"
# build tags for this set of files
# it will only work if spread mode is enabled
tags: "darwin"
```
The config above will generate:
```yaml
ab0x.go # // +build windows darwin
b0xfile_start_space_ship.exe.go # // +build windows
b0xfile_ufo.dmg.go # // +build darwin
```
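For reference, the header of one of those spread files would look roughly like the sketch below (values in angle brackets, the config file name and the exact layout are placeholders; the comment lines come from fileb0x's file template):
```go
// +build windows

// Code generated by fileb0x at "<generation time>" from config file "b0x.yaml" DO NOT EDIT.
// modified(<file modification time>)
// original path: start_space_ship.exe

package static
```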
</details>
### Functions and Variables
<details>
<summary>FS (File System)</summary>
```go
var FS webdav.FileSystem
```
##### Type
[`webdav.FileSystem`](https://godoc.org/golang.org/x/net/webdav#FileSystem)
##### What is it?
An in-memory file system.
##### What does it do?
Lets you `read, write, remove, stat and rename` files and `make, remove and stat` directories...
##### How to use it?
```go
func main() {
// you have the following functions available
// they all control files/dirs from/to the in-memory file system!
//
//   func Mkdir(name string, perm os.FileMode) error
//   func OpenFile(name string, flag int, perm os.FileMode) (File, error)
//   func RemoveAll(name string) error
//   func Rename(oldName, newName string) error
//   func Stat(name string) (os.FileInfo, error)
// 1. creates a directory
err := myEmbeddedFiles.FS.Mkdir(myEmbeddedFiles.CTX, "assets", 0777)
if err != nil {
log.Fatal(err)
}
// 2. creates a file into the directory we created before and opens it
// with fileb0x you can use ReadFile and WriteFile instead of this complicated thing
f, err := myEmbeddedFiles.FS.OpenFile(myEmbeddedFiles.CTX, "assets/memes.txt", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
if err != nil {
log.Fatal(err)
}
data := []byte("I are programmer I make computer beep boop beep beep boop")
// write the data into the file
n, err := f.Write(data)
if err == nil && n < len(data) {
err = io.ErrShortWrite
}
// close the file, keeping the first error that happened
if err1 := f.Close(); err == nil {
err = err1
}
if err != nil {
log.Fatal(err)
}
// 3. rename a file
// can also move files
err = myEmbeddedFiles.FS.Rename(myEmbeddedFiles.CTX, "assets/memes.txt", "assets/programmer_memes.txt")
if err != nil {
log.Fatal(err)
}
// 4. checks if the file we renamed exists
if _, err = myEmbeddedFiles.FS.Stat(myEmbeddedFiles.CTX, "assets/programmer_memes.txt"); err == nil {
// exists!
// tries to remove the /assets/ directory
// from the in-memory file system
err = myEmbeddedFiles.FS.RemoveAll(myEmbeddedFiles.CTX, "assets")
if err != nil {
log.Fatal(err)
}
}
// 5. checks if the dir we removed exists
if _, err = myEmbeddedFiles.FS.Stat(myEmbeddedFiles.CTX, "public/"); os.IsNotExist(err) {
// doesn't exist!
log.Println("works!")
}
}
```
</details>
<details>
<summary>Handler</summary>
```go
var Handler *webdav.Handler
```
##### Type
[`webdav.Handler`](https://godoc.org/golang.org/x/net/webdav#Handler)
##### What is it?
An HTTP handler implementation.
##### What does it do?
Serves your embedded files.
##### How to use it?
```go
// http.ListenAndServe will create an HTTP server at port 8080
// and use Handler as a http handler to serve your embedded files
http.ListenAndServe(":8080", myEmbeddedFiles.Handler)
```
</details>
<details>
<summary>ReadFile</summary>
```go
func ReadFile(filename string) ([]byte, error)
```
##### Type
[`ioutil.ReadFile`](https://godoc.org/io/ioutil#ReadFile)
##### What is it?
A helper function to read your embedded files.
##### What does it do?
Reads the specified file from the in-memory file system and returns it as a byte slice.
##### How to use it?
```go
// it works the same way that ioutil.ReadFile does.
// but it will read the file from the in-memory file system
// instead of the hard disk!
//
// the file name is passwords.txt
// topSecretFile is a byte slice ([]byte)
topSecretFile, err := myEmbeddedFiles.ReadFile("passwords.txt")
if err != nil {
log.Fatal(err)
}
log.Println(string(topSecretFile))
```
</details>
<details>
<summary>WriteFile</summary>
```go
func WriteFile(filename string, data []byte, perm os.FileMode) error
```
##### Type
[`ioutil.WriteFile`](https://godoc.org/io/ioutil#WriteFile)
##### What is it?
A helper function to write a file into the in-memory file system.
##### What does it do?
Writes the `data` into the specified `filename` in the in-memory file system, meaning you embedded a file!
-- IMPORTANT --
IT WON'T WRITE THE FILE INTO THE GENERATED .GO FILE; THE WRITE IS TEMPORARY. THE FILE IS AVAILABLE WHILE YOUR APP IS RUNNING,
AND ONCE IT SHUTS DOWN, IT IS GONE.
##### How to use it?
```go
// it works the same way that ioutil.WriteFile does.
// but it will write the file into the in-memory file system
// instead of the hard disk!
//
// the file name is secret.txt
// data should be a byte slice ([]byte)
// 0644 is a unix file permission
data := []byte("jet fuel can't melt steel beams")
err := myEmbeddedFiles.WriteFile("secret.txt", data, 0644)
if err != nil {
log.Fatal(err)
}
```
</details>
<details>
<summary>WalkDirs</summary>
```go
func WalkDirs(name string, includeDirsInList bool, files ...string) ([]string, error)
```
##### Type
`[]string`
##### What is it?
A helper function to walk dirs in the in-memory file system.
##### What does it do?
Returns a list of files (with an option to include dirs) that are currently in the in-memory file system.
##### How to use it?
```go
includeDirsInTheList := false
// WalkDirs returns a string slice with all file paths
files, err := myEmbeddedFiles.WalkDirs("", includeDirsInTheList)
if err != nil {
log.Fatal(err)
}
log.Println("List of all my files", files)
```
</details>

1
vendor/github.com/UnnoTed/fileb0x/bench.bat generated vendored Normal file
View File

@ -0,0 +1 @@
go test -bench=. -benchmem -v

11
vendor/github.com/UnnoTed/fileb0x/bench.txt generated vendored Normal file
View File

@ -0,0 +1,11 @@
./_example/echo/ufo.html (1.4kb)
BenchmarkOldConvert-4 50000 37127 ns/op 31200 B/op 11 allocs/op
BenchmarkNewConvert-4 300000 5847 ns/op 12288 B/op 2 allocs/op
gitkraken's binary (80mb)
BenchmarkOldConvert-4 1 1777277402 ns/op 1750946416 B/op 30 allocs/op
BenchmarkNewConvert-4 5 236663214 ns/op 643629056 B/op 2 allocs/op
https://www.youtube.com/watch?v=fT4lDU-QLUY (232mb)
BenchmarkOldConvert-4 1 5089024416 ns/op 4071281120 B/op 28 allocs/op
BenchmarkNewConvert-4 2 712384868 ns/op 1856667696 B/op 2 allocs/op

3
vendor/github.com/UnnoTed/fileb0x/circle.yml generated vendored Normal file
View File

@ -0,0 +1,3 @@
test:
override:
- go test ./... -v

64
vendor/github.com/UnnoTed/fileb0x/compression/gzip.go generated vendored Normal file
View File

@ -0,0 +1,64 @@
package compression
import (
"bytes"
"compress/flate"
"compress/gzip"
)
// Gzip compression support
type Gzip struct {
*Options
}
// NewGzip creates a Gzip + Options variable
func NewGzip() *Gzip {
gz := new(Gzip)
gz.Options = new(Options)
return gz
}
// Compress to gzip
func (gz *Gzip) Compress(content []byte) ([]byte, error) {
if !gz.Options.Compress {
return content, nil
}
// method
var m int
switch gz.Options.Method {
case "NoCompression":
m = flate.NoCompression
break
case "BestSpeed":
m = flate.BestSpeed
break
case "BestCompression":
m = flate.BestCompression
break
default:
m = flate.DefaultCompression
break
}
// compress
var b bytes.Buffer
w, err := gzip.NewWriterLevel(&b, m)
if err != nil {
return nil, err
}
// insert content
_, err = w.Write(content)
if err != nil {
return nil, err
}
err = w.Close()
if err != nil {
return nil, err
}
// compressed content
return b.Bytes(), nil
}

View File

@ -0,0 +1,22 @@
package compression
// Options for compression
type Options struct {
// activates the compression
// default: false
Compress bool
// valid values are:
// -> "NoCompression"
// -> "BestSpeed"
// -> "BestCompression"
// -> "DefaultCompression"
//
// default: "DefaultCompression" // when: Compress == true && Method == ""
Method string
// true = do it yourself (the file is written as gzip into the memory file system)
// false = decompress at run time (while writing file into memory file system)
// default: false
Keep bool
}

75
vendor/github.com/UnnoTed/fileb0x/config/config.go generated vendored Normal file
View File

@ -0,0 +1,75 @@
package config
import (
"strings"
"github.com/UnnoTed/fileb0x/compression"
"github.com/UnnoTed/fileb0x/custom"
"github.com/UnnoTed/fileb0x/updater"
)
// Config holds the json/yaml/toml data
type Config struct {
Dest string
NoPrefix bool
Pkg string
Fmt bool // gofmt
Compression *compression.Options
Tags string
Output string
Custom []custom.Custom
Spread bool
Unexported bool
Clean bool
Debug bool
Updater updater.Config
Lcf bool
}
// Defaults set the default value for some variables
func (cfg *Config) Defaults() error {
// default destination
if cfg.Dest == "" {
cfg.Dest = "/"
}
// insert "/" at end of dest when it's not found
if !strings.HasSuffix(cfg.Dest, "/") {
cfg.Dest += "/"
}
// default file name
if cfg.Output == "" {
cfg.Output = "b0x.go"
}
// inserts .go at the end of file name
if !strings.HasSuffix(cfg.Output, ".go") {
cfg.Output += ".go"
}
// inserts an A before the output file's name so it can
// run init() before b0xfile's
if !cfg.NoPrefix && !strings.HasPrefix(cfg.Output, "a") {
cfg.Output = "a" + cfg.Output
}
// default package
if cfg.Pkg == "" {
cfg.Pkg = "main"
}
if cfg.Compression == nil {
cfg.Compression = &compression.Options{
Compress: false,
Method: "DefaultCompression",
Keep: false,
}
}
return nil
}

115
vendor/github.com/UnnoTed/fileb0x/config/file.go generated vendored Normal file
View File

@ -0,0 +1,115 @@
package config
import (
"encoding/json"
"errors"
"io/ioutil"
"os"
"path"
"github.com/UnnoTed/fileb0x/utils"
"github.com/BurntSushi/toml"
"gopkg.in/yaml.v2"
"fmt"
)
// File holds config file info
type File struct {
FilePath string
Data []byte
Mode string // "json" || "yaml" || "yml" || "toml"
}
// FromArg gets the json/yaml/toml file from args
func (f *File) FromArg(read bool) error {
// (length - 1)
arg := os.Args[len(os.Args)-1:][0]
// get extension
ext := path.Ext(arg)
if len(ext) > 1 {
ext = ext[1:] // remove dot
}
// when json/yaml/toml file isn't found on last arg
// it searches for a ".json", ".yaml", ".yml" or ".toml" string in all args
if ext != "json" && ext != "yaml" && ext != "yml" && ext != "toml" {
// loop through args
for _, a := range os.Args {
// get extension
ext := path.Ext(a)
// check for valid extensions
if ext == ".json" || ext == ".yaml" || ext == ".yml" || ext == ".toml" {
f.Mode = ext[1:] // remove dot
ext = f.Mode
arg = a
break
}
}
} else {
f.Mode = ext
}
// check if extension is json, yaml or toml
// then get its absolute path
if ext == "json" || ext == "yaml" || ext == "yml" || ext == "toml" {
f.FilePath = arg
// so we can test without reading a file
if read {
if !utils.Exists(f.FilePath) {
return errors.New("Error: I Can't find the config file at [" + f.FilePath + "]")
}
}
} else {
return errors.New("Error: You must specify a json, yaml or toml file")
}
return nil
}
// Parse gets the config file's content from File.Data
func (f *File) Parse() (*Config, error) {
// remove comments
f.RemoveJSONComments()
to := &Config{}
switch f.Mode {
case "json":
return to, json.Unmarshal(f.Data, to)
case "yaml", "yml":
return to, yaml.Unmarshal(f.Data, to)
case "toml":
return to, toml.Unmarshal(f.Data, to)
default:
return nil, fmt.Errorf("unknown mode '%s'", f.Mode)
}
}
// Load the json/yaml file that was specified from args
// and transform it into a config struct
func (f *File) Load() (*Config, error) {
var err error
if !utils.Exists(f.FilePath) {
return nil, errors.New("Error: I Can't find the config file at [" + f.FilePath + "]")
}
// read file
f.Data, err = ioutil.ReadFile(f.FilePath)
if err != nil {
return nil, err
}
// parse file
return f.Parse()
}
// RemoveJSONComments from the file
func (f *File) RemoveJSONComments() {
if f.Mode == "json" {
// remove inline comments
f.Data = []byte(regexComments.ReplaceAllString(string(f.Data), ""))
}
}

11
vendor/github.com/UnnoTed/fileb0x/config/regexp.go generated vendored Normal file
View File

@ -0,0 +1,11 @@
package config
import "regexp"
var (
// used to remove comments from json
regexComments = regexp.MustCompile(`\/\/([\w\s\'].*)`)
// SafeVarName is used to remove special chars from paths
SafeVarName = regexp.MustCompile(`[^a-zA-Z0-9]`)
)

228
vendor/github.com/UnnoTed/fileb0x/custom/custom.go generated vendored Normal file
View File

@ -0,0 +1,228 @@
package custom
import (
"errors"
"fmt"
"io/ioutil"
"log"
"os"
"path"
"strings"
"github.com/UnnoTed/fileb0x/compression"
"github.com/UnnoTed/fileb0x/dir"
"github.com/UnnoTed/fileb0x/file"
"github.com/UnnoTed/fileb0x/updater"
"github.com/UnnoTed/fileb0x/utils"
"github.com/bmatcuk/doublestar"
"github.com/karrick/godirwalk"
)
const hextable = "0123456789abcdef"
// SharedConfig holds needed data from config package
// without causing import cycle
type SharedConfig struct {
Output string
Compression *compression.Gzip
Updater updater.Config
}
// Custom is a set of files with dedicated customization
type Custom struct {
Files []string
Base string
Prefix string
Tags string
Exclude []string
Replace []Replacer
}
var (
xx = []byte(`\x`)
start = []byte(`[]byte("`)
)
const lowerhex = "0123456789abcdef"
// Parse the files transforming them into a byte string and inserting the file
// into a map of files
func (c *Custom) Parse(files *map[string]*file.File, dirs **dir.Dir, config *SharedConfig) error {
to := *files
dirList := *dirs
var newList []string
for _, customFile := range c.Files {
// get files from glob
list, err := doublestar.Glob(customFile)
if err != nil {
return err
}
// insert files from glob into the new list
newList = append(newList, list...)
}
// copy new list
c.Files = newList
// 0 files in the list
if len(c.Files) == 0 {
return errors.New("No files found")
}
// loop through files from glob
for _, customFile := range c.Files {
// gives error when file doesn't exist
if !utils.Exists(customFile) {
return fmt.Errorf("File [%s] doesn't exist", customFile)
}
cb := func(fpath string, d *godirwalk.Dirent) error {
if config.Updater.Empty && !config.Updater.IsUpdating {
log.Println("empty mode")
return nil
}
// only files will be processed
if d != nil && d.IsDir() {
return nil
}
originalPath := fpath
fpath = utils.FixPath(fpath)
var fixedPath string
if c.Prefix != "" || c.Base != "" {
c.Base = strings.TrimPrefix(c.Base, "./")
if strings.HasPrefix(fpath, c.Base) {
fixedPath = c.Prefix + fpath[len(c.Base):]
} else {
if c.Base != "" {
fixedPath = c.Prefix + fpath
}
}
fixedPath = utils.FixPath(fixedPath)
} else {
fixedPath = utils.FixPath(fpath)
}
// check for excluded files
for _, excludedFile := range c.Exclude {
m, err := doublestar.Match(c.Prefix+excludedFile, fixedPath)
if err != nil {
return err
}
if m {
return nil
}
}
info, err := os.Stat(fpath)
if err != nil {
return err
}
if info.Name() == config.Output {
return nil
}
// get file's content
content, err := ioutil.ReadFile(fpath)
if err != nil {
return err
}
replaced := false
// loop through replace list
for _, r := range c.Replace {
// check if path matches the pattern from property: file
matched, err := doublestar.Match(c.Prefix+r.File, fixedPath)
if err != nil {
return err
}
if matched {
for pattern, word := range r.Replace {
content = []byte(strings.Replace(string(content), pattern, word, -1))
replaced = true
}
}
}
// compress the content
if config.Compression.Options != nil {
content, err = config.Compression.Compress(content)
if err != nil {
return err
}
}
dst := make([]byte, len(content)*4)
for i := 0; i < len(content); i++ {
dst[i*4] = byte('\\')
dst[i*4+1] = byte('x')
dst[i*4+2] = hextable[content[i]>>4]
dst[i*4+3] = hextable[content[i]&0x0f]
}
f := file.NewFile()
f.OriginalPath = originalPath
f.ReplacedText = replaced
f.Data = `[]byte("` + string(dst) + `")`
f.Name = info.Name()
f.Path = fixedPath
f.Tags = c.Tags
f.Base = c.Base
f.Prefix = c.Prefix
f.Modified = info.ModTime().String()
if _, ok := to[fixedPath]; ok {
f.Tags = to[fixedPath].Tags
}
// insert dir to dirlist so it can be created on b0x's init()
dirList.Insert(path.Dir(fixedPath))
// insert file into file list
to[fixedPath] = f
return nil
}
customFile = utils.FixPath(customFile)
// unlike filepath.walk, godirwalk will only walk dirs
f, err := os.Open(customFile)
if err != nil {
return err
}
defer f.Close()
fs, err := f.Stat()
if err != nil {
return err
}
if fs.IsDir() {
if err := godirwalk.Walk(customFile, &godirwalk.Options{
Unsorted: true,
Callback: cb,
}); err != nil {
return err
}
} else {
if err := cb(customFile, nil); err != nil {
return err
}
}
}
return nil
}

7
vendor/github.com/UnnoTed/fileb0x/custom/replacer.go generated vendored Normal file
View File

@ -0,0 +1,7 @@
package custom
// Replacer strings in a file
type Replacer struct {
File string
Replace map[string]string
}

70
vendor/github.com/UnnoTed/fileb0x/dir/dir.go generated vendored Normal file
View File

@ -0,0 +1,70 @@
package dir
import "strings"
// Dir holds directory information to insert into templates
type Dir struct {
List [][]string
Blacklist []string
}
// Exists checks if a directory exists or not
func (d *Dir) Exists(newDir string) bool {
for _, dir := range d.Blacklist {
if dir == newDir {
return true
}
}
return false
}
// Parse a directory to build a list of directories to be made at b0x.go
func (d *Dir) Parse(newDir string) []string {
list := strings.Split(newDir, "/")
var dirWalk []string
for indx := range list {
dirList := ""
for i := -1; i < indx; i++ {
dirList += list[i+1] + "/"
}
if !d.Exists(dirList) {
if strings.HasSuffix(dirList, "//") {
dirList = dirList[:len(dirList)-1]
}
dirWalk = append(dirWalk, dirList)
d.Blacklist = append(d.Blacklist, dirList)
}
}
return dirWalk
}
// Insert a new folder to the list
func (d *Dir) Insert(newDir string) {
if !d.Exists(newDir) {
d.Blacklist = append(d.Blacklist, newDir)
d.List = append(d.List, d.Parse(newDir))
}
}
// Clean dupes
func (d *Dir) Clean() []string {
var cleanList []string
for _, dirs := range d.List {
for _, dir := range dirs {
if dir == "./" || dir == "/" || dir == "." || dir == "" {
continue
}
cleanList = append(cleanList, dir)
}
}
return cleanList
}

21
vendor/github.com/UnnoTed/fileb0x/file/file.go generated vendored Normal file
View File

@ -0,0 +1,21 @@
package file
// File holds file's data
type File struct {
OriginalPath string
Name string
Path string
Data string
Bytes []byte
ReplacedText bool
Tags string
Base string
Prefix string
Modified string
}
// NewFile creates a new File
func NewFile() *File {
f := new(File)
return f
}

18
vendor/github.com/UnnoTed/fileb0x/file/methods.go generated vendored Normal file
View File

@ -0,0 +1,18 @@
// +build !windows
package file
// GetRemap returns a map's params with
// info required to load files directly
// from the hard drive when using prefix
// and base while debug mode is activated
func (f *File) GetRemap() string {
if f.Base == "" && f.Prefix == "" {
return ""
}
return `"` + f.Path + `": {
"prefix": "` + f.Prefix + `",
"base": "` + f.Base + `",
},`
}

View File

@ -0,0 +1,18 @@
package file
import "strings"
// GetRemap returns a map's params with
// info required to load files directly
// from the hard drive when using prefix
// and base while debug mode is activated
func (f *File) GetRemap() string {
if f.Base == "" && f.Prefix == "" {
return ""
}
return `"` + strings.Replace(f.Path, `\`, `\\`, -1) + `": {
"prefix": "` + f.Prefix + `",
"base": "` + f.Base + `",
},`
}

26
vendor/github.com/UnnoTed/fileb0x/go.mod generated vendored Normal file
View File

@ -0,0 +1,26 @@
module github.com/UnnoTed/fileb0x
require (
github.com/BurntSushi/toml v0.3.1
github.com/airking05/termui v2.2.0+incompatible
github.com/bmatcuk/doublestar v1.1.1
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dgrijalva/jwt-go v3.2.0+incompatible // indirect
github.com/karrick/godirwalk v1.7.8
github.com/labstack/echo v3.2.1+incompatible
github.com/labstack/gommon v0.2.7 // indirect
github.com/maruel/panicparse v1.1.1 // indirect
github.com/mattn/go-colorable v0.0.9 // indirect
github.com/mattn/go-isatty v0.0.4 // indirect
github.com/mattn/go-runewidth v0.0.3 // indirect
github.com/mitchellh/go-wordwrap v1.0.0 // indirect
github.com/nsf/termbox-go v0.0.0-20180819125858-b66b20ab708e // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/stretchr/testify v1.2.2
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/fasttemplate v0.0.0-20170224212429-dcecefd839c4 // indirect
golang.org/x/crypto v0.0.0-20180910181607-0e37d006457b // indirect
golang.org/x/net v0.0.0-20180921000356-2f5d2388922f
golang.org/x/sys v0.0.0-20181019160139-8e24a49d80f8 // indirect
gopkg.in/yaml.v2 v2.2.1
)

50
vendor/github.com/UnnoTed/fileb0x/go.sum generated vendored Normal file
View File

@ -0,0 +1,50 @@
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/airking05/termui v2.2.0+incompatible h1:S3j2WJzr70u8KjUktaQ0Cmja+R0edOXChltFoQSGG8I=
github.com/airking05/termui v2.2.0+incompatible/go.mod h1:B/M5sgOwSZlvGm3TsR98s1BSzlSH4wPQzUUNwZG+uUM=
github.com/bmatcuk/doublestar v1.1.1 h1:YroD6BJCZBYx06yYFEWvUuKVWQn3vLLQAVmDmvTSaiQ=
github.com/bmatcuk/doublestar v1.1.1/go.mod h1:UD6OnuiIn0yFxxA2le/rnRU1G4RaI4UvFv1sNto9p6w=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/karrick/godirwalk v1.7.3 h1:UP4CfXf1LfNwXrX6vqWf1DOhuiFRn2hXsqtRAQlQOUQ=
github.com/karrick/godirwalk v1.7.3/go.mod h1:2c9FRhkDxdIbgkOnCEvnSWs71Bhugbl46shStcFDJ34=
github.com/karrick/godirwalk v1.7.8 h1:VfG72pyIxgtC7+3X9CMHI0AOl4LwyRAg98WAgsvffi8=
github.com/karrick/godirwalk v1.7.8/go.mod h1:2c9FRhkDxdIbgkOnCEvnSWs71Bhugbl46shStcFDJ34=
github.com/labstack/echo v3.2.1+incompatible h1:J2M7YArHx4gi8p/3fDw8tX19SXhBCoRpviyAZSN3I88=
github.com/labstack/echo v3.2.1+incompatible/go.mod h1:0INS7j/VjnFxD4E2wkz67b8cVwCLbBmJyDaka6Cmk1s=
github.com/labstack/gommon v0.2.7 h1:2qOPq/twXDrQ6ooBGrn3mrmVOC+biLlatwgIu8lbzRM=
github.com/labstack/gommon v0.2.7/go.mod h1:/tj9csK2iPSBvn+3NLM9e52usepMtrd5ilFYA+wQNJ4=
github.com/maruel/panicparse v1.1.1 h1:k62YPcEoLncEEpjMt92GtG5ugb8WL/510Ys3/h5IkRc=
github.com/maruel/panicparse v1.1.1/go.mod h1:nty42YY5QByNC5MM7q/nj938VbgPU7avs45z6NClpxI=
github.com/mattn/go-colorable v0.0.9 h1:UVL0vNpWh04HeJXV0KLcaT7r06gOH2l4OW6ddYRUIY4=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.4 h1:bnP0vzxcAdeI1zdubAl5PjU6zsERjGZb7raWodagDYs=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-runewidth v0.0.3 h1:a+kO+98RDGEfo6asOGMmpodZq4FNtnGP54yps8BzLR4=
github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mitchellh/go-wordwrap v1.0.0 h1:6GlHJ/LTGMrIJbwgdqdl2eEH8o+Exx/0m8ir9Gns0u4=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/nsf/termbox-go v0.0.0-20180819125858-b66b20ab708e h1:fvw0uluMptljaRKSU8459cJ4bmi3qUYyMs5kzpic2fY=
github.com/nsf/termbox-go v0.0.0-20180819125858-b66b20ab708e/go.mod h1:IuKpRQcYE1Tfu+oAQqaLisqDeXgjyyltCfsaoYN18NQ=
github.com/pkg/errors v0.8.0 h1:WdK/asTD0HN+q6hsWO3/vpuAkAr+tw6aNJNDFFf0+qw=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v0.0.0-20170224212429-dcecefd839c4 h1:gKMu1Bf6QINDnvyZuTaACm9ofY+PRh+5vFz4oxBZeF8=
github.com/valyala/fasttemplate v0.0.0-20170224212429-dcecefd839c4/go.mod h1:50wTf68f99/Zt14pr046Tgt3Lp2vLyFZKzbFXTOabXw=
golang.org/x/crypto v0.0.0-20180910181607-0e37d006457b h1:2b9XGzhjiYsYPnKXoEfL7klWZQIt8IfyRCz62gCqqlQ=
golang.org/x/crypto v0.0.0-20180910181607-0e37d006457b/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/net v0.0.0-20180921000356-2f5d2388922f h1:QM2QVxvDoW9PFSPp/zy9FgxJLfaWTZlS61KEPtBwacM=
golang.org/x/net v0.0.0-20180921000356-2f5d2388922f/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/sys v0.0.0-20181019160139-8e24a49d80f8 h1:R91KX5nmbbvEd7w370cbVzKC+EzCTGqZq63Zad5IcLM=
golang.org/x/sys v0.0.0-20181019160139-8e24a49d80f8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=

11
vendor/github.com/UnnoTed/fileb0x/lint.bat generated vendored Normal file
View File

@ -0,0 +1,11 @@
golint ./...
@echo off
cd .\_example\simple\
@echo on
call golint ./...
@echo off
cd ..\..\
@echo on

383
vendor/github.com/UnnoTed/fileb0x/main.go generated vendored Normal file
View File

@ -0,0 +1,383 @@
package main
import (
"bufio"
"bytes"
"crypto/md5"
"flag"
"fmt"
"go/format"
"io/ioutil"
"log"
"os"
"path"
"path/filepath"
"runtime"
"sort"
"strconv"
"strings"
"time"
"github.com/UnnoTed/fileb0x/compression"
"github.com/UnnoTed/fileb0x/config"
"github.com/UnnoTed/fileb0x/custom"
"github.com/UnnoTed/fileb0x/dir"
"github.com/UnnoTed/fileb0x/file"
"github.com/UnnoTed/fileb0x/template"
"github.com/UnnoTed/fileb0x/updater"
"github.com/UnnoTed/fileb0x/utils"
// just to install automatically
_ "github.com/labstack/echo"
_ "golang.org/x/net/webdav"
)
var (
err error
cfg *config.Config
files = make(map[string]*file.File)
dirs = new(dir.Dir)
cfgPath string
fUpdate string
startTime = time.Now()
hashStart = []byte("// modification hash(")
hashEnd = []byte(")")
modTimeStart = []byte("// modified(")
modTimeEnd = []byte(")")
)
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
// check for updates
flag.StringVar(&fUpdate, "update", "", "-update=http(s)://host:port - default port: 8041")
flag.Parse()
var (
update = fUpdate != ""
up *updater.Updater
)
// create config and try to get b0x file from args
f := new(config.File)
err = f.FromArg(true)
if err != nil {
panic(err)
}
// load b0x file's config
cfg, err = f.Load()
if err != nil {
panic(err)
}
err = cfg.Defaults()
if err != nil {
panic(err)
}
cfgPath = f.FilePath
if err := cfg.Updater.CheckInfo(); err != nil {
panic(err)
}
cfg.Updater.IsUpdating = update
// creates a config that can be inserted into custom
// without causing an import cycle
sharedConfig := new(custom.SharedConfig)
sharedConfig.Output = cfg.Output
sharedConfig.Updater = cfg.Updater
sharedConfig.Compression = compression.NewGzip()
sharedConfig.Compression.Options = cfg.Compression
// loop through b0x's [custom] objects
for _, c := range cfg.Custom {
err = c.Parse(&files, &dirs, sharedConfig)
if err != nil {
panic(err)
}
}
// builds remap's list
var (
remap string
modHash string
mods []string
lastHash string
)
for _, f := range files {
remap += f.GetRemap()
mods = append(mods, f.Modified)
}
// sorts modification time list and create a md5 of it
sort.Strings(mods)
modHash = stringMD5Hex(strings.Join(mods, "")) + "." + stringMD5Hex(string(f.Data))
exists := fileExists(cfg.Dest + cfg.Output)
if exists {
// gets the modification hash from the main b0x file
lastHash, err = getModification(cfg.Dest+cfg.Output, hashStart, hashEnd)
if err != nil {
panic(err)
}
}
if !exists || lastHash != modHash {
// create files template and exec it
t := new(template.Template)
t.Set("files")
t.Variables = struct {
ConfigFile string
Now string
Pkg string
Files map[string]*file.File
Tags string
Spread bool
Remap string
DirList []string
Compression *compression.Options
Debug bool
Updater updater.Config
ModificationHash string
}{
ConfigFile: filepath.Base(cfgPath),
Now: time.Now().String(),
Pkg: cfg.Pkg,
Files: files,
Tags: cfg.Tags,
Remap: remap,
Spread: cfg.Spread,
DirList: dirs.Clean(),
Compression: cfg.Compression,
Debug: cfg.Debug,
Updater: cfg.Updater,
ModificationHash: modHash,
}
tmpl, err := t.Exec()
if err != nil {
panic(err)
}
if err := os.MkdirAll(cfg.Dest, 0770); err != nil {
panic(err)
}
// gofmt
if cfg.Fmt {
tmpl, err = format.Source(tmpl)
if err != nil {
panic(err)
}
}
// write final executed template into the destination file
err = ioutil.WriteFile(cfg.Dest+cfg.Output, tmpl, 0640)
if err != nil {
panic(err)
}
}
// write spread files
var (
finalList []string
changedList []string
)
if cfg.Spread {
a := strings.Split(path.Dir(cfg.Dest), "/")
dirName := a[len(a)-1:][0]
for _, f := range files {
a := strings.Split(path.Dir(f.Path), "/")
fileDirName := a[len(a)-1:][0]
if dirName == fileDirName {
continue
}
// transform / to _ and some other chars...
customName := "b0xfile_" + utils.FixName(f.Path) + ".go"
finalList = append(finalList, customName)
exists := fileExists(cfg.Dest + customName)
var mth string
if exists {
mth, err = getModification(cfg.Dest+customName, modTimeStart, modTimeEnd)
if err != nil {
panic(err)
}
}
changed := mth != f.Modified
if changed {
changedList = append(changedList, f.OriginalPath)
}
if !exists || changed {
// creates file template and exec it
t := new(template.Template)
t.Set("file")
t.Variables = struct {
ConfigFile string
Now string
Pkg string
Path string
Name string
Dir [][]string
Tags string
Data string
Compression *compression.Options
Modified string
OriginalPath string
}{
ConfigFile: filepath.Base(cfgPath),
Now: time.Now().String(),
Pkg: cfg.Pkg,
Path: f.Path,
Name: f.Name,
Dir: dirs.List,
Tags: f.Tags,
Data: f.Data,
Compression: cfg.Compression,
Modified: f.Modified,
OriginalPath: f.OriginalPath,
}
tmpl, err := t.Exec()
if err != nil {
panic(err)
}
// gofmt
if cfg.Fmt {
tmpl, err = format.Source(tmpl)
if err != nil {
panic(err)
}
}
// write final executed template into the destination file
if err := ioutil.WriteFile(cfg.Dest+customName, tmpl, 0640); err != nil {
panic(err)
}
}
}
}
// remove b0xfiles when [clean] is true
// it doesn't clean destination's folders
if cfg.Clean {
matches, err := filepath.Glob(cfg.Dest + "b0xfile_*.go")
if err != nil {
panic(err)
}
// remove matched file if they aren't in the finalList
// which contains the list of all files written by the
// spread option
for _, f := range matches {
var found bool
for _, name := range finalList {
if strings.HasSuffix(f, name) {
found = true
}
}
if !found {
err = os.Remove(f)
if err != nil {
panic(err)
}
}
}
}
// main b0x
if lastHash != modHash {
log.Printf("fileb0x: took [%dms] to write [%s] from config file [%s] at [%s]",
time.Since(startTime).Nanoseconds()/1e6, cfg.Dest+cfg.Output,
filepath.Base(cfgPath), time.Now().String())
} else {
log.Printf("fileb0x: no changes detected")
}
// log changed files
if cfg.Lcf && len(changedList) > 0 {
log.Printf("fileb0x: list of changed files [%s]", strings.Join(changedList, " | "))
}
if update {
if !cfg.Updater.Enabled {
panic("fileb0x: The updater is disabled, enable it in your config file!")
}
// includes port when not present
if !strings.HasSuffix(fUpdate, ":"+strconv.Itoa(cfg.Updater.Port)) {
fUpdate += ":" + strconv.Itoa(cfg.Updater.Port)
}
up = &updater.Updater{
Server: fUpdate,
Auth: updater.Auth{
Username: cfg.Updater.Username,
Password: cfg.Updater.Password,
},
Workers: cfg.Updater.Workers,
}
// get file hashes from server
if err := up.Init(); err != nil {
panic(err)
}
// check if an update is available, then updates...
if err := up.UpdateFiles(files); err != nil {
panic(err)
}
}
}
func getModification(path string, start []byte, end []byte) (string, error) {
file, err := os.Open(path)
if err != nil {
return "", err
}
defer file.Close()
reader := bufio.NewReader(file)
var data []byte
for {
line, _, err := reader.ReadLine()
if err != nil {
return "", err
}
if !bytes.HasPrefix(line, start) || !bytes.HasSuffix(line, end) {
continue
}
data = line
break
}
hash := bytes.TrimPrefix(data, start)
hash = bytes.TrimSuffix(hash, end)
return string(hash), nil
}
func fileExists(filename string) bool {
_, err := os.Stat(filename)
return err == nil
}
func stringMD5Hex(data string) string {
hash := md5.New()
hash.Write([]byte(data))
return fmt.Sprintf("%x", hash.Sum(nil))
}

7
vendor/github.com/UnnoTed/fileb0x/run generated vendored Normal file
View File

@ -0,0 +1,7 @@
#!/bin/bash
go install
cd ./_example/simple/
./run
cd ../../

11
vendor/github.com/UnnoTed/fileb0x/run.bat generated vendored Normal file
View File

@ -0,0 +1,11 @@
go install
@echo off
cd .\_example\simple\
@echo on
call b0x.bat
@echo off
cd ..\..\
@echo on

64
vendor/github.com/UnnoTed/fileb0x/template/file.go generated vendored Normal file
View File

@ -0,0 +1,64 @@
package template
var fileTemplate = `{{buildTags .Tags}}// Code generated by fileb0x at "{{.Now}}" from config file "{{.ConfigFile}}" DO NOT EDIT.
// modified({{.Modified}})
// original path: {{.OriginalPath}}
package {{.Pkg}}
import (
{{if .Compression.Compress}}
{{if not .Compression.Keep}}
"bytes"
"compress/gzip"
"io"
{{end}}
{{end}}
"os"
)
// {{exportedTitle "File"}}{{buildSafeVarName .Path}} is "{{.Path}}"
var {{exportedTitle "File"}}{{buildSafeVarName .Path}} = {{.Data}}
func init() {
{{if .Compression.Compress}}
{{if not .Compression.Keep}}
rb := bytes.NewReader({{exportedTitle "File"}}{{buildSafeVarName .Path}})
r, err := gzip.NewReader(rb)
if err != nil {
panic(err)
}
err = r.Close()
if err != nil {
panic(err)
}
{{end}}
{{end}}
f, err := {{exported "FS"}}.OpenFile({{exported "CTX"}}, "{{.Path}}", os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0777)
if err != nil {
panic(err)
}
{{if .Compression.Compress}}
{{if not .Compression.Keep}}
_, err = io.Copy(f, r)
if err != nil {
panic(err)
}
{{end}}
{{else}}
_, err = f.Write({{exportedTitle "File"}}{{buildSafeVarName .Path}})
if err != nil {
panic(err)
}
{{end}}
err = f.Close()
if err != nil {
panic(err)
}
}
`

411
vendor/github.com/UnnoTed/fileb0x/template/files.go generated vendored Normal file
View File

@ -0,0 +1,411 @@
package template
var filesTemplate = `{{buildTags .Tags}}// Code generated by fileb0x at "{{.Now}}" from config file "{{.ConfigFile}}" DO NOT EDIT.
// modification hash({{.ModificationHash}})
package {{.Pkg}}
{{$Compression := .Compression}}
import (
"bytes"
{{if not .Spread}}{{if and $Compression.Compress (not .Debug)}}{{if not $Compression.Keep}}"compress/gzip"{{end}}{{end}}{{end}}
"context"
"io"
"net/http"
"os"
"path"
{{if or .Updater.Enabled .Debug}}
"strings"
{{end}}
"golang.org/x/net/webdav"
{{if .Updater.Enabled}}
"crypto/sha256"
"encoding/hex"
"log"
"path/filepath"
"github.com/labstack/echo"
"github.com/labstack/echo/middleware"
{{end}}
)
var (
// CTX is a context for webdav vfs
{{exported "CTX"}} = context.Background()
{{if .Debug}}
{{exported "FS"}} = webdav.Dir(".")
{{else}}
// FS is a virtual memory file system
{{exported "FS"}} = webdav.NewMemFS()
{{end}}
// Handler is used to serve files through an http handler
{{exportedTitle "Handler"}} *webdav.Handler
// HTTP is the http file system
{{exportedTitle "HTTP"}} http.FileSystem = new({{exported "HTTPFS"}})
)
// HTTPFS implements http.FileSystem
type {{exported "HTTPFS"}} struct {
// Prefix allows limiting the path of all requests, e.g. a prefix "css" would allow only calls to /css/*
Prefix string
}
{{if (and (not .Spread) (not .Debug))}}
{{range .Files}}
// {{exportedTitle "File"}}{{buildSafeVarName .Path}} is "{{.Path}}"
var {{exportedTitle "File"}}{{buildSafeVarName .Path}} = {{.Data}}
{{end}}
{{end}}
func init() {
err := {{exported "CTX"}}.Err()
if err != nil {
panic(err)
}
{{ $length := len .DirList }}
{{ $fLength := len .Files }}
{{ $noDirsButFiles := (and (not .Spread) (eq $length 0) (gt $fLength 0)) }}
{{if not .Debug}}
{{range $index, $dir := .DirList}}
{{if and (ne $dir "./") (ne $dir "/") (ne $dir ".") (ne $dir "")}}
err = {{exported "FS"}}.Mkdir({{exported "CTX"}}, "{{$dir}}", 0777)
if err != nil && err != os.ErrExist {
panic(err)
}
{{end}}
{{end}}
{{end}}
{{if (and (not .Spread) (not .Debug))}}
{{if not .Updater.Empty}}
var f webdav.File
{{end}}
{{if $Compression.Compress}}
{{if not $Compression.Keep}}
var rb *bytes.Reader
var r *gzip.Reader
{{end}}
{{end}}
{{range .Files}}
{{if $Compression.Compress}}
{{if not $Compression.Keep}}
rb = bytes.NewReader({{exportedTitle "File"}}{{buildSafeVarName .Path}})
r, err = gzip.NewReader(rb)
if err != nil {
panic(err)
}
err = r.Close()
if err != nil {
panic(err)
}
{{end}}
{{end}}
f, err = {{exported "FS"}}.OpenFile({{exported "CTX"}}, "{{.Path}}", os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0777)
if err != nil {
panic(err)
}
{{if $Compression.Compress}}
{{if not $Compression.Keep}}
_, err = io.Copy(f, r)
if err != nil {
panic(err)
}
{{end}}
{{else}}
_, err = f.Write({{exportedTitle "File"}}{{buildSafeVarName .Path}})
if err != nil {
panic(err)
}
{{end}}
err = f.Close()
if err != nil {
panic(err)
}
{{end}}
{{end}}
{{exportedTitle "Handler"}} = &webdav.Handler{
FileSystem: FS,
LockSystem: webdav.NewMemLS(),
}
{{if .Updater.Enabled}}
go func() {
svr := &{{exportedTitle "Server"}}{}
svr.Init()
}()
{{end}}
}
{{if .Debug}}
var remap = map[string]map[string]string{
{{.Remap}}
}
{{end}}
// Open a file
func (hfs *{{exported "HTTPFS"}}) Open(path string) (http.File, error) {
path = hfs.Prefix + path
{{if .Debug}}
path = strings.TrimPrefix(path, "/")
for current, f := range remap {
if path == current {
path = f["base"] + strings.TrimPrefix(path, f["prefix"])
break
}
}
{{end}}
f, err := {{if .Debug}}os{{else}}{{exported "FS"}}{{end}}.OpenFile({{if not .Debug}}{{exported "CTX"}}, {{end}}path, os.O_RDONLY, 0644)
if err != nil {
return nil, err
}
return f, nil
}
// ReadFile is adapted from ioutil
func {{exportedTitle "ReadFile"}}(path string) ([]byte, error) {
f, err := {{if .Debug}}os{{else}}{{exported "FS"}}{{end}}.OpenFile({{if not .Debug}}{{exported "CTX"}}, {{end}}path, os.O_RDONLY, 0644)
if err != nil {
return nil, err
}
buf := bytes.NewBuffer(make([]byte, 0, bytes.MinRead))
// If the buffer overflows, we will get bytes.ErrTooLarge.
// Return that as an error. Any other panic remains.
defer func() {
e := recover()
if e == nil {
return
}
if panicErr, ok := e.(error); ok && panicErr == bytes.ErrTooLarge {
err = panicErr
} else {
panic(e)
}
}()
_, err = buf.ReadFrom(f)
return buf.Bytes(), err
}
// WriteFile is adapted from ioutil
func {{exportedTitle "WriteFile"}}(filename string, data []byte, perm os.FileMode) error {
f, err := {{exported "FS"}}.OpenFile({{exported "CTX"}}, filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
if err != nil {
return err
}
n, err := f.Write(data)
if err == nil && n < len(data) {
err = io.ErrShortWrite
}
if err1 := f.Close(); err == nil {
err = err1
}
return err
}
// WalkDirs looks for files in the given dir and returns a list of files in it
// usage for all files in the b0x: WalkDirs("", false)
func {{exportedTitle "WalkDirs"}}(name string, includeDirsInList bool, files ...string) ([]string, error) {
f, err := {{exported "FS"}}.OpenFile({{exported "CTX"}}, name, os.O_RDONLY, 0)
if err != nil {
return nil, err
}
fileInfos, err := f.Readdir(0)
if err != nil {
return nil, err
}
err = f.Close()
if err != nil {
return nil, err
}
for _, info := range fileInfos {
filename := path.Join(name, info.Name())
if includeDirsInList || !info.IsDir() {
files = append(files, filename)
}
if info.IsDir() {
files, err = {{exportedTitle "WalkDirs"}}(filename, includeDirsInList, files...)
if err != nil {
return nil, err
}
}
}
return files, nil
}
{{if .Updater.Enabled}}
// Auth holds information for a http basic auth
type {{exportedTitle "Auth"}} struct {
Username string
Password string
}
// ResponseInit holds a list of hashes from the server
// to be sent to the client so it can check if there
// is a new file or a changed file
type {{exportedTitle "ResponseInit"}} struct {
Success bool
Hashes map[string]string
}
// Server holds information about the http server
// used to update files remotely
type {{exportedTitle "Server"}} struct {
Auth {{exportedTitle "Auth"}}
Files []string
}
// Init sets the routes and basic http auth
// before starting the http server
func (s *{{exportedTitle "Server"}}) Init() {
s.Auth = {{exportedTitle "Auth"}}{
Username: "{{.Updater.Username}}",
Password: "{{.Updater.Password}}",
}
e := echo.New()
e.Use(middleware.Recover())
e.Use(s.BasicAuth())
e.POST("/", s.Post)
e.GET("/", s.Get)
log.Println("fileb0x updater server is running at port 0.0.0.0:{{.Updater.Port}}")
if err := e.Start(":{{.Updater.Port}}"); err != nil {
panic(err)
}
}
// Get gives a list of file names and hashes
func (s *{{exportedTitle "Server"}}) Get(c echo.Context) error {
log.Println("[fileb0x.Server]: Hashing server files...")
// file:hash
hashes := map[string]string{}
// get all files in the virtual memory file system
var err error
s.Files, err = {{exportedTitle "WalkDirs"}}("", false)
if err != nil {
return err
}
// get a hash for each file
for _, filePath := range s.Files {
f, err := FS.OpenFile(CTX, filePath, os.O_RDONLY, 0644)
if err != nil {
return err
}
hash := sha256.New()
_, err = io.Copy(hash, f)
if err != nil {
return err
}
hashes[filePath] = hex.EncodeToString(hash.Sum(nil))
}
log.Println("[fileb0x.Server]: Done hashing files")
return c.JSON(http.StatusOK, &ResponseInit{
Success: true,
Hashes: hashes,
})
}
// Post is used to upload a file and replace
// it in the virtual memory file system
func (s *{{exportedTitle "Server"}}) Post(c echo.Context) error {
file, err := c.FormFile("file")
if err != nil {
return err
}
log.Println("[fileb0x.Server]:", file.Filename, "Found request to upload a file")
src, err := file.Open()
if err != nil {
return err
}
defer src.Close()
newDir := filepath.Dir(file.Filename)
_, err = {{exported "FS"}}.Stat({{exported "CTX"}}, newDir)
if err != nil && strings.HasSuffix(err.Error(), os.ErrNotExist.Error()) {
log.Println("[fileb0x.Server]: Creating dir tree", newDir)
list := strings.Split(newDir, "/")
var tree string
for _, dir := range list {
if dir == "" || dir == "." || dir == "/" || dir == "./" {
continue
}
tree += dir + "/"
err = {{exported "FS"}}.Mkdir({{exported "CTX"}}, tree, 0777)
if err != nil && err != os.ErrExist {
log.Println("failed", err)
return err
}
}
}
log.Println("[fileb0x.Server]:", file.Filename, "Opening file...")
f, err := {{exported "FS"}}.OpenFile({{exported "CTX"}}, file.Filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0777)
if err != nil && !strings.HasSuffix(err.Error(), os.ErrNotExist.Error()) {
return err
}
log.Println("[fileb0x.Server]:", file.Filename, "Writing file into Virutal Memory FileSystem...")
if _, err = io.Copy(f, src); err != nil {
return err
}
if err = f.Close(); err != nil {
return err
}
log.Println("[fileb0x.Server]:", file.Filename, "Done writing file")
return c.String(http.StatusOK, "ok")
}
// BasicAuth is a middleware to check if
// the username and password are valid
// echo's middleware isn't used because of golint issues
func (s *{{exportedTitle "Server"}}) BasicAuth() echo.MiddlewareFunc {
return func(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
u, p, _ := c.Request().BasicAuth()
if u != s.Auth.Username || p != s.Auth.Password {
return echo.ErrUnauthorized
}
return next(c)
}
}
}
{{end}}
`

130
vendor/github.com/UnnoTed/fileb0x/template/funcs.go generated vendored Normal file
View File

@ -0,0 +1,130 @@
package template
import (
"regexp"
"strconv"
"strings"
"text/template"
"github.com/UnnoTed/fileb0x/config"
)
var safeNameBlacklist = map[string]string{}
var blacklist = map[string]int{}
// taken from golint @ https://github.com/golang/lint/blob/master/lint.go#L702
var commonInitialisms = map[string]bool{
"API": true,
"ASCII": true,
"CPU": true,
"CSS": true,
"DNS": true,
"EOF": true,
"GUID": true,
"HTML": true,
"HTTP": true,
"HTTPS": true,
"ID": true,
"IP": true,
"JSON": true,
"LHS": true,
"QPS": true,
"RAM": true,
"RHS": true,
"RPC": true,
"SLA": true,
"SMTP": true,
"SQL": true,
"SSH": true,
"TCP": true,
"TLS": true,
"TTL": true,
"UDP": true,
"UI": true,
"UID": true,
"UUID": true,
"URI": true,
"URL": true,
"UTF8": true,
"VM": true,
"XML": true,
"XSRF": true,
"XSS": true,
}
var r = regexp.MustCompile(`[^a-zA-Z0-9]`)
var funcsTemplate = template.FuncMap{
"exported": exported,
"buildTags": buildTags,
"exportedTitle": exportedTitle,
"buildSafeVarName": buildSafeVarName,
}
var unexported bool
// SetUnexported variables, functions and types
func SetUnexported(e bool) {
unexported = e
}
func exported(field string) string {
if !unexported {
return strings.ToUpper(field)
}
return strings.ToLower(field)
}
func exportedTitle(field string) string {
if !unexported {
return strings.Title(field)
}
return strings.ToLower(field[0:1]) + field[1:]
}
func buildSafeVarName(path string) string {
name, exists := safeNameBlacklist[path]
if exists {
return name
}
n := config.SafeVarName.ReplaceAllString(path, "$")
words := strings.Split(n, "$")
name = ""
// check for uppercase words
for _, word := range words {
upper := strings.ToUpper(word)
if commonInitialisms[upper] {
name += upper
} else {
name += strings.Title(word)
}
}
// avoid redeclaring variables
//
// _file.txt
// file.txt
_, blacklisted := blacklist[name]
if blacklisted {
blacklist[name]++
name += strconv.Itoa(blacklist[name])
}
safeNameBlacklist[path] = name
blacklist[name]++
return name
}
func buildTags(tags string) string {
if tags != "" {
tags = "// +build " + tags + "\n"
}
return tags
}

49
vendor/github.com/UnnoTed/fileb0x/template/template.go generated vendored Normal file
View File

@ -0,0 +1,49 @@
package template
import (
"bytes"
"errors"
"text/template"
)
// Template holds b0x and file template
type Template struct {
template string
name string
Variables interface{}
}
// Set the template to be used
// "files" or "file"
func (t *Template) Set(name string) error {
t.name = name
if name != "files" && name != "file" {
return errors.New(`Error: Template must be "files" or "file"`)
}
if name == "files" {
t.template = filesTemplate
} else if name == "file" {
t.template = fileTemplate
}
return nil
}
// Exec the template and return the final data as byte array
func (t *Template) Exec() ([]byte, error) {
tmpl, err := template.New(t.name).Funcs(funcsTemplate).Parse(t.template)
if err != nil {
return nil, err
}
// exec template
buff := bytes.NewBufferString("")
err = tmpl.Execute(buff, t.Variables)
if err != nil {
return nil, err
}
return buff.Bytes(), nil
}

1
vendor/github.com/UnnoTed/fileb0x/test.bat generated vendored Normal file
View File

@ -0,0 +1 @@
go test ./... -v

40
vendor/github.com/UnnoTed/fileb0x/updater/config.go generated vendored Normal file
View File

@ -0,0 +1,40 @@
package updater
import (
"errors"
"os"
)
type Config struct {
IsUpdating bool
Username string
Password string
Enabled bool
Workers int
Empty bool
Port int
}
func (u Config) CheckInfo() error {
if !u.Enabled {
return nil
}
if u.Username == "{FROM_ENV}" || u.Username == "" {
u.Username = os.Getenv("fileb0x_username")
}
if u.Password == "{FROM_ENV}" || u.Password == "" {
u.Password = os.Getenv("fileb0x_password")
}
// check for empty username and password
if u.Username == "" {
return errors.New("fileb0x: You must provide an username in the config file or through an env var: fileb0x_username")
} else if u.Password == "" {
return errors.New("fileb0x: You must provide an password in the config file or through an env var: fileb0x_password")
}
return nil
}

366
vendor/github.com/UnnoTed/fileb0x/updater/updater.go generated vendored Normal file
View File

@ -0,0 +1,366 @@
package updater
import (
"bytes"
"crypto/sha256"
"errors"
"fmt"
"io"
"io/ioutil"
"log"
"mime/multipart"
"net/http"
"os"
"strings"
"encoding/hex"
"encoding/json"
"github.com/UnnoTed/fileb0x/file"
"github.com/airking05/termui"
)
// Auth holds authentication for the http basic auth
type Auth struct {
Username string
Password string
}
// ResponseInit holds a list of hashes from the server
// to be sent to the client so it can check if there
// is a new file or a changed file
type ResponseInit struct {
Success bool
Hashes map[string]string
}
// ProgressReader implements a io.Reader with a Read
// function that lets a callback report how much
// of the file was read
type ProgressReader struct {
io.Reader
Reporter func(r int64)
}
func (pr *ProgressReader) Read(p []byte) (n int, err error) {
n, err = pr.Reader.Read(p)
pr.Reporter(int64(n))
return
}
// Updater sends files that should be update to the b0x server
type Updater struct {
Server string
Auth Auth
ui []termui.Bufferer
RemoteHashes map[string]string
LocalHashes map[string]string
ToUpdate []string
Workers int
}
// Init gets the list of file hashes from the server
func (up *Updater) Init() error {
return up.Get()
}
// Get gets the list of file hashes from the server
func (up *Updater) Get() error {
log.Println("Creating hash list request...")
req, err := http.NewRequest("GET", up.Server, nil)
if err != nil {
return err
}
req.SetBasicAuth(up.Auth.Username, up.Auth.Password)
log.Println("Sending hash list request...")
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
return err
}
if resp.StatusCode == http.StatusUnauthorized {
return errors.New("Error Unautorized")
}
log.Println("Reading hash list response's body...")
var buf bytes.Buffer
_, err = buf.ReadFrom(resp.Body)
if err != nil {
return err
}
log.Println("Parsing hash list response's body...")
ri := &ResponseInit{}
err = json.Unmarshal(buf.Bytes(), &ri)
if err != nil {
log.Println("Body is", buf.Bytes())
return err
}
resp.Body.Close()
// copy hash list
if ri.Success {
log.Println("Copying hash list...")
up.RemoteHashes = ri.Hashes
up.LocalHashes = map[string]string{}
log.Println("Done")
}
return nil
}
// Updatable checks if there is any file that should be updated
func (up *Updater) Updatable(files map[string]*file.File) (bool, error) {
hasUpdates := !up.EqualHashes(files)
if hasUpdates {
log.Println("----------------------------------------")
log.Println("-- Found files that should be updated --")
log.Println("----------------------------------------")
} else {
log.Println("-----------------------")
log.Println("-- Nothing to update --")
log.Println("-----------------------")
}
return hasUpdates, nil
}
// EqualHash checks if a local file hash equals a remote file hash
// it returns false when a remote file hash isn't found (new files)
func (up *Updater) EqualHash(name string) bool {
hash, existsLocally := up.LocalHashes[name]
_, existsRemotely := up.RemoteHashes[name]
if !existsRemotely || !existsLocally || hash != up.RemoteHashes[name] {
if hash != up.RemoteHashes[name] {
log.Println("Found changes in file: ", name)
} else if !existsRemotely && existsLocally {
log.Println("Found new file: ", name)
}
return false
}
return true
}
// EqualHashes builds the list of local hashes before
// checking if there is any that should be updated
func (up *Updater) EqualHashes(files map[string]*file.File) bool {
for _, f := range files {
log.Println("Checking file for changes:", f.Path)
if len(f.Bytes) == 0 && !f.ReplacedText {
data, err := ioutil.ReadFile(f.OriginalPath)
if err != nil {
panic(err)
}
f.Bytes = data
// removes the []byte("") from the string
// when the data isn't in the Bytes variable
} else if len(f.Bytes) == 0 && f.ReplacedText && len(f.Data) > 0 {
f.Data = strings.TrimPrefix(f.Data, `[]byte("`)
f.Data = strings.TrimSuffix(f.Data, `")`)
f.Data = strings.Replace(f.Data, "\\x", "", -1)
var err error
f.Bytes, err = hex.DecodeString(f.Data)
if err != nil {
log.Println("SHIT", err)
return false
}
f.Data = ""
}
sha := sha256.New()
if _, err := sha.Write(f.Bytes); err != nil {
panic(err)
return false
}
up.LocalHashes[f.Path] = hex.EncodeToString(sha.Sum(nil))
}
// check if there is any file to update
update := false
for k := range up.LocalHashes {
if !up.EqualHash(k) {
up.ToUpdate = append(up.ToUpdate, k)
update = true
}
}
return !update
}
type job struct {
current int
files *file.File
total int
}
// UpdateFiles sends all files that should be updated to the server
// the number of concurrent uploads is controlled by Workers (default 1)
func (up *Updater) UpdateFiles(files map[string]*file.File) error {
updatable, err := up.Updatable(files)
if err != nil {
return err
}
if !updatable {
return nil
}
// everything's height
height := 3
err = termui.Init()
if err != nil {
panic(err)
}
defer termui.Close()
// info text
p := termui.NewPar("PRESS ANY KEY TO QUIT")
p.Height = height
p.Width = 50
p.TextFgColor = termui.ColorWhite
up.ui = append(up.ui, p)
doneTotal := 0
total := len(up.ToUpdate)
jobs := make(chan *job, total)
done := make(chan bool, total)
if up.Workers <= 0 {
up.Workers = 1
}
// just so it can listen to events
go func() {
termui.Loop()
}()
// cancel with any key
termui.Handle("/sys/kbd", func(termui.Event) {
termui.StopLoop()
os.Exit(1)
})
// stops rendering when total is reached
go func(upp *Updater, d *int) {
for {
if *d >= total {
break
}
termui.Render(upp.ui...)
}
}(up, &doneTotal)
for i := 0; i < up.Workers; i++ {
// creates a progress bar
g := termui.NewGauge()
g.Width = termui.TermWidth()
g.Height = height
g.BarColor = termui.ColorBlue
g.Y = len(up.ui) * height
up.ui = append(up.ui, g)
go up.worker(jobs, done, g)
}
for i, name := range up.ToUpdate {
jobs <- &job{
current: i + 1,
files: files[name],
total: total,
}
}
close(jobs)
for i := 0; i < total; i++ {
<-done
doneTotal++
}
return nil
}
func (up *Updater) worker(jobs <-chan *job, done chan<- bool, g *termui.Gauge) {
for job := range jobs {
f := job.files
fr := bytes.NewReader(f.Bytes)
g.BorderLabel = fmt.Sprintf("%d/%d %s", job.current, job.total, f.Path)
// updates progress bar's percentage
var total int64
pr := &ProgressReader{fr, func(r int64) {
total += r
g.Percent = int(float64(total) / float64(fr.Size()) * 100)
}}
r, w := io.Pipe()
writer := multipart.NewWriter(w)
// copy the file into the form
go func(fr *ProgressReader) {
defer w.Close()
part, err := writer.CreateFormFile("file", f.Path)
if err != nil {
panic(err)
}
_, err = io.Copy(part, fr)
if err != nil {
panic(err)
}
err = writer.Close()
if err != nil {
panic(err)
}
}(pr)
// create a post request with basic auth
// and the file included in a form
req, err := http.NewRequest("POST", up.Server, r)
if err != nil {
panic(err)
}
req.Header.Set("Content-Type", writer.FormDataContentType())
req.SetBasicAuth(up.Auth.Username, up.Auth.Password)
// sends the request
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
panic(err)
}
body := &bytes.Buffer{}
_, err = body.ReadFrom(resp.Body)
if err != nil {
panic(err)
}
if err := resp.Body.Close(); err != nil {
panic(err)
}
if body.String() != "ok" {
panic(body.String())
}
done <- true
}
}
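The `ProgressReader` defined at the top of this file is self-contained and can be reused on its own. A minimal sketch of wrapping an arbitrary reader with a progress callback follows; the payload string and the byte counting are illustrative only.

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	"github.com/UnnoTed/fileb0x/updater"
)

func main() {
	src := strings.NewReader("some payload that would be uploaded")

	var total int64
	pr := &updater.ProgressReader{
		Reader: src,
		Reporter: func(r int64) {
			total += r
			fmt.Printf("read %d bytes so far\n", total)
		},
	}

	// any consumer of pr (io.Copy, a multipart writer, etc.) triggers Reporter
	if _, err := io.Copy(ioutil.Discard, pr); err != nil {
		panic(err)
	}
}
```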

35
vendor/github.com/UnnoTed/fileb0x/utils/utils.go generated vendored Normal file
View File

@ -0,0 +1,35 @@
package utils
import (
"os"
"path/filepath"
"strings"
)
// FixPath converts \ and \\ to /
func FixPath(path string) string {
a := filepath.Clean(path)
b := strings.Replace(a, `\`, "/", -1)
c := strings.Replace(b, `\\`, "/", -1)
return c
}
// FixName converts / to _, spaces to - and commas to __
func FixName(path string) string {
a := FixPath(path)
b := strings.Replace(a, "/", "_", -1) // / to _
c := strings.Replace(b, " ", "-", -1) // {space} to -
return strings.Replace(c, ",", "__", -1) // , to __
}
// GetCurrentDir gets the directory where the application was run
func GetCurrentDir() (string, error) {
d, err := filepath.Abs(filepath.Dir(os.Args[0]))
return d, err
}
// Exists returns true when a folder/file exists
func Exists(path string) bool {
_, err := os.Stat(path)
return !os.IsNotExist(err)
}
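A minimal sketch of the helpers above; the example paths are made up for illustration.

```go
package main

import (
	"fmt"

	"github.com/UnnoTed/fileb0x/utils"
)

func main() {
	fmt.Println(utils.FixPath(`assets\img\logo.png`))       // assets/img/logo.png
	fmt.Println(utils.FixName("assets/img/my logo,v2.png")) // assets_img_my-logo__v2.png
	fmt.Println(utils.Exists("main.go"))                    // true if ./main.go exists
}
```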

26
vendor/github.com/airking05/termui/.gitignore generated vendored Normal file
View File

@ -0,0 +1,26 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof
.DS_Store
/vendor

6
vendor/github.com/airking05/termui/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,6 @@
language: go
go:
- tip
script: go test -v ./

22
vendor/github.com/airking05/termui/LICENSE generated vendored Normal file
View File

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Zack Guo
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

151
vendor/github.com/airking05/termui/README.md generated vendored Normal file
View File

@ -0,0 +1,151 @@
# termui [![Build Status](https://travis-ci.org/gizak/termui.svg?branch=master)](https://travis-ci.org/gizak/termui) [![Doc Status](https://godoc.org/github.com/gizak/termui?status.png)](https://godoc.org/github.com/gizak/termui)
<img src="./_example/dashboard.gif" alt="demo cast under osx 10.10; Terminal.app; Menlo Regular 12pt.)" width="80%">
`termui` is a cross-platform, easy-to-compile, and fully-customizable terminal dashboard. It is inspired by [blessed-contrib](https://github.com/yaronn/blessed-contrib), but purely in Go.
Now version v2 has arrived! It brings a new event system, a new theme system, a new `Buffer` interface and specific colour text rendering. (Some docs are still missing, but they will be completed soon!)
## Installation
`master` mirrors the v2 branch; to install:
go get -u github.com/gizak/termui
It is recommended to use locked dependencies via [glide](https://glide.sh): move to the `termui` source directory, then run `glide up`.
For compatibility reasons, you can choose to install the legacy version of `termui`:
go get gopkg.in/gizak/termui.v1
## Usage
### Layout
To use `termui`, the very first thing you may want to know is how to manage layout. `termui` offers two ways of doing this, known as absolute layout and grid layout.
__Absolute layout__
Each widget has an underlying block structure, which basically is a box model. It has border, label and padding properties. A widget's border can be hidden or displayed (along with its border label), and you can pick different foreground/background colours for the border as well. To display such a widget at a specific location in the terminal window, you need to assign `.X`, `.Y`, `.Height` and `.Width` values for each widget before sending it to `.Render`. Let's demonstrate these with a code snippet:
`````go
import ui "github.com/gizak/termui" // <- ui shortcut, optional
func main() {
err := ui.Init()
if err != nil {
panic(err)
}
defer ui.Close()
p := ui.NewPar(":PRESS q TO QUIT DEMO")
p.Height = 3
p.Width = 50
p.TextFgColor = ui.ColorWhite
p.BorderLabel = "Text Box"
p.BorderFg = ui.ColorCyan
g := ui.NewGauge()
g.Percent = 50
g.Width = 50
g.Height = 3
g.Y = 11
g.BorderLabel = "Gauge"
g.BarColor = ui.ColorRed
g.BorderFg = ui.ColorWhite
g.BorderLabelFg = ui.ColorCyan
ui.Render(p, g) // feel free to call Render, it's async and non-block
// event handler...
}
`````
Note that components can be overlapped (I'd rather call this a feature...); `Render(rs ...Renderer)` renders its args from left to right (i.e. each component's weight increases from left to right).
__Grid layout:__
<img src="./_example/grid.gif" alt="grid" width="60%">
Grid layout uses [12 columns grid system](http://www.w3schools.com/bootstrap/bootstrap_grid_system.asp) with expressive syntax. To use `Grid`, all we need to do is build a widget tree consisting of `Row`s and `Col`s (Actually a `Col` is also a `Row` but with a widget endpoint attached).
```go
import ui "github.com/gizak/termui"
// init and create widgets...
// build
ui.Body.AddRows(
ui.NewRow(
ui.NewCol(6, 0, widget0),
ui.NewCol(6, 0, widget1)),
ui.NewRow(
ui.NewCol(3, 0, widget2),
ui.NewCol(3, 0, widget30, widget31, widget32),
ui.NewCol(6, 0, widget4)))
// calculate layout
ui.Body.Align()
ui.Render(ui.Body)
```
### Events
`termui` ships with an http-like event mux handling system. All events are channeled up from different sources (typing, click, window resize, custom events) and then encoded as a universal `Event` object. `Event.Path` indicates the event type and `Event.Data` stores the event data struct. Adding a handler for a certain event is as easy as below:
```go
// handle key q pressing
ui.Handle("/sys/kbd/q", func(ui.Event) {
// press q to quit
ui.StopLoop()
})
ui.Handle("/sys/kbd/C-x", func(ui.Event) {
// handle Ctrl + x combination
})
ui.Handle("/sys/kbd", func(ui.Event) {
// handle all other key pressing
})
// handle a 1s timer
ui.Handle("/timer/1s", func(e ui.Event) {
t := e.Data.(ui.EvtTimer)
// t is a EvtTimer
if t.Count%2 == 0 {
// do something
}
})
ui.Loop() // block until StopLoop is called
```
### Widgets
Click an image to see the corresponding demo code.
[<img src="./_example/par.png" alt="par" type="image/png" width="45%">](https://github.com/gizak/termui/blob/master/_example/par.go)
[<img src="./_example/list.png" alt="list" type="image/png" width="45%">](https://github.com/gizak/termui/blob/master/_example/list.go)
[<img src="./_example/gauge.png" alt="gauge" type="image/png" width="45%">](https://github.com/gizak/termui/blob/master/_example/gauge.go)
[<img src="./_example/linechart.png" alt="linechart" type="image/png" width="45%">](https://github.com/gizak/termui/blob/master/_example/linechart.go)
[<img src="./_example/barchart.png" alt="barchart" type="image/png" width="45%">](https://github.com/gizak/termui/blob/master/_example/barchart.go)
[<img src="./_example/mbarchart.png" alt="barchart" type="image/png" width="45%">](https://github.com/gizak/termui/blob/master/_example/mbarchart.go)
[<img src="./_example/sparklines.png" alt="sparklines" type="image/png" width="45%">](https://github.com/gizak/termui/blob/master/_example/sparklines.go)
[<img src="./_example/table.png" alt="table" type="image/png" width="45%">](https://github.com/gizak/termui/blob/master/_example/table.go)
## GoDoc
[godoc](https://godoc.org/github.com/gizak/termui)
## TODO
- [x] Grid layout
- [x] Event system
- [x] Canvas widget
- [x] Refine APIs
- [ ] Focusable widgets
## Changelog
## License
This library is under the [MIT License](http://opensource.org/licenses/MIT)

149
vendor/github.com/airking05/termui/barchart.go generated vendored Normal file
View File

@ -0,0 +1,149 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import "fmt"
// BarChart creates multiple bars in a widget:
/*
bc := termui.NewBarChart()
data := []int{3, 2, 5, 3, 9, 5}
bclabels := []string{"S0", "S1", "S2", "S3", "S4", "S5"}
bc.BorderLabel = "Bar Chart"
bc.Data = data
bc.Width = 26
bc.Height = 10
bc.DataLabels = bclabels
bc.TextColor = termui.ColorGreen
bc.BarColor = termui.ColorRed
bc.NumColor = termui.ColorYellow
*/
type BarChart struct {
Block
BarColor Attribute
TextColor Attribute
NumColor Attribute
Data []int
DataLabels []string
BarWidth int
BarGap int
CellChar rune
labels [][]rune
dataNum [][]rune
numBar int
scale float64
max int
}
// NewBarChart returns a new *BarChart with current theme.
func NewBarChart() *BarChart {
bc := &BarChart{Block: *NewBlock()}
bc.BarColor = ThemeAttr("barchart.bar.bg")
bc.NumColor = ThemeAttr("barchart.num.fg")
bc.TextColor = ThemeAttr("barchart.text.fg")
bc.BarGap = 1
bc.BarWidth = 3
bc.CellChar = ' '
return bc
}
func (bc *BarChart) layout() {
bc.numBar = bc.innerArea.Dx() / (bc.BarGap + bc.BarWidth)
bc.labels = make([][]rune, bc.numBar)
bc.dataNum = make([][]rune, len(bc.Data))
for i := 0; i < bc.numBar && i < len(bc.DataLabels) && i < len(bc.Data); i++ {
bc.labels[i] = trimStr2Runes(bc.DataLabels[i], bc.BarWidth)
n := bc.Data[i]
s := fmt.Sprint(n)
bc.dataNum[i] = trimStr2Runes(s, bc.BarWidth)
}
//bc.max = bc.Data[0] // what if Data is nil? Sometimes when the bar graph is nil it panics with: runtime error: index out of range
// Assign a negative value so that the max value auto-populates from the data
if bc.max == 0 {
bc.max = -1
}
for i := 0; i < len(bc.Data); i++ {
if bc.max < bc.Data[i] {
bc.max = bc.Data[i]
}
}
bc.scale = float64(bc.max) / float64(bc.innerArea.Dy()-1)
}
func (bc *BarChart) SetMax(max int) {
if max > 0 {
bc.max = max
}
}
// Buffer implements Bufferer interface.
func (bc *BarChart) Buffer() Buffer {
buf := bc.Block.Buffer()
bc.layout()
for i := 0; i < bc.numBar && i < len(bc.Data) && i < len(bc.DataLabels); i++ {
h := int(float64(bc.Data[i]) / bc.scale)
oftX := i * (bc.BarWidth + bc.BarGap)
barBg := bc.Bg
barFg := bc.BarColor
if bc.CellChar == ' ' {
barBg = bc.BarColor
barFg = ColorDefault
if bc.BarColor == ColorDefault { // the same as above
barBg |= AttrReverse
}
}
// plot bar
for j := 0; j < bc.BarWidth; j++ {
for k := 0; k < h; k++ {
c := Cell{
Ch: bc.CellChar,
Bg: barBg,
Fg: barFg,
}
x := bc.innerArea.Min.X + i*(bc.BarWidth+bc.BarGap) + j
y := bc.innerArea.Min.Y + bc.innerArea.Dy() - 2 - k
buf.Set(x, y, c)
}
}
// plot text
for j, k := 0, 0; j < len(bc.labels[i]); j++ {
w := charWidth(bc.labels[i][j])
c := Cell{
Ch: bc.labels[i][j],
Bg: bc.Bg,
Fg: bc.TextColor,
}
y := bc.innerArea.Min.Y + bc.innerArea.Dy() - 1
x := bc.innerArea.Min.X + oftX + k
buf.Set(x, y, c)
k += w
}
// plot num
for j := 0; j < len(bc.dataNum[i]); j++ {
c := Cell{
Ch: bc.dataNum[i][j],
Fg: bc.NumColor,
Bg: barBg,
}
if h == 0 {
c.Bg = bc.Bg
}
x := bc.innerArea.Min.X + oftX + (bc.BarWidth-len(bc.dataNum[i]))/2 + j
y := bc.innerArea.Min.Y + bc.innerArea.Dy() - 2
buf.Set(x, y, c)
}
}
return buf
}

240
vendor/github.com/airking05/termui/block.go generated vendored Normal file
View File

@ -0,0 +1,240 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import "image"
// Hline is a horizontal line.
type Hline struct {
X int
Y int
Len int
Fg Attribute
Bg Attribute
}
// Vline is a vertical line.
type Vline struct {
X int
Y int
Len int
Fg Attribute
Bg Attribute
}
// Buffer draws a horizontal line.
func (l Hline) Buffer() Buffer {
if l.Len <= 0 {
return NewBuffer()
}
return NewFilledBuffer(l.X, l.Y, l.X+l.Len, l.Y+1, HORIZONTAL_LINE, l.Fg, l.Bg)
}
// Buffer draws a vertical line.
func (l Vline) Buffer() Buffer {
if l.Len <= 0 {
return NewBuffer()
}
return NewFilledBuffer(l.X, l.Y, l.X+1, l.Y+l.Len, VERTICAL_LINE, l.Fg, l.Bg)
}
// Buffer draws a box border.
func (b Block) drawBorder(buf Buffer) {
if !b.Border {
return
}
min := b.area.Min
max := b.area.Max
x0 := min.X
y0 := min.Y
x1 := max.X - 1
y1 := max.Y - 1
// draw lines
if b.BorderTop {
buf.Merge(Hline{x0, y0, x1 - x0, b.BorderFg, b.BorderBg}.Buffer())
}
if b.BorderBottom {
buf.Merge(Hline{x0, y1, x1 - x0, b.BorderFg, b.BorderBg}.Buffer())
}
if b.BorderLeft {
buf.Merge(Vline{x0, y0, y1 - y0, b.BorderFg, b.BorderBg}.Buffer())
}
if b.BorderRight {
buf.Merge(Vline{x1, y0, y1 - y0, b.BorderFg, b.BorderBg}.Buffer())
}
// draw corners
if b.BorderTop && b.BorderLeft && b.area.Dx() > 0 && b.area.Dy() > 0 {
buf.Set(x0, y0, Cell{TOP_LEFT, b.BorderFg, b.BorderBg})
}
if b.BorderTop && b.BorderRight && b.area.Dx() > 1 && b.area.Dy() > 0 {
buf.Set(x1, y0, Cell{TOP_RIGHT, b.BorderFg, b.BorderBg})
}
if b.BorderBottom && b.BorderLeft && b.area.Dx() > 0 && b.area.Dy() > 1 {
buf.Set(x0, y1, Cell{BOTTOM_LEFT, b.BorderFg, b.BorderBg})
}
if b.BorderBottom && b.BorderRight && b.area.Dx() > 1 && b.area.Dy() > 1 {
buf.Set(x1, y1, Cell{BOTTOM_RIGHT, b.BorderFg, b.BorderBg})
}
}
func (b Block) drawBorderLabel(buf Buffer) {
maxTxtW := b.area.Dx() - 2
tx := DTrimTxCls(DefaultTxBuilder.Build(b.BorderLabel, b.BorderLabelFg, b.BorderLabelBg), maxTxtW)
for i, w := 0, 0; i < len(tx); i++ {
buf.Set(b.area.Min.X+1+w, b.area.Min.Y, tx[i])
w += tx[i].Width()
}
}
// Block is a base struct for all other upper level widgets,
// consider it as css: display:block.
// Normally you do not need to create it manually.
type Block struct {
area image.Rectangle
innerArea image.Rectangle
X int
Y int
Border bool
BorderFg Attribute
BorderBg Attribute
BorderLeft bool
BorderRight bool
BorderTop bool
BorderBottom bool
BorderLabel string
BorderLabelFg Attribute
BorderLabelBg Attribute
Display bool
Bg Attribute
Width int
Height int
PaddingTop int
PaddingBottom int
PaddingLeft int
PaddingRight int
id string
Float Align
}
// NewBlock returns a *Block which inherits styles from current theme.
func NewBlock() *Block {
b := Block{}
b.Display = true
b.Border = true
b.BorderLeft = true
b.BorderRight = true
b.BorderTop = true
b.BorderBottom = true
b.BorderBg = ThemeAttr("border.bg")
b.BorderFg = ThemeAttr("border.fg")
b.BorderLabelBg = ThemeAttr("label.bg")
b.BorderLabelFg = ThemeAttr("label.fg")
b.Bg = ThemeAttr("block.bg")
b.Width = 2
b.Height = 2
b.id = GenId()
b.Float = AlignNone
return &b
}
func (b Block) Id() string {
return b.id
}
// Align computes box model
func (b *Block) Align() {
// outer
b.area.Min.X = 0
b.area.Min.Y = 0
b.area.Max.X = b.Width
b.area.Max.Y = b.Height
// float
b.area = AlignArea(TermRect(), b.area, b.Float)
b.area = MoveArea(b.area, b.X, b.Y)
// inner
b.innerArea.Min.X = b.area.Min.X + b.PaddingLeft
b.innerArea.Min.Y = b.area.Min.Y + b.PaddingTop
b.innerArea.Max.X = b.area.Max.X - b.PaddingRight
b.innerArea.Max.Y = b.area.Max.Y - b.PaddingBottom
if b.Border {
if b.BorderLeft {
b.innerArea.Min.X++
}
if b.BorderRight {
b.innerArea.Max.X--
}
if b.BorderTop {
b.innerArea.Min.Y++
}
if b.BorderBottom {
b.innerArea.Max.Y--
}
}
}
// InnerBounds returns the internal bounds of the block after aligning and
// calculating the padding and border, if any.
func (b *Block) InnerBounds() image.Rectangle {
b.Align()
return b.innerArea
}
// Buffer implements Bufferer interface.
// Draw background and border (if any).
func (b *Block) Buffer() Buffer {
b.Align()
buf := NewBuffer()
buf.SetArea(b.area)
buf.Fill(' ', ColorDefault, b.Bg)
b.drawBorder(buf)
b.drawBorderLabel(buf)
return buf
}
// GetHeight implements GridBufferer.
// It returns current height of the block.
func (b Block) GetHeight() int {
return b.Height
}
// SetX implements GridBufferer interface, which sets block's x position.
func (b *Block) SetX(x int) {
b.X = x
}
// SetY implements GridBufferer interface, it sets y position for block.
func (b *Block) SetY(y int) {
b.Y = y
}
// SetWidth implements GridBuffer interface, it sets block's width.
func (b *Block) SetWidth(w int) {
b.Width = w
}
func (b Block) InnerWidth() int {
return b.innerArea.Dx()
}
func (b Block) InnerHeight() int {
return b.innerArea.Dy()
}
func (b Block) InnerX() int {
return b.innerArea.Min.X
}
func (b Block) InnerY() int { return b.innerArea.Min.Y }

20
vendor/github.com/airking05/termui/block_common.go generated vendored Normal file
View File

@ -0,0 +1,20 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
// +build !windows
package termui
const TOP_RIGHT = '┐'
const VERTICAL_LINE = '│'
const HORIZONTAL_LINE = '─'
const TOP_LEFT = '┌'
const BOTTOM_RIGHT = '┘'
const BOTTOM_LEFT = '└'
const VERTICAL_LEFT = '┤'
const VERTICAL_RIGHT = '├'
const HORIZONTAL_DOWN = '┬'
const HORIZONTAL_UP = '┴'
const QUOTA_LEFT = '«'
const QUOTA_RIGHT = '»'

14
vendor/github.com/airking05/termui/block_windows.go generated vendored Normal file
View File

@ -0,0 +1,14 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
// +build windows
package termui
const TOP_RIGHT = '+'
const VERTICAL_LINE = '|'
const HORIZONTAL_LINE = '-'
const TOP_LEFT = '+'
const BOTTOM_RIGHT = '+'
const BOTTOM_LEFT = '+'

106
vendor/github.com/airking05/termui/buffer.go generated vendored Normal file
View File

@ -0,0 +1,106 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import "image"
// Cell is a rune with assigned Fg and Bg
type Cell struct {
Ch rune
Fg Attribute
Bg Attribute
}
// Buffer is a renderable rectangle cell data container.
type Buffer struct {
Area image.Rectangle // selected drawing area
CellMap map[image.Point]Cell
}
// At returns the cell at (x,y).
func (b Buffer) At(x, y int) Cell {
return b.CellMap[image.Pt(x, y)]
}
// Set assigns a char to (x,y)
func (b Buffer) Set(x, y int, c Cell) {
b.CellMap[image.Pt(x, y)] = c
}
// Bounds returns the domain for which At can return non-zero color.
func (b Buffer) Bounds() image.Rectangle {
x0, y0, x1, y1 := 0, 0, 0, 0
for p := range b.CellMap {
if p.X > x1 {
x1 = p.X
}
if p.X < x0 {
x0 = p.X
}
if p.Y > y1 {
y1 = p.Y
}
if p.Y < y0 {
y0 = p.Y
}
}
return image.Rect(x0, y0, x1+1, y1+1)
}
// SetArea assigns a new rect area to Buffer b.
func (b *Buffer) SetArea(r image.Rectangle) {
b.Area.Max = r.Max
b.Area.Min = r.Min
}
// Sync sets drawing area to the buffer's bound
func (b *Buffer) Sync() {
b.SetArea(b.Bounds())
}
// NewCell returns a new cell
func NewCell(ch rune, fg, bg Attribute) Cell {
return Cell{ch, fg, bg}
}
// Merge merges bs Buffers onto b
func (b *Buffer) Merge(bs ...Buffer) {
for _, buf := range bs {
for p, v := range buf.CellMap {
b.Set(p.X, p.Y, v)
}
b.SetArea(b.Area.Union(buf.Area))
}
}
// NewBuffer returns a new Buffer
func NewBuffer() Buffer {
return Buffer{
CellMap: make(map[image.Point]Cell),
Area: image.Rectangle{}}
}
// Fill fills the Buffer b with ch,fg and bg.
func (b Buffer) Fill(ch rune, fg, bg Attribute) {
for x := b.Area.Min.X; x < b.Area.Max.X; x++ {
for y := b.Area.Min.Y; y < b.Area.Max.Y; y++ {
b.Set(x, y, Cell{ch, fg, bg})
}
}
}
// NewFilledBuffer returns a new Buffer filled with ch, fb and bg.
func NewFilledBuffer(x0, y0, x1, y1 int, ch rune, fg, bg Attribute) Buffer {
buf := NewBuffer()
buf.Area.Min = image.Pt(x0, y0)
buf.Area.Max = image.Pt(x1, y1)
for x := buf.Area.Min.X; x < buf.Area.Max.X; x++ {
for y := buf.Area.Min.Y; y < buf.Area.Max.Y; y++ {
buf.Set(x, y, Cell{ch, fg, bg})
}
}
return buf
}
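A minimal sketch of composing `Buffer`s with the API above; the sizes, glyphs and colours are chosen arbitrarily for illustration.

```go
package main

import ui "github.com/airking05/termui"

func main() {
	// a 10x3 area filled with dots, with one highlighted cell set on top
	buf := ui.NewFilledBuffer(0, 0, 10, 3, '.', ui.ColorWhite, ui.ColorDefault)
	buf.Set(2, 1, ui.NewCell('#', ui.ColorRed, ui.ColorDefault))

	// Merge overlays another buffer and unions the drawing areas
	overlay := ui.NewFilledBuffer(8, 0, 12, 1, '*', ui.ColorYellow, ui.ColorDefault)
	buf.Merge(overlay)

	c := buf.At(2, 1) // Cell{'#', ColorRed, ColorDefault}
	_ = c
}
```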

72
vendor/github.com/airking05/termui/canvas.go generated vendored Normal file
View File

@ -0,0 +1,72 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
/*
dots:
,___,
|1 4|
|2 5|
|3 6|
|7 8|
`````
*/
var brailleBase = '\u2800'
var brailleOftMap = [4][2]rune{
{'\u0001', '\u0008'},
{'\u0002', '\u0010'},
{'\u0004', '\u0020'},
{'\u0040', '\u0080'}}
// Canvas contains drawing map: i,j -> rune
type Canvas map[[2]int]rune
// NewCanvas returns an empty Canvas
func NewCanvas() Canvas {
return make(map[[2]int]rune)
}
func chOft(x, y int) rune {
return brailleOftMap[y%4][x%2]
}
func (c Canvas) rawCh(x, y int) rune {
if ch, ok := c[[2]int{x, y}]; ok {
return ch
}
return '\u0000' //brailleOffset
}
// return coordinate in terminal
func chPos(x, y int) (int, int) {
return y / 4, x / 2
}
// Set sets a point (x,y) in the virtual coordinate
func (c Canvas) Set(x, y int) {
i, j := chPos(x, y)
ch := c.rawCh(i, j)
ch |= chOft(x, y)
c[[2]int{i, j}] = ch
}
// Unset removes point (x,y)
func (c Canvas) Unset(x, y int) {
i, j := chPos(x, y)
ch := c.rawCh(i, j)
ch &= ^chOft(x, y)
c[[2]int{i, j}] = ch
}
// Buffer returns un-styled points
func (c Canvas) Buffer() Buffer {
buf := NewBuffer()
for k, v := range c {
buf.Set(k[0], k[1], Cell{Ch: v + brailleBase})
}
return buf
}
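A minimal sketch of plotting points with `Canvas`; the loop bounds are arbitrary, and each 2x4 block of virtual points is packed into one braille rune as described by the dot diagram above.

```go
package main

import (
	"fmt"

	ui "github.com/airking05/termui"
)

func main() {
	c := ui.NewCanvas()

	// plot a short diagonal line in virtual (braille) coordinates
	for i := 0; i < 8; i++ {
		c.Set(i, i)
	}

	// Buffer packs the points into braille runes, one cell per 2x4 block
	for p, cell := range c.Buffer().CellMap {
		fmt.Printf("cell %v: %q\n", p, cell.Ch)
	}
}
```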

54
vendor/github.com/airking05/termui/config.py generated vendored Normal file
View File

@ -0,0 +1,54 @@
#!/usr/bin/env python3
import re
import os
import io
copyright = """// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
"""
exclude_dirs = [".git", "_docs"]
exclude_files = []
include_dirs = [".", "debug", "extra", "test", "_example"]
def is_target(fpath):
if os.path.splitext(fpath)[-1] == ".go":
return True
return False
def update_copyright(fpath):
print("processing " + fpath)
f = io.open(fpath, 'r', encoding='utf-8')
fstr = f.read()
f.close()
# remove old
m = re.search('^// Copyright .+?\r?\n\r?\n', fstr, re.MULTILINE|re.DOTALL)
if m:
fstr = fstr[m.end():]
# add new
fstr = copyright + fstr
f = io.open(fpath, 'w',encoding='utf-8')
f.write(fstr)
f.close()
def main():
for d in include_dirs:
files = [
os.path.join(d, f) for f in os.listdir(d)
if os.path.isfile(os.path.join(d, f))
]
for f in files:
if is_target(f):
update_copyright(f)
if __name__ == '__main__':
main()

29
vendor/github.com/airking05/termui/doc.go generated vendored Normal file
View File

@ -0,0 +1,29 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
/*
Package termui is a library designed for creating command line UI. For more info, goto http://github.com/gizak/termui
A simplest example:
package main
import ui "github.com/gizak/termui"
func main() {
if err:=ui.Init(); err != nil {
panic(err)
}
defer ui.Close()
g := ui.NewGauge()
g.Percent = 50
g.Width = 50
g.BorderLabel = "Gauge"
ui.Render(g)
ui.Loop()
}
*/
package termui

323
vendor/github.com/airking05/termui/events.go generated vendored Normal file
View File

@ -0,0 +1,323 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import (
"path"
"strconv"
"sync"
"time"
"github.com/nsf/termbox-go"
)
type Event struct {
Type string
Path string
From string
To string
Data interface{}
Time int64
}
var sysEvtChs []chan Event
type EvtKbd struct {
KeyStr string
}
func evtKbd(e termbox.Event) EvtKbd {
ek := EvtKbd{}
k := string(e.Ch)
pre := ""
mod := ""
if e.Mod == termbox.ModAlt {
mod = "M-"
}
if e.Ch == 0 {
if e.Key > 0xFFFF-12 {
k = "<f" + strconv.Itoa(0xFFFF-int(e.Key)+1) + ">"
} else if e.Key > 0xFFFF-25 {
ks := []string{"<insert>", "<delete>", "<home>", "<end>", "<previous>", "<next>", "<up>", "<down>", "<left>", "<right>"}
k = ks[0xFFFF-int(e.Key)-12]
}
if e.Key <= 0x7F {
pre = "C-"
k = string('a' - 1 + int(e.Key))
kmap := map[termbox.Key][2]string{
termbox.KeyCtrlSpace: {"C-", "<space>"},
termbox.KeyBackspace: {"", "<backspace>"},
termbox.KeyTab: {"", "<tab>"},
termbox.KeyEnter: {"", "<enter>"},
termbox.KeyEsc: {"", "<escape>"},
termbox.KeyCtrlBackslash: {"C-", "\\"},
termbox.KeyCtrlSlash: {"C-", "/"},
termbox.KeySpace: {"", "<space>"},
termbox.KeyCtrl8: {"C-", "8"},
}
if sk, ok := kmap[e.Key]; ok {
pre = sk[0]
k = sk[1]
}
}
}
ek.KeyStr = pre + mod + k
return ek
}
func crtTermboxEvt(e termbox.Event) Event {
systypemap := map[termbox.EventType]string{
termbox.EventKey: "keyboard",
termbox.EventResize: "window",
termbox.EventMouse: "mouse",
termbox.EventError: "error",
termbox.EventInterrupt: "interrupt",
}
ne := Event{From: "/sys", Time: time.Now().Unix()}
typ := e.Type
ne.Type = systypemap[typ]
switch typ {
case termbox.EventKey:
kbd := evtKbd(e)
ne.Path = "/sys/kbd/" + kbd.KeyStr
ne.Data = kbd
case termbox.EventResize:
wnd := EvtWnd{}
wnd.Width = e.Width
wnd.Height = e.Height
ne.Path = "/sys/wnd/resize"
ne.Data = wnd
case termbox.EventError:
err := EvtErr(e.Err)
ne.Path = "/sys/err"
ne.Data = err
case termbox.EventMouse:
m := EvtMouse{}
m.X = e.MouseX
m.Y = e.MouseY
ne.Path = "/sys/mouse"
ne.Data = m
}
return ne
}
type EvtWnd struct {
Width int
Height int
}
type EvtMouse struct {
X int
Y int
Press string
}
type EvtErr error
func hookTermboxEvt() {
for {
e := termbox.PollEvent()
for _, c := range sysEvtChs {
go func(ch chan Event) {
ch <- crtTermboxEvt(e)
}(c)
}
}
}
func NewSysEvtCh() chan Event {
ec := make(chan Event)
sysEvtChs = append(sysEvtChs, ec)
return ec
}
var DefaultEvtStream = NewEvtStream()
type EvtStream struct {
sync.RWMutex
srcMap map[string]chan Event
stream chan Event
wg sync.WaitGroup
sigStopLoop chan Event
Handlers map[string]func(Event)
hook func(Event)
}
func NewEvtStream() *EvtStream {
return &EvtStream{
srcMap: make(map[string]chan Event),
stream: make(chan Event),
Handlers: make(map[string]func(Event)),
sigStopLoop: make(chan Event),
}
}
func (es *EvtStream) Init() {
es.Merge("internal", es.sigStopLoop)
go func() {
es.wg.Wait()
close(es.stream)
}()
}
func cleanPath(p string) string {
if p == "" {
return "/"
}
if p[0] != '/' {
p = "/" + p
}
return path.Clean(p)
}
func isPathMatch(pattern, path string) bool {
if len(pattern) == 0 {
return false
}
n := len(pattern)
return len(path) >= n && path[0:n] == pattern
}
func (es *EvtStream) Merge(name string, ec chan Event) {
es.Lock()
defer es.Unlock()
es.wg.Add(1)
es.srcMap[name] = ec
go func(a chan Event) {
for n := range a {
n.From = name
es.stream <- n
}
es.wg.Done()
}(ec)
}
func (es *EvtStream) Handle(path string, handler func(Event)) {
es.Handlers[cleanPath(path)] = handler
}
func findMatch(mux map[string]func(Event), path string) string {
n := -1
pattern := ""
for m := range mux {
if !isPathMatch(m, path) {
continue
}
if len(m) > n {
pattern = m
n = len(m)
}
}
return pattern
}
// Remove all existing defined Handlers from the map
func (es *EvtStream) ResetHandlers() {
for Path, _ := range es.Handlers {
delete(es.Handlers, Path)
}
return
}
func (es *EvtStream) match(path string) string {
return findMatch(es.Handlers, path)
}
func (es *EvtStream) Hook(f func(Event)) {
es.hook = f
}
func (es *EvtStream) Loop() {
for e := range es.stream {
switch e.Path {
case "/sig/stoploop":
return
}
go func(a Event) {
es.RLock()
defer es.RUnlock()
if pattern := es.match(a.Path); pattern != "" {
es.Handlers[pattern](a)
}
}(e)
if es.hook != nil {
es.hook(e)
}
}
}
func (es *EvtStream) StopLoop() {
go func() {
e := Event{
Path: "/sig/stoploop",
}
es.sigStopLoop <- e
}()
}
func Merge(name string, ec chan Event) {
DefaultEvtStream.Merge(name, ec)
}
func Handle(path string, handler func(Event)) {
DefaultEvtStream.Handle(path, handler)
}
func Loop() {
DefaultEvtStream.Loop()
}
func StopLoop() {
DefaultEvtStream.StopLoop()
}
type EvtTimer struct {
Duration time.Duration
Count uint64
}
func NewTimerCh(du time.Duration) chan Event {
t := make(chan Event)
go func(a chan Event) {
n := uint64(0)
for {
n++
time.Sleep(du)
e := Event{}
e.Type = "timer"
e.Path = "/timer/" + du.String()
e.Time = time.Now().Unix()
e.Data = EvtTimer{
Duration: du,
Count: n,
}
t <- e
}
}(t)
return t
}
var DefualtHandler = func(e Event) {
}
var usrEvtCh = make(chan Event)
func SendCustomEvt(path string, data interface{}) {
e := Event{}
e.Path = path
e.Data = data
e.Time = time.Now().Unix()
usrEvtCh <- e
}

109
vendor/github.com/airking05/termui/gauge.go generated vendored Normal file
View File

@ -0,0 +1,109 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import (
"strconv"
"strings"
)
// Gauge is a progress bar like widget.
// A simple example:
/*
g := termui.NewGauge()
g.Percent = 40
g.Width = 50
g.Height = 3
g.BorderLabel = "Slim Gauge"
g.BarColor = termui.ColorRed
g.PercentColor = termui.ColorBlue
*/
const ColorUndef Attribute = Attribute(^uint16(0))
type Gauge struct {
Block
Percent int
BarColor Attribute
PercentColor Attribute
PercentColorHighlighted Attribute
Label string
LabelAlign Align
}
// NewGauge returns a new gauge with current theme.
func NewGauge() *Gauge {
g := &Gauge{
Block: *NewBlock(),
PercentColor: ThemeAttr("gauge.percent.fg"),
BarColor: ThemeAttr("gauge.bar.bg"),
Label: "{{percent}}%",
LabelAlign: AlignCenter,
PercentColorHighlighted: ColorUndef,
}
g.Width = 12
g.Height = 5
return g
}
// Buffer implements Bufferer interface.
func (g *Gauge) Buffer() Buffer {
buf := g.Block.Buffer()
// plot bar
w := g.Percent * g.innerArea.Dx() / 100
for i := 0; i < g.innerArea.Dy(); i++ {
for j := 0; j < w; j++ {
c := Cell{}
c.Ch = ' '
c.Bg = g.BarColor
if c.Bg == ColorDefault {
c.Bg |= AttrReverse
}
buf.Set(g.innerArea.Min.X+j, g.innerArea.Min.Y+i, c)
}
}
// plot percentage
s := strings.Replace(g.Label, "{{percent}}", strconv.Itoa(g.Percent), -1)
pry := g.innerArea.Min.Y + g.innerArea.Dy()/2
rs := str2runes(s)
var pos int
switch g.LabelAlign {
case AlignLeft:
pos = 0
case AlignCenter:
pos = (g.innerArea.Dx() - strWidth(s)) / 2
case AlignRight:
pos = g.innerArea.Dx() - strWidth(s) - 1
}
pos += g.innerArea.Min.X
for i, v := range rs {
c := Cell{
Ch: v,
Fg: g.PercentColor,
}
if w+g.innerArea.Min.X > pos+i {
c.Bg = g.BarColor
if c.Bg == ColorDefault {
c.Bg |= AttrReverse
}
if g.PercentColorHighlighted != ColorUndef {
c.Fg = g.PercentColorHighlighted
}
} else {
c.Bg = g.Block.Bg
}
buf.Set(1+pos+i, pry, c)
}
return buf
}

30
vendor/github.com/airking05/termui/glide.lock generated vendored Normal file
View File

@ -0,0 +1,30 @@
hash: 7a754ba100256404a978b2fc8738aee337beb822458e4b6060399fb89ebd215c
updated: 2016-11-03T17:39:24.323773674-04:00
imports:
- name: github.com/maruel/panicparse
version: ad661195ed0e88491e0f14be6613304e3b1141d6
subpackages:
- stack
- name: github.com/mattn/go-runewidth
version: 737072b4e32b7a5018b4a7125da8d12de90e8045
- name: github.com/mitchellh/go-wordwrap
version: ad45545899c7b13c020ea92b2072220eefad42b8
- name: github.com/nsf/termbox-go
version: b6acae516ace002cb8105a89024544a1480655a5
- name: golang.org/x/net
version: 569280fa63be4e201b975e5411e30a92178f0118
subpackages:
- websocket
testImports:
- name: github.com/davecgh/go-spew
version: 346938d642f2ec3594ed81d874461961cd0faa76
subpackages:
- spew
- name: github.com/pmezard/go-difflib
version: d8ed2627bdf02c080bf22230dbb337003b7aba2d
subpackages:
- difflib
- name: github.com/stretchr/testify
version: 976c720a22c8eb4eb6a0b4348ad85ad12491a506
subpackages:
- assert

9
vendor/github.com/airking05/termui/glide.yaml generated vendored Normal file
View File

@ -0,0 +1,9 @@
package: github.com/gizak/termui
import:
- package: github.com/mattn/go-runewidth
- package: github.com/mitchellh/go-wordwrap
- package: github.com/nsf/termbox-go
- package: golang.org/x/net
subpackages:
- websocket
- package: github.com/maruel/panicparse

279
vendor/github.com/airking05/termui/grid.go generated vendored Normal file
View File

@ -0,0 +1,279 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
// GridBufferer introduces a Bufferer that can be manipulated by Grid.
type GridBufferer interface {
Bufferer
GetHeight() int
SetWidth(int)
SetX(int)
SetY(int)
}
// Row builds a layout tree
type Row struct {
Cols []*Row //children
Widget GridBufferer // root
X int
Y int
Width int
Height int
Span int
Offset int
}
// calculate and set the underlying layout tree's x, y, height and width.
func (r *Row) calcLayout() {
r.assignWidth(r.Width)
r.Height = r.solveHeight()
r.assignX(r.X)
r.assignY(r.Y)
}
// tell if the node is leaf in the tree.
func (r *Row) isLeaf() bool {
return r.Cols == nil || len(r.Cols) == 0
}
func (r *Row) isRenderableLeaf() bool {
return r.isLeaf() && r.Widget != nil
}
// assign widgets' (and their parent rows') width recursively.
func (r *Row) assignWidth(w int) {
r.SetWidth(w)
accW := 0 // acc span and offset
calcW := make([]int, len(r.Cols)) // calculated width
calcOftX := make([]int, len(r.Cols)) // computed start position of x
for i, c := range r.Cols {
accW += c.Span + c.Offset
cw := int(float64(c.Span*r.Width) / 12.0)
if i >= 1 {
calcOftX[i] = calcOftX[i-1] +
calcW[i-1] +
int(float64(r.Cols[i-1].Offset*r.Width)/12.0)
}
// use up the space if it is the last col
if i == len(r.Cols)-1 && accW == 12 {
cw = r.Width - calcOftX[i]
}
calcW[i] = cw
r.Cols[i].assignWidth(cw)
}
}
// bottom up calc and set rows' (and their widgets') height,
// return r's total height.
func (r *Row) solveHeight() int {
if r.isRenderableLeaf() {
r.Height = r.Widget.GetHeight()
return r.Widget.GetHeight()
}
maxh := 0
if !r.isLeaf() {
for _, c := range r.Cols {
nh := c.solveHeight()
// when rows are embedded in Cols, row widgets stack up
if r.Widget != nil {
nh += r.Widget.GetHeight()
}
if nh > maxh {
maxh = nh
}
}
}
r.Height = maxh
return maxh
}
// recursively assign x position for r tree.
func (r *Row) assignX(x int) {
r.SetX(x)
if !r.isLeaf() {
acc := 0
for i, c := range r.Cols {
if c.Offset != 0 {
acc += int(float64(c.Offset*r.Width) / 12.0)
}
r.Cols[i].assignX(x + acc)
acc += c.Width
}
}
}
// recursively assign y position to r.
func (r *Row) assignY(y int) {
r.SetY(y)
if r.isLeaf() {
return
}
for i := range r.Cols {
acc := 0
if r.Widget != nil {
acc = r.Widget.GetHeight()
}
r.Cols[i].assignY(y + acc)
}
}
// GetHeight implements GridBufferer interface.
func (r Row) GetHeight() int {
return r.Height
}
// SetX implements GridBufferer interface.
func (r *Row) SetX(x int) {
r.X = x
if r.Widget != nil {
r.Widget.SetX(x)
}
}
// SetY implements GridBufferer interface.
func (r *Row) SetY(y int) {
r.Y = y
if r.Widget != nil {
r.Widget.SetY(y)
}
}
// SetWidth implements GridBufferer interface.
func (r *Row) SetWidth(w int) {
r.Width = w
if r.Widget != nil {
r.Widget.SetWidth(w)
}
}
// Buffer implements the Bufferer interface,
// recursively merging all widgets' buffers
func (r *Row) Buffer() Buffer {
merged := NewBuffer()
if r.isRenderableLeaf() {
return r.Widget.Buffer()
}
// for those are not leaves but have a renderable widget
if r.Widget != nil {
merged.Merge(r.Widget.Buffer())
}
// collect buffer from children
if !r.isLeaf() {
for _, c := range r.Cols {
merged.Merge(c.Buffer())
}
}
return merged
}
// Grid implements 12 columns system.
// A simple example:
/*
import ui "github.com/gizak/termui"
// init and create widgets...
// build
ui.Body.AddRows(
ui.NewRow(
ui.NewCol(6, 0, widget0),
ui.NewCol(6, 0, widget1)),
ui.NewRow(
ui.NewCol(3, 0, widget2),
ui.NewCol(3, 0, widget30, widget31, widget32),
ui.NewCol(6, 0, widget4)))
// calculate layout
ui.Body.Align()
ui.Render(ui.Body)
*/
type Grid struct {
Rows []*Row
Width int
X int
Y int
BgColor Attribute
}
// NewGrid returns *Grid with given rows.
func NewGrid(rows ...*Row) *Grid {
return &Grid{Rows: rows}
}
// AddRows appends given rows to Grid.
func (g *Grid) AddRows(rs ...*Row) {
g.Rows = append(g.Rows, rs...)
}
// NewRow creates a new row out of given columns.
func NewRow(cols ...*Row) *Row {
rs := &Row{Span: 12, Cols: cols}
return rs
}
// NewCol accepts either GridBufferer widgets or a single *Row created by NewRow.
// Note that if multiple widgets are provided, they will stack up in the col.
func NewCol(span, offset int, widgets ...GridBufferer) *Row {
r := &Row{Span: span, Offset: offset}
if widgets != nil && len(widgets) == 1 {
wgt := widgets[0]
nw, isRow := wgt.(*Row)
if isRow {
r.Cols = nw.Cols
} else {
r.Widget = wgt
}
return r
}
r.Cols = []*Row{}
ir := r
for _, w := range widgets {
nr := &Row{Span: 12, Widget: w}
ir.Cols = []*Row{nr}
ir = nr
}
return r
}
// Align calculates each row's layout.
func (g *Grid) Align() {
h := 0
for _, r := range g.Rows {
r.SetWidth(g.Width)
r.SetX(g.X)
r.SetY(g.Y + h)
r.calcLayout()
h += r.GetHeight()
}
}
// Buffer implements the Bufferer interface.
func (g Grid) Buffer() Buffer {
buf := NewBuffer()
for _, r := range g.Rows {
buf.Merge(r.Buffer())
}
return buf
}
var Body *Grid

222
vendor/github.com/airking05/termui/helper.go generated vendored Normal file
View File

@ -0,0 +1,222 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import (
"regexp"
"strings"
tm "github.com/nsf/termbox-go"
)
import rw "github.com/mattn/go-runewidth"
/* ---------------Port from termbox-go --------------------- */
// Attribute is printable cell's color and style.
type Attribute uint16
// 8 basic colors
const (
ColorDefault Attribute = iota
ColorBlack
ColorRed
ColorGreen
ColorYellow
ColorBlue
ColorMagenta
ColorCyan
ColorWhite
)
// NumberofColors is the number of basic colors
const NumberofColors = 8
// Text style
const (
AttrBold Attribute = 1 << (iota + 9)
AttrUnderline
AttrReverse
)
var (
dot = "…"
dotw = rw.StringWidth(dot)
)
/* ----------------------- End ----------------------------- */
func toTmAttr(x Attribute) tm.Attribute {
return tm.Attribute(x)
}
func str2runes(s string) []rune {
return []rune(s)
}
// Here for backwards-compatibility.
func trimStr2Runes(s string, w int) []rune {
return TrimStr2Runes(s, w)
}
// TrimStr2Runes trims the string to a display width of w, appending … when
// the string is wider than w. If the string already fits within w, its runes
// are returned unchanged.
func TrimStr2Runes(s string, w int) []rune {
if w <= 0 {
return []rune{}
}
sw := rw.StringWidth(s)
if sw > w {
return []rune(rw.Truncate(s, w, dot))
}
return str2runes(s)
}
// TrimStrIfAppropriate trims the string to "s[:-1] + …" when it is wider
// than w; otherwise it returns the string unchanged.
func TrimStrIfAppropriate(s string, w int) string {
if w <= 0 {
return ""
}
sw := rw.StringWidth(s)
if sw > w {
return rw.Truncate(s, w, dot)
}
return s
}
func strWidth(s string) int {
return rw.StringWidth(s)
}
func charWidth(ch rune) int {
return rw.RuneWidth(ch)
}
var whiteSpaceRegex = regexp.MustCompile(`\s`)
// StringToAttribute converts text to a termui attribute. You may specify more
// than one attribute like this: "BLACK, BOLD, ...". All whitespace is ignored.
func StringToAttribute(text string) Attribute {
text = whiteSpaceRegex.ReplaceAllString(strings.ToLower(text), "")
attributes := strings.Split(text, ",")
result := Attribute(0)
for _, theAttribute := range attributes {
var match Attribute
switch theAttribute {
case "reset", "default":
match = ColorDefault
case "black":
match = ColorBlack
case "red":
match = ColorRed
case "green":
match = ColorGreen
case "yellow":
match = ColorYellow
case "blue":
match = ColorBlue
case "magenta":
match = ColorMagenta
case "cyan":
match = ColorCyan
case "white":
match = ColorWhite
case "bold":
match = AttrBold
case "underline":
match = AttrUnderline
case "reverse":
match = AttrReverse
}
result |= match
}
return result
}
// TextCells returns coloured text cells as a []Cell
func TextCells(s string, fg, bg Attribute) []Cell {
cs := make([]Cell, 0, len(s))
// sequence := MarkdownTextRendererFactory{}.TextRenderer(s).Render(fg, bg)
// runes := []rune(sequence.NormalizedText)
runes := str2runes(s)
for n := range runes {
// point, _ := sequence.PointAt(n, 0, 0)
// cs = append(cs, Cell{point.Ch, point.Fg, point.Bg})
cs = append(cs, Cell{runes[n], fg, bg})
}
return cs
}
// Width returns the actual screen space the cell takes (usually 1 or 2).
func (c Cell) Width() int {
return charWidth(c.Ch)
}
// Copy returns a copy of c
func (c Cell) Copy() Cell {
return c
}
// TrimTxCells trims the overflowed text cells sequence.
func TrimTxCells(cs []Cell, w int) []Cell {
if len(cs) <= w {
return cs
}
return cs[:w]
}
// DTrimTxCls trims the overflowed text cells sequence and appends a dot (…) at the end.
func DTrimTxCls(cs []Cell, w int) []Cell {
l := len(cs)
if l <= 0 {
return []Cell{}
}
rt := make([]Cell, 0, w)
csw := 0
for i := 0; i < l && csw <= w; i++ {
c := cs[i]
cw := c.Width()
if cw+csw < w {
rt = append(rt, c)
csw += cw
} else {
rt = append(rt, Cell{'…', c.Fg, c.Bg})
break
}
}
return rt
}
func CellsToStr(cs []Cell) string {
str := ""
for _, c := range cs {
str += string(c.Ch)
}
return str
}
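A minimal sketch exercising the attribute parsing and trimming helpers above; the input strings are arbitrary examples.

```go
package main

import (
	"fmt"

	ui "github.com/airking05/termui"
)

func main() {
	// case and whitespace are ignored; attributes are OR'd together
	attr := ui.StringToAttribute("red, BOLD")
	fmt.Println(attr == (ui.ColorRed | ui.AttrBold)) // true

	// trims to the given display width and appends an ellipsis
	fmt.Println(string(ui.TrimStr2Runes("hello world", 6))) // "hello…"
}
```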

331
vendor/github.com/airking05/termui/linechart.go generated vendored Normal file
View File

@ -0,0 +1,331 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import (
"fmt"
"math"
)
// only 16 possible combinations, why bother
var braillePatterns = map[[2]int]rune{
[2]int{0, 0}: '⣀',
[2]int{0, 1}: '⡠',
[2]int{0, 2}: '⡐',
[2]int{0, 3}: '⡈',
[2]int{1, 0}: '⢄',
[2]int{1, 1}: '⠤',
[2]int{1, 2}: '⠔',
[2]int{1, 3}: '⠌',
[2]int{2, 0}: '⢂',
[2]int{2, 1}: '⠢',
[2]int{2, 2}: '⠒',
[2]int{2, 3}: '⠊',
[2]int{3, 0}: '⢁',
[2]int{3, 1}: '⠡',
[2]int{3, 2}: '⠑',
[2]int{3, 3}: '⠉',
}
var lSingleBraille = [4]rune{'\u2840', '⠄', '⠂', '⠁'}
var rSingleBraille = [4]rune{'\u2880', '⠠', '⠐', '⠈'}
// LineChart has two modes: braille (default) and dot. Braille mode gives 2x the
// capacity of dot mode, because one braille char can represent two data points.
/*
lc := termui.NewLineChart()
lc.BorderLabel = "braille-mode Line Chart"
lc.Data = [1.2, 1.3, 1.5, 1.7, 1.5, 1.6, 1.8, 2.0]
lc.Width = 50
lc.Height = 12
lc.AxesColor = termui.ColorWhite
lc.LineColor = termui.ColorGreen | termui.AttrBold
// termui.Render(lc)...
*/
type LineChart struct {
Block
Data []float64
DataLabels []string // if unset, the data indices will be used
Mode string // braille | dot
DotStyle rune
LineColor Attribute
scale float64 // data span per cell on y-axis
AxesColor Attribute
drawingX int
drawingY int
axisYHeight int
axisXWidth int
axisYLabelGap int
axisXLabelGap int
topValue float64
bottomValue float64
labelX [][]rune
labelY [][]rune
labelYSpace int
maxY float64
minY float64
autoLabels bool
}
// NewLineChart returns a new LineChart with current theme.
func NewLineChart() *LineChart {
lc := &LineChart{Block: *NewBlock()}
lc.AxesColor = ThemeAttr("linechart.axes.fg")
lc.LineColor = ThemeAttr("linechart.line.fg")
lc.Mode = "braille"
lc.DotStyle = '•'
lc.axisXLabelGap = 2
lc.axisYLabelGap = 1
lc.bottomValue = math.Inf(1)
lc.topValue = math.Inf(-1)
return lc
}
// one cell contains two data points,
// so the capacity is 2x that of dot mode
func (lc *LineChart) renderBraille() Buffer {
buf := NewBuffer()
// return: b -> which cell should the point be in
// m -> in the cell, divided into 4 equal height levels, which subcell?
getPos := func(d float64) (b, m int) {
cnt4 := int((d-lc.bottomValue)/(lc.scale/4) + 0.5)
b = cnt4 / 4
m = cnt4 % 4
return
}
// plot points
for i := 0; 2*i+1 < len(lc.Data) && i < lc.axisXWidth; i++ {
b0, m0 := getPos(lc.Data[2*i])
b1, m1 := getPos(lc.Data[2*i+1])
if b0 == b1 {
c := Cell{
Ch: braillePatterns[[2]int{m0, m1}],
Bg: lc.Bg,
Fg: lc.LineColor,
}
y := lc.innerArea.Min.Y + lc.innerArea.Dy() - 3 - b0
x := lc.innerArea.Min.X + lc.labelYSpace + 1 + i
buf.Set(x, y, c)
} else {
c0 := Cell{Ch: lSingleBraille[m0],
Fg: lc.LineColor,
Bg: lc.Bg}
x0 := lc.innerArea.Min.X + lc.labelYSpace + 1 + i
y0 := lc.innerArea.Min.Y + lc.innerArea.Dy() - 3 - b0
buf.Set(x0, y0, c0)
c1 := Cell{Ch: rSingleBraille[m1],
Fg: lc.LineColor,
Bg: lc.Bg}
x1 := lc.innerArea.Min.X + lc.labelYSpace + 1 + i
y1 := lc.innerArea.Min.Y + lc.innerArea.Dy() - 3 - b1
buf.Set(x1, y1, c1)
}
}
return buf
}
func (lc *LineChart) renderDot() Buffer {
buf := NewBuffer()
for i := 0; i < len(lc.Data) && i < lc.axisXWidth; i++ {
c := Cell{
Ch: lc.DotStyle,
Fg: lc.LineColor,
Bg: lc.Bg,
}
x := lc.innerArea.Min.X + lc.labelYSpace + 1 + i
y := lc.innerArea.Min.Y + lc.innerArea.Dy() - 3 - int((lc.Data[i]-lc.bottomValue)/lc.scale+0.5)
buf.Set(x, y, c)
}
return buf
}
func (lc *LineChart) calcLabelX() {
lc.labelX = [][]rune{}
for i, l := 0, 0; i < len(lc.DataLabels) && l < lc.axisXWidth; i++ {
if lc.Mode == "dot" {
if l >= len(lc.DataLabels) {
break
}
s := str2runes(lc.DataLabels[l])
w := strWidth(lc.DataLabels[l])
if l+w <= lc.axisXWidth {
lc.labelX = append(lc.labelX, s)
}
l += w + lc.axisXLabelGap
} else { // braille
if 2*l >= len(lc.DataLabels) {
break
}
s := str2runes(lc.DataLabels[2*l])
w := strWidth(lc.DataLabels[2*l])
if l+w <= lc.axisXWidth {
lc.labelX = append(lc.labelX, s)
}
l += w + lc.axisXLabelGap
}
}
}
func shortenFloatVal(x float64) string {
s := fmt.Sprintf("%.2f", x)
if len(s)-3 > 3 {
s = fmt.Sprintf("%.2e", x)
}
if x < 0 {
s = fmt.Sprintf("%.2f", x)
}
return s
}
func (lc *LineChart) calcLabelY() {
span := lc.topValue - lc.bottomValue
lc.scale = span / float64(lc.axisYHeight)
n := (1 + lc.axisYHeight) / (lc.axisYLabelGap + 1)
lc.labelY = make([][]rune, n)
maxLen := 0
for i := 0; i < n; i++ {
s := str2runes(shortenFloatVal(lc.bottomValue + float64(i)*span/float64(n)))
if len(s) > maxLen {
maxLen = len(s)
}
lc.labelY[i] = s
}
lc.labelYSpace = maxLen
}
func (lc *LineChart) calcLayout() {
// set datalabels if it is not provided
if (lc.DataLabels == nil || len(lc.DataLabels) == 0) || lc.autoLabels {
lc.autoLabels = true
lc.DataLabels = make([]string, len(lc.Data))
for i := range lc.Data {
lc.DataLabels[i] = fmt.Sprint(i)
}
}
// lazy increase, to avoid y shaking frequently
// update bound Y when drawing is gonna overflow
lc.minY = lc.Data[0]
lc.maxY = lc.Data[0]
// valid visible range
vrange := lc.innerArea.Dx()
if lc.Mode == "braille" {
vrange = 2 * lc.innerArea.Dx()
}
if vrange > len(lc.Data) {
vrange = len(lc.Data)
}
for _, v := range lc.Data[:vrange] {
if v > lc.maxY {
lc.maxY = v
}
if v < lc.minY {
lc.minY = v
}
}
span := lc.maxY - lc.minY
if lc.minY < lc.bottomValue {
lc.bottomValue = lc.minY - 0.2*span
}
if lc.maxY > lc.topValue {
lc.topValue = lc.maxY + 0.2*span
}
lc.axisYHeight = lc.innerArea.Dy() - 2
lc.calcLabelY()
lc.axisXWidth = lc.innerArea.Dx() - 1 - lc.labelYSpace
lc.calcLabelX()
lc.drawingX = lc.innerArea.Min.X + 1 + lc.labelYSpace
lc.drawingY = lc.innerArea.Min.Y
}
func (lc *LineChart) plotAxes() Buffer {
buf := NewBuffer()
origY := lc.innerArea.Min.Y + lc.innerArea.Dy() - 2
origX := lc.innerArea.Min.X + lc.labelYSpace
buf.Set(origX, origY, Cell{Ch: ORIGIN, Fg: lc.AxesColor, Bg: lc.Bg})
for x := origX + 1; x < origX+lc.axisXWidth; x++ {
buf.Set(x, origY, Cell{Ch: HDASH, Fg: lc.AxesColor, Bg: lc.Bg})
}
for dy := 1; dy <= lc.axisYHeight; dy++ {
buf.Set(origX, origY-dy, Cell{Ch: VDASH, Fg: lc.AxesColor, Bg: lc.Bg})
}
// x label
oft := 0
for _, rs := range lc.labelX {
if oft+len(rs) > lc.axisXWidth {
break
}
for j, r := range rs {
c := Cell{
Ch: r,
Fg: lc.AxesColor,
Bg: lc.Bg,
}
x := origX + oft + j
y := lc.innerArea.Min.Y + lc.innerArea.Dy() - 1
buf.Set(x, y, c)
}
oft += len(rs) + lc.axisXLabelGap
}
// y labels
for i, rs := range lc.labelY {
for j, r := range rs {
buf.Set(
lc.innerArea.Min.X+j,
origY-i*(lc.axisYLabelGap+1),
Cell{Ch: r, Fg: lc.AxesColor, Bg: lc.Bg})
}
}
return buf
}
// Buffer implements Bufferer interface.
func (lc *LineChart) Buffer() Buffer {
buf := lc.Block.Buffer()
if lc.Data == nil || len(lc.Data) == 0 {
return buf
}
lc.calcLayout()
buf.Merge(lc.plotAxes())
if lc.Mode == "dot" {
buf.Merge(lc.renderDot())
} else {
buf.Merge(lc.renderBraille())
}
return buf
}

11
vendor/github.com/airking05/termui/linechart_others.go generated vendored Normal file
View File

@ -0,0 +1,11 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
// +build !windows
package termui
const VDASH = '┊'
const HDASH = '┈'
const ORIGIN = '└'

11
vendor/github.com/airking05/termui/linechart_windows.go generated vendored Normal file
View File

@ -0,0 +1,11 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
// +build windows
package termui
const VDASH = '|'
const HDASH = '-'
const ORIGIN = '+'

89
vendor/github.com/airking05/termui/list.go generated vendored Normal file
View File

@ -0,0 +1,89 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import "strings"
// List displays []string as its items,
// it has a Overflow option (default is "hidden"), when set to "hidden",
// the item exceeding List's width is truncated, but when set to "wrap",
// the overflowed text breaks into next line.
/*
strs := []string{
"[0] github.com/gizak/termui",
"[1] editbox.go",
"[2] iterrupt.go",
"[3] keyboard.go",
"[4] output.go",
"[5] random_out.go",
"[6] dashboard.go",
"[7] nsf/termbox-go"}
ls := termui.NewList()
ls.Items = strs
ls.ItemFgColor = termui.ColorYellow
ls.BorderLabel = "List"
ls.Height = 7
ls.Width = 25
ls.Y = 0
*/
type List struct {
Block
Items []string
Overflow string
ItemFgColor Attribute
ItemBgColor Attribute
}
// NewList returns a new *List with current theme.
func NewList() *List {
l := &List{Block: *NewBlock()}
l.Overflow = "hidden"
l.ItemFgColor = ThemeAttr("list.item.fg")
l.ItemBgColor = ThemeAttr("list.item.bg")
return l
}
// Buffer implements Bufferer interface.
func (l *List) Buffer() Buffer {
buf := l.Block.Buffer()
switch l.Overflow {
case "wrap":
cs := DefaultTxBuilder.Build(strings.Join(l.Items, "\n"), l.ItemFgColor, l.ItemBgColor)
i, j, k := 0, 0, 0
for i < l.innerArea.Dy() && k < len(cs) {
w := cs[k].Width()
if cs[k].Ch == '\n' || j+w > l.innerArea.Dx() {
i++
j = 0
if cs[k].Ch == '\n' {
k++
}
continue
}
buf.Set(l.innerArea.Min.X+j, l.innerArea.Min.Y+i, cs[k])
k++
j++
}
case "hidden":
trimItems := l.Items
if len(trimItems) > l.innerArea.Dy() {
trimItems = trimItems[:l.innerArea.Dy()]
}
for i, v := range trimItems {
cs := DTrimTxCls(DefaultTxBuilder.Build(v, l.ItemFgColor, l.ItemBgColor), l.innerArea.Dx())
j := 0
for _, vv := range cs {
w := vv.Width()
buf.Set(l.innerArea.Min.X+j, l.innerArea.Min.Y+i, vv)
j += w
}
}
}
return buf
}

242
vendor/github.com/airking05/termui/mbarchart.go generated vendored Normal file
View File

@ -0,0 +1,242 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import (
"fmt"
)
// This is the implementation of a multi-colored (stacked) bar graph. It is different from the default bar graph implemented in bar.go.
// Multi-Colored-BarChart creates multiple bars in a widget:
/*
bc := termui.NewMBarChart()
data := make([][]int, 2)
data[0] = []int{3, 2, 5, 7, 9, 4}
data[1] = []int{7, 8, 5, 3, 1, 6}
bclabels := []string{"S0", "S1", "S2", "S3", "S4", "S5"}
bc.BorderLabel = "Bar Chart"
bc.Data = data
bc.Width = 26
bc.Height = 10
bc.DataLabels = bclabels
bc.TextColor = termui.ColorGreen
bc.BarColor = termui.ColorRed
bc.NumColor = termui.ColorYellow
*/
type MBarChart struct {
Block
BarColor [NumberofColors]Attribute
TextColor Attribute
NumColor [NumberofColors]Attribute
Data [NumberofColors][]int
DataLabels []string
BarWidth int
BarGap int
labels [][]rune
dataNum [NumberofColors][][]rune
numBar int
scale float64
max int
minDataLen int
numStack int
ShowScale bool
maxScale []rune
}
// NewMBarChart returns a new *MBarChart with current theme.
func NewMBarChart() *MBarChart {
bc := &MBarChart{Block: *NewBlock()}
bc.BarColor[0] = ThemeAttr("mbarchart.bar.bg")
bc.NumColor[0] = ThemeAttr("mbarchart.num.fg")
bc.TextColor = ThemeAttr("mbarchart.text.fg")
bc.BarGap = 1
bc.BarWidth = 3
return bc
}
func (bc *MBarChart) layout() {
bc.numBar = bc.innerArea.Dx() / (bc.BarGap + bc.BarWidth)
bc.labels = make([][]rune, bc.numBar)
DataLen := 0
LabelLen := len(bc.DataLabels)
bc.minDataLen = 9999 // Set this to a very high value so that we can find the minimum; we want to know which array among Data[][] has the least length
// We need to know how many stack/data array data[0] , data[1] are there
for i := 0; i < len(bc.Data); i++ {
if bc.Data[i] == nil {
break
}
DataLen++
}
bc.numStack = DataLen
// We need to know the minimum size of the data arrays: data[0] could have 10 elements while data[1] has only 5, so we plot only 5 bars
for i := 0; i < DataLen; i++ {
if bc.minDataLen > len(bc.Data[i]) {
bc.minDataLen = len(bc.Data[i])
}
}
if LabelLen > bc.minDataLen {
LabelLen = bc.minDataLen
}
for i := 0; i < LabelLen && i < bc.numBar; i++ {
bc.labels[i] = trimStr2Runes(bc.DataLabels[i], bc.BarWidth)
}
for i := 0; i < bc.numStack; i++ {
bc.dataNum[i] = make([][]rune, len(bc.Data[i]))
// For each stack of the bar, calculate the runes
for j := 0; j < LabelLen && i < bc.numBar; j++ {
n := bc.Data[i][j]
s := fmt.Sprint(n)
bc.dataNum[i][j] = trimStr2Runes(s, bc.BarWidth)
}
// If no color is defined, populate one that is different from the previous bar
if bc.BarColor[i] == ColorDefault && bc.NumColor[i] == ColorDefault {
if i == 0 {
bc.BarColor[i] = ColorBlack
} else {
bc.BarColor[i] = bc.BarColor[i-1] + 1
if bc.BarColor[i] > NumberofColors {
bc.BarColor[i] = ColorBlack
}
}
bc.NumColor[i] = (NumberofColors + 1) - bc.BarColor[i] //Make NumColor opposite of barColor for visibility
}
}
// If the max value is not set then we have to compute it; here the max value is max(sum(d1[0],d2[0],d3[0]), ..., sum(d1[n],d2[n],d3[n]))
if bc.max == 0 {
bc.max = -1
}
for i := 0; i < bc.minDataLen && i < LabelLen; i++ {
var dsum int
for j := 0; j < bc.numStack; j++ {
dsum += bc.Data[j][i]
}
if dsum > bc.max {
bc.max = dsum
}
}
// Finally, calculate the max scale
if bc.ShowScale {
s := fmt.Sprintf("%d", bc.max)
bc.maxScale = trimStr2Runes(s, len(s))
bc.scale = float64(bc.max) / float64(bc.innerArea.Dy()-2)
} else {
bc.scale = float64(bc.max) / float64(bc.innerArea.Dy()-1)
}
}
// SetMax overrides the computed maximum when max is positive.
func (bc *MBarChart) SetMax(max int) {
if max > 0 {
bc.max = max
}
}
// Buffer implements Bufferer interface.
func (bc *MBarChart) Buffer() Buffer {
buf := bc.Block.Buffer()
bc.layout()
var oftX int
for i := 0; i < bc.numBar && i < bc.minDataLen && i < len(bc.DataLabels); i++ {
ph := 0 //Previous Height to stack up
oftX = i * (bc.BarWidth + bc.BarGap)
for i1 := 0; i1 < bc.numStack; i1++ {
h := int(float64(bc.Data[i1][i]) / bc.scale)
// plot bars
for j := 0; j < bc.BarWidth; j++ {
for k := 0; k < h; k++ {
c := Cell{
Ch: ' ',
Bg: bc.BarColor[i1],
}
if bc.BarColor[i1] == ColorDefault { // when color is default, space char treated as transparent!
c.Bg |= AttrReverse
}
x := bc.innerArea.Min.X + i*(bc.BarWidth+bc.BarGap) + j
y := bc.innerArea.Min.Y + bc.innerArea.Dy() - 2 - k - ph
buf.Set(x, y, c)
}
}
ph += h
}
// plot text
for j, k := 0, 0; j < len(bc.labels[i]); j++ {
w := charWidth(bc.labels[i][j])
c := Cell{
Ch: bc.labels[i][j],
Bg: bc.Bg,
Fg: bc.TextColor,
}
y := bc.innerArea.Min.Y + bc.innerArea.Dy() - 1
x := bc.innerArea.Min.X + oftX + ((bc.BarWidth - len(bc.labels[i])) / 2) + k
buf.Set(x, y, c)
k += w
}
// plot num
ph = 0 //re-initialize previous height
for i1 := 0; i1 < bc.numStack; i1++ {
h := int(float64(bc.Data[i1][i]) / bc.scale)
for j := 0; j < len(bc.dataNum[i1][i]) && h > 0; j++ {
c := Cell{
Ch: bc.dataNum[i1][i][j],
Fg: bc.NumColor[i1],
Bg: bc.BarColor[i1],
}
if bc.BarColor[i1] == ColorDefault { // the same as above
c.Bg |= AttrReverse
}
if h == 0 {
c.Bg = bc.Bg
}
x := bc.innerArea.Min.X + oftX + (bc.BarWidth-len(bc.dataNum[i1][i]))/2 + j
y := bc.innerArea.Min.Y + bc.innerArea.Dy() - 2 - ph
buf.Set(x, y, c)
}
ph += h
}
}
if bc.ShowScale {
// Currently the bar graph only supports data in the range from 0 to max
//Plot 0
c := Cell{
Ch: '0',
Bg: bc.Bg,
Fg: bc.TextColor,
}
y := bc.innerArea.Min.Y + bc.innerArea.Dy() - 2
x := bc.X
buf.Set(x, y, c)
// Plot the maximum scale value
for i := 0; i < len(bc.maxScale); i++ {
c := Cell{
Ch: bc.maxScale[i],
Bg: bc.Bg,
Fg: bc.TextColor,
}
y := bc.innerArea.Min.Y
x := bc.X + i
buf.Set(x, y, c)
}
}
return buf
}

28
vendor/github.com/airking05/termui/mkdocs.yml generated vendored Normal file
View File

@ -0,0 +1,28 @@
pages:
- Home: 'index.md'
- Quickstart: 'quickstart.md'
- Recipes: 'recipes.md'
- References:
- Layouts: 'layouts.md'
- Components: 'components.md'
- Events: 'events.md'
- Themes: 'themes.md'
- Versions: 'versions.md'
- About: 'about.md'
site_name: termui
repo_url: https://github.com/gizak/termui/
site_description: 'termui user guide'
site_author: gizak
docs_dir: '_docs'
theme: readthedocs
markdown_extensions:
- smarty
- admonition
- toc
extra:
version: 1.0

73
vendor/github.com/airking05/termui/par.go generated vendored Normal file
View File

@ -0,0 +1,73 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
// Par displays a paragraph.
/*
par := termui.NewPar("Simple Text")
par.Height = 3
par.Width = 17
par.BorderLabel = "Label"
*/
type Par struct {
Block
Text string
TextFgColor Attribute
TextBgColor Attribute
WrapLength int // word wrap limit. Note it may not work properly with multi-width chars
}
// NewPar returns a new *Par with given text as its content.
func NewPar(s string) *Par {
return &Par{
Block: *NewBlock(),
Text: s,
TextFgColor: ThemeAttr("par.text.fg"),
TextBgColor: ThemeAttr("par.text.bg"),
WrapLength: 0,
}
}
// Buffer implements Bufferer interface.
func (p *Par) Buffer() Buffer {
buf := p.Block.Buffer()
fg, bg := p.TextFgColor, p.TextBgColor
cs := DefaultTxBuilder.Build(p.Text, fg, bg)
// wrap if WrapLength is set; a negative value wraps to the widget width
if p.WrapLength < 0 {
cs = wrapTx(cs, p.Width-2)
} else if p.WrapLength > 0 {
cs = wrapTx(cs, p.WrapLength)
}
y, x, n := 0, 0, 0
for y < p.innerArea.Dy() && n < len(cs) {
w := cs[n].Width()
if cs[n].Ch == '\n' || x+w > p.innerArea.Dx() {
y++
x = 0
if cs[n].Ch == '\n' {
n++
}
if y >= p.innerArea.Dy() {
buf.Set(p.innerArea.Min.X+p.innerArea.Dx()-1,
p.innerArea.Min.Y+p.innerArea.Dy()-1,
Cell{Ch: '…', Fg: p.TextFgColor, Bg: p.TextBgColor})
break
}
continue
}
buf.Set(p.innerArea.Min.X+x, p.innerArea.Min.Y+y, cs[n])
n++
x += w
}
return buf
}

78
vendor/github.com/airking05/termui/pos.go generated vendored Normal file
View File

@ -0,0 +1,78 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import "image"
// Align describes the position of an element within a rectangle (for example, a gauge's label).
type Align uint
// All supported positions.
const (
AlignNone Align = 0
AlignLeft Align = 1 << iota
AlignRight
AlignBottom
AlignTop
AlignCenterVertical
AlignCenterHorizontal
AlignCenter = AlignCenterVertical | AlignCenterHorizontal
)
// AlignArea returns the child rectangle aligned within the parent according to a.
func AlignArea(parent, child image.Rectangle, a Align) image.Rectangle {
w, h := child.Dx(), child.Dy()
// parent center
pcx, pcy := parent.Min.X+parent.Dx()/2, parent.Min.Y+parent.Dy()/2
// child center
ccx, ccy := child.Min.X+child.Dx()/2, child.Min.Y+child.Dy()/2
if a&AlignLeft == AlignLeft {
child.Min.X = parent.Min.X
child.Max.X = child.Min.X + w
}
if a&AlignRight == AlignRight {
child.Max.X = parent.Max.X
child.Min.X = child.Max.X - w
}
if a&AlignBottom == AlignBottom {
child.Max.Y = parent.Max.Y
child.Min.Y = child.Max.Y - h
}
if a&AlignTop == AlignTop {
child.Min.Y = parent.Min.Y
child.Max.Y = child.Min.Y + h
}
if a&AlignCenterHorizontal == AlignCenterHorizontal {
child.Min.X += pcx - ccx
child.Max.X = child.Min.X + w
}
if a&AlignCenterVertical == AlignCenterVertical {
child.Min.Y += pcy - ccy
child.Max.Y = child.Min.Y + h
}
return child
}
// MoveArea returns the rectangle a shifted by (dx, dy).
func MoveArea(a image.Rectangle, dx, dy int) image.Rectangle {
a.Min.X += dx
a.Max.X += dx
a.Min.Y += dy
a.Max.Y += dy
return a
}
var termWidth int
var termHeight int
func TermRect() image.Rectangle {
return image.Rect(0, 0, termWidth, termHeight)
}

164
vendor/github.com/airking05/termui/render.go generated vendored Normal file
View File

@ -0,0 +1,164 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import (
"image"
"io"
"sync"
"time"
"fmt"
"os"
"runtime/debug"
"bytes"
"github.com/maruel/panicparse/stack"
tm "github.com/nsf/termbox-go"
)
// Bufferer should be implemented by all renderable components.
type Bufferer interface {
Buffer() Buffer
}
// Init initializes termui library. This function should be called before any others.
// After initialization, the library must be finalized by 'Close' function.
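//
// A minimal usage sketch (illustrative only, not part of the original source;
// it assumes the Par widget defined in par.go of this package):
//
//	if err := termui.Init(); err != nil {
//		panic(err)
//	}
//	defer termui.Close()
//	p := termui.NewPar("Hello, termui")
//	p.Width, p.Height = 20, 3
//	termui.Render(p)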
func Init() error {
if err := tm.Init(); err != nil {
return err
}
sysEvtChs = make([]chan Event, 0)
go hookTermboxEvt()
renderJobs = make(chan []Bufferer)
//renderLock = new(sync.RWMutex)
Body = NewGrid()
Body.X = 0
Body.Y = 0
Body.BgColor = ThemeAttr("bg")
Body.Width = TermWidth()
DefaultEvtStream.Init()
DefaultEvtStream.Merge("termbox", NewSysEvtCh())
DefaultEvtStream.Merge("timer", NewTimerCh(time.Second))
DefaultEvtStream.Merge("custom", usrEvtCh)
DefaultEvtStream.Handle("/", DefualtHandler)
DefaultEvtStream.Handle("/sys/wnd/resize", func(e Event) {
w := e.Data.(EvtWnd)
Body.Width = w.Width
})
DefaultWgtMgr = NewWgtMgr()
DefaultEvtStream.Hook(DefaultWgtMgr.WgtHandlersHook())
go func() {
for bs := range renderJobs {
render(bs...)
}
}()
return nil
}
// Close finalizes termui library,
// should be called after successful initialization when termui's functionality isn't required anymore.
func Close() {
tm.Close()
}
var renderLock sync.Mutex
func termSync() {
renderLock.Lock()
tm.Sync()
termWidth, termHeight = tm.Size()
renderLock.Unlock()
}
// TermWidth returns the current terminal's width.
func TermWidth() int {
termSync()
return termWidth
}
// TermHeight returns the current terminal's height.
func TermHeight() int {
termSync()
return termHeight
}
// render renders all Bufferers in the given order from left to right;
// later ones may overlap earlier ones.
func render(bs ...Bufferer) {
defer func() {
if e := recover(); e != nil {
Close()
fmt.Fprintf(os.Stderr, "Captured a panic(value=%v) when rendering Bufferer. Exit termui and clean terminal...\nPrint stack trace:\n\n", e)
//debug.PrintStack()
gs, err := stack.ParseDump(bytes.NewReader(debug.Stack()), os.Stderr)
if err != nil {
debug.PrintStack()
os.Exit(1)
}
p := &stack.Palette{}
buckets := stack.SortBuckets(stack.Bucketize(gs, stack.AnyValue))
srcLen, pkgLen := stack.CalcLengths(buckets, false)
for _, bucket := range buckets {
io.WriteString(os.Stdout, p.BucketHeader(&bucket, false, len(buckets) > 1))
io.WriteString(os.Stdout, p.StackLines(&bucket.Signature, srcLen, pkgLen, false))
}
os.Exit(1)
}
}()
for _, b := range bs {
buf := b.Buffer()
// set cells in buf
for p, c := range buf.CellMap {
if p.In(buf.Area) {
tm.SetCell(p.X, p.Y, c.Ch, toTmAttr(c.Fg), toTmAttr(c.Bg))
}
}
}
renderLock.Lock()
// render
tm.Flush()
renderLock.Unlock()
}
// Clear clears the terminal using the theme background color.
func Clear() {
tm.Clear(tm.ColorDefault, toTmAttr(ThemeAttr("bg")))
}
func clearArea(r image.Rectangle, bg Attribute) {
for i := r.Min.X; i < r.Max.X; i++ {
for j := r.Min.Y; j < r.Max.Y; j++ {
tm.SetCell(i, j, ' ', tm.ColorDefault, toTmAttr(bg))
}
}
}
func ClearArea(r image.Rectangle, bg Attribute) {
clearArea(r, bg)
tm.Flush()
}
var renderJobs chan []Bufferer
// Render queues the given Bufferers to be drawn by the render goroutine started in Init.
func Render(bs ...Bufferer) {
//go func() { renderJobs <- bs }()
renderJobs <- bs
}

167
vendor/github.com/airking05/termui/sparkline.go generated vendored Normal file
View File

@ -0,0 +1,167 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
// Sparkline is like: ▅▆▂▂▅▇▂▂▃▆▆▆▅▃. The data points should be non-negative integers.
/*
data := []int{4, 2, 1, 6, 3, 9, 1, 4, 2, 15, 14, 9, 8, 6, 10, 13, 15, 12, 10, 5, 3, 6, 1}
spl := termui.NewSparkline()
spl.Data = data
spl.Title = "Sparkline 0"
spl.LineColor = termui.ColorGreen
*/
type Sparkline struct {
Data []int
Height int
Title string
TitleColor Attribute
LineColor Attribute
displayHeight int
scale float32
max int
}
// Sparklines is a renderable widget which groups together the given sparklines.
/*
spls := termui.NewSparklines(spl0,spl1,spl2) //...
spls.Height = 2
spls.Width = 20
*/
type Sparklines struct {
Block
Lines []Sparkline
displayLines int
displayWidth int
}
var sparks = []rune{'▁', '▂', '▃', '▄', '▅', '▆', '▇', '█'}
// Add appends a given Sparkline to s *Sparklines.
func (s *Sparklines) Add(sl Sparkline) {
s.Lines = append(s.Lines, sl)
}
// NewSparkline returns an unrenderable single sparkline that is intended to be added into a Sparklines.
func NewSparkline() Sparkline {
return Sparkline{
Height: 1,
TitleColor: ThemeAttr("sparkline.title.fg"),
LineColor: ThemeAttr("sparkline.line.fg")}
}
// NewSparklines returns a new *Sparklines with the given Sparkline(s); you can always add more later.
func NewSparklines(ss ...Sparkline) *Sparklines {
s := &Sparklines{Block: *NewBlock(), Lines: ss}
return s
}
func (sl *Sparklines) update() {
for i, v := range sl.Lines {
if v.Title == "" {
sl.Lines[i].displayHeight = v.Height
} else {
sl.Lines[i].displayHeight = v.Height + 1
}
}
sl.displayWidth = sl.innerArea.Dx()
// determine how many lines to display
h := 0
sl.displayLines = 0
for _, v := range sl.Lines {
if h+v.displayHeight <= sl.innerArea.Dy() {
sl.displayLines++
} else {
break
}
h += v.displayHeight
}
for i := 0; i < sl.displayLines; i++ {
data := sl.Lines[i].Data
max := 0
for _, v := range data {
if max < v {
max = v
}
}
sl.Lines[i].max = max
if max != 0 {
sl.Lines[i].scale = float32(8*sl.Lines[i].Height) / float32(max)
} else { // when max is zero (all data points are zero or negative)
sl.Lines[i].scale = 0
}
}
}
// Buffer implements Bufferer interface.
func (sl *Sparklines) Buffer() Buffer {
buf := sl.Block.Buffer()
sl.update()
oftY := 0
for i := 0; i < sl.displayLines; i++ {
l := sl.Lines[i]
data := l.Data
if len(data) > sl.innerArea.Dx() {
data = data[len(data)-sl.innerArea.Dx():]
}
if l.Title != "" {
rs := trimStr2Runes(l.Title, sl.innerArea.Dx())
oftX := 0
for _, v := range rs {
w := charWidth(v)
c := Cell{
Ch: v,
Fg: l.TitleColor,
Bg: sl.Bg,
}
x := sl.innerArea.Min.X + oftX
y := sl.innerArea.Min.Y + oftY
buf.Set(x, y, c)
oftX += w
}
}
for j, v := range data {
// display height of the data point, zero when data is negative
h := int(float32(v)*l.scale + 0.5)
if v < 0 {
h = 0
}
barCnt := h / 8
barMod := h % 8
for jj := 0; jj < barCnt; jj++ {
c := Cell{
Ch: ' ', // => sparks[7]
Bg: l.LineColor,
}
x := sl.innerArea.Min.X + j
y := sl.innerArea.Min.Y + oftY + l.Height - jj
//p.Bg = sl.BgColor
buf.Set(x, y, c)
}
if barMod != 0 {
c := Cell{
Ch: sparks[barMod-1],
Fg: l.LineColor,
Bg: sl.Bg,
}
x := sl.innerArea.Min.X + j
y := sl.innerArea.Min.Y + oftY + l.Height - barCnt
buf.Set(x, y, c)
}
}
oftY += l.displayHeight
}
return buf
}

185
vendor/github.com/airking05/termui/table.go generated vendored Normal file
View File

@ -0,0 +1,185 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import "strings"
/* Table is like:
Awesome Table
Col0 | Col1 | Col2 | Col3 | Col4 | Col5 | Col6 |
Some Item #1 | AAA | 123 | CCCCC | EEEEE | GGGGG | IIIII |
Some Item #2 | BBB | 456 | DDDDD | FFFFF | HHHHH | JJJJJ |
Datapoints are a two dimensional array of strings: [][]string
Example:
data := [][]string{
{"Col0", "Col1", "Col2", "Col3", "Col4", "Col5", "Col6"},
{"Some Item #1", "AAA", "123", "CCCCC", "EEEEE", "GGGGG", "IIIII"},
{"Some Item #2", "BBB", "456", "DDDDD", "FFFFF", "HHHHH", "JJJJJ"},
}
table := termui.NewTable()
table.Rows = data // type [][]string
table.FgColor = termui.ColorWhite
table.BgColor = termui.ColorDefault
table.Height = 7
table.Width = 62
table.Y = 0
table.X = 0
table.Border = true
*/
// Table tracks all the attributes of a Table instance
type Table struct {
Block
Rows [][]string
CellWidth []int
FgColor Attribute
BgColor Attribute
FgColors []Attribute
BgColors []Attribute
Separator bool
TextAlign Align
}
// NewTable returns a new Table instance
func NewTable() *Table {
table := &Table{Block: *NewBlock()}
table.FgColor = ColorWhite
table.BgColor = ColorDefault
table.Separator = true
return table
}
// cellsWidth calculates the total width of a slice of Cells.
func cellsWidth(cells []Cell) int {
width := 0
for _, c := range cells {
width += c.Width()
}
return width
}
// Analysis generates and returns an array of []Cell that represent all columns in the Table
func (table *Table) Analysis() [][]Cell {
var rowCells [][]Cell
length := len(table.Rows)
if length < 1 {
return rowCells
}
if len(table.FgColors) == 0 {
table.FgColors = make([]Attribute, len(table.Rows))
}
if len(table.BgColors) == 0 {
table.BgColors = make([]Attribute, len(table.Rows))
}
cellWidths := make([]int, len(table.Rows[0]))
for y, row := range table.Rows {
if table.FgColors[y] == 0 {
table.FgColors[y] = table.FgColor
}
if table.BgColors[y] == 0 {
table.BgColors[y] = table.BgColor
}
for x, str := range row {
cells := DefaultTxBuilder.Build(str, table.FgColors[y], table.BgColors[y])
cw := cellsWidth(cells)
if cellWidths[x] < cw {
cellWidths[x] = cw
}
rowCells = append(rowCells, cells)
}
}
table.CellWidth = cellWidths
return rowCells
}
// SetSize calculates the table size and sets the internal value
func (table *Table) SetSize() {
length := len(table.Rows)
if table.Separator {
table.Height = length*2 + 1
} else {
table.Height = length + 2
}
table.Width = 2
if length != 0 {
for _, cellWidth := range table.CellWidth {
table.Width += cellWidth + 3
}
}
}
// CalculatePosition computes the drawing coordinates and the cell start position for the cell at column x, row y.
func (table *Table) CalculatePosition(x int, y int, coordinateX *int, coordinateY *int, cellStart *int) {
if table.Separator {
*coordinateY = table.innerArea.Min.Y + y*2
} else {
*coordinateY = table.innerArea.Min.Y + y
}
if x == 0 {
*cellStart = table.innerArea.Min.X
} else {
*cellStart += table.CellWidth[x-1] + 3
}
switch table.TextAlign {
case AlignRight:
*coordinateX = *cellStart + (table.CellWidth[x] - len(table.Rows[y][x])) + 2
case AlignCenter:
*coordinateX = *cellStart + (table.CellWidth[x]-len(table.Rows[y][x]))/2 + 2
default:
*coordinateX = *cellStart + 2
}
}
// Buffer implements Bufferer interface.
func (table *Table) Buffer() Buffer {
buffer := table.Block.Buffer()
rowCells := table.Analysis()
pointerX := table.innerArea.Min.X + 2
pointerY := table.innerArea.Min.Y
borderPointerX := table.innerArea.Min.X
for y, row := range table.Rows {
for x := range row {
table.CalculatePosition(x, y, &pointerX, &pointerY, &borderPointerX)
background := DefaultTxBuilder.Build(strings.Repeat(" ", table.CellWidth[x]+3), table.BgColors[y], table.BgColors[y])
cells := rowCells[y*len(row)+x]
for i, back := range background {
buffer.Set(borderPointerX+i, pointerY, back)
}
coordinateX := pointerX
for _, printer := range cells {
buffer.Set(coordinateX, pointerY, printer)
coordinateX += printer.Width()
}
if x != 0 {
dividors := DefaultTxBuilder.Build("|", table.FgColors[y], table.BgColors[y])
for _, dividor := range dividors {
buffer.Set(borderPointerX, pointerY, dividor)
}
}
}
if table.Separator {
border := DefaultTxBuilder.Build(strings.Repeat("─", table.Width-2), table.FgColor, table.BgColor)
for i, cell := range border {
buffer.Set(i+1, pointerY+1, cell)
}
}
}
return buffer
}

278
vendor/github.com/airking05/termui/textbuilder.go generated vendored Normal file
View File

@ -0,0 +1,278 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import (
"regexp"
"strings"
"github.com/mitchellh/go-wordwrap"
)
// TextBuilder is a minimal interface for producing []Cell text using a specific syntax (markdown).
type TextBuilder interface {
Build(s string, fg, bg Attribute) []Cell
}
// DefaultTxBuilder is set to be MarkdownTxBuilder.
var DefaultTxBuilder = NewMarkdownTxBuilder()
// MarkdownTxBuilder implements TextBuilder interface, using markdown syntax.
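//
// A small illustrative sketch (not part of the original source): markup such
// as "[text](fg-red,bg-white,fg-bold)" recolors a span, e.g.
//
//	cells := DefaultTxBuilder.Build("[hello](fg-red,fg-bold) world", ColorWhite, ColorDefault)
//	// "hello" is rendered red and bold; " world" keeps the base attributes.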
type MarkdownTxBuilder struct {
baseFg Attribute
baseBg Attribute
plainTx []rune
markers []marker
}
type marker struct {
st int
ed int
fg Attribute
bg Attribute
}
var colorMap = map[string]Attribute{
"red": ColorRed,
"blue": ColorBlue,
"black": ColorBlack,
"cyan": ColorCyan,
"yellow": ColorYellow,
"white": ColorWhite,
"default": ColorDefault,
"green": ColorGreen,
"magenta": ColorMagenta,
}
var attrMap = map[string]Attribute{
"bold": AttrBold,
"underline": AttrUnderline,
"reverse": AttrReverse,
}
func rmSpc(s string) string {
reg := regexp.MustCompile(`\s+`)
return reg.ReplaceAllString(s, "")
}
// readAttr translates strings like `fg-red,fg-bold,bg-white` to fg and bg Attribute
func (mtb MarkdownTxBuilder) readAttr(s string) (Attribute, Attribute) {
fg := mtb.baseFg
bg := mtb.baseBg
updateAttr := func(a Attribute, attrs []string) Attribute {
for _, s := range attrs {
// replace the color
if c, ok := colorMap[s]; ok {
a &= 0xFF00 // erase clr 0 ~ 8 bits
a |= c // set clr
}
// add attrs
if c, ok := attrMap[s]; ok {
a |= c
}
}
return a
}
ss := strings.Split(s, ",")
fgs := []string{}
bgs := []string{}
for _, v := range ss {
subs := strings.Split(v, "-")
if len(subs) > 1 {
if subs[0] == "fg" {
fgs = append(fgs, subs[1])
}
if subs[0] == "bg" {
bgs = append(bgs, subs[1])
}
}
}
fg = updateAttr(fg, fgs)
bg = updateAttr(bg, bgs)
return fg, bg
}
func (mtb *MarkdownTxBuilder) reset() {
mtb.plainTx = []rune{}
mtb.markers = []marker{}
}
// parse scans the string and splits it into normalized text and a render sequence.
func (mtb *MarkdownTxBuilder) parse(str string) {
rs := str2runes(str)
normTx := []rune{}
square := []rune{}
brackt := []rune{}
accSquare := false
accBrackt := false
cntSquare := 0
reset := func() {
square = []rune{}
brackt = []rune{}
accSquare = false
accBrackt = false
cntSquare = 0
}
// pipe stacks into normTx and clear
rollback := func() {
normTx = append(normTx, square...)
normTx = append(normTx, brackt...)
reset()
}
// chop first and last
chop := func(s []rune) []rune {
return s[1 : len(s)-1]
}
for i, r := range rs {
switch {
// stacking brackt
case accBrackt:
brackt = append(brackt, r)
if ')' == r {
fg, bg := mtb.readAttr(string(chop(brackt)))
st := len(normTx)
ed := len(normTx) + len(square) - 2
mtb.markers = append(mtb.markers, marker{st, ed, fg, bg})
normTx = append(normTx, chop(square)...)
reset()
} else if i+1 == len(rs) {
rollback()
}
// stacking square
case accSquare:
switch {
// squares closed and followed by a '('
case cntSquare == 0 && '(' == r:
accBrackt = true
brackt = append(brackt, '(')
// squares closed but not followed by a '('
case cntSquare == 0:
rollback()
if '[' == r {
accSquare = true
cntSquare = 1
brackt = append(brackt, '[')
} else {
normTx = append(normTx, r)
}
// hit the end
case i+1 == len(rs):
square = append(square, r)
rollback()
case '[' == r:
cntSquare++
square = append(square, '[')
case ']' == r:
cntSquare--
square = append(square, ']')
// normal char
default:
square = append(square, r)
}
// stacking normTx
default:
if '[' == r {
accSquare = true
cntSquare = 1
square = append(square, '[')
} else {
normTx = append(normTx, r)
}
}
}
mtb.plainTx = normTx
}
func wrapTx(cs []Cell, wl int) []Cell {
tmpCell := make([]Cell, len(cs))
copy(tmpCell, cs)
// get the plaintext
plain := CellsToStr(cs)
// wrap
plainWrapped := wordwrap.WrapString(plain, uint(wl))
// find differences and insert
finalCell := tmpCell // finalcell will get the inserts and is what is returned
plainRune := []rune(plain)
plainWrappedRune := []rune(plainWrapped)
trigger := "go"
plainRuneNew := plainRune
for trigger != "stop" {
plainRune = plainRuneNew
for i := range plainRune {
if plainRune[i] == plainWrappedRune[i] {
trigger = "stop"
} else if plainRune[i] != plainWrappedRune[i] && plainWrappedRune[i] == 10 {
trigger = "go"
cell := Cell{10, 0, 0}
j := i - 0
// insert a cell into the []Cell in correct position
tmpCell[i] = cell
// insert the newline into plain so we avoid indexing errors
plainRuneNew = append(plainRune, 10)
copy(plainRuneNew[j+1:], plainRuneNew[j:])
plainRuneNew[j] = plainWrappedRune[j]
// restart the inner for loop until plain and plain wrapped are
// the same; yeah, it's inefficient, but the text amounts
// should be small
break
} else if plainRune[i] != plainWrappedRune[i] &&
plainWrappedRune[i-1] == 10 && // if the prior rune is a newline
plainRune[i] == 32 { // and this rune is a space
trigger = "go"
// need to delete plainRune[i] because it gets rid of an extra
// space
plainRuneNew = append(plainRune[:i], plainRune[i+1:]...)
break
} else {
trigger = "stop" // stops the outer for loop
}
}
}
finalCell = tmpCell
return finalCell
}
// Build implements TextBuilder interface.
func (mtb MarkdownTxBuilder) Build(s string, fg, bg Attribute) []Cell {
mtb.baseFg = fg
mtb.baseBg = bg
mtb.reset()
mtb.parse(s)
cs := make([]Cell, len(mtb.plainTx))
for i := range cs {
cs[i] = Cell{Ch: mtb.plainTx[i], Fg: fg, Bg: bg}
}
for _, mrk := range mtb.markers {
for i := mrk.st; i < mrk.ed; i++ {
cs[i].Fg = mrk.fg
cs[i].Bg = mrk.bg
}
}
return cs
}
// NewMarkdownTxBuilder returns a TextBuilder employing markdown syntax.
func NewMarkdownTxBuilder() TextBuilder {
return MarkdownTxBuilder{}
}

140
vendor/github.com/airking05/termui/theme.go generated vendored Normal file
View File

@ -0,0 +1,140 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import "strings"
/*
// A ColorScheme represents the current look-and-feel of the dashboard.
type ColorScheme struct {
BodyBg Attribute
BlockBg Attribute
HasBorder bool
BorderFg Attribute
BorderBg Attribute
BorderLabelTextFg Attribute
BorderLabelTextBg Attribute
ParTextFg Attribute
ParTextBg Attribute
SparklineLine Attribute
SparklineTitle Attribute
GaugeBar Attribute
GaugePercent Attribute
LineChartLine Attribute
LineChartAxes Attribute
ListItemFg Attribute
ListItemBg Attribute
BarChartBar Attribute
BarChartText Attribute
BarChartNum Attribute
MBarChartBar Attribute
MBarChartText Attribute
MBarChartNum Attribute
TabActiveBg Attribute
}
// default color scheme depends on the user's terminal setting.
var themeDefault = ColorScheme{HasBorder: true}
var themeHelloWorld = ColorScheme{
BodyBg: ColorBlack,
BlockBg: ColorBlack,
HasBorder: true,
BorderFg: ColorWhite,
BorderBg: ColorBlack,
BorderLabelTextBg: ColorBlack,
BorderLabelTextFg: ColorGreen,
ParTextBg: ColorBlack,
ParTextFg: ColorWhite,
SparklineLine: ColorMagenta,
SparklineTitle: ColorWhite,
GaugeBar: ColorRed,
GaugePercent: ColorWhite,
LineChartLine: ColorYellow | AttrBold,
LineChartAxes: ColorWhite,
ListItemBg: ColorBlack,
ListItemFg: ColorYellow,
BarChartBar: ColorRed,
BarChartNum: ColorWhite,
BarChartText: ColorCyan,
MBarChartBar: ColorRed,
MBarChartNum: ColorWhite,
MBarChartText: ColorCyan,
TabActiveBg: ColorMagenta,
}
var theme = themeDefault // global dep
// Theme returns the currently used theme.
func Theme() ColorScheme {
return theme
}
// SetTheme sets a new, custom theme.
func SetTheme(newTheme ColorScheme) {
theme = newTheme
}
// UseTheme sets a predefined scheme. Currently available: "hello-world" and
// "black-and-white".
func UseTheme(th string) {
switch th {
case "helloworld":
theme = themeHelloWorld
default:
theme = themeDefault
}
}
*/
// ColorMap maps theme item names to their default Attributes.
var ColorMap = map[string]Attribute{
"fg": ColorWhite,
"bg": ColorDefault,
"border.fg": ColorWhite,
"label.fg": ColorGreen,
"par.fg": ColorYellow,
"par.label.bg": ColorWhite,
}
// ThemeAttr looks up the Attribute registered under name in ColorMap, falling back to progressively shorter suffixes of the dotted name.
func ThemeAttr(name string) Attribute {
return lookUpAttr(ColorMap, name)
}
func lookUpAttr(clrmap map[string]Attribute, name string) Attribute {
a, ok := clrmap[name]
if ok {
return a
}
ns := strings.Split(name, ".")
for i := range ns {
nn := strings.Join(ns[i:len(ns)], ".")
a, ok = ColorMap[nn]
if ok {
break
}
}
return a
}
// ColorRGB converts an (r, g, b) triple, with each component clamped to the range [0, 5], into a terminal color Attribute.
func ColorRGB(r, g, b int) Attribute {
within := func(n int) int {
if n < 0 {
return 0
}
if n > 5 {
return 5
}
return n
}
r, b, g = within(r), within(b), within(g)
return Attribute(0x0f + 36*r + 6*g + b)
}

94
vendor/github.com/airking05/termui/widget.go generated vendored Normal file
View File

@ -0,0 +1,94 @@
// Copyright 2017 Zack Guo <zack.y.guo@gmail.com>. All rights reserved.
// Use of this source code is governed by a MIT license that can
// be found in the LICENSE file.
package termui
import (
"fmt"
"sync"
)
// event mixins
type WgtMgr map[string]WgtInfo
type WgtInfo struct {
Handlers map[string]func(Event)
WgtRef Widget
Id string
}
type Widget interface {
Id() string
}
func NewWgtInfo(wgt Widget) WgtInfo {
return WgtInfo{
Handlers: make(map[string]func(Event)),
WgtRef: wgt,
Id: wgt.Id(),
}
}
func NewWgtMgr() WgtMgr {
wm := WgtMgr(make(map[string]WgtInfo))
return wm
}
func (wm WgtMgr) AddWgt(wgt Widget) {
wm[wgt.Id()] = NewWgtInfo(wgt)
}
func (wm WgtMgr) RmWgt(wgt Widget) {
wm.RmWgtById(wgt.Id())
}
func (wm WgtMgr) RmWgtById(id string) {
delete(wm, id)
}
func (wm WgtMgr) AddWgtHandler(id, path string, h func(Event)) {
if w, ok := wm[id]; ok {
w.Handlers[path] = h
}
}
func (wm WgtMgr) RmWgtHandler(id, path string) {
if w, ok := wm[id]; ok {
delete(w.Handlers, path)
}
}
var counter struct {
sync.RWMutex
count int
}
func GenId() string {
counter.Lock()
defer counter.Unlock()
counter.count += 1
return fmt.Sprintf("%d", counter.count)
}
func (wm WgtMgr) WgtHandlersHook() func(Event) {
return func(e Event) {
for _, v := range wm {
if k := findMatch(v.Handlers, e.Path); k != "" {
v.Handlers[k](e)
}
}
}
}
var DefaultWgtMgr WgtMgr
func (b *Block) Handle(path string, handler func(Event)) {
if _, ok := DefaultWgtMgr[b.Id()]; !ok {
DefaultWgtMgr.AddWgt(b)
}
DefaultWgtMgr.AddWgtHandler(b.Id(), path, handler)
}

29
vendor/github.com/bmatcuk/doublestar/.gitignore generated vendored Normal file
View File

@ -0,0 +1,29 @@
# vi
*~
*.swp
*.swo
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof

17
vendor/github.com/bmatcuk/doublestar/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,17 @@
language: go
go:
- 1.3
- 1.4
- 1.5
- 1.6
before_install:
- go get -t -v ./...
script:
- go test -race -coverprofile=coverage.txt -covermode=atomic
after_success:
- bash <(curl -s https://codecov.io/bash)

22
vendor/github.com/bmatcuk/doublestar/LICENSE generated vendored Normal file
View File

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2014 Bob Matcuk
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

109
vendor/github.com/bmatcuk/doublestar/README.md generated vendored Normal file
View File

@ -0,0 +1,109 @@
![Release](https://img.shields.io/github/release/bmatcuk/doublestar.svg?branch=master)
[![Build Status](https://travis-ci.org/bmatcuk/doublestar.svg?branch=master)](https://travis-ci.org/bmatcuk/doublestar)
[![codecov.io](https://img.shields.io/codecov/c/github/bmatcuk/doublestar.svg?branch=master)](https://codecov.io/github/bmatcuk/doublestar?branch=master)
# doublestar
**doublestar** is a [golang](http://golang.org/) implementation of path pattern
matching and globbing with support for "doublestar" (aka globstar: `**`)
patterns.
doublestar patterns match files and directories recursively. For example, if
you had the following directory structure:
```
grandparent
`-- parent
|-- child1
`-- child2
```
You could find the children with patterns such as: `**/child*`,
`grandparent/**/child?`, `**/parent/*`, or even just `**` by itself (which will
return all files and directories recursively).
Bash's globstar is doublestar's inspiration and, as such, works similarly.
Note that the doublestar must appear as a path component by itself. A pattern
such as `/path**` is invalid and will be treated the same as `/path*`, but
`/path*/**` should achieve the desired result. Additionally, `/path/**` will
match all directories and files under the path directory, but `/path/**/` will
only match directories.
## Installation
**doublestar** can be installed via `go get`:
```bash
go get github.com/bmatcuk/doublestar
```
To use it in your code, you must import it:
```go
import "github.com/bmatcuk/doublestar"
```
## Functions
### Match
```go
func Match(pattern, name string) (bool, error)
```
Match returns true if `name` matches the file name `pattern`
([see below](#patterns)). `name` and `pattern` are split on forward slash (`/`)
characters and may be relative or absolute.
Note: `Match()` is meant to be a drop-in replacement for `path.Match()`. As
such, it always uses `/` as the path separator. If you are writing code that
will run on systems where `/` is not the path separator (such as Windows), you
want to use `PathMatch()` (below) instead.
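For example, a short sketch of a relative, slash-separated match (the paths here are made up for illustration):
```go
matched, err := doublestar.Match("src/**/*.go", "src/pkg/util/strings.go")
// matched == true, err == nil; err is only non-nil when the pattern is malformed
```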
### PathMatch
```go
func PathMatch(pattern, name string) (bool, error)
```
PathMatch returns true if `name` matches the file name `pattern`
([see below](#patterns)). The difference between Match and PathMatch is that
PathMatch will automatically use your system's path separator to split `name`
and `pattern`.
`PathMatch()` is meant to be a drop-in replacement for `filepath.Match()`.
### Glob
```go
func Glob(pattern string) ([]string, error)
```
Glob finds all files and directories in the filesystem that match `pattern`
([see below](#patterns)). `pattern` may be relative (to the current working
directory), or absolute.
`Glob()` is meant to be a drop-in replacement for `filepath.Glob()`.
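A short usage sketch, assuming the usual `fmt`/`log` imports (the matched paths shown in the comments are hypothetical):
```go
// find every markdown file under docs/, at any depth
matches, err := doublestar.Glob("docs/**/*.md")
if err != nil {
    log.Fatal(err) // only ErrBadPattern is possible; file system errors are ignored
}
for _, match := range matches {
    fmt.Println(match) // e.g. docs/index.md, docs/guides/setup.md
}
```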
## Patterns
**doublestar** supports the following special terms in the patterns:
Special Terms | Meaning
------------- | -------
`*` | matches any sequence of non-path-separators
`**` | matches any sequence of characters, including path separators
`?` | matches any single non-path-separator character
`[class]` | matches any single non-path-separator character against a class of characters ([see below](#character-classes))
`{alt1,...}` | matches a sequence of characters if one of the comma-separated alternatives matches
Any character with a special meaning can be escaped with a backslash (`\`).
### Character Classes
Character classes support the following:
Class | Meaning
---------- | -------
`[abc]` | matches any single character within the set
`[a-z]` | matches any single character in the range
`[^class]` | matches any single character which does *not* match the class
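Putting these rules together, a few illustrative checks (the results follow from the tables above):
```go
ok, _ := doublestar.Match("**/child*", "grandparent/parent/child1")            // ok == true
ok, _ = doublestar.Match("grandparent/**/child?", "grandparent/parent/child2") // ok == true
ok, _ = doublestar.Match("*.{png,jpg}", "logo.jpg")                            // ok == true
ok, _ = doublestar.Match("[a-c]*.go", "main.go")                               // ok == false ('m' is outside a-c)
```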

455
vendor/github.com/bmatcuk/doublestar/doublestar.go generated vendored Normal file
View File

@ -0,0 +1,455 @@
package doublestar
import (
"fmt"
"os"
"path"
"path/filepath"
"strings"
"unicode/utf8"
)
var ErrBadPattern = path.ErrBadPattern
// Split a path on the given separator, respecting escaping.
func splitPathOnSeparator(path string, separator rune) []string {
// if the separator is '\\', then we can just split...
if separator == '\\' {
return strings.Split(path, string(separator))
}
// otherwise, we need to be careful of situations where the separator was escaped
cnt := strings.Count(path, string(separator))
if cnt == 0 {
return []string{path}
}
ret := make([]string, cnt+1)
pathlen := len(path)
separatorLen := utf8.RuneLen(separator)
idx := 0
for start := 0; start < pathlen; {
end := indexRuneWithEscaping(path[start:], separator)
if end == -1 {
end = pathlen
} else {
end += start
}
ret[idx] = path[start:end]
start = end + separatorLen
idx++
}
return ret[:idx]
}
// Find the first index of a rune in a string,
// ignoring any times the rune is escaped using "\".
func indexRuneWithEscaping(s string, r rune) int {
end := strings.IndexRune(s, r)
if end == -1 {
return -1
}
if end > 0 && s[end-1] == '\\' {
start := end + utf8.RuneLen(r)
end = indexRuneWithEscaping(s[start:], r)
if end != -1 {
end += start
}
}
return end
}
// Match returns true if name matches the shell file name pattern.
// The pattern syntax is:
//
// pattern:
// { term }
// term:
// '*' matches any sequence of non-path-separators
// '**' matches any sequence of characters, including
// path separators.
// '?' matches any single non-path-separator character
// '[' [ '^' ] { character-range } ']'
// character class (must be non-empty)
// '{' { term } [ ',' { term } ... ] '}'
// c matches character c (c != '*', '?', '\\', '[')
// '\\' c matches character c
//
// character-range:
// c matches character c (c != '\\', '-', ']')
// '\\' c matches character c
// lo '-' hi matches character c for lo <= c <= hi
//
// Match requires pattern to match all of name, not just a substring.
// The path-separator defaults to the '/' character. The only possible
// returned error is ErrBadPattern, when pattern is malformed.
//
// Note: this is meant as a drop-in replacement for path.Match() which
// always uses '/' as the path separator. If you want to support systems
// which use a different path separator (such as Windows), what you want
// is the PathMatch() function below.
//
func Match(pattern, name string) (bool, error) {
return matchWithSeparator(pattern, name, '/')
}
// PathMatch is like Match except that it uses your system's path separator.
// For most systems, this will be '/'. However, for Windows, it would be '\\'.
// Note that for systems where the path separator is '\\', escaping is
// disabled.
//
// Note: this is meant as a drop-in replacement for filepath.Match().
//
func PathMatch(pattern, name string) (bool, error) {
return matchWithSeparator(pattern, name, os.PathSeparator)
}
// matchWithSeparator returns true if name matches the shell file name pattern.
// The pattern syntax is:
//
// pattern:
// { term }
// term:
// '*' matches any sequence of non-path-separators
// '**' matches any sequence of characters, including
// path separators.
// '?' matches any single non-path-separator character
// '[' [ '^' ] { character-range } ']'
// character class (must be non-empty)
// '{' { term } [ ',' { term } ... ] '}'
// c matches character c (c != '*', '?', '\\', '[')
// '\\' c matches character c
//
// character-range:
// c matches character c (c != '\\', '-', ']')
// '\\' c matches character c, unless separator is '\\'
// lo '-' hi matches character c for lo <= c <= hi
//
// Match requires pattern to match all of name, not just a substring.
// The only possible returned error is ErrBadPattern, when pattern
// is malformed.
//
func matchWithSeparator(pattern, name string, separator rune) (bool, error) {
patternComponents := splitPathOnSeparator(pattern, separator)
nameComponents := splitPathOnSeparator(name, separator)
return doMatching(patternComponents, nameComponents)
}
func doMatching(patternComponents, nameComponents []string) (matched bool, err error) {
// check for some base-cases
patternLen, nameLen := len(patternComponents), len(nameComponents)
if patternLen == 0 && nameLen == 0 {
return true, nil
}
if patternLen == 0 || nameLen == 0 {
return false, nil
}
patIdx, nameIdx := 0, 0
for patIdx < patternLen && nameIdx < nameLen {
if patternComponents[patIdx] == "**" {
// if our last pattern component is a doublestar, we're done -
// doublestar will match any remaining name components, if any.
if patIdx++; patIdx >= patternLen {
return true, nil
}
// otherwise, try matching remaining components
for ; nameIdx < nameLen; nameIdx++ {
if m, _ := doMatching(patternComponents[patIdx:], nameComponents[nameIdx:]); m {
return true, nil
}
}
return false, nil
} else {
// try matching components
matched, err = matchComponent(patternComponents[patIdx], nameComponents[nameIdx])
if !matched || err != nil {
return
}
}
patIdx++
nameIdx++
}
return patIdx >= patternLen && nameIdx >= nameLen, nil
}
// Glob returns the names of all files matching pattern or nil
// if there is no matching file. The syntax of pattern is the same
// as in Match. The pattern may describe hierarchical names such as
// /usr/*/bin/ed (assuming the Separator is '/').
//
// Glob ignores file system errors such as I/O errors reading directories.
// The only possible returned error is ErrBadPattern, when pattern
// is malformed.
//
// Your system path separator is automatically used. This means on
// systems where the separator is '\\' (Windows), escaping will be
// disabled.
//
// Note: this is meant as a drop-in replacement for filepath.Glob().
//
func Glob(pattern string) (matches []string, err error) {
patternComponents := splitPathOnSeparator(filepath.ToSlash(pattern), '/')
if len(patternComponents) == 0 {
return nil, nil
}
// On Windows systems, this will return the drive name ('C:'), on others,
// it will return an empty string.
volumeName := filepath.VolumeName(pattern)
// If the first pattern component is equal to the volume name, then the
// pattern is an absolute path.
if patternComponents[0] == volumeName {
return doGlob(fmt.Sprintf("%s%s", volumeName, string(os.PathSeparator)), patternComponents[1:], matches)
}
// otherwise, it's a relative pattern
return doGlob(".", patternComponents, matches)
}
// Perform a glob
func doGlob(basedir string, components, matches []string) (m []string, e error) {
m = matches
e = nil
// figure out how many components we don't need to glob because they're
// just names without patterns - we'll use os.Lstat below to check if that
// path actually exists
patLen := len(components)
patIdx := 0
for ; patIdx < patLen; patIdx++ {
if strings.IndexAny(components[patIdx], "*?[{\\") >= 0 {
break
}
}
if patIdx > 0 {
basedir = filepath.Join(basedir, filepath.Join(components[0:patIdx]...))
}
// Lstat will return an error if the file/directory doesn't exist
fi, err := os.Lstat(basedir)
if err != nil {
return
}
// if there are no more components, we've found a match
if patIdx >= patLen {
m = append(m, basedir)
return
}
// otherwise, we need to check each item in the directory...
// first, if basedir is a symlink, follow it...
if (fi.Mode() & os.ModeSymlink) != 0 {
fi, err = os.Stat(basedir)
if err != nil {
return
}
}
// confirm it's a directory...
if !fi.IsDir() {
return
}
// read directory
dir, err := os.Open(basedir)
if err != nil {
return
}
defer dir.Close()
files, _ := dir.Readdir(-1)
lastComponent := (patIdx + 1) >= patLen
if components[patIdx] == "**" {
// if the current component is a doublestar, we'll try depth-first
for _, file := range files {
// if symlink, we may want to follow
if (file.Mode() & os.ModeSymlink) != 0 {
file, err = os.Stat(filepath.Join(basedir, file.Name()))
if err != nil {
continue
}
}
if file.IsDir() {
// recurse into directories
if lastComponent {
m = append(m, filepath.Join(basedir, file.Name()))
}
m, e = doGlob(filepath.Join(basedir, file.Name()), components[patIdx:], m)
} else if lastComponent {
// if the pattern's last component is a doublestar, we match filenames, too
m = append(m, filepath.Join(basedir, file.Name()))
}
}
if lastComponent {
return // we're done
}
patIdx++
lastComponent = (patIdx + 1) >= patLen
}
// check items in current directory and recurse
var match bool
for _, file := range files {
match, e = matchComponent(components[patIdx], file.Name())
if e != nil {
return
}
if match {
if lastComponent {
m = append(m, filepath.Join(basedir, file.Name()))
} else {
m, e = doGlob(filepath.Join(basedir, file.Name()), components[patIdx+1:], m)
}
}
}
return
}
// Attempt to match a single pattern component with a path component
func matchComponent(pattern, name string) (bool, error) {
// check some base cases
patternLen, nameLen := len(pattern), len(name)
if patternLen == 0 && nameLen == 0 {
return true, nil
}
if patternLen == 0 {
return false, nil
}
if nameLen == 0 && pattern != "*" {
return false, nil
}
// check for matches one rune at a time
patIdx, nameIdx := 0, 0
for patIdx < patternLen && nameIdx < nameLen {
patRune, patAdj := utf8.DecodeRuneInString(pattern[patIdx:])
nameRune, nameAdj := utf8.DecodeRuneInString(name[nameIdx:])
if patRune == '\\' {
// handle escaped runes
patIdx += patAdj
patRune, patAdj = utf8.DecodeRuneInString(pattern[patIdx:])
if patRune == utf8.RuneError {
return false, ErrBadPattern
} else if patRune == nameRune {
patIdx += patAdj
nameIdx += nameAdj
} else {
return false, nil
}
} else if patRune == '*' {
// handle stars
if patIdx += patAdj; patIdx >= patternLen {
// a star at the end of a pattern will always
// match the rest of the path
return true, nil
}
// check if we can make any matches
for ; nameIdx < nameLen; nameIdx += nameAdj {
if m, _ := matchComponent(pattern[patIdx:], name[nameIdx:]); m {
return true, nil
}
}
return false, nil
} else if patRune == '[' {
// handle character sets
patIdx += patAdj
endClass := indexRuneWithEscaping(pattern[patIdx:], ']')
if endClass == -1 {
return false, ErrBadPattern
}
endClass += patIdx
classRunes := []rune(pattern[patIdx:endClass])
classRunesLen := len(classRunes)
if classRunesLen > 0 {
classIdx := 0
matchClass := false
if classRunes[0] == '^' {
classIdx++
}
for classIdx < classRunesLen {
low := classRunes[classIdx]
if low == '-' {
return false, ErrBadPattern
}
classIdx++
if low == '\\' {
if classIdx < classRunesLen {
low = classRunes[classIdx]
classIdx++
} else {
return false, ErrBadPattern
}
}
high := low
if classIdx < classRunesLen && classRunes[classIdx] == '-' {
// we have a range of runes
if classIdx++; classIdx >= classRunesLen {
return false, ErrBadPattern
}
high = classRunes[classIdx]
if high == '-' {
return false, ErrBadPattern
}
classIdx++
if high == '\\' {
if classIdx < classRunesLen {
high = classRunes[classIdx]
classIdx++
} else {
return false, ErrBadPattern
}
}
}
if low <= nameRune && nameRune <= high {
matchClass = true
}
}
if matchClass == (classRunes[0] == '^') {
return false, nil
}
} else {
return false, ErrBadPattern
}
patIdx = endClass + 1
nameIdx += nameAdj
} else if patRune == '{' {
// handle alternatives such as {alt1,alt2,...}
patIdx += patAdj
endOptions := indexRuneWithEscaping(pattern[patIdx:], '}')
if endOptions == -1 {
return false, ErrBadPattern
}
endOptions += patIdx
options := splitPathOnSeparator(pattern[patIdx:endOptions], ',')
patIdx = endOptions + 1
for _, o := range options {
m, e := matchComponent(o+pattern[patIdx:], name[nameIdx:])
if e != nil {
return false, e
}
if m {
return true, nil
}
}
return false, nil
} else if patRune == '?' || patRune == nameRune {
// handle single-rune wildcard
patIdx += patAdj
nameIdx += nameAdj
} else {
return false, nil
}
}
if patIdx >= patternLen && nameIdx >= nameLen {
return true, nil
}
if nameIdx >= nameLen && pattern[patIdx:] == "*" || pattern[patIdx:] == "**" {
return true, nil
}
return false, nil
}

1
vendor/github.com/bmatcuk/doublestar/go.mod generated vendored Normal file
View File

@ -0,0 +1 @@
module github.com/bmatcuk/doublestar

14
vendor/github.com/karrick/godirwalk/.gitignore generated vendored Normal file
View File

@ -0,0 +1,14 @@
# Binaries for programs and plugins
*.exe
*.dll
*.so
*.dylib
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
.glide/

25
vendor/github.com/karrick/godirwalk/LICENSE generated vendored Normal file
View File

@ -0,0 +1,25 @@
BSD 2-Clause License
Copyright (c) 2017, Karrick McDermott
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

208
vendor/github.com/karrick/godirwalk/README.md generated vendored Normal file
View File

@ -0,0 +1,208 @@
# godirwalk
`godirwalk` is a library for traversing a directory tree on a file
system.
In short, why do I use this library?
1. It's faster than `filepath.Walk`.
1. It's more correct on Windows than `filepath.Walk`.
1. It's easier to use than `filepath.Walk`.
1. It's more flexible than `filepath.Walk`.
## Usage Example
Additional examples are provided in the `examples/` subdirectory.
This library will normalize the provided top level directory name
based on the os-specific path separator by calling `filepath.Clean` on
its first argument. However, it always provides the pathname created by
using the correct os-specific path separator when invoking the
provided callback function.
```Go
dirname := "some/directory/root"
err := godirwalk.Walk(dirname, &godirwalk.Options{
Callback: func(osPathname string, de *godirwalk.Dirent) error {
fmt.Printf("%s %s\n", de.ModeType(), osPathname)
return nil
},
Unsorted: true, // (optional) set true for faster yet non-deterministic enumeration (see godoc)
})
```
This library not only provides functions for traversing a file system
directory tree, but also for obtaining a list of immediate descendants
of a particular directory, typically much more quickly than using
`os.ReadDir` or `os.ReadDirnames`.
Documentation is available via
[![GoDoc](https://godoc.org/github.com/karrick/godirwalk?status.svg)](https://godoc.org/github.com/karrick/godirwalk).
## Description
Here's why I use `godirwalk` in preference to `filepath.Walk`,
`os.ReadDir`, and `os.ReadDirnames`.
### It's faster than `filepath.Walk`
When compared against `filepath.Walk` in benchmarks, it has been
observed to run between five and ten times the speed on darwin, at
speeds comparable to that of the unix `find` utility; about twice
the speed on linux; and about four times the speed on Windows.
How does it obtain this performance boost? It does less work to give
you nearly the same output. This library calls the same `syscall`
functions to do the work, but it makes fewer calls, does not throw
away information that it might need, and creates less memory churn
along the way by reusing the same scratch buffer rather than
reallocating a new buffer every time it reads data from the operating
system.
While traversing a file system directory tree, `filepath.Walk` obtains
the list of immediate descendants of a directory, and throws away the
file system node type information provided by the operating system
that comes with the node's name. Then, immediately prior to invoking
the callback function, `filepath.Walk` invokes `os.Stat` for each
node, and passes the returned `os.FileInfo` information to the
callback.
While the `os.FileInfo` information provided by `os.Stat` is extremely
helpful--and even includes the `os.FileMode` data--providing it
requires an additional system call for each node.
Because most callbacks only care about what the node type is, this
library does not throw the type information away, but rather provides
that information to the callback function in the form of a
`os.FileMode` value. Note that the `os.FileMode` value that
this library provides only has the node type information, and does not
have the permission bits, sticky bits, or other information from the
file's mode. If the callback does care about a particular node's
entire `os.FileInfo` data structure, the callback can easily invoke
`os.Stat` when needed, and only when needed.
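For example, a short sketch of a callback that only stats regular files (the field and method names follow the usage example above):
```Go
Callback: func(osPathname string, de *godirwalk.Dirent) error {
    if de.ModeType().IsRegular() {
        fi, err := os.Stat(osPathname) // full FileInfo only when actually needed
        if err != nil {
            return err
        }
        fmt.Println(osPathname, fi.Size())
    }
    return nil
},
```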
#### Benchmarks
##### macOS
```Bash
go test -bench=.
goos: darwin
goarch: amd64
pkg: github.com/karrick/godirwalk
BenchmarkFilepathWalk-8 1 3001274570 ns/op
BenchmarkGoDirWalk-8 3 465573172 ns/op
BenchmarkFlameGraphFilepathWalk-8 1 6957916936 ns/op
BenchmarkFlameGraphGoDirWalk-8 1 4210582571 ns/op
PASS
ok github.com/karrick/godirwalk 16.822s
```
##### Linux
```Bash
go test -bench=.
goos: linux
goarch: amd64
pkg: github.com/karrick/godirwalk
BenchmarkFilepathWalk-12 1 1609189170 ns/op
BenchmarkGoDirWalk-12 5 211336628 ns/op
BenchmarkFlameGraphFilepathWalk-12 1 3968119932 ns/op
BenchmarkFlameGraphGoDirWalk-12 1 2139598998 ns/op
PASS
ok github.com/karrick/godirwalk 9.007s
```
### It's more correct on Windows than `filepath.Walk`
I did not previously care about this either, but humor me. We all love
how we can write once and run everywhere. It is essential for the
language's adoption, growth, and success, that the software we create
can run unmodified on all architectures and operating systems
supported by Go.
When the traversed file system has a logical loop caused by symbolic
links to directories, on unix `filepath.Walk` ignores symbolic links
and traverses the entire directory tree without error. On Windows,
however, `filepath.Walk` will continue following directory symbolic
links, even though it is not supposed to, eventually causing
`filepath.Walk` to terminate early and return an error when the
pathname gets too long from concatenating endless loops of symbolic
links onto the pathname. This error comes from Windows, passes through
`filepath.Walk`, and on to the upstream client running `filepath.Walk`.
The takeaway is that behavior differs depending on the platform on
which `filepath.Walk` is running. While this is clearly not intentional,
until it is fixed in the standard library, it presents a compatibility
problem.
This library correctly identifies symbolic links that point to
directories and will only follow them when `FollowSymbolicLinks` is
set to true. Behavior on Windows and other operating systems is
identical.
### It's easier to use than `filepath.Walk`
Since this library does not invoke `os.Stat` on every file system node
it encounters, there is no possible error event for the callback
function to filter on. The third argument in the `filepath.WalkFunc`
function signature, which passes the error from `os.Stat` to the
callback function, is no longer necessary, and is therefore omitted
from the signature of the callback function used by this library.
Also, `filepath.Walk` invokes the callback function with a solidus
delimited pathname regardless of the os-specific path separator. This
library invokes the callback function with the os-specific pathname
separator, obviating a call to `filepath.Clean` in the callback
function for each node prior to actually using the provided pathname.
In other words, even on Windows, `filepath.Walk` will invoke the
callback with `some/path/to/foo.txt`, requiring well-written clients
to perform pathname normalization for every file prior to working with
the specified file. In truth, many clients developed on unix and not
tested on Windows neglect this subtlety, which results in software
bugs when running on Windows. This library would invoke the callback
function with `some\path\to\foo.txt` for the same file when running on
Windows, eliminating the need for the client to normalize the
pathname, and lessening the likelihood that a client will work on unix
but not on Windows.
### It's more flexible than `filepath.Walk`
#### Configurable Handling of Symbolic Links
The default behavior of this library is to ignore symbolic links to
directories when walking a directory tree, just like `filepath.Walk`
does. However, it does invoke the callback function with each node it
finds, including symbolic links. If a particular use case requires
following symbolic links when traversing a directory tree, this
library can be configured to do so by setting the
`FollowSymbolicLinks` parameter to true, as in the sketch below.
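
A minimal sketch of opting in, assuming the `FollowSymbolicLinks` field lives
on the same `Options` structure used elsewhere in this document (the root
directory name is a placeholder):

```Go
package main

import (
	"fmt"
	"log"

	"github.com/karrick/godirwalk"
)

func main() {
	err := godirwalk.Walk("some/directory/root", &godirwalk.Options{
		// Assumed option: also descend into directories reached through symbolic links.
		FollowSymbolicLinks: true,
		Callback: func(osPathname string, de *godirwalk.Dirent) error {
			fmt.Println(osPathname)
			return nil
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```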
#### Configurable Sorting of Directory Children
The default behavior of this library is to always sort the immediate
descendants of a directory prior to visiting each node, just like
`filepath.Walk` does. This is usually the desired behavior. However,
sorting the names does carry a performance penalty when a directory
node has many entries. If a particular use case does not require
sorting the directory's immediate descendants prior to visiting its
nodes, this library will skip the sorting step when the `Unsorted`
parameter is set to true, as shown in the sketch below.
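
A minimal sketch, assuming `Unsorted` is a boolean field on the same `Options`
structure (the root directory name is again a placeholder); it simply counts
nodes, a task for which visit order does not matter:

```Go
package main

import (
	"log"

	"github.com/karrick/godirwalk"
)

func main() {
	var count int
	err := godirwalk.Walk("some/directory/root", &godirwalk.Options{
		// Assumed option: visit children in whatever order the OS returns them.
		Unsorted: true,
		Callback: func(osPathname string, de *godirwalk.Dirent) error {
			count++
			return nil
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("visited %d nodes", count)
}
```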
#### Configurable Post Children Callback
This library provides upstream code with the ability to specify a
callback to be invoked for each directory after its children are
processed. This has been used to recursively delete empty directories
after traversing the file system in a more efficient manner. See the
`examples/clean-empties` directory for an example of this usage.
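
As a hedged sketch of that pattern, assuming the option is a
`PostChildrenCallback` field with the same signature as `Callback` (see
`examples/clean-empties` for the library's own version), a walk could try to
remove each directory once its children have been handled; `os.Remove`
refuses to delete non-empty directories, so only the empty ones disappear:

```Go
package main

import (
	"log"
	"os"

	"github.com/karrick/godirwalk"
)

func main() {
	err := godirwalk.Walk("some/directory/root", &godirwalk.Options{
		Callback: func(osPathname string, de *godirwalk.Dirent) error {
			return nil // nothing to do on the way down
		},
		// Assumed option: invoked for each directory after all of its children
		// have been visited.
		PostChildrenCallback: func(osPathname string, de *godirwalk.Dirent) error {
			// os.Remove fails on non-empty directories; ignoring that error means
			// only directories that ended up empty are actually deleted.
			_ = os.Remove(osPathname)
			return nil
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```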
#### Configurable Error Callback
This library provides upstream code with the ability to specify a
callback to be invoked for errors that the operating system returns,
allowing the upstream code to determine the next course of action:
halt walking the hierarchy, as it would do were no error callback
provided, or skip the node that caused the error. See
the `examples/walk-fast` directory for an example of this usage.
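
A minimal sketch of that idea, assuming the option is an `ErrorCallback`
field returning an `ErrorAction` value such as `godirwalk.SkipNode` or
`godirwalk.Halt` (the field and constant names are assumptions here; the
root directory is a placeholder):

```Go
package main

import (
	"fmt"
	"log"

	"github.com/karrick/godirwalk"
)

func main() {
	err := godirwalk.Walk("some/directory/root", &godirwalk.Options{
		Callback: func(osPathname string, de *godirwalk.Dirent) error {
			fmt.Println(osPathname)
			return nil
		},
		// Assumed option: decide per error whether to keep walking. Returning
		// SkipNode skips the offending node; returning Halt stops the walk, which
		// is what happens when no error callback is provided at all.
		ErrorCallback: func(osPathname string, err error) godirwalk.ErrorAction {
			log.Printf("skipping %s: %v", osPathname, err)
			return godirwalk.SkipNode
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```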

74
vendor/github.com/karrick/godirwalk/dirent.go generated vendored Normal file
View File

@ -0,0 +1,74 @@
package godirwalk

import (
	"os"
	"path/filepath"

	"github.com/pkg/errors"
)

// Dirent stores the name and file system mode type of discovered file system
// entries.
type Dirent struct {
	name     string
	modeType os.FileMode
}

// NewDirent returns a newly initialized Dirent structure, or an error. This
// function does not follow symbolic links.
//
// This function is rarely used, as Dirent structures are provided by other
// functions in this library that read and walk directories.
func NewDirent(osPathname string) (*Dirent, error) {
	fi, err := os.Lstat(osPathname)
	if err != nil {
		return nil, errors.Wrap(err, "cannot lstat")
	}
	return &Dirent{
		name:     filepath.Base(osPathname),
		modeType: fi.Mode() & os.ModeType,
	}, nil
}

// Name returns the basename of the file system entry.
func (de Dirent) Name() string { return de.name }

// ModeType returns the mode bits that specify the file system node type. We
// could make our own enum-like data type for encoding the file type, but Go's
// runtime already gives us architecture independent file modes, as discussed in
// `os/types.go`:
//
//     Go's runtime FileMode type has same definition on all systems, so that
//     information about files can be moved from one system to another portably.
func (de Dirent) ModeType() os.FileMode { return de.modeType }

// IsDir returns true if and only if the Dirent represents a file system
// directory. Note that on some operating systems, more than one file mode bit
// may be set for a node. For instance, on Windows, a symbolic link that points
// to a directory will have both the directory and the symbolic link bits set.
func (de Dirent) IsDir() bool { return de.modeType&os.ModeDir != 0 }

// IsRegular returns true if and only if the Dirent represents a regular
// file. That is, it ensures that no mode type bits are set.
func (de Dirent) IsRegular() bool { return de.modeType&os.ModeType == 0 }

// IsSymlink returns true if and only if the Dirent represents a file system
// symbolic link. Note that on some operating systems, more than one file mode
// bit may be set for a node. For instance, on Windows, a symbolic link that
// points to a directory will have both the directory and the symbolic link bits
// set.
func (de Dirent) IsSymlink() bool { return de.modeType&os.ModeSymlink != 0 }

// Dirents represents a slice of Dirent pointers, which are sortable by
// name. This type satisfies the `sort.Interface` interface.
type Dirents []*Dirent

// Len returns the count of Dirent structures in the slice.
func (l Dirents) Len() int { return len(l) }

// Less returns true if and only if the Name of the element specified by the
// first index is lexicographically less than that of the second index.
func (l Dirents) Less(i, j int) bool { return l[i].name < l[j].name }

// Swap exchanges the two Dirent entries specified by the two provided indexes.
func (l Dirents) Swap(i, j int) { l[i], l[j] = l[j], l[i] }

34
vendor/github.com/karrick/godirwalk/doc.go generated vendored Normal file
View File

@ -0,0 +1,34 @@
/*
Package godirwalk provides functions to read and traverse directory trees.

In short, why do I use this library?

* It's faster than `filepath.Walk`.

* It's more correct on Windows than `filepath.Walk`.

* It's easier to use than `filepath.Walk`.

* It's more flexible than `filepath.Walk`.

USAGE

This library will normalize the provided top level directory name based on the
os-specific path separator by calling `filepath.Clean` on its first
argument. However it always provides the pathname created by using the correct
os-specific path separator when invoking the provided callback function.

    dirname := "some/directory/root"
    err := godirwalk.Walk(dirname, &godirwalk.Options{
        Callback: func(osPathname string, de *godirwalk.Dirent) error {
            fmt.Printf("%s %s\n", de.ModeType(), osPathname)
            return nil
        },
    })

This library not only provides functions for traversing a file system directory
tree, but also for obtaining a list of immediate descendants of a particular
directory, typically much more quickly than using `os.ReadDir` or
`os.ReadDirnames`.
*/
package godirwalk

3
vendor/github.com/karrick/godirwalk/go.mod generated vendored Normal file
View File

@ -0,0 +1,3 @@
module github.com/karrick/godirwalk

require github.com/pkg/errors v0.8.0

Some files were not shown because too many files have changed in this diff.